IA lessons from publishing Sarah Palin's emails
Chris Elliott, Readers’ Editor at The Guardian, recently addressed our coverage of the Sarah Palin email release in his Open Door column. The project raised some interesting questions about the information architecture of publishing this kind of crowd-sourcing exercise on the Guardian website.
Sign-posting and context
Some of the complaints from users seemed to stem from a misunderstanding of the context in which the emails were released. Given our recent association with WikiLeaks, it is only to be expected that some people would assume the emails were obtained not through a lengthy FOI process, but via a whistle-blower or a stolen data dump. Typical was this comment by user BigNowitzki:
“Is the Guardian still crusading over the phone-hacking scandal? Just asking.”
“Have to agree with many of the comments above, I do not share the Guardian's obsession with all things Sarah Palin, as I suspect the majority of Guardian readers do not. This feels tacky and irrelevant, at the end of the day is this any better than the NOTW hacking into celebs' phones?” - jobloobird
The sentiment persists: a recent online Q&A with the Guardian’s editor Alan Rusbridger about the News Of The World hacking scandal featured frequent references to the emails as being similarly hacked, or as a “fishing exercise”.
We did have a “backgrounder”, but maybe this needed to be linked to more prominently. Or maybe the readers concerned didn’t read the article, jumped to their own conclusions, and commented accordingly - it can happen ;-)
“Palin-free version” not multi-platform
We revisited the “Republican/Monarchist” switch that Paul Haine built for the Royal Wedding early in the year, offering users the chance to experience a Palin-free homepage if they wished.
This switch runs on the client side of the website, however, and does not carry through to all our platforms. Users of m.guardian or our iPhone app were stuck with Sarah.
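To illustrate why the switch stops at the browser: a client-side toggle of this kind typically reads a user preference (say, from a cookie) and filters the rendered page locally. The sketch below is a hypothetical reconstruction under that assumption; the cookie name, `Trail` shape, and topic tags are all illustrative, not the Guardian's actual code.

```typescript
// Hypothetical client-side "topic switch". The cookie name and data
// shapes here are assumptions for illustration, not the real implementation.

const FILTER_COOKIE = "hide-topic"; // assumed cookie name

// Parse a "k=v; k2=v2" cookie string and return the hidden topic, if any.
function readFilteredTopic(cookieHeader: string): string | null {
  for (const pair of cookieHeader.split(";")) {
    const [key, value] = pair.trim().split("=");
    if (key === FILTER_COOKIE && value) return decodeURIComponent(value);
  }
  return null;
}

interface Trail {
  headline: string;
  topics: string[]; // e.g. ["us-politics", "sarah-palin"]
}

// Decide, in the browser, which homepage trails to render.
function visibleTrails(trails: Trail[], cookieHeader: string): Trail[] {
  const hidden = readFilteredTopic(cookieHeader);
  if (!hidden) return trails;
  return trails.filter((trail) => !trail.topics.includes(hidden));
}
```

Because the filtering happens in the browser after the page arrives, any platform that renders content through its own pipeline (the mobile site, the iPhone app) never consults the cookie, which is exactly the limitation described above.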
When is an article not an article?
Early on in the planning for the project we decided to take the scanned documents and upload them as individual articles on our website. That gave us lots of advantages: a ready-built template, unique URLs, and the ability to add a series tag, “Sarah Palin emails”, to them.
This didn’t work on a pure IA level, however, as the scanned images are not strictly articles. We suddenly dumped about 20,000 new articles into our API that only consisted of the reference to an image. They will be there clogging up certain API queries and our site search for years to come. And we spammed the hell out of our World News RSS feed, which saw each uploaded email image as a new news story.
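One mitigation for this kind of pollution is to exclude the scan "articles" from feeds and queries by their series tag before the feed is built. The sketch below shows the idea under assumed names: the `ContentItem` shape, the tag identifier, and the `feedItems` helper are illustrative, not the Guardian's actual API.

```typescript
// Sketch of filtering document-scan "articles" out of an RSS feed or
// search index. All names here are hypothetical illustrations.

interface ContentItem {
  id: string;
  webTitle: string;
  tags: string[]; // series/keyword tags attached to the item
  bodyIsImageOnly: boolean; // true when the "article" is just a scan
}

const SCAN_SERIES = "world/series/sarah-palin-emails"; // assumed tag id

// Keep genuine stories; drop items that are only a wrapped scan image.
function feedItems(items: ContentItem[]): ContentItem[] {
  return items.filter(
    (item) => !item.tags.includes(SCAN_SERIES) && !item.bodyIsImageOnly
  );
}
```

A filter like this at feed-generation time would have kept the 20,000 scans out of the World News RSS feed while leaving them addressable at their own URLs and under their series tag.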
So what do you learn?
Two things stick out for me. Firstly, I don’t think anyone would be that bothered about the deluge of coverage or some of the technical limitations of the project, if there had been a story there. It is only hindsight that tells us that the emails were on the whole deathly dull. As Chris Elliott put it:
“The ‘ball-by-ball’ nature of our coverage, a growing and often successful method of real-time coverage on the web, meant we sounded way more excited about the emails than their substance warranted...Web techniques such as live blogging and crowdsourcing expose the process of a story in a way that has hitherto been largely hidden to readers, which is a good thing. But in future we should be much warier of the glee quota until we know what we have got.”
And the second thing I learned?
That I never expect to work on a project brief again in my life that starts with the phrase “We are going to send a man to Alaska with a scanner...”