Microsoft Research paper on software re-design

by Martin Belam, 3 October 2004

Microsoft Research published a very interesting paper this week - Contextual Method for the Re-design of Existing Software Products. It laid out a framework for applying user-testing and user-centred design methodologies to incremental enhancements of existing applications. The work they used to illustrate their point was a study of Internet Explorer use at Cambridgeshire County Council, with a view to introducing new features that would improve the efficiency of the department's workflow.

Two things stood out for me. At one point, when measuring strategies for information access, they note:

"In our observations, we noticed that people had regular sites which they commonly visited by typing the name and using the auto-complete feature. Interestingly, participants rarely used on-line search for a topic; only 6.6% of all navigation activities were related to search."

I wonder whether that figure is still accurate, or indeed representative of people's use of web search technology as a whole. At the conclusion of their paper they state that "The data we collected was found extremely useful to the design team and was used extensively over a three year period, both within the research and development teams". To my mind that implies the observational work was done no later than 2001, and I would guess that general brand awareness of search tools on the internet has increased greatly since then. We also see a much higher rate of search usage on bbc.co.uk: many more than 6.6% of our users take advantage of the search facility.

The second point I noted was the way they combined logging data with interviews in order to offset the weaknesses inherent in each technique:

"Firstly, it resolved a criticism of interviews that participants only mention what they are aware of and what they think the interviewer wants to know. For example, participants did not tend to mention activities that were unsuccessful, and yet failures were of interest to us. In the first study, we were keen to know what they were trying to achieve. In the second study, we were also interested in whether the features were working correctly. We could see problems had occurred in the logs and we could probe in the interviews. In addition, we could see patterns developing in the logs, such as someone's use of Favorites or Overview, and we could probe the participant for greater detail.
Secondly, logged information is typically hard to interpret reliably, but the interviews and field observation provided a broader understanding that allowed us to make better sense of the data. The visualization tool in the logger was extremely helpful. However, it was still incredibly time-consuming to go through participants' logs. The interviews added structure to log data and made analysis of the logs easier. For example, from the interview, it was clear there were daily patterns in people's activities and we used the logs to explore these more closely. Participants told us they visited particular sites, such as the daily newsletter, in the morning. We could see this acted as a portal site with participants performing hub and spoke navigation. As a result, we began to discern between transitional and intentional back navigation. The logs showed a surprisingly low use of search tools and so we explored how people navigated to sites. This led to our discovery of trails."
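The paper doesn't spell out how transitional and intentional back navigation were told apart, but the hub-and-spoke idea is easy to make concrete. Here is a minimal sketch of the kind of heuristic involved; the log format, the find_hubs and classify_backs names and the min_spokes threshold are all my own inventions for illustration, not anything taken from the paper.

    from collections import defaultdict

    # Each log entry: (action, url) where action is "visit" (a link click
    # or typed address) or "back" (the Back button), and url is the page
    # the user landed on. The sample data is invented.
    log = [
        ("visit", "newsletter"), ("visit", "story1"), ("back", "newsletter"),
        ("visit", "story2"), ("back", "newsletter"), ("visit", "story3"),
        ("back", "newsletter"),
    ]

    def find_hubs(log, min_spokes=2):
        """A page counts as a hub if the Back button returns to it and the
        user then fans out to several distinct pages (hub-and-spoke)."""
        spokes = defaultdict(set)
        for (action, page), (_, nxt) in zip(log, log[1:]):
            if action == "back":
                spokes[page].add(nxt)
        return {page for page, s in spokes.items() if len(s) >= min_spokes}

    def classify_backs(log):
        """Label each Back event 'transitional' if it lands on a hub that
        the user immediately leaves again, otherwise 'intentional'."""
        hubs = find_hubs(log)
        labels = []
        for i, (action, page) in enumerate(log):
            if action != "back":
                continue
            passing_through = (page in hubs and i + 1 < len(log)
                               and log[i + 1][1] != page)
            labels.append((i, "transitional" if passing_through else "intentional"))
        return labels

    print(classify_backs(log))

On real logs the threshold and the handling of chained Back presses would need tuning, which is presumably exactly where the paper's interview data earned its keep.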

The point about interpreting logs neatly summed up what was always a frustration of mine: by examining search logs I could see people refining their query, perhaps entering three or four related terms, but I had no way of telling whether they stopped because they had found what they were looking for, or because they had given up unsatisfied. If only I could have looked them up and asked them.
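To make that concrete: reconstructing the refinement chains from a raw query log is the straightforward half; it is the final state of each chain that is ambiguous. A minimal sketch, assuming a simple (timestamp, user, query) log format of my own devising:

    from datetime import datetime, timedelta

    # Hypothetical (timestamp, user, query) search log entries.
    log = [
        (datetime(2004, 10, 1, 9, 0), "u1", "proms"),
        (datetime(2004, 10, 1, 9, 1), "u1", "proms tickets"),
        (datetime(2004, 10, 1, 9, 3), "u1", "last night of the proms tickets"),
        (datetime(2004, 10, 1, 14, 0), "u1", "weather"),
    ]

    def refinement_chains(log, gap=timedelta(minutes=10)):
        """Group each user's queries into chains: consecutive queries from
        the same user within `gap` of one another count as refinements of
        a single attempt to find something."""
        chains, current = [], []
        for ts, user, query in sorted(log):
            if current and (user != current[-1][1] or ts - current[-1][0] > gap):
                chains.append([q for _, _, q in current])
                current = []
            current.append((ts, user, query))
        if current:
            chains.append([q for _, _, q in current])
        return chains

    for chain in refinement_chains(log):
        # The log shows that the chain ended; it cannot show whether the
        # last query succeeded or the user simply gave up.
        print(" -> ".join(chain))

The grouping is the easy half. Whether "last night of the proms tickets" represents success or surrender is exactly the question that only asking the user - the interview half of the paper's method - can answer.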
