The art of the scenario task

Scenario tasks are the backbone of a successful usability test. Follow these best practices for writing your next scenario tasks.

Scenario tasks are the heart of a usability test. They are the prompts you give testers so they can exercise the system during the test. As such, scenario tasks need to represent what real people do with the system to do real work.

Writing scenario tasks is often more "art" than "science." When I wrote my first scenario tasks, they were pretty basic. I had a difficult time balancing what to ask without giving too much away. Scenario tasks need to be focused and directed, but not prescriptive. Here's how I learned to write good scenario tasks for usability testing.

Keeping it real

Let's start with why we do usability testing. A usability test examines whether an application or other information product can be used by real people to do real work in a reasonable amount of time. To run the test, you'll recruit testers to use the system while you observe them. Each scenario task performed by your testers needs to reflect real work that real people would do on the system.

I like to write scenario tasks that set a brief context, then ask the tester to do something specific. This simple two-step "recipe" is a good guide for writing scenario tasks that expose how real people use the system to do real work.

At the same time, the scenario task cannot provide accidental hints. This is often a challenging balance. Examine the path a user might take to complete a task. Does your scenario task borrow wording from any of the menu actions or other user interface elements that perform the task? Reusing phrases or special terms can provide hints to the tester for how to complete the task. With these hints, the tester is no longer using the system as a user, but simply following your directions to complete a task.
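To make that check concrete, here is a minimal sketch of how you might flag scenario-task wording that reuses the exact labels in the interface. This is purely illustrative; the function name, menu labels, and task text are hypothetical, and a simple word-overlap check like this only catches verbatim reuse, not synonyms or paraphrases.

```python
# Illustrative sketch: flag scenario-task wording that reuses UI labels.
# The function name, menu labels, and task text are hypothetical examples.

# Common words that don't count as meaningful overlap with a menu label.
STOPWORDS = {"a", "an", "and", "or", "the", "to", "of"}

def find_accidental_hints(task_text, ui_labels):
    """Return (label, overlapping words) pairs where the task text
    reuses meaningful words from a UI label."""
    task_words = {w.strip(".,!?").lower() for w in task_text.split()}
    hints = []
    for label in ui_labels:
        label_words = {w.lower() for w in label.split()} - STOPWORDS
        overlap = label_words & task_words
        if overlap:
            hints.append((label, sorted(overlap)))
    return hints

labels = ["Open File", "Save As", "Search and Replace"]

# "Change every occurrence..." avoids the menu wording entirely.
print(find_accidental_hints(
    "Change every occurrence of Applejack to Fluttershy.", labels))
# → []

# Using "replace" would echo the "Search and Replace" menu action.
print(find_accidental_hints(
    "Replace all instances of Rainbow Dash.", labels))
# → [('Search and Replace', ['replace'])]
```

Even with a helper like this, a human read-through of the task against the actual interface is still the real check; the script only spots the most obvious overlaps.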

A few examples

Over several years, I mentored usability testing in open source software. In examining a plain text editor, we needed to understand if the user interface was simple enough for typical use by casual users. In our usability test, we wrote a series of scenario tasks that asked the user to open an existing file, edit some text, and save the file under a different filename. While working on the file, we prompted the user to perform several edits including these:

Opening a file

You want to finish writing a draft of a blog post that you are using in a project. You start the __ text editor (this has been done for you). Please open the file blog post draft.txt from the Documents folder.

This scenario task starts with the editor application already open; we didn't need to examine the desktop operating system, so we started with the application already running, but without a file loaded.

The scenario task sets a brief context (you need to finish writing a document for a project) and asks the tester to do something specific (open a file). This scenario task prompted the tester to open a specific file from a specific directory. In the usability test, the tester was given a "test" account to work in, which we had populated with a variety of files that a typical user might have.

The "test" account included folders for documents, music, videos, and downloads. In each folder, we created sample files you might expect to find in that location, such as copies of Word documents and text files in the Documents folder. Some files were also in other folders inside Documents, representing other projects someone may have worked on, or other collections of files. 

If we had given the tester an otherwise empty account to work from, with only the Documents folder that contained just the blog post draft.txt file, that would not be a very representative test of opening a file. In everyday life, real people have lots of files. We wanted to represent that typical usage in our test.

Search and Replace

Some of the names are incorrect in the document. Change every occurrence of Applejack to Fluttershy, and all instances of Rainbow Dash to Twilight Sparkle.

This scenario task examined if testers could use the "Search and Replace" feature in the text editor. The text file was an excerpt from an article I had written about the importance of relationship-building in IT leadership, using characters from the then-popular My Little Pony: Friendship is Magic cartoon. (We ran this test around 2013 or 2014 when MLP was popular on college campuses, where we recruited our testers.)

The first time we ran this usability test, we observed testers visually scanning the document, looking for each term to replace. This did not exercise the "Search and Replace" feature; the document was short enough that most testers felt it was easier to just look for the text on their own and replace what they found.

In future iterations of the test, we used the full text of the article and broke up the longer paragraphs into shorter ones. At over 900 words, this new text file was long enough that no testers wanted to search the document with their eyes; every tester looked for a "Search and Replace" feature.

Another detail to note is the use of the term "change" instead of "replace" when asking the tester to make the edits. Since the menu action was called "Search and Replace" in this editor, asking the tester to "replace" text would have reused the menu's exact wording. Instead, we altered the request to avoid giving an accidental "hint" to the tester.

Writing scenario tasks

Scenario tasks are the backbone of a successful usability test. With a great set of scenario tasks, a usability test usually goes well; with poorly written ones, you'll struggle to get meaningful results. Here are my best practices for writing scenario tasks:

  1. Set a brief context
  2. Ask the tester to do something specific
  3. Avoid specific terms that the UI uses to complete the task