Usability Testing at the Rockefeller Archive Center


These templates and guides support rapid, iterative, and scalable usability testing designed to center user perspectives and continually improve the usability and accessibility of the Rockefeller Archive Center’s systems and collections.

The templates are designed to be adapted for various team sizes, experience levels, goals, and types of user interfaces. The testing approach is informed by Steve Krug’s book Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems (2010), which emphasizes iterative, incremental tactics with a focus on early and frequent testing.


Roles

The following roles and associated activities are recommended when planning, conducting, and evaluating usability tests. Multiple roles can be filled by the same person.

  • UX project lead: Creates testing plan, coordinates task creation and revision, recruits participants and observers, schedules tests, manages report production, files participant consent forms.
  • Facilitator: Facilitates usability tests including pilot tests, leads debrief sessions, contributes to report production.
  • Reporter: Observes or facilitates tests, participates in debrief session, drafts reports for project team.
  • Observer(s): Watches tests, takes notes, identifies problems, participates in debrief.
  • Test participant: Meets with the facilitator, attempts to accomplish the tasks, and talks through the experience of using the product.

Usability Testing Process

The usability testing process is iterative and incremental:

  1. Plan
  2. Test
  3. Evaluate
  4. Implement

Repeat as necessary.


Plan

  1. In coordination with designers, developers, and other project stakeholders, identify features and processes that need testing with users. Draft tasks and a testing script that target these features.
  2. Recruit participants from project user communities.
  3. Recruit observers who represent project stakeholders.
  4. Conduct a pilot study to test the test itself, and make any necessary adjustments to the tasks and script.


Templates for planning:

  • Facilitator script: The script keeps tests consistent and helps the facilitator stay organized. It can also be adapted for informal A/B or low-fidelity prototype testing, with or without observers.
  • Tasks: Concise scenarios and tasks that target specific questions about the usability of the product. The task text is read aloud and shared directly with participants during the test.


Test

Conduct 3–4 usability tests. With permission from participants, record the screen and audio as the participant attempts to accomplish the tasks and talks through their experience of using the product.


Templates for testing:

  • Recording consent form: Obtain informed consent before recording participants: the participant’s signature for in-person testing, or recorded verbal consent for remote testing.

  • Tasks and difficulty ratings: A handout for participants with the text of each task on its own page, to reference during the test and to record a difficulty rating for each task. For remote tests, the facilitator can share the tasks in chat and record the difficulty ratings separately.


Evaluate

Conduct a debrief with observers to identify the most serious usability problems. Debrief sessions can be held after each test or after a set of tests.



Implement

Prepare a short report for the project team identifying usability problems, their severity, and possible solutions. Coordinate with the project team to implement solutions.


Templates for reporting:

  • Report: An actionable summary of testing results for the project team(s). The report format can be adjusted to the needs of the team.
  • External project report: An optional report to communicate observations and resulting actions to project stakeholders outside the project team.