For this study, I tested the usability of the ‘Your Lists’ feature on Amazon.com, with the aim of evaluating how functional and intuitive its design is for first-time users. ‘Your Lists’ is a feature that allows Amazon users to construct shopping lists that can be edited, shared, and organized to the user’s liking. Tetra made it easy to generate clear insights on this feature! You can read more about the research focus of this study in the testing plan.
Three participants, Jared, Chris, and Paul, completed a series of task-oriented questions, with an ease-of-use survey after each task and an overall System Usability Scale (SUS) survey at the end. After completing the research sessions, I prepared to generate insights using Tetra. Here’s what the process looked like.
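As a quick aside for anyone unfamiliar with SUS: it converts ten 1–5 Likert responses into a single 0–100 score using the standard scoring rules (odd-numbered items score `response − 1`, even-numbered items score `5 − response`, summed and multiplied by 2.5). A minimal sketch in Python, with made-up responses for illustration (not actual study data):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    1-5 Likert responses, using the standard SUS scoring rules."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        # Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
        # even-numbered items contribute (5 - response).
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Hypothetical responses for one participant (not from this study):
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```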
Step 1: Generating Tags
Consistency is key when analyzing across participants, so I built a Tag Library to use as a reference.
For each research question or task in the script of my discussion guide, I generated an associated tag. Here are a few examples:
- “Without touching your mouse yet, please tell me what you see and what you would expect to be able to do.” → #Initial Reaction#
- “To begin, please create a new shopping list. This list will be for bathroom supplies so please name it accordingly and make the list private, so only you can see it.” → #Task1: Create List#
- “How would you rate the ability to accomplish this task? Why?” → #Participant Score#
I also generated tags for key observations that I expected to make during analysis, such as:
- Moderator scores of the participant’s ease-of-use per task → #Moderator Score# (Specific scales can be referenced at the end of the discussion guide.)
- Difficulties demonstrated or noted by participants → #Pain Points#
Here’s the resulting Tag Library for this study:
Step 2: Assigning Metadata Tags
Knowing that later on, after analysis, I may want to view all the highlights and annotations of a single participant collectively, I assigned a metadata tag to each participant: #Jared, #Chris, and #Paul. This allowed me to easily isolate all the annotations and associated highlight reels of a participant during the synthesis step.
Step 3: Analyzing with Annotations
Using my Tag Library, I created annotations using the appropriate tag(s) along with notes on observed actions, participants’ responses, and other key information that addressed our research questions. Sticking to the notion that consistency is key in analysis, I used every discussion guide-based tag for each participant. This allows for a simple comparison between participants for each question or task in the synthesis step.
Step 4: Synthesizing My Project
To quickly and effectively generate insights across all interviews, I used the synthesis page, which combines all tags and annotations for every interview within the project. This one-page view allowed me to navigate all feedback and more easily identify patterns and themes for a given task or a response to a question. For example, when viewing #Initial Reaction# of all participants, I discovered:
- All participants (3/3) noticed the ‘Invite’ feature when looking at the ‘Your Lists’ page for the first time. It is clear and intuitive to users that they can invite others to interact with their lists.
- Most participants (2/3) expect ‘Your Lists’ to allow you to create and organize lists of products you may want to buy later. Most participants have an appropriate initial understanding of the purpose of ‘Your Lists.’
- One participant (1/3) expects ‘Your Lists’ to provide reminders on price changes for selected products. There may be some initial misunderstandings surrounding the purpose of ‘Your Lists.’
Here is an example of the synthesis page:
And here is the resulting #Initial Reaction# highlight reel:
Additionally, by selecting multiple tags I was able to generate more specific insights. For example, selecting #Pain Points# and #Task2: Add Ideas# allowed me to look at all the pain points experienced in Task 2, adding ideas to a list.
- For one participant (1/3), the Idea Search Bar was too small to catch his attention. His focus on the word “idea” caused him to follow an incorrect route by selecting the biggest button with the word “idea” in it.
- This suggests that the Idea Search Bar’s font should be made larger and more prominent.
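Conceptually, combining tags on the synthesis page works like an intersection filter over annotations: only annotations carrying every selected tag are shown. Here is a minimal sketch of that idea in generic Python (this is illustrative only, not Tetra’s actual API, and the sample annotations are invented):

```python
# Each annotation carries a note plus the set of tags assigned to it.
annotations = [
    {"note": "Overlooked the small Idea Search Bar",
     "tags": {"#Pain Points#", "#Task2: Add Ideas#", "#Paul"}},
    {"note": "Created and named the list without hesitation",
     "tags": {"#Task1: Create List#", "#Jared"}},
    {"note": "Clicked the biggest 'idea' button by mistake",
     "tags": {"#Pain Points#", "#Task2: Add Ideas#", "#Chris"}},
]

def filter_by_tags(annotations, *wanted):
    """Return only the annotations that carry every requested tag
    (a set-intersection filter)."""
    return [a for a in annotations if set(wanted) <= a["tags"]]

# All pain points observed during Task 2:
for a in filter_by_tags(annotations, "#Pain Points#", "#Task2: Add Ideas#"):
    print(a["note"])
```

The metadata tags from Step 2 (#Jared, #Chris, #Paul) slot into the same model: adding a participant tag to the filter narrows the view to that one person’s annotations.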
And here is the resulting #Pain Points# highlight reel:
Want to see more? The synthesis pages of the other tags used in this study can be found at the end of this article.
Step 5: Exploring with Universal Search
For exploratory purposes, I wanted to collectively see how the participants interacted with the dropdown buttons on the interface. Since I did not generate a dropdown button-related tag, I used Tetra’s universal search feature to generate insights surrounding dropdown button usage. By using the search term “dropdown,” I was shown 13 results that included every instance that a “dropdown” was mentioned in the annotations and scripts across my entire project.
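Under the hood, this kind of universal search amounts to a case-insensitive keyword match across every annotation and transcript in the project. A minimal sketch of the concept (generic Python, not Tetra’s implementation; the sample transcript snippets are invented):

```python
# Hypothetical transcript snippets keyed by participant (invented text).
transcripts = {
    "Jared": "I would expect the dropdown to let me sort my lists somehow.",
    "Chris": "The dropdown next to each item was easy to miss at first.",
    "Paul": "I went straight to the big button instead.",
}

def universal_search(term, documents):
    """Case-insensitive search returning the names of every document
    that mentions the search term."""
    term = term.lower()
    return [name for name, text in documents.items() if term in text.lower()]

print(universal_search("dropdown", transcripts))  # -> ['Jared', 'Chris']
```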
Step 6: Sharing Highlight Reels
When I finished generating valuable insights through the synthesis page, I was ready to share the highlight reels with my teammates (and with you!) by selecting ‘Share Screen’ and creating a public link. For my own reference, I also downloaded my highlight reels as MP4 files.
To see the complete synthesis of this study, check out the links below for the synthesis page of each tag and metadata tag used:
#Typical Amazon Use#
#Your Lists Experience#
#Task1: Create List#
#Task2: Add Ideas#
#Task3: Change Quantity#
#Task4: Add Specific Product#
Through this six-step process using Tetra Insights, I was able to generate clear insights into the functionality of Amazon’s ‘Your Lists’ feature. The patterns and themes revealed in this study showed that ‘Your Lists’ is an overall positive feature, intuitive and easy to use, though it would benefit from a few tweaks and adjustments, as shown by the participants’ #Pain Points#.
Interested in using Tetra Insights to conduct your own user research?
SIGN UP FOR A FREE ACCOUNT