There are countless guides to running individual user interviews. But for effective user researchers, executing a single interview often isn't the hard part. The challenge lies in building a repeatable research operations system that can meet the dynamic, fast-moving needs of stakeholders across the organization. Without such a system, research teams quickly become bottlenecks as inquiries sit in a never-ending queue.
Here’s how to avoid pitfalls and thoughtfully develop the capabilities that make up a robust and flexible system for running user interviews at scale.
Step 1: Outline the required capabilities for specific scenarios
No two research projects are exactly the same. Even two user interviews for the same project may be completely different depending on the participant’s background and experiences!
Today, research teams can't afford to be one-trick ponies. Imagine only knowing how to run in-person focus groups in March 2020. Or only having access to participants within one market segment when your company shifts to another. To avoid these situations, research teams should have a diversity of capabilities at their disposal for the following scenarios:
Learning objectives
It’s important to begin every research project by carefully considering the decisions it needs to inform. Are stakeholders trying to gain a deeper emotional and empathic understanding of a market segment? Or do they need a more transactional analysis of how to improve conversion rates across a buying journey? By reviewing recent research inquiries, you can get a better understanding of common learning objectives and the corresponding data, insights, and methodologies required to inform them.
Balancing velocity and quality
When it comes to research, there's usually a tradeoff between moving quickly and achieving confidence. Neither is inherently better: some decisions are irreversible and demand high confidence, while others are reversible and suffer the opportunity cost of waiting. Clarify with stakeholders which matters more: making the best decision or making a fast one. For each learning objective, it's important to be able to throttle resources to optimize for velocity or for quality.
Step 2: Define a playbook for each interview objective
Once research teams have an outline of the various capabilities and scenarios that must be accommodated, they should document step-by-step playbooks for execution. For example, here are the steps researchers take to run user interviews on the Tetra Insights platform:
Collaborative planning
Researchers using Tetra don’t schedule an interview until they have clear alignment with stakeholders about learning objectives, participant criteria, key deliverables, and anticipated timelines.
By collaborating on a research plan upfront, you avoid rework later or, worse, delivering insights that don't answer key questions in the required timeframe.
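It can help to capture that alignment as a structured artifact rather than scattered notes. Here's a minimal sketch in TypeScript of what such a plan could look like; the field names are illustrative assumptions, not Tetra's actual data model:

```typescript
// Hypothetical shape for a pre-interview research plan.
// Field names are illustrative assumptions, not Tetra's data model.
interface ResearchPlan {
  learningObjectives: string[];        // decisions the research must inform
  participantCriteria: string[];       // screener requirements
  deliverables: string[];              // e.g. "themed report with clips"
  timeline: {
    recruitingDeadline: string;        // ISO dates keep schedules unambiguous
    interviewWindow: [string, string]; // first and last interview dates
    reportDue: string;
  };
  stakeholderSignoff: string[];        // who has confirmed alignment
}

const plan: ResearchPlan = {
  learningObjectives: ["Understand why trial users churn in week one"],
  participantCriteria: ["Trial user", "Churned within 7 days of signup"],
  deliverables: ["Themed report with supporting clips"],
  timeline: {
    recruitingDeadline: "2024-05-01",
    interviewWindow: ["2024-05-06", "2024-05-17"],
    reportDue: "2024-05-24",
  },
  stakeholderSignoff: ["Product manager", "Design lead"],
};
```

The exact format matters less than the discipline: no interview gets scheduled until every field, especially sign-off, is filled in.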
Recruiting participants
Recruiting target participants is one of the most common challenges and bottlenecks in research. There are a number of solutions available for on-demand recruiting of consumer panels. However, if you are targeting professional audiences, you’ll either want to build and maintain a standing panel or work with a partner like Tetra that can efficiently support your targeting needs.
Note-taking during interviews
Highly resourced research teams may be able to staff each interview with two researchers: one to ask questions and one to take notes. But often that's not the case, and it's frustrating to finish a user interview knowing there were insightful moments throughout, then have to rewatch the video to find them. To avoid that, researchers use Tetra's Live Notes feature to annotate and tag timestamped moments during the interview.
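Under the hood, a timestamped annotation is a simple data shape. The sketch below is a hypothetical model, not Tetra's actual Live Notes schema, but it shows why tagged, timestamped notes make key moments easy to retrieve later:

```typescript
// Hypothetical model of a timestamped, tagged interview note.
// Illustrative only; this is not Tetra's actual Live Notes schema.
interface LiveNote {
  sessionId: string;
  timestampSec: number; // offset into the recording
  note: string;
  tags: string[];       // e.g. ["pricing", "pain-point"]
}

// Pull every moment tagged with a topic, in playback order,
// instead of rewatching the full recording to find them.
function momentsTagged(notes: LiveNote[], tag: string): LiveNote[] {
  return notes
    .filter((n) => n.tags.includes(tag))
    .sort((a, b) => a.timestampSec - b.timestampSec);
}
```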
Synthesis and analysis
After completing interviews, researchers need a consistent process for synthesizing feedback into themes and patterns. Stakeholders without research backgrounds rarely have the time or expertise to parse raw interviews.
On Tetra, researchers create a taxonomy of themes, tag supporting clips and transcripts, and provide accompanying expert analysis.
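Conceptually, that taxonomy is a set of themes, each backed by clips as evidence. Here's an illustrative sketch; the types and names are assumptions, not Tetra's internal model:

```typescript
// Hypothetical taxonomy linking themes to supporting evidence.
// Type and field names are assumptions, not Tetra's internal model.
interface Clip {
  sessionId: string;
  startSec: number;
  endSec: number;
  transcriptExcerpt: string;
}

interface Theme {
  name: string;            // e.g. "Onboarding friction"
  parent?: string;         // optional parent theme for a hierarchy
  supportingClips: Clip[];
  analystNotes: string;    // the researcher's expert interpretation
}

// A quick proxy for how widespread a pattern is: how many
// clips across interviews support each theme.
function evidenceCount(themes: Theme[]): Map<string, number> {
  return new Map(
    themes.map((t): [string, number] => [t.name, t.supportingClips.length])
  );
}
```

A theme is only as strong as its evidence, which is why linking every theme directly to clips and transcripts pays off when stakeholders push back.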
Generating informative reports
Reporting is perhaps the most critical and most overlooked part of running user interviews. Rather than building each report from scratch, which slows down decision-making, research teams should maintain reusable templates that are quick to populate and easy to digest. Ideally, reports are hosted in the cloud and interactive, so research teams can monitor and manage access and provide clarifying analysis.
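One way to make templates reusable is to define the sections once and fill them in per study. A hypothetical sketch, with section titles and prompts as assumptions rather than any particular tool's format:

```typescript
// Hypothetical reusable report template: fixed sections defined once,
// then filled in per study so reports stay consistent and fast to assemble.
interface ReportSection {
  title: string;
  prompt: string; // guidance for the researcher filling it in
}

const standardTemplate: ReportSection[] = [
  { title: "Decisions informed", prompt: "Which decisions does this unblock?" },
  { title: "Top themes", prompt: "3-5 themes, each with supporting clips" },
  { title: "Recommendations", prompt: "Concrete next steps, tied to evidence" },
  { title: "Open questions", prompt: "What should the next study investigate?" },
];
```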
Storing insights in a repository for reuse
The research team’s job isn’t done once they’ve conducted and presented research and informed key decisions. The study needs to be catalogued alongside other studies in a research and insights repository for future benchmarking and reuse.
In Tetra, stakeholders can use a universal search to find other research about key topics. This reduces redundant inquiries and increases the ROI of each user interview.
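At its simplest, that kind of search matches a query against study titles, summaries, and themes. The sketch below is a naive illustration; a production system such as Tetra's universal search would rely on proper full-text indexing:

```typescript
// Naive illustration of keyword search over an insights repository.
// A production system would use a full-text index; this linear
// scan just shows the concept.
interface Study {
  title: string;
  themes: string[];
  summary: string;
}

function searchRepository(repo: Study[], query: string): Study[] {
  const q = query.toLowerCase();
  return repo.filter(
    (s) =>
      s.title.toLowerCase().includes(q) ||
      s.summary.toLowerCase().includes(q) ||
      s.themes.some((t) => t.toLowerCase().includes(q))
  );
}
```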
Step 3: Proactively partner with the right technology providers
With the capabilities and playbooks documented, research teams finally have clear specifications for what's required to run user interviews at scale. The missing piece, however, is technology that functions as a force multiplier for research.
Research teams can use separate tools to execute studies and then share them afterward, and there are certainly benefits to hand-selecting individual solutions. But teams using Tetra get access to powerful qualitative research and analysis capabilities, along with a world-class insights repository.
Today, there are more tools than ever before to support the research function. Here is a helpful guide to navigating the landscape.