Usability Testing Strategies + Process

Here’s Rev’s guide to everything you need to know about usability testing and UI testing, including UX testing strategies and potential test questions.

Written by:
Jake Gibbs
July 3, 2025

Usability testing is a form of market research that helps you understand your users on a deeper level, including their needs, expectations, and how they use your app, website, or product. Building what you think is the best product, app, or service in the world doesn’t matter if your users don’t agree! 

Usability testing, also called UI and UX testing, helps you learn what your audience really thinks about your offerings so that you can adjust accordingly. These adjustments might be minor tweaks to a final product, wholesale changes to your best practices for UI design, or anything in between. The point is to better satisfy your audience and increase your potential audience.

With an entire subset of services dedicated to market research, we’ve learned a thing or two about how to get it done. Read on for everything you need to know about usability testing, from choosing the right test to implementing it properly.

Types of Usability Tests

The type of information you’re looking for will largely determine the kind of usability test you should run, but other factors, like whether it’s more practical to conduct your tests remotely or in person and whether to use a moderator, can also shape how you build your tests.

The primary types of usability tests are qualitative and quantitative, and each can be broken into further categories. Here are the three most common ways to classify usability tests.

Qualitative and Quantitative Usability Tests

A qualitative usability test is more about understanding users’ feelings about a product or service. It gathers their thoughts, insights, stories, and experiences in their own words or actions. Qualitative data is more flexible than quantitative data, allowing the testing organization to glean insights that might not be drawn from raw numbers.

Quantitative usability testing is a numbers-based method of testing or interviewing users. It collects data that can be broken down and analyzed numerically. For example, a quantitative usability test might ask questions about completion times, satisfaction ratings, time on site, and success rates. Quantitative data is easily sortable, making it simpler to spot trends and spikes. It’s also more “concrete” with less room for interpretation.
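
To make that concrete, here’s a minimal sketch in Python of how quantitative results like these can be summarized. The session records, field names, and numbers are hypothetical examples, not data from any real test:

```python
# Minimal sketch: summarizing quantitative usability results.
# The session records and field names below are hypothetical examples.

sessions = [
    {"participant": "P1", "completed_task": True,  "seconds_on_task": 48},
    {"participant": "P2", "completed_task": False, "seconds_on_task": 95},
    {"participant": "P3", "completed_task": True,  "seconds_on_task": 62},
    {"participant": "P4", "completed_task": True,  "seconds_on_task": 41},
]

# Success rate: share of participants who completed the task.
success_rate = sum(s["completed_task"] for s in sessions) / len(sessions)

# Average time on task across all participants.
avg_time = sum(s["seconds_on_task"] for s in sessions) / len(sessions)

print(f"Task success rate: {success_rate:.0%}")        # 75% for this sample
print(f"Average time on task: {avg_time:.1f} seconds")
```

Because these numbers are "concrete," the same calculation can be repeated for every test iteration and compared directly.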

Moderated or Unmoderated Usability Tests

Moderated vs. unmoderated testing is somewhat self-explanatory. In moderated usability testing, there’s a moderator to help walk users through the test. They act as a guide, answer questions, observe the users as they complete the test, ask follow-up questions, and keep everyone on task. Moderating can be done both in person and remotely.

In unmoderated testing, users complete the test on their own, using guided tools or apps that record answers and/or actions. Unmoderated testing’s advantage is that users cannot be influenced by a moderator. The disadvantage, however, is that you lose some information that can only be gained by observation.

Remote or In-Person Usability Tests

Deciding between remote and in-person usability testing often comes down to cost; in-person testing can be significantly more expensive. However, other factors, like the type of product or service you’re testing, can also dictate which you go with.

In-person usability testing is conducted on site, often in a research lab or other area built for testing or research and development. It requires a lot of resources, and it can be more difficult to entice users to visit a physical location, so your audience might be limited. But if possible, in-person testing is ideal for control and observation purposes. It might even be mandatory, if product testing requires safety equipment or supervision.

Because in-person testing can be expensive and time-consuming, some market researchers opt for remote usability tests, if possible. These are conducted using online tools like survey apps, observation software, or even teleconferencing. Users record their answers and experiences and submit them via the software. Remote usability tests can be run either moderated or unmoderated.

Other Types of Usability Testing

After sorting your testing into the three categories above, you can further drill down and get very specific with the types of tests you run. Some of the more common tests used for specific purposes are:

  • Card sorting: Card sorting asks users to organize your website’s content into groups that make sense to them. This method lets you understand whether the site’s layout is intuitive and leads your audience where you want it to go.
  • A/B testing: A/B testing can teach you important things about your product, but it measures preference rather than user behavior, which is why many researchers don’t consider it true usability testing.
  • Tree testing: Tree testing tells you how easily users can find information using your website navigation. It strips your site’s navigation down to a simple “tree” so you can see how many “branches” users need to traverse to find what they want (see the sketch after this list).
  • Guerrilla testing: For those who don’t have the time or resources to conduct more elaborate usability testing, there’s guerrilla testing. This is a DIY approach to understanding your users’ behaviors. Essentially, you pick people who have potentially never heard of or used your product, ask them to use it right there on the spot, and gather their feedback. It’s not very scientific, but it’s far better than not doing any testing at all.
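
As an illustration of the tree-testing idea above, here’s a small, hedged sketch that scores hypothetical click paths against a target destination. The navigation labels, paths, and scoring rules are invented for this example:

```python
# Sketch: scoring tree-test results, assuming each participant's click path
# and the correct destination were recorded. All data here is hypothetical.

correct_destination = "Support > Billing > Refunds"

paths = [
    ["Support", "Billing", "Refunds"],           # direct success
    ["Shop", "Support", "Billing", "Refunds"],   # found it after backtracking
    ["Shop", "Account", "Orders"],               # never found it
]

def score(path, target):
    target_nodes = target.split(" > ")
    if path == target_nodes:
        return "direct success"
    if path[-len(target_nodes):] == target_nodes:
        return "indirect success (backtracked first)"
    return "failure"

for path in paths:
    print(f"{len(path)} clicks -> {score(path, correct_destination)}")
```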

How to Run a Usability Test

The key to successful usability testing is keeping your testing methods consistent so that results are always comparable. No matter your capabilities or budget, it’s not only possible to run a usability test; it's vital. Understanding how real users use, interact with, and feel about your product can’t be a guess.

Here are the steps for running a usability test.

1. Define Your Goals

Before you ever write your first test question, you have to define the ultimate goals of the usability test. Do you want to learn about a particular workflow? Do you need to understand how easy it is to use your app? Are you looking for insights about how you’ve formatted information? A clear objective helps you create the right types of questions and mold the test to get the results you need.

2. Define Your Audience

Knowing who uses your product will tell you who you should target for usability testing. You don’t want to target English-only speakers if your app is exclusively in Spanish, after all. Knowing your target audience’s age, profession, economic category, location…any audience criteria you uncovered in your initial user research can and should apply to your usability testing. Certain testing might only be applicable for certain segments of your audience, so it’s important to understand every aspect of your audience before you begin.

3. Establish Testing Criteria

Unlike your goals and objectives, which define the ultimate answers you’re seeking, your testing criteria establish the parameters of the test itself. These criteria might revolve around usability metrics, such as time on page or success rate, but they can also factor in more opinion-based questions, like “Did you like the dark mode version better than normal mode?”
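
If it helps, the quantitative side of your criteria can be written down explicitly before testing begins. Here’s a rough sketch under the assumption that you track a handful of metrics with pass/fail thresholds; the metric names and threshold values are made up for illustration:

```python
# Illustrative sketch: recording pass/fail thresholds for a usability test
# before it runs. The metrics and thresholds are hypothetical.

criteria = {
    "task_success_rate":       {"goal": ">=", "threshold": 0.80},
    "median_seconds_on_task":  {"goal": "<=", "threshold": 90},
    "satisfaction_rating_avg": {"goal": ">=", "threshold": 4.0},  # 1-5 scale
}

def meets(value, rule):
    # Compare a measured value against its threshold in the stated direction.
    return value >= rule["threshold"] if rule["goal"] == ">=" else value <= rule["threshold"]

# Example results from one test iteration (hypothetical numbers).
results = {"task_success_rate": 0.72, "median_seconds_on_task": 84, "satisfaction_rating_avg": 4.2}

for metric, rule in criteria.items():
    status = "pass" if meets(results[metric], rule) else "needs work"
    print(f"{metric}: {results[metric]} -> {status}")
```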

Plan for replication and iteration now, too. After your initial testing, you might want to run the test again, either to establish consistency or to gauge how your audience reacts to changes, so this is the time to decide how many iterations of the test you’ll run.

4. Brainstorm Questions for Before, During, and After the Test

Your usability test questions have to be specific, relevant, clear, and concise, but they also need to lead your user through their journey. Each question doesn’t necessarily have to build on the last, but the order shouldn’t feel arbitrary, either.

Your questions have to get to the heart of why users use your products the way they do. Not all users are conscious of the ‘why’ behind their actions, so your questions have to help them talk through their usage patterns. To stimulate conversation and gather feedback, include a mix of open-ended questions with follow-ups and multiple-choice questions. The more details you capture, the better.

Consider writing questions for before, during, and after the test in order to capture the entire user experience.

What Are the Four Types of Usability Test Questions?

To help guide usability testing, there are four types of usability test questions:

  • Screening questions
    • These early-stage questions help you decide which types of users to include in the process. Depending on the objective of the test, create a list of criteria that defines an ideal participant. Do you want new users who’ve just started using your platform, or do you want power users with extensive experience? Defining your ideal participants ensures that the data you collect comes from users most likely to give valid and relevant insights.
  • Pre-test questions
    • These questions are meant to gauge the participants’ experience and usage patterns. Similar to screening questions, pre-test questions eliminate participants who don’t fit the criteria you’ve established and who could skew your results. Your pre-test questions should be designed to uncover whether participants know enough about your website or app to give the type of insights you need to make improvements or changes.
  • In-test questions
    • These questions cut to the heart of your research and yield the information you’ll use to uncover bottlenecks, pain points, and opportunities. The goal is to have a conversation with participants rather than follow a formal question-and-answer format. A casual conversation helps the usability test flow naturally and puts participants at ease. The more at ease they are, the better the chances they’ll share their honest opinions.
  • Post-test questions
    • These questions make up the final stage of the user interview, and present a chance for you to ask follow-up questions you missed or to ask for clarification. These user-testing questions can be more general, but you should continue to use open-ended questions to maximize the amount of information you receive.

Choosing the right usability test question matters because it can make or break what kind of responses you get. Picking questions that help you meet your end goal or hypothesis can help the entire process run more smoothly. In contrast, picking questions that have nothing to do with your end goal can lead you down rabbit holes that don’t serve the purpose of your test in the first place.

Here are a few good examples of screening questions, pre-test questions, in-test questions, and post-test questions.

Screening Questions

  • How Old Are You?
  • What’s Your Income Level?
  • What’s Your Highest Level of Education?
  • When Was the Last Time You Used the Website or App?
  • How Often Do You Use the Website or App?
  • How Much Time Do You Spend on the Website or App?

Pre-Test Questions

  • What Do You Use the Website or App For? How Often?
  • Which Features Do You Use Most?
  • How Satisfied Are You With the Available Workflows?
  • What Other Apps Did You Use or Research Before Choosing This App?
  • Why Did You Choose This Website or App?

In-Test Questions

  • When You Log On, What’s the First Thing You Do? Is There Another Way To Complete This Task?
  • How Do You Use X Feature?
  • What Parts of the Website or App Do You Use the Most? Why?
  • What Parts of the Website or App Do You Use the Least? Why?
  • Do You Like the Interface? Is It Easy To Use?
  • What Do You Think About How Information and Features Are Laid Out?
  • What Do You Think of X Page? How Easy Is It To Find?

Post-Test Questions

  • Overall, What’s Your Experience Been With the Website or App?
  • If You Could Change One Thing About the Website or App, What Would It Be? Why?
  • What One Thing Are You Most Excited About With the Website or App? Why?
  • Why Will You Continue To Use This Website or App? What Will Stop You From Using This Website or App in the Future?
  • How Likely Are You To Refer This Website or App to Someone Else? Why or Why Not?

5. Test Your Test

Running a pilot test is a smart idea that can clear up problems before you begin talking to users. You can run a pilot test using a small group of individuals who are familiar with your goals but not part of the actual project (think: coworkers in a different department). 

This way, they can provide feedback on how the test itself works, with enough “insider” knowledge to make their input useful. Use your pilot to clarify confusing questions, fix “glitches” if you’re using software, change locations, and so on.

6. Recruit Participants

Now that you’ve established your goals, know your audience, written your questions, and tested your test, you need to find people to take it. There are quite a few ways you can recruit test participants. The most direct and obvious way is to simply invite them. You can send email invites to an existing mailing list, contact a list of known product users, or recruit them in person if you’re at an event or convention; whatever generates willing users who meet your pre-established criteria. 

7. Run the Test

After all the pre-test work, it’s finally time to run the test. Make sure that you run the test the same way, every time, so that your results are comparable.

When administering the test, make sure that your audience understands the instructions and test questions. Be prepared to clear up any confusion they voice so that their answers are based on the questions as you intended. Observe how they behave during the test, and if they’re using an actual product as part of the test, note any confusion or other negative behavior. Remember to ask follow-up questions or steer your respondents as needed.

Give the respondents your prepared scenarios, see how they interact with your product while trying to complete tasks, and uncover the confusion points.

8. Use Transcription

Using an accurate and reliable transcription service like Rev can save you an incredible amount of time throughout the whole process. If you’re running moderated tests with an active moderator who leads the conversation, be sure to record the session for transcription, which can free the moderator up from taking notes.

Transcription can also streamline your data analysis process. Instead of wading through hours of recorded test audio and video, you can search for the exact data you need with easily searchable, scannable text documents. Transcripts are also easy to share, foster collaboration, and make the whole process easier compared to traditional audio recordings.
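
As a simple illustration of what “searchable” buys you, here’s a hedged sketch that scans a folder of plain-text transcripts for recurring pain-point phrases. The folder name and phrase list are hypothetical, and this isn’t tied to any particular transcription tool:

```python
# Sketch: scanning plain-text transcripts for recurring pain-point phrases.
# The folder name and phrase list are hypothetical examples.

from pathlib import Path
from collections import Counter

phrases = ["confusing", "couldn't find", "didn't expect", "frustrating"]
counts = Counter()

# Count how often each phrase appears across all transcript files.
for transcript in Path("transcripts").glob("*.txt"):
    text = transcript.read_text(encoding="utf-8").lower()
    for phrase in phrases:
        counts[phrase] += text.count(phrase)

for phrase, n in counts.most_common():
    print(f"'{phrase}' appeared {n} times across all sessions")
```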

Be sure to use a transcription platform that offers multi-file insights, like Rev. Being able to search multiple files at once is a boon for those who conduct many test iterations, and our AI can analyze multiple tests at once, noting trends and other insights.

9. Analyze Your Data

You have your data. Now it’s time to learn from it! Based on the goals you set way back at the beginning of the process, you should be able to examine all the user feedback or numerical data you’ve gained to create action items that will help you hone your products to better meet your customers’ needs. If you used Rev to transcribe your data, you can use Rev to help analyze it as well, which is awfully handy.
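
If you ran more than one iteration of the test, part of the analysis can be a straight side-by-side comparison between rounds. A brief sketch with hypothetical numbers:

```python
# Sketch: comparing the same metrics across two test iterations.
# All numbers are hypothetical.

iteration_1 = {"task_success_rate": 0.68, "avg_seconds_on_task": 102}
iteration_2 = {"task_success_rate": 0.81, "avg_seconds_on_task": 76}

for metric in iteration_1:
    before, after = iteration_1[metric], iteration_2[metric]
    print(f"{metric}: {before} -> {after} ({after - before:+.2f})")
```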

Remember, the goal of your usability tests and the questions you ask should be to have a smooth flow and accurate results that you can use to identify areas for improvement.

Benefits of UI and UX Testing

The primary benefit of UI and UX testing is to learn more about how your customers feel about your products or services. You can then use that information to make informed decisions about upgrading your product, changing your UI, expanding your services…whatever better meets your audience’s needs. 

Other benefits of usability testing include:

  • Better understanding of your users’ knowledge about you and your products
  • Discovering unexpected errors or user frustrations
  • Reduced research and development costs
  • Higher user satisfaction rate

Common Usability Misconceptions

Usability testing is a vast field, and it can be intimidating to those who have never conducted it before. However, understanding a few common misconceptions makes it much less intimidating. Here are some of the things people most often misunderstand about usability testing.

  • Misconception: Usability testing is expensive
    • Reality: It can be. In-person and moderated testing takes a lot of time and resources. But it doesn’t have to be that way. You can conduct usability tests by sending your users a survey electronically or hopping on a video call, for a fraction of the price. 
  • Misconception: Usability testing is only for finished products
    • Reality: Conducting usability testing on a trial or beta version of a product can save you a lot of time and money in the long run, correcting problems before they’re in large-scale production.
  • Misconception: Usability testing is only for established users
    • Reality: While known users of your product provide valuable insight, they’re not the only people worth hearing from. Internal staff, potential users, and even total outsiders can offer completely unexpected thoughts that can change your entire trajectory.
  • Misconception: Usability testing is easy
    • Reality: Again, it can be easy. But to get the most out of your data, your tests need to be thoroughly thought out and vetted, and your results need to be analyzed in a way that creates actionable insights.

Technology + User Tests

As in many other industries, technology has streamlined the entire usability testing process and made it a little more accessible, especially for those with limited time, money, or manpower. AI in particular has aided this effort, streamlining processes and doing the work of multiple people.

With AI usability testing, the following processes can be automated:

  • Transcribing interviews and test sessions
  • Taking notes during testing
  • Scheduling testing sessions
  • Sending invitations
  • Writing questions (though we recommend human oversight here)
  • Data analysis, summarization, and trend finding

With the rise of remote testing, usability testing is much less analog than it used to be. Since most teleconferencing apps have some form of built-in AI or AI-generated transcription, taking notes and turning hours of audio recordings into text is a snap. And with AI assistants like Rev’s, you can automatically generate session summaries and insights that you might have otherwise missed.

There are plenty of other tools that make usability testing easier than ever, too. Here are a few of the more notable ones:

  • Hotjar can create heatmaps for your website and can automatically generate surveys and other test formats.
  • Optimal Workshop provides a plethora of straightforward testing tools focused on apps, websites, and other digital products.
  • UserFeel creates videos of people while they take your test, use your website, or share their insights about your product.

Rev For Productive Market Research

Rev can make nearly every aspect of your market research and usability testing easier. Our industry-leading transcription is reliable, secure, and offers fast turnaround times. And with a built-in AI assistant, you can schedule events, send invitations, and take notes automatically.
