It doesn’t matter what you call it. Here’s the bottom line: you need to test software with its future users.
One of the topics that create much confusion in the software world is the difference between usability testing (UT) and user acceptance testing (UAT). User experience (UX) professionals worry only about usability testing, while developers and QA people focus on user acceptance testing. Other stakeholders use both terms interchangeably, following the sentiment, “It doesn’t matter what you call it — as long as we test our software with users.”
Both tests indeed involve end users working with the software with the goal of finding shortcomings. However, the flaws being identified differ, and with them the purpose of each test. The point in time when the tests are conducted may also differ, depending on the development process being used. So let us try to untangle all of this.
What Do the Tests Do?
In both usability testing and user acceptance testing, end users engage with a product and work through certain test scenarios. Observing test users succeed or fail at test tasks, and hearing their comments about the product, provides insight into the product’s quality.
UT is concerned with understanding the user experience that manifests itself when a user engages with a product or concept. UX is a comprehensive concept describing and measuring the objective and subjective effectiveness and efficiency of the interaction — that is, to what degree users can achieve their objectives and how much effort they must put forth.
UX also includes the psychological impact of the user-product interaction: Is it perceived as comfortable, joyful, stressful, confusing or straightforward? During usability tests, both qualitative and quantitative aspects are assessed. Qualitative aspects encompass the sentiment of test users: their comments about and reactions to the product. An example finding: “Test user is surprised that the ‘Close’ button saves their settings in the modal window.” Quantitative aspects are measurable findings, such as task success rates, task completion times and the number of user errors committed during task execution. For instance, a finding may be: “85% of test users completed test task #1 in under 1 minute.”
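To make the quantitative side concrete, here is a minimal sketch of how such metrics could be computed from recorded test sessions. The session data is invented for illustration; real studies would pull these numbers from logging or observation sheets.

```python
# Hypothetical session records from a usability test of one task:
# (completed, seconds_to_complete) for each test user.
sessions = [
    (True, 42), (True, 55), (False, 90),
    (True, 38), (True, 61), (True, 47),
]

# Task success rate: share of users who completed the task at all.
completed = [s for s in sessions if s[0]]
success_rate = len(completed) / len(sessions)

# Share of all users who completed the task in under 1 minute.
under_a_minute = sum(1 for c in completed if c[1] < 60) / len(sessions)

print(f"Success rate: {success_rate:.0%}")
print(f"Completed in under 1 minute: {under_a_minute:.0%}")
```

With the sample data above, five of six users succeed and four of six finish within a minute, which is exactly the kind of finding a usability report would state.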
UAT is concerned with understanding whether the product does what it is supposed to do and whether its target audience can in fact achieve their objectives. In other words: UAT checks whether the product meets the defined business requirements. This means that the right functions are built in, and that the code is faultless.
For example, we may test whether an ecommerce website allows users to add an item from a product detail page to the shopping cart. The product passes the test if predetermined acceptance criteria are met, for example: The user selects the desired quantity of the product; the user clicks the “add to cart” button; the page shows a pop-up message reading “added to cart.” UAT notes usability issues that test users comment on, yet the usability itself is not measured.
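Acceptance criteria like these can often be expressed as an automated check. The sketch below assumes a hypothetical `Cart` object standing in for the real application under test (which in practice might be driven through a browser-automation tool); the class and its methods are invented for illustration.

```python
# Hypothetical stand-in for the application's shopping cart.
class Cart:
    def __init__(self):
        self.items = {}           # sku -> quantity
        self.last_message = None  # last confirmation shown to the user

    def add(self, sku, quantity):
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self.items[sku] = self.items.get(sku, 0) + quantity
        self.last_message = "added to cart"

def test_add_to_cart():
    cart = Cart()
    cart.add(sku="SKU-123", quantity=2)          # user selects quantity, clicks button
    assert cart.items["SKU-123"] == 2            # item landed in the cart
    assert cart.last_message == "added to cart"  # confirmation message shown

test_add_to_cart()
```

Each assertion maps to one acceptance criterion; the product "passes" UAT for this scenario only if all of them hold.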
Usability testing: Does the product produce a good user experience?
User acceptance testing: Is the product fit for purpose?
What Value Do the Tests Provide?
As mentioned above, seeing how actual users from the target demographics interact with the product provides an impartial and authentic understanding of the product’s quality.
In most cases usability testing is used formatively — it entails a rework of the product to mitigate identified shortcomings. The testing is part of the iterative design and development process and ensures that usability defects are uncovered as early as possible and fixed before the product is released to the market.
The role of user acceptance testing, on the other hand, is to verify that the product to be released serves its stated purpose, and that the code is faultless, allowing target users to work with it successfully. It gives the company building the product a level of assurance that what it is about to release delivers the capabilities it was built to provide. Traditionally, this verification has happened only once, at the end of the development process — a summative testing approach often called a “beta test.” As I will explain next, this has changed in the age of agile development processes.
When to Carry Out the Tests?
The type of development process being used to create the product has an impact on the timing of the tests. Let’s consider waterfall and agile processes.
In a waterfall process, where the development phases (analysis, design, implementation and testing) are done one after the other, UT does NOT happen during the regular testing phase. Instead, it happens during the design and development phases, where it helps shape further design and development iterations. Because the cost of change grows exponentially over the course of product development, the sooner UT can be carried out, the better. All that is needed is something a test user can experience (more about that later).
UAT in waterfall is conducted at the end of the development process. It is done not to improve the product further, but to verify that the business requirements have been met and that the code is fully functional, enabling users to achieve their objectives. If the test reveals that this is not the case, it poses a challenge, because the developers must then go back in the process (to stay in the waterfall metaphor: swim upstream) to mitigate the issues in earlier process stages — which is costly in terms of time and resources.
In agile, where a development project is sliced into micro-projects called sprints, each of which runs through the product development phases (analysis, design, implementation and testing), both UT and UAT are in principle part of the individual sprints. In practice, with sprints typically being only two weeks long, it can be challenging to fit tests with end users into the tight schedule. One option is to carry them out at a regular cadence in their own, separate sprints.
Interestingly, when UAT is done within agile, it is formative rather than summative — it no longer happens only once at the end, but throughout product development. Since both tests are then formative, providing insights that are used to further advance and optimize the product, you could argue that they are essentially one and the same. However, the difference in scope remains: UT focuses solely on the user experience, while UAT focuses on adherence to business requirements and flawless code.
What Product Fidelity Do You Need to Run Each Test?
Fidelity refers to how real the product to be tested is — from a paper prototype to a fully working product with all features, functions, look and feel.
A usability test requires less fidelity than a user acceptance test. Paper prototypes and simple click-through mockups are regularly tested in UT. The required fidelity depends on what you want to test: Is it the layout and information architecture? Then a paper prototype or a set of static images is fine. Is it the micro-interactions and animations? Then testing anything static would not yield good insights.
To conduct a user acceptance test there is less wiggle room: you need running software with executable code that allows people from the target demographics to carry out the test scenarios. In a waterfall process, this does not mean the code cannot be tested at earlier stages; it can, but with different tests, such as unit tests (testing the smallest parts of a product individually) and integration tests (testing the parts together as a combined system). In agile, as you test formatively, not all features and functions that the final product will offer have to be implemented at the time of user acceptance testing. Only those in the scope of the test scenarios and acceptance criteria are required.
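The contrast between unit and integration tests can be shown in a few lines. The functions below are hypothetical, deliberately tiny examples (prices in cents to avoid floating-point noise), not code from any real shop system.

```python
# Hypothetical pieces of an ecommerce pricing module.
def line_total(price_cents, quantity):
    """Total for one order line."""
    return price_cents * quantity

def cart_total(lines):
    """Total for a whole cart: combines the line_total pieces."""
    return sum(line_total(p, q) for p, q in lines)

# Unit test: the smallest part of the product, exercised in isolation.
assert line_total(999, 3) == 2997

# Integration test: the parts working together as a combined system.
assert cart_total([(999, 3), (500, 2)]) == 3997
```

Neither of these involves an end user; that is exactly what distinguishes them from UT and UAT.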
Who Is Orchestrating the Tests?
During both UT and UAT, the software under review is exposed to end users. After all, it is the customers who determine the success or failure of the product: a defective or hard-to-use application would not gain traction in the market, making it difficult to reach the desired revenue. The corporate functions responsible for planning, executing, analyzing and reporting the results differ between the two tests.
UTs are carried out by members of the UX team. Depending on how that team is built, a user researcher or usability tester is responsible. To avoid confirmation bias, the test should not be carried out by the UX designers themselves.
UATs are executed by the QA or testing team, which is normally separate from both UX and development.
Can Both Tests Be Combined?
Would it not make sense to fuse the two tests into one? It could save time and money.
In the waterfall world this is not possible because, as stated above, the two tests are carried out at different points in time: UT, as a formative test, is executed repeatedly and as early as possible in the development process, while UAT is summative and done once at the end.
In agile, where both tests are used formatively, they could be combined into one — if they are both based on the same test scenarios. Test users would carry out these scenarios and share their reactions and sentiments with the UX researcher and QA person, both of whom would still be needed.
I hope this article has helped to explain the commonalities and differences between usability testing and user acceptance testing. They both have their place and relevance, and they’re not the same. To answer the question in the title of the article, it is not an “or,” but an “and.”