Enhancing UX: Exploring Software Usability Evaluation

According to Krug et al. (2014), conducting usability evaluations at regular intervals with the involvement of potential future users increases the likelihood that the application will be accepted and used. If usability is evaluated only by those who were involved in creating the application, it is more likely that primarily technical aspects are taken into account and that workflows typical for the developers are perceived as effective and efficient, even though the average user finds them cumbersome in daily work.

Before getting into the evaluation of software usability, you should familiarize yourself with the topic in general by reading Enhancing User Experience: An Introduction to Software Usability.

Evaluation

Usability evaluation usually involves users evaluating the system and its elements, observers interacting with the users, and usability experts assessing the actions that were performed and documented. Several usability evaluation methods are described below.

Heuristic evaluation

A heuristic evaluation involves both evaluators and usability experts. A simple, generally applicable set of heuristics and usability principles is used by the evaluators to examine and assess the system interface (Incarnati, 2011). Each evaluator walks through the system interface alone, analyzing the existing elements and comparing them against the predefined usability standards. The evaluators can also pass ideas for improving the user experience on to the usability experts (Incarnati, 2011).
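
To make the outcome of a heuristic evaluation concrete, the sketch below records findings with severity ratings. The 0-4 severity convention follows Nielsen's widely used scale; the heuristic name and the finding itself are invented for illustration.

```python
from dataclasses import dataclass

# Severity convention popularized by Nielsen: 0 = no problem ... 4 = usability catastrophe.
SEVERITY = {0: "not a problem", 1: "cosmetic", 2: "minor", 3: "major", 4: "catastrophe"}

@dataclass
class Finding:
    heuristic: str   # which heuristic or usability principle was violated
    location: str    # where in the interface the problem was observed
    severity: int    # 0-4, see SEVERITY
    note: str        # suggested improvement to pass on to the usability experts

findings = [
    Finding("Visibility of system status", "checkout page", 3,
            "No progress indicator while the order is being submitted."),
]

# Report the most severe findings first so the experts can prioritize fixes.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{SEVERITY[f.severity]}] {f.heuristic} @ {f.location}: {f.note}")
```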

Thinking aloud

In the Thinking aloud method, users perform specific tasks and are asked to say out loud or write down all thoughts concerning the system. The users are recorded while performing the tasks or activities. These recordings, together with the users' notes, are then analyzed by usability experts to determine the usability level of the product (Incarnati, 2011).
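
As an illustration of how such recordings and notes can be prepared for analysis, the following sketch codes individual think-aloud remarks by task and category. The data structure and the category labels are assumptions for illustration, not part of the method's definition.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Observation:
    task: str       # task the user was performing when the remark was made
    timestamp: str  # position in the recording, e.g. "00:04:12"
    quote: str      # the user's verbalized thought
    code: str       # analyst's category, e.g. "confusion", "positive", "workaround"

observations = [
    Observation("T1: find invoice", "00:01:05", "Where is the billing menu?", "confusion"),
    Observation("T1: find invoice", "00:02:40", "Ah, that was easy once I found it.", "positive"),
]

# Tally coded remarks per task to see where users struggled most.
for (task, code), n in Counter((o.task, o.code) for o in observations).items():
    print(task, code, n)
```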

Cognitive Walkthrough

In the Cognitive Walkthrough, usability experts go through typical user tasks step by step. At each step, the evaluators ask themselves the following four questions about the expected behavior of actual users:

  • Is the user trying to achieve the right effect?
  • Does the user recognize that the right action is available?
  • Will the user associate the right action with the effect to be achieved?
  • When the user performs the correct action, will he or she see that progress is being made toward solving the task?

The answers should be as realistic as possible. For every question answered negatively, a list of points is created explaining why, from the experts' point of view, the user cannot perform the task (Incarnati, 2011).
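
A minimal sketch of this bookkeeping is shown below: each step stores the answers to the four questions, and negative answers are collected into the experts' list of failure points. The step and the explanation are invented examples.

```python
# The four walkthrough questions quoted above.
QUESTIONS = [
    "Is the user trying to achieve the right effect?",
    "Does the user recognize that the right action is available?",
    "Will the user associate the right action with the effect to be achieved?",
    "Does the user see that progress is being made toward solving the task?",
]

# One entry per step of the task: for each question, True/False, plus an
# explanation whenever the answer is negative (illustrative data).
walkthrough = [
    {"step": "Open the export dialog",
     "answers": [True, False, True, True],
     "explanations": {1: "The export icon is hidden behind an unlabeled menu."}},
]

# Collect the list of failure points the experts would report.
for step in walkthrough:
    for i, ok in enumerate(step["answers"]):
        if not ok:
            print(f'Step "{step["step"]}": {QUESTIONS[i]} -> NO: {step["explanations"][i]}')
```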

Pluralistic Walkthrough

The Pluralistic Walkthrough involves three groups of evaluators: end users, usability experts, and software developers. All participants assume the role of the end user. Before one of the chosen real-world tasks is performed directly on the user interface, the intended interaction steps are written down so that the interaction strategy can be analyzed in advance, thereby increasing usability (Incarnati, 2011).

Questionnaire

Various standardized questionnaires developed by experts have become established for the subjective evaluation of usability as perceived by users. These questionnaires contain a series of questions in a specific order and format, with specific rules for deriving ratings of user interfaces and user satisfaction from the responses (Sauro, Lewis, Hartson & Pyla, 2016).
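
As a generic illustration of how responses to such a standardized questionnaire can be summarized, the sketch below computes per-item and overall means. The number of items, the 1-5 agreement scale, and the data are placeholders, since item order, format, and scoring rules differ per instrument.

```python
import statistics

# Illustrative responses: one list of item ratings per participant,
# e.g. on a 1-5 agreement scale (the real scale depends on the instrument).
responses = [
    [4, 5, 3, 4],
    [3, 4, 4, 5],
    [5, 5, 4, 4],
]

# Per-item means show which statements score poorly; the overall mean
# gives a single satisfaction figure to track across study rounds.
item_means = [statistics.mean(item) for item in zip(*responses)]
print("per-item means:", item_means)
print("overall mean:", statistics.mean(item_means))
```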

Post-Study questionnaires

Post-study questionnaires are administered after all tasks have been completed, at the end of the usability study.

Website Analysis and Measurement Inventory

The WAMMI is a short but very reliable questionnaire for finding out what users think about a website (J. R. Lewis, 2018). It consists of 20 statements that users rate on a scale from Strongly Agree to Strongly Disagree (Kirakowski & Cierlik, 1998).

The following quality aspects are covered by the WAMMI Questionnaire (Assila et al., 2016):

  • Attractiveness
  • Controllability
  • Efficiency
  • Learnability
  • Helpfulness

Standardized User Experience Percentile Rank Questionnaire

SUPR-Q is a questionnaire for evaluating web applications. The SUPR-Q consists of 8 questions and statements that are rated by users on a 5-point Likert scale (Sauro, 2015).

The SUPR-Q Questionnaire covers the following quality aspects (Assila et al., 2016):

  • Appearance
  • Loyalty
  • Usability
  • Trust

Questionnaire for User Interface Satisfaction

QUIS is a universally applicable questionnaire for evaluating products and software at the end of a usability study. The QUIS consists of 27 statements that are rated by users on a 10-point scale (Naeini & Mostowfi, 2015).

The QUIS Questionnaire covers the following quality aspects (Naeini & Mostowfi, 2015):

  • Overall reaction to the software / Overall system
  • Screen factors
  • Terminology and system information
  • Learnability (ease of learning)
  • System capabilities

Computer System Usability Questionnaire / Post-Study System Usability Questionnaire

CSUQ and PSSUQ are universally applicable questionnaires for evaluating computer systems at the end of a usability study. Both consist of 16 statements that are rated by users on a 7-point scale (J. Lewis, 1992). The two questionnaires differ only in the wording of the 16 statements (J. R. Lewis, 2018).

The CSUQ/PSSUQ Questionnaires cover the following quality aspects (Assila et al., 2016):

  • Overall reaction to the software / Overall system
  • System usefulness
  • Information quality
  • Interface quality
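
A scoring sketch for the 16-item version follows. The subscales are typically reported as means of item groups; the item-to-subscale grouping used here is the one commonly cited for the 16-item PSSUQ, but it should be verified against Lewis's published item assignment, and the ratings are invented.

```python
import statistics

# Common subscale grouping for the 16-item PSSUQ/CSUQ (1-based item numbers);
# verify against Lewis's published item assignment before relying on it.
SUBSCALES = {
    "System usefulness":   range(0, 6),    # items 1-6
    "Information quality": range(6, 12),   # items 7-12
    "Interface quality":   range(12, 15),  # items 13-15
    "Overall":             range(0, 16),   # items 1-16
}

# One participant's ratings on the 7-point scale (lower = stronger agreement
# with the positive statements, i.e. better). Illustrative data.
ratings = [2, 1, 3, 2, 2, 1, 3, 2, 4, 2, 3, 2, 1, 2, 2, 3]

for name, items in SUBSCALES.items():
    print(name, round(statistics.mean(ratings[i] for i in items), 2))
```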

Software Usability Measurement Inventory

SUMI is a universally applicable questionnaire for evaluating software applications at the end of a usability study. The SUMI consists of 50 statements that are rated by users on a 3-point scale (Kirakowski & Corbett, 2006).

The following quality aspects are covered by the SUMI Questionnaire (Kirakowski & Corbett, 2006):

  • Learnability (ease of learning)
  • Efficiency
  • Affect
  • Helpfulness
  • Control

System Usability Scale

SUS is a universally applicable questionnaire for evaluating software at the end of a usability study. The SUS consists of 10 statements that are rated by users on a 5-point scale (Brooke, 1995).

The following quality aspects are covered by the SUS Questionnaire (Assila et al., 2016):

  • Ease of use / Usability
  • Learnability (ease of learning)
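
The SUS has a well-known scoring rule that maps the ten ratings to a 0-100 score: odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum of contributions is multiplied by 2.5. A small sketch with invented ratings:

```python
def sus_score(ratings):
    """Standard SUS scoring: odd items contribute (rating - 1), even items
    (5 - rating); the summed contributions are multiplied by 2.5 (0-100)."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = sum(r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(ratings))
    return total * 2.5

# Example: a fairly positive participant (illustrative ratings).
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # -> 80.0
```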

User Experience Questionnaire

UEQ is a universally applicable questionnaire for evaluating software at the end of a usability study. The UEQ consists of 26 items that are rated by users on a 7-point scale coded from -3 to +3 (Schrepp, 2019).

The following quality aspects are covered by the UEQ Questionnaire (Schrepp, 2019):

  • Attractiveness
  • Perspicuity
  • Efficiency
  • Dependability
  • Stimulation
  • Novelty
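
UEQ responses are recoded to the -3 to +3 range, and each scale is reported as the mean of its items. The sketch below assumes an illustrative item-to-scale mapping (six Attractiveness items, four items for each other scale); the authoritative mapping ships with the analysis tools at ueq-online.org.

```python
import statistics

# Item-to-scale mapping (0-based indices) assumed here for illustration only;
# consult the UEQ handbook / ueq-online.org tools for the official assignment.
SCALE_ITEMS = {
    "Attractiveness": [0, 1, 2, 3, 4, 5],
    "Perspicuity":    [6, 7, 8, 9],
    "Efficiency":     [10, 11, 12, 13],
    "Dependability":  [14, 15, 16, 17],
    "Stimulation":    [18, 19, 20, 21],
    "Novelty":        [22, 23, 24, 25],
}

def ueq_scale_means(recoded):
    """recoded: 26 item responses already recoded to -3..+3."""
    return {scale: round(statistics.mean(recoded[i] for i in items), 2)
            for scale, items in SCALE_ITEMS.items()}

example = [1, 2, 2, 1, 0, 2, 3, 2, 2, 1, 1, 2, 0, 1,
           2, 2, 1, 1, 0, 1, 2, 1, -1, 0, 1, 1]
print(ueq_scale_means(example))
```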

Post-Task questionnaires

Post-task questionnaires are administered after each individual task.

After-Scenario Questionnaire

ASQ is a universally applicable questionnaire for the evaluation of software after completion of a task (J. Lewis, 1991). The ASQ consists of 3 statements which are evaluated by users on a 7-point scale.

The following quality aspects are covered by the ASQ Questionnaire (Assila et al., 2016):

  • Overall ease of task completion
  • Satisfaction with completion time
  • Satisfaction with support information

Usability Magnitude Estimation

UME is a universally applicable questionnaire for evaluating software, or products in general, after completion of a task. The UME consists of a single statement or question that is rated by users on a 100-point scale with reference to the difficulty of the task just performed (McGee, 2003).

The UME Questionnaire covers the following quality aspect (Assila et al., 2016):

  • Overall ease of task completion
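
Because magnitude estimates are ratio-scaled, they are commonly summarized with a geometric rather than an arithmetic mean, as in this small sketch with invented estimates:

```python
import math

# Geometric mean of UME difficulty estimates for one task across participants.
def geometric_mean(estimates):
    return math.exp(sum(math.log(e) for e in estimates) / len(estimates))

# Illustrative difficulty estimates from five participants.
print(round(geometric_mean([20, 35, 25, 50, 30]), 1))
```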

Single Ease Question

SEQ is a universally applicable question for evaluating software after completion of a task. The SEQ consists of a single question that is rated by users on a 7-point scale, referring to the difficulty of the task just completed (Assila et al., 2016).

The following quality aspect is covered by the SEQ Questionnaire (Assila et al., 2016):

  • Overall ease of task completion

Subjective Mental Effort Question

SMEQ is a universally applicable question for evaluating software after completion of a task. The SMEQ consists of a single question that is rated by users on a 150-point scale, referring to the difficulty of the task just performed (Sauro & Dumas, 2009).

The SMEQ Questionnaire covers the following quality aspect (Assila et al., 2016):

  • Overall ease of task completion
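
To show how these single-question post-task ratings feed back into the study, the sketch below ranks tasks by their mean SEQ rating so that the hardest tasks surface first; the task names and ratings are invented.

```python
import statistics

# Illustrative SEQ ratings (7 = very easy) collected after each task.
seq_by_task = {
    "T1: register account": [6, 7, 6, 5, 7],
    "T2: change avatar":    [3, 2, 4, 3, 2],
    "T3: export report":    [5, 6, 5, 6, 5],
}

# Rank tasks from hardest to easiest; tasks at the top are redesign candidates.
for task, ratings in sorted(seq_by_task.items(), key=lambda kv: statistics.mean(kv[1])):
    print(f"{task}: mean {statistics.mean(ratings):.1f}")
```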

Comparison of questionnaires

The following table shows a direct comparison of the previously described questionnaires in terms of the number of items, the type of questionnaire, the quality aspects covered, and the scale on which the items are rated.

| Questionnaire | Type | Items | Scale | Quality aspects covered |
|---|---|---|---|---|
| WAMMI | Post-study | 20 | 5-point (Strongly Agree to Strongly Disagree) | Attractiveness, controllability, efficiency, learnability, helpfulness |
| SUPR-Q | Post-study | 8 | 5-point Likert | Appearance, loyalty, usability, trust |
| QUIS | Post-study | 27 | 10-point | Overall reaction, screen factors, terminology and system information, learnability, system capabilities |
| CSUQ / PSSUQ | Post-study | 16 | 7-point | Overall reaction, system usefulness, information quality, interface quality |
| SUMI | Post-study | 50 | 3-point | Learnability, efficiency, affect, helpfulness, control |
| SUS | Post-study | 10 | 5-point | Usability, learnability |
| UEQ | Post-study | 26 | 7-point (-3 to +3) | Attractiveness, perspicuity, efficiency, dependability, stimulation, novelty |
| ASQ | Post-task | 3 | 7-point | Ease of task completion, satisfaction with completion time, satisfaction with support information |
| UME | Post-task | 1 | 100-point | Ease of task completion |
| SEQ | Post-task | 1 | 7-point | Ease of task completion |
| SMEQ | Post-task | 1 | 150-point | Ease of task completion |

Usability Framework

As a first step, established standards, norms, heuristics, and usability aspects should be considered and incorporated into the design of the progressive web application as early as the conception stage. A decisive factor here is to approach the design not only from a technical point of view but also to incorporate the perspective of potential future users.

The second step is to integrate usability evaluation as early as possible into the development process and to perform it iteratively, as in agile processes. Following a classic process model (e.g., the waterfall model) bears the risk that the usability evaluation is performed only once, after all development activities have been completed; that information is lost or overlooked along the way; that the evaluation is assigned too low a priority; or, in the worst case, that it is not performed at all. For the usability evaluation itself, this concept proposes Thinking aloud.

The third and final step of this concept, which feeds into the usability evaluation of step two, is the use of the questionnaires presented in the sections Post-Study questionnaires and Post-Task questionnaires.

The Thinking aloud method already offers good insight into the usability of the tested application through the review by potential future users and the documentation of their thoughts and feelings during that review. Complementing it with practice-proven, scientifically grounded questionnaires strengthens this insight, makes the findings more significant, and makes usability quantifiable.

The following figure shows the steps of the concept alongside a typical agile development process. While planning the development steps, the execution of the usability evaluation for a sprint should be planned at the same time. The usability evaluation itself (step two), together with the questionnaires (step three), is placed alongside conventional testing, where it fits thematically. Since the concept calls for performing the usability evaluation as early as possible in the development process, carrying it out at the same time as testing leaves enough time to evaluate the collected results and plan further steps based on them.

The following figure takes a detailed look at the actual evaluation step. In preparation for the evaluation, the participants are recruited and demographically surveyed. During the evaluation, the participants perform tasks following the Thinking aloud methodology. After each task, the participants rate it with the Post-task Questionnaire UME. When all tasks have been performed and rated, the web application is evaluated using the Post-study Questionnaire SUPR-Q at the end of the usability evaluation.
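
The session flow just described can also be summarized in code. The sketch below is purely illustrative: all of the collect_/administer_ helpers are hypothetical stand-ins for the demographic survey, the Thinking aloud protocol, and the published UME and SUPR-Q instruments.

```python
# Hypothetical stand-ins for the real survey and questionnaire instruments.
def collect_demographics(participant):
    return {"participant": participant, "age_group": "25-34"}  # placeholder survey

def perform_task_thinking_aloud(participant, task):
    return [f"{participant} verbalized thoughts during '{task}'"]  # placeholder notes

def administer_ume(participant, task):
    return 30  # placeholder difficulty estimate on the 100-point UME scale

def administer_supr_q(participant):
    return [4, 5, 4, 4, 3, 5, 4, 4]  # placeholder ratings for the 8 SUPR-Q items

def run_evaluation_session(participant, tasks):
    session = {"demographics": collect_demographics(participant), "tasks": []}
    for task in tasks:
        notes = perform_task_thinking_aloud(participant, task)   # think aloud per task
        session["tasks"].append({"task": task, "notes": notes,
                                 "ume": administer_ume(participant, task)})  # post-task
    session["supr_q"] = administer_supr_q(participant)           # post-study
    return session

print(run_evaluation_session("P01", ["T1: register account", "T2: export report"]))
```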

Ready to elevate your software’s user experience? Whether you’re a developer, designer, or usability enthusiast, putting these evaluation methods into practice can make a real difference. Try out one of these usability evaluation techniques or the given usability framework in your next project and share your experiences with our community! We’re eager to hear about your successes, challenges, and insights. Join the conversation by leaving a comment below. Together, let’s make software usability the foundation of exceptional user-centric design!

Sources

Krug, S., Bayle, E., Straiger, A. & Matcho, M. (2014). Don't make me think, revisited: A common sense approach to web usability. New Riders Press.

Incarnati, A. (2011). Usability of web-based software products: Usability evaluation methods and optimization techniques applied to web-based software.

Sauro, J., Lewis, J. R., Hartson, R. & Pyla, P. (2016). Standardized usability questionnaires: Empirical UX evaluation, data collection methods and techniques.

Lewis, J. R. (2018). The system usability scale: Past, present, and future. International Journal of Human-Computer Interaction, 34, 577-590. doi: 10.1080/10447318.2018.1455307

Assila, A., Oliveira, K. M. D. & Ezzedine, H. (2016). Standardized usability questionnaires: Features and quality focus (Vol. 6).

Kirakowski, J. & Cierlik, B. (1998). Measuring the usability of web sites. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 42, 424-428. doi: 10.1177/154193129804200405

Sauro, J. (2015). SUPR-Q: A comprehensive measure of the quality of the website user experience (Vol. 10).

Naeini, S. & Mostowfi, S. (2015). Using QUIS as a measurement tool for user satisfaction evaluation (case study: vending machine), 14-23. doi: 10.5923/j.ijis.20150501.03

Brooke, J. (1995). SUS: A quick and dirty usability scale. Usability Evaluation in Industry, 189.

Schrepp, M. (2019). User experience questionnaire handbook. Retrieved from www.ueq-online.org

McGee, M. (2003). Expected usability magnitude estimation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 47, 691-695. doi: 10.1177/154193120304700406

Sauro, J. & Dumas, J. (2009). Comparison of three one-question, post-task usability questionnaires. In Conference on Human Factors in Computing Systems – Proceedings (pp. 1599-1608). doi: 10.1145/1518701.1518946
