QA Dashboard

Reducing turnaround time by optimizing the quality assurance reviewers' decision-making tool
Timeline
Jan-Apr 2022
My role
UX Designer

Project Brief

Molecular You is a health-tech startup focusing on improving personal health through comprehensive analytics of biomarkers.
I worked with the company on an end-to-end project for a QA Dashboard.

Team

  • Product Manager
  • Director of Clinical Informatics (Product Owner)
  • UX Designer (myself)
  • Bioinformatician (QA)
  • Developer × 3

Deliverables

  • Service blueprint (Entire pipeline)
  • User flow
  • Prototypes (Lo-fi & Hi-fi)
  • Component library
  • Handover instructions

Collaborations

  • Interviews with staff members
  • Co-design session with cross-functional team
  • Usability testing
  • Technical & security design audit with developers
Project brief image

📦

DESIGN OUTCOMES

Fast and simple QA process

Overview page

Task-oriented table view that helps QA reviewers navigate the pipeline.
Quickly identify important tasks through search, filter, and sort.
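To make the interaction concrete, here is a minimal sketch of the search, filter, and sort logic behind such a table. It is illustrative only: the field names, statuses, and priority scheme are my assumptions for this write-up, not Molecular You's actual data model.

```typescript
// Hypothetical task shape and helper — field names and statuses are
// assumptions for illustration, not the real data model.
interface ReportTask {
  id: string;
  customerId: string;
  status: "flagged" | "passed" | "failed" | "hold";
  priority: number; // higher number = more urgent
  flaggedReason?: string;
  createdAt: Date;
}

// Mirror the overview table's behaviour: keep flagged reports, match an
// optional search term, and list the most urgent tasks first.
function findImportantTasks(tasks: ReportTask[], searchTerm = ""): ReportTask[] {
  const term = searchTerm.trim().toLowerCase();
  return tasks
    .filter((t) => t.status === "flagged")
    .filter(
      (t) =>
        term === "" ||
        t.customerId.toLowerCase().includes(term) ||
        (t.flaggedReason ?? "").toLowerCase().includes(term)
    )
    .sort(
      (a, b) =>
        b.priority - a.priority || a.createdAt.getTime() - b.createdAt.getTime()
    );
}
```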

Review module for efficiency

Make QA decisions fast with options to view details.
Navigate through tasks with one click.

🔬

STAGE 1: RESEARCH & NARROW DOWN THE SCOPE

CHALLENGE

01

Reduce the turnaround time for customer health reports

The major problem when I joined the team was the turnaround time (TAT) needed to generate health reports for each customer. Although I cannot disclose any numbers, for the company's business model to scale, the TAT needed to improve substantially.

Many factors influenced the TAT, both internal and external. As a cross-functional experience design team, we wanted to surface as many factors as possible, and then identify the one most suitable for improvement, weighing capacity, feasibility, and efficiency.

Identify the Cause for Delay

Pipeline interview

Teams interviewed

  • Customer success
  • Quality control
  • Quality assurance

Interview scope

  • Touch points (Workflow + tools used)
  • Dependencies
  • Communication
  • Blockers in performing tasks
  • Concerns

The company was run by a few teams, each handling a segment of the operation. Each team had a deep understanding of its own challenges, but there was no overview documentation of global problems or interconnected issues.

For the first two weeks, I interviewed 8 representative staff members across 3 teams along the entire operational pipeline (from a customer giving a blood sample at a test site to receiving a health report on their phone). The interviewees generously shared detailed steps of their daily work through screen sharing, documents, and diagrams.

Service map & Pain point map

The interviews revealed a complex web of problems.
To better analyze the research insights, I organized the documented information into a service map (learned from an article by Chloe Luxford). Each step was labeled with causes of delay and security concerns. The service map was used as a visual reference document and contributed to the company's future plans.

Service map (blurred on purpose)

However, as the image above shows, the documented problems were stretched out (literally) quite far, which made it hard to compare the causes of delay.

To better present the challenges to stakeholders and to decide on the first round of TAT improvement, I made a pain point map that exclusively illustrated how the problems were interconnected and highlighted the challenges and opportunities raised by the interviewees.

From here, we reached the decision that QA improvement would be the best-fitting project for the quarter.

Pain point map (blurred on purpose)

Reimagine the QA task flow

What is a typical QA process?

QA reviewers are responsible for checking health reports generated by the algorithm before pushing them to the customer end. In the past, QA reviewers had to go through every report to flag any potential mistakes and then publish the verified report to the customer, one report after another.

🧩

STAGE 2: CO-DESIGN FOR A FRESH START

CHALLENGE

02

Reduce the decision-making time for QA

At the beginning, the team tried to find out whether the current tool could be improved directly.

Problems

  • Manually identify problems in every report
  • Unneeded information adds cognitive load
  • Clicking and switching tabs for comparison
  • Extra steps needed to approve each report
  • Priority is communicated externally

Opportunities

  • Patterns in common algorithmic errors make room for automation scripts
  • QA should focus on making decisions but not identifying errors
  • Priority labeling can be internalized
Original QA tool (blurred on purpose)

After discussion, the team agreed to build a brand new manual review tool for QA, with scientists and engineers working on an automated script to detect suspicious data.
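The detection script itself was the scientists' and engineers' work, and its details are not something I can share. As a rough, hedged illustration of the idea, the sketch below flags a report when a hypothetical biomarker value has no reference range or falls outside it, so that only suspicious reports reach the manual review queue.

```typescript
// Purely illustrative: biomarker names, reference ranges, and the rule itself
// are hypothetical stand-ins, not the team's actual detection script.
interface BiomarkerResult {
  name: string;
  value: number;
}

interface ReferenceRange {
  min: number;
  max: number;
}

const REFERENCE_RANGES: Record<string, ReferenceRange> = {
  vitaminD: { min: 30, max: 100 },
  ferritin: { min: 20, max: 250 },
};

// Returns the reasons a report looks suspicious; an empty array means the
// report does not need to enter the manual QA queue.
function flagSuspiciousResults(results: BiomarkerResult[]): string[] {
  const reasons: string[] = [];
  for (const r of results) {
    const range = REFERENCE_RANGES[r.name];
    if (!range) {
      reasons.push(`${r.name}: no reference range defined`);
    } else if (r.value < range.min || r.value > range.max) {
      reasons.push(`${r.name}: ${r.value} is outside [${range.min}, ${range.max}]`);
    }
  }
  return reasons;
}
```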

Define high-level goals for the new QA tool

High-level goals

  • QA wants to only review suspicious reports so that they can make decisions for the next step
  • QA wants to compare health data efficiently
  • QA wants to have the option to use approved reports as references
  • QA and other team members want to look at a record of all reports

Lower-level goals

  • A component library that provides reusable assets for future integrated internal tools

Co-design exercise

Co-design whiteboard

I assisted the product manager in organizing a co-design exercise involving the project team members as well as the developers who built the health report algorithms. During that brainstorming session, everyone was prompted to talk about their favorite 'reviewing' and 'processing' digital products, and then make quick sketches of their ideal QA tool.

The major benefit of this exercise was that everyone brought a different set of considerations to the table. Design sketches surfaced tacit and vital hints for the design of this tool that verbal communication couldn't. A couple of features in non-designers' sketches were so inspiring and highly functional that they made it into the final design.

🖥

PROTOTYPING & TESTING

Prototype iterations

The prototype went through two major iterations.

V.1 - Quick mockup to test out the hypothesis

Features

  • Task tabs for switching between reviewing flagged reports & viewing history
  • Pass or fail reports in an expanded table row area
  • QA status to label the reports in the record
  • Filter, search, and sort in the right-side column

V.1 - Feedback

What's working:

  • The task tabs that separate user needs are well appreciated
  • Table view is a familiar viewing pattern
  • QA Status is needed
  • Important to have filter and search functions so reviewers can focus on certain types of reports

Areas of improvement

  • The expanded area might not be enough to display health information
  • Besides pass & fail, there are also intermediate situations such as "hold"
  • There's still no way for QA reviewers to communicate a case without relying on external tools, which is risky for security reasons

V.2 - Bring in new concepts and UI patterns

Learning from UX best practices in popular management tools, I expanded the QA Status to cover all the situational needs. Now each report can be labeled properly to reflect its status.
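As a sketch of how the expanded status set might be modeled: "Pass", "Fail", and "Hold" come from the project itself; the extra "Pending review" state and the transition rules below are assumptions added purely for illustration.

```typescript
// "Pass", "Fail", and "Hold" appear in the case study; "Pending review" and
// the transition rules are assumptions added for illustration.
type QAStatus = "Pending review" | "Pass" | "Fail" | "Hold";

// Which statuses a reviewer may move a report into from its current one.
const ALLOWED_TRANSITIONS: Record<QAStatus, QAStatus[]> = {
  "Pending review": ["Pass", "Fail", "Hold"],
  Pass: ["Fail", "Hold"],
  Fail: ["Pass", "Hold"],
  Hold: ["Pass", "Fail"],
};

function canChangeStatus(from: QAStatus, to: QAStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```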

Changes

  • Individual report page with columns for comparison
  • Expandable health info for quick glance & complete view
  • A column for basic info and setting status
  • Note area and assign function
  • New UI component set

V.2 - Feedback

What's working:

  • The columnized viewing area is great for making comparisons
  • Color-labeled notification area explains the flagging reason
  • Notes and assign function help with communication

Areas of improvement

  • Too much white space
  • There is no assurance when a user changes the QA Status
  • QA reviewers may want to change their decisions later
  • The assign function is helpful but might be overkill for the current communication needs, and is difficult to implement

What needs to be added:

  • Bug reporting feature when things go wrong (There will be errors from time to time as the algorithm will take a while to be refined)
Changing QA Status has no assurance

V.3 - Prioritize assurance and optimize details

With the feedback above, it was quite clear how to improve.

When the improvements were approved, I optimized the UI and made the design more space-conscious.

Notifies the user that their decision has been saved
Decision pool
Finalize decisions
Bug report on any page

Eventually, the design was approved by QA reviewers and stakeholders, and was presented to the company in a town hall meeting.

🛠

Development

Developer Handoff

After a design review meeting with the developer team, I prepared a handoff file with instructions on user flow and UI components.

Dev instructions
Component library

🎁

Takeaways

I acquired an immense amount of experience and knowledge in many areas from this 4-month project: user interviews, stakeholder meetings, co-design exercises, prototyping, usability testing, design systems, and developer handoff. Among them, I would like to emphasize one major takeaway:

Advocate for simplicity.

Focusing on what's important and distinguishing what's possible from what's preferred helped me stay on track when there was a plethora of complex problems.

Thank you for your time.
Feel free to contact me or view more projects below.