
QA Dashboard

Reducing turnaround time by optimizing the quality assurance reviewers' decision-making tool
Jan-Apr 2022
My role
UX Designer

Project Brief

Molecular You is a health-tech startup focusing on improving personal health through comprehensive analytics of biomarkers.
I worked with the company on a 4-month project with multiple impactful deliverables.


Team

  • Product Manager
  • Director of Clinical Informatics (Product Owner)
  • UX Designer (myself)
  • Bioinformatician (QA)
  • Developers × 3


Deliverables

  • Service blueprint (entire pipeline)
  • User flow
  • Prototypes (lo-fi & hi-fi)
  • Component library
  • Handover instructions


Methods

  • Interviews with staff members
  • Co-design session with cross-functional team
  • Usability testing
  • Technical & security design audit with developers
Project brief image



Fast and simple QA process

Overview page

A task-oriented table view gives QA reviewers oversight of the entire pipeline.
Quickly identify important tasks through search, filter, and sort.

Review module for efficiency

Make QA decisions quickly, with the option to view details.
Navigate between tasks in one click.





Reduce the turnaround time for customer health report

The primary challenge upon joining the team centered around the turnaround time (TAT) required for generating individual health reports for each customer. While I'm unable to share specific metrics, it was evident that substantial enhancements to the TAT were imperative for the company's scalable business model.

Many factors, both internal and external, influenced the TAT. As a cross-functional team of scientists, product, and design, our objective was to comprehensively identify these factors, then assess them carefully to pinpoint the most suitable one to improve, taking capacity, feasibility, and efficiency into account.

Identify Causes for Delay

Pipeline interview

Teams interviewed

  • Customer success
  • Quality control
  • Quality assurance

Interview scope

  • Workflow
  • Touchpoints
  • Dependencies
  • Communication
  • Blockers in performing task
  • Concerns

The company operated through various teams, each responsible for a specific segment of the operation. While these teams possessed in-depth knowledge about their respective challenges, there was a lack of comprehensive documentation addressing overarching issues or interconnected concerns.

During the initial two weeks, I conducted interviews with eight representative staff members spanning three teams along the entire operational pipeline – from customers providing blood samples at test sites to receiving health reports on their phones. These interview sessions involved sharing detailed insights into their daily tasks through screen sharing, documents, and diagrams.

Service map & Pain point map

The interviews revealed a complex web of problems.
To better analyze the research insights, I organized the documented information into a service map (a method I learned from an article by Chloe Luxford). Each step was labeled with causes for delay and security concerns. The service map served as a visual reference document and contributed to the company's future operational plans.

Service map (blurred on purpose)

However, as the image above shows, the documented problems were stretched out (literally) across a long canvas, making it hard to compare the causes of delay.

To better present the challenges to stakeholders and support a decision on the first round of TAT improvement, I made a pain point map. It illustrates, at a high level, how the problems are interconnected and highlights the challenges and opportunities raised by the interviewees.

From there, the team discussed feasibility and ROI, then mutually agreed that QA improvement was the best-fitting challenge to solve that quarter.

Pain point map (blurred on purpose)

Reimagine the QA task flow

What is a typical QA process?

Through the earlier interviews, the team and I already had an in-depth understanding of the QA process.
QA reviewers are responsible for checking health reports generated by the algorithm before pushing them to production. In the past, QA reviewers needed to go through every report to flag any potential mistakes and publish each verified report to the customer, one after another.


STAGE 2: CO-DESIGN



Streamline the decision-making process

At the beginning, the team explored whether the current tool could be improved directly.


Pain points:

  • Manually identifying problems in every report
  • Unneeded information adds cognitive load
  • Clicking and switching tabs for comparison
  • Extra steps needed to approve each report
  • Priority is communicated externally


Opportunities:

  • Patterns in common algorithmic errors make room for automation scripts
  • QA should focus on making decisions, not identifying errors
  • Priority labeling can be internalized
Original QA tool (blurred on purpose)

After discussion, the team agreed to build a brand new manual review tool for QA, with scientists and engineers working on an automated script to detect suspicious data.
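To make the division of labor concrete: the automation script pre-screens reports so that only suspicious ones reach manual QA. Below is a minimal sketch of what such a rule-based pre-screen might look like. All field names, biomarkers, and thresholds are illustrative assumptions, not the company's actual pipeline.

```python
# Hypothetical sketch of a rule-based pre-screening script, similar in
# spirit to the automation the scientists and engineers built.
# Biomarker names and plausibility ranges below are invented examples.

from dataclasses import dataclass


@dataclass
class Report:
    report_id: str
    biomarkers: dict  # biomarker name -> measured value


# Illustrative plausibility ranges; real checks would come from the
# clinical informatics team, not hard-coded constants.
PLAUSIBLE_RANGES = {
    "glucose_mmol_l": (2.0, 30.0),
    "crp_mg_l": (0.0, 200.0),
}


def flag_suspicious(report: Report) -> list:
    """Return the reasons a report needs manual QA review (empty = clean)."""
    reasons = []
    for name, (lo, hi) in PLAUSIBLE_RANGES.items():
        value = report.biomarkers.get(name)
        if value is None:
            reasons.append(f"missing biomarker: {name}")
        elif not lo <= value <= hi:
            reasons.append(f"{name}={value} outside plausible range [{lo}, {hi}]")
    return reasons
```

Reports that come back with no flags could skip straight to publication, leaving QA reviewers to spend their time only on the suspicious cases, which is exactly the split the team agreed on.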

Defining goals for the QA improvement project

High-level goals

  • QA wants to only review suspicious reports so that they can make decisions for the next step
  • QA wants to compare health data efficiently
  • QA wants to have the option to use approved reports for references
  • QA and other team members want to look at a record of all reports

Lower-level goals

  • A component library that provides reusable assets for future internal tools and features

Co-design exercise

Co-design whiteboard

Rather than immediately diving into wireframes, I aimed to gain a deeper understanding of the habits and preferences of the quality assurance (QA) experts, who specialize in scientific and quality-related responsibilities. Recognizing their expertise, I believed that patterns and choices in their daily tasks held potential for emphasis and improvement.

Thus, I facilitated a co-design exercise with the QAs. During this session, participants were encouraged to discuss their preferred digital products for "reviewing" and "processing," followed by creating quick sketches outlining their ideal QA tool. This approach yielded significant advantages, as it allowed diverse considerations to surface.

The design sketches revealed subtle yet crucial insights that verbal communication alone couldn't surface. Notably, several features from the sketches on an efficient review process directly influenced the final design.



Prototype iterations

The prototype went through two major iterations.

V.1 - Quick mockup to test out hypothesis

Ideas to verify

  • Task-oriented tabs that let QA switch between items needing attention and everything else
  • See report details in an expanded table row area and make QA decisions
  • The QA status tag provides a clear indication of a report's current status
  • Filter, search, and sort to locate a review or group of reviews

V.1 - Feedback

What's working:

  • The task tabs separating user needs are well appreciated
  • Table view is a familiar viewing pattern
  • QA Status is needed
  • Filter and search functions are important so reviewers can focus on certain types of reports

Areas of improvement

  • The expanded area might not be enough to display health information
  • Besides pass & fail, there are also intermediate situations such as "hold"
  • There's still no way for QA reviewers to communicate a case without relying on external tools, which is risky for security reasons

V.2 - Bring in new concepts and UI patterns

Learning from the best UX practices in popular management tools, I expanded the QA Status to fit all the situational needs. Now each report can be labeled properly to reflect its status.


  • Individual report page with columns for comparison
  • Expandable health info for quick glance & complete view
  • A column for basic info and setting status
  • Note area and assign function
  • New UI component set

V.2 - Feedback

What's working:

  • The columnized viewing area is great for making comparisons
  • Color-labeled notification area explains the flagging reason
  • Notes and the assign function help with communication

Areas of improvement

  • Too much white space
  • There is no confirmation when a user changes the QA Status
  • QA reviewers may want to change their decisions later
  • The assign function is helpful but might be overkill for the current communication needs, and is difficult to implement

What needs to be added:

  • Bug reporting feature when things go wrong (There will be errors from time to time as the algorithm will take a while to be refined)
Changing the QA Status gives no confirmation

V.3 - Prioritize assurance and optimize details

With the feedback above, the path to improvement was clear.

Once the improvements were approved, I optimized the UI and made the design more space-conscious.

Notifies the user that their decision has been saved
Decision pool
Finalize decisions
Bug report on any page

Eventually, the design was approved by QA reviewers and stakeholders, and was presented to the company in a town hall meeting.



Developer Handoff

After a design review meeting with the developer team, I prepared a handoff file with instructions on user flow and UI components.

Dev instructions
Component library



I gained an immense amount of experience and knowledge across many areas in this 4-month project: user interviews, stakeholder meetings, co-design exercises, prototyping, usability testing, design systems, and developer handoff. Among them, I would like to emphasize one major takeaway:

Advocate for simplicity.

Focusing on what's important and distinguishing what's possible from what's preferred helped me stay on track amid a plethora of complex problems.

Thank you for your time.
Feel free to contact me or view more projects below.