
Clario
Requirements Traceability & Release Quality


At Clario we moved quickly, but release quality depended on one thing: keeping requirements, testing, and reporting connected. When that chain broke, it became hard to tell what was truly done, what was still risky, and what was affecting service performance after go-live.

This project focused on bringing that connection back with structured requirements, clear traceability, stronger UAT, and reporting that leaders could rely on during release decisions.

Context & Problem

Product and delivery teams at Clario were shipping new capabilities regularly, but requirements, testing, and reporting did not always tell the same story. It was too easy for a requirement to be written in a document, implemented slightly differently, and then reported on in a way that did not clearly tie back to the original intent.

The aim of this work was to tighten that chain, from requirement to test case to report, so we could make better release decisions and reduce recurring SLA breaches.

Project Objectives

Clario is a global clinical trial endpoint technology company with 50+ years of experience, supporting pharmaceutical and biotech partners across 100+ countries to generate reliable evidence for drug approvals. Their work spans cardiac safety, medical imaging, eCOA, and respiratory solutions, where precision and traceability directly affect patient safety and regulatory outcomes. In that context, release quality is not just an internal metric; it underpins the certainty sponsors and regulators need when evaluating clinical evidence.

The goal was to reduce the gap between what stakeholders expected and what teams actually delivered. When requirements were vague or prioritisation was unclear, scope crept in late and releases carried more risk. By tightening how we captured and prioritised business requirements, we could focus on the highest-impact needs and avoid surprises later in the cycle. That meant turning loose asks into concrete, testable statements that delivery and QA could align on before build started.

Alongside better requirements, we needed stronger validation. We improved UAT coverage and connected each requirement to traceability and testing evidence, so release decisions were based on what had been verified, not just what had been built. When something failed a test or a metric drifted, we could trace it back to a specific requirement or gap instead of guessing. That made it easier to fix root causes and reduce recurring SLA issues.

Finally, reporting had to stay consistent with the same structure. We refined how metrics were calculated and presented so leaders could see accurate progress, track risks, and respond earlier when patterns suggested future SLA issues. When reporting and traceability shared the same language, stakeholders could have one conversation about release readiness instead of reconciling different views from different tools.

We also aimed to improve audit readiness, so that when sponsors or regulators asked what had been tested and why, we could produce a clear chain of evidence from requirement to test to result. Reducing post-release defects was another priority, since incidents in production not only hurt service levels but also eroded trust with external partners.

We wanted quality gates to be explicit and agreed up front, so teams knew exactly what had to pass before a release could go out. Cross-team alignment mattered too: business, delivery, QA, and reporting all needed the same definition of done and the same view of progress. By the end of the initiative, the objective was for every release to have a defensible story: what we set out to do, how we verified it, and what we measured to confirm it was working as intended.


My Role & Approach

I worked across business, delivery, QA, and reporting teams. The focus was on adding just enough structure to make work traceable and testable, without slowing teams down.

Stakeholder Workshops & Prioritisation

I ran workshops with business owners and delivery leads to surface the most important requirements and align on priorities. We translated loosely defined asks into concrete, testable statements that teams could execute against.

Gap Analysis & Traceability

I designed a gap analysis and traceability framework that connected requirements to design artefacts, user stories, and test cases. That made risks easier to see when scope changed and clarified which tests needed to pass before a release could be called “ready”.
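The core of a traceability framework like this is a requirement-keyed mapping to the artefacts that verify it, with a check for gaps before a release is called "ready". A minimal sketch of that idea in Python (the IDs, statuses, and helper names are hypothetical illustrations, not Clario's actual tooling):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                                   # e.g. "REQ-101" (hypothetical ID scheme)
    description: str
    stories: list = field(default_factory=list)   # linked user story IDs
    tests: dict = field(default_factory=dict)     # test case ID -> "pass" / "fail" / "not run"

def release_gaps(requirements):
    """Return the requirements that block a 'ready' call:
    anything with no test coverage, or with tests failing or not run."""
    gaps = []
    for req in requirements:
        if not req.tests:
            gaps.append((req.req_id, "no test coverage"))
        elif any(status != "pass" for status in req.tests.values()):
            gaps.append((req.req_id, "tests failing or not run"))
    return gaps

reqs = [
    Requirement("REQ-101", "Export audit trail", ["US-1"], {"TC-1": "pass"}),
    Requirement("REQ-102", "SLA alerting",       ["US-2"], {"TC-2": "fail"}),
    Requirement("REQ-103", "New report filter",  ["US-3"], {}),
]
print(release_gaps(reqs))
# -> [('REQ-102', 'tests failing or not run'), ('REQ-103', 'no test coverage')]
```

Because every artefact hangs off a requirement ID, a scope change shows up immediately as a requirement whose linked stories or tests no longer match.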

Power BI Reporting Improvements

I partnered with reporting and data teams to refine Power BI dashboards so they reflected the same structure as the requirements and traceability. Leaders could then review progress with fewer assumptions and respond earlier when patterns suggested recurring issues.
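When the dashboard and the traceability share the same requirement-keyed records, readiness metrics can be computed directly from that one structure instead of being reconciled across tools. A minimal sketch under that assumption (the field names and record shape are illustrative, not the actual Power BI model):

```python
def readiness_metrics(test_results):
    """Aggregate requirement-keyed test results into the headline figures a
    release-readiness dashboard would show. `test_results` is a list of
    dicts with 'req_id' and 'status' ('pass' / 'fail') -- an assumed shape."""
    total = len(test_results)
    passed = sum(1 for r in test_results if r["status"] == "pass")
    covered_reqs = {r["req_id"] for r in test_results}  # distinct requirements with evidence
    return {
        "uat_pass_rate": round(100 * passed / total, 1) if total else 0.0,
        "requirements_with_evidence": len(covered_reqs),
    }

results = [
    {"req_id": "REQ-101", "status": "pass"},
    {"req_id": "REQ-101", "status": "pass"},
    {"req_id": "REQ-102", "status": "fail"},
    {"req_id": "REQ-103", "status": "pass"},
]
print(readiness_metrics(results))
# -> {'uat_pass_rate': 75.0, 'requirements_with_evidence': 3}
```

The design point is that the metric definitions live in one place, so a leader reading "UAT pass rate" on the dashboard and a tester looking at the traceability matrix are counting the same records.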

UAT Leadership & Root Cause Analysis

For key releases, I led UAT planning and execution, making sure test scenarios covered the highest-risk requirements. After go-live, I used root cause analysis on recurring incidents to identify whether the fix needed to happen at the requirement level, in test design, or in the underlying process.

Outcomes & Impact

Connecting requirements, tests, and reporting in a consistent way gave teams a much clearer picture of release readiness and service performance.

- 15% improvement in delivery timelines
- 10% process efficiency improvement
- 20% increase in data accuracy
- 95% UAT pass rate for key releases
- 22% reduction in SLA breaches