At Sharp Healthcare I work across clinical, operations, and reporting teams to improve how patients move from first contact to being seen. This project focused on untangling the patient management workflow, tightening expectations for key clinical deliverables, and giving leaders clearer, faster visibility into what is happening on the floor.
The work combines process mapping, requirements, UAT, and reporting into one view of the patient journey so teams can see where time is lost, agree on what “good” looks like, and measure whether changes are actually working.
Patient intake at Sharp touches schedulers, front‑desk staff, clinicians, and reporting teams. Over time, extra steps, handoffs, and local workarounds had crept into the workflow. The result was slower intake, duplicated work, and inconsistent visibility into what was actually happening across units.
This project treated the intake workflow as one connected system. We wanted to know exactly where time and quality were being lost, agree on what “good” looks like for key clinical deliverables, and make sure those expectations flowed cleanly into UAT and reporting so improvements would stick.
The patient intake journey has a direct impact on timely access to care, so delays and unclear handoffs create real downstream cost for both patients and teams. At Sharp Healthcare, the project objectives were to reduce intake delays created by workflow bottlenecks and to make those delays easier to trace to specific steps in the journey.
Quality and consistency mattered just as much as speed. Requirements for clinical deliverables needed to be clearer, UAT needed stronger coverage based on real intake scenarios, and reporting needed standardisation across departments so leaders could track the same performance story and act quickly when metrics drifted.
I partnered with clinical leaders, operations, IT, UAT coordinators, and reporting teams. The work followed a simple pattern: make the current journey visible, agree on the target experience, then wire that target into requirements, tests, and metrics that everyone could see and rely on. That approach kept discussions grounded and made the final outcome easier to validate. It also helped stakeholders align on what mattered most, so decisions were based on the workflow itself rather than assumptions or one-off feedback.
I ran working sessions with frontline staff to map how patients actually moved from request to scheduled appointment and then through to arrival. The mapping exercise made the hidden friction visible, including where approvals stalled, where information had to be re-entered, and where ownership was unclear between teams. We also captured the small workarounds people used to keep things moving, because those were often the best clues for where the workflow was breaking. Once we understood the journey end to end, it became much easier to decide what needed to change first and what could wait.
Using the mapped journey, I worked with clinical stakeholders to define what each step needed to produce. This included the required fields, what a complete request looks like, and the rules that govern movement between steps. I made sure the expectations were written in a way that could be tested, not just understood, so IT and UAT could validate the same outcome. That clarity reduced rework and prevented the same ambiguity from showing up again during testing.
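To illustrate what "written in a way that can be tested" means in practice, here is a minimal sketch of a completeness rule for an intake request. The field names are hypothetical placeholders, not Sharp's actual data model; the point is that the same rule can be read by clinicians, implemented by IT, and asserted by UAT.

```python
# Hypothetical required fields for a complete intake request.
# These names are illustrative, not the real clinical data model.
REQUIRED_FIELDS = {"patient_id", "referral_source", "requested_service", "contact_phone"}

def missing_fields(request: dict) -> set:
    """Return the required fields that are absent or blank."""
    return {f for f in REQUIRED_FIELDS if not str(request.get(f, "")).strip()}

def is_complete(request: dict) -> bool:
    """A request may move to scheduling only when nothing is missing."""
    return not missing_fields(request)
```

Because the rule is executable, a defect report can say exactly which field failed rather than "the request looked incomplete."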
Instead of generic test cases, I supported UAT coordinators in building scenarios around real intake patterns such as new patients, returning patients, and urgent bookings. Each scenario included clear expected outcomes and evidence, which made defects easier to triage and reduced late surprises during sign-off. Because the scenarios reflected how work actually happens, the team spent less time arguing about whether a defect was real and more time fixing the right issues. This also improved the consistency of results across testers and clinical participants.
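The scenario-driven approach can be sketched as a small data-driven harness. The scenario names, inputs, and expected routes below are illustrative stand-ins; the real scenarios came from observed intake patterns, and the real system under test was the intake workflow, not this toy routing function.

```python
# Each UAT scenario pairs realistic inputs with an explicit expected outcome,
# so a failure names the scenario instead of leaving testers to interpret.
SCENARIOS = [
    {"name": "new patient",       "patient_known": False, "urgent": False,
     "expected_route": "registration_then_scheduling"},
    {"name": "returning patient", "patient_known": True,  "urgent": False,
     "expected_route": "direct_scheduling"},
    {"name": "urgent booking",    "patient_known": True,  "urgent": True,
     "expected_route": "expedited_scheduling"},
]

def route_request(patient_known: bool, urgent: bool) -> str:
    """Toy stand-in for the system under test: decide the intake route."""
    if urgent:
        return "expedited_scheduling"
    return "direct_scheduling" if patient_known else "registration_then_scheduling"

def failing_scenarios() -> list:
    """Run every scenario and return the names of any that mismatch."""
    return [s["name"] for s in SCENARIOS
            if route_request(s["patient_known"], s["urgent"]) != s["expected_route"]]
```

Keeping scenarios as data makes it cheap to add a new intake pattern without rewriting the harness, which is what kept coverage aligned with real work.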
I collaborated with reporting teams to define a focused set of intake and workflow KPIs and to standardise how they were calculated and shared. That reduced ad hoc reporting requests and gave managers a consistent view across departments, so performance changes were easier to spot and act on. With standard reporting in place, leaders could compare performance across units and identify patterns instead of chasing individual exceptions. That made it easier to sustain improvements after rollout.
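Standardising a KPI mostly means agreeing on one formula and one unit and then reusing it everywhere. The sketch below shows that idea for a hypothetical "intake delay" metric; the timestamps and the choice of median are illustrative assumptions, not the actual KPI definitions used at Sharp.

```python
from datetime import datetime
from statistics import median

def intake_delay_hours(requested_at: datetime, scheduled_at: datetime) -> float:
    """One shared definition: elapsed hours from first contact to a booked slot."""
    return (scheduled_at - requested_at).total_seconds() / 3600

def median_intake_delay(records) -> float:
    """Department-level KPI over (requested_at, scheduled_at) pairs.

    Every department calls this same function, so "intake delay" means
    the same thing on every dashboard.
    """
    return median(intake_delay_hours(r, s) for r, s in records)
```

With the calculation in one place, a drifting number points to a real performance change rather than a difference in how two departments counted.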
By treating the workflow, requirements, UAT, and reporting as one connected system, we were able to move from “we feel intake is slow” to measurable improvements. The impact was not just faster intake but also better quality, because requirements were clearer and UAT was grounded in real scenarios. As reporting became standard across departments, teams could validate progress consistently and act sooner when performance slipped.