Pandora - Specialty Drug Enrollment Automation
PRODUCT DESIGN
HEALTH CARE
AI

Role
UX research, workflow design,
UI design, user testing
Impact
82% faster processing, first-pass
approvals more than doubled, and
40% fewer clarification loops
Team
1 Designer, 1 PM, 2 ML Engineers,
2 Developers
Background and Problem ✨
Every day, nurses and coordinators help patients access life-saving specialty therapies, but the real delays come from paperwork rather than clinical complexity:
62% of enrollment forms fail on first submission
8–10 day average start delay for therapy
2–3 hours spent manually validating each form
Case Example:
A 56-year-old woman with glomerular disease was prescribed a specialty biologic. Her doctor submitted the prior authorization form via fax, as is common. But the handwritten diagnosis and lab results were hard to read, and key supporting documents arrived separately as a scanned lab report of more than 10 pages.
The nurse reviewing the submission spent hours reconciling values, clarifying incorrect data, and chasing missing details. After multiple clarification cycles with the provider, the form was approved, but it took nine days.
The delay had nothing to do with clinical decision-making. It was caused entirely by broken workflows and paperwork inefficiencies.
Goal 🚀
Design an intelligent, semi-automated system to:
Extract structured data from handwritten/scanned forms
Infer clinical data from unstructured lab reports and PDFs
Validate fields using drug-specific business rules
Give nurses full transparency and control over AI-assisted output
Reduce therapy delays without sacrificing clinical trust
Outcome
[Chart: before vs. after comparison of Processing Time, Approval Rate, and Clarification Loops]
• 82% reduction in processing time (from 2.5 hours to 26 minutes) - patients get approved faster and start therapy sooner.
• More than doubled first-pass approval rates (up to 207% of baseline) - reducing rework and giving nurses more confidence in submissions.
• 40% fewer clarification loops - streamlining communication and allowing clinical teams to focus more on patient care.
End-to-End Workflow
Turning Paperwork into Digital Data
Step 1:
Forms sent via fax or scan are automatically converted to digital data using OCR. Key fields are extracted, and low-confidence values are flagged for nurse review and correction.
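In code, this triage step might look like the sketch below (field names and the 0.85 review threshold are illustrative assumptions, not the production values):

```python
# Route OCR output to auto-accept or nurse review based on confidence.
REVIEW_THRESHOLD = 0.85  # assumed cutoff for "low-confidence"

def triage_ocr_fields(ocr_output):
    """Split OCR-extracted fields into accepted values and flags for review."""
    accepted, needs_review = {}, []
    for field, (value, confidence) in ocr_output.items():
        if confidence >= REVIEW_THRESHOLD:
            accepted[field] = value
        else:
            needs_review.append({"field": field, "value": value,
                                 "confidence": confidence})
    return accepted, needs_review

accepted, flagged = triage_ocr_fields({
    "patient_name": ("Jane Doe", 0.98),
    "diagnosis_code": ("N04.9", 0.62),  # handwritten, low confidence
})
```

Only the flagged fields reach the nurse, which keeps the review queue small.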
Validate Against Database
Step 2:
Extracted data is cross-checked with the medical database to spot discrepancies. Users resolve any mismatches directly in the interface.
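A minimal sketch of the cross-check, assuming both the form extract and the database record are plain field-to-value mappings (field names are illustrative):

```python
def cross_check(extracted, db_record):
    """Compare extracted form values against the medical database record.

    Returns the mismatches for the nurse to resolve in the interface;
    fields absent from the database are left for manual handling.
    """
    mismatches = []
    for field, value in extracted.items():
        db_value = db_record.get(field)
        if db_value is not None and db_value != value:
            mismatches.append({"field": field, "form": value,
                               "database": db_value})
    return mismatches
```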
Pull Clinical Info with AI (LLM + RAG)
Step 3:
The system scans related lab reports and clinical notes to auto-fill missing fields using AI. Each suggestion includes a source link and confidence level. Users review, edit, or approve.
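Each suggestion shown to the nurse can be modeled as a small record carrying the value, its source link, and a confidence score; the structure and review actions below are a sketch, not the production schema:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    field: str
    value: str
    source: str        # link back to the document the value came from
    confidence: float  # model confidence surfaced to the nurse

def review(suggestion, action, edited_value=None):
    """Nurse decision on one AI-filled field: approve, edit, or reject."""
    if action == "approve":
        return suggestion.value
    if action == "edit":
        return edited_value
    return None  # rejected: leave the field for manual entry
```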
Apply Business Rules & Approval Submission
Step 4:
All data is automatically checked against drug- and payer-specific rules. Any violations are flagged for review. Once verified, the completed application is immediately packaged and submitted for approval. Users are notified of approval status or any outstanding issues.
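The rule check can be sketched as a list of named predicates run against the application. The example rule (biopsy documented, proteinuria above a threshold) mirrors the kind of payer criteria described in this case study, but the field names and cutoff values are assumptions:

```python
# Drug-specific rules as (name, predicate) pairs; values are illustrative.
RULES = [
    ("biopsy_documented", lambda app: app.get("biopsy") is True),
    ("proteinuria_above_threshold", lambda app: app.get("upcr", 0) > 1.5),
]

def check_rules(application):
    """Return the names of failed rules; an empty list means ready to submit."""
    return [name for name, rule in RULES if not rule(application)]
```

Any non-empty result is surfaced as flagged violations; an empty result lets the application proceed to packaging and submission.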
Key UX Challenges

Problem: Full-Form Validation Overload
• Validating all 40+ fields in one click overwhelmed users.
• Too much output made errors hard to identify and fix.
• Debugging mistakes was tedious, increasing stress and decision fatigue.
Solution: Validate Section by Section
• The form was divided into logical sections: Patient Info, Diagnosis, Labs, Insurance, etc.
• Nurses validated and resolved issues one section at a time. This process became more manageable, transparent, and user-controlled.
• Reduced time spent validating data.
• Fewer clarification loops (cut by 40%).
UX Problem: Building Trust in AI
Nurses were asked to approve AI-inferred patient data, so the main UX challenge was building trust, which required visibility and user control, not just technical accuracy.
Solution:
• Side-by-Side View: Nurses could compare human data with AI-suggested values at a glance.
• Status Chips: Visual indicators showed matches, mismatches, and low-confidence fields.
• Hover Tooltips: Extra context from data sources appeared on hover, increasing transparency.
• Inline Actions: Nurses could accept, edit, or clarify each field directly, not just approve everything at once.
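The chip choice for each field reduces to a small decision, sketched here with an assumed 0.85 confidence threshold:

```python
def field_status(form_value, ai_value, confidence, threshold=0.85):
    """Pick the status chip for one field in the side-by-side view."""
    if confidence < threshold:
        return "low-confidence"   # needs a closer look regardless of value
    return "match" if form_value == ai_value else "mismatch"
```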



Iterations
Clinical Info Input Without PDF Viewer
• Users entered or validated extracted clinical data (e.g., diagnosis, labs) without access to the source document alongside the form.
• Trade-off: Simple interface, but lacked context—users couldn’t easily verify values, which affected trust and introduced cognitive strain.
Separate Business Rules Validation
• Business logic (e.g., biopsy present + lab value > threshold) was validated in a dedicated screen after data entry.
• Trade-off: Made rule outcomes (pass/fail) explicit and easier to resolve, but disconnected from the form, requiring extra navigation and breaking user flow.
Key UX Insights
• Source visibility and real-time rule feedback are critical.
• Users need everything in one place—data, documents, and rule validation—to make faster, more confident decisions with less back-and-forth.
Designing the Solution
I designed a modular, section-wise interface that simplifies validation and builds trust in AI-generated values. Each screen is focused on clarity, control, and reducing cognitive load for clinical reviewers.
- Selecting the specialty drug and its corresponding enrollment form to begin the validation process.


- Validating each section by reviewing OCR-extracted values and comparing them with database records for accuracy.


- Handling missing data: when a value is absent from the database, users can manually add it and flag it for tracking.

- For the Clinical Information section, after OCR validation each field is compared against the patient's clinical documents. Users can view the supporting source via arrows in the PDF viewer or by clicking the icon next to each field.

- Each specialty drug has a set of business rules that must pass before approval. Whether a rule passes or fails, users can check its source via arrows in the PDF viewer or by clicking the icon next to each rule.


Execution & Integration
All backend configurations that power Pandora’s AI-driven data extraction were set up using an internal application called Pi Design Studio. This tool enables data engineers to define field-level logic for every data point that needs to be extracted or inferred, such as diagnosis, lab values, or insurance details, by leveraging Retrieval-Augmented Generation (RAG) and Large Language Model (LLM) pipelines.
Using Pi Design Studio, engineers can:
Map each field to specific document sections (like lab reports or biopsy PDFs).
Define how RAG retrieves relevant context for each field using embeddings and prompt templates.
Set up LLM prompts to infer values based on retrieved context.
Configure validation rules and acceptable field formats.
Handle logic for multiple form types, so that the system can adapt dynamically to different specialty drug enrollment templates.
All of this logic is authored in structured Excel templates within Pi Design Studio, making the setup flexible and scalable.
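For illustration, one field's logic might look like the sketch below, shown as a Python dict rather than the structured Excel template Pi Design Studio actually uses; all keys, queries, and formats are assumptions:

```python
import re

# Hypothetical field-level logic for a single extracted value.
FIELD_CONFIG = {
    "egfr": {
        "source_sections": ["lab_report"],           # where RAG should look
        "retrieval_query": "most recent eGFR value with date",
        "prompt_template": ("From the context, extract the latest eGFR "
                            "value in mL/min/1.73m2. Context: {context}"),
        "format": r"^\d{1,3}(\.\d)?$",               # validation rule
    },
}

def valid_format(field, value):
    """Check an inferred value against the field's configured format rule."""
    return re.fullmatch(FIELD_CONFIG[field]["format"], value) is not None
```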
Once these logic sets are authored, Precog, the automation layer, comes into play.
Precog hosts and manages fnFM bots such as the Clinical Information Extractor and the Tarpeyo Form Extractor. These bots execute the logic built in Pi Design Studio. On each bot run:
The clinical documents are pulled from source systems.
Logic authored in Pi Design Studio is executed field by field.
Outputs are pushed into Pandora for human-in-the-loop review and submission.
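The bot run itself reduces to a loop over the authored field configs; `extract_fn` below stands in for the RAG + LLM pipeline and is an assumption of this sketch:

```python
def run_bot(documents, field_configs, extract_fn):
    """Sketch of a Precog-style bot run.

    Executes the authored logic one field at a time and returns the
    results that would be pushed into Pandora for human-in-the-loop
    review and submission.
    """
    results = {}
    for field, config in field_configs.items():
        results[field] = extract_fn(field, config, documents)
    return results
```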
Testing & Validation
We conducted pilot testing with clinical operations teams using real patient scenarios to gather metrics on processing time, error rates, and approval success.
This was followed by two rounds of usability testing with nurses and coordinators to observe how they handled field validation, mismatches, and clarification loops.
Success was measured through quantitative metrics (validation time, rule failures, override rates) and qualitative feedback (user trust, clarity, cognitive load).
Key learnings included strong preference for section-wise validation, high trust in source-linked AI suggestions, and the importance of visibility and control in gaining user confidence.
Made with Love, Framer,
and a lot of Ctrl+Z