Last week OpenAI published a field study with Penda Health, a primary-care network in Nairobi, exploring an "AI clinical copilot." The system, built on GPT-4o, quietly double-checks diagnoses and treatments in real time, surfacing suggestions only when it spots a possible error. Penda reports fewer diagnostic and treatment mistakes when the copilot is active and, just as important, solid adoption among clinicians who helped design the workflow.
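In spirit, that alert-only pattern can be sketched in a few lines. This is a toy rules-based stand-in, not the actual Penda system: the real copilot uses GPT-4o rather than a lookup table, and every name below (`Encounter`, `review_encounter`, the red-flag rules) is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    # Hypothetical fields; the real Penda data model is not public.
    symptoms: list
    diagnosis: str
    treatment: str

def review_encounter(enc: Encounter, red_flags: dict) -> list:
    """Return alerts only when a possible mismatch is found; stay silent otherwise."""
    alerts = []
    for symptom in enc.symptoms:
        expected = red_flags.get(symptom)
        if expected and enc.diagnosis not in expected:
            alerts.append(f"Symptom '{symptom}' is not explained by '{enc.diagnosis}'")
    return alerts  # empty list means no interruption to the clinician

# Toy rule table standing in for the model's clinical judgment.
RED_FLAGS = {"stiff neck": {"meningitis"}}

quiet = review_encounter(Encounter(["cough"], "bronchitis", "rest"), RED_FLAGS)
flagged = review_encounter(Encounter(["stiff neck"], "tension headache", "ibuprofen"), RED_FLAGS)
```

The design choice worth noting is that silence is the default: the copilot only enters the clinician's field of view when it has something specific to flag, which is the framing Penda credits for adoption.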
We can treat those numbers as early signals, not final verdicts. But they raise a set of questions that feel worth asking—especially for teams, like ours at heva, that serve busy healthcare providers in emerging markets.
What might an always-present second opinion unlock?
Cognitive breathing room
Many clinicians in lower-resource settings see dozens of cases a day across nearly every body system. An assistant that scans for overlooked labs or contradictory symptoms could function as a mental guardrail, freeing scarce attention for patient rapport and education.
Faster guideline diffusion
Local protocols change; international best practices evolve. A live copilot that references current guidance—adapted to regional epidemiology—might shorten the lag between new evidence and real-world use.
A gentler entry point for AI
The Penda pilot emphasised that the clinician stays in charge. Alerts are suggestions, not commands. That framing may lower resistance among professionals who worry that algorithms will one day dictate care.