Economist Impact’s inaugural AI in Health Summit was convened to discuss real-world AI applications in healthcare. Refreshingly, it wasn’t all about flashy demonstrations; instead, over 60 speakers from diverse backgrounds (healthcare professionals, pharma, medtech, start-ups, regulators and NGOs) focused attention on how AI can realistically shape healthcare, both right now and in the near future.
Key themes & insights:
- Where AI is delivering impact now
Sessions consistently emphasized that AI should be applied where it works “not as a magic wand,” but as targeted, validated applications that fit real workflows. As Natasha Loder, Health Policy Editor at The Economist and meeting facilitator, put it: “AI should be explainable, equitable, and embedded…not just experimental. Not AI for AI’s sake.” Despite the note of caution, it is clear that AI is already delivering substantial impact, with the potential to touch every aspect of the healthcare ecosystem.
Accelerating drug discovery & development
Life-sciences speakers, including Danielle Belgrave (GSK), James Weatherall (AstraZeneca) and Selim Aydin (Novartis), outlined how AI is already speeding up research and development (R&D) with impressive results, from target identification and molecular design to pre-clinical filtering and trial optimisation. One speaker referred to a “30% decrease in time from concept to Phase 2 trial using AI tools in early phase clinical development”. The adoption of digital twins, serving as synthetic control arms or simulating treatment responses, was also discussed as a way to enhance trial efficiency.2
The need for clarity on exactly what is meant by acceleration was highlighted: is it speed, success probability, or cost? Human oversight remains essential to avoid pitfalls such as false positives and overfitting.
Diagnostics: Real gains
Near-term wins are very clear in the field of patient diagnostics, with just four of many examples highlighted below:
- Teledermatology: United Kingdom (UK) evaluations of Skin Analytics’ DERM have shown pathway benefits across National Health Service (NHS) sites.3
- Breast cancer screening: In a UK prospective study, Mia AI detected 12–13% more cancers than routine practice and modelled a 30% workload reduction.4
- Endoscopy: Meta-analyses show higher adenoma detection and lower miss rates for AI-assisted systems, though performance varies by workflow and human factors.5
- Stroke diagnosis: England’s 107 stroke units now use Brainomix 360 Stroke AI. NHS analysis indicates this could triple recovery rates through faster, more accurate treatment decisions.6
Clinical and comms note-taking: helpful, but not “low-risk”
Generative tools now draft notes, summarise case histories and propose care-plan language, and similar tools are entering healthcare communications to summarise literature, draft copy and localise assets. These are often labelled “low-risk” because they only generate words. But words are decisions. Risks include automation bias, hallucinated attributions, subtle drift beyond on-label language, prompt or metadata leakage, and error propagation into records and downstream materials. Done well, the upside is real: a GOSH-led multicentre evaluation reported 23.5% more patient-facing time and shorter appointments after AI scribe deployment.7 The key is to pair efficiency with verification by design, not as an afterthought.
Equity-focused AI and digital tools in low-resource settings
AI and other digital tools can succeed best in lower-income settings by focusing on real barriers, such as training, logistics and access to healthcare, rather than copying high-income solutions.8 In low-resource settings, AI-assisted neonatal-resuscitation training has improved learning outcomes (e.g., in Türkiye).9 In sub-Saharan Africa, augmented-reality (AR)-based training tools are being used to upskill midwives and nurses on neonatal resuscitation skills to directly address the region’s high neonatal mortality rate.10 Discussions emphasized how equity should not be an afterthought but a design principle that maximises return on investment, especially when resources are limited.
If the evidence is this strong, why isn’t AI everywhere? The summit’s validated examples show what AI can deliver; the bottlenecks are how information moves, how it is governed, and how quickly rules adapt to real-world use.
- What barriers must we overcome?
Keeping patients at the centre, in data, and policy
Right now, policy and public debate are about 80% focused on data protection and only 20% on the benefits. Discussions highlighted that “while fears about data misuse are common, actual instances of harm are rare”. And in a world where sharing personal life details and events on social media is the norm, should policy and regulations shift the balance to focus on patient benefits? Patients increasingly express a desire for faster answers, fewer hand-offs, and greater certainty, especially in rare diseases, where data scarcity limits innovation.11 Current data-sharing regulations are evolving fast, with stringent guidelines and highly fragmented approaches across geographies, states and even institutions. Global data-sharing under robust governance could reveal patterns, accelerate diagnoses, and unlock personalised treatments for every patient. The World Health Organization’s (WHO) AI guidance supports safety, transparency, and equity as core design features, not barriers.12
Lawrence Tallon (CEO, Medicines and Healthcare products Regulatory Agency, MHRA) warned that the UK “can’t afford to wait years” for AI regulation to catch up, showing promising signs of government support to ensure healthcare isn’t left behind. Agile, proportionate oversight is needed now to support innovation and public trust. The UK is already moving: new post-market surveillance rules for medical devices (from June 2025) require ongoing monitoring, especially relevant for adaptive AI13. A new National Commission on AI in Healthcare, chaired by Prof. Denniston, is also set to deliver recommendations within a year. “Let's not be too late with AI in healthcare as usual; there haven’t been radical changes for 20 years, the time is now”. Yet rules and data alone won’t put safe AI into daily practice. Adoption lives (or dies) in the workflow.
Trust is essential for adoption
Even with strong evidence and clearer rules, AI adoption stalls without clinician trust and fit-for-purpose workflows. Hospitals may pilot dozens of tools, but only a few achieve real uptake. As one speaker noted: “We have 100 AI tools in the hospital but only use five.”
AI tools that scale in clinical practice are those that solve real problems, fit within workflows, and have visible human oversight. Automation bias is a real issue, and clear human oversight is imperative to avoid it. Clinicians don’t want black boxes; they want clarity, control, and context. This is where translating complex tools into everyday clinical value matters most. Successful programmes track trust alongside accuracy and cost, ensuring systems are explainable, transparent, and relevant. Discussions also highlighted the importance of providing protected time for users to trial the tools.
Throughout the day, audience clinicians raised repeated concerns about AI replacing doctors. But radiology offers a useful case in point. While early predictions claimed AI would replace radiologists, the opposite has happened: AI now supports diagnostic work, and demand for radiology professionals continues to rise. The often-repeated line “AI won’t replace doctors; doctors who use AI will replace those who don’t” has become a cliché. The more useful takeaway is: teams that use human-led, accountable AI make care safer and faster; teams that don’t, won’t. That is the competitive edge worth measuring. For healthcare communicators, the same rule applies: speed matters, but auditable integrity is what earns trust.
Conclusion
First, there is evidence that AI already works in specific settings. Second, policy and data still shape what scales. Third, adoption lags because trust and workflow fit are hard. In healthcare communications, these dynamics are amplified: if clinicians are cautious about AI and stakeholders are cautious about pharma, the burden on communicators is integrity by design.
- Design with users; keep humans in charge. Clinicians and healthcare communicators should drive AI tool design: from identifying a clear problem to solve, to designing workflows and championing deployment.
- Treat AI as a living product: continuous oversight, post-market surveillance, and shared learning. Monitor outcomes and drift, update models responsibly, and share lessons across sites, vendors, and regulators under strong governance.
- Make patient impact the north star and communicate it clearly. Translate efficiency and cost metrics into service metrics people feel: earlier diagnoses, faster time-to-treatment, reclaimed clinical time, fewer hand-offs. Communications teams play a vital role by aligning narratives with what matters to patients, to clinicians, and to systems. We need to turn complexity into clarity, accuracy and trust.
These priorities apply across sectors, including healthcare communications: design with users, monitor in the wild, and tell clear, outcome-focused stories. Ultimately, effective healthcare communication should focus on integrity, foster trust, and drive adoption while reinforcing the outcome that matters most: better, faster, more equitable care for patients.
The HCA Foresight Committee is dedicated to anticipating emerging trends in healthcare communications and supporting our profession in preparing for the future. A key focus is artificial intelligence (AI), an area of rapid and transformative evolution. In this article, we invited Elena Garonna, Business Director at Avalere Health, to share her key takeaways from the AI in Health Summit, held on 1 October 2025 at the Royal College of Physicians, London (alongside Future of Health Europe1).
References
- Economist Impact. Future of Health Europe. Economist Impact Events. Accessed October 24, 2025. https://events.economist.com/future-of-health-europe.
- European Medicines Agency. Reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle. Published September 9, 2024. Accessed October 24, 2025. https://www.ema.europa.eu/system/files/documents/scientific-guideline/reflection-paper-use-artificial-intelligence-ai-medicinal-product-lifecycle-en.pdf.
- NHS England. AI based skin lesion analysis technology. NHS England. Accessed October 24, 2025. https://www.england.nhs.uk/elective-care/best-practice-solutions/ai-based-skin-lesion-analysis-technology/
- Ng AY, Oberije CJG, Ambrózay É, et al. Prospective implementation of AI-assisted screen reading to improve early detection of breast cancer. Nat Med. 2023;29(12):3044-3049. doi:10.1038/s41591-023-02625-9
- Jin XF, Ma HY, Shi JW, Cai JT. Efficacy of artificial intelligence in reducing miss rates of GI adenomas, polyps, and sessile serrated lesions: a meta-analysis of randomized controlled trials. Gastrointest Endosc. 2024;99(5):667-675.e1. doi:10.1016/j.gie.2024.01.004
- Bowie K. AI brain scans can triple stroke recovery rates, NHS analysis finds. BMJ. Published September 2025. Accessed October 24, 2025. https://www.bmj.com/content/390/bmj.r1856.
- Great Ormond Street Hospital. GOSH pilots AI tool to give clinicians more quality-time with patients. Published November 11, 2024. Accessed October 24, 2025. https://www.gosh.nhs.uk/news/gosh-pilots-ai-tool-to-give-clinicians-more-quality-time-with-patients/
- World Health Organization. Improving maternal and newborn health and survival and reducing stillbirth: progress report 2023. Published May 9, 2023. Accessed October 24, 2025. https://www.who.int/publications/i/item/9789240073678
- Molu B. Improving nursing students’ learning outcomes in neonatal resuscitation: a quasi-experimental study comparing AI-assisted care plan learning with traditional instruction. J Eval Clin Pract. 2025;31(1):e14286. doi:10.1111/jep.14286
- Khora. Neonatal resuscitation – VR simulation. Khora | Expanding Reality. Published November 4, 2024. Accessed October 24, 2025. https://khora.com/project/neonatal-resuscitation-vr-simulation-2/
- Looi MK. What does your patient think about AI in the NHS? BMJ. 2025;389:r391. Accessed October 24, 2025. https://www.bmj.com/content/389/bmj.r391
- World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Published June 28, 2021. Accessed October 24, 2025. https://iris.who.int/handle/10665/341996
- Medicines and Healthcare products Regulatory Agency. MHRA guidance on new Medical Devices Post-Market Surveillance requirements. GOV.UK. Published January 15, 2025. Accessed October 24, 2025. https://www.gov.uk/government/news/mhra-guidance-on-new-medical-devices-post-market-surveillance-requirements