Shared Experience: When AI works, but people don’t: the hidden behavioural risks leaders miss

General information
Title: When AI works, but people don’t: the hidden behavioural risks leaders miss
Date: Friday 27th February 2026 - 12:30 - 1:30pm GMT
Virtual Event
Presenter: Sonya Cullington - Cyberpsychologist | Helping the public sector govern AI with behavioural integrity
What is this session about?
AI-enabled systems often fail in ways leaders don't anticipate, not because the technology doesn't work, but because people don't feel safe enough to use it confidently. This talk explores why technically successful, compliant AI systems can still generate quiet behavioural failures that undermine safety and trust.
Drawing on cyberpsychology and 25 years in NHS communications and digital transformation, we'll examine what happens after AI goes live, when quiet behavioural failures begin to show. Leaders often mistake uptake for trust, consent for confidence, and silence for success. In practice, low trust shows up behaviourally: selective disclosure, workarounds, or automation bias, where people stop exercising judgement because following the system feels professionally safer than challenging it. In these moments, the human ceases to be a safeguard and becomes a rubber stamp.
Rather than offering a checklist, the talk equips leaders, and those responsible for explaining, launching or standing behind AI-enabled services, with better questions: if 90% of users are compliant but 0% are providing feedback, do you have a successful rollout or a ticking time bomb?
Biography - Sonya Cullington

Sonya Cullington (MA, MSc) is a cyberpsychologist and policy advisor specialising in the "behavioural integrity" of digital systems: the gap between what technology can do and what it actually does to people when deployed at scale.
As Chair of the Patient and Public Advocacy Steering Committee at UK Digital Health and Care (UKDHC), she is a leading voice in ensuring digital health innovation remains ethical, inclusive, and grounded in public trust.
Sonya is the creator of the AI Judgment Framework, a cyberpsychological methodology designed for high-stakes environments like the NHS and public policy. Unlike traditional AI training, which focuses on prompt engineering, her framework focuses on cognitive oversight, teaching professionals to separate fluency (how well AI speaks) from veracity (how accurate it is), resist automation bias, and maintain epistemic vigilance in AI-augmented decision-making.
She advises public sector leaders on mitigating behavioural risks such as automation bias and digital exclusion, ensuring that emerging technology enhances human judgment rather than eroding it.
£0.00 per person (ex. VAT)

You will need to register on this site in order to book any HCA events. Click here to register or login as a member or non-member.
Please note: this event is for MEMBERS ONLY. If you are a member, you need to log in before booking.

If your organisation hasn't renewed their membership this year then they need to do so here.

Virtual Meeting

Fri 27 Feb 2026

12:30 PM - 1:30 PM GMT
(1:30 PM - 2:30 PM CET)

19 places remaining