By Jamie Whitburn, Consultant – Stirred
AI is transforming healthcare communications, streamlining processes and generating real-time insights that enhance efficiency and creativity.
However, the rush to integrate AI into healthcare communications comes with risks, particularly around misinformation and bias. Mitigating these risks effectively means balancing AI with human creativity, making this a vital moment for thoughtful integration.
The Promise and Pitfalls of AI
With the ability to analyse vast amounts of data, predict trends, and personalise content, AI-driven tools promise to help healthcare brands stay relevant and responsive, improving audience interactions. These technologies can elevate ideation, speed up asset production, and optimise campaigns in real time.
Yet the advantages come with challenges.
AI models reflect the quality of their training data. In healthcare, flawed data can lead to misinformation with serious consequences. Misleading medical claims, outdated information, or misinterpretations of clinical guidelines can erode trust and lead to ethical concerns.
For example, Google’s Med-PaLM 2, an AI model developed to answer medical questions, was flagged for inaccuracies compared to responses from human healthcare professionals (SOURCE & SOURCE). While the AI model performed well in structured settings, real-world application proved challenging, demonstrating the need for human verification.
The Risks of Misinformation and Bias
Rigorous fact-checking and expert oversight are key pillars of traditional content development, and it’s crucial that they are also applied to all AI-generated content. Bypassing this step risks producing inaccurate or misleading information, potentially endangering patient safety. The risk extends beyond written content to AI-powered tools used in healthcare decision-making, where accuracy and human judgement are critical.
For example, the NHS has trialled Wysa, an AI-powered chatbot, to provide interim mental health support while patients wait for appointments with a human therapist (SOURCE & SOURCE). Yet a YouGov survey found only 1 in 5 UK adults would prefer an AI chatbot over a human therapist, with ongoing concerns about misdiagnoses, limited emotional intelligence, and the inability to provide human support. This highlights the continued need for expert oversight in AI-driven healthcare solutions.
Bias is another major concern. AI learns from existing datasets, many of which reflect historical biases. This can reinforce systemic disparities in health equity messaging and patient engagement rather than addressing them. For example, a study by UCL researchers found that AI models for liver disease screening were twice as likely to miss disease in women compared to men – reflecting existing inequalities in care (SOURCE). Similarly, AI-driven healthcare resource allocation has been found to systematically disadvantage Black patients. A widely used algorithm incorrectly concluded that Black patients were healthier than equally sick white patients because it relied on healthcare spending as a proxy for health status – perpetuating cycles of underfunding and neglect (SOURCE).
Bias extends to AI-generated health content. A 2024 UNESCO study found that large language models (LLMs) often reinforce outdated gender stereotypes, associating men with leadership roles while assigning women to undervalued professions (SOURCE). Without careful oversight, these biases can shape AI-generated healthcare narratives, subtly influencing public perceptions and decision-making.
Even companies that initially gained momentum in AI-driven healthcare have struggled to translate potential into practical solutions. Babylon Health’s chatbot, designed to provide diagnostic guidance, was criticised for producing misleading recommendations (SOURCE). Despite initial enthusiasm from investors and policymakers, Babylon collapsed in 2023 due to financial struggles and its failure to deliver on its AI healthcare promises. This underscores the gap between AI’s theoretical potential and real-world reliability.
How Stirred is Approaching AI: A Thoughtful Integration
At Stirred, we believe that AI should be used to challenge our thinking and help us draw inspiration from beyond the healthcare sector. By leveraging AI to explore new creative directions and uncover unexpected insights, we can enhance our approach to healthcare communications. However, we recognise that AI is not a replacement for human expertise.
Rather than letting AI dictate our creative decisions, we use it to augment our strategic thinking. Beyond practical applications such as bringing concepts to life, aiding messaging, storyboarding, or generating music and provisional voiceovers, AI is helping us analyse global trends, identify cultural moments and, ultimately, unearth those essential nuggets of insight.
That said, while AI is already enhancing our processes, final decisions will always require a human filter. This ensures that our work remains credible, compelling, and emotionally resonant – all qualities that AI alone cannot replicate.
By balancing AI’s capabilities with human insight, we can craft high-impact campaigns that shift behaviours, attitudes, and health outcomes, without sacrificing authenticity or integrity. The future of AI in healthcare communications isn’t about replacing human creativity but enhancing it.
And in a sector where trust is everything, maintaining this balance will be key.