Rebuilding Trust: Navigating Misinformation and AI in Australian and US Healthcare
Public trust in doctors and health services is declining significantly in both Australia and the US. The Australian Ethical Health Alliance (AEHA) symposium in May 2025 addressed this issue, emphasizing the urgent need for health professionals to respond proactively.
Panellists at the AEHA symposium highlighted that trust is the ethical currency of medicine, and its erosion complicates various health practices.
Misinformation and Shifting Beliefs
The erosion of trust profoundly impacts crucial health practices, including vaccination and shared decision-making. The core challenge extends beyond the sheer volume of poor information; it's fundamentally about how people now form their beliefs. Patients frequently arrive for consultations armed with social media content, AI-generated summaries, and TikTok anecdotes, often expecting these sources to be weighed equally against established clinical advice and guidelines.
The Clinical Dilemma
Clinicians are increasingly confronted with a dilemma where robust evidence clashes with AI-inflated doubt. A common scenario involves parents requesting immunization exemptions based on AI-curated claims that lack any scientific evidence. Doctors are bound by an obligation to protect public health while simultaneously maintaining therapeutic relationships. Directly dismissing patient-presented information is often ineffective and can further erode trust. The AEHA discussion identified misinformation as a significant public health detriment, underscoring the need for strategies that go beyond simple education focused solely on facts.
Epistemic Injustice and Information Partnerships
Misinformation is closely linked to epistemic injustice, a phenomenon where certain patient groups, such as First Nations and culturally and linguistically diverse communities, are historically less likely to be believed. For these groups, a paternalistic "trust me, I'm the expert" approach can inadvertently reinforce existing historical mistrust. To counter this, an "information partnership" model was proposed, encouraging clinicians and patients to collaboratively explore health sources together. This approach aims to build trust through shared understanding and mutual respect.
AI's Role: Amplifier or Ally?
Participants at the symposium acknowledged that AI is already an active component of the misinformation ecosystem: it can generate plausible but inaccurate health advice and often divorces information from crucial clinical context. The consensus was clear: the solution lies in governing AI, not rejecting it outright. Clinicians must take an active role in stewarding AI systems. This involves setting clear rules, insisting on diverse training data to minimize bias, requiring auditability for transparency, and integrating robust ethical oversight into AI development and deployment. It also means establishing technical governance with clinical input, akin to privacy impact assessments, specifically to monitor AI-assisted content and patient-facing tools for misinformation.
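As a purely illustrative sketch of the auditability idea above, the snippet below shows what a minimal audit trail for AI-assisted, patient-facing content might look like: each generated snippet is logged with its source model and timestamp, and phrases on a clinician-maintained watch-list are flagged for human review. All names here (AuditRecord, review_content, FLAGGED_TERMS) are hypothetical, invented for this sketch; they do not come from the symposium or any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical watch-list of phrases a clinical governance group might
# want surfaced for review before content reaches patients.
FLAGGED_TERMS = {"miracle cure", "detox", "no side effects"}

@dataclass
class AuditRecord:
    """One audit-log entry for a piece of AI-generated content."""
    model_id: str
    content: str
    timestamp: str
    flags: list = field(default_factory=list)

def review_content(model_id: str, content: str) -> AuditRecord:
    """Log AI-generated content and flag watch-list phrases for clinical review."""
    flags = [term for term in FLAGGED_TERMS if term in content.lower()]
    return AuditRecord(
        model_id=model_id,
        content=content,
        timestamp=datetime.now(timezone.utc).isoformat(),
        flags=flags,
    )

record = review_content("summary-bot-v1", "This miracle cure has no side effects.")
print(sorted(record.flags))  # both watch-list phrases are flagged
```

A real deployment would of course involve far more than keyword matching, but the design choice the panel pointed toward is the same: the audit record, not the model, is the unit that clinical governance reviews.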
Beyond Evidence: The Power of Community Trust
An illustrative example cited during the symposium: MMR vaccine uptake did not rise significantly even after measles-related child deaths overseas. This unexpected finding suggests that community trust, identity, and personal stories can outweigh population data and scientific evidence. The panel strongly advocated for involving trusted community leaders and using narrative approaches within primary care settings as effective strategies to build and reinforce community trust.
Ethical Listening and Clinician Actions
Being believed when providing health advice is paramount. Nina Roxburgh emphasized the need to share credibility with patients and people with lived experience, rather than relying solely on institutional voices. This requires trauma-informed communication, proactively offering interpreters, co-designing health materials with affected communities, and respectful documentation practices.
Pragmatic Actions for Clinicians:
- Correct misinformation respectfully, focusing on evidence and understanding patient concerns.
- Invite patients to review sources collaboratively, fostering a partnership approach.
- Use consistent, organization-backed messages to present a united front against misinformation.
- Escalate recurrent misinformation trends to health service leadership to advocate for system-level solutions.
- Advocate for clinical governance for AI and digital tools, not just IT sign-off, specifically to monitor for misinformation and ensure ethical deployment.
The symposium concluded with a powerful message: trust is performative. It must be continually demonstrated through ongoing transparency, candid acknowledgment of uncertainty, and clearly showing how decisions are made. This proactive approach is essential to prevent the vacuum from being filled by AI-generated "certainty," which often lacks nuance and context.