AI Integration in Private Communications Raises Consent and Privacy Concerns

Introduction to AI in Communications

AI systems such as Microsoft Copilot and Hinge's AI "Convo Starters" are projected to become increasingly integrated into private communications by 2026, raising questions about user consent and how artificial intelligence handles personal data.

Concerns Regarding User Consent and Awareness

Abhinav Dhall, an associate professor in Monash University's Department of Data Science and Artificial Intelligence, indicates that many individuals are not explicitly aware when AI systems are processing their data.

"AI may analyze conversations involving multiple users, even if only one user has activated or noticed the AI feature."

This can create ambiguity about the scope of the consent given and how communication data is used.

Examples of AI Integration

Meta AI

Meta's AI offers chat summarization capabilities on platforms including Instagram, Messenger, and WhatsApp. When this feature is employed on Messenger, unread messages within a chat are shared with Meta AI to generate a summary, which is then made visible only to the requesting user.

A Meta spokesperson confirmed that private messages exchanged with friends and family are not used for training the company's AIs. However, messages "willingly shared by someone in the chat" are considered available for use. Meta did not provide further comments when asked about its approach to consent.

Google AI Inbox/Gemini

Google has begun rolling out "AI Inbox," a feature that reworks the traditional email inbox around AI-generated summaries and prioritized to-do items, powered by its Gemini large language model (LLM). The system is designed to act as a "personal, proactive inbox assistant" within Gmail, helping to draft, summarize, and triage emails. As a result, private emails sent to users of Gemini in Gmail may be processed by Google's AI.

A Google spokesperson stated that when Workspace Gemini features, including those in Gmail, are used, personal content is not utilized to train the company's foundational models.

Privacy Implications for Senders

Dana McKay, associate dean of Interaction, Technology and Information at RMIT's School of Computing Technologies, suggests that senders should be required to provide consent before their private emails are processed by a recipient's AI. This recommendation is based on concerns that emails may contain confidential information, such as medical or commercially sensitive data.

Dhall points out that, irrespective of whether content is used for training, the sender's words are still processed.

"AI systems possess capabilities beyond human readers, such as summarizing, extracting details, and storing patterns, which can alter privacy expectations."

Challenges with Consent Mechanisms

McKay believes that many individuals do not possess sufficient understanding of AI to provide informed consent. She also notes the prevalence of "click-wrapped consent" in many tech services, where users cannot selectively consent to terms and conditions that are often extensive and complex.

Dhall adds that as AI features become embedded in popular platforms, users may find themselves agreeing to terms by default, frequently without a clear moment of informed choice. He emphasizes the need to:

"redefine how consent is obtained, making it more inclusive and understandable, rather than embedding it within lengthy documents or enabling it by default."

Legal Perspective in Australia

James Patto, founder of Melbourne-based law firm Scildan Legal, clarifies that Australia does not have an AI-specific regulatory framework. Instead, existing laws, primarily privacy law alongside consumer and discrimination legislation, are applied to AI tools. Patto explains that:

"Australian privacy law is centered on purpose, with explicit consent being a requirement only in specific, narrow situations."

Using Gmail's AI Inbox as an example, Patto notes that this involves a mailbox owner enabling a feature to manage their own inbox, rather than Google using emails for its own purposes. The primary compliance question is therefore whether the organization enabling the tool can lawfully use it. He likens the tool to spam filtering, where senders are not typically expected to give explicit consent each time an email is scanned for security or inbox management.

However, Patto states that consent becomes more pertinent when email content is used for a materially different purpose, such as training or improving AI models. He anticipates that such AI features will eventually become standard, and that what constitutes a "reasonable expectation" under privacy law will evolve over time.

McKay expresses concern that the US, a major developer of these technologies, lacks a unified privacy law. She also highlights that countries like China have distinct cultural conceptions of privacy. These differences imply that Australian legal and cultural assumptions may not align with the frameworks governing these technologies.