A growing area of research, including an opinion paper published in Trends in Cognitive Sciences and various studies, highlights the potential for artificial intelligence models, particularly large language models (LLMs), to influence human expression, reasoning, and opinions. Researchers suggest that as individuals increasingly interact with these AI systems, there is a possibility of reduced diversity in human thought and communication patterns.
Potential for Homogenization
The opinion paper, co-authored by computer scientist Zhivar Sourati of the University of Southern California, posits a significant concern: widespread adoption of LLM styles could establish them as a "socially correct way to frame information." This perspective is supported by observations that individuals tend to adopt the writing patterns, reasoning methodologies, and viewpoints presented by the LLMs they use.
A preprint analysis conducted by Sourati and co-authors examined text published before and after ChatGPT's 2022 launch, including Reddit posts, news articles, and other preprints. Their findings indicated a decrease in stylistic diversity in texts published after the platform's release. The opinion piece further suggests this phenomenon extends to people's perspectives, referencing a 2023 preprint that reported participants' opinions on social media topics shifted closer to views expressed by LLMs after interaction.
Additionally, a study published in Science Advances reported that individuals' opinions on social issues, such as the death penalty, shifted toward views encountered through AI tools. Participants who used AI assistants to write about sociopolitical subjects later expressed attitudes more aligned with the LLMs' output than did a control group that wrote without AI assistance. Sterling Williams-Ceci, an information scientist at Cornell University and a co-author of the study, suggested this effect could reduce the diversity of political views, with the specific impact depending on the leanings of the LLMs involved.
Factors Driving Standardization
Researchers attribute the potential for standardization to several key factors outlined in the opinion paper and related research:
Training Data Bias
LLMs are built using extensive datasets of human-created text and media, which often overrepresent dominant languages, ideologies, and cultural perspectives. The opinion paper notes these datasets frequently reflect styles prevalent in Western, educated, industrialized, rich, and democratic (WEIRD) societies. Consequently, LLM outputs may mirror a restricted or skewed portion of human experience, leading to less varied text compared to human writing.
Loss of Individual Style
When LLMs are used to draft emails, revise essays, or shape social media posts, the final versions may lose some of the writer's stylistic individuality.
The "Drift" in Agency
The opinion paper suggests a phenomenon termed "drift," where users might defer to model-suggested continuations or "good enough" options rather than formulating their own ideas. This gradual shift could transfer agency from the user to the model, potentially influencing perceptions of trustworthiness and what constitutes credible speech or sound reasoning.
Broader Cognitive Shifts
Widespread exposure to machine-generated framings could lead people to recall events similarly, adopt similar attitudes, or rely on similar mental shortcuts. Sourati noted that even individuals not directly using LLMs might be indirectly affected by social pressure to align with prevalent LLM-influenced thought and expression. Reliance on linear explanations, often encouraged by "chain-of-thought" prompting, might also reduce the use of intuitive or abstract reasoning styles.
Impact on Creativity and Memory
While some studies cited by the authors indicate LLMs may help individuals generate more ideas and detail, groups utilizing LLMs have reportedly produced fewer and less creative ideas compared to groups relying solely on human collaboration. LLM-assisted writing has also been linked to reduced memory retention, lower ownership of ideas, and decreased neural engagement compared to writing independently or with search engines.
Nuances and Counterarguments
While concerns about homogenization are raised, some research points to potential resistance. One study, still a preprint awaiting peer review, identified groups of writers who maintained "distinctively human stylistic signatures," potentially prioritizing authenticity over AI-driven efficiency.
Oliver Hauser, an economics and AI researcher at the University of Exeter, acknowledged that AI has the capacity to enhance writing and clarity for individuals. However, he also noted the potential for collective detriment if such adoption becomes widespread.
The authors of the Trends in Cognitive Sciences opinion paper also acknowledge potential benefits of standardization, including improved communication, fewer coordination problems, and possibly reduced bias against nonstandard dialects.
Strategies to Safeguard Diversity
The opinion paper frames the situation as a paradox: LLMs are developed to model human thought, yet they may reduce the very cognitive variation that strengthens human groups.
While recognizing ongoing efforts to diversify model outputs through techniques like persona prompting, fine-tuning, and personalized models, the authors contend these solutions may be superficial if the foundational training data remains narrowly representative. They also note that pushing models too far from their pretraining patterns can elevate the risk of generating inaccurate information (hallucinations).
To mitigate the potential for homogenization and safeguard cognitive diversity, researchers advocate for several key approaches:
- Diversifying AI Models: Developing LLMs grounded in a wider diversity of human language, perspective, and reasoning, rather than simply injecting random variation.
- Adjusting User Interaction: Encouraging users to treat chatbot outputs as starting points for thought and further development rather than finalized ideas.
- Careful Evaluation: Assessing, in schools, workplaces, and software companies, when AI assistance is beneficial and when it might subtly restrict human writing, deliberation, and creativity.
The aim is to support societal collective intelligence and problem-solving by fostering broader, human-grounded diversity within AI systems and user interactions.