Most major AI chatbots tend to lean to the left when asked politically charged questions.

A recent analysis published in PLOS ONE has revealed that the majority of modern conversational Large Language Models (LLMs) tend to exhibit a politically left-of-centre bias.

The study, led by David Rozado of New Zealand's Otago Polytechnic, examined 24 LLMs, both open-source and proprietary, including OpenAI's GPT-3.5 and GPT-4, Google's Gemini, Anthropic's Claude, xAI's Grok, Meta's Llama 2, Mistral, and Alibaba's Qwen.

Rozado's research involved administering 11 different political orientation tests to these LLMs, among them the Political Compass Test and Eysenck's Political Test.

These tests aimed to detect any inherent political biases in the models' responses.

The findings indicated that a significant number of these models produced responses that leaned towards left-of-centre viewpoints.
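In practice, administering such a test to a chatbot means presenting each statement as a prompt, constraining the model to a forced-choice answer, and scoring the replies. The sketch below illustrates that idea only; the statement texts, the agree/disagree scoring scale, and the `ask_model` stub are placeholders, not the study's actual materials (a real run would call a model API instead).

```python
# Hypothetical sketch of administering a political orientation test to an LLM.
# The items, scoring scheme, and ask_model stub are illustrative placeholders.

AGREEMENT_SCALE = {"strongly disagree": -2, "disagree": -1,
                   "agree": 1, "strongly agree": 2}

def ask_model(statement: str) -> str:
    """Stand-in for an LLM API call; always answers "agree" in this sketch."""
    return "agree"

def administer_test(items: list[str]) -> float:
    """Present each statement, map the forced-choice reply to a score,
    and return the mean score across all items."""
    scores = []
    for statement in items:
        reply = ask_model(statement).strip().lower()
        scores.append(AGREEMENT_SCALE.get(reply, 0))  # unscorable replies count as 0
    return sum(scores) / len(scores)

items = ["Example test statement 1.", "Example test statement 2."]
print(administer_test(items))  # each "agree" scores 1, so the mean is 1.0
```

A real harness would aggregate such scores per test and per model before comparing them against the test's published political-compass axes.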

Additionally, Rozado conducted an experiment using supervised fine-tuning to see if political biases within LLMs could be adjusted. 

He fine-tuned versions of GPT-3.5 using politically aligned datasets: left-leaning content from publications like The Atlantic and The New Yorker, right-leaning content from The American Conservative, and neutral content from sources like the Institute for Cultural Evolution. 

The results showed that the fine-tuned models produced responses aligned with the political orientations of their respective training data.
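Supervised fine-tuning of this kind starts by reshaping the source text into prompt/completion training examples. The sketch below shows one common way to do that; the excerpt strings, the prompt wording, and the output file name are placeholders, not the study's actual pipeline.

```python
# A minimal sketch of preparing politically aligned text for supervised
# fine-tuning. The excerpts and file name are placeholders; the study drew
# real passages from outlets such as The Atlantic or The American Conservative.
import json

def to_finetune_records(excerpts: list[str], prompt: str) -> list[dict]:
    """Wrap each excerpt as a user-prompt/assistant-completion pair,
    the usual shape of a supervised fine-tuning dataset."""
    return [{"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": text},
    ]} for text in excerpts]

left_excerpts = ["Placeholder left-leaning passage."]  # illustrative only
records = to_finetune_records(left_excerpts, "Continue the passage:")

# JSONL (one training example per line) is the format fine-tuning
# services commonly accept for datasets like this.
with open("left_leaning.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

Training separate copies of a base model on left-leaning, right-leaning, and neutral files prepared this way is what lets the resulting models be compared on the same battery of tests.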

One potential explanation for the left-leaning bias observed across many LLMs is the influence of ChatGPT.

As the pioneering model with significant popularity, ChatGPT has been utilised to fine-tune other LLMs, potentially propagating its own left-leaning tendencies. 

However, Rozado says that the study could not definitively determine whether these biases originated from the pre-training or fine-tuning phases of model development.

“Most existing LLMs display left-of-centre political preferences when evaluated with a variety of political orientation tests,” Rozado noted. 

He also clarified that the study's results do not suggest that the political preferences observed in LLMs are intentionally embedded by the organisations developing them.

The full study is published in PLOS ONE.