Summary
Synopsis and Commentary
A recent study, "Large Language Models Reflect the Ideology of their Creators", reveals that large language models such as ChatGPT and Google’s Gemini exhibit inherent ideological biases reflecting their creators' viewpoints.
Using a novel methodology, the researchers analysed the LLMs' descriptions of historical figures to surface these biases, finding significant differences between Western and non-Western models, and even variations depending on the language of the prompt.
These findings challenge the notion of AI neutrality, highlighting how training data and refinement processes shape an LLM's output. The study's authors emphasise the need for transparency about these biases and propose alternative regulatory approaches to mitigate the risks of political manipulation and societal polarisation. The implications for information access and the future of AI regulation are significant.