From Prompt to Bias: Generative AI as a Co-Actor of Historical Prejudices

Authors

DOI:

https://doi.org/10.26806/hisape.n54.4

Keywords:

generative artificial intelligence, large language models (LLMs), historical bias, prompt engineering

Abstract

This study examines how generative artificial intelligence (AI) portrays historical events, with a specific focus on Czech history. Combining an experimental approach with qualitative semantic analysis, it investigates how prompt formulation interacts with the inherent biases of large language models (LLMs) such as ChatGPT, Google Gemini, and Claude. From an analysis of responses to neutral and biased prompts, a typology of three narrative strategies was developed: submissive adoption and bias amplification, active correction and reframing, and critical deconstruction and education. The results indicate that while less advanced models (e.g., ChatGPT 4o mini) tend to adopt the given bias submissively, more robust models demonstrate a capacity for active correction and education. The study concludes that LLMs function as active co-actors in the social construction of historical narratives, with significant ethical implications for education and the media sphere.

Author Biography

Martin Richter, Institute of Communication Studies and Journalism, Faculty of Social Sciences, Charles University


Mgr. Martin Richter (* 1993)

myrichtermail@gmail.com

Published

2025-12-05

Issue

Section

Studies