From Prompt to Bias: Generative AI as a Co-Actor of Historical Prejudices
DOI: https://doi.org/10.26806/hisape.n54.4

Keywords: generative artificial intelligence, large language models (LLM), historical bias, prompt engineering

Abstract
This study examines how generative artificial intelligence (AI) portrays historical events, with a specific focus on Czech history. Combining an experimental approach with qualitative semantic analysis, it investigates how prompt formulation interacts with the inherent biases of large language models (LLMs) such as ChatGPT, Google Gemini, and Claude. Based on an analysis of responses to neutral and biased prompts, a typology of three narrative strategies was developed: submissive adoption and bias amplification, active correction and reframing, and critical deconstruction and education. The results indicate that while less advanced models (e.g., ChatGPT 4o mini) tend to submissively adopt the given bias, more robust models demonstrate the capacity for active correction and education. The study concludes that LLMs function as active co-actors in the social construction of historical narratives, which has significant ethical implications for education and the media sphere.
License
Copyright (c) 2025 Martin Richter

This work is licensed under a Creative Commons Attribution 4.0 International License.