Fundamental Problems of AI Regarding Objectivity
DOI: https://doi.org/10.47363/JAICC/ICAICC/2025(4)36

Keywords: AI Regarding Objectivity, Fundamental Problems

Abstract
My test subject was the AI ChatGPT 4.0, which I engaged in a 90-minute discussion. The topic I had chosen was bird flu. First, I
asked the AI general questions about this topic and received correct information.
Then I went into more depth and asked about human deaths. A small number had occurred in China. Then I asked
about deaths in Europe and North America; there had been one case in the USA.
Then I asked about the danger of bird flu. The AI classified it as dangerous. I asked about possible vaccinations, which the AI
affirmed as necessary. Then I cornered the AI and asked it about the discrepancy between the minimal number of deaths on the one hand and
its classification of the disease as dangerous on the other.
The AI went back and forth, but I did not let up. It then conceded that its assessment was probably exaggerated, and it even apologized
for it. The reason it gave was that it depends on the content published on the Internet, and the majority of that content matched its original
judgment.
Conclusion: AI gives more weight to mainstream opinions on the Internet than to minority opinions, because the mainstream is
overrepresented. It agrees with the mainstream opinions and mentions the minority opinions only after a long discussion in which
the user insists on objectivity. Only at the end of the discussion did it bring itself to write a balanced account and judge bird flu
to be harmless to humans. This is a fundamental problem with AI: it prefers to follow the mainstream at the expense of objectivity.
License
Copyright (c) 2025 Journal of Artificial Intelligence & Cloud Computing

This work is licensed under a Creative Commons Attribution 4.0 International License.