Fundamental Problems of AI Regarding Objectivity

Authors

DOI:

https://doi.org/10.47363/JAICC/ICAICC/2025(4)36

Keywords:

AI Regarding Objectivity, Fundamental Problems

Abstract

My test subject was the AI ChatGPT 4.0, which I engaged in a 90-minute discussion. The topic I had chosen was bird flu. First, I
asked the AI general questions about this topic and received correct information.
Then I went into more depth and asked about human deaths. These had occurred in small numbers in China. Then I asked
about deaths in Europe and North America; there had been one case in the USA.
Next I asked about the danger of bird flu. The AI classified it as dangerous. I asked about possible vaccinations, which the AI
affirmed as necessary. Then I cornered the AI and asked it about the discrepancy between the minimal number of deaths on the one hand and
its classification of the disease as dangerous on the other.
The AI went back and forth, but I did not let up. It then conceded that its assessment was probably exaggerated, and it even apologized
for it. The reason it gave was that it depends on the content published on the Internet, and the majority of that content reflected the assessment
it had given.


Conclusion: AI gives more weight to mainstream opinions on the Internet than to minority opinions, because the mainstream is
overrepresented. It agrees with the mainstream opinions and mentions the minority opinions only after a long discussion in which
the user insists on objectivity. Only at the end of the discussion did it bring itself to write a balanced account and judge bird flu
to be harmless to humans. This is a fundamental problem with AI: it prefers to follow the mainstream at the expense of objectivity.

Published

2025-05-10

How to Cite

Fundamental Problems of AI Regarding Objectivity. (2025). Journal of Artificial Intelligence & Cloud Computing, 4(3), 1-1. https://doi.org/10.47363/JAICC/ICAICC/2025(4)36
