AI Hallucination belongs to the categories Artificial Intelligence and Digital Transformation.
AI hallucination describes a phenomenon that can occur when using artificial intelligence (AI): the AI "hallucinates" and outputs information that is not true, effectively inventing facts. This happens especially when an AI works with sparse or contradictory data. Even modern language models such as ChatGPT are affected.
An example: you ask an AI for the date of birth of a famous person. Instead of answering correctly, the AI gives an incorrect date that it has "invented" because it linked certain data incorrectly or found no precise information. To a layperson, the answer seems credible, but it is completely false.
The risk of AI hallucination exists wherever AI is used – whether in business reports, when creating summaries, or in customer communication. Therefore, it is important to double-check AI responses and not trust them blindly.
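The double-checking advice above can be sketched in code. This is a minimal, hypothetical illustration: `ai_answer` is a stand-in for a real model call, and the trusted-source lookup is a placeholder for whatever authoritative reference an organization actually uses.

```python
# Hypothetical sketch: verify an AI-generated fact against a trusted source
# before relying on it. ai_answer simulates a model that hallucinates.

# Placeholder for an authoritative reference (assumption, not a real dataset)
TRUSTED_BIRTH_DATES = {
    "Ada Lovelace": "1815-12-10",
    "Alan Turing": "1912-06-23",
}

def ai_answer(person: str) -> str:
    """Stand-in for a language model; here it invents a plausible-looking date."""
    return "1816-01-01"  # hallucinated: looks credible, but is wrong

def verify(person: str, answer: str) -> bool:
    """Accept the AI's answer only if it matches the trusted reference."""
    expected = TRUSTED_BIRTH_DATES.get(person)
    return expected is not None and expected == answer

answer = ai_answer("Ada Lovelace")
print(verify("Ada Lovelace", answer))  # False: the invented date fails the check
```

The point is not the specific code but the workflow: an AI's output is treated as a claim to be checked, not as a fact.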
Understanding the concept of AI hallucination helps decision-makers approach AI critically and responsibly, and verify information before acting on it.













