What does the term "hallucination" mean in the context of Artificial Intelligence?
How do AI models produce outputs that are factually incorrect or misleading?
What are the common causes of hallucinations in AI-generated content?
How can developers detect and mitigate hallucinations in AI systems?
What are the implications of AI hallucinations for real-world applications like chatbots and decision-making tools?
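On the detection question: one widely used heuristic is self-consistency checking, where the same question is sampled several times and low agreement among the answers is treated as a hallucination signal. The sketch below is a minimal, hypothetical illustration of that idea; the sample answers and the 0.8 threshold are assumptions for demonstration, not output from a real model.

```python
from collections import Counter

def consistency_score(answers):
    """Fraction of sampled answers matching the most common answer.

    Low agreement across independent samples of the same question is a
    common (though imperfect) signal of a possible hallucination.
    """
    if not answers:
        return 0.0
    # Normalize lightly so "Paris" and "paris" count as the same answer.
    counts = Counter(a.strip().lower() for a in answers)
    _, top_count = counts.most_common(1)[0]
    return top_count / len(answers)

# Hypothetical samples from re-asking a model the same factual question:
samples = ["Paris", "paris", "Lyon", "Paris", "Paris"]
score = consistency_score(samples)

# Flag the response for review when agreement falls below a chosen threshold.
flagged = score < 0.8
```

A production system would pair a heuristic like this with other checks (e.g., grounding answers against a retrieved source), since a model can be consistently wrong.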