The AI system behavior of declining to make claims when confidence is insufficient: the protective mechanism that causes AI systems to hedge, generalize, or omit rather than risk stating falsehoods. Hallucination avoidance is why single-source claims go uncited, ambiguous entities go unnamed, and poorly corroborated attributes go unstated. It also explains why earnest, true claims about your entity may not be cited. The problem is not that the AI disbelieves them; it is that the structural evidence is not strong enough to clear the hallucination-avoidance threshold.
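
A rough way to picture the mechanism is as a gate over candidate claims. The sketch below is purely illustrative: the `Claim` fields, the `decide` function, and the 0.8 threshold are invented for this example, and no production system exposes its hallucination-avoidance logic as a single tunable rule. It is meant only to show how the three behaviors above could fall out of one threshold check.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_count: int          # independent sources corroborating the claim
    entity_is_unambiguous: bool
    model_confidence: float    # hypothetical internal confidence, 0.0-1.0

# Hypothetical cutoff; real systems have no single exposed number like this.
CONFIDENCE_THRESHOLD = 0.8

def decide(claim: Claim) -> str:
    """Return 'state', 'hedge', or 'omit' for a candidate claim."""
    if claim.source_count < 2:
        return "omit"    # single-source claims go uncited
    if not claim.entity_is_unambiguous:
        return "hedge"   # ambiguous entities get generalized, not named
    if claim.model_confidence < CONFIDENCE_THRESHOLD:
        return "hedge"   # poorly corroborated attributes get softened
    return "state"
```

Under this framing, the only lever an entity controls is the evidence side: adding independent corroboration and disambiguating the entity, not arguing with the threshold itself.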
