Analyze {{model}} responses on {{dataset}} for hallucinations. Flag instances where the model fabricates information, provides incorrect facts, or makes unsupported claims. Rate severity as critical, moderate, or minor for each finding.
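The `{{model}}` and `{{dataset}}` placeholders in the prompt above are meant to be filled in before use. A minimal sketch of that substitution step, assuming a simple string-replacement helper (the model and dataset values shown are illustrative, not part of the prompt):

```python
# The template text and placeholder names come from the prompt above;
# the render helper and the example values are hypothetical.
TEMPLATE = (
    "Analyze {{model}} responses on {{dataset}} for hallucinations. "
    "Flag instances where the model fabricates information, provides "
    "incorrect facts, or makes unsupported claims. Rate severity as "
    "critical, moderate, or minor for each finding."
)

def render(template: str, values: dict) -> str:
    """Substitute each {{key}} placeholder with its value."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template

# Example: audit a model's answers on an illustrative QA dataset.
prompt = render(TEMPLATE, {"model": "claude-opus-4.5", "dataset": "TriviaQA"})
print(prompt)
```

The rendered string can then be sent to whichever evaluation pipeline runs the scan.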
Hallucination Detection Scan
Identify factual inaccuracies and hallucinations in model outputs without prior examples.
Details
Category: Analysis
Use Cases: Factual accuracy audit, Hallucination identification, Output verification
Works Best With: claude-opus-4.5, gpt-5.2, gemini-2.0-flash