Hallucination Detection Scan

Identify factual inaccuracies and hallucinations in model outputs without prior examples.

Analyze {{model}} responses on {{dataset}} for hallucinations. Flag instances where the model fabricates information, provides incorrect facts, or makes unsupported claims. Rate severity as critical, moderate, or minor for each finding.
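The template above uses double-brace placeholders that must be filled before the prompt is sent to an evaluation model. A minimal sketch of that substitution step, assuming plain string replacement (the model and dataset names below are illustrative examples, not part of the original prompt):

```python
# Hypothetical helper: fill the {{model}} and {{dataset}} placeholders
# in the hallucination-scan template with concrete values.
TEMPLATE = (
    "Analyze {{model}} responses on {{dataset}} for hallucinations. "
    "Flag instances where the model fabricates information, provides "
    "incorrect facts, or makes unsupported claims. Rate severity as "
    "critical, moderate, or minor for each finding."
)

def fill_prompt(template: str, model: str, dataset: str) -> str:
    """Substitute the double-brace placeholders with concrete values."""
    return template.replace("{{model}}", model).replace("{{dataset}}", dataset)

# Example usage with illustrative values:
prompt = fill_prompt(TEMPLATE, "claude-opus-4.5", "TriviaQA")
print(prompt)
```

The filled prompt can then be passed to any of the models listed under "Works Best With" via their respective APIs.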

Details

Category

Analysis

Use Cases

Factual accuracy audit
Hallucination identification
Output verification

Works Best With

claude-opus-4.5
gpt-5.2
gemini-2.0-flash

