A new website called Halupedia has launched as a Wikipedia clone featuring entries entirely hallucinated by large language models [1].

The project is a deliberately provocative experiment. By compiling AI errors into a full-scale reference site, the creators aim to spark critical discussion about the proliferation of AI-generated misinformation and the reliability of automated knowledge [1], [4].

Halupedia mimics the structure and aesthetic of a traditional online encyclopedia, but its content is intentionally false. The site makes no attempt to hide the hallucinations; instead, it presents them as factual entries to highlight how easily large language models can generate convincing but incorrect information [1].

This approach transforms the typical AI error—the hallucination—into a curated archive. While standard AI tools are often tuned to reduce these errors, Halupedia leans into them to show the scale of potential misinformation [1], [2].

The project arrives as a warning about the fragility of digital truth. By creating a space where falsehoods are organized and presented with the authority of a wiki, the developers illustrate how AI-generated content could pollute the broader information ecosystem [3].

Users can browse the site to see how the AI fabricates historical events, biographies, and scientific concepts. The project functions as a mirror of the current state of generative AI, emphasizing that fluent text generation is not the same as factual accuracy [1], [2].

Halupedia acts as a stress test for human discernment in the era of generative AI. By presenting hallucinations within a trusted format, the encyclopedia, it demonstrates that the perceived authority of a layout can override a user's skepticism. It also points to the danger of 'model collapse': as the internet fills with synthetic, incorrect data, future AI models may train on that data, creating a feedback loop that compounds misinformation.
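The feedback loop described above can be made concrete with a back-of-the-envelope model. The toy sketch below is not from the article or the Halupedia project; the function name and every rate in it are illustrative assumptions. It assumes each model generation trains on a fixed mix of human-written text and the previous generation's synthetic output, and that each generation adds a fresh hallucination rate on top of whatever errors it inherits.

```python
# Toy model of the misinformation feedback loop (illustrative assumptions only).
# Each generation trains on (1 - synthetic_share) human text plus synthetic_share
# text from the previous model, then introduces fresh hallucinations of its own.

def error_after_generations(
    generations: int,
    synthetic_share: float = 0.5,      # assumed fraction of training data that is AI-generated
    hallucination_rate: float = 0.05,  # assumed fresh errors each generation introduces
    human_error: float = 0.01,         # assumed baseline error rate of human-written text
) -> float:
    """Return the fraction of incorrect 'facts' a model carries after N generations."""
    error = human_error
    for _ in range(generations):
        # The new model's error rate roughly tracks its training mixture,
        # plus the errors it hallucinates on its own (capped at 100%).
        training_error = (1 - synthetic_share) * human_error + synthetic_share * error
        error = min(1.0, training_error + hallucination_rate)
    return error

if __name__ == "__main__":
    for n in (1, 3, 5, 10):
        print(f"generation {n}: ~{error_after_generations(n):.1%} incorrect")
```

Under these assumptions the error rate climbs for a few generations and then plateaus well above the human baseline (around 11% with the defaults), which is the qualitative behavior the term 'model collapse' gestures at: synthetic errors compound until they dominate the fresh human signal.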