On March 20, 2026, Wikipedia’s volunteer editors overwhelmingly voted (40–2) to ban large language models (LLMs) from generating or rewriting encyclopedia entries. The new rule applies to the English‑language version of the massive online encyclopedia, which hosts over 7.1 million articles. The community decided that AI‑generated text often violates Wikipedia’s core policies on neutrality and verifiability, citing hallucinated facts, broken citations and stylistic tells. As one editor explained, the mood among moderators shifted from cautious optimism to genuine worry as more administrative reports centered on LLM‑related problems.
The ban isn’t a blanket prohibition on AI. Editors may still use AI for two narrowly defined tasks:
| Allowed Use | Details |
|---|---|
| Copy Editing Suggestions | AI can suggest minor grammar or wording improvements, but must NOT add new information. Human review is required. |
| Translations | AI can help translate articles, but editors must verify accuracy and understand both languages. |
Everything else, from drafting entire articles to rewriting existing content, is off‑limits. The updated policy warns that simply pointing to an editor’s writing style isn’t enough to prove AI involvement. Moderators should look for violations of content policies and err on the side of caution.
For years, Wikipedia editors have experimented with AI assistance. Some even formed the WikiProject AI Cleanup to “speedily delete” poorly written, machine‑generated articles. However, as AI tools proliferated, editors became overwhelmed by the volume of low‑quality entries and started seeing the hallucination problem first‑hand. Volunteer editor Ilyas Lebleu (a.k.a. Chaotic Enby), who co‑authored the new guideline, told reporters that recent months brought a surge in AI‑related disputes, tipping community sentiment toward a formal ban.
The ban also reflects Wikipedia’s core philosophy: articles must be written by humans who understand the sources they cite. Jimmy Wales, Wikipedia’s co‑founder, has repeatedly argued that current AI models are “nowhere near good enough” for encyclopedia work and called AI‑generated content a “mess”. Even though AI can suggest grammar tweaks or provide translation help, the site’s leaders fear that relying on LLMs for writing undermines trust.
The decision comes amid a broader reckoning over AI in media. For those who use AI tools to generate quick content summaries, the new policy is a reminder that human‑curated references still matter. Relying on an AI summary may be fine for brainstorming, but publishing it verbatim on an encyclopedia undermines quality and can mislead readers. FutureTools has always advocated for responsible AI. We recommend combining LLM assistants with human expertise and citing primary sources (like official policies or research papers) to avoid spreading inaccuracies. Our tutorials on prompt engineering and fact‑checking tools can help you strike that balance.
Wikipedia’s decision could inspire similar moves across other collaborative platforms. As one editor told reporters, “I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome on their own terms”. With AI‑generated content booming, forums, Q&A sites and newsrooms will likely grapple with the same tension between efficiency and accuracy. At FutureTools, we’re tracking these developments and will continue to share tips on integrating AI ethically.
Wikipedia’s ban doesn’t demonize AI; it draws a clear line between helpful tools and harmful shortcuts. By allowing translation and copy‑editing assistance while prohibiting full article generation, the encyclopedia stays true to its mission of being a reliable, human‑vetted reference. As AI continues to evolve, expect more nuanced policies like this across the web.
Want to stay ahead of the curve? Subscribe to FutureTools for weekly insights into AI trends, responsible usage guides and tool round‑ups. We’ll help you harness AI’s power without falling into the AI‑generated slop trap.

Mehdi tracks the fast-moving world of AI, breaking down major updates, launches, and policy shifts into clear, timely news that helps readers stay ahead of what’s next.