Google engineer John Mueller said that no AI system currently uses llms.txt to process website content [1].

The debate over this proposed file highlights the tension between website owners and AI developers. As large language models increasingly scrape the web, publishers are seeking ways to signal the intent and value of their content to AI-driven search tools.

An llms.txt file is a plain-text document (written in Markdown, per the original llmstxt.org proposal) that website owners can place at the root of their servers to describe how LLMs should treat their content [2]. Unlike robots.txt, which provides instructions to prevent or allow crawling, llms.txt is not a control file [3]. Instead, it is intended to serve as a guide that points AI models toward the most relevant information on a site [4].
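For illustration, the llmstxt.org proposal describes a simple Markdown structure: an H1 with the site name, a blockquote summary, and H2 sections listing the pages a model should prioritize. A minimal sketch (the site name, section headings, and URLs below are invented for the example):

```markdown
# Example Site

> A one-sentence summary of what the site covers and who it is for.

## Documentation

- [Getting started](https://example.com/docs/start.md): Overview for new users
- [API reference](https://example.com/docs/api.md): Endpoint and parameter details

## Optional

- [Changelog](https://example.com/changelog.md): Release history, safe to skip
```

Note that nothing in this file blocks or permits crawling; it only suggests which content matters, which is why it is described as a guide rather than a control mechanism.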

Industry reactions to the proposal remain divided. Some view the file as a "treasure map for AI" that points models toward a site's most valuable sections [4]. However, other experts dismiss the file as speculative theater and compare it to the obsolete meta keywords tag [5].

Recent data suggests limited adoption and utility. One analysis tracked 10 sites to determine whether the file made any measurable difference, and according to Mueller, the number of AI systems actually using the format remains zero [1, 6].

Mueller's comments add weight to the argument that the file is currently non-functional. The discussion intensified after the llms.txt proposal surfaced in late 2024, as the SEO community sought new ways to influence AI discovery [1, 2, 3]. Despite the effort, the lack of adoption by major AI developers means the file serves no practical purpose for site owners at this time [1, 5].

Mueller put it bluntly: "FWIW no AI system currently uses llms.txt" [1].

The current lack of support for llms.txt indicates that AI developers are prioritizing their own scraping and indexing methods over publisher-provided hints. Until a major LLM provider officially adopts the standard, the file remains a theoretical exercise rather than a functional tool for search optimization.