Effective use of the robots.txt file for websites: Recommendations from a Google analyst

Publication date: 21.12.2025
Blog category: Web Technology News

Gary Illyes, a Google analyst, recently highlighted the importance of robots.txt for website owners in a LinkedIn post. He suggests using the file to prevent web crawlers from accessing URLs that trigger actions, such as adding items to a cart or wish list, and stresses blocking URLs with "?add_to_cart" or "?add_to_wishlist" parameters through the robots.txt file.

"Looking at what we're crawling from the sites in the complaints, way too often it's action URLs such as 'add to cart' and 'add to wishlist.' These are useless for crawlers, and you likely don't want them crawled." - Gary Illyes

🚀 Illyes also pointed out that while the HTTP POST method can prevent such URLs from being crawled, crawlers can still make POST requests, so using robots.txt remains a good idea. For example, if your website has URLs like "https://example.com/product/scented-candle-v1?add_to_cart" and "https://example.com/product/scented-candle-v1?add_to_wishlist", you should add disallow rules for them in your robots.txt file.
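A minimal sketch of what such rules could look like for the example URLs above. Note that the `*` wildcard is an extension documented by Google and honored by most major crawlers, not part of the original robots exclusion standard:

```
# Keep crawlers away from action URLs. "*" matches any sequence of
# characters, so these rules block any URL containing the parameter.
User-agent: *
Disallow: /*add_to_cart
Disallow: /*add_to_wishlist
```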

  • 📌 Using the robots.txt file reduces server load by keeping web crawlers away from URLs that serve no indexing purpose.
  • 📌 Properly configured robots.txt rules can significantly improve crawl efficiency.
  • 📌 The robots.txt standard was developed back in the 1990s and is still relevant today.
1. What is a robots.txt file?
2. Why is robots.txt important for websites?
3. How do you properly create and configure a robots.txt file?
4. What problems can arise from incorrect use of the robots.txt file?
5. How can you check whether web crawlers are following robots.txt directives correctly? (See the sketch below.)
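On the last question, one quick way to test rules is Python's standard-library robots.txt parser. A minimal sketch follows; the rules and URLs are hypothetical, and plain path prefixes are used because urllib.robotparser performs simple prefix matching and does not implement Google's "*" wildcard extension:

```python
from urllib import robotparser

# Hypothetical rules mirroring the article's example, parsed in-memory
# so the sketch runs without fetching a live robots.txt file.
rules = """\
User-agent: *
Disallow: /cart/add
Disallow: /wishlist/add
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# Blocked: the path starts with a disallowed prefix.
print(rp.can_fetch("Googlebot", "https://example.com/cart/add?item=scented-candle-v1"))  # False

# Allowed: an ordinary product page matches no rule.
print(rp.can_fetch("Googlebot", "https://example.com/product/scented-candle-v1"))  # True
```

For Googlebot specifically, the robots.txt report in Google Search Console shows how Google fetched and parsed a site's live robots.txt file.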

🚀 Illyes confirms that Google's crawlers fully respect robots.txt rules, with rare exceptions that are thoroughly documented for scenarios involving "user-triggered or contractual fetches". He also emphasizes that compliance with the robots.txt protocol is one of the core principles of Google's website crawling policy.

🧩 Summary: Gary Illyes, a Google analyst, emphasizes the importance of using the robots.txt file to effectively manage web crawlers. He recommends using it to keep crawlers away from URLs that trigger actions such as adding items to a cart or wish list, which reduces server load and improves crawl efficiency.
🧠 Own Considerations: Given Illyes' recommendations, it's worth keeping in mind the role robots.txt plays in a website's interaction with web crawlers. Website owners often focus on creating interesting, high-quality content while forgetting that crawl efficiency can be greatly improved with a well-configured robots.txt file. This, in turn, can have a positive effect on the site's performance and its position in search results.