robots.txt
A robots.txt file tells search engine crawlers which pages or files the crawler can or can't request from your site.
SEO references:
English: https://support.google.com/webmasters/answer/6062608?hl=en
Simplified Chinese: https://support.google.com/webmasters/answer/6062608?hl=zh-Hans
A live example: https://abc.xgqfrms.xyz/robots.txt
# robots.txt is a plain text file placed in the root directory of a site.
# Its configuration is simple, yet its effect is powerful.
# It can tell search engine spiders to crawl only the specified content, or forbid them from crawling part or all of the site.
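# Block the following spiders from the entire site ("Disallow: /" forbids every path):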
User-agent: Baiduspider
Disallow: /
User-agent: Sosospider
Disallow: /
User-agent: sogou spider
Disallow: /
User-agent: YodaoBot
Disallow: /
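# Allow the following crawlers unrestricted access (an empty "Disallow:" forbids nothing):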
User-agent: Googlebot
Disallow:
User-agent: Bingbot
Disallow:
User-agent: Slurp
Disallow:
User-agent: Teoma
Disallow:
User-agent: ia_archiver
Disallow:
User-agent: twiceler
Disallow:
User-agent: MSNBot
Disallow:
User-agent: Scrubby
Disallow:
User-agent: Robozilla
Disallow:
User-agent: Gigabot
Disallow:
User-agent: googlebot-image
Disallow:
User-agent: googlebot-mobile
Disallow:
User-agent: yahoo-mmcrawler
Disallow:
User-agent: yahoo-blogs/v3.9
Disallow:
User-agent: psbot
Disallow:
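# Default group for every other crawler: everything is allowed except /bin/: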
User-agent: *
Disallow: /bin/
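To see how a crawler actually interprets these rules, here is a minimal sketch using Python's standard-library urllib.robotparser. The inline rules are a shortened copy of the file above, and the bot name SomeOtherBot is just a placeholder for any crawler that has no named group.

from urllib.robotparser import RobotFileParser

# The rules above, reduced to the three groups exercised below.
ROBOTS_TXT = """\
User-agent: Baiduspider
Disallow: /

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /bin/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

base = "https://abc.xgqfrms.xyz"
print(parser.can_fetch("Baiduspider", base + "/index.html"))   # False: blocked from the whole site
print(parser.can_fetch("Googlebot", base + "/index.html"))     # True: an empty Disallow blocks nothing
print(parser.can_fetch("SomeOtherBot", base + "/bin/tool"))    # False: falls back to the "*" group
print(parser.can_fetch("SomeOtherBot", base + "/blog/post"))   # True: only /bin/ is off-limits for "*"

Calling set_url("https://abc.xgqfrms.xyz/robots.txt") followed by read() instead of parse() would fetch and evaluate the live file. Note that a crawler matching a named group, such as Googlebot above, uses only that group and ignores the "*" rules, so Googlebot may still fetch /bin/.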
This post was first published on cnblogs (博客园). Author: xgqfrms. Original link: https://www.cnblogs.com/xgqfrms/p/12577829.html
Reproduction without permission is forbidden; violators will be held accountable!