2014-04-24 21:52
Problem: If you were designing a web crawler, how would you avoid getting stuck in an infinite crawling loop?
Solution: Normalize each URL to a canonical form and check for duplicates. Design a signature (hash) algorithm with as low a collision rate as possible, compute a signature for every URL that is crawled, and store it. The hyperlink graph of a normal website contains many directed cycles (if there were no cycles, you could never click your way back to a page you had already visited, which would clearly be absurd), so duplicate detection is a required feature.
Code:
// 10.5 Assume you're writing a web crawler: how would you avoid going into an infinite loop?
// Answer:
// Every crawler starts from a seed link and performs a multithreaded BFS down the website, fetching pages.
// Every link visited is put into a hash table. If there is a loop, the link will already be found in the
// hash table and will not be visited again.
int main()
{
    return 0;
}
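For reference, below is a minimal sketch of the dedup idea described above: each URL is normalized to a canonical form, its key is kept in a hash set, and a link is only enqueued if its key has not been seen before. The fetchLinks stub, the normalize rules, and the example.com URLs are hypothetical placeholders, not part of the original answer; a real crawler would download pages, extract links, and likely store a fixed-size signature of the normalized URL rather than the full string.

// Sketch only: single-threaded BFS crawl that avoids cycles by normalizing each URL
// and checking it against a visited set before enqueueing it.
#include <algorithm>
#include <cctype>
#include <iostream>
#include <queue>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical stand-in for downloading a page and extracting its <a href> targets;
// here it just simulates a tiny site that contains a cycle (a -> home -> a).
std::vector<std::string> fetchLinks(const std::string& url)
{
    if (url == "http://example.com")   return {"http://example.com/a", "http://example.com/b"};
    if (url == "http://example.com/a") return {"http://example.com"};   // back-edge forming a cycle
    return {};
}

// Canonicalize a URL so that trivially different spellings map to the same key.
std::string normalize(std::string url)
{
    std::transform(url.begin(), url.end(), url.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    if (!url.empty() && url.back() == '/')   // drop a trailing slash
        url.pop_back();
    return url;
}

void crawl(const std::string& seed)
{
    std::unordered_set<std::string> visited;   // keys of URLs already seen
    std::queue<std::string> frontier;

    frontier.push(normalize(seed));
    visited.insert(normalize(seed));

    while (!frontier.empty()) {
        std::string url = frontier.front();
        frontier.pop();
        std::cout << "crawling " << url << '\n';

        for (const std::string& link : fetchLinks(url)) {
            std::string key = normalize(link);
            // Cycle avoidance: a link whose normalized key is already in the set
            // has been crawled (or queued) before, so it is never enqueued again.
            if (visited.insert(key).second)
                frontier.push(key);
        }
    }
}

int main()
{
    crawl("http://example.com/");
    return 0;
}

Storing full URL strings is fine for illustration; at web scale one would store a compact low-collision signature (e.g. a 64- or 128-bit hash of the normalized URL) to keep memory bounded, which is exactly the signature scheme the solution mentions.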