That wall must be fifty meters high!
Two days after I finished writing this, the data source got blocked, lol.
I've received a notice to suspend updates; this post will be taken down later. Hoping for an official unban.
===========================================
https://github.com/mlxy/GoogleHostsUpdate
It just reads the page source and pulls the hosts out with a regex.
I was simply too lazy to keep updating the hosts file by hand.
All this blood, sweat and tears for the sake of Chrome sync.
Add it to your scheduled tasks alongside the earlier Tumblr crawler.
Procedural code, full of love.
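For the scheduled-task step, a `schtasks` one-liner is enough. The task name, script path and run time below are placeholders, and the task needs elevation (`/RL HIGHEST`) because the script writes into `System32`:

```shell
schtasks /Create /TN "GoogleHostsUpdate" ^
    /TR "python C:\scripts\GoogleHostsUpdate.py" ^
    /SC DAILY /ST 09:00 /RL HIGHEST
```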
```python
#encoding:utf-8
# NOTE: Python 2 script -- urllib.urlopen was removed in Python 3.
import urllib
import re

url = 'http://www.360kb.com/kb/2_122.html'
regexHosts = r'#google hosts 2015 by 360kb.com.*#google hosts 2015 end'
regexTimeUpdated = r'<strong>(\d\d\d\d\.\d\d?\.\d\d?) </strong>'

hostsPath = 'C:\\Windows\\System32\\drivers\\etc\\hosts'

def retrievePage(url):
    ''' Fetch the page source. '''
    response = urllib.urlopen(url)
    page = response.read()
    return page

def matchTimeUpdated(page):
    ''' Extract the hosts update time from the page source. '''
    timeUpdated = re.search(regexTimeUpdated, page)
    return timeUpdated.group(1)

def matchHostList(page):
    ''' Extract the host list from the page source. '''
    result = re.search(regexHosts, page, re.S)
    hosts = result.group()
    return hosts

def translateSpaceEntity(srcString):
    ''' Convert "&nbsp;" entities in the result to spaces. '''
    return srcString.replace('&nbsp;', ' ')

def removeHtmlLabels(srcString):
    ''' Strip HTML tags from the result. '''
    return re.sub(r'<[^>]+>', '', srcString)

def addExtraInfo(srcString, extraInfo):
    ''' Prepend extra info at the top of the file. '''
    return extraInfo + '\n' + srcString

def write2File(hosts, filePath):
    ''' Write out to a file. '''
    f = open(filePath, 'w')
    f.write(hosts)
    f.close()

def run():
    ''' Main entry point. '''
    page = retrievePage(url)

    roughHosts = matchHostList(page)
    preciseHosts = removeHtmlLabels(translateSpaceEntity(roughHosts))

    extraInfo = '''#Hosts updated at %s
#Script written by mlxy@https://github.com/mlxy, feel free to modify and distribute it.
''' % matchTimeUpdated(page)
    hostsWithExtra = addExtraInfo(preciseHosts, extraInfo)

    write2File(hostsWithExtra, hostsPath)

if __name__ == '__main__':
    run()
```
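The two cleanup helpers are the fiddly part. They can be exercised on a made-up snippet (the HTML below is illustrative, not taken from 360kb.com), written here as standalone Python 3 functions so it runs anywhere; this assumes the page encodes non-breaking spaces as literal `&nbsp;` entities:

```python
import re

def translateSpaceEntity(s):
    # Replace HTML non-breaking-space entities with plain spaces.
    return s.replace('&nbsp;', ' ')

def removeHtmlLabels(s):
    # Strip anything shaped like an HTML tag.
    return re.sub(r'<[^>]+>', '', s)

snippet = '<p>203.208.46.200&nbsp;www.google.com</p>'
print(removeHtmlLabels(translateSpaceEntity(snippet)))
# → 203.208.46.200 www.google.com
```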