
Python Crawler Notes

Basic fetching
import urllib2
html=urllib2.urlopen("http://xxxxx").read().decode('utf-8')
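The snippet above is Python 2. In Python 3, urllib2 was folded into urllib.request; a minimal equivalent sketch is below. To keep it runnable without internet access, it serves a page from a throwaway local server (the handler class and URL are illustrative, not from the original post):

```python
import http.server
import threading
from urllib.request import urlopen

# Serve one small page locally so the fetch works offline.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write("hello".encode("utf-8"))

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Python 3 equivalent of urllib2.urlopen(...).read().decode('utf-8')
url = "http://127.0.0.1:%d/" % server.server_port
html = urlopen(url).read().decode("utf-8")
print(html)
server.shutdown()
```

In real use you would pass the target site's URL directly instead of spinning up a server.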


Using a proxy
import urllib2
proxy=urllib2.ProxyHandler({'http':'http://xxxxxx:xxxx'})
opener=urllib2.build_opener(proxy,urllib2.HTTPHandler)
urllib2.install_opener(opener)
html=urllib2.urlopen("xxxxxxx").read().decode('utf-8')
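A Python 3 sketch of the same proxy setup, assuming a placeholder proxy address (127.0.0.1:8080 is hypothetical). No request is actually sent; the last line just shows the mapping the handler will apply:

```python
import urllib.request

# Python 3 equivalent of urllib2.ProxyHandler / build_opener.
# The proxy address is a placeholder, not a real proxy.
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:8080"})
opener = urllib.request.build_opener(proxy, urllib.request.HTTPHandler)
urllib.request.install_opener(opener)

# The handler records the proxy it will route http requests through.
print(proxy.proxies["http"])
```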


Using cookies
import urllib2,cookielib
cookies=urllib2.HTTPCookieProcessor(cookielib.CookieJar())
opener=urllib2.build_opener(cookies,urllib2.HTTPHandler)
urllib2.install_opener(opener)
html=urllib2.urlopen('xxxxxxx').read()
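In Python 3, cookielib was renamed http.cookiejar; the equivalent setup is sketched below. The jar starts empty and gets populated from Set-Cookie headers once real responses come back:

```python
import http.cookiejar
import urllib.request

# Python 3 equivalent: cookielib -> http.cookiejar.
jar = http.cookiejar.CookieJar()
cookies = urllib.request.HTTPCookieProcessor(jar)
opener = urllib.request.build_opener(cookies, urllib.request.HTTPHandler)

# No requests made yet, so the jar holds no cookies.
print(len(jar))
```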

To use a proxy and cookies at the same time, change the opener to:
opener=urllib2.build_opener(proxy,cookies,urllib2.HTTPHandler)
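The combined form in Python 3, with a placeholder proxy address. build_opener keeps every handler passed to it (alongside its defaults), which the last line confirms:

```python
import http.cookiejar
import urllib.request

# Both handlers in one opener (proxy address is a placeholder).
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:8080"})
cookies = urllib.request.HTTPCookieProcessor(http.cookiejar.CookieJar())
opener = urllib.request.build_opener(proxy, cookies, urllib.request.HTTPHandler)

# Our handlers sit in the opener's handler chain.
handler_types = [type(h).__name__ for h in opener.handlers]
print("ProxyHandler" in handler_types, "HTTPCookieProcessor" in handler_types)
```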


Handling forms
First capture the request packet; in this example the POST body contains username, passwd, and login_submint:
import urllib,urllib2
postdata=urllib.urlencode({
'username':'xxxxxx',
'passwd':'xxxxxx',
'login_submint':'Login'
})
Then build the HTTP request and send it:
req=urllib2.Request(
url='xxxxxxxxx',
data=postdata
)
result=urllib2.urlopen(req).read()
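A Python 3 sketch of the same POST: urlencode moved to urllib.parse, and the body must be encoded to bytes. The URL is a placeholder and no request is sent; the last line shows that attaching a data payload switches the request method to POST:

```python
import urllib.parse
import urllib.request

# Python 3: urlencode lives in urllib.parse; POST data must be bytes.
# Field names follow the captured form above; values are placeholders.
postdata = urllib.parse.urlencode({
    "username": "xxxxxx",
    "passwd": "xxxxxx",
    "login_submint": "Login",
}).encode("utf-8")

# Placeholder URL; nothing is sent until urlopen(req) is called.
req = urllib.request.Request(url="http://127.0.0.1/login", data=postdata)

# A Request carrying data is issued as a POST.
print(req.get_method())
```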



Faking browser behavior
headers={'User-Agent':'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}

req=urllib2.Request(
url='xxxxxxxxx',
data=postdata,
headers=headers
)
result=urllib2.urlopen(req).read()
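The Python 3 version attaches the same headers dict to the Request (placeholder URL, no request sent). The header can be read back to confirm it is set:

```python
import urllib.request

# Same User-Agent string as above; URL is a placeholder.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; "
                  "rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6"
}
req = urllib.request.Request(url="http://127.0.0.1/", headers=headers)

# urllib normalizes header names to capitalized form internally.
print(req.get_header("User-agent"))
```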



Anti-hotlinking
Add a Referer field to headers.
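Some sites reject requests whose Referer does not point back to their own pages; a minimal sketch with placeholder URLs (Python 3):

```python
import urllib.request

# Pretend the request originated from the site's own page.
# Both URLs are placeholders for illustration.
headers = {
    "User-Agent": "Mozilla/5.0",
    "Referer": "http://127.0.0.1/",
}
req = urllib.request.Request(url="http://127.0.0.1/img.png", headers=headers)
print(req.get_header("Referer"))
```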

posted @ 2015-09-27 20:29 寂夜云