Crawler 2: urllib

 
 
Worth knowing about, but not pleasant to use in practice.
 

I. Overview

 
urllib is Python's built-in HTTP request library. It consists of four modules:
 
urllib.request — the request module
urllib.error — the exception handling module
urllib.parse — the URL parsing module (utility functions)
urllib.robotparser — the robots.txt parsing module (see the sketch below)
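 
urllib.robotparser is not covered further in these notes; as a minimal hedged sketch of what it does (the target site is just an illustration):
 
import urllib.robotparser
 
# parse the site's robots.txt, then ask whether a crawler may fetch a given URL
rp = urllib.robotparser.RobotFileParser()
rp.set_url('http://www.baidu.com/robots.txt')
rp.read()
print(rp.can_fetch('*', 'http://www.baidu.com/index.html'))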
 
 
 
urlopen
urlopen is used differently in Python 2 and Python 3.
 
In Python 2:
import urllib2
response = urllib2.urlopen('http://www.baidu.com')
 
In Python 3:
import urllib.request
response = urllib.request.urlopen('http://www.baidu.com')
 

 

II. Usage of urlopen
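 
For reference, the full signature as documented in the Python 3 standard library of this era:
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)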

 
1. Fetching page source with a plain GET request
import urllib.request
 
response = urllib.request.urlopen('http://www.baidu.com')
print(response.read().decode('utf-8'))

 

2. POST request: pass the form data as a dictionary
import urllib.parse
import urllib.request
 
data = bytes(urllib.parse.urlencode({'word': 'hello'}), encoding='utf8')  # encode the form data as bytes
response = urllib.request.urlopen('http://httpbin.org/post', data=data)
print(response.read())

 

3. Timeout: raise an exception if no response arrives within the given time
import urllib.request
 
response = urllib.request.urlopen('http://httpbin.org/get', timeout=1)
print(response.read())

 

 
4. Inspecting the exception
import socket
import urllib.request
import urllib.error
 
try:
    response = urllib.request.urlopen('http://httpbin.org/get', timeout=0.1)
except urllib.error.URLError as e:
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

 

 
 

III. Responses

 
1. Response type
import urllib.request
 
response = urllib.request.urlopen('https://www.python.org')
print(type(response))
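 
Output:
<class 'http.client.HTTPResponse'>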

 

 
2. Status code and response headers
import urllib.request
 
response = urllib.request.urlopen('https://www.python.org')
print(response.status)
print(response.getheaders())   # a list in which each element is a (name, value) tuple
print(response.getheader('Server'))  # the value of the header named 'Server'
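 
Since getheaders() returns (name, value) tuples, converting them into a dictionary is straightforward; a small usage sketch:
 
import urllib.request
 
response = urllib.request.urlopen('https://www.python.org')
headers = dict(response.getheaders())  # turn the list of tuples into a dict
print(headers.get('Server'))           # same value as response.getheader('Server')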

 

 
3. Building a Request object to request a page
 
import urllib.request
 
request = urllib.request.Request('https://python.org')
response = urllib.request.urlopen(request)
print(response.read().decode('utf-8'))

 


Headers and other data can be attached to the request:
from urllib import request,parse
 
url = 'http://httpbin.org/post'
headers = {
    'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)',
    'Host': 'httpbin.org'
}
params = {
    'name': 'Germey'
}
data = bytes(parse.urlencode(params), encoding='utf8')
req = request.Request(url=url, data=data, headers=headers, method='POST')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

 

 
 

Using add_header() to add headers one at a time
 
from urllib import request, parse
 
url = 'http://httpbin.org/post'
params = {
    'name': 'Germey'
}
data = bytes(parse.urlencode(params), encoding='utf8')
req = request.Request(url=url, data=data, method='POST')
req.add_header('User-Agent', 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)')
response = request.urlopen(req)
print(response.read().decode('utf-8'))

 

 
 
 

IV. Handler

 
1. Proxies
 
import urllib.request
 
proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://127.0.0.1:9743',
    'https': 'https://127.0.0.1:9743'
})
opener = urllib.request.build_opener(proxy_handler)
response = opener.open('http://www.baidu.com')
print(response.read())
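 
If every subsequent request should go through the proxy, the opener can also be installed globally with install_opener(); a small sketch reusing the illustrative proxy address above:
 
import urllib.request
 
proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:9743'})
opener = urllib.request.build_opener(proxy_handler)
urllib.request.install_opener(opener)  # from now on, plain urlopen() uses this opener
response = urllib.request.urlopen('http://www.baidu.com')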

 

 
 
2. Cookies
 
Example 1: print the cookie values
import http.cookiejar, urllib.request
 
cookie = http.cookiejar.CookieJar()
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
for item in cookie:
    print(item.name+"="+item.value)

 

 
Example 2: save cookies to a text file (Mozilla format)
import http.cookiejar, urllib.request
filename = 'cookie.txt'
cookie = http.cookiejar.MozillaCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)

 

 
 
Example 3: another save format (LWP)
import http.cookiejar, urllib.request
filename = 'cookie.txt'
cookie = http.cookiejar.LWPCookieJar(filename)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
cookie.save(ignore_discard=True, ignore_expires=True)

 

 
 
Example 4: load cookies from a file
import http.cookiejar, urllib.request
cookie = http.cookiejar.LWPCookieJar()
cookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)
handler = urllib.request.HTTPCookieProcessor(cookie)
opener = urllib.request.build_opener(handler)
response = opener.open('http://www.baidu.com')
print(response.read().decode('utf-8'))

 

 
 

V. Exception handling

Three exception types can generally be caught: URLError, HTTPError, and ContentTooShortError.
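 
ContentTooShortError is not demonstrated in the numbered examples below; it is raised by urllib.request.urlretrieve() when the downloaded data is shorter than the amount promised by the Content-Length header. A minimal hedged sketch (the file URL is just an illustration):
 
import urllib.error
import urllib.request
 
try:
    urllib.request.urlretrieve('http://www.baidu.com/somefile.zip', 'somefile.zip')
except urllib.error.ContentTooShortError:
    print('download interrupted: received less data than Content-Length promised')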
 
1. Printing the cause: URLError has only a reason attribute
from urllib import request, error
try:
    response = request.urlopen('http://cuiqingcai.com/index.html')
except error.URLError as e:
    print(e.reason)

 

 
2. HTTPError is a subclass of URLError and has three attributes: code, reason, and headers
from urllib import request, error
 
try:
    response = request.urlopen('http://cuiqingcai.com/index.htm')
except error.HTTPError as e:
    print(e.reason, e.code, e.headers, sep='\n')
except error.URLError as e:
    print(e.reason)
else:
    print('Request Successfully')

 

 
 
3. Verifying the cause of the exception
import socket
import urllib.request
import urllib.error
 
try:
    response = urllib.request.urlopen('https://www.baidu.com',timeout=0.01)
except urllib.error.URLError as e:
    print(type(e.reason))
    if isinstance(e.reason, socket.timeout):
        print('TIME OUT')

 

 
 

 

VI. URL parsing

 
The urlparse function
 
Signature: urllib.parse.urlparse(urlstring, scheme='', allow_fragments=True)
 
1. Splitting a URL into its components
from urllib.parse import urlparse
 
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(type(result), result)
 
Output:
<class 'urllib.parse.ParseResult'> ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')

 

 
 
2.1 When the URL omits the scheme, the scheme parameter supplies one
from urllib.parse import urlparse
 
result = urlparse('www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)
Output:
ParseResult(scheme='https', netloc='', path='www.baidu.com/index.html', params='user', query='id=5', fragment='comment')

 

 
 
2.2 When the URL already contains a scheme, the scheme parameter has no effect
from urllib.parse import urlparse
 
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment', scheme='https')
print(result)
Output:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5', fragment='comment')

 

 
 
3.1 allow_fragments: controlling fragment (anchor) parsing
from urllib.parse import urlparse
 
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment', allow_fragments=False)
print(result)
Output: when allow_fragments=False, the fragment is appended to query:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html', params='user', query='id=5#comment', fragment='')

 

 
 
3.2 When query and params are empty, the fragment is appended to path
from urllib.parse import urlparse
 
result = urlparse('http://www.baidu.com/index.html#comment', allow_fragments=False)
print(result)
Output:
ParseResult(scheme='http', netloc='www.baidu.com', path='/index.html#comment', params='', query='', fragment='')

 

 
 
4. The urlunparse function: assembling a URL from components
from urllib.parse import urlunparse
 
data = ['http', 'www.baidu.com', 'index.html', 'user', 'a=6', 'comment']
print(urlunparse(data))
Output:
http://www.baidu.com/index.html;user?a=6#comment
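 
urlunparse is the inverse of urlparse, so a parsed result can be fed straight back in; a small round-trip sketch:
 
from urllib.parse import urlparse, urlunparse
 
result = urlparse('http://www.baidu.com/index.html;user?id=5#comment')
print(urlunparse(result))  # reassembles the original URL unchanged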
 
 
5. The urljoin function
 
When the base URL and the new URL both supply a component, the new URL's value takes precedence:
from urllib.parse import urljoin
 
print(urljoin('http://www.baidu.com', 'FAQ.html'))
print(urljoin('http://www.baidu.com', 'https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/FAQ.html'))
print(urljoin('http://www.baidu.com/about.html', 'https://cuiqingcai.com/FAQ.html?question=2'))
print(urljoin('http://www.baidu.com?wd=abc', 'https://cuiqingcai.com/index.php'))
print(urljoin('http://www.baidu.com', '?category=2#comment'))
print(urljoin('www.baidu.com', '?category=2#comment'))
print(urljoin('www.baidu.com#comment', '?category=2'))

Output:
 
http://www.baidu.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html
https://cuiqingcai.com/FAQ.html?question=2
https://cuiqingcai.com/index.php
http://www.baidu.com?category=2#comment
www.baidu.com?category=2#comment
www.baidu.com?category=2
 
 
 
6. The urlencode function: converting a dictionary into GET query parameters
from urllib.parse import urlencode
 
params = {
    'name': 'germey',
    'age': 22
}
base_url = 'http://www.baidu.com?'
url = base_url + urlencode(params)
print(url)

 

Output:
http://www.baidu.com?name=germey&age=22
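 
For completeness, the reverse direction also lives in urllib.parse: parse_qs turns a query string back into a dictionary. A small sketch:
 
from urllib.parse import parse_qs
 
query = 'name=germey&age=22'
print(parse_qs(query))  # {'name': ['germey'], 'age': ['22']}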
 
 
 
 
 