The requests module, part 1
Requests: the only Non-GMO HTTP library for Python, safe for human consumption. Requests is a module for making network requests.
Environment setup:
pip install requests
What the requests module does:
It simulates a browser sending requests.
The requests workflow (coding steps):
- Specify the url
- Send the request via the requests module
- Extract the data from the response object
- Persist the data to storage
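The four steps above can be sketched as a minimal skeleton (the target `https://www.example.com/` and the output filename are placeholders for illustration, not one of the tutorial's cases):

```python
import requests

# 1. specify the url (a placeholder target)
url = 'https://www.example.com/'
# 2. send the request via the requests module
response = requests.get(url=url)
# 3. extract the data from the response object
page_text = response.text
# 4. persist it to storage
with open('./page.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
```

Every one of the five cases below follows this same four-step shape; only the url, the parameters, and the persistence format change.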
Crawling cases:
- Learn and consolidate the module through five crawler projects built on requests
- GET request with the requests module
- Requirement: crawl the page data returned by a Sogou search for a given term
- POST request with the requests module
- Requirement: log in to Douban Movies and crawl the post-login page data
- AJAX GET request with the requests module
- Requirement: crawl movie detail data from the Douban movie rankings at https://movie.douban.com/
- AJAX POST request with the requests module
- Requirement: crawl restaurant data for a specified location from the KFC store finder at http://www.kfc.com.cn/kfccda/index.aspx
- Comprehensive exercise
- Requirement: crawl cosmetics production licence data from the National Medical Products Administration at http://125.35.6.84:81/xk/
- GET request with the requests module
Code example:
```python
import requests

# specify the search keyword
word = input('enter a word you want to search:')
# custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}
# specify the url
url = 'https://www.sogou.com/web'
# package the get request parameters
params = {
    'query': word,
    'ie': 'utf-8'
}
# send the request
response = requests.get(url=url, params=params, headers=headers)
# extract the response data
page_text = response.text
# persist it to storage
with open('./sougou.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
```
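One detail worth noticing: requests URL-encodes the `params` dict into the query string for you. You can see this offline, without sending anything, by preparing the request and inspecting the final URL (the query value `'python'` here is just a sample term):

```python
import requests

# build -- but do not send -- the same GET request, then inspect the final URL
req = requests.Request('GET', 'https://www.sogou.com/web',
                       params={'query': 'python', 'ie': 'utf-8'})
prepared = req.prepare()
print(prepared.url)  # the params dict has been encoded into the query string
```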
Spoofing the request carrier's identity:
- User-Agent: identifies the carrier of a request. For a request made through a browser, the carrier is the browser, so the User-Agent is that browser's identity string; for a request made by a crawler program, the carrier is the crawler, so the User-Agent identifies the crawler. By inspecting this value, a server can tell whether a request came from a particular browser or from a crawler program.
- Anti-crawling mechanism: some portal sites capture and inspect the User-Agent of incoming requests, and refuse to serve data when the UA identifies a crawler program.
- Counter-strategy: disguise the crawler's UA as the identity string of some browser.
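This is why every example in this lesson sets a custom `User-Agent`: by default, requests advertises itself. A quick offline check using requests' own utility function shows the default UA and the browser string used to replace it:

```python
import requests

# the UA string requests sends when you do not override it
default_ua = requests.utils.default_user_agent()
print(default_ua)  # e.g. 'python-requests/2.x.y' -- an obvious crawler signature

# the counter-strategy: override User-Agent with a browser identity string
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
}
```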
- POST request with the requests module
Code example:
```python
import requests

url = 'https://accounts.douban.com/login'
# package the request parameters
data = {
    "source": "movie",
    "redir": "https://movie.douban.com/",
    "form_email": "15027900535",
    "form_password": "bobo@15027900535",
    "login": "登录",
}
# custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}
response = requests.post(url=url, data=data, headers=headers)
page_text = response.text
with open('./douban111.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
```
- AJAX GET request with the requests module
Code example:
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import requests

if __name__ == "__main__":
    # specify the ajax-get url (obtained with a packet-capture tool)
    url = 'https://movie.douban.com/j/chart/top_list?'
    # custom request headers; header fields must be packaged in a dict
    headers = {
        # spoof the User-Agent; other header fields can be customized the same way
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36',
    }
    # get request parameters (taken from the packet-capture tool)
    param = {
        'type': '5',
        'interval_id': '100:90',
        'action': '',
        'start': '0',
        'limit': '20'
    }
    # send the get request and obtain the response object
    response = requests.get(url=url, headers=headers, params=param)
    # extract the response content: the body is a JSON string
    print(response.text)
```
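Since the body is a JSON string, it can be parsed into Python objects rather than printed raw (requests can even do this itself with `response.json()`). An offline sketch with a hand-written sample; the field names `rank`, `title`, and `score` are assumptions about the endpoint's payload, not verified here:

```python
import json

# hypothetical sample of the JSON array the ranking endpoint returns
sample = '[{"rank": 1, "title": "some movie", "score": "9.6"}]'
movies = json.loads(sample)  # equivalent to response.json() on a live response
for movie in movies:
    print(movie["rank"], movie["title"], movie["score"])
```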
- AJAX POST request with the requests module
Code example:
```python
import requests
import json

url = "http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
all_adds = []
# the interface is paginated; pageIndex starts at 1
for i in range(1, 9):
    data = {
        "cname": "",
        "pid": "",
        "keyword": "北京",
        "pageIndex": str(i),
        "pageSize": "10"
    }
    response_obj = requests.post(url=url, data=data, headers=header)
    all_adds.append(response_obj.text)
# persist the collected pages as JSON
with open("kfc.json", "w") as f:
    json.dump(all_adds, f)
# read the data back to verify it round-trips
with open("kfc.json", "r") as f:
    stores = json.load(f)
```
- Comprehensive exercise
Code example:
```python
import requests
import json

nameurl = "http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsList"
header = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"}
all_name = []
all_id = []
# page through the licence list (pages 1-7) and collect each page's JSON
for page in range(1, 8):
    namedata = {
        "on": "true",
        "page": str(page),
        "pageSize": "15",
        "productName": "",
        "conditionType": "1",
        "applyname": "",
        "applysn": "",
    }
    msg = requests.post(url=nameurl, data=namedata, headers=header).json()
    all_name.append(msg)
# pull the enterprise ID out of every entry on every page
for name in all_name:
    name_list = name["list"]
    for i in name_list:
        all_id.append(i["ID"])
# fetch the detail record for each ID from the detail interface
detail_url = "http://125.35.6.84:81/xk/itownet/portalAction.do?method=getXkzsById"
detail_list = []
for id_ in all_id:
    detail_data = {
        "id": id_,
    }
    detail_msg = requests.post(url=detail_url, data=detail_data, headers=header).json()
    detail_list.append(detail_msg)
# persist all detail records as JSON
with open("detail.txt", "w") as f:
    json.dump(detail_list, f)
```