requests
Python's standard library provides urllib, urllib2, httplib, and similar modules for making HTTP requests (urllib.request and http.client in Python 3), but their APIs are clumsy. They were built for a different era and a different internet, and they demand an enormous amount of work, including overriding various methods, to accomplish even the simplest tasks.
Requests is an Apache2-licensed HTTP library written in Python. It is a high-level wrapper over Python's built-in modules that makes issuing network requests far more pleasant: with Requests you can easily do anything a browser can do.
Installation
pip3 install requests
1. GET requests
# 1. Without parameters
import requests

ret = requests.get('https://github.com/timeline.json')
print(ret.content)            # the raw response body, as bytes
print(ret.apparent_encoding)  # the encoding guessed from the response body
ret.encoding = ret.apparent_encoding  # set the encoding used to decode ret.text
print(ret.url)
print(ret.text)               # the decoded response body

# 2. With parameters
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.get("http://httpbin.org/get", params=payload)  # params becomes the query string after "?"
print(ret.url)
print(ret.text)
GET requests with parameters -> headers
# We usually send request headers along with the request; the headers are what
# disguise the client as a browser. Commonly useful headers:
# Host
# Referer      # large sites often use this to check where the request came from
# User-Agent   # identifies the client
# Cookie       # cookies travel in the headers, but requests has a dedicated
#              # cookies parameter for them, so don't put them in headers={}

# Adding headers (the server inspects them; without them the request may be
# rejected, e.g. when visiting https://www.zhihu.com/explore)
import requests

response = requests.get('https://www.zhihu.com/explore')
response.status_code  # 500

# Supply your own headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.76 Mobile Safari/537.36',
}
response = requests.get('https://www.zhihu.com/explore', headers=headers)
print(response.status_code)  # 200
GET requests with parameters -> cookies
# Log in to github in a browser and copy the cookies; from then on you can
# authenticate with the cookie alone, no username or password needed.
# username: egonlin  email: 378533872@qq.com  password: lhf@123
import requests

Cookies = {
    'user_session': 'wGMHFJKgDcmRIVvcA14_Wrt_3xaUyJNsBnPbYzEL6L0bHcfc',
}

response = requests.get('https://github.com/settings/emails',
                        cookies=Cookies)  # github does not restrict headers much, so no custom User-Agent is needed; other sites may require one

print('378533872@qq.com' in response.text)  # True
2. POST requests
Introduction
# GET requests
HTTP's default request method is GET
 * no request body
 * the amount of data is limited (URL length caps it at roughly 1K)
 * the data is exposed in the browser's address bar

Common GET operations:
 1. Typing a URL directly into the browser's address bar always issues a GET request
 2. Clicking a hyperlink on a page always issues a GET request
 3. Submitting a form defaults to GET, but it can be set to POST

# POST requests
 (1) the data does not appear in the address bar
 (2) there is no upper limit on the size of the data
 (3) there is a request body
 (4) non-ASCII characters (e.g. Chinese) in the request body are URL-encoded

# !!! requests.post() is used exactly like requests.get(); the one difference is
# that requests.post() takes a data parameter holding the request body
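A minimal sketch of the difference, using httpbin.org purely as an echo service (any echo endpoint would do): the same payload goes into the URL for GET, and into the request body for POST.

import requests

payload = {'name': '张三', 'age': '20'}

# GET: the data is URL-encoded into the query string and visible in the URL
r_get = requests.get('http://httpbin.org/get', params=payload)
print(r_get.url)               # http://httpbin.org/get?name=%E5%BC%A0%E4%B8%89&age=20

# POST: the same data travels in the request body, so the URL stays clean
r_post = requests.post('http://httpbin.org/post', data=payload)
print(r_post.url)              # http://httpbin.org/post
print(r_post.json()['form'])   # {'age': '20', 'name': '张三'}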
Usage
# 1. Basic POST
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
ret = requests.post("http://httpbin.org/post", data=payload)  # a dict passed as data is sent urlencoded
print(ret.text)

# 2. Sending headers plus a JSON body
import requests
import json

url = 'https://api.github.com/some/endpoint'
payload = {'some': 'data'}
headers = {'content-type': 'application/json'}

ret = requests.post(url, data=json.dumps(payload), headers=headers)  # here data carries JSON-formatted data
print(ret.text)
print(ret.cookies)  # the cookies returned by the site
Sending a POST request to simulate a browser login
Notes:
1. For a login flow, enter a wrong username or password on purpose and then analyze the captured traffic. Think about it: with correct credentials the browser redirects immediately, and you will wear yourself out looking for the login request in the capture.
2. Always remember to clear your cookies before working on a login flow;
3. requests.session(): you no longer need to track the intermediate cookies yourself; everything, useful or not, is kept for you;
4. response.cookies.get_dict() # get the cookies as a dict
Logging in to github automatically
'''
1. Analyzing the target site
    Open https://github.com/login in a browser, enter a wrong username/password, and capture the traffic.
    The login turns out to be a POST to: https://github.com/session
    The request headers carry a cookie,
    and the request body contains:
        commit: Sign in
        utf8: ✓
        authenticity_token: lbI8IJCwGslZS8qJPnof5e7ZkCoSoMn6jmDTsL1r/m06NLyIbw7vCrpwrFAPzHMep3Tmf/TSJVoXWrvDZaVwxQ==
        login: egonlin
        password: 123

2. The flow
    First GET https://github.com/login to obtain the initial cookie and the authenticity_token.
    Then POST to https://github.com/session, carrying the initial cookie and the
    request body (authenticity_token, username, password, ...).
    Finally we receive the logged-in cookie.

    ps: if the password appears in encrypted form, enter a wrong username with the
    correct password and copy the encrypted password from the capture. github sends
    the password in plain text.
'''
import requests
import re

# First request
r1 = requests.get('https://github.com/login')
r1_cookie = r1.cookies.get_dict()  # the initial (not yet authorized) cookie
authenticity_token = re.findall(r'name="authenticity_token".*?value="(.*?)"', r1.text)[0]  # CSRF token scraped from the page

# Second request: POST to the login page with the initial cookie, the token, and the credentials
data = {
    'commit': 'Sign in',
    'utf8': '✓',
    'authenticity_token': authenticity_token,
    'login': '317828332@qq.com',
    'password': 'alex3714'
}
r2 = requests.post('https://github.com/session',
                   data=data,
                   cookies=r1_cookie
                   )

login_cookie = r2.cookies.get_dict()

# Third request: from now on login_cookie is enough, e.g. for visiting account settings
r3 = requests.get('https://github.com/settings/emails',
                  cookies=login_cookie)

print('317828332@qq.com' in r3.text)  # True
Note that we first issue a GET request to obtain an unauthorized cookie, and carry that cookie on the POST. Because the login is submitted as a form, we also have to find the token on the page and send it along.
import requests
import re

session = requests.session()

# First request
r1 = session.get('https://github.com/login')
authenticity_token = re.findall(r'name="authenticity_token".*?value="(.*?)"', r1.text)[0]  # CSRF token scraped from the page

# Second request
data = {
    'commit': 'Sign in',
    'utf8': '✓',
    'authenticity_token': authenticity_token,
    'login': '317828332@qq.com',
    'password': 'alex3714'
}
r2 = session.post('https://github.com/session', data=data)

# Third request
r3 = session.get('https://github.com/settings/emails')

print('317828332@qq.com' in r3.text)  # True
A small github login application
import requests
import re

# First request
#   GET request
#   Request headers:
#     - grab the token and cookie
#     - User-Agent
# Second request
#   POST request
#   Request headers:
#     - Referer
#     - User-Agent
#   Request body:
#     - the form data
# Third request, after a successful login
#   - log in manually first and check whether a Referer is sent
#   - request a new url and perform other operations
#   - check whether the username appears in the response

# First request
response1 = requests.get(
    "https://github.com/login",
    headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36",
    },
)
authenticity_token = re.findall('name="authenticity_token".*?value="(.*?)"', response1.text, re.S)[0]
r1_cookies = response1.cookies.get_dict()
# print(r1_cookies, "cookie")  # the cookie we were given

# Second request
response2 = requests.post(
    "https://github.com/session",
    headers={
        "Referer": "https://github.com/",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36",
    },
    data={
        "commit": "Sign in",
        "utf8": "✓",
        "authenticity_token": authenticity_token,
        "login": "haiyanzzz",
        "password": "xxxx",
    },
    cookies=r1_cookies
)
print(response2.status_code)
print(response2.history)  # status codes of the redirects that were followed

# Third request: after logging in, visit other pages
r2_cookies = response2.cookies.get_dict()  # carry the cookie so the site knows it is you
response3 = requests.get(
    "https://github.com/settings/emails",
    headers={
        "Referer": "https://github.com/",
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.108 Safari/537.36",
    },
    cookies=r2_cookies,
)
print(response3.text)
print("haiyanzzz" in response3.text)  # True means the login succeeded
3. Other request methods
requests.get(url, params=None, **kwargs)
requests.post(url, data=None, json=None, **kwargs)
requests.put(url, data=None, **kwargs)
requests.head(url, **kwargs)
requests.delete(url, **kwargs)
requests.patch(url, data=None, **kwargs)
requests.options(url, **kwargs)

# all of the above are built on top of this method
requests.request(method, url, **kwargs)
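A quick sketch exercising a few of these; httpbin.org's echo endpoints are used here only for illustration:

import requests

# PUT and PATCH carry a request body, just like POST
ret = requests.put('http://httpbin.org/put', data={'k1': 'v1'})
print(ret.json()['form'])  # {'k1': 'v1'}

# HEAD returns the headers only, with an empty body
ret = requests.head('http://httpbin.org/get')
print(ret.headers['Content-Type'], len(ret.content))  # application/json 0

# everything above is a shortcut for requests.request(...)
ret = requests.request('DELETE', 'http://httpbin.org/delete')
print(ret.status_code)  # 200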
4. More parameters
def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query string for the :class:`Request`.
    :param data: (optional) Dictionary, bytes, or file-like object to send in the body of the :class:`Request`.
    :param json: (optional) json data to send in the body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')``
        or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string
        defining the content type of the given file and ``custom_headers`` a dict-like object containing additional
        headers to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How long to wait for the server to send data
        before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Set to True if POST/PUT/DELETE redirect following is allowed.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) whether the SSL cert will be verified. A CA_BUNDLE path can also be provided. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

        >>> import requests
        >>> req = requests.request('GET', 'http://httpbin.org/get')
        <Response [200]>
    """
def param_method_url():
    # requests.request(method='get', url='http://127.0.0.1:8000/test/')
    # requests.request(method='post', url='http://127.0.0.1:8000/test/')
    pass


def param_param():
    # can be a dict
    # can be a string
    # can be bytes (ASCII only)

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params="k1=v1&k2=水电费&k3=v3&k3=vv3")

    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

    # wrong -- non-ASCII bytes are not allowed
    # requests.request(method='get',
    #                  url='http://127.0.0.1:8000/test/',
    #                  params=bytes("k1=v1&k2=水电费&k3=v3&k3=vv3", encoding='utf8'))
    pass


def param_data():
    # can be a dict
    # can be a string
    # can be bytes
    # can be a file object

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data={'k1': 'v1', 'k2': '水电费'})

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1; k2=v2; k3=v3; k3=v4"
    #                  )

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data="k1=v1;k2=v2;k3=v3;k3=v4",
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'}
    #                  )

    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  data=open('data_file.py', mode='r', encoding='utf-8'),  # file contents: k1=v1;k2=v2;k3=v3;k3=v4
    #                  headers={'Content-Type': 'application/x-www-form-urlencoded'}
    #                  )
    pass


def param_json():
    # the dict is serialized with json.dumps(...) and sent as the request body,
    # with Content-Type set to {'Content-Type': 'application/json'}
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'})


def param_headers():
    # send request headers to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     json={'k1': 'v1', 'k2': '水电费'},
                     headers={'Content-Type': 'application/x-www-form-urlencoded'}
                     )


def param_cookies():
    # send cookies to the server
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies={'cook1': 'value1'},
                     )

    # a CookieJar also works (the dict form is a wrapper around it)
    from http.cookiejar import CookieJar
    from http.cookiejar import Cookie

    obj = CookieJar()
    obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None, domain='', path='/', secure=False,
                          expires=None, discard=True, comment=None, comment_url=None, rest={'HttpOnly': None},
                          rfc2109=False, port_specified=False, domain_specified=False, domain_initial_dot=False,
                          path_specified=False)
                   )
    requests.request(method='POST',
                     url='http://127.0.0.1:8000/test/',
                     data={'k1': 'v1', 'k2': 'v2'},
                     cookies=obj)


def param_files():
    # upload a file
    # file_dict = {
    #     'f1': open('readme', 'rb')
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload a file with a custom filename
    # file_dict = {
    #     'f1': ('test.txt', open('readme', 'rb'))
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload a file with a custom filename and inline content
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)

    # upload a file with a custom filename, content type, and extra headers
    # file_dict = {
    #     'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf", 'application/text', {'k1': '0'})
    # }
    # requests.request(method='POST',
    #                  url='http://127.0.0.1:8000/test/',
    #                  files=file_dict)
    pass


def param_auth():
    from requests.auth import HTTPBasicAuth, HTTPDigestAuth

    ret = requests.get('https://api.github.com/user',
                       auth=HTTPBasicAuth('wupeiqi', 'sdfasdfasdf'))
    print(ret.text)

    # ret = requests.get('http://192.168.1.1',
    #                    auth=HTTPBasicAuth('admin', 'admin'))
    # ret.encoding = 'gbk'
    # print(ret.text)

    # ret = requests.get('http://httpbin.org/digest-auth/auth/user/pass',
    #                    auth=HTTPDigestAuth('user', 'pass'))
    # print(ret)


def param_timeout():
    # ret = requests.get('http://google.com/', timeout=1)
    # print(ret)

    # ret = requests.get('http://google.com/', timeout=(5, 1))
    # print(ret)
    pass


def param_allow_redirects():
    ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
    print(ret.text)


def param_proxies():
    # proxies = {
    #     "http": "61.172.249.96:80",
    #     "https": "http://61.185.219.126:3128",
    # }
    # proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}
    # ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
    # print(ret.headers)

    # from requests.auth import HTTPProxyAuth
    #
    # proxyDict = {
    #     'http': '77.75.105.165',
    #     'https': '77.75.105.165'
    # }
    # auth = HTTPProxyAuth('username', 'mypassword')
    #
    # r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
    # print(r.text)
    pass


def param_stream():
    ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
    print(ret.content)
    ret.close()

    # from contextlib import closing
    # with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    #     # process the response here
    #     for i in r.iter_content():
    #         print(i)


def requests_session():
    import requests

    session = requests.Session()

    # 1. Visit any page first to obtain a cookie
    i1 = session.get(url="http://dig.chouti.com/help/service")

    # 2. Log in, carrying the previous cookie; the backend authorizes the gpsd value inside it
    i2 = session.post(
        url="http://dig.chouti.com/login",
        data={
            'phone': "8615131255089",
            'password': "xxxxxx",
            'oneMonth': ""
        }
    )

    i3 = session.post(
        url="http://dig.chouti.com/link/vote?linksId=8589623",
    )
    print(i3.text)
Summary
requests
    method: the request method
    url: the url to request
    params: parameters in the url: ?k1=v1&k2=v2
    data: the request body content
    json: the request body content, in JSON format
    headers: the request headers
    cookies: the cookies for the request
    proxies: use a proxy (e.g. when your IP gets blocked)
    files: upload files
    auth: basic authentication
    timeout: how long to wait before giving up
    allow_redirects: True
    stream: for downloading large files
        ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
        for i in ret.iter_content():
            print(i)

        from contextlib import closing
        with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
            # process the response here
            for i in r.iter_content():
                print(i)
    cert: client-side certificate
    verify: whether to verify the SSL certificate
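cert and verify are listed above but not demonstrated elsewhere; a hedged sketch of how they are typically passed (internal.example.com and the file paths are placeholders, not real endpoints):

import requests

# verify: check the server's SSL certificate (defaults to True);
# a CA bundle path trusts a private CA, False skips verification entirely
ret = requests.get('https://httpbin.org/get', verify=True)
print(ret.status_code)  # 200
# requests.get('https://internal.example.com/', verify='/path/to/ca_bundle.pem')
# requests.get('https://internal.example.com/', verify=False)  # insecure; emits a warning

# cert: present a client certificate, either a single combined .pem
# or a ('cert', 'key') tuple
# requests.get('https://internal.example.com/',
#              cert=('/path/to/client.cert', '/path/to/client.key'))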
BeautifulSoup
BeautifulSoup is a module that takes an HTML or XML string and parses it into a tree, after which you can use its methods to quickly look up specific elements, making element lookup in HTML or XML simple.
Installation
pip3 install beautifulsoup4
Basic usage
from bs4 import BeautifulSoup

soup = BeautifulSoup('<html>....</html>', "html.parser")

div = soup.find(name='tag_name')           # get the first tag object with that name
div = soup.find(name='tag_name', id='i1')  # get the tag with that name and id 'i1'
div = soup.find(name='div', attrs={'id': 'auto-channel-lazyload-article', 'class': 'id'})  # other attributes go in attrs
div.text
div.attrs
div.get('href')

divs = soup.find_all(name='tag_name')
divs = soup.find_all(name='tag_name', id='i1')
divs = soup.find_all(name='tag_name', class_='i1')
divs = soup.find_all(name='div', attrs={'id': 'auto-channel-lazyload-article', 'class': 'id'})

# divs is a list of tag objects
divs[0]
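The numbered snippets below all assume a soup object already exists. A runnable setup using a small invented sample document (its 'sister' links, ids, and classes are what the later find/select examples refer to):

from bs4 import BeautifulSoup

html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<div id="footer">The end</div>
</body></html>
"""

soup = BeautifulSoup(html_doc, "html.parser")
tag = soup.find('a')
print(tag.attrs)  # {'href': 'http://example.com/elsie', 'class': ['sister'], 'id': 'link1'}
print(soup.find_all('a', class_='sister')[1].get_text())  # Lacie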
Common operations
1. name, the tag's name
# tag = soup.find('a')
# name = tag.name  # get
# print(name)
# tag.name = 'span'  # set
# print(soup)
2. attrs, the tag's attributes
# tag = soup.find('a')
# attrs = tag.attrs  # get
# print(attrs)
# tag.attrs = {'ik': 123}  # set
# tag.attrs['id'] = 'iiiii'  # set
# print(soup)
3. children, the tag's direct children
# body = soup.find('body')
# v = body.children
4. descendants, all descendants (children, grandchildren, and so on)
# body = soup.find('body')
# v = body.descendants
5. clear, remove everything inside the tag (the tag itself is kept)
# tag = soup.find('body')
# tag.clear()
# print(soup)
6. decompose, recursively destroy the tag and everything inside it
# body = soup.find('body')
# body.decompose()
# print(soup)
7. extract, recursively remove the tag and everything inside it, returning what was removed
# body = soup.find('body')
# v = body.extract()
# print(soup)
8. decode, convert to a string (including the current tag); decode_contents (excluding the current tag)
# body = soup.find('body')
# v = body.decode()
# v = body.decode_contents()
# print(v)
9. encode, convert to bytes (including the current tag); encode_contents (excluding the current tag)
# body = soup.find('body')
# v = body.encode()
# v = body.encode_contents()
# print(v)
10. find, get the first matching tag
# tag = soup.find('a')
# print(tag)

# tag = soup.find(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tag = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tag)
11. find_all, get all matching tags
# tags = soup.find_all('a')
# print(tags)

# tags = soup.find_all('a', limit=1)
# print(tags)

# tags = soup.find_all(name='a', attrs={'class': 'sister'}, recursive=True, text='Lacie')
# tags = soup.find(name='a', class_='sister', recursive=True, text='Lacie')
# print(tags)

# ####### lists #######
# v = soup.find_all(name=['a', 'div'])
# print(v)

# v = soup.find_all(class_=['sister0', 'sister'])
# print(v)

# v = soup.find_all(text=['Tillie'])
# print(v, type(v[0]))

# v = soup.find_all(id=['link1', 'link2'])
# print(v)

# v = soup.find_all(href=['link1', 'link2'])
# print(v)

# ####### regular expressions #######
import re
# rep = re.compile('p')
# rep = re.compile('^p')
# v = soup.find_all(name=rep)
# print(v)

# rep = re.compile('sister.*')
# v = soup.find_all(class_=rep)
# print(v)

# rep = re.compile('http://www.oldboy.com/static/.*')
# v = soup.find_all(href=rep)
# print(v)

# ####### filtering with a function #######
# def func(tag):
#     return tag.has_attr('class') and tag.has_attr('id')
# v = soup.find_all(name=func)
# print(v)

# ## get, read a tag attribute
# tag = soup.find('a')
# v = tag.get('id')
# print(v)
12. has_attr, check whether the tag has a given attribute
# tag = soup.find('a')
# v = tag.has_attr('id')
# print(v)
13. get_text, get the text inside the tag
# tag = soup.find('a')
# v = tag.get_text()
# print(v)
14. index, get a tag's index position within another tag
# tag = soup.find('body')
# v = tag.index(tag.find('div'))
# print(v)

# tag = soup.find('body')
# for i, v in enumerate(tag):
#     print(i, v)
15. is_empty_element, whether the tag is an empty or self-closing element, i.e. one of: 'br', 'hr', 'input', 'img', 'meta', 'spacer', 'link', 'frame', 'base'
# tag = soup.find('br')
# v = tag.is_empty_element
# print(v)
16. Related tags of the current tag
# soup.next
# soup.next_element
# soup.next_elements
# soup.next_sibling
# soup.next_siblings

# tag.previous
# tag.previous_element
# tag.previous_elements
# tag.previous_sibling
# tag.previous_siblings

# tag.parent
# tag.parents
17. Searching among a tag's related tags
# tag.find_next(...)
# tag.find_all_next(...)
# tag.find_next_sibling(...)
# tag.find_next_siblings(...)

# tag.find_previous(...)
# tag.find_all_previous(...)
# tag.find_previous_sibling(...)
# tag.find_previous_siblings(...)

# tag.find_parent(...)
# tag.find_parents(...)

# these take the same parameters as find_all
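A brief demo of two of these against the sample soup from Basic usage:

# assuming the sample soup built in "Basic usage"
first_a = soup.find('a')
print(first_a.find_next_sibling('a').get('id'))  # link2
print(first_a.find_parent('p').get('class'))     # ['story']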
18. select, select_one: CSS selectors
soup.select("title") soup.select("p nth-of-type(3)") soup.select("body a") soup.select("html head title") tag = soup.select("span,a") soup.select("head > title") soup.select("p > a") soup.select("p > a:nth-of-type(2)") soup.select("p > #link1") soup.select("body > a") soup.select("#link1 ~ .sister") soup.select("#link1 + .sister") soup.select(".sister") soup.select("[class~=sister]") soup.select("#link1") soup.select("a#link2") soup.select('a[href]') soup.select('a[href="http://example.com/elsie"]') soup.select('a[href^="http://example.com/"]') soup.select('a[href$="tillie"]') soup.select('a[href*=".com/el"]') from bs4.element import Tag def default_candidate_generator(tag): for child in tag.descendants: if not isinstance(child, Tag): continue if not child.has_attr('href'): continue yield child tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator) print(type(tags), tags) from bs4.element import Tag def default_candidate_generator(tag): for child in tag.descendants: if not isinstance(child, Tag): continue if not child.has_attr('href'): continue yield child tags = soup.find('body').select("a", _candidate_generator=default_candidate_generator, limit=1) print(type(tags), tags)
19. The tag's text content
# tag = soup.find('span')
# print(tag.string)           # get
# tag.string = 'new content'  # set
# print(soup)

# tag = soup.find('body')
# print(tag.string)
# tag.string = 'xxx'
# print(soup)

# tag = soup.find('body')
# v = tag.stripped_strings  # recursively yields the text of all tags inside
# print(v)
20. append, append a tag at the end of the current tag's contents
# tag = soup.find('body')
# tag.append(soup.find('a'))
# print(soup)

# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.append(obj)
# print(soup)
21. insert, insert a tag at a given position inside the current tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# tag.insert(2, obj)
# print(soup)
22. insert_after, insert_before: insert after or before the current tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('body')
# # tag.insert_before(obj)
# tag.insert_after(obj)
# print(soup)
23. replace_with, replace the current tag with the given tag
# from bs4.element import Tag
# obj = Tag(name='i', attrs={'id': 'it'})
# obj.string = '我是一个新来的'
# tag = soup.find('div')
# tag.replace_with(obj)
# print(soup)
24. Creating relationships between tags
# tag = soup.find('div')
# a = soup.find('a')
# tag.setup(previous_sibling=a)
# print(tag.previous_sibling)
25. wrap, wrap the current tag in the given tag
# from bs4.element import Tag
# obj1 = Tag(name='div', attrs={'id': 'it'})
# obj1.string = '我是一个新来的'
# tag = soup.find('a')
# v = tag.wrap(obj1)
# print(soup)

# tag = soup.find('a')
# v = tag.wrap(soup.find('p'))
# print(soup)
26. unwrap, remove the current tag, keeping what it wrapped
# tag = soup.find('a')
# v = tag.unwrap()
# print(soup)