A basic Python web crawler

A crawler based on BeautifulSoup:

Step 1: import the packages:

import requests
from bs4 import BeautifulSoup
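
Both of these are third-party packages; if they are missing, a single pip command installs them (beautifulsoup4 is the package name that provides bs4):

pip install requests beautifulsoup4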

Step 2: disguise the request (set a browser User-Agent):

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0'}

To find your own User-Agent, press F12 in the browser -> Network -> pick any request -> Headers.
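
If you want to confirm the header is actually being sent, echoing the request off httpbin.org (a public request-inspection service, used here only for illustration) is a quick check:

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0'}

# httpbin returns the headers it received, so our User-Agent should appear in the output
print(requests.get("https://httpbin.org/headers", headers=headers).json())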

Step 3: fetch the page, set the encoding (just in case), and build the BeautifulSoup object:

response = requests.get("", headers=headers)  # put the target URL in the empty string
response.encoding = 'utf-8'
html = BeautifulSoup(response.text, "html.parser")
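
Continuing from the snippet above, it is also worth checking that the request actually succeeded before parsing; requests offers a one-line check:

# raises requests.HTTPError on a 4xx/5xx status instead of silently parsing an error page
response.raise_for_status()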

For the parser argument, the first option ("html.parser") is all you need.
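
html.parser ships with Python, so there is nothing extra to install. If the third-party lxml package happens to be available (pip install lxml), BeautifulSoup also accepts it as a faster drop-in replacement:

html = BeautifulSoup(response.text, "lxml")  # same usage, different backend parser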

Step 4: inspect the page source to decide what to search for:

all_results = html.find_all("tag_name", attrs={'attribute_name': 'attribute_value'})

For example, the complete code below matches <a> tags whose class is postTitle2 vertical-middle.
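
The same match can also be written with BeautifulSoup's CSS-selector API, which handles multi-class matches like this one cleanly:

# select() takes a CSS selector; this matches <a> tags carrying both classes
all_results = html.select("a.postTitle2.vertical-middle")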

Step 5: iterate over the results and print only the text inside each tag:

for title in all_results:
    title1 = title.get_text()
    print(title1)
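
get_text() keeps only the text and drops the link itself; if the URL is wanted as well, it sits on the same tag as the href attribute. A minimal sketch:

for title in all_results:
    # get() returns None instead of raising if the attribute is absent
    print(title.get_text(), title.get("href"))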

Example:

Pick a lucky blogger at random as the scraping target.

Complete code:

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0'}

# loop through pages 1-19 of the blog's post list
for i in range(1, 20):
    response = requests.get(f"https://www.cnblogs.com/xxxxxxxxx?page={i}", headers=headers)
    response.encoding = 'utf-8'
    html = BeautifulSoup(response.text, "html.parser")
    all_results = html.find_all("a", attrs={'class': 'postTitle2 vertical-middle'})
    for title in all_results:
        title1 = title.get_text()
        print(title1)
        

Result: the post titles are printed to the console, one per line.

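
As a closing note, a slightly more defensive variant of the same loop (a sketch, not part of the original code) stops on HTTP errors and pauses between pages so the server isn't hit 19 times back-to-back:

import time

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:122.0) Gecko/20100101 Firefox/122.0'}

for i in range(1, 20):
    response = requests.get(f"https://www.cnblogs.com/xxxxxxxxx?page={i}", headers=headers)
    response.raise_for_status()  # stop immediately on a 4xx/5xx response
    response.encoding = 'utf-8'
    html = BeautifulSoup(response.text, "html.parser")
    for title in html.find_all("a", attrs={'class': 'postTitle2 vertical-middle'}):
        print(title.get_text())
    time.sleep(1)  # one-second pause between page requests, out of politeness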