Python Web Scraping Tutorial: Downloading HD Videos from Kuaishou

Preface

Today's case study: using Python to scrape HD, watermark-free videos from the Kuaishou short-video platform.

Key modules covered:

  • requests
  • json
  • re
  • pprint

Development environment:

  • Version: Anaconda 5.2.0 (Python 3.6.5)
  • Editor: PyCharm

 


Steps for this case study:

  1. Find the target URL: https://www.kuaishou.com/graphql
  2. Send the request (GET / POST)
  3. Parse the data (video URL, video title)
  4. Send a request for each video URL
  5. Save the videos

Implementing the code

1. Import modules

import requests  # sends HTTP requests (third party: pip install requests)
import pprint    # pretty-prints nested data structures while debugging
import json      # parses JSON text into Python objects
import re        # regular expressions, used here to clean file names
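As a quick warm-up with these modules, here is a minimal sketch of how json and pprint help inspect the kind of nested response this scraper deals with. The structure below is a simplified stand-in modeled on the fields used later (caption, photoUrl), not actual API output:

```python
import json
import pprint

# a simplified stand-in for the JSON the GraphQL endpoint returns
raw = '{"data": {"visionSearchPhoto": {"feeds": [{"photo": {"caption": "demo clip", "photoUrl": "https://example.com/v.mp4"}}]}}}'
json_data = json.loads(raw)   # parse the JSON text into nested dicts/lists
pprint.pprint(json_data)      # pretty-print to see the nesting at a glance

# drill down the same way the scraper does below
feeds = json_data['data']['visionSearchPhoto']['feeds']
print(feeds[0]['photo']['caption'])  # demo clip
```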

 

2. Request the data

# search keyword and starting page cursor used by the payload below;
# '美食' is just an example search term, change it as needed
keyword = '美食'
page = 1  # updated by the page loop further down

headers = {
    # content-type: the format of the request body
    # application/json: JSON is the data-exchange format the browser uses with the Kuaishou server
    # (the default would be application/x-www-form-urlencoded)
    'content-type': 'application/json',
    # Cookie: identifies the user / whether you are logged in
    'Cookie': 'did=web_53827e0b098c608bc6f42524b1f3211a; didv=1617281516668; kpf=PC_WEB; kpn=KUAISHOU_VISION; clientid=3',
    # User-Agent: browser information (used to disguise the script as a browser)
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
}
data = {
    'operationName': "visionSearchPhoto",
    'query': "query visionSearchPhoto($keyword: String, $pcursor: String, $searchSessionId: String, $page: String, $webPageArea: String) {\n  visionSearchPhoto(keyword: $keyword, pcursor: $pcursor, searchSessionId: $searchSessionId, page: $page, webPageArea: $webPageArea) {\n    result\n    llsid\n    webPageArea\n    feeds {\n      type\n      author {\n        id\n        name\n        following\n        headerUrl\n        headerUrls {\n          cdn\n          url\n          __typename\n        }\n        __typename\n      }\n      tags {\n        type\n        name\n        __typename\n      }\n      photo {\n        id\n        duration\n        caption\n        likeCount\n        realLikeCount\n        coverUrl\n        photoUrl\n        liked\n        timestamp\n        expTag\n        coverUrls {\n          cdn\n          url\n          __typename\n        }\n        photoUrls {\n          cdn\n          url\n          __typename\n        }\n        animatedCoverUrl\n        stereoType\n        videoRatio\n        __typename\n      }\n      canAddComment\n      currentPcursor\n      llsid\n      status\n      __typename\n    }\n    searchSessionId\n    pcursor\n    aladdinBanner {\n      imgUrl\n      link\n      __typename\n    }\n    __typename\n  }\n}\n",
    'variables': {
        'keyword': keyword,
        'pcursor': str(page),
        'page': "search"
    }
}
# send the request; json= serializes the dict to JSON so it matches the
# application/json content-type header (data= would send it form-encoded)
response = requests.post('https://www.kuaishou.com/graphql', headers=headers, json=data)

 

3. Parse the data

for page in range(0, 11):
    print(f'----------------------- scraping page {page + 1} ----------------------')
    # update the page cursor and re-send the request for each page
    data['variables']['pcursor'] = str(page)
    response = requests.post('https://www.kuaishou.com/graphql', headers=headers, json=data)
    json_data = response.json()
    data_list = json_data['data']['visionSearchPhoto']['feeds']
    for item in data_list:
        title = item['photo']['caption']
        url_1 = item['photo']['photoUrl']
        # replace characters that are not allowed in file names
        new_title = re.sub(r'[/\\:*?"<>|\n]', '_', title)
        # print(title, url_1)
        # .content: the raw binary data of the response
        # text                  -> response.text
        # images / video / audio -> binary data (response.content)
        content = requests.get(url_1, headers=headers).content
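The re.sub call above swaps out characters that Windows forbids in file names. A quick standalone sketch of what it does (the sample caption is made up for illustration):

```python
import re

title = 'cooking: part 1/3 "final"?'               # hypothetical video caption
new_title = re.sub(r'[/\\:*?"<>|\n]', '_', title)  # replace forbidden chars with _
print(new_title)  # cooking_ part 1_3 _final__
```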

 

4. Save the data

This block belongs inside the inner loop above (note the indentation), and the ./video folder must exist before running:

        with open('./video/' + new_title + '.mp4', mode='wb') as f:
            f.write(content)
        print(new_title, 'downloaded successfully!')
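For long videos, reading the whole file into memory with .content can be wasteful. A possible alternative, sketched here with a helper of my own devising (the function name and parameters are not from the original post), is to stream the download chunk by chunk:

```python
import requests

def download_video(url, path, headers=None, chunk_size=1024 * 1024):
    """Stream a video to `path` in 1 MB chunks instead of holding it all in memory."""
    with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
        resp.raise_for_status()          # fail loudly on HTTP errors
        with open(path, mode='wb') as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                if chunk:                # skip keep-alive chunks
                    f.write(chunk)
```

Called as `download_video(url_1, './video/' + new_title + '.mp4', headers=headers)` inside the inner loop, this would replace both the `.content` fetch and the separate save step.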

 

posted @ 2021-09-13 18:56  松鼠爱吃饼干