Installing the Scrapy framework

Before installing Scrapy, a few prerequisite libraries need to be installed.

P.S. A handy download site for prebuilt Python packages (Windows .whl files): https://www.lfd.uci.edu/~gohlke/pythonlibs/

 

0. wheel (with this installed, you can install Python packages from local .whl files)

1. lxml

2. pyOpenSSL

3. pywin32

4. Twisted

Anaconda users can install these components with the bundled conda install xxxx command.

Once the prerequisites are installed, run pip install scrapy or conda install scrapy.
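On Windows, if pip cannot build one of these packages from source, a common approach is to download the matching .whl from the site above and install it locally via wheel. A typical sequence might look like the following (the .whl filenames are placeholders; substitute whatever builds you actually downloaded):

pip install wheel
pip install lxml-x.x.x-cp36-cp36m-win_amd64.whl
pip install Twisted-xx.x.x-cp36-cp36m-win_amd64.whl
pip install pyOpenSSL pywin32
pip install scrapy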

 

After installation, give it a quick test:

Type scrapy at the command line:

(C:\ProgramData\Anaconda3) C:\Users\wangguoqiang>Scrapy
Scrapy 1.5.1 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

  [ more ]      More commands available when run from project directory

Use "scrapy <command> -h" to see more info about a command

If output like the above appears, Scrapy was installed successfully.
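As an extra check, scrapy version should print just the version string, for example:

(C:\ProgramData\Anaconda3) C:\Users\wangguoqiang>scrapy version
Scrapy 1.5.1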

Next, run scrapy startproject hello:

(C:\ProgramData\Anaconda3) C:\Users\wangguoqiang>Scrapy startproject hello
New Scrapy project 'hello', using template directory 'c:\\programdata\\anaconda\\lib\\site-packages\\scrapy\\templates\\project', created in:
    C:\Users\wangguoqiang\hello

You can start your first spider with:
    cd hello
    scrapy genspider example example.com
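The startproject command generates Scrapy's standard project layout, roughly:

hello/
    scrapy.cfg
    hello/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py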

cd into the newly created hello project.

Then run scrapy genspider baidu www.baidu.com:

(C:\ProgramData\Anaconda3) C:\Users\wangguoqiang>cd hello

(C:\ProgramData\Anaconda3) C:\Users\wangguoqiang\hello>scrapy genspider baidu www.baidu.com
Created spider 'baidu' using template 'basic' in module:
  hello.spiders.baidu
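
The generated file hello/spiders/baidu.py contains the basic spider skeleton, roughly:

# -*- coding: utf-8 -*-
import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.baidu.com/']

    def parse(self, response):
        pass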

Finally, run:

scrapy crawl baidu
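
This runs the spider, but since parse is empty it only fetches the page and exits. A minimal sketch of a parse method that actually yields something, here just the page title (my own illustrative example, not part of the generated template):

import scrapy


class BaiduSpider(scrapy.Spider):
    name = 'baidu'
    allowed_domains = ['www.baidu.com']
    start_urls = ['http://www.baidu.com/']

    def parse(self, response):
        # grab the <title> text of the fetched page and emit it as an item
        yield {'title': response.css('title::text').extract_first()}

Note that new projects obey robots.txt by default (the ROBOTSTXT_OBEY setting in hello/settings.py), so if the crawl log shows the request being filtered, that setting is the first place to look.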

 
