A First Taste of Web Crawling

  First, let's look at what a crawler actually is.

1. Definition of a crawler

A crawler, or web crawler, is a program or script that automatically fetches information from the World Wide Web according to a set of rules.

2. Let's get a hands-on feel for what a crawler can do through a simple example.

Development environment:

        1. JDK 1.8

        2. IntelliJ IDEA

        3. Maven

Below is the test class for the simple crawler:

package cn.itcast.crawler.test;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

import java.io.IOException;

public class CrawlerFirst {
    public static void main(String[] args) throws IOException {
        // 1. "Open the browser": create an HttpClient instance
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // 2. "Type the address": create an HttpGet request for the URL
        HttpGet httpGet = new HttpGet("https://www.baidu.com");
        // 3. "Press Enter": send the request and receive the response
        CloseableHttpResponse response = httpClient.execute(httpGet);
        try {
            // 4. Parse the response and extract the data;
            //    only read the body if the status code is 200 (OK)
            if (response.getStatusLine().getStatusCode() == 200) {
                HttpEntity httpEntity = response.getEntity();
                String content = EntityUtils.toString(httpEntity, "UTF-8");
                System.out.println(content);
            }
        } finally {
            // Release the connection and the client when done
            response.close();
            httpClient.close();
        }
    }
}
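As an aside, the same GET-and-print flow can be done with the JDK's built-in HttpURLConnection, with no Maven dependencies at all. This is a minimal sketch (the class name UrlConnectionDemo is made up for illustration), not a replacement for the HttpClient version above, which offers far more control:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class UrlConnectionDemo {
    public static void main(String[] args) throws IOException {
        // Open a GET connection to the same page using only the standard library
        HttpURLConnection conn =
                (HttpURLConnection) new URL("https://www.baidu.com").openConnection();
        conn.setRequestMethod("GET");
        // Only read the body if the server answered 200 (OK)
        if (conn.getResponseCode() == 200) {
            StringBuilder content = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    content.append(line).append('\n');
                }
            }
            System.out.println(content);
        }
        conn.disconnect();
    }
}
```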
Configuration in pom.xml — the following two dependencies need to be added.
The slf4j-log4j12 artifact lets the log output be printed so we can inspect what is happening:
<!-- https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12 -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
    <scope>test</scope>
</dependency>
This is the dependency that actually sends the requests and receives the responses:
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>
The log4j.properties file:
### root logger configuration ###
log4j.rootLogger = debug,A1
log4j.logger.cn.itcast = debug
log4j.appender.A1 = org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout = org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
Result of running the code: the console prints the full HTML source of the page.

As the output shows, the code visited Baidu's homepage and crawled its content. If you want to crawl a different site, just change the URL passed to HttpGet above.
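One thing to watch when swapping in other URLs: query parameters (especially Chinese keywords) must be percent-encoded before they go into the URL. A small sketch using the JDK's URLEncoder (the class name UrlBuildDemo and the Baidu search URL pattern are illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class UrlBuildDemo {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Percent-encode a Chinese keyword before placing it in a query string
        String keyword = "网络爬虫";
        String encoded = URLEncoder.encode(keyword, "UTF-8");
        String url = "https://www.baidu.com/s?wd=" + encoded;
        System.out.println(url);
        // prints https://www.baidu.com/s?wd=%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB
    }
}
```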

posted @ 2020-06-12 23:52  IT特工