Personal Assignment: Top-Conference Hot Words

With the environment set up, the first step is data collection: I use Python web scraping to fetch the papers and store them in a MySQL database.
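The scraper below writes into a table named lun, which has to exist first. A minimal creation sketch; the column names come from the insert code further down, but the types and sizes are my own assumptions:

import pymysql

# Assumed schema: the column names match the dict keys used by the insert loop below
db = pymysql.connect(host='localhost', user='root',
                     password='your_db_password', db='your_db_name',
                     charset='utf8')
cursor = db.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS lun (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        link VARCHAR(512),
        abstract TEXT,
        keywords TEXT
    ) CHARACTER SET utf8
""")
db.commit()
db.close()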

Here is a small warm-up example:

import requests

# Warm-up: download a single image and save it to disk
url = 'https://tse1-mm.cn.bing.net/th/id/OIP-C.RxlEiBrRi5qLhxXqa88TNwHaMV?w=190&h=317&c=7&r=0&o=5&dpr=1.25&pid=1.7'
resp = requests.get(url)
with open('小姐姐.png', 'wb') as f:  # binary mode, since image bytes are not text
    f.write(resp.content)
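For larger files it is safer to stream the download instead of holding the whole body in memory; a sketch of the same fetch using requests' standard streaming support (the chunk size is an arbitrary choice):

import requests

url = 'https://tse1-mm.cn.bing.net/th/id/OIP-C.RxlEiBrRi5qLhxXqa88TNwHaMV?w=190&h=317&c=7&r=0&o=5&dpr=1.25&pid=1.7'
with requests.get(url, stream=True) as resp:
    resp.raise_for_status()  # fail loudly on 4xx/5xx instead of writing an error page to disk
    with open('小姐姐.png', 'wb') as f:
        for chunk in resp.iter_content(chunk_size=8192):
            f.write(chunk)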


Now for the actual data collection:

import requests
import pymysql
from bs4 import BeautifulSoup

db = pymysql.connect(host='localhost',
                     user='root',
                     password='your_db_password',
                     db='your_db_name',
                     charset='utf8')

cursor = db.cursor()

headers = {
    "User-Agent":
    "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
}
url = "https://openaccess.thecvf.com/ECCV2018.py"
html = requests.get(url)

soup = BeautifulSoup(html.content, 'html.parser')

# Each paper entry links to its PDF with the literal anchor text "pdf"
pdfs = soup.find_all("a", string="pdf")

lis = []
for i, pdf in enumerate(pdfs):
    pdf_name = pdf["href"].split('/')[-1]
    name = pdf_name.split('.')[0].replace("_CVPR_2019_paper", "")
    link = "http://openaccess.thecvf.com/content_CVPR_2019/html/" + name + "_CVPR_2019_paper.html"
    html1 = requests.get(link, headers=headers)
    soup1 = BeautifulSoup(html1.content, 'html.parser')
    weizhi = soup1.find('div', attrs={'id': 'abstract'})
    # Default to an empty abstract so a missing div does not reuse the previous paper's text
    jianjie = weizhi.get_text() if weizhi else ""
    print("Processing record " + str(i))
    # The underscore-separated title words double as crude keywords
    keywords = ','.join(str(name).split('_'))
    lis.append({'title': name, 'link': link,
                'abstract': jianjie, 'keywords': keywords})

for info in lis:
    # Builds "insert into lun(`title`, `link`, ...) values(%(title)s, %(link)s, ...)"
    cols = ", ".join('`{}`'.format(k) for k in info.keys())
    val_cols = ', '.join('%({})s'.format(k) for k in info.keys())
    sql = "insert into lun({}) values({})".format(cols, val_cols)

    cursor.execute(sql, info)  # the dict supplies the named %(...)s parameters
    db.commit()
    print("ok")

This takes quite a while to run, so be patient.
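Most of that time is the one-request-per-paper loop. Two easy mitigations, sketched below: reuse a single connection with requests.Session, and back off between retries so the server is not hammered (the timeout, retry count, and delays are my own choices, not from the original):

import time
import requests

session = requests.Session()  # a Session reuses the underlying TCP connection across requests
session.headers.update({"User-Agent": "Mozilla/5.0"})  # or the full UA string from above

def fetch(url, retries=3):
    """GET with a timeout and exponential back-off between retry attempts."""
    for attempt in range(retries):
        try:
            resp = session.get(url, timeout=10)
            resp.raise_for_status()
            return resp
        except requests.RequestException:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s before giving up
    return None

The requests.get(...) calls in the loop above can then be swapped for fetch(...).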


With the data collected, the next step is querying it. I use fuzzy (LIKE) matching; since I am not fluent in Ajax, the results are rendered with JSP.

Approach: use Druid as the connection pool and run the fuzzy query through JdbcTemplate, collect the results into a list, and store the list in the request scope.

public List<Lunwen> findAll2(String title, String keywords) {
    // Parameterized LIKE query: string concatenation here would invite SQL injection
    String sql = "select * from lun where title like ? and keywords like ?";
    return template.query(sql, new BeanPropertyRowMapper<>(Lunwen.class),
            "%" + title + "%", "%" + keywords + "%");
}
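One caveat with LIKE queries: any % or _ the user types is still treated as a wildcard inside the pattern, so exact-match behavior would require escaping those characters before wrapping the input in %...%.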


In the servlet, the query parameters are read from the request, the service runs the search, and the result list is forwarded to List.jsp:

String title = request.getParameter("title");
String keywords = request.getParameter("keywords");
findService service = new findServiceImpl();
List<Lunwen> lunwens = service.findAll2(title, keywords);
request.setAttribute("lunwens", lunwens);
request.getRequestDispatcher("/List.jsp").forward(request, response);

That wraps up phase one. The next post covers phase two: generating the word cloud.
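As a small preview of phase two, a sketch of feeding the stored keywords into the third-party wordcloud package (the table and column names match the scraper above; the image size and output filename are my assumptions):

import pymysql
from wordcloud import WordCloud

db = pymysql.connect(host='localhost', user='root',
                     password='your_db_password', db='your_db_name',
                     charset='utf8')
cursor = db.cursor()
cursor.execute("select keywords from lun")
# Flatten every comma-separated keyword string into one space-separated blob
text = ' '.join(row[0].replace(',', ' ') for row in cursor.fetchall() if row[0])
db.close()

WordCloud(width=800, height=400, background_color='white').generate(text).to_file('hotwords.png')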
