Using regular expressions, fetching click counts, and extracting functions

1. Use a regular expression to check whether an email address is well-formed.

import re

# Escape the dot so the domain part requires literal '.' separators
checkemail = r'^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+(\.[a-zA-Z0-9_-]+)+$'
email = '99999@qq.com'
if re.match(checkemail, email):
    print(re.match(checkemail, email).group(0))
else:
    print('email error')
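With the dot escaped, the pattern no longer lets `.` match arbitrary characters. A quick spot-check, reusing checkemail from above (the two bad addresses are made up for illustration):

for e in ['99999@qq.com', 'bad@@qq.com', 'no-at-sign.com']:
    print(e, '->', 'ok' if re.match(checkemail, e) else 'error')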

 

2. Use a regular expression to pick out all phone numbers.

checkphone = r'^1\d{10}$'  # a Chinese mobile number: 1 followed by 10 digits
phone = '18897387546'
if re.match(checkphone, phone):
    print(re.match(checkphone, phone).group(0))
else:
    print('phone error')
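The snippet above only validates a single string; to actually pick out every phone number from a longer text, as the heading asks, drop the ^/$ anchors and use re.findall. A small sketch over a made-up sample:

text = 'Contact 18897387546 or 13512345678, office 020-1234567'
print(re.findall(r'\b1\d{10}\b', text))  # ['18897387546', '13512345678']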

  

3. Use a regular expression to split English text into words with re.split.

news = 'or at Windows stat-up/ shutdown.To ensure maximum privacy protection Anti Tracks implements the US Department of Defense DOD 5220.22-M'
checkenglish = r'[\s,.?/-]+'  # whitespace and common punctuation as delimiters
print(re.split(checkenglish, news))
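When the text starts or ends with a delimiter, re.split leaves empty strings at the edges, so filtering them out is a common follow-up; a small sketch with a short sample:

sample = ' To ensure maximum privacy protection. '
print([w for w in re.split(checkenglish, sample) if w])  # empty tokens dropped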

  

4. Use a regular expression to extract the news ID.

newsUrl = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0401/9163.html'
newsId = re.search(r'_(.*)\.html', newsUrl).group(1).split('/')[-1]
print(newsId)
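For URLs shaped like the sample, the ID is just the digits of the last path segment, so one regex can capture it directly and the extra split falls away; an equivalent sketch:

newsId = re.search(r'/(\d+)\.html$', newsUrl).group(1)
print(newsId)  # 9163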

 

5. Build the Request URL for the click count.

clickCountUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
print(clickCountUrl)
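On Python 3.6 and later, an f-string expresses the same substitution inline:

clickCountUrl = f'http://oa.gzcc.cn/api.php?op=count&id={newsId}&modelid=80'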

  

6. Fetch the click count.

import requests

res = requests.get(clickCountUrl)
res.encoding = 'utf-8'
# The API returns a JS snippet; strip everything around the number
count = int(res.text.split('.html')[-1].lstrip("('").rstrip("');"))
print(count)
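The chained lstrip/rstrip works because this API answers with a small JavaScript snippet rather than JSON. Assuming a response shaped like $('#hits').html('5204'); (the number is made up), a regex is a less fragile way to pull the count out:

m = re.search(r"\.html\('(\d+)'\)", res.text)
count = int(m.group(1)) if m else 0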

  

7. Combine steps 4-6 into one function: def getClickCount(newsUrl):

def getClickCount(newsUrl):
    newsId = re.search(r'_(.*)\.html', newsUrl).group(1).split('/')[-1]
    res = requests.get('http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId))
    res.encoding = 'utf-8'
    count = int(res.text.split('.html')[-1].lstrip("('").rstrip("');"))
    return count
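A quick usage check, reusing the sample article URL from step 4:

print(getClickCount('http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0401/9163.html'))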

  

8. Wrap the code that fetches a news item's details in a function: def getNewDetail(newsUrl):

from datetime import datetime

import requests
from bs4 import BeautifulSoup

def getNewDetail(newsUrl):
    res = requests.get(newsUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    t = soup.select('.show-title')[0].text    # title
    info = soup.select('.show-info')[0].text  # metadata line
    # publication date and time; the value starts with a digit, so
    # lstrip('发布时间:') only removes the label characters in front of it
    s = info.split()[0].lstrip('发布时间:') + ' ' + info.split()[1]
    if info.find('来源:') > 0:  # source field, if present
        source = info[info.find('来源:'):].split()[0].lstrip('来源:')
    else:
        source = 'none'
    cc = datetime.strptime(s, '%Y-%m-%d %H:%M:%S')
    clickCount = getClickCount(newsUrl)
    print(cc.strftime('%Y/%m/%d %H:%M:%S'), t, newsUrl, source, clickCount)
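Called on the same sample URL, it prints the formatted time, title, URL, source, and click count on one line:

getNewDetail('http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0401/9163.html')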

  

9. Extract all the news from one list page, wrapped as a function: def getListPage(pageUrl):

def getListPage(pageUrl):
    res = requests.get(pageUrl)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    for news in soup.select('li'):
        if len(news.select('.news-list-title')) > 0:
            a = news.select('a')[0].attrs['href']  # URL
            getNewDetail(a)
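A usage example, pointed at the first list page:

getListPage('http://news.gzcc.cn/html/xiaoyuanxinwen')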

 

10. Get the total number of articles and work out the total page count, wrapped as a function: def getPageN():

def getPageN():
    firstUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen'
    getListPage(firstUrl)  # the first list page has no page number in its URL
    count = 2
    while True:
        pageUrl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(count)
        res = requests.get(pageUrl)
        if res.status_code == 404:  # ran past the last page
            count = count - 1
            print('{} pages of news in total'.format(count))
            break
        getListPage(pageUrl)
        count = count + 1
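As written, getPageN crawls while it counts. If only the page count is wanted, a separate probe loop can return the number without visiting any detail pages; a hypothetical helper (countListPages is not part of the original code):

def countListPages():
    # the first page has no number in its URL; numbered pages start at 2.html
    n = 1
    while requests.get('http://news.gzcc.cn/html/xiaoyuanxinwen/{}.html'.format(n + 1)).status_code != 404:
        n += 1
    return n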

 

11. Fetch the details of every news item on every list page.

getPageN()

 

Screenshot of the run results
