Using regular expressions: extracting click counts and refactoring into functions

Learn to use regular expressions.

1. Use a regular expression to check whether an email address is well-formed.

# Validate an email address
import re

# Local part: letters/digits plus up to four dot-separated segments,
# then '@', then a domain with up to four dot-separated segments.
pattern = r'^[a-zA-Z0-9]+(\.[a-zA-Z0-9_-]+){0,4}@[a-zA-Z0-9]+(\.[a-zA-Z0-9]+){0,4}$'
email = '1924668503@qq.com'
if re.match(pattern, email):
    print('success')
else:
    print('please input a valid email address')
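A quick way to sanity-check the pattern is to run it against a few sample addresses (the addresses below are made up for illustration; only the first two should match):

# Made-up test addresses for the pattern above.
for addr in ['user.name@example.com', 'abc123@mail.qq.com', 'not-an-email.example.com']:
    print(addr, bool(re.match(pattern, addr)))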

 

2. Use a regular expression to find all the phone numbers in a piece of text.

# Find phone numbers
import re

text = '版权所有:广州商学院   地址:广州市黄埔区九龙大道206号学校办公室:020-82876130   招生电话:020-82872773 粤公网安备 44011602000060号    粤ICP备15103669号'
# Area code (3-4 digits), a hyphen, then the local number (6-8 digits).
phone = re.findall(r'(\d{3,4})-(\d{6,8})', text)
print(phone)
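Because the pattern has two capture groups, re.findall returns (area code, number) tuples — here [('020', '82876130'), ('020', '82872773')]. Joining each tuple rebuilds the full numbers:

# Join each (area code, number) tuple back into a full phone number.
full_numbers = ['-'.join(m) for m in phone]
print(full_numbers)  # ['020-82876130', '020-82872773']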

3. Use a regular expression to split an English text into words: re.split(pattern, news).

import re
news='''
From the distance, it looked like a skinny tube, 
but as we got closer, we could see it flesh out before our eyes. 
It was tubular, all right, but fatter than we could see from far away. Furthermore, 
we were also astonished to notice that the building was really in two parts: a pagoda sitting on top of a tubular
 one-story structure. Standing ten feet away, we could marvel at how much of the pagoda was made up of glass windows.
  Almost everything under the wonderful Chinese roof was made of glass, unlike the tube that it was sitting on, 
  which only had four. Inside, the tube was gloomy, because of the lack of light. 
  Then a steep, narrow staircase took us up inside the pagoda and the light changed dramatically. 
  All those windows let in a flood of sunshine and we could see out for miles across the flat land.
'''
# Split on runs of whitespace and punctuation; the '+' outside the character
# class collapses consecutive delimiters into a single split point.
new = re.split(r"[\s,.']+", news)
print(new)
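The leading and trailing newlines in the string still leave empty tokens at the ends of the list; filtering them out (or extracting words directly with re.findall, which also splits hyphenated words such as "one-story") gives a clean word list:

# Drop any empty strings produced by delimiters at the start or end of the text.
words = [w for w in new if w]
print(words)

# Alternative: pull out runs of letters instead of splitting on delimiters.
words = re.findall(r"[A-Za-z]+", news)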

4. Use a regular expression to extract the news ID.

# Extract the news ID from the article URL
import re

url = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'
newsId = re.match(r'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/(.*)\.html', url).group(1)
print(newsId)
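The pattern above is hard-wired to the 2018/xiaoyuanxinwen_0404 path. A looser sketch, assuming the news ID is always the run of digits just before .html, works for any article URL of this shape:

# Grab the digits immediately before '.html' at the end of the URL.
newsId = re.search(r'/(\d+)\.html$', url).group(1)
print(newsId)  # 9183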

 

5. Generate the Request URL for the click count.

import re

url = 'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html'
newsId = re.match(r'http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/(.*)\.html', url).group(1)
# Substitute the ID into the click-count API URL.
clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
print(clickUrl)

6. Get the click count.

import requests

url = 'http://oa.gzcc.cn/api.php?op=count&id=9183&modelid=80'
res = requests.get(url)
# The API returns a small JavaScript snippet; strip the wrapper around the count.
print(res.text.split('.html')[-1].lstrip("('").rstrip("');"))
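An equivalent way to pull the number out, assuming (as the string operations above do) that the response ends with something like .html('<count>');, is a regex plus an int conversion:

import re

# Assumes the response wraps the count as .html('<digits>');
m = re.search(r"\.html\('(\d+)'\)", res.text)
if m:
    print(int(m.group(1)))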

7. Wrap steps 4-6 into a single function: def getClickCount(newsUrl) (see the combined script below).

8. Wrap the code that fetches the news details into a function: def getNewDetail(newsUrl):

import requests
import re
from bs4 import BeautifulSoup
from datetime import datetime

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

# Steps 4-6 wrapped into one function: def getClickCount(newsUrl)
def getClickCount(newsUrl):
    # The ID is the digits after the last '/': '.../xiaoyuanxinwen_0404/9183.html' -> '9183'.
    newsId = re.search(r'\_(.*)\.html', newsUrl).group(1).split('/')[1]
    clickUrl = 'http://oa.gzcc.cn/api.php?op=count&id={}&modelid=80'.format(newsId)
    # Strip the JavaScript wrapper around the count and convert it to an int.
    return int(requests.get(clickUrl).text.split('.html')[-1].lstrip("('").rstrip("');"))

# The news-detail code wrapped into one function: def getNewDetail(newsUrl)
def getNewDetail(newsUrl):
    resd = requests.get(newsUrl)
    resd.encoding = 'utf-8'
    soupd = BeautifulSoup(resd.text, 'html.parser')
    print(newsUrl)
    info = soupd.select('.show-info')[0].text
    d = re.search('发布时间:(.*) \xa0\xa0 \xa0\xa0作者:', info).group(1)
    dt = datetime.strptime(d, '%Y-%m-%d %H:%M:%S')
    print('发布时间:{}'.format(dt))
    print('作者:' + re.search('作者:(.*)审核:', info).group(1))
    print('审核:' + re.search('审核:(.*)来源:', info).group(1))
    print('来源:' + re.search('来源:(.*)摄影:', info).group(1))
    print('摄影:' + re.search('摄影:(.*)点击', info).group(1))
    print(getClickCount(newsUrl))
    print('正文:' + soupd.select('.show-content')[0].text)

for news in soup.select('li'):
    if len(news.select('.news-list-title')) > 0:
        print(news.select('.news-list-title')[0].text)  # news title
        a = news.select('a')[0].attrs['href']  # news link
        getNewDetail(a)
        break  # only process the first article while testing
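Because getNewDetail no longer depends on any loop variables, it can also be called directly on a single article URL, for example the one from step 4; to process every article on the list page instead, simply remove the break at the end of the loop.

getNewDetail('http://news.gzcc.cn/html/2018/xiaoyuanxinwen_0404/9183.html')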

 
