The BeautifulSoup select() method

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""

When writing CSS, tag names carry no prefix, class names are prefixed with a dot, and id names with a #. We can filter elements with the same selector syntax by calling soup.select(), which returns a list.
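The snippets below assume a soup object has already been built from the html string above; a minimal setup (using the lxml parser, the same one the sample code at the end uses) would be:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')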
(1) Finding by tag name
 

print(soup.select('title'))
#[<title>The Dormouse's story</title>]

print(soup.select('a'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

print(soup.select('b'))
#[<b>The Dormouse's story</b>]

(2) Finding by class name
 

print(soup.select('.sister'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

(3) Finding by id
 

print(soup.select('#link1'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]

(4) Combined lookups

Combined lookups work on the same principle as in a CSS file: tag names are combined with class names or ids. For example, to find the element with id link1 inside a p tag, separate the two selectors with a space:
 

print(soup.select('p #link1'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
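When the tag and the class or id refer to the same node, they can also be written together without a space (an extra illustration, not in the original, but it follows from the same CSS rules):

print(soup.select('p.title'))
#[<p class="title" name="dromouse"><b>The Dormouse's story</b></p>]

print(soup.select('a#link2'))
#[<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]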

Direct child tags are looked up with >:
 

print soup.select("head > title")
#[<title>The Dormouse's story</title>]

(5) Finding by attribute

Attribute filters can also be added to a lookup; the attribute is enclosed in square brackets. Note that the attribute and its tag belong to the same node, so there must be no space between them, otherwise nothing will match.
 

print soup.select("head > title")
#[<title>The Dormouse's story</title>]
 
print(soup.select('a[href="http://example.com/elsie"]'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
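Square brackets can also simply test that an attribute is present, without comparing its value (an extra example, not in the original); against the html above this matches all three links:

print(soup.select('a[href]'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]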

Likewise, attribute filters can be combined with any of the lookups above: parts belonging to different nodes are separated by a space, and parts on the same node are written without one:
 

print(soup.select('p a[href="http://example.com/elsie"]'))
#[<a class="sister" href="http://example.com/elsie" id="link1"><!-- Elsie --></a>]
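Since select() returns a list of Tag objects, the usual Tag methods such as get() and get_text() are used to pull out attribute values or text (a brief aside; the sample code below relies on the same get('href') call):

for a in soup.select('a.sister'):
    print(a.get('href'))
#http://example.com/elsie
#http://example.com/lacie
#http://example.com/tillie

print(soup.select('#link2')[0].get_text())
#Lacie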


Example code:
from bs4 import BeautifulSoup
import requests

# Start page for the Hangzhou area of 58.com
start_url = 'http://hz.58.com/sale.shtml'
url_host = 'http://hz.58.com'

def get_index_url(url):
    # Fetch the page and parse it with lxml
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    # Select every <a> carrying an href inside the main category menu
    links = soup.select('ul.ym-mainmnu > li > span > a[href]')
    print(links)
    for link in links:
        # Turn each relative href into an absolute URL
        page_url = url_host + str(link.get('href'))
        print(page_url)

get_index_url(start_url)

 

Output:

C:\Users\licl11092\AppData\Local\Programs\Python\Python35\python.exe D:/Spider/58spider/channel_extact.py
[<a href="/shouji/">手机</a>, <a href="/tongxunyw/">通讯</a>, <a href="/danche/">摩托车</a>, <a href="/diandongche/">电动车</a>, <a href="/diannao/">电脑</a>, <a href="/shuma/">数码</a>, <a href="/jiadian/">家电</a>, <a href="/ershoujiaju/">家具</a>, <a href="/yingyou/">母婴玩具</a>, <a href="/fushi/">服装箱包</a>, <a href="/meirong/">美容保健</a>, <a href="/yishu/">艺术收藏</a>, <a href="/tushu/">图书音像</a>, <a href="/wenti/">文体户外</a>, <a href="/bangong/">办公设备</a>, <a href="/shebei.shtml">二手设备</a>, <a href="/chengren/" onclick="clickLog('from=pc_index_loucengdb_ershoujiaoyi_gongcheng')">成人用品</a>]
http://hz.58.com/shouji/
http://hz.58.com/tongxunyw/
http://hz.58.com/danche/
http://hz.58.com/diandongche/
http://hz.58.com/diannao/
http://hz.58.com/shuma/
http://hz.58.com/jiadian/
http://hz.58.com/ershoujiaju/
http://hz.58.com/yingyou/
http://hz.58.com/fushi/
http://hz.58.com/meirong/
http://hz.58.com/yishu/
http://hz.58.com/tushu/
http://hz.58.com/wenti/
http://hz.58.com/bangong/
http://hz.58.com/shebei.shtml
http://hz.58.com/chengren/

Process finished with exit code 0

 
