Getting the cookie a Python crawler needs to request a web page
When a Python crawler visits a web page, it often runs into pages that require a cookie, so we need to obtain one first.
In Python, a cookie is represented as a dictionary entry, i.e. something of the form {'cookie': 'cookie-string'}.
To fetch cookies we use Selenium's webdriver. Several browsers/drivers are supported (Firefox, Chrome, Internet Explorer, PhantomJS), as well as the Remote protocol.
from selenium import webdriver
Now define a function. Because the cookie lives in the request headers, we first check whether it is already there:
headers = {}
if headers.get('cookie'):
    print('YES!')
else:
    print('NO!')
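To make the check above concrete, here is a minimal, self-contained illustration; the cookie string `sessionid=abc123` is made up for the example:

```python
# An empty headers dict has no 'cookie' key, so .get() returns None (falsy)
headers = {}
print('YES!' if headers.get('cookie') else 'NO!')  # prints NO!

# Once a cookie string is stored, the same check succeeds
headers['cookie'] = 'sessionid=abc123'  # hypothetical cookie value
print('YES!' if headers.get('cookie') else 'NO!')  # prints YES!
```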
The full function:
def p(url, header):
    if header.get('cookie'):
        print('YES!')
    else:
        print('NO!')
        headers = {}
        driver = webdriver.Chrome()
        driver.get(url)
        cookie = driver.get_cookies()
        # print(cookie)
        s = []
        for i in cookie:
            s.append(i.get('name') + '=' + i.get('value'))
        # print(s)
        headers['cookie'] = '; '.join(s)
        driver.quit()
        # header now gains a 'cookie' key, so the recursive call prints 'YES!'
        header.update(headers)
        p(url, header)
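The cookie-joining step can be tested without launching a browser. Below is a sketch of that step in isolation; `sample_cookies` is made-up data that mimics the shape of what `driver.get_cookies()` returns (a list of dicts with 'name' and 'value' keys):

```python
def cookies_to_header(cookies):
    """Join Selenium-style cookie dicts into a single Cookie header string."""
    return '; '.join(c['name'] + '=' + c['value'] for c in cookies)

# Made-up sample mimicking the output shape of driver.get_cookies()
sample_cookies = [
    {'name': 'sessionid', 'value': 'abc123'},
    {'name': 'csrftoken', 'value': 'xyz789'},
]

header = {'cookie': cookies_to_header(sample_cookies)}
print(header['cookie'])  # prints sessionid=abc123; csrftoken=xyz789
```

With the header built this way, it can be passed straight to an HTTP library as the request's Cookie header.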
Entry point:
if __name__ == '__main__':
    header = {'data': 'dasda'}
    url = ''
    p(url, header)
Without this entry point, the function will never be executed!
Welcome to the Python world! I have a contract with this world! How about you?