1. Using requests.get

import os
import requests

r = requests.get("http://200.20.3.20:8080/job/Compile/job/aaa/496/artifact/bbb.iso")

# Save the response body next to this script (note: __file__, not the string "__file__")
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "bbb.iso"), "wb") as f:
    f.write(r.content)
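For anything beyond a throwaway script, it also helps to set a timeout and check the HTTP status before writing the body to disk. A minimal sketch (the `download` helper name and the 30-second timeout are my additions, not from the original snippet):

```python
import os
import requests

def download(url, filename, timeout=30):
    """Fetch url and save it next to this script; raise on HTTP errors."""
    r = requests.get(url, timeout=timeout)  # timeout avoids hanging forever on a dead server
    r.raise_for_status()                    # surface 404/500 instead of saving an error page
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), filename)
    with open(path, "wb") as f:
        f.write(r.content)
    return path
```

Without `raise_for_status`, a 404 response would be silently written to disk as if it were the file.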


2. Using urllib2

# urllib2 is Python 2 only (it became urllib.request in Python 3)
import os
import urllib2

print "downloading with urllib2"
url = "http://200.21.1.22:8080/job/Compile/job/aaa/496/artifact/bbb.iso"
resp = urllib2.urlopen(url)  # don't reuse the name f for both the response and the file
data = resp.read()
with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "bbb.iso"), "wb") as f:
    f.write(data)
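On Python 3 the same download is written with the standard-library urllib.request module instead of urllib2. A rough equivalent of the snippet above, wrapped in a hypothetical `download_py3` helper (same URL and filename as the example):

```python
import os
import urllib.request

def download_py3(url, filename):
    """Python 3 version of the urllib2 snippet: fetch url, save next to this script."""
    print("downloading with urllib.request")
    with urllib.request.urlopen(url) as resp:  # urlopen returns a file-like response object
        data = resp.read()
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), filename)
    with open(path, "wb") as f:
        f.write(data)
    return path
```

Like the original, this reads the whole body into memory at once, so it is only suitable for small files; section 3 covers the streaming approach.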


3. Downloading large files

When downloading a large file, Python often raises a MemoryError. If you open Task Manager, you can clearly see the memory of the python process growing steadily until the program finally crashes:

self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
MemoryError

The following code solves the large-file download problem:

import requests

r = requests.get(
    url="http://10.242.255.110/K%3A/SSL/Feature_bridge_mode/alpha/20210809/SDP2.1.7.15_B_Build20210809.run",
    stream=True  # defer downloading the body instead of loading it all into memory
)

# Read and write the response one chunk at a time
with open("bbb123", "wb") as f:
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)
r.close()
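Because stream=True keeps the connection open until the body is consumed, it is cleaner to manage the response with a with-block, and the Content-Length header (when the server sends one) lets you report progress. A sketch along those lines (the `download_stream` name, the 1 MiB chunk size, and the progress printing are my additions, not from the original post):

```python
import requests

def download_stream(url, filename, chunk_size=1024 * 1024):
    """Stream url to filename; memory use stays around one chunk_size."""
    with requests.get(url, stream=True) as r:  # with-block closes the connection when done
        r.raise_for_status()
        total = int(r.headers.get("Content-Length", 0))  # 0 when the server omits it
        done = 0
        with open(filename, "wb") as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
                done += len(chunk)
                if total:
                    print("\r%d%%" % (done * 100 // total), end="")
    print()
```

A larger chunk_size means fewer write calls; 1024 bytes as in the snippet above works but is quite small for multi-gigabyte files.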

posted on 2019-07-22 18:07 by bainianminguo