Python [04] [Basics] - Modules
Content reference: Wu Sir's blog
I. Custom Modules
A module is a collection of code that implements some piece of functionality.
Modules come in three kinds:
- custom modules
- built-in modules
- open-source (third-party) modules
1. Defining a module
(1) A single .py file can serve as a module; when it is imported, the interpreter executes that file.
(2) A directory can also serve as a module (a package), but it must contain an `__init__.py` file; when the package is imported, the interpreter executes that file.
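The two definitions above can be exercised end to end. A minimal Python 3 sketch (the `mypkg` name and the temporary directory are made up for the demo):

```python
# Build a package on disk, then import it: the __init__.py file is what
# marks the directory as a package.
import os
import sys
import tempfile

base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, 'mypkg')
os.mkdir(pkg_dir)

with open(os.path.join(pkg_dir, '__init__.py'), 'w') as f:
    f.write("greeting = 'hello'\n")

sys.path.append(base)      # make the parent directory importable
import mypkg

print(mypkg.greeting)      # 'hello' - the interpreter executed __init__.py
```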
2. Importing modules
(1) Import syntax

```python
import module
from module.xx.xx import xx
from module.xx.xx import xx as rename
from module.xx.xx import *
```
(2) Module search path

```python
import sys
print sys.path   # directories searched for modules
# ['/data/python/day5', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-x86_64-linux-gnu', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
```
(3) Adding a path to sys.path manually

```python
import sys
import os

pre_path = os.path.abspath('../')
sys.path.append(pre_path)
```
II. Open-Source Modules
1. Downloading and installing modules
There are many ways to install a module: yum, pip, ...
After downloading a source tarball, unpack it, then build and install:

```shell
python setup.py build
python setup.py install
```

Installation requires gcc and the Python development headers to be present beforehand; once installed, the module is automatically available on sys.path.

```shell
# CentOS
yum install gcc
yum install python-devel

# Ubuntu
apt-get install python-dev
```
2. Importing
Open-source modules are imported the same way as custom modules.
3. Open-source module: Paramiko
paramiko is a module for remote control: it lets you run commands and transfer files on a remote server over SSH. Notably, the remote management inside fabric and ansible is implemented with paramiko.
(1) Download and install

```shell
# paramiko depends on pycrypto, so download and install pycrypto first
wget https://files.cnblogs.com/files/wupeiqi/pycrypto-2.6.1.tar.gz
tar -xvf pycrypto-2.6.1.tar.gz
cd pycrypto-2.6.1
python setup.py build
python setup.py install
# enter the python shell and `import Crypto` to verify the install

# download and install paramiko
wget https://files.cnblogs.com/files/wupeiqi/paramiko-1.10.1.tar.gz
tar -xvf paramiko-1.10.1.tar.gz
cd paramiko-1.10.1
python setup.py build
python setup.py install
# enter the python shell and `import paramiko` to verify the install
```
(2) Usage examples
```python
#!/usr/bin/env python
# coding:utf-8
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.108', 22, 'alex', '123')
stdin, stdout, stderr = ssh.exec_command('df')
print stdout.read()
ssh.close()
```
```python
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# the key must be passed via the pkey keyword argument
ssh.connect('hostname', port, username='username', pkey=key)
stdin, stdout, stderr = ssh.exec_command('df')
print stdout.read()
ssh.close()
```
```python
import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
# upload local /tmp/test.py to remote /tmp/test.py
sftp.put('/tmp/test.py', '/tmp/test.py')
t.close()

import paramiko

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', password='123')
sftp = paramiko.SFTPClient.from_transport(t)
# download remote /tmp/test.py to local /tmp/test2.py
sftp.get('/tmp/test.py', '/tmp/test2.py')
t.close()
```
```python
import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)
sftp = paramiko.SFTPClient.from_transport(t)
sftp.put('/tmp/test3.py', '/tmp/test3.py')
t.close()

import paramiko

private_key_path = '/home/auto/.ssh/id_rsa'
key = paramiko.RSAKey.from_private_key_file(private_key_path)

t = paramiko.Transport(('182.92.219.86', 22))
t.connect(username='wupeiqi', pkey=key)
sftp = paramiko.SFTPClient.from_transport(t)
sftp.get('/tmp/test3.py', '/tmp/test4.py')
t.close()
```
III. Built-in Modules
1. os
Provides operating-system-level operations.

```python
os.getcwd()                  # current working directory, i.e. the directory the script runs in
os.chdir("dirname")          # change the working directory; like `cd` in the shell
os.curdir                    # string for the current directory: '.'
os.pardir                    # string for the parent directory: '..'
os.makedirs('dirname1/dirname2')  # create nested directories recursively
os.removedirs('dirname1')    # remove a directory if it is empty, then recurse upward removing empty parents
os.mkdir('dirname')          # create a single directory; like `mkdir dirname`
os.rmdir('dirname')          # remove a single empty directory (error if not empty); like `rmdir dirname`
os.listdir('dirname')        # list all files and subdirectories (including hidden ones) as a list
os.remove()                  # delete a file
os.rename("oldname", "newname")  # rename a file or directory
os.stat('path/filename')     # get file/directory information
os.sep                       # OS-specific path separator: '\\' on Windows, '/' on Linux
os.linesep                   # platform line terminator: '\r\n' on Windows, '\n' on Linux
os.pathsep                   # separator used in search-path strings such as PATH
os.name                      # platform name: Windows -> 'nt'; Linux -> 'posix'
os.system("bash command")    # run a shell command; output goes straight to the terminal
os.environ                   # environment variables
os.path.abspath(path)        # normalized absolute version of path
os.path.split(path)          # split path into a (directory, filename) tuple
os.path.dirname(path)        # directory part of path; the first element of os.path.split(path)
os.path.basename(path)       # final component of path, empty if path ends with / or \; the second element of os.path.split(path)
os.path.exists(path)         # True if path exists, False otherwise
os.path.isabs(path)          # True if path is absolute
os.path.isfile(path)         # True if path is an existing file, False otherwise
os.path.isdir(path)          # True if path is an existing directory, False otherwise
os.path.join(path1[, path2[, ...]])  # join paths; components before the last absolute path are discarded
os.path.getatime(path)       # last access time of the file or directory at path
```
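A few of the os.path helpers above in a short runnable sketch (Python 3 print syntax; the paths are made up for the demo):

```python
import os

p = os.path.join('/tmp', 'demo', 'file.txt')    # '/tmp/demo/file.txt' on Linux
head, tail = os.path.split(p)                   # (directory part, filename part)

print(os.path.dirname(p))    # same as head
print(os.path.basename(p))   # same as tail: 'file.txt'
print(os.path.isabs(p))      # True - the path starts at the root
```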
2. sys
Provides operations related to the interpreter.

```python
sys.argv         # list of command-line arguments; the first element is the path of the program itself
sys.exit(n)      # exit the program; exit(0) on normal exit
sys.version      # version information of the Python interpreter
sys.maxint       # largest int value (Python 2)
sys.path         # module search path, initialized from the PYTHONPATH environment variable
sys.platform     # name of the OS platform

sys.stdout.write('please:')
val = sys.stdin.readline()[:-1]
```
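A short Python 3 sketch of some of these attributes (note that `sys.maxint` was replaced by `sys.maxsize` in Python 3):

```python
import sys

print(sys.platform)           # e.g. 'linux' or 'win32'
print(sys.version_info[:2])   # interpreter (major, minor) version as a tuple
print(sys.path[0])            # first entry of the module search path

is_py3 = sys.version_info[0] >= 3   # True on any Python 3 interpreter
```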
3. hashlib
Used for hashing; it replaces the old md5 and sha modules and provides the SHA1, SHA224, SHA256, SHA384, SHA512 and MD5 algorithms.
```python
## md5 (legacy module) ##
import md5
hash = md5.new()
hash.update('admin')
print hash.hexdigest()

## sha (legacy module) ##
import sha
hash = sha.new()
hash.update('admin')
print hash.hexdigest()
```
Hashing with hashlib

```python
import hashlib

# ######## md5 ########
hash = hashlib.md5()
hash.update('admin')
print hash.hexdigest()

# ######## sha1 ########
hash = hashlib.sha1()
hash.update('admin')
print hash.hexdigest()

# ######## sha256 ########
hash = hashlib.sha256()
hash.update('admin')
print hash.hexdigest()

# ######## sha384 ########
hash = hashlib.sha384()
hash.update('admin')
print hash.hexdigest()

# ######## sha512 ########
hash = hashlib.sha512()
hash.update('admin')
print hash.hexdigest()
```
Although hashlib's hashes are strong, they can still be reversed with dictionary (rainbow-table) attacks. Python's hmac module keys the hash with a secret before hashing the content:

```python
import hmac
h = hmac.new('wueiqi')
h.update('hellowo')
print h.hexdigest()
```
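In Python 3 the hashlib and hmac APIs take bytes rather than str; a sketch of the same calls (in recent versions hmac.new also requires an explicit digestmod):

```python
import hashlib
import hmac

# hash bytes, not str, in Python 3
digest = hashlib.md5('admin'.encode('utf-8')).hexdigest()
# digest == '21232f297a57a5a743894a0e4a801fc3' (the well-known md5 of 'admin')

# keyed hash: the same key/message as above, as bytes
mac = hmac.new(b'wueiqi', b'hellowo', digestmod='md5').hexdigest()
```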
4. json and pickle
json and pickle are two modules for serialization; they are used to exchange data between different languages and between the memory of different programs.
- json converts between strings and simple Python data types; it works across most languages.
- pickle converts between Python-specific serialized data and Python data types; it handles most Python types but can only be used within Python.
json.dumps() & json.loads()

```python
import json

lis = {1: 'tom', 2: 'kim'}
js_lis = json.dumps(lis)            # json.dumps() serializes the content
with open('db', 'wb') as wf:
    wf.write(js_lis)
## {"1": "tom", "2": "kim"}

with open('db', 'rb') as rf:
    js_file = json.loads(rf.read()) # json.loads() deserializes the content
print js_file
## {u'1': u'tom', u'2': u'kim'}
```
json.dump() & json.load()

```python
import json

lis = {1: 'tom', 2: 'tom'}
with open('db', 'wb') as wf:
    json.dump(lis, wf)   # dump serializes and writes to the file handle in one step, saving a line -> PS: read the source
## {"1": "tom", "2": "tom"}

with open('db', 'rb') as rf:
    js_rf = json.load(rf)   # load deserializes from the file handle
print js_rf
## {u'1': u'tom', u'2': u'tom'}
```
pickle.dumps() & pickle.loads()

```python
import pickle

lis = {1: 'tom', 2: 'kim'}
pi_lis = pickle.dumps(lis)   # pickle.dumps() serializes to a Python-specific format, usable only from Python
with open('db', 'wb') as wf:
    wf.write(pi_lis)
"""
(dp0
I1
S'tom'
p1
sI2
S'kim'
p2
s.
"""

with open('db', 'rb') as rf:
    pi_rf = pickle.loads(rf.read())   # pickle.loads() deserializes the data
print pi_rf
## {1: 'tom', 2: 'kim'}
```
pickle.dump() & pickle.load()

```python
import pickle

lis = {1: 'tom', 2: 'tom'}
with open('db', 'wb') as wf:
    pickle.dump(lis, wf)   # dump serializes and writes to the file handle in one step -> PS: read the source
"""
(dp0
I1
S'tom'
p1
sI2
g1
s.
"""

with open('db', 'rb') as rf:
    js_rf = pickle.load(rf)   # load deserializes from the file handle
print js_rf
## {1: 'tom', 2: 'tom'}
```
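The practical difference between the two serializers shows up in a round trip. A Python 3 sketch: json turns the int keys into strings, while pickle preserves them exactly.

```python
import json
import pickle

data = {1: 'tom', 2: 'kim'}

# serialize, then immediately deserialize with each module
json_back = json.loads(json.dumps(data))
pickle_back = pickle.loads(pickle.dumps(data))

print(json_back)     # {'1': 'tom', '2': 'kim'} - keys became strings
print(pickle_back)   # {1: 'tom', 2: 'kim'}     - identical to the original
```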
5. System commands
Modules and functions that can execute shell commands:
- os.system
- os.spawn*
- os.popen* -- deprecated
- popen2.* -- deprecated
- commands.* -- deprecated, removed in 3.x
The functionality of these deprecated or removed modules is provided by the subprocess module.
(1) call()
Runs a command and returns the status code.

```python
retcode = subprocess.call("ls -l", shell=True)
retcode = subprocess.call(["ls", "-l"], shell=False)
# With shell=True the command is given as a single string and run through a shell;
# on Windows you must pass shell=True, otherwise a WindowsError is raised
```
(2) check_call()
Runs a command; returns 0 if the status code is 0, otherwise raises an exception.

```python
retcode = subprocess.check_call(["ls", "-l"])
retcode = subprocess.check_call("ls -l", shell=True)
```
(3) check_output()
Runs a command and returns its output; raises an exception if the return code is non-zero.

```python
retcode = subprocess.check_output(["ls", "-l"])
retcode = subprocess.check_output("ls -l", shell=True)
```
(4) Popen()
subprocess.Popen() is used to run complex system commands.
Parameters:
- args: the shell command, as a string or a sequence type (e.g. list or tuple)
- bufsize: buffering policy. 0 means unbuffered, 1 means line-buffered, any other positive value is the buffer size, and a negative value means the system default
- stdin, stdout, stderr: the program's standard input, output and error handles
- preexec_fn: Unix only; a callable object that is invoked in the child process just before it runs
- close_fds: on Windows, if close_fds is True the newly created child process does not inherit the parent's input, output and error pipes, so you cannot set close_fds to True and at the same time redirect the child's stdin, stdout or stderr
- shell: as above (run the command through a shell)
- cwd: the working directory of the child process
- env: the environment variables of the child process; if env = None, the child inherits the parent's environment
- universal_newlines: line endings differ across systems; True normalizes them to \n
- startupinfo and creationflags: Windows only; passed to the underlying CreateProcess() to set properties of the child process, such as the appearance of the main window and the process priority
```python
# Popen() returns the Popen object immediately (shown by its repr);
# the command's output is printed afterwards
>>> subprocess.Popen(["ls", "-l"])
<subprocess.Popen object at 0x7f5c0fa84790>
>>> total 0
-rw-rw-r-- 1 tom tom 0 Dec 1 09:42 model.py

>>> subprocess.Popen("ls -l", shell=True)
<subprocess.Popen object at 0x7f5c0fa41090>
>>> total 0
-rw-rw-r-- 1 tom tom 0 Dec 1 09:42 model.py
```
Driving an interactive Python interpreter as a child process:

```python
>>> retcode = subprocess.Popen(["python"], stdin=subprocess.PIPE,
...                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
>>> retcode.stdin.write("print 1 \n")   # feed lines to the child interpreter
>>> retcode.stdin.write("print 2 \n")
>>> retcode.communicate()               # send the input and collect the output
('1\n2\n', '')
# communicate() returns a (stdout, stderr) tuple;
# the trailing '' is the captured stderr, which is empty here
```
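Later Python versions (3.5+) wrap this pattern in subprocess.run; a sketch of the same capture, using the current interpreter as the child process:

```python
import subprocess
import sys

# run the current interpreter as a child process and capture its output;
# capture_output was added in Python 3.7
result = subprocess.run(
    [sys.executable, "-c", "print(1)\nprint(2)"],
    capture_output=True, text=True,
)

out = result.stdout   # '1\n2\n'
err = result.stderr   # '' when nothing was written to stderr
```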
6. shutil
A high-level module for handling files, directories and archives.
(1) shutil.copyfileobj(fsrc, fdst[, length])
Copies the contents of one file object into another; partial content can be copied (length is the chunk size).

```python
def copyfileobj(fsrc, fdst, length=16*1024):
    """copy data from file-like object fsrc to file-like object fdst"""
    while 1:
        buf = fsrc.read(length)
        if not buf:
            break
        fdst.write(buf)
```
(2) shutil.copyfile(src, dst)
Copies a file.

```python
def copyfile(src, dst):
    """Copy data from src to dst"""
    if _samefile(src, dst):
        raise Error("`%s` and `%s` are the same file" % (src, dst))

    for fn in [src, dst]:
        try:
            st = os.stat(fn)
        except OSError:
            # File most likely does not exist
            pass
        else:
            # XXX What about other special files? (sockets, devices...)
            if stat.S_ISFIFO(st.st_mode):
                raise SpecialFileError("`%s` is a named pipe" % fn)

    with open(src, 'rb') as fsrc:
        with open(dst, 'wb') as fdst:
            copyfileobj(fsrc, fdst)
```
(3) shutil.copymode(src, dst)
Copies only the permission bits; content, group and owner are unchanged.

```python
def copymode(src, dst):
    """Copy mode bits from src to dst"""
    if hasattr(os, 'chmod'):
        st = os.stat(src)
        mode = stat.S_IMODE(st.st_mode)
        os.chmod(dst, mode)
```
(4) shutil.copystat(src, dst)
Copies the stat info: mode bits, atime, mtime, flags.

```python
def copystat(src, dst):
    """Copy all stat info (mode bits, atime, mtime, flags) from src to dst"""
    st = os.stat(src)
    mode = stat.S_IMODE(st.st_mode)
    if hasattr(os, 'utime'):
        os.utime(dst, (st.st_atime, st.st_mtime))
    if hasattr(os, 'chmod'):
        os.chmod(dst, mode)
    if hasattr(os, 'chflags') and hasattr(st, 'st_flags'):
        try:
            os.chflags(dst, st.st_flags)
        except OSError, why:
            for err in 'EOPNOTSUPP', 'ENOTSUP':
                if hasattr(errno, err) and why.errno == getattr(errno, err):
                    break
            else:
                raise
```
(5) shutil.copy(src, dst)
Copies the file and its permissions.

```python
def copy(src, dst):
    """Copy data and mode bits ("cp src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copymode(src, dst)
```
(6) shutil.copy2(src, dst)
Copies the file and its stat info.

```python
def copy2(src, dst):
    """Copy data and all stat info ("cp -p src dst").

    The destination may be a directory.

    """
    if os.path.isdir(dst):
        dst = os.path.join(dst, os.path.basename(src))
    copyfile(src, dst)
    copystat(src, dst)
```
(7) shutil.ignore_patterns(*patterns) & shutil.copytree(src, dst, symlinks=False, ignore=None)
Recursively copies a directory tree.
For example: copytree(source, destination, ignore=ignore_patterns('*.pyc', 'tmp*'))

```python
def ignore_patterns(*patterns):
    """Function that can be used as copytree() ignore parameter.

    Patterns is a sequence of glob-style patterns
    that are used to exclude files"""
    def _ignore_patterns(path, names):
        ignored_names = []
        for pattern in patterns:
            ignored_names.extend(fnmatch.filter(names, pattern))
        return set(ignored_names)
    return _ignore_patterns

def copytree(src, dst, symlinks=False, ignore=None):
    """Recursively copy a directory tree using copy2().

    The destination directory must not already exist.
    If exception(s) occur, an Error is raised with a list of reasons.

    If the optional symlinks flag is true, symbolic links in the
    source tree result in symbolic links in the destination tree; if
    it is false, the contents of the files pointed to by symbolic
    links are copied.

    The optional ignore argument is a callable. If given, it
    is called with the `src` parameter, which is the directory
    being visited by copytree(), and `names` which is the list of
    `src` contents, as returned by os.listdir():

        callable(src, names) -> ignored_names

    Since copytree() is called recursively, the callable will be
    called once for each directory that is copied. It returns a
    list of names relative to the `src` directory that should
    not be copied.

    XXX Consider this example code rather than the ultimate tool.

    """
    names = os.listdir(src)
    if ignore is not None:
        ignored_names = ignore(src, names)
    else:
        ignored_names = set()

    os.makedirs(dst)
    errors = []
    for name in names:
        if name in ignored_names:
            continue
        srcname = os.path.join(src, name)
        dstname = os.path.join(dst, name)
        try:
            if symlinks and os.path.islink(srcname):
                linkto = os.readlink(srcname)
                os.symlink(linkto, dstname)
            elif os.path.isdir(srcname):
                copytree(srcname, dstname, symlinks, ignore)
            else:
                # Will raise a SpecialFileError for unsupported file types
                copy2(srcname, dstname)
        # catch the Error from the recursive copytree so that we can
        # continue with other files
        except Error, err:
            errors.extend(err.args[0])
        except EnvironmentError, why:
            errors.append((srcname, dstname, str(why)))
    try:
        copystat(src, dst)
    except OSError, why:
        if WindowsError is not None and isinstance(why, WindowsError):
            # Copying file access times may fail on Windows
            pass
        else:
            errors.append((src, dst, str(why)))
    if errors:
        raise Error, errors
```
(8) shutil.rmtree(path[, ignore_errors[, onerror]])
Recursively deletes a directory tree.

```python
def rmtree(path, ignore_errors=False, onerror=None):
    """Recursively delete a directory tree.

    If ignore_errors is set, errors are ignored; otherwise, if onerror
    is set, it is called to handle the error with arguments (func,
    path, exc_info) where func is os.listdir, os.remove, or os.rmdir;
    path is the argument to that function that caused it to fail; and
    exc_info is a tuple returned by sys.exc_info().  If ignore_errors
    is false and onerror is None, an exception is raised.

    """
    if ignore_errors:
        def onerror(*args):
            pass
    elif onerror is None:
        def onerror(*args):
            raise
    try:
        if os.path.islink(path):
            # symlinks to directories are forbidden, see bug #1669
            raise OSError("Cannot call rmtree on a symbolic link")
    except OSError:
        onerror(os.path.islink, path, sys.exc_info())
        # can't continue even if onerror hook returns
        return
    names = []
    try:
        names = os.listdir(path)
    except os.error, err:
        onerror(os.listdir, path, sys.exc_info())
    for name in names:
        fullname = os.path.join(path, name)
        try:
            mode = os.lstat(fullname).st_mode
        except os.error:
            mode = 0
        if stat.S_ISDIR(mode):
            rmtree(fullname, ignore_errors, onerror)
        else:
            try:
                os.remove(fullname)
            except os.error, err:
                onerror(os.remove, fullname, sys.exc_info())
    try:
        os.rmdir(path)
    except os.error:
        onerror(os.rmdir, path, sys.exc_info())
```
(9) shutil.move(src, dst)
Recursively moves a file or directory.

```python
def move(src, dst):
    """Recursively move a file or directory to another location. This is
    similar to the Unix "mv" command.

    If the destination is a directory or a symlink to a directory, the source
    is moved inside the directory. The destination path must not already
    exist.

    If the destination already exists but is not a directory, it may be
    overwritten depending on os.rename() semantics.

    If the destination is on our current filesystem, then rename() is used.
    Otherwise, src is copied to the destination and then removed.
    A lot more could be done here...  A look at a mv.c shows a lot of
    the issues this implementation glosses over.

    """
    real_dst = dst
    if os.path.isdir(dst):
        if _samefile(src, dst):
            # We might be on a case insensitive filesystem,
            # perform the rename anyway.
            os.rename(src, dst)
            return

        real_dst = os.path.join(dst, _basename(src))
        if os.path.exists(real_dst):
            raise Error, "Destination path '%s' already exists" % real_dst
    try:
        os.rename(src, real_dst)
    except OSError:
        if os.path.isdir(src):
            if _destinsrc(src, dst):
                raise Error, "Cannot move a directory '%s' into itself '%s'." % (src, dst)
            copytree(src, real_dst, symlinks=True)
            rmtree(src)
        else:
            copy2(src, real_dst)
            os.unlink(src)
```
(10) shutil.make_archive(base_name, format, ...)
Creates an archive (e.g. zip or tar) and returns its path.
- base_name: the archive's filename, or a path to it. A bare name such as `www` saves to the current directory; a path such as `/Users/wupeiqi/www` saves to /Users/wupeiqi/
- format: archive type: "zip", "tar", "bztar" or "gztar"
- root_dir: the directory to archive (defaults to the current directory)
- owner: owner, defaults to the current user
- group: group, defaults to the current group
- logger: used for logging, usually a logging.Logger object

```python
def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0,
                 dry_run=0, owner=None, group=None, logger=None):
    """Create an archive file (eg. zip or tar).

    'base_name' is the name of the file to create, minus any format-specific
    extension; 'format' is the archive format: one of "zip", "tar", "bztar"
    or "gztar".

    'root_dir' is a directory that will be the root directory of the
    archive; ie. we typically chdir into 'root_dir' before creating the
    archive.  'base_dir' is the directory where we start archiving from;
    ie. 'base_dir' will be the common prefix of all files and
    directories in the archive.  'root_dir' and 'base_dir' both default
    to the current directory.  Returns the name of the archive file.

    'owner' and 'group' are used when creating a tar archive. By default,
    uses the current owner and group.
    """
    save_cwd = os.getcwd()
    if root_dir is not None:
        if logger is not None:
            logger.debug("changing into '%s'", root_dir)
        base_name = os.path.abspath(base_name)
        if not dry_run:
            os.chdir(root_dir)

    if base_dir is None:
        base_dir = os.curdir

    kwargs = {'dry_run': dry_run, 'logger': logger}

    try:
        format_info = _ARCHIVE_FORMATS[format]
    except KeyError:
        raise ValueError, "unknown archive format '%s'" % format

    func = format_info[0]
    for arg, val in format_info[1]:
        kwargs[arg] = val

    if format != 'zip':
        kwargs['owner'] = owner
        kwargs['group'] = group

    try:
        filename = func(base_name, base_dir, **kwargs)
    finally:
        if root_dir is not None:
            if logger is not None:
                logger.debug("changing back to '%s'", save_cwd)
            os.chdir(save_cwd)

    return filename
```
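A usage sketch of make_archive (Python 3 here; the scratch directory and the `demo_archive` name are made up for the demo):

```python
import os
import shutil
import tempfile

# build a scratch directory with one file to archive
src = tempfile.mkdtemp()
with open(os.path.join(src, 'a.log'), 'w') as f:
    f.write('hello')

# base_name has no extension; make_archive appends '.zip' and returns the path
archive_path = shutil.make_archive(
    os.path.join(tempfile.gettempdir(), 'demo_archive'),  # where to put it
    'zip',                                                # archive format
    root_dir=src,                                         # directory to archive
)

print(archive_path)   # .../demo_archive.zip
```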
shutil handles archives by delegating to the ZipFile and TarFile modules; in detail:

```python
import zipfile

# compress
z = zipfile.ZipFile('laxi.zip', 'w')
z.write('a.log')
z.write('data.data')
z.close()

# extract
z = zipfile.ZipFile('laxi.zip', 'r')
z.extractall()
z.close()
```

```python
import tarfile

# compress
tar = tarfile.open('your.tar', 'w')
tar.add('/Users/wupeiqi/PycharmProjects/bbs2.zip', arcname='bbs2.zip')
tar.add('/Users/wupeiqi/PycharmProjects/cmdb.zip', arcname='cmdb.zip')
tar.close()

# extract
tar = tarfile.open('your.tar', 'r')
tar.extractall()   # an extraction path can be passed here
tar.close()
```
7. ConfigParser
Used to read and modify configuration files of a specific format; the module was renamed configparser in Python 3.x.

```python
"""
$ cat db.cfg
[section1]
key1 = aa
value1 = bb
[section2]
key2 = cc
value2 = dd
"""
import ConfigParser

# read the file
cp = ConfigParser.ConfigParser()
cp.read('db.cfg')

# list the section names
content = cp.sections()
print content
## ['section1', 'section2']

# list the option names under a section
opt = cp.options("section1")
print opt
## ['key1', 'value1']

# get all items under a section, as a list of tuples
ite_list = cp.items("section1")
print ite_list
## [('key1', 'aa'), ('value1', 'bb')]

# get the value of a key in a section
ge = cp.get("section1", "key1")
print ge
## aa

# delete a key
sec = cp.remove_option("section1", "key1")
cp.write(open("db.cfg", "w"))

# add a section
sec = cp.has_section("new")
sec = cp.add_section("new")
cp.write(open("db.cfg", "wb"))

# set a key inside a section
sec = cp.set("new", "key3", "xx")
cp.write(open("db.cfg", "wb"))

# delete a key under a given section
sec = cp.remove_option("section2", "key2")
cp.write(open("db.cfg", "w"))
```
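Under Python 3 the module is named configparser and sections can be assigned like dictionaries; a sketch using a throwaway file path:

```python
import configparser
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), 'db_demo.cfg')

# write a config file
cp = configparser.ConfigParser()
cp['section1'] = {'key1': 'aa'}
with open(path, 'w') as f:
    cp.write(f)

# read it back
cp2 = configparser.ConfigParser()
cp2.read(path)
sections = cp2.sections()            # ['section1']
value = cp2.get('section1', 'key1')  # 'aa'
```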
8. logging
A thread-safe module for convenient logging.

```python
import logging

# create a logger and set the global level
logger = logging.getLogger("example")
logger.setLevel(logging.DEBUG)

# level for screen output
ch = logging.StreamHandler()
ch.setLevel(logging.WARNING)

# level for the log file
fh = logging.FileHandler("db.log")
fh.setLevel(logging.INFO)

# define the log format and apply it to both handlers
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s -%(module)s: %(message)s")
ch.setFormatter(formatter)
fh.setFormatter(formatter)

# attach the handlers to the logger defined above
logger.addHandler(ch)
logger.addHandler(fh)

# emit messages
logger.critical("critical message...")
logger.error("error message...")
logger.warning("warning message")
logger.info("info message...")
logger.debug("debug message...")
```
Log levels

```python
# log levels
CRITICAL = 50
FATAL = CRITICAL
ERROR = 40
WARNING = 30
WARN = WARNING
INFO = 20
DEBUG = 10
NOTSET = 0
```
Log format parameters: the formatter string above uses placeholders such as %(asctime)s (event time), %(name)s (logger name), %(levelname)s (level name), %(module)s (module name) and %(message)s (the logged message).
9. Time-related modules
time
There are three representations of time:
- Timestamp: seconds elapsed since January 1, 1970. 1449065828.188416 == time.time()
- Formatted string: 2015-12-7 == time.strftime("%Y-%m-%d")
- Struct time: a tuple holding the year, month, day, weekday, ... time.struct_time(tm_year=2015, ...) == time.localtime()

```python
import time

# timestamp
print time.time()                    # current time as a timestamp
print time.mktime(time.localtime())  # convert a struct time to a timestamp

# struct time
print time.localtime()               # current local time as a struct; accepts an optional timestamp
print time.gmtime()                  # current UTC time as a struct; accepts an optional timestamp

# string format
print time.strftime('%Y-%m-%d %H-%M-%S')           # current time as a string, custom format
print time.strftime('%Y-%m-%d', time.localtime())  # format a struct time as a string
print time.strptime('2015-12-7', '%Y-%m-%d')       # parse a string into a struct time
print time.asctime()                 # current time as a string; also converts a struct time
print time.asctime(time.localtime())
print time.ctime(time.time())        # convert a timestamp to a string
```
datetime

```python
'''
datetime.date: the date class; common attributes are year, month, day
datetime.time: the time class; common attributes are hour, minute, second, microsecond
datetime.datetime: date plus time
datetime.timedelta: a time interval, i.e. the length between two points in time
timedelta([days[, seconds[, microseconds[, milliseconds[, minutes[, hours[, weeks]]]]]]])
strftime("%Y-%m-%d")
'''
import datetime

print datetime.datetime.now()    # current date and time
## 2015-12-07 18:29:22.173000

# date arithmetic
print datetime.datetime.now() - datetime.timedelta(days=5)
## 2015-12-02 18:31:17.391000
```
10. re
The re module provides regular-expression operations in Python.
Characters:
- `.` matches any character except a newline
- `\w` matches a letter, digit, underscore or CJK character
- `\s` matches any whitespace character
- `\d` matches a digit
- `\b` matches at the start or end of a word
- `^` matches the start of the string
- `$` matches the end of the string
Quantifiers:
- `*` repeat zero or more times
- `+` repeat one or more times
- `?` repeat zero or one time
- `{n}` repeat exactly n times
- `{n,}` repeat n or more times
- `{n,m}` repeat n to m times
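A few of the characters and quantifiers above in action (Python 3 here; the sample strings are made up):

```python
import re

first_digits = re.match(r'^\d+', '123abc').group()   # '\d' and '^': digits anchored at the start
word = re.search(r'\w+', '  hello  ').group()        # '\w+': a run of word characters
runs = re.search(r'a{2,3}', 'caaat').group()         # '{n,m}': greedy bounded repetition -> 'aaa'
optional = re.findall(r'colou?r', 'color colour')    # '?': the 'u' may appear zero or one time
```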
(1) match(pattern, string, flags=0)
Matches the pattern against the beginning of the string; returns a single match.
pattern -- the regular expression
string -- the string to match
flags -- flags that control how the expression is matched

```python
import re

print re.match("\d", "1adf121sdfa").group()   # note: match() only matches at the very start
## 1
```
```python
# flags
I = IGNORECASE = sre_compile.SRE_FLAG_IGNORECASE   # ignore case
L = LOCALE = sre_compile.SRE_FLAG_LOCALE           # assume current 8-bit locale
U = UNICODE = sre_compile.SRE_FLAG_UNICODE         # assume unicode locale
M = MULTILINE = sre_compile.SRE_FLAG_MULTILINE     # make anchors look for newline
S = DOTALL = sre_compile.SRE_FLAG_DOTALL           # make dot match newline
X = VERBOSE = sre_compile.SRE_FLAG_VERBOSE         # ignore whitespace and comments
```
(2) search(pattern, string, flags=0)
Scans the whole string for the pattern; returns a single match.

```python
import re

print re.search("\d+", "dfsdfsa2sdfa").group()   # note: search() finds only the first match, anywhere in the string
## 2
```
(3) group and groups

```python
import re

letter = '123abc456'
print re.search("([0-9]*)([a-z]*)([0-9]*)", letter).group(0)   # group() returns the whole match as a string
## 123abc456
print re.search("([0-9]*)([a-z]*)([0-9]*)", letter).group(1)
## 123
print re.search("([0-9]*)([a-z]*)([0-9]*)", letter).group(2)
## abc
print re.search("([0-9]*)([a-z]*)([0-9]*)", letter).groups()   # groups() returns the captured groups as a tuple
## ('123', 'abc', '456')
```
(4) findall(pattern, string, flags=0)
The two functions above each return a single match. To collect every substring in the string that matches, use findall.

```python
import re

print re.findall('\d+', 'adfa11233xcvfsdf')   # note: no group/groups here; the result is a list
## ['11233']
```
(5) sub(pattern, repl, string, count=0, flags=0)
Replaces matched substrings; more powerful than str.replace.

```python
import re

letter = 'saf23sdf2132'
print re.sub('\d+', 'www', letter)      # replace every match
## safwwwsdfwww
print re.sub('\d+', 'www', letter, 1)   # replace only the first match
```
(6) split(pattern, string, maxsplit=0, flags=0)
Splits the string at every match of the pattern.

```python
import re

string = "'1 - 2 * ((60-30+1*(9-2*5/3+7/3*99/4*2998+10*568/14))-(-4*3)/(16-3*2) )'"

print re.split("\*", string)      # every match becomes a split point
## ["'1 - 2 ", ' ((60-30+1', '(9-2', '5/3+7/3', '99/4', '2998+10', '568/14))-(-4', '3)/(16-3', "2) )'"]

print re.split("\*", string, 1)   # limit the number of splits
## ["'1 - 2 ", " ((60-30+1*(9-2*5/3+7/3*99/4*2998+10*568/14))-(-4*3)/(16-3*2) )'"]

print re.split("[\+\-\*\/]+", string)   # split on '+ - * /'
## ["'1 ", ' 2 ', ' ((60', '30', '1', ……… '(', '4', '3)', '(16', '3', "2) )'"]

inpp = '1-2*((60-30 +(-40-5)*(9-2*5/3 + 7 /3*99/4*2998 +10 * 568/14 )) - (-4*3)/ (16-3*2))'
inpp = re.sub('\s*', '', inpp)   # strip the whitespace
print re.split('\(([\+\-\*\/]?\d+[\+\-\*\/]?\d+){1}\)', inpp, 1)   # split on an innermost '(...)' group
## ['1-2*((60-30+', '-40-5', '*(9-2*5/3+7/3*99/4*2998+10*568/14))-(-4*3)/(16-3*2))']
```
(7) Example: a calculator simulator
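The calculator example itself is not included in this copy of the post. Below is a minimal sketch of the idea the section (and the split() example above) points at: strip whitespace, repeatedly evaluate the innermost parenthesized sub-expression with re, and substitute the result back in until only a flat expression remains. All function names here are my own:

```python
import re

def _normalize_signs(expr):
    # collapse sign pairs produced by substituting negative results back in
    while '--' in expr or '+-' in expr:
        expr = expr.replace('--', '+').replace('+-', '-')
    return expr

def _eval_flat(expr):
    """Evaluate an expression with only numbers and + - * / (no parentheses)."""
    # reduce * and / first, left to right
    while True:
        m = re.search(r'(-?\d+\.?\d*)([*/])(-?\d+\.?\d*)', expr)
        if not m:
            break
        a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
        result = a * b if op == '*' else a / b
        expr = _normalize_signs(expr[:m.start()] + repr(result) + expr[m.end():])
    # what remains is a chain of signed numbers to add up
    return sum(float(t) for t in re.findall(r'[+-]?\d+\.?\d*', expr))

def calc(expression):
    expr = re.sub(r'\s+', '', expression)
    # repeatedly evaluate the innermost parenthesized sub-expression
    while True:
        m = re.search(r'\(([^()]+)\)', expr)
        if not m:
            break
        expr = _normalize_signs(expr[:m.start()] + repr(_eval_flat(m.group(1))) + expr[m.end():])
    return _eval_flat(expr)
```

For example, `calc('1-2*((60-30)-(-4*3)/(16-3*2))')` reduces the parentheses inside-out and yields -61.4.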
11. random

```python
import random

print random.random()          # a random float less than 1
print random.randint(1, 3)     # a random integer between 1 and 3 (inclusive)
print random.randrange(1, 10)  # a random integer between 1 and 9 (the stop value is excluded)
```
A random check-code example

```python
# note how chr() maps the numbers to characters
import random

checkcode = ''
for i in range(4):
    current = random.randrange(0, 4)
    if current != i:
        temp = chr(random.randint(65, 90))   # a random uppercase letter
    else:
        temp = random.randint(0, 9)          # a random digit
    checkcode += str(temp)
print checkcode
```