The MemoryError problem when reading large files with pandas read_csv
Today I ran into trouble reading a very large CSV file: first, Office could not open it at all, and then opening it in Python with a plain pandas.read_csv raised a MemoryError.
After checking the read_csv documentation, I found that the file can be read in chunks.
read_csv has a chunksize parameter: by specifying a chunk size, the file is read one block of rows at a time instead of all at once.
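As a minimal sketch of what that looks like (the file name file.csv is just a placeholder): with chunksize set, read_csv returns a TextFileReader that yields one DataFrame per block rather than loading the whole file into memory.

```
import pandas as pd

# With chunksize set, read_csv returns an iterator (TextFileReader)
# that yields one DataFrame per block of rows.
for chunk in pd.read_csv('file.csv', chunksize=100000):
    print(chunk.shape)  # each chunk is an ordinary DataFrame
```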
1. Counting values chunk by chunk
```
from collections import Counter
import pandas as pd

size = 2 ** 10  # number of rows per chunk
counter = Counter()
# Read the file chunk by chunk and count occurrences of the first column's values
for chunk in pd.read_csv('file.csv', header=None, chunksize=size):
    counter.update([i[0] for i in chunk.values])
print(counter)
```

The output looks roughly like:

```
Counter({100: 41, 101: 40, 102: 40, ... 150: 35})
```
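Because only the running Counter is kept in memory rather than the rows themselves, each chunk can be discarded after it is counted, so this approach stays within memory limits no matter how large the file is.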
2. Reading chunks into a list of DataFrames, then concatenating the list into one complete DataFrame
```
import pandas as pd

# `path` is assumed to be the directory containing the CSV file
data = pd.read_csv(path + "dika_num_trainall.csv", sep=',',
                   engine='python', iterator=True)
loop = True
chunkSize = 100000
chunks = []
while loop:
    try:
        # get_chunk reads the next chunkSize rows as a DataFrame
        chunk = data.get_chunk(chunkSize)
        chunks.append(chunk)
    except StopIteration:
        loop = False
        print("Iteration is stopped.")
print("Start concatenating")
df_train = pd.concat(chunks, ignore_index=True)
```
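Note that concatenating all the chunks still materializes the full DataFrame, so this only helps when the final frame itself fits in memory. A complementary trick, not part of the original workflow (the column names and dtypes below are hypothetical), is to shrink each chunk as it is read, using read_csv's usecols and dtype parameters:

```
import pandas as pd

# Sketch: keep only the needed columns and use smaller dtypes,
# so the concatenated result takes far less memory.
chunks = []
for chunk in pd.read_csv('file.csv', chunksize=100000,
                         usecols=['id', 'value'],  # load only these columns
                         dtype={'id': 'int32', 'value': 'float32'}):
    chunks.append(chunk)
df = pd.concat(chunks, ignore_index=True)
```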