Importing data into Redis can be a headache: with tens of millions of records, a plain import is still very time-consuming. Generating a pipe file (a file of raw Redis protocol commands) and loading it with redis-cli in pipe mode is much faster.
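The pipe file is just raw Redis protocol: each command is an `*<argument count>` line followed by, for every argument, a `$<byte length>` line and the argument itself, all terminated by `\r\n`. A minimal sketch of what the scripts below emit (the resp_set helper is mine, not from the original scripts):

def resp_set(key, value):
    # Encode one SET command in the protocol form that redis-cli --pipe expects;
    # the $ lengths are the byte lengths of the UTF-8 encoded arguments.
    k = key.encode('utf-8')
    v = value.encode('utf-8')
    return b'*3\r\n$3\r\nset\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n' % (len(k), k, len(v), v)

# resp_set('foo', 'bar') == b'*3\r\n$3\r\nset\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'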
Python 3.6.1, running on Linux:
# 'data' is assumed to be an iterable of {"key": ..., "value": ...} dicts.
with open("data1", "w") as f:
    for d in data:
        k = d["key"]
        v = d["value"]
        # The $ lengths must be the UTF-8 byte length of each argument, not the character count.
        f.write('*3\r\n$3\r\nset\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n'
                % (len(bytes(k, 'utf-8')), k, len(bytes(v, 'utf-8')), v))
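A variant of the same idea, writing raw bytes instead of text (my sketch, under the same assumption about data); opening the file in binary mode keeps the written bytes independent of the interpreter's default text encoding:

with open("data1", "wb") as f:
    for d in data:
        # Encode first so the $ lengths match the bytes actually written.
        k = d["key"].encode("utf-8")
        v = d["value"].encode("utf-8")
        f.write(b"*3\r\n$3\r\nset\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n" % (len(k), k, len(v), v))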
Python 2.7:
import json

# 'lines' is assumed to be the lines of the source JSON-lines file,
# and 'henan' the target province string, both defined elsewhere.
for line in lines:
    line = line.strip('\n')
    jsonLine = json.loads(line)
    province = jsonLine["province"]
    if province == henan:
        key = jsonLine["company_name"]
        k = key.encode('utf-8')
        v = line
        # The trailing comma keeps print from appending its own '\n';
        # the protocol's \r\n terminators are already in the format string.
        print '*3\r\n$3\r\nset\r\n$%d\r\n%s\r\n$%d\r\n%s\r\n' % (len(k), k, len(v), v),
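With the file generated, it can be loaded in pipe mode. Assuming Redis is local on the default port and the data should land in database 10 (as in the session below), something like:

cat data1 | redis-cli -n 10 --pipe

The Python 2 script writes to stdout, so its output can be redirected to a file or piped straight into redis-cli the same way. When the transfer finishes, redis-cli prints a summary with the number of errors and replies, which should match the number of commands in the file.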
127.0.0.1:6379> select 10
OK
127.0.0.1:6379[10]> dbsize
(integer) 2907521