ClickHouse Input/Output Formats: TSKV and CSV
TSKV
The TSKV format is a poor fit for output with a large number of small columns.
TSKV is no less efficient than JSONEachRow.
TSKV supports both query output and data import.
Column order does not need to be preserved.
Some values may be omitted on import; those columns fall back to their default values, e.g. 0 or an empty string. Values of complex types must be specified explicitly and cannot fall back to defaults.
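As a sketch of what the TSKV wire format looks like, here is a minimal parser for a single line (a hypothetical helper, not part of ClickHouse; only the `\t`, `\n` and `\\` escapes are handled):

```python
import re

# Minimal sketch of parsing one TSKV line (hypothetical helper, not part of
# ClickHouse). Fields are tab-separated "key=value" pairs; tabs, newlines and
# backslashes inside values arrive escaped as \t, \n and \\.
_UNESCAPE = {"\\t": "\t", "\\n": "\n", "\\\\": "\\"}

def parse_tskv_line(line: str) -> dict:
    row = {}
    for pair in line.rstrip("\n").split("\t"):
        key, _, value = pair.partition("=")
        # undo the escaping in a single pass so "\\t" is not confused with "\t"
        row[key] = re.sub(r"\\[tn\\]", lambda m: _UNESCAPE[m.group(0)], value)
    return row

line = "name=xiaomi\taddr=nanjing\\tjiangsu\tage=23\tdesc=From nanjing"
print(parse_tskv_line(line)["addr"])  # the escaped \t becomes a real tab
```

This mirrors why the escape_demo output below shows `addr=nanjing\tjiangsu` on export but a real tab once the data is re-imported.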
ch2 :) select * from escape_demo format TSKV;

SELECT * FROM escape_demo FORMAT TSKV

name=xiaomi addr=nanjing\tjiangsu age=23 desc=From nanjing
name=xiaohong addr=zheng\tzhou age=23 desc=A gril
name=xiaohong addr=zheng\tzhou age=23 desc=A gril

3 rows in set. Elapsed: 0.014 sec.
[root@ch2 tmp]# clickhouse-client --query "select * from escape_demo format TSKV" > tskv.txt
[root@ch2 tmp]# cat tskv.txt
name=xiaomi addr=nanjing\tjiangsu age=23 desc=From nanjing
name=xiaohong addr=zheng\tzhou age=23 desc=A gril
name=xiaohong addr=zheng\tzhou age=23 desc=A gril
[root@ch2 tmp]# clickhouse-client --query "insert into escape_demo format TSKV" < tskv.txt
[root@ch2 tmp]# clickhouse-client
ClickHouse client version 20.3.9.70 (official build).
Connecting to localhost:9300 as user default.
Connected to ClickHouse server version 20.3.9 revision 54433.

ch2 :) select * from escape_demo;

SELECT * FROM escape_demo

┌─name─────┬─addr───────────────┬─age─┬─desc─────────┐
│ xiaomi   │ nanjing jiangsu    │  23 │ From nanjing │
│ xiaohong │ zheng zhou         │  23 │ A gril       │
│ xiaohong │ zheng zhou         │  23 │ A gril       │
│ xiaomi   │ nanjing jiangsu    │  23 │ From nanjing │
│ xiaohong │ zheng zhou         │  23 │ A gril       │
│ xiaohong │ zheng zhou         │  23 │ A gril       │
└──────────┴────────────────────┴─────┴──────────────┘
CSV Formats: CSV and CSVWithNames
The default CSV delimiter is a comma; the format_csv_delimiter setting configures a custom delimiter.
A double quote inside a CSV field is escaped by doubling it ("").
CSV supports both query output and data import.
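The doubled-quote escaping and the custom delimiter can both be reproduced with Python's standard csv module (a sketch; the sample line mirrors the csv_demo data used below):

```python
import csv
import io

# A CSV line using the conventions discussed above: "|" as the delimiter and
# a doubled quote ("") escaping a literal quote inside a quoted field.
data = io.StringIO('"2014-03-23"|"Learning Apache ""Spark"" is easy"\n')
for row in csv.reader(data, delimiter="|"):
    print(row)  # → ['2014-03-23', 'Learning Apache "Spark" is easy']
```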
create table csv_demo(create_date Date, update_time DateTime, desc String) ENGINE=TinyLog;
[root@ch2 tmp]# cat csv_demo.csv
2014-03-23|2014-03-23 14:10:14|Apache Spark achieves high performance
2014-03-23|2014-03-23 15:10:30|Spark offers over 80 high-level operators
1395990600|1395904200|Learning Apache "Spark" is easy
clickhouse-client --format_csv_delimiter="|" --query="insert into csv_demo format CSV" < csv_demo.csv
ch2 :) select * from csv_demo;

SELECT * FROM csv_demo

┌─create_date─┬─────────update_time─┬─desc──────────────────────────────────────┐
│  2014-03-23 │ 2014-03-23 14:10:14 │ Apache Spark achieves high performance    │
│  2014-03-23 │ 2014-03-23 15:10:30 │ Spark offers over 80 high-level operators │
│  0000-00-00 │ 2014-03-27 07:10:00 │ Learning Apache "Spark" is easy           │
└─────────────┴─────────────────────┴───────────────────────────────────────────┘

3 rows in set. Elapsed: 0.004 sec.

ch2 :) select * from csv_demo format CSV;

SELECT * FROM csv_demo FORMAT CSV

"2014-03-23","2014-03-23 14:10:14","Apache Spark achieves high performance"
"2014-03-23","2014-03-23 15:10:30","Spark offers over 80 high-level operators"
"0000-00-00","2014-03-27 07:10:00","Learning Apache ""Spark"" is easy"

3 rows in set. Elapsed: 0.004 sec.

ch2 :) set format_csv_delimiter='|';

SET format_csv_delimiter = '|'

Ok.

0 rows in set. Elapsed: 0.002 sec.

ch2 :) select * from csv_demo format CSV;

SELECT * FROM csv_demo FORMAT CSV

"2014-03-23"|"2014-03-23 14:10:14"|"Apache Spark achieves high performance"
"2014-03-23"|"2014-03-23 15:10:30"|"Spark offers over 80 high-level operators"
"0000-00-00"|"2014-03-27 07:10:00"|"Learning Apache ""Spark"" is easy"

3 rows in set. Elapsed: 0.004 sec.
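Note the third imported row: the Date column apparently could not represent the raw Unix timestamp 1395990600 and shows 0000-00-00, while the DateTime column did accept 1395904200. A quick check (assuming the server rendered the value in UTC) of what that timestamp means:

```python
from datetime import datetime, timezone

# 1395904200 seconds since the Unix epoch, interpreted in UTC
ts = datetime.fromtimestamp(1395904200, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S"))  # → 2014-03-27 07:10:00
```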
CSVWithNames treats the first row of the CSV file as a header containing the column names: it is written on output and skipped automatically on import.
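The same idea can be sketched with Python's csv.DictReader, which also consumes a leading header row and maps each data row to the column names (sample data mirrors csv_demo):

```python
import csv
import io

# A CSVWithNames-style payload: the first row is a header with column names.
text = (
    "create_date,update_time,desc\n"
    "2014-03-23,2014-03-23 14:10:14,Apache Spark achieves high performance\n"
)
rows = list(csv.DictReader(io.StringIO(text)))
print(rows[0]["desc"])  # → Apache Spark achieves high performance
```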