ClickHouse learning notes: connecting with the Go client
1. Setting up a ClickHouse environment

Install ClickHouse locally, or run it with Docker.

Reference: https://hub.docker.com/r/clickhouse/clickhouse-server

```shell
docker run -d -p 18123:8123 -p 19000:9000 --name some-clickhouse-server \
  --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.8
```

Connect with DataGrip (or any SQL client).

Create a table and some test data:
```sql
CREATE TABLE default.my_first_table
(
    user_id   UInt32,
    message   String,
    timestamp DateTime,
    metric    Float32
)
ENGINE = MergeTree()
PRIMARY KEY (user_id, timestamp);

INSERT INTO default.my_first_table (user_id, message, timestamp, metric) VALUES
    (101, 'Hello, ClickHouse!', now(), -1.0),
    (102, 'Insert a lot of rows per batch', yesterday(), 1.41421),
    (102, 'Sort your data based on your commonly-used queries', today(), 2.718),
    (101, 'Granules are the smallest chunks of data read', now() + 5, 3.14159);
```
2. Connecting to ClickHouse from a Go client

A Go client can connect to ClickHouse using the clickhouse-go library.

Add the dependency:

```shell
go get github.com/ClickHouse/clickhouse-go
```

Import the ClickHouse driver alongside database/sql; github.com/jmoiron/sqlx also works, see: getting started with ClickHouse in Go.

```go
import (
    "database/sql"

    _ "github.com/ClickHouse/clickhouse-go"
)
```

Without the blank driver import you will get:

```
sql: unknown driver "clickhouse" (forgotten import?)
```
Create the connection:

```go
source := "tcp://localhost:19000?username=default&password=&database=default&block_size=4096"
db, err := sql.Open("clickhouse", source)
if err != nil {
    fmt.Println(err)
}
defer func() { _ = db.Close() }()
```
Remember the defer so the connection is closed when you are done.
Create the ClickHouse table:

```go
query := `
CREATE TABLE default.my_first_table
(
    user_id   UInt32,
    message   String,
    timestamp DateTime,
    metric    Float32
)
ENGINE = MergeTree()
PRIMARY KEY (user_id, timestamp)
`
if _, err = db.Exec(query); err != nil {
    fmt.Println(err)
}
```
Insert data:

```go
var arr [][]any
arr = append(arr, []any{101, "Hello, ClickHouse!", time.Now(), -1.0})
// the Add offsets must be time.Durations; bare integers would be nanoseconds
arr = append(arr, []any{102, "Insert a lot of rows per batch", time.Now().Add(-24 * time.Hour), 1.41421})
arr = append(arr, []any{102, "Sort your data based on your commonly-used queries", time.Now(), 2.718})
arr = append(arr, []any{101, "Granules are the smallest chunks of data read", time.Now().Add(5 * time.Second), 3.14159})

// clickhouse-go v1 batches inserts inside a transaction
tx, err := db.Begin()
if err != nil {
    log.Fatal(err)
}
query := `INSERT INTO default.my_first_table (user_id, message, timestamp, metric) VALUES (?, ?, ?, ?)`
stmt, err := tx.Prepare(query)
if err != nil {
    log.Fatal(err)
}
defer stmt.Close()
for _, data := range arr {
    if _, err = stmt.Exec(data...); err != nil {
        fmt.Println(err)
    }
}
if err = tx.Commit(); err != nil {
    log.Fatal(err)
}
```
Query the data:

```go
// read the rows back into a struct
type Data struct {
    UserId    int64     `db:"user_id"`
    Message   string    `db:"message"`
    Timestamp time.Time `db:"timestamp"`
    Metric    float64   `db:"metric"`
}

query := "SELECT * FROM default.my_first_table"
rows, err := db.Query(query)
if err != nil {
    fmt.Println(err)
}
defer rows.Close()

var result []Data
for rows.Next() {
    var data Data
    if err = rows.Scan(&data.UserId, &data.Message, &data.Timestamp, &data.Metric); err != nil {
        fmt.Println(err)
    }
    result = append(result, data)
}
if err = rows.Err(); err != nil {
    fmt.Println(err)
}
fmt.Println(result)
```
Output:

```
[{101 Hello, ClickHouse! 2024-05-25 15:54:00 +0000 UTC -1} {101 Granules are the smallest chunks of data read 2024-05-25 15:54:00 +0000 UTC 3.14159} {102 Insert a lot of rows per batch 2024-05-25 15:54:00 +0000 UTC 1.41421} {102 Sort your data based on your commonly-used queries 2024-05-25 15:54:00 +0000 UTC 2.718}]
```
This article is published only on cnblogs and tonglin0325's blog. Author: tonglin0325. Please credit the original link when reposting: https://www.cnblogs.com/tonglin0325/p/6246532.html