Flink Table API & SQL: Using a Custom Redis Sink

flink-connector-redis is used in much the same way as any other connector. Besides the common parameters (connector.type, format.type, update-mode, etc.), it supports its own connector-specific parameters; the full set appears in the usage template at the end of this post.

To cover different business needs and data shapes, the Redis data structure to write to is selected by connector.data.type.

1.string

The first field of the SQL result is used as the key and the second as the value, and the data is written with the set command. Here we use the concatenation 'string' + item_id as the key and the price as the value. Running the local unit test produces results that match the source data exactly.

tableEnv.sqlUpdate("CREATE TABLE datasink (item_id String,price String) WITH ('connector.data.type' = 'string' ...)");
 
tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('string',item_id) ,cast(price as string) FROM order_info");
// fields
String[] names = {"pay_hour", "item_id", "price", "total_count"};
 
// data
data.add(Row.of("20200312", "1", 13.2D, 2L));
data.add(Row.of("20200312", "2", 12.2D, 2L));
data.add(Row.of("20200312", "3", 11.2D, 2L));
data.add(Row.of("20200312", "3", 21.2D, 2L));
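
To make the expected result concrete, the written keys can be checked with a few Jedis calls. This is a rough sketch, not part of the original post; the class name, the use of JedisCluster, and the seed-node address are assumptions based on the cluster configuration in the complete test code further down.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class VerifyStringSink {
    public static void main(String[] args) throws Exception {
        // Connect to the same local Redis Cluster used in the test (one seed node is enough).
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7100))) {
            // Keys follow the CONCAT('string', item_id) pattern from the insert statement.
            System.out.println(cluster.get("string1")); // expected "13.2"
            System.out.println(cluster.get("string2")); // expected "12.2"
            // item_id 3 appears twice in the test data; with a plain SET the later row
            // overwrites the earlier one, so the remaining value is "21.2".
            System.out.println(cluster.get("string3"));
        }
    }
}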

 

The conventions here are:

1. The first field is used as the key and the second as the value.

2. Cast the key and the value to STRING yourself in SQL to avoid losing precision.

3. Prefix the key yourself to avoid collisions with other people's keys.
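
For reference, here is a minimal sketch of what the write path for connector.data.type = 'string' presumably looks like inside the sink. It is not the actual connector source; the method name and the use of JedisCluster.setex are assumptions, with the TTL mirroring the connector.expire.second option shown later.

import org.apache.flink.types.Row;
import redis.clients.jedis.JedisCluster;

public class StringWriteSketch {
    // Field positions follow the convention above: field 0 is the key, field 1 the value.
    static void writeString(JedisCluster cluster, Row row, int expireSeconds) {
        String key = String.valueOf(row.getField(0));
        String value = String.valueOf(row.getField(1));
        // SET with a TTL in one call, so keys expire after connector.expire.second seconds.
        cluster.setex(key, expireSeconds, value);
    }
}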

2.list

The first field of the SQL result is used as the key and the second as the value; the data is written to a list with the lpush command.

tableEnv.sqlUpdate("CREATE TABLE datasink (item_id String,price String) WITH ('connector.data.type' = 'list' ...)");
 
tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('list',item_id) ,cast(price as string) FROM order_info");
 
// fields
String[] names = {"pay_hour", "item_id", "price", "total_count"};
 
// data
data.add(Row.of("20200312", "1", 13.2D, 2L));
data.add(Row.of("20200312", "2", 12.2D, 2L));
data.add(Row.of("20200312", "3", 11.2D, 2L));
data.add(Row.of("20200312", "3", 21.2D, 2L));

 

The conventions are the same as for string.
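
One practical difference from string is worth showing: lpush prepends, and rows that share a key are appended to the same list instead of overwriting each other. The snippet below is a rough check, not from the original post; the class name and seed-node address are illustrative, and the expected order assumes the rows are written in source order.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class VerifyListSink {
    public static void main(String[] args) throws Exception {
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7100))) {
            // item_id 3 occurs twice in the test data, so both prices land in the list;
            // LPUSH puts the most recently written value at the head.
            System.out.println(cluster.lrange("list3", 0, -1)); // expected [21.2, 11.2]
        }
    }
}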

3.set

Essentially the same as list, except that sadd is called to write the data into a set.
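
By analogy with the list example, the DDL and insert statement would presumably look like this (this snippet is an assumption; only connector.data.type changes):

tableEnv.sqlUpdate("CREATE TABLE datasink (item_id STRING, price STRING) WITH ('connector.data.type' = 'set' ...)");

tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('set', item_id), cast(price as string) FROM order_info");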

4.sortedset

The zadd command is used and the data is stored in a sorted set, so every value needs a score for ordering. Here the first field is the key, the second is the score (which must be a numeric type), and the third is the value.

tableEnv.sqlUpdate("CREATE TABLE datasink (item_id_key string,price double,price String) WITH ('connector.data.type' = 'sortedset' ...)");
 
tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('sortedset',item_id),price,cast(price as string) FROM order_info");
 
// fields
String[] names = {"pay_hour", "item_id", "price", "total_count"};
 
// data
data.add(Row.of("20200312", "1", 13.2D, 2L));
data.add(Row.of("20200312", "2", 12.2D, 2L));
data.add(Row.of("20200312", "3", 11.2D, 2L));
data.add(Row.of("20200312", "3", 21.2D, 2L)); 

 

The conventions here are:

1. The first field is the key, the second is the score and must be numeric, and the third is the value.

2. Cast the key and the value to STRING yourself in SQL to avoid losing precision.

3. Prefix the key yourself to avoid collisions with other people's keys.
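
To make the difference from string concrete: the two rows for item_id 3 have different string values, so both become members of the sorted set, each with its own score, and nothing is overwritten. The check below is a rough sketch, not from the original post; the class name and seed-node address are illustrative.

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class VerifySortedSetSink {
    public static void main(String[] args) throws Exception {
        try (JedisCluster cluster = new JedisCluster(new HostAndPort("127.0.0.1", 7100))) {
            // Members are returned ordered by score, lowest first:
            // expected the members "11.2" and "21.2" with scores 11.2 and 21.2.
            System.out.println(cluster.zrangeWithScores("sortedset3", 0, -1));
        }
    }
}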

5.map

The first field is used as the key, the second as the field, and the third as the value; the data is written into a Redis hash with the hset command.

tableEnv.sqlUpdate("CREATE TABLE datasink (item_id_key string,item_id String,price String) WITH ('connector.data.type' = 'map' ...)");
 
tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('map',item_id),item_id ,cast(price as string) FROM order_info");
 
// fields
String[] names = {"pay_hour", "item_id", "price", "total_count"};
 
// data
data.add(Row.of("20200312", "1", 13.2D, 2L));
data.add(Row.of("20200312", "2", 12.2D, 2L));
data.add(Row.of("20200312", "3", 11.2D, 2L));
data.add(Row.of("20200312", "3", 21.2D, 2L));

 

The conventions here are:

1. The first field is used as the key, the second as the field, and the third as the value.

2. Cast the key and the value to STRING yourself in SQL to avoid losing precision.

3. Prefix the key yourself to avoid collisions with other people's keys.
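
For reference, here is a minimal sketch of how a row presumably maps onto HSET for connector.data.type = 'map'. It is not the actual connector source; the method name and the use of JedisCluster are assumptions.

import org.apache.flink.types.Row;
import redis.clients.jedis.JedisCluster;

public class MapWriteSketch {
    // Field positions follow the convention above: key, field, value.
    static void writeMap(JedisCluster cluster, Row row) {
        String key = String.valueOf(row.getField(0));   // e.g. "map3"
        String field = String.valueOf(row.getField(1)); // e.g. "3"
        String value = String.valueOf(row.getField(2)); // e.g. "21.2"
        // HSET overwrites the same field, so later rows with the same key and field win.
        cluster.hset(key, field, value);
    }
}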

The complete code for the local test is as follows:

import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.junit.Test;
 
import java.util.ArrayList;
import java.util.List;
 
 
public class TestRedisSink {
    @Test
    public void TestSink() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
 
        DataStream<Row> ds = env.fromCollection(getTestData()).returns(getTestDataType());
 
        tableEnv.createTemporaryView("order_info", ds);
 
        tableEnv.sqlUpdate(" CREATE TABLE datasink (item_id string,price String) WITH (" +
                "    'connector.type' = 'redis'," +
                "    'connector.cluster.ip' = '127.0.0.1'," +
                "    'connector.cluster.port' = '7100,7101,7200,7201,7300,7301'," +
                "    'connector.max.timeout.millis' = '1000'," +
                "    'connector.max.total' = '1'," +
                "    'connector.max.idle' = '1'," +
                "    'connector.min.idle' = '1'," +
                "    'connector.data.type' = 'string'," +
                "    'connector.expire.second' = '600'" +
                ")");
        tableEnv.sqlUpdate("insert into datasink SELECT CONCAT('string',item_id) ,cast(price as string) FROM order_info");
        env.execute("test");
    }
 
    private List<Row> getTestData() {
        List<Row> data = new ArrayList<>();
        data.add(Row.of("20200312", "1", 13.2D, 2L));
        data.add(Row.of("20200312", "2", 12.2D, 2L));
        data.add(Row.of("20200312", "3", 11.2D, 2L));
        data.add(Row.of("20200312", "3", 21.2D, 2L));
        return data;
    }
 
    private RowTypeInfo getTestDataType() {
        TypeInformation<?>[] types = {
                BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.DOUBLE_TYPE_INFO,
                BasicTypeInfo.LONG_TYPE_INFO};
        String[] names = {"pay_hour", "item_id", "price", "total_count"};
 
        RowTypeInfo typeInfo = new RowTypeInfo(types, names);
        return typeInfo;
    }
}

Redis sink usage template

CREATE TABLE datasink (
    item_id STRING,
    price STRING
) WITH (
    'connector.type' = 'redis',
    'connector.cluster.ip' = '127.0.0.1',
    'connector.cluster.port' = '7100,7101,7200,7201,7300,7301',
    'connector.max.timeout.millis' = '60000',
    'connector.max.total' = '1',
    'connector.max.idle' = '1',
    'connector.min.idle' = '1',
    'connector.data.type' = 'string',
    'connector.expire.second' = '600',
    'connector.parallelism' = '2',
    'connector.name' = 'redisSink'
)

Finally, here is the job graph of the task running on YARN.

posted @ 2022-08-10 15:26 博而不客