Chapter 2: KSQL Streams

1. Create the topic

sh kafka-start.sh create ksql_user

2. Consume the topic

sh kafka-start.sh consumer ksql_user

3. Produce messages

sh kafka-start.sh producer ksql_user

# Send a message, for example:
1647326157859,User_4,Page_66
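
For reference, kafka-start.sh is a local wrapper script; assuming it delegates to the standard Kafka command-line tools, the three steps above map roughly to the commands below (the localhost:9092 broker address and the single-partition settings are assumptions for a dev setup):

# create the topic (single partition/replica, fine for a one-broker dev cluster)
bin/kafka-topics.sh --create --topic ksql_user --bootstrap-server localhost:9092 \
    --partitions 1 --replication-factor 1

# consume the topic
bin/kafka-console-consumer.sh --topic ksql_user --bootstrap-server localhost:9092

# produce to the topic (one message per line)
bin/kafka-console-producer.sh --topic ksql_user --bootstrap-server localhost:9092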

4. Create the stream in KSQL

CREATE STREAM ksql_user (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
    WITH (kafka_topic='ksql_user', value_format='DELIMITED');

# Note:
    value_format supports JSON, DELIMITED, AVRO, and other formats
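
For example, the same schema over a JSON-encoded topic (a hypothetical ksql_user_json topic) only needs the format switched:

CREATE STREAM ksql_user_json (viewtime BIGINT, userid VARCHAR, pageid VARCHAR)
    WITH (kafka_topic='ksql_user_json', value_format='JSON');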

5. Stream commands

# List streams
show streams;

# Query the stream (on ksqlDB 5.4 and later, append EMIT CHANGES to run this as a continuous push query)
select * from ksql_user;

# Drop the stream
drop stream ksql_user;
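
Two more standard inspection statements are useful alongside these: DESCRIBE prints the stream's schema, and PRINT tails the underlying topic directly:

# Inspect the schema
DESCRIBE ksql_user;

# Tail the raw topic from the start
PRINT 'ksql_user' FROM BEGINNING;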

=================== Example 1 =======================

 Scenario: split ksql_user so that records whose page number is 50 or higher land in the ksql_user_high topic

1. Create the derived stream (note: ksqlDB automatically creates a backing topic named KSQL_USER_HIGH)

CREATE STREAM ksql_user_high AS
    SELECT VIEWTIME, USERID, PAGEID
    FROM KSQL_USER
    WHERE CAST(SUBSTRING(PAGEID,6,2) AS BIGINT) >= 50
    EMIT CHANGES;
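
The predicate relies on pageid following the fixed pattern Page_NN: SUBSTRING(PAGEID, 6, 2) takes the two characters starting at position 6 (KSQL string positions are 1-based), i.e. the digits, and the CAST turns them into a number. A quick throwaway push query to sanity-check the extraction against live data:

-- 'Page_66' should yield page_num 66
SELECT PAGEID, CAST(SUBSTRING(PAGEID, 6, 2) AS BIGINT) AS page_num
FROM KSQL_USER
EMIT CHANGES;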

2. Query the derived stream

select * from KSQL_USER_HIGH;

3. Produce data to the ksql_user topic while consuming KSQL_USER_HIGH to watch the filtered output

1647326157859,User_4,Page_66
1647326157859,User_7,Page_29
1647326157859,User_8,Page_16
1647326157859,User_4,Page_58
1647326157859,User_5,Page_89
1647326157859,User_4,Page_70
1647326157859,User_1,Page_80
1647326157859,User_7,Page_36
1647326157859,User_2,Page_12
1647326157859,User_1,Page_46
1647326157859,User_2,Page_25
1647326157859,User_9,Page_14
1647326157859,User_8,Page_76
1647326157859,User_5,Page_49
1647326157859,User_5,Page_62
1647326157859,User_4,Page_86
1647326157859,User_3,Page_63
1647326157859,User_9,Page_70
1647326157859,User_4,Page_80
1647326157859,User_2,Page_68
1647326157859,User_5,Page_82
1647326157859,User_2,Page_28
1647326157859,User_2,Page_87
1647326157859,User_3,Page_44
1647326157859,User_7,Page_60
1647326157859,User_6,Page_71
1647326157859,User_1,Page_40
1647326157859,User_2,Page_20
1647326157859,User_7,Page_43
1647326157859,User_1,Page_74
1647326157859,User_6,Page_78
1647326157859,User_1,Page_88
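
Given the >= 50 filter, only rows whose page number is at least 50 should show up in KSQL_USER_HIGH; from the input above, the first few would be:

1647326157859,User_4,Page_66
1647326157859,User_4,Page_58
1647326157859,User_5,Page_89
1647326157859,User_4,Page_70
1647326157859,User_1,Page_80
...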

=================== Example 2 ==================

 1. Create the stream

CREATE STREAM readings1 (
    sensor VARCHAR KEY,
    area VARCHAR,
    reading INT
) WITH (
    kafka_topic = 'readings1',
    partitions = 2,
    value_format = 'json'
);
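
Because sensor is declared as the KEY column, ksqlDB stores it in the Kafka record key rather than in the JSON value, and unquoted identifiers are uppercased. So a row such as ('sensor-1', 'wheel', 45) would land on the readings1 topic roughly like this (an illustration of the layout, not captured output):

key:   sensor-1
value: {"AREA":"wheel","READING":45}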

2. Insert data

INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-1', 'wheel', 45);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-2', 'motor', 41);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-1', 'wheel', 92);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-2', 'engine', 13);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-2', 'engine', 90);

INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-4', 'motor', 95);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-3', 'engine', 67);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-3', 'wheel', 52);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-4', 'engine', 55);
INSERT INTO READINGS1 (sensor, area, reading) VALUES ('sensor-3', 'engine', 37);

3. Streaming aggregation

# Read from the beginning of the topic so the rows already inserted are included
SET 'auto.offset.reset' = 'earliest';
CREATE TABLE get_avg_readings1 AS
    SELECT sensor,
           AVG(reading) AS avg
    FROM READINGS1
    GROUP BY sensor
    EMIT CHANGES;
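
The table can then be queried. With the ten rows above, the averages settle at 68.5 for sensor-1 (45, 92), 48 for sensor-2 (41, 13, 90), 52 for sensor-3 (67, 52, 37), and 75 for sensor-4 (95, 55):

-- watch every update as readings arrive (push query)
SELECT * FROM get_avg_readings1 EMIT CHANGES;

-- or look up one sensor's current average (pull query, recent ksqlDB versions)
SELECT * FROM get_avg_readings1 WHERE sensor = 'sensor-1';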

 

FAQ

Error 1:

[2022-03-18 16:54:22,643] ERROR Unhandled exception caught in streams thread _confluent-ksql-default_query_CTAS_AVG_READINGS_17-dff8e943-cf50-4526-9efa-24910b570d49-StreamThread-1. (UNKNOWN) (io.confluent.ksql.util.QueryMetadataImpl:199)
org.apache.kafka.streams.errors.StreamsException: Could not create topic _confluent-ksql-default_query_CTAS_AVG_READINGS_17-Aggregate-Aggregate-Materialize-changelog, because brokers don't support configuration replication.factor=-1. You can change the replication.factor config or upgrade your brokers to version 2.4 or newer to avoid this error.
        at org.apache.kafka.streams.processor.internals.InternalTopicManager.makeReady(InternalTopicManager.java:463)
        at org.apache.kafka.streams.processor.internals.ChangelogTopics.setup(ChangelogTopics.java:97)

Solution: upgrade the Kafka brokers to version 2.4 or newer (replication.factor=-1 means "use the broker's default replication factor", which older brokers do not understand), or set an explicit replication factor as the error message suggests.
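
If upgrading is not immediately possible, pinning an explicit replication factor works too. In ksqlDB that is the ksql.streams.replication.factor pass-through setting in ksql-server.properties (the value 1 below assumes a single-broker development cluster):

# ksql-server.properties
ksql.streams.replication.factor=1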
