Official documentation
https://flink.apache.org/
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/jdbcdriver/
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/sql/overview/
https://flink.apache.org/downloads/
Standalone (single-node) deployment
After extracting, start it directly:
bin/start-cluster.sh
bin/stop-cluster.sh
./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
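To verify the gateway is running, you can query its REST endpoint. A minimal sketch in Java, assuming the default gateway REST port 8083 and the versioned /v1/info route of the SQL Gateway REST API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GatewayCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/v1/info")) // assumed default gateway REST port
                .build(); // GET is the default method
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Expect JSON with the product name and version, e.g. {"productName":"Apache Flink","version":"..."}
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```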
Cluster deployment steps
| node01 | node02 | node03 | node04 |
| :--------: | :---------: | :--------: | :---------: |
| JobManager | TaskManager | TaskManager | TaskManager |
Upload the installation package to node01
Extract it and edit the configuration files
Extract: tar -zxf flink-1.9.2-bin-scala_2.11.tgz
Edit flink-conf.yaml:
jobmanager.rpc.address: node01
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1024m
taskmanager.heap.size: 1024m
taskmanager.numberOfTaskSlots: 2
rest.port: 8081
Edit the slaves file:
node02
node03
node04
Sync the installation to the other nodes
Sync to node02: scp -r flink-1.9.2 node02:`pwd`
Sync to node03: scp -r flink-1.9.2 node03:`pwd`
Sync to node04: scp -r flink-1.9.2 node04:`pwd`
Configure environment variables on node01
vim ~/.bashrc
export FLINK_HOME=/opt/software/flink/flink-1.9.2
export PATH=$PATH:$FLINK_HOME/bin
source ~/.bashrc
Start and stop the standalone cluster
bin/start-cluster.sh
bin/stop-cluster.sh
./bin/sql-gateway.sh start -Dsql-gateway.endpoint.rest.address=localhost
Check the Flink Web UI:
http://node01:8081/
Submitting a jar via the Web UI
Jar parameters to configure:
1. The fully qualified name of the main class
2. Program arguments: --host localhost --port 7777
Submitting a jar from the command line
bin/flink run -m <jobmanager-host>:8081 -c <fully-qualified-main-class> <path-to-jar> -p <parallelism> --host localhost --port 7777
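For reference, a minimal sketch of what such a main class might look like (the package and class name demo.SocketJob are hypothetical; it parses the --host/--port arguments above with Flink's ParameterTool):

```java
package demo; // hypothetical package and class name for illustration

import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SocketJob {
    public static void main(String[] args) throws Exception {
        // parses the submit-time arguments: --host localhost --port 7777
        ParameterTool params = ParameterTool.fromArgs(args);
        String host = params.get("host", "localhost");
        int port = params.getInt("port", 7777);

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.socketTextStream(host, port) // read text lines from the socket
           .print();                     // print them to the TaskManager stdout
        env.execute("socket-demo");      // job name shown in the Web UI
    }
}
```

With this class packaged into a jar, demo.SocketJob would be the fully qualified main class passed to -c in the command above.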
CLI commands
List submitted job IDs:
bin/flink list
Cancel a submitted job:
bin/flink cancel <jobId>
List all job IDs, including cancelled jobs:
bin/flink list -a
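The same listing is also exposed by the JobManager REST API on the rest.port configured above (8081). A sketch using Java's built-in HttpClient, assuming the standard /jobs/overview route:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListJobs {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://node01:8081/jobs/overview"))
                .build(); // GET is the default method
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // JSON with one entry per job: jid, name, state, start/end timestamps
        System.out.println(response.body());
    }
}
```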
SQL syntax
{ DESCRIBE | DESC } [catalog_name.][db_name.]table_name
SHOW CATALOGS
SHOW CURRENT CATALOG
SHOW DATABASES
SHOW CURRENT DATABASE
SHOW TABLES
SHOW CREATE TABLE
SHOW COLUMNS
SHOW PARTITIONS
SHOW PROCEDURES
SHOW VIEWS
SHOW CREATE VIEW
SHOW FUNCTIONS
SHOW MODULES
SHOW JARS
SHOW JOBS
Connecting to Flink with Beeline
beeline> !connect jdbc:flink://localhost:8083 <username> <password>
beeline -u jdbc:flink://localhost:8083 -n <username> -p <password>
Table operations
create table flink_t1 (id int, name string) with ('connector' = 'filesystem', 'path' = 'file:///tmp/T.csv', 'format' = 'csv');
insert into flink_t1 values (1, 'hi'), (2, 'hello');
select * from flink_t1;
Connecting to Flink via JDBC
https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/jdbcdriver/
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-sql-jdbc-driver-bundle</artifactId>
    <version>1.18.1</version>
</dependency>
// requires java.sql.Connection, DriverManager, Statement, ResultSet
Connection connection = DriverManager.getConnection("jdbc:flink://localhost:8083", "username", "password");
Statement statement = connection.createStatement();
// DDL and DML, reusing the flink_t1 example from the table-operations section
statement.execute("create table flink_t1 (id int, name string) with ('connector' = 'filesystem', 'path' = 'file:///tmp/T.csv', 'format' = 'csv')");
statement.execute("insert into flink_t1 values (1, 'hi'), (2, 'hello')");
statement.execute("select * from flink_t1");
statement.execute("show databases");
ResultSet rs = statement.getResultSet(); // result set of the last executed statement
while (rs.next()) {
    String databaseName = rs.getString(1);
}
DataSource dataSource = new FlinkDataSource("jdbc:flink://localhost:8083", new Properties());
Connection connection = dataSource.getConnection();
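Putting both snippets together, a self-contained sketch (assuming the driver bundle above is on the classpath, that FlinkDataSource lives in org.apache.flink.table.jdbc, and that the flink_t1 table from the table-operations section already exists):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.flink.table.jdbc.FlinkDataSource; // assumed package of the 1.18 driver bundle

public class FlinkJdbcDemo {
    public static void main(String[] args) throws Exception {
        DataSource dataSource = new FlinkDataSource("jdbc:flink://localhost:8083", new Properties());
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery("select * from flink_t1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
            }
        }
    }
}
```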
Troubleshooting
If you see an error like "Current usage: 75.1 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.", the container appears to be out of memory, but the actual cause is the virtual-memory check YARN performs when Flink on YARN starts.
Disable the check in the configuration and the error goes away.
Edit etc/hadoop/yarn-site.xml:
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>