
Set up a Hadoop cluster before installing Flink.

Download Flink:

https://flink.apache.org/downloads.html
https://mirrors.tuna.tsinghua.edu.cn/apache/flink/flink-1.11.1/flink-1.11.1-bin-scala_2.11.tgz

Extract and install:

tar -xf flink-1.11.1-bin-scala_2.11.tgz -C /usr/local/
cd /usr/local/
ln -sv flink-1.11.1/ flink
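A quick sketch of why the symlink is used: a later upgrade only needs the link repointed, leaving `$FLINK_HOME` and all scripts untouched. Reproduced in a temp directory here so the commands run without the real tarball.

```shell
# Recreate the layout in a temp dir (stands in for /usr/local).
root=$(mktemp -d)
mkdir "$root/flink-1.11.1"
# The symlink decouples the stable path from the versioned directory.
ln -sv "$root/flink-1.11.1" "$root/flink"
readlink "$root/flink"
```

Upgrading to a newer release then amounts to extracting the new tarball and repointing the link with `ln -sfn`.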

System-wide environment variables:

cat > /etc/profile.d/flink.sh <<EOF
export FLINK_HOME=/usr/local/flink
export PATH=\$PATH:\$FLINK_HOME/bin
EOF
. /etc/profile.d/flink.sh

Environment variables for a regular user:

cat >> ~/.bashrc <<EOF
export FLINK_HOME=/usr/local/flink
export PATH=\$PATH:\$FLINK_HOME/bin
EOF
. ~/.bashrc
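Note the backslashes in the heredocs above: `\$PATH` keeps the shell from expanding `$PATH` when the file is written, so the literal string lands in the file and is expanded at each login instead. A safe demonstration (writes to a temp file rather than `~/.bashrc`):

```shell
# With the backslash, the literal "$PATH" is written to the file.
envfile=$(mktemp)
cat > "$envfile" <<EOF
export FLINK_HOME=/usr/local/flink
export PATH=\$PATH:\$FLINK_HOME/bin
EOF
cat "$envfile"
```

Without the backslash, the file would freeze whatever `$PATH` happened to be at creation time.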

Edit the configuration:

cd /usr/local/flink/conf
vim flink-conf.yaml
jobmanager.rpc.address: hadoop-master
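A non-interactive alternative to editing with vim (a sketch, run here against a temp copy of flink-conf.yaml so it works anywhere):

```shell
# Temp stand-in for /usr/local/flink/conf/flink-conf.yaml.
conf=$(mktemp)
echo 'jobmanager.rpc.address: localhost' > "$conf"   # default shipped value
# Point the JobManager address at the master host.
sed -i 's/^jobmanager.rpc.address:.*/jobmanager.rpc.address: hadoop-master/' "$conf"
cat "$conf"
```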

Configure the master node:

cat > masters <<EOF
hadoop-master
EOF

Configure the worker nodes (the master node can also serve as a worker):

cat > workers <<EOF
hadoop-master
hadoop-node1
hadoop-node2
EOF
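The two heredocs above can also be generated from a single host list, so masters and workers cannot drift apart. A sketch (the hostnames match those above; CONF_DIR points at a temp directory here instead of /usr/local/flink/conf):

```shell
CONF_DIR=$(mktemp -d)          # in a real cluster: /usr/local/flink/conf
MASTER=hadoop-master
WORKERS="hadoop-master hadoop-node1 hadoop-node2"

# One hostname per line in each file, as Flink expects.
echo "$MASTER" > "$CONF_DIR/masters"
printf '%s\n' $WORKERS > "$CONF_DIR/workers"
cat "$CONF_DIR/workers"
```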

Copy the configuration files to the worker nodes:

scp ./* root@hadoop-node1:/usr/local/flink/conf
scp ./* root@hadoop-node2:/usr/local/flink/conf

Change the owner and group (use `:` rather than the deprecated `.` separator; chown the versioned directory recursively and the symlink itself):

chown -R hadoop:hadoop /usr/local/flink-1.11.1/
chown -h hadoop:hadoop /usr/local/flink

Start the cluster:

start-cluster.sh

Check the running processes on the master node:

~]$ jps
26801 TaskManagerRunner
26455 StandaloneSessionClusterEntrypoint
...

On a worker node:

~]$ jps
TaskManagerRunner
...

Stop the cluster:

stop-cluster.sh


Scaling the cluster: JobManager and TaskManager instances can be added to a running cluster with:

bin/jobmanager.sh ((start|start-foreground) [host] [webui-port])|stop|stop-all
bin/taskmanager.sh start|start-foreground|stop|stop-all

Web UI:

http://192.168.0.54:8081/#/overview
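The same information is available from the command line through Flink's REST API: the /overview endpoint (which backs this page) returns JSON including the number of registered TaskManagers, which should be 3 for the three-worker setup above. A sketch, parsing a sample response since no live cluster is assumed here:

```shell
# Sample /overview response (assumption; a live check would be:
#   response=$(curl -s http://192.168.0.54:8081/overview)
response='{"taskmanagers":3,"slots-total":3,"slots-available":3,"jobs-running":0}'
# Pull the taskmanagers count out of the JSON.
taskmanagers=$(echo "$response" | sed -n 's/.*"taskmanagers":\([0-9]*\).*/\1/p')
echo "registered TaskManagers: $taskmanagers"
```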

[screenshot: Flink web UI overview page]

posted on 2020-09-28 17:32 by 大码王