Install the Hadoop cluster before installing Spark.
Spark download URL:

```
https://downloads.apache.org/spark/
```
Download the package:

```shell
wget https://downloads.apache.org/spark/spark-2.4.6/spark-2.4.6-bin-hadoop2.7.tgz
```
Copy the package to each node:

```shell
scp spark-2.4.6-bin-hadoop2.7.tgz root@hadoop-node1:/root
```
Extract and install:

```shell
tar -xf spark-2.4.6-bin-hadoop2.7.tgz -C /usr/local/
cd /usr/local/
ln -sv spark-2.4.6-bin-hadoop2.7/ spark
```
Configure environment variables (the dollar signs are escaped so they expand when the profile is sourced, not while the heredoc is written):

```shell
cat > /etc/profile.d/spark.sh <<EOF
export SPARK_HOME=/usr/local/spark
export PATH=\$PATH:\$SPARK_HOME/bin
EOF
. /etc/profile.d/spark.sh
```
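The backslashes in the heredoc matter: without them, `$PATH` and `$SPARK_HOME` would be expanded while the file is being written rather than at login. A minimal local demonstration, written to a temporary file so it does not touch `/etc/profile.d`:

```shell
# Write the same heredoc to a temp file; the backslashes keep the
# variables literal in the generated script.
tmp=$(mktemp)
cat > "$tmp" <<EOF
export SPARK_HOME=/usr/local/spark
export PATH=\$PATH:\$SPARK_HOME/bin
EOF
# The generated line still contains the unexpanded variables:
grep 'SPARK_HOME/bin' "$tmp"   # prints: export PATH=$PATH:$SPARK_HOME/bin
rm -f "$tmp"
```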
Configure the worker nodes; here the master node also serves as a worker:

```shell
cat > /usr/local/spark/conf/slaves <<EOF
hadoop-master
hadoop-node1
hadoop-node2
EOF
```
Copy the configuration template:

```shell
cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
```
Set the master host; Spark loads the environment variables in this file first at startup:

```shell
cat >> /usr/local/spark/conf/spark-env.sh <<EOF
export SPARK_MASTER_HOST=hadoop-master
EOF
```
Change the owner and group:

```shell
cd /usr/local/
chown -R hadoop.hadoop spark/
```
Copy the configuration to the other nodes (from the conf directory):

```shell
cd /usr/local/spark/conf/
scp ./* root@hadoop-node1:/usr/local/spark/conf/
scp ./* root@hadoop-node2:/usr/local/spark/conf/
```
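With more worker nodes, the per-host scp commands can be generated in a loop. This sketch just prints the commands so they can be reviewed first (pipe the output to `sh` to run them); the host list matches the cluster above:

```shell
# Print one scp command per worker node; pipe to sh to execute.
for node in hadoop-node1 hadoop-node2; do
  echo "scp ./* root@$node:/usr/local/spark/conf/"
done
```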
Start the master node as the hadoop user. Note that the start scripts live in `$SPARK_HOME/sbin`, which is not on the PATH configured above, so use the full path:

```shell
su - hadoop
~]$ /usr/local/spark/sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out
```
Check the processes running on the master node:

```shell
~]$ jps
5078 Master
5163 Worker
...
```
Start the worker nodes:

```shell
~]$ /usr/local/spark/sbin/start-slaves.sh
hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out
```
On node1:

```shell
~]$ jps
2898 Worker
...
```
To start the master and all worker nodes at once:

```shell
~]$ /usr/local/spark/sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-hadoop-master.out
hadoop-master: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-master.out
hadoop-node2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node2.out
hadoop-node1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-hadoop-node1.out
```
The master's web UI:

```
http://192.168.0.54:8080/
```
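To verify the cluster end to end, a job can be submitted against the standalone master. This is a sketch, not part of the original post; the examples jar name assumes the stock Spark 2.4.6 / Scala 2.11 binary distribution, so adjust the path if your build differs:

```shell
# Submit the bundled SparkPi example to the standalone master.
# Jar path is an assumption based on the 2.4.6 binary release layout.
spark-submit \
  --master spark://hadoop-master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark/examples/jars/spark-examples_2.11-2.4.6.jar 100
```

If the cluster is healthy, the driver log ends with a line of the form `Pi is roughly 3.14...`, and the finished application appears on the web UI above.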
This article is from cnblogs, by 大码王. When reposting, please credit the original link: https://www.cnblogs.com/huanghanyu/