cluster management (hadoop, to be continued...)

installation: create user, mkdir, download, edit the configuration files (nano), edit the system config, passwordless ssh from the master node to all nodes, start-all.sh

 

  • where the program source files are stored
  • deployment script: remote copy + local configuration (installation order and dependencies); see the sketch after this list
  1. check the directory structure and permissions
  2. remote scp
  3. set the configuration dynamically (the hadoop/yarn and spark clusters sync the whole directory, so their config files need no per-node changes; for zookeeper, the id of each journalnode is configured separately)

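A minimal sketch of such a deployment script; the nodes.txt host list, the /opt prefix, the hdfs:hadoop owner and the /var/lib/zookeeper/myid path are assumptions for illustration, not part of the original note:

#!/bin/bash
# deploy.sh -- sketch only: check dirs/permissions, scp the package, push config
set -e
NODES=$(cat nodes.txt)                     # one hostname or IP per line (assumed file)
PKG=hadoop-2.x.y.tar.gz                    # package staged on the master node
INSTALL_DIR=/opt                           # assumed install prefix
HADOOP_HOME=$INSTALL_DIR/hadoop-2.x.y

for n in $NODES; do
  # 1. directory structure check, 2. remote scp + unpack, then permissions and config
  ssh "$n" "mkdir -p $INSTALL_DIR"
  scp "$PKG" "$n:$INSTALL_DIR/"
  ssh "$n" "tar -xzf $INSTALL_DIR/$PKG -C $INSTALL_DIR && chown -R hdfs:hadoop $HADOOP_HOME"
  # 3. identical hadoop/yarn config goes to every node unchanged
  scp -r conf/hadoop/* "$n:$HADOOP_HOME/etc/hadoop/"
done

# zookeeper: the per-node id cannot be synced blindly; write it separately
i=1
for n in $NODES; do
  ssh "$n" "echo $i > /var/lib/zookeeper/myid"
  i=$((i+1))
done
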
check installation:

  1. The hadoop-2.x.y.tar.gz.mds file contains checksums for verifying the integrity of hadoop-2.x.y.tar.gz; if the archive is corrupted or the download is incomplete, Hadoop will not run properly (see the sketch after this list).
  2. ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.x.y.jar  -- run one of the bundled examples as a smoke test (see the sketch after this list)
  3. 17/09/26 17:42:45 FATAL namenode.SecondaryNameNode: Failed to start secondary namenode

    java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.

  4.  17:44:06 ERROR datanode.DataNode: Exception in secureMain

    java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.

    (errors 3 and 4 both mean the NameNode address was never configured, i.e. fs.defaultFS is still the default file:///; see the configuration sketch at the end of this post)

  5. 2017-09-26 17:57:35,332 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
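
A sketch of checks 1 and 2, keeping the 2.x.y version placeholder from above:

cat ./hadoop-2.x.y.tar.gz.mds | grep 'MD5'        # checksum published with the release
md5sum ./hadoop-2.x.y.tar.gz | tr 'a-z' 'A-Z'     # must match the value printed above

# smoke test: run the bundled grep example over the stock config files
./bin/hdfs dfs -mkdir -p input
./bin/hdfs dfs -put etc/hadoop/*.xml input
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.x.y.jar grep input output 'dfs[a-z.]+'
./bin/hdfs dfs -cat output/*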

--------------------------------

1  machines + roles  -- append one line per node to /etc/hosts, e.g. echo "192.168.1.*  hostname  hadoop1" >> /etc/hosts  (real hostname first, then the alias; sketch after the role list)

192.168.1.*  ZK RM NN1

192.168.1.  ZK RM NN2 JOBHIS

192.168.1.  ZK DN ND

192.168.1.  DN QJM1 ND

192.168.1.  DN QJM2 ND
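
A sketch of the matching /etc/hosts entries; the original elides the last octets, so the addresses and hostnames below are placeholders only:

# /etc/hosts -- hypothetical addresses and hostnames
192.168.1.101  node1  hadoop1    # ZK RM NN1
192.168.1.102  node2  hadoop2    # ZK RM NN2 JOBHIS
192.168.1.103  node3  hadoop3    # ZK DN ND
192.168.1.104  node4  hadoop4    # DN QJM1 ND
192.168.1.105  node5  hadoop5    # DN QJM2 ND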

2  users and groups: useradd -g hadoop for each of hdfs, yarn, zookeeper, hive, hbase (sketch below)

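A minimal sketch of step 2, assuming the hadoop group does not exist yet:

groupadd hadoop
for u in hdfs yarn zookeeper hive hbase; do
  useradd -g hadoop -m "$u"
done
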
3  passwordless ssh (sketch below)

su

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.1.*

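A sketch of the whole sequence (a key pair must exist before ssh-copy-id; the addresses reuse the placeholders from the hosts sketch):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa       # generate the key pair once, empty passphrase
for ip in 192.168.1.101 192.168.1.102 192.168.1.103 192.168.1.104 192.168.1.105; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub "$ip"       # appends the public key to authorized_keys
done
ssh 192.168.1.101 hostname                     # should log in without a password prompt
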
4  raise the ulimit values (sketch below)

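A sketch of step 4; the limits below are typical values, not taken from the original note:

# raise open-file and process limits for the service accounts
echo "hdfs  -  nofile  32768" >> /etc/security/limits.conf
echo "hdfs  -  nproc   65536" >> /etc/security/limits.conf
echo "yarn  -  nofile  32768" >> /etc/security/limits.conf
echo "yarn  -  nproc   65536" >> /etc/security/limits.conf
# log out and back in, then verify with: ulimit -n
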
5  disable the firewall: service iptables stop

6  disable SELinux (sketch for steps 5 and 6 below)

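A sketch for steps 5 and 6, assuming a CentOS 6-style system (to match the service iptables command above):

service iptables stop                                          # stop the firewall now
chkconfig iptables off                                         # keep it off across reboots
setenforce 0                                                   # SELinux permissive immediately
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # disabled after reboot
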
--------------

software:  1  jdk + the hadoop installation package

2  ntp server -- edit the conf, then start it (sketch below)

ntp client -- on all other nodes

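A sketch of the ntp step; the subnet and the server address (reusing the 192.168.1.101 placeholder) are assumptions:

# on the ntp server node: allow the cluster subnet to sync, then start the daemon
echo "restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap" >> /etc/ntp.conf
service ntpd start && chkconfig ntpd on

# on every other node: point ntpd at the server, do a one-off sync, then run the daemon
echo "server 192.168.1.101" >> /etc/ntp.conf
ntpdate 192.168.1.101
service ntpd start && chkconfig ntpd on
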
3  mysql

4  hdfs + yarn (detailed configuration...; minimal sketch below)

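A minimal, single-NameNode sketch of item 4 (the HA layout with NN1/NN2/QJM from the role list needs considerably more); nn1, port 9000 and the replication factor are placeholders, and an unset fs.defaultFS is exactly what produces errors 3 and 4 in the check list above:

# write the minimal site files (run from the hadoop install directory)
cat > etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://nn1:9000</value></property>
</configuration>
EOF

cat > etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.replication</name><value>3</value></property>
</configuration>
EOF

cat > etc/hadoop/yarn-site.xml <<'EOF'
<configuration>
  <property><name>yarn.resourcemanager.hostname</name><value>nn1</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
</configuration>
EOF

# format once, then start HDFS and YARN
./bin/hdfs namenode -format
./sbin/start-dfs.sh
./sbin/start-yarn.sh

With fs.defaultFS set like this, the SecondaryNameNode and DataNode errors from the check list should no longer appear.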
 

posted on 2017-09-22 21:22  satyrs