Hadoop startup pitfalls: errors from start-all.sh
1. On CentOS, JAVA_HOME cannot be found
If your Linux distribution is CentOS, this one will bite you: start-all.sh complains that JAVA_HOME cannot be found. Fix it by setting JAVA_HOME explicitly in the Hadoop configuration file [${hadoop_home}/etc/hadoop/hadoop-env.sh]:
    export JAVA_HOME=/soft/jdk
This is a big CentOS pitfall: the inherited environment variable is not picked up over SSH, so JAVA_HOME must be configured by hand.
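The edit above can also be scripted. A minimal sketch, assuming this article's example paths (/soft/hadoop for Hadoop, /soft/jdk for the JDK); adjust both to your install:

```shell
# Point hadoop-env.sh at an absolute JAVA_HOME instead of the inherited variable.
# /soft/hadoop and /soft/jdk are this article's example paths, not defaults.
HADOOP_ENV=/soft/hadoop/etc/hadoop/hadoop-env.sh
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/soft/jdk|' "$HADOOP_ENV"
grep '^export JAVA_HOME=' "$HADOOP_ENV"    # verify the line was rewritten
```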
2. No NameNode process after startup
If everything looks normal after running start-all.sh, but jps shows that the NameNode process is missing from the list, don't panic.
See this fix: https://www.cnblogs.com/dongxiucai/p/9636177.html
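For context (my own summary, not a quote from that link): a missing NameNode after startup is often stale or corrupted NameNode metadata, and on a fresh or throwaway cluster the blunt recovery is to reformat. The paths below are assumptions; hadoop.tmp.dir defaults to /tmp/hadoop-${user.name} but is frequently overridden:

```shell
# WARNING: formatting the NameNode erases HDFS metadata; only do this on a
# test cluster where losing the data is acceptable.
stop-dfs.sh               # stop HDFS daemons first
rm -rf /tmp/hadoop-*      # clear hadoop.tmp.dir (default location; adjust if overridden)
hdfs namenode -format     # re-initialize NameNode storage
start-dfs.sh              # restart; jps should now list NameNode
```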
3. Be sure to set up passwordless SSH login
Configuring SSH:
1) Check that the SSH packages (openssh-server + openssh-clients + openssh) are installed:
   $> yum list installed | grep ssh
2) Check that the sshd daemon is running:
   $> ps -Af | grep sshd
3) Generate a public/private key pair on the client side:
   $> ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
4) This creates the ~/.ssh folder containing id_rsa (private key) and id_rsa.pub (public key).
5) Append the public key to ~/.ssh/authorized_keys (the file name and location are fixed):
   $> cd ~/.ssh
   $> cat id_rsa.pub >> authorized_keys
6) Change the permissions of authorized_keys to 644:
   $> chmod 644 authorized_keys
7) Test it:
   $> ssh localhost
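The steps above can be run as one short script on the client host (a sketch; it assumes ~/.ssh already exists and ~/.ssh/id_rsa does not, otherwise ssh-keygen will prompt before overwriting):

```shell
# Passwordless SSH to localhost in one shot (steps 3-7 above).
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          # step 3: key pair with empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # step 5: authorize our own public key
chmod 644 ~/.ssh/authorized_keys                  # step 6: permissions sshd will accept
ssh localhost hostname                            # step 7: should print the hostname with no password prompt
```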
4. The following errors are reported:

    This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
    15/01/23 20:23:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /hadoop/hadoop-2.6.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
    It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
    hd-m1]
    sed: -e expression #1, char 6: unknown option to `s'
    -c: Unknown cipher type 'cd'
This happens when the bitness of your OS, JDK, and Hadoop build do not match: some components are 32-bit and others are 64-bit.
To check the bitness of each, see: https://www.cnblogs.com/dongxiucai/p/9637403.html
The usual fix is to get the environment variables right. Add the following to /etc/profile:
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
Then run source /etc/profile to reload it (this re-reads the file; nothing is recompiled), and append the same two lines to the end of hadoop-env.sh.
This resolves the problem in most cases. If it still fails, you will need to install versions whose bitness actually matches.
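Before swapping anything out, it is worth confirming which component is the odd one out. A quick check with standard tools (the library path assumes HADOOP_HOME is set, as in the profile snippet above):

```shell
# Identify 32-bit vs 64-bit components.
uname -m                                          # x86_64 means a 64-bit OS
java -version 2>&1 | tail -n 1 || true            # "64-Bit Server VM" means a 64-bit JDK
file "${HADOOP_HOME}/lib/native/libhadoop.so.1.0.0" || true   # "ELF 64-bit" means 64-bit native libs
```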
5. The following error is reported:
    mkdir: cannot create directory '/soft/hadoop-2.7.3/logs': Permission denied
Hadoop has no permission to create its logs directory, because /soft is owned by root; ownership needs to be handed over to the hadoop user.
Note: here hadoop is the user name and /soft is the install directory; both will differ per setup.
Solution:
1) Switch to the root user:
   $> su root
2) Change the ownership of /soft, and remember to do it recursively:
   $> chown -R hadoop:hadoop /soft    # -R applies the change recursively
3) Check the result:
   drwxr-xr-x. 3 hadoop hadoop 4096 Aug 11 06:13 hadoop
   drwxr-xr-x. 3 hadoop hadoop 4096 Aug 11 06:20 jdk
The change succeeded.
6. After formatting the namenode, starting HDFS reports:

    WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
    Starting namenodes on []
    py_1: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-hjt-virtual-machine.out
    py_3: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-ubuntu.out
    py_2: starting namenode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-namenode-cyrg.out
    py_1: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-hjt-virtual-machine.out
    py_3: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-ubuntu.out
    py_2: starting datanode, logging to /soft/hadoop/hadoop/hadoop-2.5.0-cdh5.3.6/logs/hadoop-hadoop-datanode-cyrg.out

Notice that a NameNode was started on all three machines. I checked every configuration file over and over and found nothing wrong.
The real clue is in the machine names: py_1, py_2, py_3 contain underscores, and that is fatal; remember this. Renaming them to py01, py02, py03 solved it completely.
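The underlying rule (my paraphrase, worth knowing): hostnames must be valid DNS labels, i.e. letters, digits, and hyphens only, so the underscore in py_1 is illegal. A quick sketch that checks names against that pattern:

```shell
# Flag hostnames that are not valid DNS labels (letters, digits, hyphens;
# must start and end with a letter or digit). py_1 and py01 are this article's names.
for h in py_1 py01; do
  if echo "$h" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'; then
    echo "$h: ok"
  else
    echo "$h: invalid hostname, rename it"
  fi
done
```

Running this flags py_1 as invalid and accepts py01, matching the fix above.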
If this helped, click "recommend" so more people can see it.