Linux: searching for a string across multiple files (Hadoop 3 installation and debugging)

http://www.cnblogs.com/iLoveMyD/p/4281534.html

 

February 9, 2015, 14:36:38

# find <directory> -type f -name "*.c" | xargs grep "<strings>"

<directory> is the directory to search; omit it to search the current directory.
-type f restricts the search to regular files.
-name "*.c" matches only C source files, so binaries are skipped; leave it out to search every file.
<strings> is the string you are looking for.
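
Note that xargs splits its input on whitespace, so a path with a space in it (like the "Sorting icons.psd" file that shows up in the Hadoop output below) gets split and grep reports "No such file or directory". A safer sketch, assuming GNU find/xargs/grep, uses NUL-delimited paths, or simply recursive grep:

# find <directory> -type f -name "*.c" -print0 | xargs -0 grep "<strings>"
# grep -rn --include="*.c" "<strings>" <directory>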


The first problem: stopping the Hadoop 3 daemons aborts with a HADOOP_WORKERS error:

Stopping secondary namenodes [bigdata-server-02]
Last login: Thu Dec 21 17:18:39 CST 2017 on pts/0
ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting.
Stopping nodemanagers
Last login: Thu Dec 21 17:18:42 CST 2017 on pts/0
Stopping resourcemanager
Last login: Thu Dec 21 17:18:46 CST 2017 on pts/0
[root@bigdata-server-02 hadoop]# vim etc/hadoop/hadoop-env.sh


[root@bigdata-server-02 hadoop]# find . -type f | xargs grep HADOOP_WORKER
./sbin/workers.sh:#   HADOOP_WORKERS    File naming remote hosts.
./sbin/workers.sh:#   HADOOP_WORKER_SLEEP Seconds to sleep between spawning remote commands.
grep: ./share/hadoop/yarn/webapps/ui2/assets/images/datatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES} and execute command.</p>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/UnixShellAPI.html:<p>Connect to ${HADOOP_WORKER_NAMES} and execute command under the environment which does not support pdsh.</p>
./bin/hadoop:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/yarn:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/mapred:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./bin/hdfs:if [[ ${HADOOP_WORKER_MODE} = true ]]; then
./etc/hadoop/hadoop-env.sh:#export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"
./etc/hadoop/hadoop-user-functions.sh.example:#  tmpslvnames=$(echo "${HADOOP_WORKER_NAMES}" | tr ' ' '\n' )
./libexec/hadoop-config.cmd:  set HADOOP_WORKERS=%HADOOP_CONF_DIR%\%2
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVES HADOOP_WORKERS
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_NAMES HADOOP_WORKER_NAMES
./libexec/hadoop-config.sh:hadoop_deprecate_envvar HADOOP_SLAVE_SLEEP HADOOP_WORKER_SLEEP
./libexec/yarn-config.sh:  hadoop_deprecate_envvar YARN_SLAVES HADOOP_WORKERS
./libexec/hadoop-functions.sh:    HADOOP_WORKERS="${workersfile}"
./libexec/hadoop-functions.sh:    HADOOP_WORKERS="${HADOOP_CONF_DIR}/${workersfile}"
./libexec/hadoop-functions.sh:## @description  Connect to ${HADOOP_WORKERS} or ${HADOOP_WORKER_NAMES}
./libexec/hadoop-functions.sh:  if [[ -n "${HADOOP_WORKERS}" && -n "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh:    hadoop_error "ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting."
./libexec/hadoop-functions.sh:  elif [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh:    if [[ -n "${HADOOP_WORKERS}" ]]; then
./libexec/hadoop-functions.sh:      worker_file=${HADOOP_WORKERS}
./libexec/hadoop-functions.sh:    if [[ -z "${HADOOP_WORKER_NAMES}" ]] ; then
./libexec/hadoop-functions.sh:      tmpslvnames=$(echo ${HADOOP_WORKER_NAMES} | tr -s ' ' ,)
./libexec/hadoop-functions.sh:    if [[ -z "${HADOOP_WORKER_NAMES}" ]]; then
./libexec/hadoop-functions.sh:      HADOOP_WORKER_NAMES=$(sed 's/#.*$//;/^$/d' "${worker_file}")
./libexec/hadoop-functions.sh:## @description  Connect to ${HADOOP_WORKER_NAMES} and execute command
./libexec/hadoop-functions.sh:  local workers=(${HADOOP_WORKER_NAMES})
./libexec/hadoop-functions.sh:        HADOOP_WORKER_NAMES="$1"
./libexec/hadoop-functions.sh:        HADOOP_WORKER_MODE=true
[root@bigdata-server-02 hadoop]#
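
The hits point at the guard in libexec/hadoop-functions.sh: it aborts whenever both HADOOP_WORKERS and HADOOP_WORKER_NAMES are set. A simplified sketch of that check, plus the usual fix, assuming the duplicate export was added by hand to etc/hadoop/hadoop-env.sh:

# guard in libexec/hadoop-functions.sh (simplified from the grep hits above)
if [[ -n "${HADOOP_WORKERS}" && -n "${HADOOP_WORKER_NAMES}" ]]; then
  hadoop_error "ERROR: Both HADOOP_WORKERS and HADOOP_WORKER_NAMES were defined. Aborting."
fi

# fix: keep only one of the two, e.g. comment the extra export in hadoop-env.sh
# (the stock file ships it commented out, as the grep shows)
#export HADOOP_WORKERS="${HADOOP_CONF_DIR}/workers"

After that, the stop scripts should no longer abort.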




The second problem: on node hadoop3 the NameNode refuses to start; the log shows why:

[root@hadoop3 logs]# cat hadoop-root-namenode-hadoop3.log

 

 

2017-12-29 15:06:50,183 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-12-29 15:06:50,190 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:9870
at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1133)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1155)
at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1214)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1069)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:888)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:950)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:929)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1653)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1720)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317)
at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1120)
at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1151)
... 9 more
2017-12-29 15:06:50,192 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...

But this port is not configured anywhere in my files, so where does 9870 come from?

 

Use find to search for the string:

[root@hadoop3 hadoop]# find . -type f | xargs grep 9870
grep: ./share/hadoop/yarn/webapps/ui2/assets/images/datatables/Sorting: No such file or directory
grep: icons.psd: No such file or directory
./share/doc/hadoop/hadoop-yarn/hadoop-yarn-registry/apidocs/org/apache/hadoop/registry/client/types/AddressTypes.html: ["namenode.example.org", "9870"]
./share/doc/hadoop/api/org/apache/hadoop/registry/client/types/AddressTypes.html: ["namenode.example.org", "9870"]
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml: <value>0.0.0.0:9870</value>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html:<p>NameNode and DataNode each run an internal web server in order to display basic information about the current status of the cluster. With the default configuration, the NameNode front page is at <tt>http://namenode-name:9870/</tt>. It lists the DataNodes in the cluster and basic statistics of the cluster. The web interface can also be used to browse the file system (using &#x201c;Browse the file system&#x201d; link on the NameNode front page).</p></div>
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: &lt;value&gt;machine1.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: &lt;value&gt;machine2.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html: &lt;value&gt;machine3.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: &lt;value&gt;machine1.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: &lt;value&gt;machine2.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html: &lt;value&gt;machine3.example.com:9870&lt;/value&gt;
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/CHANGES.3.0.0-alpha1.html:<td align="left"> <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9870">HDFS-9870</a> </td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/3.0.0-alpha1/RELEASENOTES.3.0.0-alpha1.html:<p>The patch updates the HDFS default HTTP/RPC ports to non-ephemeral ports. The changes are listed below: Namenode ports: 50470 &#x2013;&gt; 9871, 50070 &#x2013;&gt; 9870, 8020 &#x2013;&gt; 9820 Secondary NN ports: 50091 &#x2013;&gt; 9869, 50090 &#x2013;&gt; 9868 Datanode ports: 50020 &#x2013;&gt; 9867, 50010 &#x2013;&gt; 9866, 50475 &#x2013;&gt; 9865, 50075 &#x2013;&gt; 9864</p><hr />
./share/doc/hadoop/hadoop-project-dist/hadoop-common/release/2.8.0/CHANGES.2.8.0.html:<td align="left"> <a class="externalLink" href="https://issues.apache.org/jira/browse/HDFS-9870">HDFS-9870</a> </td>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/CommandsManual.html:<pre class="source">$ bin/hadoop daemonlog -setlevel 127.0.0.1:9870 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
./share/doc/hadoop/hadoop-project-dist/hadoop-common/SingleCluster.html:<li>NameNode - <tt>http://localhost:9870/</tt></li>
./share/doc/hadoop/hadoop-project-dist/hadoop-common/ClusterSetup.html:<td align="left"> Default HTTP port is 9870. </td></tr>
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,085 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:06:50,193 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:48,931 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:java.net.BindException: Port in use: 0.0.0.0:9870
./logs/hadoop-root-namenode-hadoop3.log:2017-12-29 15:23:49,035 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Port in use: 0.0.0.0:9870
[root@hadoop3 hadoop]# xlc
Stopping namenodes on [hadoop3]

 

9870 is the default: dfs.namenode.http-address defaults to 0.0.0.0:9870 in hdfs-default.xml.

./share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml: <value>0.0.0.0:9870</value>

<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:9870</value>
  <description>
    The address and the base port where the dfs namenode web ui will listen on.
  </description>
</property>
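
Since 9870 is only the default, the "Port in use" error means something is already bound to it, most likely a NameNode (or another daemon) left over from an earlier start. A couple of follow-up checks, not part of the original session and assuming ss/lsof are installed; alternatively, override dfs.namenode.http-address in etc/hadoop/hdfs-site.xml to move the web UI to a free port:

# ss -lntp | grep 9870    # which process is listening on 9870?
# lsof -i :9870           # same question via lsof
# jps -l                  # list the running Hadoop/Java daemons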

 

Searching for 9870 inside vim:

/9870   search forward

?9870   search backward

n / N   jump to the next / previous match
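
vim can also search across many files from inside the editor; a small extra example (not in the original post), assuming a stock vim with quickfix support:

:vimgrep /9870/ etc/hadoop/*.xml
:copen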

 

 





posted @ 2017-12-21 17:44  papering