Cluster Notes
1. Kafka
Basic kafka shell commands
Start Kafka on nodes hadoop-2, hadoop-3, hadoop-5.
Start command:
kafka-server-start.sh /usr/local/kafka_2.11-0.10.0.1/config/server.properties > /usr/local/kafka_2.11-0.10.0.1/logs/logs &
1. Create a topic
kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --create --topic your.topic.name --partitions 30 --replication-factor 1
--partitions sets the topic's partition count; --replication-factor sets the number of replicas per partition (at most the number of brokers)
2. List topics
kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --list
3. Describe a topic
kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --describe --topic your.topic.name
4. Produce data to a topic
kafka-console-producer.sh --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092 --topic your.topic.name
5. Consume data
kafka-console-consumer.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --topic your.topic.name --from-beginning
6. Get the maximum (or minimum) offset of a topic partition
kafka-run-class.sh kafka.tools.GetOffsetShell --topic kafkademo --time -1 --broker-list hadoop-2:9092,hadoop-4:9092,hadoop-5:9092 --partitions 0
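GetOffsetShell prints one `topic:partition:offset` line per partition; with `--time -1` the offset is the latest, with `--time -2` the earliest. A minimal sketch of pulling the offset out of such a line (the sample value is made up):

```shell
# GetOffsetShell output has the form topic:partition:offset.
# Sample line (hypothetical offset value) and extraction of the offset field:
line="kafkademo:0:123456"
offset=${line##*:}   # strip everything up to the last ':'
echo "$offset"       # 123456
```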
7. Increase a topic's partition count (can only be increased, never decreased)
kafka-topics.sh --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --alter --topic your.topic.name --partitions 40
8. Check Kafka consumer group progress
kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper hadoop-2:2181,hadoop-3:2181,hadoop-5:2181 --group pv
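Every kafka-topics.sh call above repeats the same ZooKeeper host list; a small sketch that builds the connect string once (node names taken from these notes):

```shell
# Build the ZooKeeper connect string used by the kafka-topics.sh commands above.
ZK_NODES=(hadoop-2 hadoop-3 hadoop-5)
ZK_CONNECT=$(printf '%s:2181,' "${ZK_NODES[@]}")
ZK_CONNECT=${ZK_CONNECT%,}   # drop the trailing comma
echo "$ZK_CONNECT"           # hadoop-2:2181,hadoop-3:2181,hadoop-5:2181
# e.g.: kafka-topics.sh --zookeeper "$ZK_CONNECT" --list
```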
2. Scripts
1) Shutdown
#!/bin/bash
ssh hadoop-2 "/usr/local/spark/spark-2.2.1-bin-hadoop2.7/sbin/stop-all.sh"
sleep 10s
ssh hadoop-3 "/usr/local/hadoop/hadoop-2.7.3/sbin/stop-yarn.sh"
sleep 30s
/usr/local/hadoop/hadoop-2.7.3/sbin/stop-dfs.sh
sleep 30s
ssh hadoop-5 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s
ssh hadoop-3 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s
ssh hadoop-2 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh stop"
sleep 3s
2) Startup (starthadoop.sh)
#!/bin/bash
ssh hadoop-2 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s
ssh hadoop-3 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s
ssh hadoop-5 "/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start"
sleep 5s
/usr/local/hadoop/hadoop-2.7.3/sbin/start-dfs.sh
sleep 30s
ssh hadoop-3 "/usr/local/hadoop/hadoop-2.7.3/sbin/start-yarn.sh"
sleep 30s
ssh hadoop-2 "/usr/local/spark/spark-2.2.1-bin-hadoop2.7/sbin/start-all.sh"
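After starthadoop.sh finishes, it helps to confirm the expected daemons actually came up. A sketch of such a check; in practice JPS_OUT would come from `ssh <node> jps`, here a sample string stands in for it:

```shell
#!/bin/bash
# Verify that expected daemons appear in jps output.
# Sample output stands in for: JPS_OUT=$(ssh hadoop-2 jps)
JPS_OUT="1234 QuorumPeerMain
2345 DataNode
3456 JournalNode"
for daemon in QuorumPeerMain DataNode JournalNode; do
  if echo "$JPS_OUT" | grep -qw "$daemon"; then
    echo "$daemon up"
  else
    echo "$daemon MISSING"
  fi
done
```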
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Ⅰ) Start the cluster
virsh start hadoop-1
virsh start oracleSigle
1. Start MySQL (MariaDB) on 1.3
systemctl start mariadb
Ⅱ) Start Oracle on 1.238
1) First log in to the server and switch to the oracle user:
su - oracle
2) Next, check the Oracle listener's status with the lsnrctl status command:
lsnrctl status
3) If that output shows the listener is not running, start it with lsnrctl start:
lsnrctl start
A) Start the Oracle instance
sqlplus /nolog
SQL> conn as sysdba //prompts for username: test, password: test
B) Then start the instance with the startup command:
SQL> startup
C) Stop the Oracle instance
Stop the instance with the shutdown command:
SQL> shutdown
Ⅲ) Start the big data platforms
1) Run the script on 1.115 to start the ZooKeeper cluster (the script also brings up HDFS, YARN, and Spark)
./starthadoop.sh
2) Start Kafka on nodes hadoop-2, hadoop-3, hadoop-5
cd /usr/local/kafka_2.11-0.10.0.1/bin //change to this directory
Start:
./kafka-server-start.sh /usr/local/kafka_2.11-0.10.0.1/config/server.properties > /usr/local/kafka_2.11-0.10.0.1/logs/logs &
3) Start HBase on nodes hadoop-1, hadoop-2, hadoop-3, hadoop-4, hadoop-5
cd /usr/local/hbase-1.2.4/bin/ //change to this directory
./start-hbase.sh
Enter the HBase shell:
pwd
[root@hadoop-1 bin]# /usr/local/hbase-1.2.4/bin/hbase shell
hadoop dfsadmin -safemode leave //leave safe mode
Restart HBase
----------------------------------------------------------------------------------------------
Shell operations
1) tail -n 1000 <file> //show the last 1000 lines of a file
2) free -m //show memory in MB: total = total memory / used = memory in use / free = idle memory / shared = memory shared by multiple processes
3) rm -rf <dir> //force-delete a directory and its contents
4) rm -f <file> //force-delete a file
5) sudo scp jdk-8u172-linux-x64.tar.gz 192.168.1.172:/usr/ //send a file to another machine
6) find /etc -name '*srm*'
cp [-adfilprsu] source destination //e.g. cp zoo_sample.cfg zoo.cfg
scp /home/space/music/1.mp3 root@www.runoob.com:/home/root/others/music //copy to a remote machine
e.g.: scp -r zoo.cfg weekend03:/usr/local/zookeeper-3.4.8/conf
# tail -f catalina.out //follow a growing log file
9) echo $JAVA_HOME //show the Java installation path
10) history 10 //show the 10 most recent commands
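The free -m columns described in 2) can be pulled out with awk; a sketch against a captured sample of the command's output (the numbers are illustrative):

```shell
# Extract the total and free columns from `free -m` output.
# A captured sample stands in for: FREE_OUT=$(free -m)
FREE_OUT="              total        used        free      shared  buff/cache   available
Mem:           7821        3102         512         123        4206        4290
Swap:          2047           0        2047"
echo "$FREE_OUT" | awk '/^Mem:/{print "total=" $2 " free=" $4}'
# total=7821 free=512
```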
GitHub username: "kkk", password: "461476321za.", email: 1154107593@qq.com
A) virsh shutdown <domain> //shut down a virtual machine
B) mv starthadoop.sh /root/ //move the file to the /root directory
C) chmod 777 file //grant full permissions on a file
D) cp spark-env.sh.template spark-env.sh //cp copies files or directories
E) yum install wget //install a package with yum
F) vi +35 ../config/elasticsearch.yml //open a file at a given line for editing
G) yum list installed | grep ruby //grep the package name to check whether it is installed
rpm -qa | grep ruby
dpkg -l | grep ruby
To list all installed packages, e.g. with yum: yum list installed
H) Uninstall: bin/kibana-plugin remove x-pack //x-pack is the package being removed
1) netstat -nltp //list listening ports and their services (concise output; recommended)
2) netstat -anp //list open services and connected hosts (verbose; not recommended)
3) vncserver :1 //start a VNC server session
4) yum list installed | grep java //list the Java packages installed on this Linux host
5) rm -rf kafka_2.11-0.10.0.1/ //delete a directory and all its contents; use with care, this cannot be undone
6) netstat -tlup | grep vnc //find the VNC process
7) iptables -I INPUT -p tcp --dport 5901 -j ACCEPT //open the VNC port (local setup)
8) ll -h //list files with human-readable sizes
-------------------------------------Starting cluster services manually------------------------------------------------
1. Start ZooKeeper
---run on hadoop-2, hadoop-3, hadoop-5
/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start
/usr/local/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
2. Start the Hadoop cluster
---journalnode nodes (hadoop-2, hadoop-3, hadoop-5)
hadoop-daemon.sh start journalnode
---namenode nodes (hadoop-1, hadoop-4)
hadoop-daemon.sh start namenode
---on hadoop-1 and hadoop-4
hadoop-daemon.sh start zkfc
---datanode nodes (hadoop-1, hadoop-2, hadoop-3, hadoop-4, hadoop-5)
hadoop-daemon.sh start datanode
---on hadoop-3
start-yarn.sh
3. Start HBase
---on hadoop-2
start-hbase.sh
http://192.168.1.116:16010
---on hadoop-2, to stop the HMaster
hbase-daemon.sh stop master
//Fix for errors when running list in the hbase shell:
Start the regionservers first, then the HMaster.
On each regionserver: ./hbase-daemon.sh start regionserver
On the master: ./bin/hbase-daemon.sh start master
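The recovery order above can be scripted; a sketch that is dry-run by default (it only prints the ssh commands; change RUN to plain "ssh" to execute, assuming passwordless ssh and the paths used in these notes):

```shell
#!/bin/bash
# Start every regionserver first, then the HMaster on hadoop-2.
# Dry run by default: RUN prefixes each command with echo.
RUN="echo ssh"   # change to RUN="ssh" to actually run
HBASE_BIN=/usr/local/hbase-1.2.4/bin
for rs in hadoop-1 hadoop-2 hadoop-3 hadoop-4 hadoop-5; do
  $RUN "$rs" "$HBASE_BIN/hbase-daemon.sh start regionserver"
done
$RUN hadoop-2 "$HBASE_BIN/hbase-daemon.sh start master"
```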