Original article. When reprinting, please credit: http://www.cnblogs.com/tovin/p/3966570.html

 

I. Overall Storm cluster deployment

  The cluster uses 6 machines in total:

    Storm uses 3 nodes (nimbus on node01; supervisors on node02 and node03)

    ZooKeeper uses 3 nodes (node04, node05, node06)
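A three-node ZooKeeper ensemble is a deliberate choice: ZooKeeper keeps serving only while a majority (floor(N/2)+1) of its servers are up, so 3 nodes tolerate one failure. A minimal sketch of the arithmetic (the `quorum` helper is illustrative, not part of ZooKeeper):

```shell
# Quorum size for a ZooKeeper ensemble of N servers: the ensemble stays
# available as long as at least quorum(N) servers are up, so N=3 tolerates
# one failure and N=5 tolerates two.
quorum() {
    echo $(( $1 / 2 + 1 ))
}
```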

  

II. ZooKeeper installation

  1. Use the latest stable release, 3.4.6; download: http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.6.tar.gz

tar zxvf zookeeper-3.4.6.tar.gz 
ln -s zookeeper-3.4.6 zookeeper
rm  zookeeper-3.4.6.tar.gz
cp zookeeper/conf/zoo_sample.cfg zookeeper/conf/zoo.cfg

  2. Edit zoo.cfg. For this cluster, only dataDir and the server.N entries below need changing to match your environment; the remaining settings can stay at their defaults.

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=node06:2888:3888
server.2=node05:2888:3888
server.3=node04:2888:3888

 

 3. After completing steps 1–2 on every ZooKeeper node, create a myid file in the dataDir configured in zoo.cfg. Its content is the id that follows "server." in the matching server.N entry (e.g. node06's myid file contains 1, because of server.1=node06:2888:3888).

echo "1" > zookeeper/data/myid    (node06)
echo "2" > zookeeper/data/myid    (node05)
echo "3" > zookeeper/data/myid    (node04)

 

   4. Start ZooKeeper on every node

    bin/zkServer.sh start

    Verify with jps that the QuorumPeerMain process started successfully.

     

    Start a client connection to test:  bin/zkCli.sh -server node06:2181

    At this point, ZooKeeper is installed successfully.

 

III. Storm installation

  Install the latest release, 0.9.2; download: http://mirror.bit.edu.cn/apache/incubator/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz

  1. Prerequisites (install Java and Python yourself)

       Java 6 or later
       Python 2.6.6 or later
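The prerequisite check can be scripted with a dotted-version comparison. A sketch (the `ver_ge` helper is an assumption, not part of Storm, and relies on GNU `sort -V`):

```shell
# Hypothetical helper: ver_ge A B succeeds when version A >= version B.
# Relies on GNU "sort -V" (version sort).
ver_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# Example prerequisite check for the Python requirement:
#   ver_ge "$(python -c 'import platform; print(platform.python_version())')" 2.6.6
```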

  2. Install the Storm package

 tar zxvf apache-storm-0.9.2-incubating.tar.gz 
 ln -s apache-storm-0.9.2-incubating storm
 rm apache-storm-0.9.2-incubating.tar.gz

 

  3. Edit conf/storm.yaml. For this cluster, only the settings shown below need changing; the rest can use the defaults. (The file uses YAML syntax, which is indentation-sensitive; review the YAML rules first so a formatting mistake does not cause startup to fail.)

########### These MUST be filled in for a storm configuration
 storm.zookeeper.servers:
     - "node06"
     - "node05"
     - "node04"

# The worker nodes need to know which machine is the master in order to download
# topology jars and confs
 nimbus.host: "node01"

# The Nimbus and Supervisor daemons require a directory on the local disk to store
# small amounts of state (like jars, confs, and things like that). You should create
# that directory on each machine, give it proper permissions, and then fill in the
# directory location using this config.
 storm.local.dir: "/usr/local/storm/data" 

# For each worker machine, you configure how many workers run on that machine with
# this config. Each worker uses a single port for receiving messages, and this
# setting defines which ports are open for use. If you define five ports here, then
# Storm will allocate up to five workers to run on this machine. If you define three
# ports, Storm will only run up to three. By default, this setting is configured to
# run 4 workers on the ports 6700, 6701, 6702, and 6703.
 supervisor.slots.ports: 
   - 6700 
   - 6701

 ui.port: 6066
# ##### These may optionally be filled in:
#    
## List of custom serializations
# topology.kryo.register:
#     - org.mycompany.MyType
#     - org.mycompany.MyType2: org.mycompany.MyType2Serializer
#
## List of custom kryo decorators
# topology.kryo.decorators:
#     - org.mycompany.MyDecorator
#
## Locations of the drpc servers
# drpc.servers:
#     - "server1"
#     - "server2"

## Metrics Consumers
# topology.metrics.consumer.register:
#   - class: "backtype.storm.metric.LoggingMetricsConsumer"
#     parallelism.hint: 1
#   - class: "org.mycompany.MyMetricsConsumer"
#     parallelism.hint: 1
#     argument:
#       - endpoint: "metrics-collector.mycompany.org"

  4. After completing steps 1–3 on all Storm nodes:

    Start nimbus on node01:  bin/storm nimbus >/dev/null 2>&1 &

    Start supervisors on node02 and node03:  bin/storm supervisor >/dev/null 2>&1 &

    Start the UI on node01:  bin/storm ui >/dev/null 2>&1 &
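Rather than discarding daemon output with >/dev/null, a small wrapper can background each daemon and keep its output in a per-daemon log file, which helps when a startup fails. A sketch (the `start_daemon` helper and the logs/ path are assumptions, not Storm conventions):

```shell
# Hypothetical wrapper: run a command in the background via nohup and
# capture its output in logs/<name>.out instead of throwing it away.
start_daemon() {
    name="$1"; shift
    mkdir -p logs
    nohup "$@" > "logs/$name.out" 2>&1 &
}

# e.g. on node01:  start_daemon nimbus bin/storm nimbus
#      on node02:  start_daemon supervisor bin/storm supervisor
```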

  5. Open node01:6066 in a browser to view the Storm UI.

    

  


 

 

posted on 2014-09-11 16:04 by tovin