Basic environment:

[java]

RedHat 6.4
JDK 1.7 (environment variables already configured)
Three hosts with the IPs 192.168.0.2, 192.168.0.3, and 192.168.0.4
[/java]

The configuration procedure is as follows:
1. Extract the ZooKeeper tarball; assume the extracted directory is /opt/zookeeper-3.4.6.
2. Configure the environment variable:

[java]
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6
[/java]
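To make the variable survive re-login, the export can go in /etc/profile or ~/.bashrc; adding the ZooKeeper bin directory to the PATH is an optional convenience, not something the steps below require:

```shell
# Persist the setting (e.g. in /etc/profile or ~/.bashrc) and, optionally,
# put the ZooKeeper scripts on the PATH:
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
```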

3. Edit $ZOOKEEPER_HOME/conf/zoo.cfg (the distribution ships only zoo_sample.cfg; copy it to zoo.cfg first) so that it reads:

[java]
# The number of milliseconds of each tick
# (the heartbeat interval between the servers and their clients)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# (a connection times out after at most 10 heartbeats)
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# (beyond this the request is considered timed out)
syncLimit=5
# server.N=IP:2888:3888 — N is the server's ordinal, followed by its IP;
# 2888 is the port followers use to connect to the leader, and 3888 is the
# port the servers use to talk to each other during leader election
# (e.g. after the current leader goes down)
server.1=192.168.0.2:2888:3888
server.2=192.168.0.3:2888:3888
server.3=192.168.0.4:2888:3888
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# (the directories where ZooKeeper keeps its data and transaction logs)
dataDir=/opt/zookeeper_data/data
dataLogDir=/opt/zookeeper_data/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
[/java]

4. Use scp to copy the directory to the other two machines.
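Step 4 can be scripted with a small loop; a dry-run sketch (the root@ login is an assumption — adjust the user to your setup, and remove the leading echo to actually copy):

```shell
# Dry-run sketch of step 4: print the scp command for each remote node.
# Remove the leading "echo" to perform the copy for real.
SRC=/opt/zookeeper-3.4.6
for host in 192.168.0.3 192.168.0.4; do
    echo scp -r "$SRC" "root@$host:/opt/"
done
```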
5. On every machine, create the directories referenced by the configuration:
dataDir=/opt/zookeeper_data/data
dataLogDir=/opt/zookeeper_data/logs
Then create a file named myid under /opt/zookeeper_data/data containing that machine's ordinal: for example, the machine that corresponds to server.1 in zoo.cfg gets a myid file containing 1.
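Step 5 can be scripted; a minimal sketch (it writes under a temporary directory so it can be dry-run anywhere — on the real nodes set BASE=/opt/zookeeper_data, and set SERVER_ID to 1, 2, or 3 to match this host's server.N line in zoo.cfg):

```shell
# Sketch of step 5: create the data/log directories and the myid file.
# On a real node use BASE=/opt/zookeeper_data and the matching SERVER_ID.
BASE=$(mktemp -d)   # temporary stand-in for /opt/zookeeper_data in this sketch
SERVER_ID=1
mkdir -p "$BASE/data" "$BASE/logs"
echo "$SERVER_ID" > "$BASE/data/myid"
cat "$BASE/data/myid"
```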
6. Start the ZooKeeper server on each of the three machines:

[java]
$ZOOKEEPER_HOME/bin/zkServer.sh start
[/java]

7. Verify that the startup succeeded:

[java]
$ZOOKEEPER_HOME/bin/zkServer.sh status
[/java]

You will see one of two results:
Mode: leader or Mode: follower
In a healthy ensemble exactly one node reports leader and the others report follower.

Building a fully distributed ZooKeeper 3.4.6 cluster