zookeeper (3.8.4)

IP4, IP5, IP6 — zookeeper
2181: default port for client connections to the ZooKeeper ensemble
2888: Followers sync data from the Leader over this port to keep the cluster's data consistent
3888: port used by ensemble members for Leader election; when the current Leader fails, the remaining nodes elect a new Leader over this port
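Once the cluster is up, the three ports can be probed from any host. A minimal sketch using bash's `/dev/tcp` pseudo-device (a bash feature, so run it with `bash`, not `sh`; the hostnames are the placeholders used later in this guide):

```shell
# check_port: succeed if a TCP connection to host:port can be opened.
# The subshell opens fd 3 via bash's /dev/tcp and closes it on exit.
check_port() { (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; }

for host in storageServer1 storageServer2 storageServer3; do
  for port in 2181 2888 3888; do
    if check_port "$host" "$port"; then
      echo "$host:$port open"
    else
      echo "$host:$port closed"
    fi
  done
done
```

On a healthy cluster 2181 and 3888 should be open everywhere, while 2888 is only listening on the current Leader.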

Create the user (skip this step if the user already exists)

(run on all three hosts: IP4, IP5, IP6)

# Add the user (and its group)
adduser app

# Set its password
passwd app

# Add the user to the app group
usermod -aG app app

# Give the user ownership of the install directory
chown -R app:app /data

# Switch to the new user
su app
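The heading says to skip this step when the user already exists; that check can be scripted so the step is safe to re-run. A sketch (uses `useradd -m` as a non-interactive alternative to `adduser`):

```shell
# user_exists: true if the given account already exists
user_exists() { id -u "$1" >/dev/null 2>&1; }

if user_exists app; then
  echo "user app already exists, skipping"
elif [ "$(id -u)" -eq 0 ]; then
  useradd -m app     # non-interactive; set the password afterwards with passwd
else
  echo "re-run as root to create user app"
fi
```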

Configure hosts (on every server)

# Edit the hosts file
vim /etc/hosts

IP4 storageServer1
IP5 storageServer2
IP6 storageServer3
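To confirm the entries took effect, resolve each name with `getent`, which consults `/etc/hosts` as well as DNS:

```shell
# Verify that all three ensemble hostnames resolve on this machine
for h in storageServer1 storageServer2 storageServer3; do
  if getent hosts "$h" >/dev/null; then
    echo "$h resolves"
  else
    echo "$h does NOT resolve"
  fi
done
```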

Configure the JDK

# Edit the environment variables
vim /etc/profile

# Java environment variables
export JAVA_HOME=/data/jdk1.8.0_231
export JRE_HOME=/data/jdk1.8.0_231/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

# Apply the changes
source /etc/profile
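A quick sanity check that `JAVA_HOME` points at a usable JDK before moving on (a small helper sketch, not part of the original steps):

```shell
# check_jdk: true if the directory contains an executable bin/java
check_jdk() { [ -n "$1" ] && [ -x "$1/bin/java" ]; }

if check_jdk "$JAVA_HOME"; then
  "$JAVA_HOME/bin/java" -version
else
  echo "JAVA_HOME is unset or does not point at a JDK"
fi
```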


Zookeeper cluster configuration

(run on all three hosts: IP4, IP5, IP6)

  • Unpack zookeeper
cd /data
tar -zxvf apache-zookeeper-3.8.4-bin.tar.gz
# The archive unpacks to apache-zookeeper-3.8.4-bin; rename it to match
# the /data/zookeeper paths used in the rest of this guide
mv apache-zookeeper-3.8.4-bin zookeeper
  • Configure the Zookeeper environment variables
vim /etc/profile

export ZK_HOME=/data/zookeeper
export PATH=$PATH:$ZK_HOME/bin

source /etc/profile

  • Edit the zookeeper configuration file
# Copy the sample configuration file
cd /data/zookeeper/conf/

cp zoo_sample.cfg zoo.cfg

vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/zookeeper/tmp/data
dataLogDir=/data/zookeeper/tmp/log
# the port at which the clients will connect
clientPort=2181

server.1=storageServer1:2888:3888
server.2=storageServer2:2888:3888
server.3=storageServer3:2888:3888
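The three `server.N` lines define the voting ensemble. ZooKeeper stays available as long as a majority of servers, floor(n/2) + 1, are up, which is why a 3-node ensemble tolerates one failure (and a 5-node ensemble two). The arithmetic as a tiny sketch:

```shell
# Majority quorum for an ensemble of n voting servers: floor(n/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

echo "3-node ensemble needs $(quorum 3) servers up"   # tolerates 1 failure
echo "5-node ensemble needs $(quorum 5) servers up"   # tolerates 2 failures
```

This is also why even ensemble sizes buy nothing: 4 nodes still need 3 up, so they tolerate only one failure, same as 3 nodes.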
  • Create the data and log directories
mkdir -p /data/zookeeper/tmp/data
mkdir -p /data/zookeeper/tmp/log

  • Copy the configured zookeeper directory to the other servers
scp -r /data/zookeeper app@IP5:/data/zookeeper
scp -r /data/zookeeper app@IP6:/data/zookeeper

  • On each of the three servers (IP4, IP5, IP6), create a myid file in the dataDir
cd /data/zookeeper/tmp/data/

touch myid
# Each server writes its own single id into its myid file
# (do NOT write all three ids into one file):
echo 1 > myid   # on storageServer1 (IP4) only
echo 2 > myid   # on storageServer2 (IP5) only
echo 3 > myid   # on storageServer3 (IP6) only
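Instead of remembering which `echo` to run on which host, the id can be derived from the hostname. A sketch, assuming hostnames match the `server.N` lines in zoo.cfg:

```shell
# Map this host's name to its server id (must match the server.N lines in zoo.cfg)
myid_for() {
  case "$1" in
    storageServer1) echo 1 ;;
    storageServer2) echo 2 ;;
    storageServer3) echo 3 ;;
    *) return 1 ;;
  esac
}

if id=$(myid_for "$(hostname)"); then
  echo "$id" > /data/zookeeper/tmp/data/myid
else
  echo "hostname $(hostname) is not in the ensemble map"
fi
```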

systemd unit file [run on every host]

vim /usr/lib/systemd/system/zookeeperd.service

[Unit]
Description=Zookeeper antiFraud service
After=network.target remote-fs.target
StartLimitIntervalSec=0

[Service]
User=app
Type=forking
Restart=always
RestartSec=1
Environment=JAVA_HOME=/data/jdk1.8.0_231
WorkingDirectory=/data/zookeeper
ExecStart=/data/zookeeper/bin/zkServer.sh start
ExecStop=/data/zookeeper/bin/zkServer.sh stop

[Install]
WantedBy=multi-user.target

  • Enable the service at boot via systemctl
systemctl daemon-reload
systemctl enable zookeeperd.service
systemctl list-unit-files | grep zookeeperd

  • Run the following so that the app user can manage the service via systemctl
echo "app ALL = (root) NOPASSWD:/usr/bin/systemctl start zookeeperd,/usr/bin/systemctl restart zookeeperd,/usr/bin/systemctl stop zookeeperd,/usr/bin/systemctl reload zookeeperd,/usr/bin/systemctl status zookeeperd" | sudo tee /etc/sudoers.d/app
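After starting the service on all three hosts, each node's role can be checked with `zkServer.sh status`: exactly one node should report `leader` and the others `follower`. A small helper sketch for extracting the role line:

```shell
# zk_mode: pull the role (leader / follower / standalone) out of
# `zkServer.sh status` output
zk_mode() { sed -n 's/^Mode: //p'; }

if [ -x /data/zookeeper/bin/zkServer.sh ]; then
  sudo systemctl start zookeeperd
  /data/zookeeper/bin/zkServer.sh status | zk_mode
else
  echo "zkServer.sh not found on this machine"
fi
```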