ZooKeeper Installation

傻男人 · 1 year ago


Installing with Docker

Standalone mode

Start
docker run -d -p 2181:2181 --name single-zookeeper --restart always zookeeper
Verify that ZooKeeper started successfully

Enter the ZooKeeper container and check the server status:

docker exec -it ff208526c196 bash

root@ff208526c196:/apache-zookeeper-3.7.0-bin# bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: standalone

Cluster mode

Create the directories
mkdir -p /mount/zookeeper/zookeeper-cluster;
mkdir -p /mount/zookeeper/zookeeper-cluster/node1;
mkdir -p /mount/zookeeper/zookeeper-cluster/node2;
mkdir -p /mount/zookeeper/zookeeper-cluster/node3;
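The four mkdir calls above can be collapsed into one loop; a sketch that uses a demo root under /tmp so it can run anywhere (substitute /mount/zookeeper/zookeeper-cluster on the real host). Creating the data, datalog, and logs subdirectories up front is an assumption that matches the volume mounts used later:

```shell
# Create the per-node directories for three cluster nodes in one loop.
# ROOT is a demo path; use /mount/zookeeper/zookeeper-cluster for real use.
ROOT=/tmp/zookeeper-demo/zookeeper-cluster
for i in 1 2 3; do
  mkdir -p "$ROOT/node$i/data" "$ROOT/node$i/datalog" "$ROOT/node$i/logs"
done
ls "$ROOT"
```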
Start

Create the cluster network

[root@worker10-152 zookeeper-cluster]# docker network create --driver bridge --subnet=172.17.0.0/16 --gateway=172.17.0.1 zookeeper-network
Error response from daemon: Pool overlaps with other one on this address space
[root@worker10-152 zookeeper-cluster]# docker network create --driver bridge --subnet=172.15.0.0/16 --gateway=172.15.0.1 zookeeper-network
7d8b8cd9f9eee66a205f95a52621bf1a0e672b14ea9b24308395e359db8b945c

"Error response from daemon: Pool overlaps with other one on this address space" means the requested subnet is already in use, so pick a different one. Note that 172.15.0.0/16 is outside the RFC 1918 private range 172.16.0.0/12; a subnet such as 172.18.0.0/16 would be more conventional.
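For /16 subnets, the overlap can be spotted before calling docker network create by comparing the first two octets. A rough sketch, only valid for /16 masks (a real check must honor the full prefix length):

```shell
# Compare an intended /16 subnet against an existing one.
# For /16 masks, two pools overlap exactly when the first two octets match.
existing=172.17.0.0
wanted=172.17.0.0
if [ "${existing%.*.*}" = "${wanted%.*.*}" ]; then
  echo "overlap: pick another subnet"
else
  echo "ok"
fi
```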

Pseudo-cluster mode: the cluster is built on a single machine

docker run -d -p 2181:2181 -p 2888:2888 -p 3888:3888 --name zookeeper_node1 --privileged --restart always \
--network zookeeper-network \
-v /mount/zookeeper/zookeeper-cluster/node1/data:/data \
-v /mount/zookeeper/zookeeper-cluster/node1/datalog:/datalog \
-v /mount/zookeeper/zookeeper-cluster/node1/logs:/logs \
-e ZOO_MY_ID=1 \
-e "ZOO_SERVERS=server.1=172.16.20.156:2888:3888;2181 server.2=172.16.20.156:2889:3889;2182 server.3=172.16.20.156:2890:3890;2183" zookeeper;

docker run -d -p 2182:2181 -p 2889:2888 -p 3889:3888 --name zookeeper_node2 --privileged --restart always \
--network zookeeper-network \
-v /mount/zookeeper/zookeeper-cluster/node2/data:/data \
-v /mount/zookeeper/zookeeper-cluster/node2/datalog:/datalog \
-v /mount/zookeeper/zookeeper-cluster/node2/logs:/logs \
-e ZOO_MY_ID=2 \
-e "ZOO_SERVERS=server.1=172.16.20.156:2888:3888;2181 server.2=172.16.20.156:2889:3889;2182 server.3=172.16.20.156:2890:3890;2183" zookeeper;

docker run -d -p 2183:2181 -p 2890:2888 -p 3890:3888 --name zookeeper_node3 --privileged --restart always \
--network zookeeper-network \
-v /mount/zookeeper/zookeeper-cluster/node3/data:/data \
-v /mount/zookeeper/zookeeper-cluster/node3/datalog:/datalog \
-v /mount/zookeeper/zookeeper-cluster/node3/logs:/logs \
-e ZOO_MY_ID=3 \
-e "ZOO_SERVERS=server.1=172.16.20.156:2888:3888;2181 server.2=172.16.20.156:2889:3889;2182 server.3=172.16.20.156:2890:3890;2183" zookeeper;
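The three near-identical docker run commands above differ only in the node number and the host-side ports, so they can be generated from one loop. A sketch that only echoes the commands, so it is safe to run anywhere; HOST_IP, the paths, and the network name mirror the commands above and are assumptions to adjust for your host:

```shell
# Generate the three docker run commands for the pseudo-cluster.
# Echoed instead of executed so the sketch has no side effects.
HOST_IP=172.16.20.156
SERVERS="server.1=${HOST_IP}:2888:3888;2181 server.2=${HOST_IP}:2889:3889;2182 server.3=${HOST_IP}:2890:3890;2183"
CMDS=""
for i in 1 2 3; do
  # Host-side ports: client 2181/2182/2183, follower 2888/2889/2890, election 3888/3889/3890.
  cmd="docker run -d -p $((2180 + i)):2181 -p $((2887 + i)):2888 -p $((3887 + i)):3888"
  cmd="$cmd --name zookeeper_node${i} --privileged --restart always --network zookeeper-network"
  cmd="$cmd -v /mount/zookeeper/zookeeper-cluster/node${i}/data:/data"
  cmd="$cmd -v /mount/zookeeper/zookeeper-cluster/node${i}/datalog:/datalog"
  cmd="$cmd -v /mount/zookeeper/zookeeper-cluster/node${i}/logs:/logs"
  cmd="$cmd -e ZOO_MY_ID=${i} -e \"ZOO_SERVERS=${SERVERS}\" zookeeper"
  CMDS="${CMDS}${cmd}
"
  echo "$cmd"
done
```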

Installing with docker stack deploy or docker-compose

This reuses the network created in the Docker installation above.

Create the directories
mkdir -p /mount/zookeeper/zookeeper-cluster;
mkdir -p /mount/zookeeper/zookeeper-cluster/node4;
mkdir -p /mount/zookeeper/zookeeper-cluster/node5;
mkdir -p /mount/zookeeper/zookeeper-cluster/node6;
mkdir -p /mount/zookeeper/zookeeper-compose
Write the configuration file
[root@worker10-152 /]# cd /mount/zookeeper/zookeeper-compose
[root@worker10-152 zookeeper-compose]# vim docker-compose.yml

Add the following configuration:

version: '3.5'
services:
  zookeeper_node4:
    image: zookeeper
    restart: always
    privileged: true
    hostname: zookeeper_node4
    ports:
      - 2181:2181
    volumes: # mount data directories
      - /mount/zookeeper/zookeeper-cluster/node4/data:/data
      - /mount/zookeeper/zookeeper-cluster/node4/datalog:/datalog
      - /mount/zookeeper/zookeeper-cluster/node4/logs:/logs
    environment:
      ZOO_MY_ID: 4
      ZOO_SERVERS: server.4=0.0.0.0:2888:3888;2181 server.5=zookeeper_node5:2888:3888;2181 server.6=zookeeper_node6:2888:3888;2181
    networks:
      default:
        ipv4_address: 172.15.0.10

  zookeeper_node5:
    image: zookeeper
    restart: always
    privileged: true
    hostname: zookeeper_node5
    ports:
      - 2182:2181
    volumes: # mount data directories
      - /mount/zookeeper/zookeeper-cluster/node5/data:/data
      - /mount/zookeeper/zookeeper-cluster/node5/datalog:/datalog
      - /mount/zookeeper/zookeeper-cluster/node5/logs:/logs
    environment:
      ZOO_MY_ID: 5
      ZOO_SERVERS: server.4=zookeeper_node4:2888:3888;2181 server.5=0.0.0.0:2888:3888;2181 server.6=zookeeper_node6:2888:3888;2181
    networks:
      default:
        ipv4_address: 172.15.0.11

  zookeeper_node6:
    image: zookeeper
    restart: always
    privileged: true
    hostname: zookeeper_node6
    ports:
      - 2183:2181
    volumes: # mount data directories
      - /mount/zookeeper/zookeeper-cluster/node6/data:/data
      - /mount/zookeeper/zookeeper-cluster/node6/datalog:/datalog
      - /mount/zookeeper/zookeeper-cluster/node6/logs:/logs
    environment:
      ZOO_MY_ID: 6
      ZOO_SERVERS: server.4=zookeeper_node4:2888:3888;2181 server.5=zookeeper_node5:2888:3888;2181 server.6=0.0.0.0:2888:3888;2181
    networks:
      default:
        ipv4_address: 172.15.0.12

networks: # custom network
  default:
    external:
      name: zookeeper-network
Start
[root@worker10-152 zookeeper-compose]# docker-compose -f docker-compose.yml up -d
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating zookeeper-compose_zookeeper_node5_1 ... done
Creating zookeeper-compose_zookeeper_node6_1 ... done
Creating zookeeper-compose_zookeeper_node4_1 ... done
Check that startup succeeded
[root@worker10-152 zookeeper-compose]# docker ps
CONTAINER ID   IMAGE                                                    COMMAND                  CREATED         STATUS                PORTS                                                                                                                                NAMES
d236fe402b31   zookeeper                                                "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds          2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp                                                              zookeeper-compose_zookeeper_node4_1
f3036e685648   zookeeper                                                "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds          2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp, :::2183->2181/tcp                                                              zookeeper-compose_zookeeper_node6_1
e46314134179   zookeeper                                                "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds          2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp, :::2182->2181/tcp                                                              zookeeper-compose_zookeeper_node5_1
Stop the cluster
docker-compose stop
Remove all stopped containers
docker-compose rm

Installing from the release tarball

Taking version 3.4.11 as an example.

Extract it to the installation directory

tar -zxvf zookeeper-3.4.11.tar.gz -C /usr/local

Rename the directory

mv /usr/local/zookeeper-3.4.11 /usr/local/zookeeper

Configure zoo.cfg

Copy /usr/local/zookeeper/conf/zoo_sample.cfg and rename it to zoo.cfg:

cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg

The key settings are:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=172.16.213.51:2888:3888
server.2=172.16.213.109:2888:3888
server.3=172.16.213.75:2888:3888

Each setting means the following:

  • tickTime: the basic time unit ZooKeeper uses, in milliseconds; it governs heartbeats and timeouts. 2000 means each tick is 2000 ms (2 seconds). A smaller tickTime detects timeouts faster.
  • initLimit: the maximum number of tick intervals (tickTime units) a Follower may take to initially connect to the Leader.
  • syncLimit: the maximum number of tick intervals a request/response exchange between the Leader and a Follower may take.
  • dataDir: required; the directory where snapshot files are stored. Create it in advance. If dataLogDir is not configured, transaction logs are stored here as well.
  • clientPort: the TCP port the ZooKeeper server process listens on; the default is 2181.
  • server.A=B:C:D: A is a number identifying the server; B is the server's IP address; C is the port this server uses to communicate with the cluster Leader; D is the port the servers use to talk to each other during leader election when the current Leader goes down.

Besides editing zoo.cfg, cluster mode also requires a file named myid in the directory configured as dataDir. The file contains a single number: write 1 on the first server, matching server.1 in zoo.cfg; on the second server, create myid under its dataDir and write 2, matching server.2; and so on. On startup, ZooKeeper reads this file and compares the number against the configuration in zoo.cfg to determine which server it is. To keep the cluster configuration uniform, it is recommended that every server in the cluster use the same installation and configuration paths.
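The zoo.cfg-plus-myid convention can be scripted so the number in each myid cannot drift from its server.N line. A sketch that writes the sample config and one myid per server ID under a demo directory in /tmp; on a real host, dataDir would be /data/zookeeper and each machine gets only its own myid:

```shell
# Generate zoo.cfg and a matching myid file per server (demo paths).
CONF_DIR=/tmp/zk-conf-demo
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=172.16.213.51:2888:3888
server.2=172.16.213.109:2888:3888
server.3=172.16.213.75:2888:3888
EOF
for id in 1 2 3; do
  mkdir -p "$CONF_DIR/server$id"
  echo "$id" > "$CONF_DIR/server$id/myid"   # must match server.$id in zoo.cfg
done
```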

Start

[root@localhost ~]# cd /usr/local/zookeeper/bin
[root@localhost bin]# ./zkServer.sh  start

Check the startup status

[root@localhost kafka]# jps
23097 QuorumPeerMain

After ZooKeeper starts, the jps command (built into the JDK) shows a QuorumPeerMain entry; that is the ZooKeeper process, and the number in front of it is the process PID.
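Pulling just the PID out of the jps output is a one-liner with awk. In this sketch the printf line stands in for real jps output so it runs even when no JVM is up; replace it with jps on a real host:

```shell
# Extract the ZooKeeper PID from jps output with awk.
# printf simulates jps output here; use `jps | awk ...` for real.
pid=$(printf '23097 QuorumPeerMain\n1234 Jps\n' | awk '/QuorumPeerMain/ {print $1}')
echo "$pid"
```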

Add ZooKeeper to the environment variables

For convenience, you can also add the ZooKeeper environment variables to the system's /etc/profile, so that "zkServer.sh start" can be run from any directory. Add the following:

export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
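To avoid appending the exports twice, the edit can be guarded with a grep. In this sketch PROFILE points at a temp file so it is safe to run; use /etc/profile (as root) on a real host:

```shell
# Append the ZooKeeper exports to the profile only if not already present.
# PROFILE is a temp file for demonstration; use /etc/profile for real.
PROFILE=$(mktemp)
grep -q ZOOKEEPER_HOME "$PROFILE" || cat >> "$PROFILE" <<'EOF'
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
EOF
```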
