Deploying Zookeeper and Kafka on OpenShift
Deploying Zookeeper
GitHub repository:
https://github.com/ericnie2015/zookeeper-k8s-openshift
1. In the openshift directory, first build the image:
oc create -f buildconfig.yaml
oc new-app zk-builder -p IMAGE_STREAM_VERSION="3.4.13"
buildconfig.yaml mainly defines a Docker-type build from the GitHub repository and pushes the result to an ImageStream:
- kind: BuildConfig
  apiVersion: v1
  metadata:
    name: zk-builder
  spec:
    runPolicy: Serial
    triggers:
    - type: GitHub
      github:
        secret: ${GITHUB_HOOK_SECRET}
    - type: ConfigChange
    source:
      git:
        uri: ${GITHUB_REPOSITORY}
        ref: ${GITHUB_REF}
    strategy:
      type: Docker
    output:
      to:
        kind: ImageStreamTag
        name: "${IMAGE_STREAM_NAME}:${IMAGE_STREAM_VERSION}"
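Once triggered, the build can be followed from the CLI; a quick sketch with standard oc commands (assuming the ImageStream is named zookeeper, the template default):

# Follow the Docker build started by the BuildConfig
oc get builds
oc logs -f bc/zk-builder

# Confirm the resulting tag landed in the ImageStream
oc get imagestream zookeeper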
The Dockerfile, by contrast, builds everything from scratch. To work around the container being unable to access the /opt/zookeeper/data directory at startup, the user is set to root:
FROM openjdk:8-jre-alpine

MAINTAINER Enrique Garcia <engapa@gmail.com>

ARG ZOO_HOME=/opt/zookeeper
ARG ZOO_USER=zookeeper
ARG ZOO_GROUP=zookeeper
ARG ZOO_VERSION="3.4.13"

ENV ZOO_HOME=$ZOO_HOME \
    ZOO_VERSION=$ZOO_VERSION \
    ZOO_CONF_DIR=$ZOO_HOME/conf \
    ZOO_REPLICAS=1

# Required packages
RUN set -ex; \
    apk add --update --no-cache \
      bash tar wget curl gnupg openssl ca-certificates

# Download zookeeper distribution under ZOO_HOME /zookeeper-3.4.13/
ADD zk_download.sh /tmp/
RUN set -ex; \
    mkdir -p /opt/zookeeper/bin; \
    mkdir -p /opt/zookeeper/conf; \
    chmod a+x /tmp/zk_download.sh;
RUN /tmp/zk_download.sh
RUN set -ex; \
    rm -rf /tmp/zk_download.sh; \
    apk del wget gnupg

# Add custom scripts and configure user
ADD zk_env.sh zk_setup.sh zk_status.sh /opt/zookeeper/bin/
RUN set -ex; \
    chmod a+x $ZOO_HOME/bin/zk_*.sh; \
    addgroup $ZOO_GROUP; \
    addgroup sudo; \
    adduser -h $ZOO_HOME -g "Zookeeper user" -s /sbin/nologin -D -G $ZOO_GROUP -G sudo $ZOO_USER; \
    chown -R $ZOO_USER:$ZOO_GROUP $ZOO_HOME; \
    ln -s $ZOO_HOME/bin/zk_*.sh /usr/bin

USER root
#USER $ZOO_USER

WORKDIR $ZOO_HOME/bin/

# EXPOSE ${ZK_clientPort:-2181} ${ZOO_SERVER_PORT:-2888} ${ZOO_ELECTION_PORT:-3888}

ENTRYPOINT ["./zk_env.sh"]

#RUN echo "aaa" > /usr/alog
#CMD ["tail","-f","/usr/alog"]

CMD zk_setup.sh && ./zkServer.sh start-foreground
If the VM's network cannot reach the internet, you can log in to the registry first and then build locally:
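The registry login typically follows this pattern (a sketch; assumes the default CDK/Minishift integrated registry at 172.30.1.1:5000 and an active oc session):

# Authenticate docker against the OpenShift integrated registry
# using the current session token
docker login -u $(oc whoami) -p $(oc whoami -t) 172.30.1.1:5000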
ericdeMacBook-Pro:zookeeper-k8s-openshift ericnie$ docker build -t 172.30.1.1:5000/myproject/zookeeper:3.4.13 .
Sending build context to Docker daemon  54.78kB
Step 1/19 : FROM openjdk:8-jre-alpine
Trying to pull repository registry.access.redhat.com/openjdk ...
Trying to pull repository docker.io/library/openjdk ...
sha256:e3168174d367db9928bb70e33b4750457092e61815d577e368f53efb29fea48b: Pulling from docker.io/library/openjdk
4fe2ade4980c: Pull complete
6fc58a8d4ae4: Pull complete
d3e6d7e9702a: Pull complete
Digest: sha256:e3168174d367db9928bb70e33b4750457092e61815d577e368f53efb29fea48b
Status: Downloaded newer image for docker.io/openjdk:8-jre-alpine
 ---> 0fe3f0d1ee48
docker images
Then push to the registry:
ericdeMacBook-Pro:zookeeper-k8s-openshift ericnie$ docker push 172.30.1.1:5000/myproject/zookeeper:3.4.13
The push refers to a repository [172.30.1.1:5000/myproject/zookeeper]
5fe222836c76: Pushed
55e1a1171f7a: Pushed
347a06ac9233: Pushed
03a33ce83585: Pushed
94058c4e233d: Pushed
984d85b76d76: Pushed
cd4b8e8a8238: Pushed
12c374f8270a: Pushed
0c3170905795: Pushed
df64d3292fd6: Pushed
3.4.13: digest: sha256:87bf78acf297bc2144d77ce4465294fec519fd50a4c197a1663cc4304c8040c9 size: 2413
When finished, the ImageStream is visible in the console.
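The same check can be done from the CLI; a quick sketch:

# List image streams in the project and inspect the pushed tag
oc get is -n myproject
oc describe is/zookeeper -n myproject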
2. Deploy from the template
Create the deployment template:
oc create -f zk-persistent.yaml
ericdeMacBook-Pro:openshift ericnie$ cat zk-persistent.yaml
kind: Template
apiVersion: v1
metadata:
  name: zk-persistent
  annotations:
    openshift.io/display-name: Zookeeper (Persistent)
    description: Create a replicated Zookeeper server with persistent storage
    iconClass: icon-database
    tags: database,zookeeper
labels:
  template: zk-persistent
  component: zk
parameters:
- name: NAME
  value: zk-persistent
  required: true
- name: SOURCE_IMAGE
  description: Container image
  value: 172.30.1.1:5000/myproject/zookeeper
  required: true
- name: ZOO_VERSION
  description: Version
  value: "3.4.13"
  required: true
- name: ZOO_REPLICAS
  description: Number of nodes
  value: "3"
  required: true
- name: VOLUME_DATA_CAPACITY
  description: Persistent volume capacity for zookeeper dataDir directory (e.g. 512Mi, 2Gi)
  value: 1Gi
  required: true
- name: VOLUME_DATALOG_CAPACITY
  description: Persistent volume capacity for zookeeper dataLogDir directory (e.g. 512Mi, 2Gi)
  value: 1Gi
  required: true
- name: ZOO_TICK_TIME
  description: The number of milliseconds of each tick
  value: "2000"
  required: true
- name: ZOO_INIT_LIMIT
  description: The number of ticks that the initial synchronization phase can take
  value: "5"
  required: true
- name: ZOO_SYNC_LIMIT
  description: The number of ticks that can pass between sending a request and getting an acknowledgement
  value: "2"
  required: true
- name: ZOO_CLIENT_PORT
  description: The port at which the clients will connect
  value: "2181"
  required: true
- name: ZOO_SERVER_PORT
  description: Server port
  value: "2888"
  required: true
- name: ZOO_ELECTION_PORT
  description: Election port
  value: "3888"
  required: true
- name: ZOO_MAX_CLIENT_CNXNS
  description: The maximum number of client connections
  value: "60"
  required: true
- name: ZOO_SNAP_RETAIN_COUNT
  description: The number of snapshots to retain in dataDir
  value: "3"
  required: true
- name: ZOO_PURGE_INTERVAL
  description: Purge task interval in hours. Set to 0 to disable auto purge feature
  value: "1"
  required: true
- name: ZOO_HEAP_SIZE
  description: JVM heap size
  value: "-Xmx960M -Xms960M"
  required: true
- name: RESOURCE_MEMORY_REQ
  description: The memory resource request.
  value: "1Gi"
  required: true
- name: RESOURCE_MEMORY_LIMIT
  description: The limits for memory resource.
  value: "1Gi"
  required: true
- name: RESOURCE_CPU_REQ
  description: The CPU resource request.
  value: "1"
  required: true
- name: RESOURCE_CPU_LIMIT
  description: The limits for CPU resource.
  value: "2"
  required: true
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${NAME}
    labels:
      zk-name: ${NAME}
    annotations:
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  spec:
    ports:
    - port: ${ZOO_CLIENT_PORT}
      name: client
    - port: ${ZOO_SERVER_PORT}
      name: server
    - port: ${ZOO_ELECTION_PORT}
      name: election
    clusterIP: None
    selector:
      zk-name: ${NAME}
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: ${NAME}
    labels:
      zk-name: ${NAME}
  spec:
    podManagementPolicy: "Parallel"
    serviceName: ${NAME}
    replicas: ${ZOO_REPLICAS}
    template:
      metadata:
        labels:
          zk-name: ${NAME}
          template: zk-persistent
          component: zk
        annotations:
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "zk-name",
                      "operator": "In",
                      "values": ["${NAME}"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
      spec:
        containers:
        - name: ${NAME}
          imagePullPolicy: IfNotPresent
          image: ${SOURCE_IMAGE}:${ZOO_VERSION}
          resources:
            requests:
              memory: ${RESOURCE_MEMORY_REQ}
              cpu: ${RESOURCE_CPU_REQ}
            limits:
              memory: ${RESOURCE_MEMORY_LIMIT}
              cpu: ${RESOURCE_CPU_LIMIT}
          ports:
          - containerPort: ${ZOO_CLIENT_PORT}
            name: client
          - containerPort: ${ZOO_SERVER_PORT}
            name: server
          - containerPort: ${ZOO_ELECTION_PORT}
            name: election
          env:
          - name: SETUP_DEBUG
            value: "true"
          - name: ZOO_REPLICAS
            value: ${ZOO_REPLICAS}
          - name: ZK_HEAP_SIZE
            value: ${ZOO_HEAP_SIZE}
          - name: ZK_tickTime
            value: ${ZOO_TICK_TIME}
          - name: ZK_initLimit
            value: ${ZOO_INIT_LIMIT}
          - name: ZK_syncLimit
            value: ${ZOO_SYNC_LIMIT}
          - name: ZK_maxClientCnxns
            value: ${ZOO_MAX_CLIENT_CNXNS}
          - name: ZK_autopurge_snapRetainCount
            value: ${ZOO_SNAP_RETAIN_COUNT}
          - name: ZK_autopurge_purgeInterval
            value: ${ZOO_PURGE_INTERVAL}
          - name: ZK_clientPort
            value: ${ZOO_CLIENT_PORT}
          - name: ZOO_SERVER_PORT
            value: ${ZOO_SERVER_PORT}
          - name: ZOO_ELECTION_PORT
            value: ${ZOO_ELECTION_PORT}
          - name: JAVA_ZK_JVMFLAG
            value: "\"${ZOO_HEAP_SIZE}\""
          readinessProbe:
            exec:
              command:
              - zk_status.sh
            initialDelaySeconds: 20
            timeoutSeconds: 10
          livenessProbe:
            exec:
              command:
              - zk_status.sh
            initialDelaySeconds: 20
            timeoutSeconds: 10
          volumeMounts:
          - name: datadir
            mountPath: /opt/zookeeper/data
          - name: datalogdir
            mountPath: /opt/zookeeper/data-log
    volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_DATA_CAPACITY}
    - metadata:
        name: datalogdir
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_DATALOG_CAPACITY}
oc new-app zk-persistent -p NAME=myzk
--> Deploying template "test/zk-persistent" to project test

     Zookeeper (Persistent)
     ---------
     Create a replicated Zookeeper server with persistent storage

     * With parameters:
        * NAME=myzk
        * SOURCE_IMAGE=bbvalabs/zookeeper
        * ZOO_VERSION=3.4.13
        * ZOO_REPLICAS=3
        * VOLUME_DATA_CAPACITY=1Gi
        * VOLUME_DATALOG_CAPACITY=1Gi
        * ZOO_TICK_TIME=2000
        * ZOO_INIT_LIMIT=5
        * ZOO_SYNC_LIMIT=2
        * ZOO_CLIENT_PORT=2181
        * ZOO_SERVER_PORT=2888
        * ZOO_ELECTION_PORT=3888
        * ZOO_MAX_CLIENT_CNXNS=60
        * ZOO_SNAP_RETAIN_COUNT=3
        * ZOO_PURGE_INTERVAL=1
        * ZOO_HEAP_SIZE=-Xmx960M -Xms960M
        * RESOURCE_MEMORY_REQ=1Gi
        * RESOURCE_MEMORY_LIMIT=1Gi
        * RESOURCE_CPU_REQ=1
        * RESOURCE_CPU_LIMIT=2

--> Creating resources ...
    service "myzk" created
    statefulset "myzk" created
--> Success
    Run 'oc status' to view your app.

$ oc get all,pvc,statefulset -l zk-name=myzk
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
svc/myzk   None         <none>        2181/TCP,2888/TCP,3888/TCP   11m

NAME                DESIRED   CURRENT   AGE
statefulsets/myzk   3         3         11m

NAME        READY     STATUS    RESTARTS   AGE
po/myzk-0   1/1       Running   0          2m
po/myzk-1   1/1       Running   0          1m
po/myzk-2   1/1       Running   0          46s

NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
pvc/datadir-myzk-0      Bound     pvc-a654d055-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datadir-myzk-1      Bound     pvc-a6601148-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datadir-myzk-2      Bound     pvc-a667fa41-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-0   Bound     pvc-a657ff77-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-1   Bound     pvc-a664407a-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m
pvc/datalogdir-myzk-2   Bound     pvc-a66b85f7-6dfa-11e7-abe1-42010a840002   1Gi        RWO           11m

NAME                DESIRED   CURRENT   AGE
statefulsets/myzk   3         3         11m
On CDK or Minishift there is only one node, so only one myzk pod actually starts (the template's pod anti-affinity rule requires each replica to land on a different host):
ericdeMacBook-Pro:openshift ericnie$ oc get pods
NAME      READY     STATUS    RESTARTS   AGE
myzk-0    1/1       Running   0          1m
myzk-1    0/1       Pending   0          1m
myzk-2    0/1       Pending   0          1m
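The Pending pods can be explained directly from the scheduler events; a quick check:

# The Events section should report a scheduling failure because the
# pod anti-affinity rule needs a node with no other myzk pod on it
oc describe pod myzk-1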
Verify the ensemble:
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do oc exec myzk-$i -- hostname; done
myzk-0
myzk-1
myzk-2
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do echo "myid myzk-$i"; oc exec myzk-$i -- cat /opt/zookeeper/data/myid; done
myid myzk-0
1
myid myzk-1
2
myid myzk-2
3
ericdeMacBook-Pro:openshift ericnie$ for i in 0 1 2; do oc exec myzk-$i -- hostname -f; done
myzk-0.myzk.myproject.svc.cluster.local
myzk-1.myzk.myproject.svc.cluster.local
myzk-2.myzk.myproject.svc.cluster.local
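As a further check, ZooKeeper 3.4 answers the four-letter-word commands, so each node can report its role; a sketch, assuming nc is available in the alpine image:

# Print each node's mode (one leader, two followers expected)
for i in 0 1 2; do
  oc exec myzk-$i -- sh -c 'echo srvr | nc localhost 2181' | grep Mode
done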
3. Delete the instance
oc delete all,statefulset,pvc -l zk-name=myzk
Problems encountered along the way
- The pods kept crashing after startup; the logs showed zk_status.sh could not be found. After a long detour through the Dockerfile, it turned out the deployment template was pulling the zookeeper:3.4.13 image downloaded from the public registry, not the local build, so SOURCE_IMAGE was forced to 172.30.1.1:5000/myproject/zookeeper (see the sketch after this list for a quick way to confirm which image a pod is really running).
- The pods failed to start with a permission error on the /opt/zookeeper/data directory; removing the RunAs statement from the SecurityContext and starting as root avoided it.
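For the first problem, the quickest way to confirm which image a pod was actually started from is a jsonpath query:

# Print the exact image reference of the running container
oc get pod myzk-0 -o jsonpath='{.spec.containers[0].image}'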
Deploying Kafka
The process is similar to Zookeeper's.
1. Clone the code
ericdeMacBook-Pro:minishift ericnie$ git clone https://github.com/ericnie2015/kafka-k8s-openshift.git
Cloning into 'kafka-k8s-openshift'...
remote: Enumerating objects: 607, done.
remote: Total 607 (delta 0), reused 0 (delta 0), pack-reused 607
Receiving objects: 100% (607/607), 102.01 KiB | 24.00 KiB/s, done.
Resolving deltas: 100% (382/382), done.
2. Build locally and push to the registry
ericdeMacBook-Pro:kafka-k8s-openshift ericnie$ docker build -t 172.30.1.1:5000/myproject/kafka:2.12-2.0.0 .
Sending build context to Docker daemon  86.53kB
Step 1/19 : FROM openjdk:8-jre-alpine
 ---> 0fe3f0d1ee48
Step 2/19 : MAINTAINER Enrique Garcia <engapa@gmail.com>
 ---> Using cache
 ---> e51b1e313e0c
Step 3/19 : ARG KAFKA_HOME=/opt/kafka
 ---> Running in 0a464e9d1781
 ---> abadcf5d52d5
Removing intermediate container 0a464e9d1781
Step 4/19 : ARG KAFKA_USER=kafka
 ---> Running in b2e50be2d35b
 ---> e3f1455c4aca
ericdeMacBook-Pro:kafka-k8s-openshift ericnie$ docker push 172.30.1.1:5000/myproject/kafka:2.12-2.0.0
The push refers to a repository [172.30.1.1:5000/myproject/kafka]
84cb97552ea5: Pushed
681963d6c624: Pushed
47afbbc52b62: Pushed
81d8600a6e97: Pushed
8457712c19b8: Pushed
6286fd332b87: Pushed
c2f9d211658b: Pushed
12c374f8270a: Mounted from myproject/zookeeper
0c3170905795: Mounted from myproject/zookeeper
df64d3292fd6: Mounted from myproject/zookeeper
2.12-2.0.0: digest: sha256:9ed95c9c7682b49f76d4b5454a704db5ba9561127fe86fe6ca52bd673c279ee5 size: 2413
3. Deploy from the template
ericdeMacBook-Pro:openshift ericnie$ cat kafka-persistent.yaml
kind: Template
apiVersion: v1
metadata:
  name: kafka-persistent
  annotations:
    openshift.io/display-name: Kafka (Persistent)
    description: Create a Kafka cluster, with persistent storage.
    iconClass: icon-database
    tags: messaging,kafka
labels:
  template: kafka-persistent
  component: kafka
parameters:
- name: NAME
  description: Name.
  required: true
  value: kafka
- name: KAFKA_VERSION
  description: Kafka Version (Scala and kafka version).
  required: true
  value: "2.12-2.0.0"
- name: SOURCE_IMAGE
  description: Container image source.
  value: 172.30.1.1:5000/myproject/kafka
  required: true
- name: REPLICAS
  description: Number of replicas.
  required: true
  value: "3"
- name: KAFKA_HEAP_OPTS
  description: Kafka JVM Heap options. Consider value of params RESOURCE_MEMORY_REQ and RESOURCE_MEMORY_LIMIT.
  required: true
  value: "-Xmx1960M -Xms1960M"
- name: SERVER_NUM_PARTITIONS
  description: >
    The default number of log partitions per topic.
    More partitions allow greater parallelism for consumption,
    but this will also result in more files across the brokers.
  required: true
  value: "1"
- name: SERVER_DELETE_TOPIC_ENABLE
  description: >
    Topic deletion enabled.
    Switch to enable topic deletion or not, default value is 'true'
  value: "true"
- name: SERVER_LOG_RETENTION_HOURS
  description: >
    Log retention hours.
    The minimum age of a log file to be eligible for deletion.
  value: "2147483647"
- name: SERVER_ZOOKEEPER_CONNECT
  description: >
    Zookeeper conection string, a list as URL with nodes separated by ','.
  value: "zk-persistent-0.zk-persistent:2181,zk-persistent-1.zk-persistent:2181,zk-persistent-2.zk-persistent:2181"
  required: true
- name: SERVER_ZOOKEEPER_CONNECT_TIMEOUT
  description: >
    The max time that the client waits to establish a connection to zookeeper (ms).
  value: "6000"
  required: true
- name: VOLUME_KAFKA_CAPACITY
  description: Kafka logs capacity.
  required: true
  value: "10Gi"
- name: RESOURCE_MEMORY_REQ
  description: The memory resource request.
  value: "2Gi"
- name: RESOURCE_MEMORY_LIMIT
  description: The limits for memory resource.
  value: "2Gi"
- name: RESOURCE_CPU_REQ
  description: The CPU resource request.
  value: "1"
- name: RESOURCE_CPU_LIMIT
  description: The limits for CPU resource.
  value: "2"
- name: LP_INITIAL_DELAY
  description: >
    LivenessProbe initial delay in seconds.
  value: "30"
objects:
- apiVersion: v1
  kind: Service
  metadata:
    name: ${NAME}
    labels:
      app: ${NAME}
    annotations:
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  spec:
    ports:
    - port: 9092
      name: server
    - port: 2181
      name: zkclient
    - port: 2888
      name: zkserver
    - port: 3888
      name: zkleader
    clusterIP: None
    selector:
      app: ${NAME}
- apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: ${NAME}
    labels:
      app: ${NAME}
  spec:
    podManagementPolicy: "Parallel"
    serviceName: ${NAME}
    replicas: ${REPLICAS}
    template:
      metadata:
        labels:
          app: ${NAME}
          template: kafka-persistent
          component: kafka
        annotations:
          # Use this annotation if you want allocate each pod on different node
          # Note the number of nodes must be upper than REPLICAS parameter.
          scheduler.alpha.kubernetes.io/affinity: >
            {
              "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [{
                  "labelSelector": {
                    "matchExpressions": [{
                      "key": "app",
                      "operator": "In",
                      "values": ["${NAME}"]
                    }]
                  },
                  "topologyKey": "kubernetes.io/hostname"
                }]
              }
            }
      spec:
        containers:
        - name: ${NAME}
          imagePullPolicy: IfNotPresent
          image: ${SOURCE_IMAGE}:${KAFKA_VERSION}
          resources:
            requests:
              memory: ${RESOURCE_MEMORY_REQ}
              cpu: ${RESOURCE_CPU_REQ}
            limits:
              memory: ${RESOURCE_MEMORY_LIMIT}
              cpu: ${RESOURCE_CPU_LIMIT}
          ports:
          - containerPort: 9092
            name: server
          - containerPort: 2181
            name: zkclient
          - containerPort: 2888
            name: zkserver
          - containerPort: 3888
            name: zkleader
          env:
          - name: KAFKA_REPLICAS
            value: ${REPLICAS}
          - name: KAFKA_ZK_LOCAL
            value: "false"
          - name: KAFKA_HEAP_OPTS
            value: ${KAFKA_HEAP_OPTS}
          - name: SERVER_num_partitions
            value: ${SERVER_NUM_PARTITIONS}
          - name: SERVER_delete_topic_enable
            value: ${SERVER_DELETE_TOPIC_ENABLE}
          - name: SERVER_log_retention_hours
            value: ${SERVER_LOG_RETENTION_HOURS}
          - name: SERVER_zookeeper_connect
            value: ${SERVER_ZOOKEEPER_CONNECT}
          - name: SERVER_log_dirs
            value: "/opt/kafka/data/logs"
          - name: SERVER_zookeeper_connection_timeout_ms
            value: ${SERVER_ZOOKEEPER_CONNECT_TIMEOUT}
          livenessProbe:
            exec:
              command:
              - kafka_server_status.sh
            initialDelaySeconds: ${LP_INITIAL_DELAY}
            timeoutSeconds: 5
          volumeMounts:
          - name: kafka-data
            mountPath: /opt/kafka/data
    volumeClaimTemplates:
    - metadata:
        name: kafka-data
        annotations:
          volume.alpha.kubernetes.io/storage-class: anything
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: ${VOLUME_KAFKA_CAPACITY}
Modify the image registry and the Zookeeper address in the template; I have only the one myzk ensemble, so I pointed the connection string at it.
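Alternatively, instead of editing the YAML, the same values can be overridden at instantiation time with -p, for example:

oc new-app kafka-persistent \
  -p SOURCE_IMAGE=172.30.1.1:5000/myproject/kafka \
  -p SERVER_ZOOKEEPER_CONNECT=myzk-0.myzk.myproject.svc.cluster.local:2181,myzk-1.myzk.myproject.svc.cluster.local:2181,myzk-2.myzk.myproject.svc.cluster.local:2181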
ericdeMacBook-Pro:openshift ericnie$ oc create -f kafka-persistent.yaml
template "kafka-persistent" created
ericdeMacBook-Pro:openshift ericnie$ oc new-app kafka-persistent
--> Deploying template "myproject/kafka-persistent" to project myproject

     Kafka (Persistent)
     ---------
     Create a Kafka cluster, with persistent storage.

     * With parameters:
        * NAME=kafka
        * KAFKA_VERSION=2.12-2.0.0
        * SOURCE_IMAGE=172.30.1.1:5000/myproject/kafka
        * REPLICAS=3
        * KAFKA_HEAP_OPTS=-Xmx1960M -Xms1960M
        * SERVER_NUM_PARTITIONS=1
        * SERVER_DELETE_TOPIC_ENABLE=true
        * SERVER_LOG_RETENTION_HOURS=2147483647
        * SERVER_ZOOKEEPER_CONNECT=myzk-0.myzk.myproject.svc.cluster.local:2181,myzk-1.myzk.myproject.svc.cluster.local:2181,myzk-2.myzk.myproject.svc.cluster.local:2181
        * SERVER_ZOOKEEPER_CONNECT_TIMEOUT=6000
        * VOLUME_KAFKA_CAPACITY=10Gi
        * RESOURCE_MEMORY_REQ=0.2Gi
        * RESOURCE_MEMORY_LIMIT=2Gi
        * RESOURCE_CPU_REQ=0.2
        * RESOURCE_CPU_LIMIT=2
        * LP_INITIAL_DELAY=30

--> Creating resources ...
    service "kafka" created
    statefulset "kafka" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/kafka'
    Run 'oc status' to view your app.
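If the brokers come up cleanly, a minimal smoke test could create and list a topic from inside a broker pod; a sketch, assuming the Kafka scripts live under KAFKA_HOME=/opt/kafka as in the Dockerfile:

# Create a test topic against the external zookeeper ensemble, then list it
oc exec kafka-0 -- /opt/kafka/bin/kafka-topics.sh --create \
  --zookeeper myzk-0.myzk:2181 --replication-factor 3 --partitions 1 --topic test
oc exec kafka-0 -- /opt/kafka/bin/kafka-topics.sh --list --zookeeper myzk-0.myzk:2181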
In my environment, though, the brokers could not start because there was not enough memory.
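On a small Minishift VM the template parameters make it easy to shrink the footprint; one possible combination:

# Run a single broker with a smaller heap and resource request
oc new-app kafka-persistent \
  -p REPLICAS=1 \
  -p RESOURCE_MEMORY_REQ=512Mi \
  -p RESOURCE_MEMORY_LIMIT=1Gi \
  -p KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"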
4. Delete
ericdeMacBook-Pro:openshift ericnie$ oc delete all,statefulset,pvc -l app=kafka
statefulset "kafka" deleted
service "kafka" deleted
persistentvolumeclaim "kafka-data-kafka-0" deleted
persistentvolumeclaim "kafka-data-kafka-1" deleted
persistentvolumeclaim "kafka-data-kafka-2" deleted