CentOS 7 Series -- 5.3 Configuring and Managing Kubernetes on CentOS 7

Configuring and Managing Kubernetes on CentOS 7

Kubernetes (k8s) is an open-source platform for automating container operations, including deployment, scheduling, and scaling across a cluster of nodes. If you have deployed containers with Docker before, you can think of Docker as a lower-level component that Kubernetes drives. Kubernetes is not limited to Docker; it also supports rkt (formerly Rocket), another container runtime.

With Kubernetes you can:

- Automate the deployment and replication of containers

- Scale containers up or down at any time

- Organize containers into groups and provide load balancing between them

- Easily roll out new versions of an application container

- Provide container resilience: if a container fails, it is replaced, and so on

In fact, with Kubernetes a single deployment file and a single command are enough to deploy a complete multi-tier stack of containers (front end, back end, and so on):

$ kubectl create -f single-config-file.yaml

https://kubernetes.io/docs/getting-started-guides/centos/centos_manual_config/

https://www.cnblogs.com/zhenyuyaodidiao/p/6500830.html

1. Configure the Admin Node and the Container Nodes

1.1. System Information

centos-master = 192.168.1.101 = server1.smartmap.com

centos-minion-1 = 192.168.1.102 = server2.smartmap.com

centos-minion-2 = 192.168.1.103 = server3.smartmap.com

centos-minion-3 = 192.168.1.104 = server4.smartmap.com

1.2. Create the YUM Repository

[root@server1 ~]# vi /etc/yum.repos.d/virt7-docker-common-release.repo

[virt7-docker-common-release]

name=virt7-docker-common-release

baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/

gpgcheck=0
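
Before installing anything, you can confirm that yum sees the new repository (a quick check; the package counts shown will vary over time):

[root@server1 ~]# yum --enablerepo=virt7-docker-common-release repolist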

1.3. Install Kubernetes, etcd, and flannel

[root@server1 ~]# yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

Loaded plugins: fastestmirror

base | 3.6 kB 00:00:00

extras | 3.4 kB 00:00:00

updates | 3.4 kB 00:00:00

virt7-docker-common-release | 3.4 kB 00:00:00

(1/5): base/7/x86_64/group_gz | 156 kB 00:00:00

1.4. Add IP/Hostname Mappings to the hosts File

[root@server1 ~]# echo "192.168.1.101 server1.smartmap.com

> 192.168.1.102 server2.smartmap.com

> 192.168.1.103 server3.smartmap.com

> 192.168.1.104 server4.smartmap.com" >> /etc/hosts

[root@server1 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.101 server1.smartmap.com

192.168.1.102 server2.smartmap.com

192.168.1.103 server3.smartmap.com

192.168.1.104 server4.smartmap.com

[root@server1 ~]#
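
The same four entries belong in /etc/hosts on server2 through server4 as well. A quick way to confirm that the names resolve (shown here for just two of them):

[root@server1 ~]# getent hosts server2.smartmap.com
[root@server1 ~]# ping -c 1 server4.smartmap.com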

1.5. Edit the /etc/kubernetes/config File

[root@server1 ~]# vi /etc/kubernetes/config

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

# kube-apiserver.service

# kube-controller-manager.service

# kube-scheduler.service

# kubelet.service

# kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver

KUBE_MASTER="--master=http://192.168.1.101:8080"
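
This file is meant to be identical on the master and on every minion. Rather than editing it four times, one option is to push it out from server1 (a sketch, assuming root SSH access to the other nodes):

[root@server1 ~]# for NODE in server2 server3 server4; do
>   scp /etc/kubernetes/config ${NODE}.smartmap.com:/etc/kubernetes/config
> done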

1.6. Edit the /etc/sysconfig/docker File

[root@server1 ~]# vi /etc/sysconfig/docker

# /etc/sysconfig/docker

# Modify these options if you want to change the way the docker daemon runs

OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

if [ -z "${DOCKER_CERT_PATH}" ]; then

DOCKER_CERT_PATH=/etc/docker

fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf

# from the atomic-registries package.

#

# docker-latest daemon can be used by starting the docker-latest unitfile.

# To use docker-latest client, uncomment below lines

#DOCKERBINARY=/usr/bin/docker-latest

#DOCKERDBINARY=/usr/bin/dockerd-latest

#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest

#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

# Only the last OPTIONS assignment in this file takes effect, so include all
# of the flags here rather than redefining OPTIONS with just the registry flag:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry registry:5000'

2. Configure the Admin Node

2.1. Edit the /etc/etcd/etcd.conf File

[root@server1 ~]# vi /etc/etcd/etcd.conf

# [member]

ETCD_NAME=default

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

#ETCD_WAL_DIR=""

#ETCD_SNAPSHOT_COUNT="10000"

#ETCD_HEARTBEAT_INTERVAL="100"

#ETCD_ELECTION_TIMEOUT="1000"

#ETCD_LISTEN_PEER_URLS="http://localhost:2380"

ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#ETCD_MAX_SNAPSHOTS="5"

#ETCD_MAX_WALS="5"

#ETCD_CORS=""

#

#[cluster]

#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"

# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."

#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"

#ETCD_INITIAL_CLUSTER_STATE="new"

#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"

#ETCD_DISCOVERY=""

#ETCD_DISCOVERY_SRV=""

#ETCD_DISCOVERY_FALLBACK="proxy"

#ETCD_DISCOVERY_PROXY=""

#ETCD_STRICT_RECONFIG_CHECK="false"

#ETCD_AUTO_COMPACTION_RETENTION="0"

#ETCD_ENABLE_V2="true"

#

#[proxy]

#ETCD_PROXY="off"

#ETCD_PROXY_FAILURE_WAIT="5000"

#ETCD_PROXY_REFRESH_INTERVAL="30000"

#ETCD_PROXY_DIAL_TIMEOUT="1000"

#ETCD_PROXY_WRITE_TIMEOUT="5000"

#ETCD_PROXY_READ_TIMEOUT="0"

#

#[security]

#ETCD_CERT_FILE=""

#ETCD_KEY_FILE=""

#ETCD_CLIENT_CERT_AUTH="false"

#ETCD_TRUSTED_CA_FILE=""

#ETCD_AUTO_TLS="false"

#ETCD_PEER_CERT_FILE=""

#ETCD_PEER_KEY_FILE=""

#ETCD_PEER_CLIENT_CERT_AUTH="false"

#ETCD_PEER_TRUSTED_CA_FILE=""

#ETCD_PEER_AUTO_TLS="false"

#

#[logging]

#ETCD_DEBUG="false"

# examples for -log-package-levels etcdserver=WARNING,security=DEBUG

#ETCD_LOG_PACKAGE_LEVELS=""

#

#[profiling]

#ETCD_ENABLE_PPROF="false"

#ETCD_METRICS="basic"

#

#[auth]

#ETCD_AUTH_TOKEN="simple"

2.2. Edit the /etc/kubernetes/apiserver File

[root@server1 ~]# vi /etc/kubernetes/apiserver

###

# kubernetes system config

#

# The following values are used to configure the kube-apiserver

#

# The address on the local server to listen to.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.

# KUBE_API_PORT="--port=8080"

# Port minions listen on

# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.101:2379"

# Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!

KUBE_API_ARGS=""

2.3. Start etcd and Create the flannel Network Configuration

[root@server1 ~]# systemctl start etcd

[root@server1 ~]# etcdctl mkdir /kube-centos/network

[root@server1 ~]# etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }

[root@server1 ~]#
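
The stored network definition can be read back from etcd to make sure flannel will find what it expects:

[root@server1 ~]# etcdctl get /kube-centos/network/config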

2.4. Edit the /etc/sysconfig/flanneld File

[root@server1 ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="http://192.168.1.101:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

#FLANNEL_OPTIONS=""

2.5. Restart and Enable the Services

[root@server1 ~]# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do

> systemctl restart $SERVICES

> systemctl enable $SERVICES

> systemctl status $SERVICES

> done

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

● etcd.service - Etcd Server

Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-11-26 14:02:06 CST; 85ms ago

Main PID: 8273 (etcd)

CGroup: /system.slice/etcd.service

└─8273 /usr/bin/etcd --name=default --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379

Nov 26 14:02:05 server1.smartmap.com etcd[8273]: enabled capabilities for version 3.2

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: 8e9e05c52164694d is starting a new election at term 2

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: 8e9e05c52164694d became candidate at term 3

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 3

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: 8e9e05c52164694d became leader at term 3

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 3

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: published {Name:default ClientURLs:[http://0.0.0.0:2379]} to cluster cdf818194e3a8c32

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: ready to serve client requests

Nov 26 14:02:06 server1.smartmap.com etcd[8273]: serving insecure client requests on [::]:2379, this is strongly discouraged!

Nov 26 14:02:06 server1.smartmap.com systemd[1]: Started Etcd Server.

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.

● kube-apiserver.service - Kubernetes API Server

Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-11-26 14:02:07 CST; 48ms ago

Docs: https://github.com/GoogleCloudPlatform/kubernetes

Main PID: 8307 (kube-apiserver)

CGroup: /system.slice/kube-apiserver.service

└─8307 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://192.168.1.101:2379 --address=0.0.0.0 --allow-privileged=...

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.184106 8307 storage_rbac.go:131] Created clusterrole.rbac.auth.../admin

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.189141 8307 storage_rbac.go:131] Created clusterrole.rbac.auth...o/edit

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.191558 8307 storage_rbac.go:131] Created clusterrole.rbac.auth...o/view

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.194588 8307 storage_rbac.go:131] Created clusterrole.rbac.auth...m:node

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.197275 8307 storage_rbac.go:131] Created clusterrole.rbac.auth...roxier

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.202016 8307 storage_rbac.go:131] Created clusterrole.rbac.auth...roller

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.206863 8307 storage_rbac.go:151] Created clusterrolebinding.rb...-admin

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.208544 8307 storage_rbac.go:151] Created clusterrolebinding.rb...covery

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.212371 8307 storage_rbac.go:151] Created clusterrolebinding.rb...c-user

Nov 26 14:02:07 server1.smartmap.com kube-apiserver[8307]: I1126 14:02:07.214597 8307 storage_rbac.go:151] Created clusterrolebinding.rb...m:node

Hint: Some lines were ellipsized, use -l to show in full.

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

● kube-controller-manager.service - Kubernetes Controller Manager

Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-11-26 14:02:07 CST; 73ms ago

Docs: https://github.com/GoogleCloudPlatform/kubernetes

Main PID: 8339 (kube-controller)

CGroup: /system.slice/kube-controller-manager.service

└─8339 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.1.101:8080

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Started Kubernetes Controller Manager.

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Starting Kubernetes Controller Manager...

Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

● kube-scheduler.service - Kubernetes Scheduler Plugin

Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-11-26 14:02:07 CST; 79ms ago

Docs: https://github.com/GoogleCloudPlatform/kubernetes

Main PID: 8371 (kube-scheduler)

CGroup: /system.slice/kube-scheduler.service

└─8371 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.1.101:8080

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Started Kubernetes Scheduler Plugin.

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Starting Kubernetes Scheduler Plugin...

Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.

Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.

● flanneld.service - Flanneld overlay address etcd agent

Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)

Active: active (running) since Sun 2017-11-26 14:02:07 CST; 44ms ago

Main PID: 8402 (flanneld)

CGroup: /system.slice/flanneld.service

└─8402 /usr/bin/flanneld -etcd-endpoints=http://192.168.1.101:2379 -etcd-prefix=/kube-centos/network

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Starting Flanneld overlay address etcd agent...

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.432091 8402 main.go:132] Installing signal handlers

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.433596 8402 manager.go:136] Determining IP address of default interface

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.433830 8402 manager.go:149] Using interface with name ens33 an....1.101

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.433841 8402 manager.go:166] Defaulting external address to int...1.101)

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.505878 8402 local_manager.go:179] Picking subnet in range 172.....255.0

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.506878 8402 manager.go:250] Lease acquired: 172.30.47.0/24

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.507064 8402 network.go:58] Watching for L3 misses

Nov 26 14:02:07 server1.smartmap.com flanneld-start[8402]: I1126 14:02:07.507071 8402 network.go:66] Watching for new subnet leases

Nov 26 14:02:07 server1.smartmap.com systemd[1]: Started Flanneld overlay address etcd agent.

Hint: Some lines were ellipsized, use -l to show in full.

[root@server1 ~]#
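
With the master services running, a few quick checks confirm that the control plane is healthy (a sketch; the interface name flannel.1 assumes the vxlan backend configured earlier):

[root@server1 ~]# curl http://192.168.1.101:8080/healthz
[root@server1 ~]# kubectl -s http://192.168.1.101:8080 get componentstatuses
[root@server1 ~]# ip addr show flannel.1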

3. Configure the Container Nodes

3.1. Edit the /etc/kubernetes/kubelet File

[root@server2 sysconfig]# vi /etc/kubernetes/kubelet

###

# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on

# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=server2.smartmap.com"

# location of the api-server

KUBELET_API_SERVER="--api-servers=http://server1.smartmap.com:8080"

# pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!

KUBELET_ARGS=""
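
The same file is needed on server3 and server4, with --hostname-override pointing at each node's own name. For example, after copying the file to server3 (a hypothetical one-liner; adjust analogously on server4):

[root@server3 ~]# sed -i 's/server2.smartmap.com/server3.smartmap.com/' /etc/kubernetes/kubelet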

3.2. Edit the /etc/sysconfig/flanneld File

[root@server2 sysconfig]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="http://server1.smartmap.com:2379"

# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass

#FLANNEL_OPTIONS=""

3.3. Restart and Enable the Services

[root@server2 sysconfig]# for SERVICES in kube-proxy kubelet flanneld docker; do

> systemctl restart $SERVICES

> systemctl enable $SERVICES

> systemctl status $SERVICES

> done
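
The loop prints a long status block for each unit; a terser check on each minion is simply (sketch):

[root@server2 sysconfig]# systemctl is-active kube-proxy kubelet flanneld docker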

3.4. Configure kubectl

[root@server2 sysconfig]# kubectl config set-cluster default-cluster --server=http://server1.smartmap.com:8080

Cluster "default-cluster" set.

[root@server2 sysconfig]# kubectl config set-context default-context --cluster=default-cluster --user=default-admin

Context "default-context" set.

[root@server2 sysconfig]# kubectl config use-context default-context

Switched to context "default-context".

[root@server2 sysconfig]#
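
The resulting client configuration can be inspected at any time with:

[root@server2 sysconfig]# kubectl config view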

4. Check the Cluster

[root@server1 ~]# kubectl get nodes

NAME                   STATUS    AGE

server2.smartmap.com   Ready     10m

server3.smartmap.com   Ready     1m

server4.smartmap.com   Ready     1m

[root@server1 ~]#
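
For more detail on any node (capacity, conditions, kubelet version), describe it (output omitted here):

[root@server1 ~]# kubectl describe node server2.smartmap.com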

5. Create a Pod

5.1. Build an Image

5.1.1. Write the Dockerfile

[root@server2 ~]# vi Dockerfile

# create new

FROM centos

MAINTAINER smartmap <admin@smartmap.com>

RUN yum -y install httpd

RUN echo "Hello DockerFile" > /var/www/html/index.html

EXPOSE 80

CMD ["-D", "FOREGROUND"]

ENTRYPOINT ["/usr/sbin/httpd"]

5.1.2. Build the Image

[root@server2 ~]# docker build -t web_server:latest .

Sending build context to Docker daemon 44.54 kB

Step 1 : FROM centos

Trying to pull repository docker.io/library/centos ...

latest: Pulling from docker.io/library/centos

85432449fd0f: Pull complete

Digest: sha256:3b1a65e9a05f0a77b5e8a698d3359459904c2a354dc3b25ae2e2f5c95f0b3667

---> 3fa822599e10

Step 2 : MAINTAINER smartmap <admin@smartmap.com>

---> Running in 2d8746184cda

---> 0e1a78adcae3

5.1.3. List All Images

[root@server2 ~]# docker images

REPOSITORY         TAG       IMAGE ID       CREATED          SIZE

web_server         latest    79a12844f0fc   4 minutes ago    316 MB

docker.io/centos   latest    3fa822599e10   3 days ago       203.5 MB
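
Before shipping the image to the other nodes, it can be smoke-tested locally on server2 (the container name web_test and host port 8081 are arbitrary choices for this sketch):

[root@server2 ~]# docker run -d --name web_test -p 8081:80 web_server:latest
[root@server2 ~]# curl http://localhost:8081
[root@server2 ~]# docker rm -f web_test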

5.2. Export the Image

[root@server2 ~]# docker save web_server > web_server.tar

5.3. Copy the Image to All Other Nodes

[root@server2 ~]# scp web_server.tar server3.smartmap.com:/root/web_server.tar

The authenticity of host 'server3.smartmap.com (192.168.1.103)' can't be established.

ECDSA key fingerprint is SHA256:lgN0eOtdLR2eqHh+fabe54DGpV08ZiWo9oWVS60aGzw.

ECDSA key fingerprint is MD5:28:c0:cf:21:35:29:3d:23:d3:62:ca:0e:82:7a:4b:af.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'server3.smartmap.com,192.168.1.103' (ECDSA) to the list of known hosts.

root@server3.smartmap.com's password:

web_server.tar 100% 310MB 25.8MB/s 00:12

[root@server2 ~]# scp web_server.tar server4.smartmap.com:/root/web_server.tar

The authenticity of host 'server4.smartmap.com (192.168.1.104)' can't be established.

ECDSA key fingerprint is SHA256:lgN0eOtdLR2eqHh+fabe54DGpV08ZiWo9oWVS60aGzw.

ECDSA key fingerprint is MD5:28:c0:cf:21:35:29:3d:23:d3:62:ca:0e:82:7a:4b:af.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'server4.smartmap.com,192.168.1.104' (ECDSA) to the list of known hosts.

root@server4.smartmap.com's password:

web_server.tar 100% 310MB 28.2MB/s 00:11

[root@server2 ~]#

5.4. Import the Image on the Other Container Nodes

[root@server3 ~]# ll

total 317660

-rw-------. 1 root root 1496 Oct 28 11:32 anaconda-ks.cfg

-rw-r--r-- 1 root root 325278208 Dec 3 16:59 web_server.tar

[root@server3 ~]# docker load < web_server.tar

d1be66a59bc5: Loading layer 212.1 MB/212.1 MB

73c74fffa4a1: Loading layer 113.2 MB/113.2 MB

8c78557d73da: Loading layer 3.584 kB/3.584 kB

Loaded image: web_server:latest

[root@server3 ~]# docker images

REPOSITORY   TAG      IMAGE ID       CREATED          SIZE

web_server   latest   79a12844f0fc   16 minutes ago   316 MB

[root@server3 ~]#

5.5. Create a Pod from the Admin Node

5.5.1. Create the pod-webserver.yaml File

[root@server1 ~]# vi pod-webserver.yaml

apiVersion: v1

kind: Pod

metadata:

  name: httpd

  labels:

    app: web_server

spec:

  containers:

  - name: httpd

    image: web_server

    imagePullPolicy: IfNotPresent

    ports:

    - containerPort: 80

5.5.2. Create the Pod

[root@server1 ~]# kubectl create -f pod-webserver.yaml

pod "httpd" created

[root@server1 ~]# kubectl get pods -o wide

NAME    READY   STATUS              RESTARTS   AGE   IP       NODE

httpd   0/1     ContainerCreating   0          12s   <none>   server4.smartmap.com

[root@server1 ~]# kubectl get pods -o wide

NAME    READY   STATUS    RESTARTS   AGE   IP            NODE

httpd   1/1     Running   0          7s    172.30.20.2   server4.smartmap.com

[root@server1 ~]# kubectl get pod httpd -o yaml | grep "podIP"

podIP: 172.30.20.2

[root@server1 ~]# curl http://172.30.20.2

Hello DockerFile
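
If the pod stays in ContainerCreating or the page does not answer, the pod's events and logs are the first places to look (sketch):

[root@server1 ~]# kubectl describe pod httpd
[root@server1 ~]# kubectl logs httpd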

5.5.3. Delete the Pod

[root@server1 ~]# kubectl delete pod httpd

pod "httpd" deleted

[root@server1 ~]#

5.5.4. Troubleshooting Common Issues

1. open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory

[root@server2 ~]# yum install *rhsm*

Loaded plugins: fastestmirror

base | 3.6 kB 00:00:00

extras | 3.4 kB 00:00:00

updates | 3.4 kB 00:00:00

virt7-docker-common-release | 3.4 kB 00:00:00

(1/2): extras/7/x86_64/primary_db | 130 kB 00:00:00

(2/2): updates/7/x86_64/primary_db | 3.6 MB 00:00:00

Loading mirror speeds from cached hostfile

* base: mirrors.tuna.tsinghua.edu.cn

2. returned error: No such image: registry.access.redhat.com/rhel7/pod-infrastructure:latest

[root@server2 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest

Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...

latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure

26e5ed6899db: Downloading 16.35 MB/74.87 MB

26e5ed6899db: Pull complete

66dbe984a319: Pull complete

9138e7863e08: Pull complete

Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931

[root@server2 ~]#

6. Persistent Storage

6.1. Create the pod-webserver-storage.yaml File

[root@server1 ~]# vi pod-webserver-storage.yaml

apiVersion: v1

kind: Pod

metadata:

  name: httpd

spec:

  containers:

  - name: httpd

    imagePullPolicy: IfNotPresent

    image: web_server

    ports:

    - containerPort: 80

    volumeMounts:

    # the volume to use (it's the one defined in the "volumes" section)

    - name: httpd-storage

      # mount point inside the container

      mountPath: /var/www/html

  volumes:

  # any name you like

  - name: httpd-storage

    hostPath:

      # the directory on the host node for saving data

      path: /var/docker/disk01

6.2. Create the Directories on All Container Nodes

[root@server2 ~]# mkdir -p /var/www/html

[root@server2 ~]# mkdir -p /var/docker/disk01

6.3. Create the Pod

[root@server1 ~]# kubectl create -f pod-webserver-storage.yaml

pod "httpd" created

[root@server1 ~]# kubectl get pods -o wide

NAME    READY   STATUS    RESTARTS   AGE   IP            NODE

httpd   1/1     Running   0          14s   172.30.49.2   server3.smartmap.com

[root@server1 ~]# kubectl get pod httpd -o yaml | grep "podIP"

podIP: 172.30.49.2

6.4. Write Content to the hostPath Directory

[root@server3 ~]# echo "Persistent Storage" > /var/docker/disk01/index.html

[root@server3 ~]# curl http://172.30.49.2

Persistent Storage
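
To see the effect of the hostPath volume, delete and recreate the pod: as long as it is scheduled onto the same node again, the index.html written above outlives the pod (a sketch; look up the new pod IP with kubectl get pods -o wide, shown below as a placeholder):

[root@server1 ~]# kubectl delete pod httpd
[root@server1 ~]# kubectl create -f pod-webserver-storage.yaml
[root@server1 ~]# kubectl get pods -o wide
[root@server1 ~]# curl http://<new-pod-ip>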
