Installing Kubernetes v1.11.2 on CentOS 7 from binaries

 

The deployment below uses a single-node master, and no certificates (TLS) are involved; everything runs over insecure HTTP.

 

I. Environment preparation

1. Machines

192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
The master node needs at least 1 GB of RAM and the worker nodes at least 768 MB.
The kernel version must be greater than 3.10.0.

2. System environment

2.1 Add firewall rules so all nodes can reach each other, and disable SELinux

Permanently add a firewalld rule on CentOS 7:
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -s 192.168.56.0/24  -j ACCEPT
firewall-cmd --reload

[root@k8s-n1 ~]# cat /etc/selinux/config |grep SELINUX
SELINUX=disabled
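
To switch SELinux to permissive mode for the running session as well (the disabled setting above only takes effect after a reboot):

setenforce 0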

2.2 Configure hostname resolution

192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
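
These entries go into /etc/hosts on every node, for example:

cat >> /etc/hosts <<EOF
192.168.56.10 k8s-m1
192.168.56.11 k8s-n1
192.168.56.12 k8s-n2
192.168.56.13 k8s-n3
EOF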

2.3 Set Kubernetes-related kernel parameters on all nodes

[root@k8s-m1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

[root@k8s-m1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
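
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter module is not loaded yet; load it and make it persistent across reboots:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf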

2.4 Disable swap; if you leave it enabled, the kubelet parameters must be adjusted accordingly. Also comment out the swap mount in /etc/fstab.

$ swapoff -a && sysctl -w vm.swappiness=0
The worker nodes can keep swap enabled, since kubelet is started with --fail-swap-on=false later in this guide.
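
One way to comment out the swap entry in /etc/fstab in a single step (verify the file afterwards):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab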

 

3. Software versions

Kubernetes v1.11.2
etcd-v3.2.9
flannel-v0.10.0
Docker 18.06.1-ce or 18.03.1-ce

 

II. Docker installation

On CentOS, two ways of installing Docker are described here; either one works.

1. Installing Docker via yum

Steps to install the latest Docker CE (as of 2018-09-10: Docker version 18.06.1-ce, build e68fc7a)

# Install prerequisites first (yum-config-manager is provided by yum-utils)
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the yum repository
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
yum-config-manager --enable docker-ce-edge

# Install the latest Docker CE
yum makecache fast;  yum -y install docker-ce

# Enable and start
systemctl enable docker; systemctl start docker

# Test
docker run hello-world

2. Installing Docker from the static binaries

2.1 Download the binary archive

wget  https://download.docker.com/linux/static/stable/x86_64/docker-18.06.1-ce.tgz
tar -xvf docker-18.06.1-ce.tgz
cp docker/docker* /usr/bin

2.2 Create the service file

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the "-" makes the file optional, so Docker can start before flannel is deployed
EnvironmentFile=-/run/flannel/docker
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

2.3 Start the service

systemctl daemon-reload
systemctl enable docker
systemctl start docker

 

III. Downloading the Kubernetes binaries

wget https://dl.k8s.io/v1.11.2/kubernetes-server-linux-amd64.tar.gz
 
IV. Kubernetes deployment
1. Components
Components on the master node:
kube-apiserver
kube-scheduler
kube-controller-manager
etcd
flanneld
docker

Components on the worker nodes:

flanneld
docker
kubelet
kube-proxy

 

2. Deploying the master node
2.1 Approach
When deploying from binaries on CentOS 7, every component takes the same 4 steps:
1) Copy the component's binary into /usr/bin/
2) Create a systemd service unit for it
3) Create the configuration file(s) referenced by the unit
4) Enable the service at boot and start it
2.2 Installing the etcd database
# Download the etcd release
wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
tar xf etcd-v3.2.9-linux-amd64.tar.gz
cd etcd-v3.2.9-linux-amd64/
cp etcd etcdctl /usr/bin/

# Create the etcd.service unit file
[root@k8s-m1 ~]# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf  
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Create the configuration file /etc/etcd/etcd.conf
[root@k8s-m1 ~]# cat /etc/etcd/etcd.conf
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[Security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_AUTO_TLS="false"
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#ETCD_PEER_AUTO_TLS="false"
#
#[Logging]
#ETCD_DEBUG="false"
#ETCD_LOG_PACKAGE_LEVELS=""
#ETCD_LOG_OUTPUT="default"
#
#[Unsafe]
#ETCD_FORCE_NEW_CLUSTER="false"
#
#[Version]
#ETCD_VERSION="false"
#ETCD_AUTO_COMPACTION_RETENTION="0"
#
#[Profiling]
#ETCD_ENABLE_PPROF="false"
#ETCD_METRICS="basic"
#
#[Auth]
#ETCD_AUTH_TOKEN="simple"


# Create the data directory (WorkingDirectory must exist), enable at boot, and start
mkdir -p /var/lib/etcd
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service

# Verify the installation
[root@k8s-m1 ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
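
As an extra smoke test, write and read back a key through the v2 API (the key name here is arbitrary):

etcdctl set /test/hello world
etcdctl get /test/hello
etcdctl rm /test/hello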

2.3 Create a shared Kubernetes configuration file

[root@k8s-m1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=1"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.10:8080"

2.4 The kube-apiserver service

# Copy the binaries
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes
tar -xzvf  kubernetes-src.tar.gz
cp -r server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/bin/
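
A quick check that the binaries are in place and are the expected version:

kube-apiserver --version
kubectl version --client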

# Unit file
[root@k8s-m1 ~]# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.56.10:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=100.0.100.0/16"
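# NOTE: 100.0.100.0/16 masks to 100.0.0.0/16, which overlaps the flannel pod
# network configured in section 3.8; in a real deployment choose a
# non-overlapping range such as 10.254.0.0/16.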

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

2.5 The kube-controller-manager service

# Unit file
[root@k8s-m1 ~]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=" "

2.6 The kube-scheduler service

# Unit file
[root@k8s-m1 ~]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# Configuration file
[root@k8s-m1 ~]# cat /etc/kubernetes/scheduler
#KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/k8s-t/log/kubernetes --v=2"

2.7 Enable and start all components

systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

2.8 Verify the master node

[root@k8s-m1 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}   
scheduler            Healthy   ok                   
controller-manager   Healthy   ok
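
The insecure API endpoint can also be probed directly, since this setup runs without TLS:

curl http://192.168.56.10:8080/version
curl http://192.168.56.10:8080/healthz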

3. Installing the binaries on the worker nodes
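
The worker nodes need the kubelet and kube-proxy binaries in /usr/bin as well. Assuming the server tarball was extracted on the master as in section 2.4, one way to copy them over (hostnames as in the lab setup above) is:

for n in k8s-n1 k8s-n2 k8s-n3; do
  scp server/bin/{kubelet,kube-proxy} $n:/usr/bin/
done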

3.1 First create the shared configuration file

[root@k8s-m1 ~]# cat /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.56.10:8080"

3.2 The kubelet component

# kubeconfig file (how kubelet finds the API server)
[root@k8s-n1 ~]# cat /etc/kubernetes/kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://192.168.56.10:8080
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local

# Configuration file
[root@k8s-n1 ~]# cat /etc/kubernetes/kubelet
# Log to stderr
KUBE_LOGTOSTDERR="--logtostderr=true"
# Log level
KUBE_LOG_LEVEL="--v=0"
# Address the kubelet serves on (this node's IP)
NODE_ADDRESS="--address=192.168.56.11"
# Kubelet port
NODE_PORT="--port=10250"
# Node name override (this node's IP)
NODE_HOSTNAME="--hostname-override=192.168.56.11"
# Path of the kubeconfig used to connect to the API server
KUBELET_KUBECONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
# Allow containers to request privileged mode; default false
KUBE_ALLOW_PRIV="--allow-privileged=false"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=kubernetes/pause"
# DNS settings
KUBELET_DNS_IP="--cluster-dns=192.168.12.19"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"
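# NOTE: --cluster-dns should point at the cluster DNS service IP (normally an
# address inside the service CIDR) once a DNS add-on is deployed; this guide
# does not set one up.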
# Don't fail when swap is enabled
KUBELET_SWAP="--fail-swap-on=false"

# Unit file (note: ${KUBELET_POD_INFRA_CONTAINER} must be passed here,
# otherwise the pause image configured above is never applied)
[root@k8s-n1 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${NODE_ADDRESS} \
            ${NODE_PORT} \
            ${NODE_HOSTNAME} \
            ${KUBELET_KUBECONFIG} \
            ${KUBE_ALLOW_PRIV} \
            ${KUBELET_POD_INFRA_CONTAINER} \
            ${KUBELET_DNS_IP} \
            ${KUBELET_DNS_DOMAIN} \
            ${KUBELET_SWAP}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

3.3 Start kubelet

systemctl daemon-reload
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl status kubelet.service

3.4 The kube-proxy service

# Configuration file
[root@k8s-n1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config
# default config should be adequate
# Add your own!
KUBE_PROXY_ARGS=""

# Unit file
[root@k8s-n1 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3.5 Start kube-proxy

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

3.6 Check the node status

# Run the check on the master
[root@k8s-m1 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.56.11   Ready     <none>    1d        v1.11.2

3.7 Deploy the flannel network

Download the release, version flannel-v0.10.0:

mkdir flannel
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel
cp flannel/{flanneld,mk-docker-opts.sh} /usr/bin

3.8 Service configuration

# Run the following on the master to write the pod network range into etcd
[root@k8s-m1 ~]# etcdctl set /k8s/network/config '{"Network": "100.0.0.0/16"}'
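
To confirm the key was written and is reachable from the nodes:

etcdctl --endpoints=http://192.168.56.10:2379 get /k8s/network/config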

# Configuration file
[root@k8s-n1 flannel-v0.10.0]# cat /etc/sysconfig/flanneld
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.56.10:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/k8s/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

# Unit file
[root@k8s-n1 flannel-v0.10.0]# cat  /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/flanneld
#EnvironmentFile=-/etc/sysconfig/docker-network
#ExecStart=/usr/bin/flanneld-start $FLANNEL_OPTIONS
ExecStart=/usr/bin/flanneld  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS}  -etcd-prefix=${FLANNEL_ETCD_PREFIX}
ExecStartPost=/usr/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service

# Start the flannel service
systemctl daemon-reload
systemctl enable flanneld.service
systemctl start flanneld.service
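
Once flanneld is up, mk-docker-opts.sh has written the Docker options file; it should contain a --bip option inside the flannel network (the exact subnet is assigned per host and will vary):

cat /run/flannel/docker
# e.g. DOCKER_NETWORK_OPTIONS=" --bip=100.0.63.1/24 --ip-masq=true --mtu=1472"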


# Add the flannel-generated options to the Docker unit file (if Docker was
# installed from the static binaries with the unit file from section II,
# both lines are already present)
[root@k8s-n1 flannel-v0.10.0]#  vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS   # append $DOCKER_NETWORK_OPTIONS to ExecStart
EnvironmentFile=-/run/flannel/docker                 # add this environment file line under [Service]

3.9 Restart Docker

systemctl daemon-reload
systemctl restart docker
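
After the restart, docker0 should hold an address from the flannel subnet shown in /run/flannel/docker:

ip addr show docker0 | grep inet
ip addr show flannel0 | grep inet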

3.10 Testing

3.10.1 Testing flannel

1) A new flannel0 interface appears on each host.
2) Run a busybox container on both the master and a node and ping one from the other; if the pings go through, the deployment works.
[root@k8s-m1 ~]# docker run -it busybox sh
/ # ping 100.0.100.2

3.10.2 Testing Kubernetes

# Test from the master node
[root@k8s-m1 ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
192.168.56.11   Ready     <none>    1d        v1.11.2
[root@k8s-m1 ~]# kubectl get pod
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-phv8v   1/1       Running   0          36s
[root@k8s-m1 ~]# kubectl get deployment
No resources found.
# Create a deployment
[root@k8s-m1 ~]# kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
[root@k8s-m1 ~]# kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx     2         2         2            1           4s
[root@k8s-m1 ~]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-64f497f8fd-75fd7   1/1       Running   0          1m
nginx-64f497f8fd-ldv7s   1/1       Running   0          1m

# On the node you can also see the corresponding containers running
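
To exercise kube-proxy as well, expose the deployment as a service and curl its cluster IP from a node (use whatever IP `kubectl get svc` reports; <CLUSTER-IP> below is a placeholder):

kubectl expose deployment nginx --port=80
kubectl get svc nginx
# on a node: curl http://<CLUSTER-IP>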
