Deploying Kubernetes 1.7.4 with kubeadm


Environment:
etcd1: 192.168.130.32
etcd2: 192.168.130.33
etcd3: 192.168.130.34
master: 192.168.130.42
node1: 192.168.130.43
node2: 192.168.130.44

Note: kubeadm reached GA in 2018 (with Kubernetes 1.13).

I. Shared components
1. etcd cluster
gcr.io/etcd-development/etcd
etcd 3.2.7 cluster: http://192.168.130.32:2379,http://192.168.130.33:2379,http://192.168.130.34:2379
Installation omitted.
2. Docker registry
Images for k8s 1.6/1.7:

A private registry can be created with docker-distribution; the images required are listed below (a setup sketch follows the list):
192.168.130.254:5000/google_containers/hyperkube:v1.7.4
192.168.130.254:5000/google_containers/k8s-dns-sidecar-amd64:1.14.4
192.168.130.254:5000/google_containers/k8s-dns-kube-dns-amd64:1.14.4
192.168.130.254:5000/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
192.168.130.254:5000/google_containers/pause-amd64:3.0
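A minimal sketch of standing up and populating such a registry, assuming the host 192.168.130.254 and using the registry:2 image in place of the docker-distribution RPM:
docker run -d --name registry --restart=always -p 5000:5000 registry:2
# mirror one upstream image into the private registry (repeat per image)
docker pull gcr.io/google_containers/pause-amd64:3.0
docker tag gcr.io/google_containers/pause-amd64:3.0 192.168.130.254:5000/google_containers/pause-amd64:3.0
docker push 192.168.130.254:5000/google_containers/pause-amd64:3.0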
Tip: hyperkube is an all-in-one image that bundles kube-apiserver (/hyperkube apiserver), kube-controller-manager (/hyperkube controller-manager), kube-scheduler (/hyperkube scheduler), and /usr/local/bin/kube-proxy, which greatly simplifies rapid deployment.
Binary download addresses; the links below actually redirect to object-storage URLs such as https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubeadm
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-apiserver
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-controller-manager
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-scheduler
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kube-proxy
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubelet
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubectl
https://dl.k8s.io/release/v1.8.4/bin/linux/amd64/kubeadm
https://dl.k8s.io/v1.8.4/kubernetes-server-linux-amd64.tar.gz
 
Images for k8s 1.8/1.9: [image list screenshot omitted]


II. Base environment (master and node)
1. Docker
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
# baseurl was missing in the original; this is the historical docker-engine repo
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=0
EOF
yum -y install docker-engine
sed -i '/^ExecStart=\/usr\/bin\/dockerd/c ExecStart=/usr/bin/dockerd --registry-mirror http://192.168.130.254:5000 --insecure-registry 192.168.130.254:5000 -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock' /lib/systemd/system/docker.service
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
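To confirm the daemon picked up the registry flags, and to check the cgroup driver (which must match the kubelet setting adjusted below), docker info can be inspected:
docker info 2>/dev/null | grep -i 'cgroup driver'    # expect: cgroupfs for docker-engine
docker info 2>/dev/null | grep -iA1 'insecure registries\|registry mirrors'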
2. kubeadm
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
# baseurl was missing in the original; this is the upstream yum repo of that era
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
yum -y install kubeadm
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cat > /etc/systemd/system/kubelet.service.d/20-pod-infra-image.conf <<EOF
[Service]
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=192.168.130.254:5000/google_containers/pause-amd64:3.0"
EOF
systemctl daemon-reload
systemctl enable kubelet
Note: at this point kubelet cannot start because its config file does not exist yet; it will be started automatically once kubeadm init generates the config.
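To verify this behavior, the unit sits in a restart loop until init runs; a quick check:
systemctl status kubelet --no-pager    # stays in activating (auto-restart) until kubeadm init
journalctl -u kubelet --no-pager | tail -n 5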

III. Master node initialization (kubeadm init)

export KUBE_REPO_PREFIX=192.168.130.254:5000/google_containers
export KUBE_HYPERKUBE_IMAGE=192.168.130.254:5000/google_containers/hyperkube:v1.7.4
Tip: k8s 1.7.x pins etcd to version 3.0.17, i.e. 192.168.130.254:5000/google_containers/etcd-amd64:3.0.17
To use an external etcd cluster: the --external-etcd-endpoints flag of early kubeadm releases has been removed; instead, pass a kubeadm.yaml config file via the --config flag.
The flannel and calico networks require podSubnet to be set explicitly at init time; for other network solutions, consult the official Kubernetes documentation.
cat >kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 0.0.0.0
etcd:
  endpoints:
  - http://192.168.130.32:2379
  - http://192.168.130.33:2379
  - http://192.168.130.34:2379
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: v1.7.4
EOF
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
kubeadm init --config kubeadm.yaml
After initialization completes, the certificates and the kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml static-pod manifests are generated automatically; their parameters can be fine-tuned when needed.
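The generated files live under /etc/kubernetes; the kubelet watches the manifests directory, so edits there are picked up automatically:
ls /etc/kubernetes/pki          # generated certificates
ls /etc/kubernetes/manifests    # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml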

If you do not use a pre-existing etcd cluster, etcd can also be run as a container:
etcd:
    extraArgs:
        advertise-client-urls: http://0.0.0.0:2379
        listen-client-urls: http://0.0.0.0:2379
    image: gcr.io/google_containers/etcd-amd64:3.0.17
Note: k8s 1.7 only supports the environment variables, whereas 1.8 and later drop them in favor of config-file keys:
cat >kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 0.0.0.0
etcd:
  endpoints:
  - http://192.168.130.11:2379
  - http://192.168.130.12:2379
  - http://192.168.130.13:2379
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
kubernetesVersion: v1.7.4
imageRepository: 192.168.130.1:5000/google_containers
unifiedControlPlaneImage: 192.168.130.1:5000/google_containers/hyperkube:v1.7.4
EOF
As shown above, the config-file keys map to the environment variables: imageRepository corresponds to KUBE_REPO_PREFIX, and unifiedControlPlaneImage corresponds to KUBE_HYPERKUBE_IMAGE.
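Side by side, the two equivalent invocations (a sketch reusing the values from this walkthrough):
# k8s 1.7: environment variables
KUBE_REPO_PREFIX=192.168.130.254:5000/google_containers \
KUBE_HYPERKUBE_IMAGE=192.168.130.254:5000/google_containers/hyperkube:v1.7.4 \
kubeadm init --config kubeadm.yaml
# k8s 1.8+: imageRepository / unifiedControlPlaneImage keys inside kubeadm.yaml
kubeadm init --config kubeadm.yaml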

IV. Configure kubectl (kubeconfig)
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
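Alternatively, for a root shell, pointing KUBECONFIG directly at the admin config also works:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes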
Until the pod network is up, nodes remain NotReady. By default the master does not schedule regular workloads; to allow pods to be scheduled onto the master, run kubectl taint nodes --all node-role.kubernetes.io/master- on it.
Error 1: [error screenshot omitted]
Fix:
An intermittent bug; reset and re-initialize:
kubeadm reset
rm -rf /run/kubernetes
Then re-run kubeadm init.

Error 2:
Unable to connect to the server: x509: certificate signed by unknown authority
Cause:
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
failed to overwrite the pre-existing config.
Fix:
rm -rf ~/.kube
(then copy admin.conf again)

Error 3:
Unable to connect to the server: x509: certificate is valid for 192.168.130.100, 10.254.0.1, 10.96.0.10, not 192.168.130.11
Cause:
The certificate does not match the host.
Fix:
Regenerate the certificates for this host.
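A sketch of one way to regenerate them (destructive; it tears down and re-creates the control plane on this host):
kubeadm reset                        # also clears /etc/kubernetes, including pki/
kubeadm init --config kubeadm.yaml   # re-issues certs for this host's address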

V. Networking
The flannel approach
Tip: flannel runs as a DaemonSet; using the private-registry image 192.168.130.254:5000/coreos/flannel:v0.8.0-amd64 is recommended.
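Deployment is the usual manifest apply (a sketch; kube-flannel.yml and kube-flannel-rbac.yml are assumed to be the v0.8.0 manifests from the flannel repo, with the image repointed at the private registry):
kubectl apply -f kube-flannel-rbac.yml   # RBAC bindings; kubeadm clusters enable RBAC
kubectl apply -f kube-flannel.yml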
This automatically creates the /etc/cni/net.d/10-flannel.conf config file and brings up the flannel.1 interface; at this point five Docker images are present on the master node.



The calico approach
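The apply step is analogous (a sketch; calico.yaml is assumed to be the hosted-install manifest for kubeadm from the Calico docs of that era, with images repointed at the private registry):
kubectl apply -f calico.yaml
kubectl -n kube-system get ds calico-node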
Error: The DaemonSet "calico-node" is invalid: spec.template.spec.containers[0].securityContext.privileged: Forbidden: disallowed by cluster policy. Fix: add --allow-privileged=true to the kube-apiserver and kubelet startup configuration.

VI. Add nodes (kubeadm join)
Install kubeadm the same way as on the master (omitted).
There is no need to set the KUBE_REPO_PREFIX variable here.
kubeadm join --token 5ef782.c2f3b670f11f6d18 192.168.130.42:6443
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
flannel / calico: [node-side screenshots omitted]

VII. Verify cluster status
kubectl get nodes
kubectl get pods --namespace=kube-system
flannel / calico: [output screenshots omitted]



VIII. Dashboard
cAdvisor: basic monitoring built into the kubelet.
Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=4194"
systemctl daemon-reload && systemctl restart kubelet
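After the restart, cAdvisor should answer on that port (a sketch, using the master's address; 4194 serves both the web UI and Prometheus metrics):
curl -s http://192.168.130.42:4194/metrics | head   # raw metrics
# the web UI is at http://192.168.130.42:4194/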

1. Create the dashboard containers
The main changes against the upstream manifest are switching to the private-registry image and exposing a nodePort:

        image: 192.168.130.254:5000/google_containers/kubernetes-dashboard-amd64:v1.6.3
        imagePullPolicy: IfNotPresent
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
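Then deploy and check (a sketch; kubernetes-dashboard.yaml is assumed to be the upstream v1.6.3 manifest with the edits above):
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods -n kube-system -l k8s-app=kubernetes-dashboard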

2. Access URLs
Via kubectl proxy:
kubectl proxy --address='0.0.0.0' --port=30001 --accept-hosts='^*$'
http://192.168.130.42:30001/ui
Via nodePort:
http://192.168.130.43:30000/ui
Via the API server:
https://192.168.130.42:6443/ui

3. Heapster monitoring
Only the image fields need to be switched to the private-registry images:
192.168.130.254:5000/google_containers/heapster-influxdb-amd64:v1.3.3
192.168.130.254:5000/google_containers/heapster-grafana-amd64:v4.4.3
192.168.130.254:5000/google_containers/heapster-amd64:v1.4.0

kubectl apply -f heapster.yaml
kubectl apply -f influxdb.yaml
kubectl apply -f grafana.yaml
kubectl apply -f heapster-rbac.yaml
kubectl get services --namespace=kube-system monitoring-grafana monitoring-influxdb
Tip: Grafana can likewise be exposed through a nodePort. After Heapster is deployed successfully, kubernetes-dashboard must be redeployed before the graphs appear.
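A sketch of the nodePort exposure (assuming the monitoring-grafana service name used above):
kubectl -n kube-system patch svc monitoring-grafana -p '{"spec":{"type":"NodePort"}}'
kubectl -n kube-system get svc monitoring-grafana   # note the assigned nodePort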

Grafana's default username and password are both admin; InfluxDB's default database is k8s, with username and password both root.

curl -s -G http://10.99.32.74:8086/query -u root:root --data-urlencode "q=SHOW DATABASES"|python -mjson.tool



posted @ 2017-09-12 16:07 李庆喜