Kubernetes 1.4.5 Cluster Deployment
2016/11/16 23:39:58
Environment: CentOS 7
[fu@centos server]$ uname -a
Linux centos 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
1. Initialize the Environment
Disable the firewall:
[root@k8s-master fu]# systemctl stop firewalld
[root@k8s-master fu]# systemctl disable firewalld
1.1 Environment
Node | IP |
---|---|
node-1 | 192.168.44.129 |
node-2 | 192.168.44.131 |
node-3 | 192.168.44.132 |
1.2 Set the hostname
hostnamectl --static set-hostname <hostname>
IP | hostname |
---|---|
192.168.44.129 | k8s-master |
192.168.44.131 | k8s-node-1 |
192.168.44.132 | k8s-node-2 |
master:
[root@centos fu]# hostnamectl --static set-hostname k8s-master
node-1:
[root@centos fu]# hostnamectl --static set-hostname k8s-node-1
node-2:
[root@centos fu]# hostnamectl --static set-hostname k8s-node-2
1.3 Configure /etc/hosts
vi /etc/hosts
Add the following lines to /etc/hosts on each machine:
192.168.44.129 k8s-master
192.168.44.131 k8s-node-1
192.168.44.132 k8s-node-2
Or append them directly by running:
echo '192.168.44.129 k8s-master
192.168.44.131 k8s-node-1
192.168.44.132 k8s-node-2' >> /etc/hosts
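A blind append duplicates the entries if it is run twice. A minimal idempotent sketch, demonstrated on a scratch file here (on the real machines you would set HOSTS_FILE=/etc/hosts and run as root):

```shell
# Append each mapping only if its hostname is not already present,
# so re-running the script leaves /etc/hosts unchanged.
HOSTS_FILE=./hosts.demo
touch "$HOSTS_FILE"
while read -r ip name; do
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.44.129 k8s-master
192.168.44.131 k8s-node-1
192.168.44.132 k8s-node-2
EOF
cat "$HOSTS_FILE"
```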
1.4 Install kubelet and kubeadm
Add the yum repo (note: run as root):
cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF
Install and start the packages:
yum install docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
2. Deploy the Kubernetes Master
2.1 Add the yum repo (skip if already handled uniformly in step 1.4)
Note: run as root
cat <<EOF> /etc/yum.repos.d/k8s.repo
[kubelet]
name=kubelet
baseurl=http://files.rm-rf.ca/rpms/kubelet/
enabled=1
gpgcheck=0
EOF
Install the Kubernetes dependencies:
yum has many repos, most of them remote. Running yum makecache builds a local metadata cache, so later yum install searches hit the cache and run faster.
[root@k8s-master fu]# yum makecache
[root@k8s-master fu]# yum install -y socat kubelet kubeadm kubectl kubernetes-cni
2.2 Install Docker
wget -qO- https://get.docker.com/ | sh
If you see:
bash: wget: command not found
then install wget first:
[root@centos fu]# yum -y install wget
If Docker is already installed, you can start it directly. If the daemon is not running you will see:
[root@centos fu]# docker images
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Set Docker to start on boot and start it:
systemctl enable docker
systemctl start docker
2.3 Pull the images
images=(kube-proxy-amd64:v1.4.5 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.5 kube-controller-manager-amd64:v1.4.5 kube-apiserver-amd64:v1.4.5 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.1)
for imageName in "${images[@]}" ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
2.4 Start Kubernetes
systemctl enable kubelet
systemctl start kubelet
2.5 Create the cluster
kubeadm init --api-advertise-addresses=192.168.44.129 --use-kubernetes-version v1.4.5
If it reports:
Running pre-flight checks
preflight check errors:
/etc/kubernetes is not empty
then:
[root@k8s-master kubernetes]# rm -rf manifests/
and run init again.
2.6 Record the token
From the log printed by init, record the token used to join nodes to the cluster.
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=a46536.cad65192491d2fd9 192.168.44.129
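If the init output is saved to a log file, the token can also be extracted programmatically. A sketch assuming the 1.4-era join-line format shown above (the log file below is fabricated for illustration):

```shell
# Fabricated sample of the tail of a `kubeadm init` log.
cat > kubeadm-init.log <<'EOF'
Kubernetes master initialised successfully!
You can now join any number of machines by running the following on each node:
kubeadm join --token=a46536.cad65192491d2fd9 192.168.44.129
EOF
# Pull out the value following --token= on the join line.
TOKEN=$(sed -n 's/.*--token=\([^ ]*\).*/\1/p' kubeadm-init.log)
echo "$TOKEN"
```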
2.7 Check kubelet status
systemctl status kubelet
2.8 Query the cluster nodes:
[root@k8s-master system]# kubectl get nodes
3. Deploy the Kubernetes Nodes
3.1 Install Docker
wget -qO- https://get.docker.com/ | sh
If you see:
bash: wget: command not found
then install wget first:
[root@centos fu]# yum -y install wget
Set Docker to start on boot and start it:
systemctl enable docker
systemctl start docker
3.2 Pull the images
images=(kube-proxy-amd64:v1.4.5 kube-discovery-amd64:1.0 kubedns-amd64:1.7 kube-scheduler-amd64:v1.4.5 kube-controller-manager-amd64:v1.4.5 kube-apiserver-amd64:v1.4.5 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.3 exechealthz-amd64:1.1 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.4.1)
for imageName in "${images[@]}" ; do
docker pull jicki/$imageName
docker tag jicki/$imageName gcr.io/google_containers/$imageName
docker rmi jicki/$imageName
done
3.3 Install and start Kubernetes (skip if already handled uniformly in step 1.4)
Configure the yum repo first as above; installs will be noticeably faster.
yum makecache
yum install -y socat kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet
systemctl start kubelet
3.4 Join the cluster
On each node, run the join command that was returned when you created the cluster on the master:
kubeadm join --token=a46536.cad65192491d2fd9 192.168.44.129
If join reports that a directory such as /etc/kubernetes is not empty, clear it manually and retry. If the node cannot reach the master, check the firewall and SELinux settings.
After all nodes have joined successfully, check the cluster state on the master with kubectl get nodes. Note that a freshly rebooted machine may take a while to show up.
4. Configure Kubernetes
4.1 Install the pod network (Weave Net)
[fu@k8s-master ~]$ kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
4.2 Check system service status
List the pods in all namespaces:
[fu@k8s-master ~]$ kubectl get pods --all-namespaces
kube-dns only reaches Running after the network has been configured.
4.3 Control the cluster from other hosts
Copy /etc/kubernetes/admin.conf from the master to the other nodes. You can download it and re-upload it, or copy it directly between hosts.
Host-to-host copy:
[root@k8s-master fu]# scp admin.conf fu@192.168.44.131:/home/fu
[root@k8s-master fu]# scp admin.conf fu@192.168.44.132:/home/fu
scp command format: scp local_file remote_username@remote_ip:remote_folder
Query the cluster node status from a node:
[fu@k8s-node-2 ~]$ kubectl --kubeconfig ./admin.conf get nodes
The output matches what you get when running on the master.
To inspect admin.conf, use more or cat (more pages with the space bar):
[root@k8s-master fu]# more admin.conf
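As an alternative to passing --kubeconfig on every command, you can export the KUBECONFIG environment variable for the session. A sketch, assuming admin.conf was copied to the home directory as above:

```shell
# Point kubectl at the copied kubeconfig for this shell session,
# so plain kubectl commands use the master's credentials.
export KUBECONFIG="$HOME/admin.conf"
# kubectl get nodes   # now works without the --kubeconfig flag
```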
4.4 Configure the dashboard
# Download the yaml file on the master; importing it directly would pull the images from the official registry
[fu@k8s-master ~]$ curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
Edit the yaml file: by default it defines an image pull policy that always pulls the image.
[fu@k8s-master ~]$ vi kubernetes-dashboard.yaml
Change imagePullPolicy from Always to IfNotPresent (pull only when the image is not present locally) or Never (never pull):
imagePullPolicy: Always
becomes:
imagePullPolicy: IfNotPresent
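The policy change can also be scripted with sed. A minimal sketch, demonstrated here on a fabricated sample manifest; on the master you would run the sed line against kubernetes-dashboard.yaml instead:

```shell
# Fabricate a tiny sample manifest fragment to demonstrate on.
cat > sample-dashboard.yaml <<'EOF'
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.1
        imagePullPolicy: Always
EOF
# Rewrite the pull policy in place so a locally cached image is reused.
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' sample-dashboard.yaml
grep imagePullPolicy sample-dashboard.yaml
```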
Create the dashboard:
[fu@k8s-master ~]$ kubectl create -f ./kubernetes-dashboard.yaml
Find the NodePort, i.e. the externally accessible port:
[fu@k8s-master ~]$ kubectl describe svc kubernetes-dashboard --namespace=kube-system
The describe output shows the exposed NodePort; the dashboard can then be reached in a browser at that port, e.g.:
192.168.44.129:32145
If it does not open, check whether the pods have finished creating:
[fu@k8s-master ~]$ kubectl get pod --namespace=kube-system
Once everything is Running, the dashboard is up.
If you run into the following problem, resolve it as below (I did not encounter it myself).
FAQ:
kube-discovery error
failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]
systemctl stop kubelet;
docker rm -f -v $(docker ps -q);
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
systemctl start kubelet
kubeadm init
Recommended blog: https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/