Install Kubernetes 1.15.1 with kubeadm in 20 minutes
Four nodes: master, node01, node02, harbor
Installation packages: link: https://pan.baidu.com/s/1iBM9ymdvmQi67DPJ93OF0w password: k2m6
Set the system hostnames (run the matching command on each node) and hosts file resolution
#hostnamectl set-hostname k8s-master
#hostnamectl set-hostname k8s-node01
#hostnamectl set-hostname k8s-node02
Install dependency packages
#yum -y install conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Switch the firewall to iptables and start with empty rules
#systemctl stop firewalld && systemctl disable firewalld
#yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELinux
#swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
#setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Tune kernel parameters
#cat > kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# do not use swap space
vm.swappiness=0
# do not check available physical memory on overcommit
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
Load kubernetes.conf at boot and apply it now
#cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
#sysctl -p /etc/sysctl.d/kubernetes.conf
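If sysctl reports that the net.bridge.* keys do not exist, the bridge netfilter module has not been loaded yet; loading it first (it is loaded again later in the IPVS step) and re-running sysctl should clear the error:
#modprobe br_netfilter
#sysctl -p /etc/sysctl.d/kubernetes.conf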
Adjust the system time zone (skip this step if Shanghai was selected when the OS was installed)
Set the time zone to Asia/Shanghai
#timedatectl set-timezone Asia/Shanghai
Write the current UTC time to the hardware clock
#timedatectl set-local-rtc 0
Restart services that depend on the system time
#systemctl restart rsyslog
#systemctl restart crond
Disable the system mail service
#systemctl stop postfix && systemctl disable postfix
Configure the logging services rsyslogd and systemd-journald
Create the persistent log directory
# mkdir /var/log/journal
Create the journald configuration file
#mkdir /etc/systemd/journald.conf.d
#cat > /etc/systemd/journald.conf.d/99-prophet.conf << EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress old logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# maximum disk space used
SystemMaxUse=10G
# maximum size of a single log file (200M)
SystemMaxFileSize=200M
# log retention time
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
#systemctl restart systemd-journald
Upgrade the system kernel to 4.4
#rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Check that the menuentry entries in /boot/grub2/grub.cfg contain an initrd16 line (if not, reinstall), then install the long-term kernel
#cat /boot/grub2/grub.cfg | grep initrd16
#yum --enablerepo=elrepo-kernel install -y kernel-lt
Set the default boot kernel and reboot
#grub2-set-default 'CentOS Linux (4.4.214-1.el7.elrepo.x86_64) 7 (Core)'
#reboot
Check that the kernel version on all three nodes is now 4.4
#uname -r
4.4.214-1.el7.elrepo.x86_64
Prerequisites for enabling IPVS in kube-proxy
#modprobe br_netfilter
#cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
#chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Install Docker
#yum -y install yum-utils device-mapper-persistent-data lvm2
#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#yum update -y && yum install -y docker-ce
Create the /etc/docker directory
#mkdir /etc/docker
Start Docker and enable it at boot
# systemctl start docker && systemctl enable docker
Create the daemon.json configuration file and switch the log driver to json-file, so container logs can later be found under /var/log/containers/ and indexed and searched from EFK. On CentOS 7 Docker supports two cgroup drivers (cgroupfs and systemd); use systemd so Docker and the kubelet share the same driver.
#cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
#mkdir -p /etc/systemd/system/docker.service.d
#systemctl daemon-reload && systemctl restart docker && systemctl enable docker
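As an optional sanity check, docker info should now report systemd as the cgroup driver and json-file as the logging driver:
#docker info | grep -iE 'cgroup driver|logging driver'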
Install kubeadm
#cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
#yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
#systemctl enable kubelet
Import the Kubernetes system images. Baidu cloud link: https://pan.baidu.com/s/1iBM9ymdvmQi67DPJ93OF0w password: k2m6
# tar xf kubeadm-basic.images.tar.gz
Batch image import script
#vim docker-load.sh
#!/bin/bash
ls /root/rpm/kubeadm-basic.images > /root/docker-load-list.txt
cd /root/rpm/kubeadm-basic.images
for i in $(cat /root/docker-load-list.txt)
do
  docker load -i $i
done
#chmod a+x docker-load.sh
#./docker-load.sh
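To confirm the import worked, listing the local images should show the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns images needed by v1.15.1:
#docker images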
On the master node, export the kubeadm-config.yaml configuration file
#kubeadm config print init-defaults > /etc/kubernetes/kubeadm-config.yaml
#vim /etc/kubernetes/kubeadm-config.yaml
Line 12: advertiseAddress: 192.168.1.11
Line 34: kubernetesVersion: v1.15.1
Below line 36 add: podSubnet: "10.244.0.0/16" (the pod network CIDR)
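For reference, after these edits the relevant parts of kubeadm-config.yaml should look roughly like the following (a sketch based on the v1beta2 defaults printed by kubeadm 1.15; exact line numbers may vary):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12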
Initialize the master
#kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:69540b24d9d2eaa4fd9a9d533bfde8c6520ce7586366fa9e35474e94553532ba
#mkdir -p $HOME/.kube
#cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#chown $(id -u):$(id -g) $HOME/.kube/config
#echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
#source ~/.bash_profile
Keep the installation files
#mkdir -p /usr/local/kubernetes/install-k8s
#mv /etc/kubernetes/kubeadm-config.yaml kubeadm-init.log /usr/local/kubernetes/install-k8s/
Install flannel on the master
#wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#kubectl create -f kube-flannel.yml
If the flannel image cannot be pulled, pull it from a domestic mirror and re-tag it, then scp the image to node01 and node02 and docker load it there, as in the sketch after the commands below
#docker pull lizhenliang/flannel:v0.11.0-amd64
#docker tag lizhenliang/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
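One way to get the re-tagged image onto the worker nodes (a sketch; assumes root SSH access to k8s-node01 and k8s-node02):
#docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel-v0.11.0-amd64.tar
#scp flannel-v0.11.0-amd64.tar root@k8s-node01:/root/
#scp flannel-v0.11.0-amd64.tar root@k8s-node02:/root/
Then on node01 and node02:
#docker load -i /root/flannel-v0.11.0-amd64.tar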
Join node01 and node02 to the Kubernetes cluster
#tail -5 /usr/local/kubernetes/install-k8s/kubeadm-init.log
#kubeadm join 192.168.1.11:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:69540b24d9d2eaa4fd9a9d533bfde8c6520ce7586366fa9e35474e94553532ba
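After both nodes have joined, a quick check on the master should show all three nodes Ready once flannel is up (this can take a minute or two):
#kubectl get nodes
#kubectl get pods -n kube-system -o wide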
Install the Harbor registry
#mv docker-compose /usr/local/bin/
#chmod a+x /usr/local/bin/docker-compose
#tar xf harbor-offline-installer-v1.2.0.tgz -C /usr/local/
Edit the Harbor configuration file
#vim /usr/local/harbor/harbor.cfg
Line 5: hostname = edwin.registry.docker.com
Line 9: ui_url_protocol = https
#mkdir -p /data/cert/ && cd /data/cert/
Generate the certificates
Generate server.key
#openssl genrsa -des3 -out server.key 2048
Generate the certificate signing request server.csr
#openssl req -new -key server.key -out server.csr
# cp server.key server.key.bak
Strip the passphrase from the key so it is not prompted for later
# openssl rsa -in server.key.bak -out server.key
Sign the certificate (self-signed, valid for 365 days)
# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
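Since ui_url_protocol was set to https, harbor.cfg also needs to point at the certificate and key generated above before running install.sh (a sketch; the option names below are from the Harbor 1.2.x harbor.cfg template):
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key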
Set permissions on the generated certificates, then run the Harbor installer
#chmod a+x *
#cd /usr/local/harbor/
#./install.sh
Configure hosts resolution for every host in the Kubernetes cluster
# echo "192.168.1.11 edwin.registry.docker.com" >> /etc/hosts