control-plane
# cat init-kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.x.1.180
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - 10.x.1.180
  - 10.x.1.180
  - 10.x.1.180
  - 127.0.0.1
  - k8s.jevic.cn
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "127.0.0.1:8443"
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: reg.jevic.cn/k8s
kind: ClusterConfiguration
kubernetesVersion: 1.23.10
networking:
  dnsDomain: cluster.local
  podSubnet: "172.86.128.0/18"
  serviceSubnet: "10.254.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
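controlPlaneEndpoint is set to 127.0.0.1:8443 rather than to a node address, which only works if every node runs a local TCP proxy forwarding that port to the kube-apiservers on 6443. The proxy itself is not shown in this section; a minimal haproxy sketch of such a local forwarder, with placeholder backend addresses, would look roughly like:

# /etc/haproxy/haproxy.cfg (illustrative, not the actual config used here)
defaults
    mode    tcp
    timeout connect 5s
    timeout client  10m
    timeout server  10m

frontend kube-apiserver
    bind 127.0.0.1:8443
    default_backend apiservers

backend apiservers
    balance roundrobin
    option  tcp-check
    server master-1 10.x.1.180:6443 check
    # add the remaining control-plane nodes here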
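The KubeProxyConfiguration selects IPVS mode, which assumes the IPVS kernel modules (plus the ipset/ipvsadm packages) are available on every node; if they are missing, kube-proxy falls back to iptables mode. A sketch of loading the modules persistently (package names and paths depend on the distribution):

cat > /etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe "$m"; done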
# kubeadm init --config init-kubeadm.yaml --upload-certs
[init] Using Kubernetes version: v1.23.10
..............................................
..............................................
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 127.0.0.1:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:3cb0af95811e4bde309e53fdaa63b9e6b9d07691e219b8e386dc81643d8d061e \
        --control-plane --certificate-key b5c191b069c002d0dc689c1d150931f648647d11f9517a204bd157c96ce59e5a
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 127.0.0.1:8443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:3cb0af95811e4bde309e53fdaa63b9e6b9d07691e219b8e386dc81643d8d061e
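After copying admin.conf as the output instructs, a quick sanity check of the new control plane (illustrative commands; the node reports NotReady and CoreDNS stays Pending until a pod network such as amazon-vpc-cni-k8s below is installed):

kubectl get nodes -o wide
kubectl get pods -n kube-system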
amazon-vpc-cni-k8s
addons
localdnsCache
aws-ebs-csi-driver
kubectl
curl -L https://dl.k8s.io/release/v1.23.10/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl
chmod +x /usr/local/bin/kubectl
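Optionally, verify the downloaded binary against the published checksum and confirm the client version (following the upstream kubectl install steps):

curl -L https://dl.k8s.io/release/v1.23.10/bin/linux/amd64/kubectl.sha256 -o kubectl.sha256
echo "$(cat kubectl.sha256)  /usr/local/bin/kubectl" | sha256sum --check
kubectl version --client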