Cloud Native Study Notes - DAY 1

1 Definition of Cloud Native

Official definition: Cloud native is a methodology for building and running production-ready applications. It frees enterprises from spending effort on the underlying runtime environment so they can focus on business-level feature development, enabling fast delivery, rapid iteration, and stability, which together lower overall costs and improve delivery efficiency.
Cloud native helps organizations build and run elastically scalable applications in dynamic environments. Representative technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs.

2 Cloud Native (CNCF) Project Categories

Graduated
Incubating
Sandbox
Archived


3 Kubernetes Overview

Kubernetes is a product that Google open-sourced, based on its experience operating its internal Borg system.

4 Key Components on the k8s Master Node

4.1: kube-apiserver: Provides the HTTP REST interface for create, delete, read, update, and watch operations on all Kubernetes resources, and authenticates and authorizes requests

4.2: kube-scheduler: For each Pod in the pending list, picks the most suitable Node from the available Nodes and writes the scheduling decision to etcd. Scheduling goes through two phases: predicates (filtering) and priorities (scoring).

4.3: kube-controller-manager: The Kubernetes controller manager; continuously reconciles the cluster so that pods stay in their desired state

4.4: etcd: Effectively the cluster's database; stores the metadata of the entire cluster, including configuration, specs, and the state of running workloads. In production, back up etcd data regularly.
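A minimal backup sketch with etcdctl (the endpoint and certificate paths assume kubeadm defaults; adjust for your deployment):
 # take a point-in-time snapshot of the etcd keyspace
 ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db \
   --endpoints=https://127.0.0.1:2379 \
   --cacert=/etc/kubernetes/pki/etcd/ca.crt \
   --cert=/etc/kubernetes/pki/etcd/server.crt \
   --key=/etc/kubernetes/pki/etcd/server.key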

5 Key Components on k8s Worker Nodes

5.1: kube-proxy: Enables access to Pods on each node by dynamically updating iptables or ipvs rules so that requests to pods are forwarded correctly

5.2: kubelet: Reports node status to the master, performs container health checks on the node, invokes the docker or containerd runtime to run containers, prepares volumes for pods, and reports pod runtime status
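Once the cluster from section 10 is up, these components are easy to inspect; in a kubeadm deployment the control-plane components run as static pods in kube-system, while kubelet runs as a systemd service on every node:
 kubectl get pods -n kube-system -o wide   # apiserver, controller-manager, scheduler, etcd, kube-proxy
 systemctl status kubelet                  # kubelet itself is not a pod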

6 Installing containerd on Ubuntu 20.04

6.1 Installing containerd via apt

6.1.1 Replace the default apt sources with the Tsinghua mirror
vi /etc/apt/sources.list

 deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
 #deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
 deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
 #deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
 deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
 #deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
 #deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
 ##deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
 deb http://security.ubuntu.com/ubuntu/ focal-security main restricted universe multiverse

apt update
6.1.2 Install containerd
apt install containerd
#runc is installed automatically as a dependency; the containerd version from apt is 1.6.12
6.1.3 Modify the configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
vi /etc/containerd/config.toml
Change sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"
Below the registry.mirrors line, add the following two lines:
     [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
           endpoint = ["https://9916w1ow.mirror.aliyuncs.com"]
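Before restarting, the edits can be sanity-checked with a quick grep (containerd config dump would also show the effective merged config):
 grep -E 'sandbox_image|endpoint' /etc/containerd/config.toml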
6.1.4 Restart the service
systemctl restart containerd
systemctl status containerd

6.2 Installing containerd from the binary release

6.2.1 Download containerd from GitHub and install it
cd /usr/local/src
wget https://github.com/containerd/containerd/releases/download/v1.6.20/containerd-1.6.20-linux-amd64.tar.gz
tar zxvf containerd-1.6.20-linux-amd64.tar.gz 
cp bin/* /usr/local/bin/
/usr/local/bin/containerd -v
6.2.2 Prepare the systemd service file for containerd
vi /lib/systemd/system/containerd.service


 [Unit]
 Description=containerd container runtime
 Documentation=https://containerd.io
 After=network.target local-fs.target

 [Service]
 ExecStartPre=-/sbin/modprobe overlay
 ExecStart=/usr/local/bin/containerd

 Type=notify
 Delegate=yes
 KillMode=process
 Restart=always
 RestartSec=5

 #Having non-zero Limit*s causes performance problems due to accounting overhead
 #in the kernel. We recommend using cgroups to do container-local accounting.
 LimitNPROC=infinity
 LimitCORE=infinity
 LimitNOFILE=infinity

 #Comment TasksMax if your systemd version does not support it.
 #Only systemd 226 and above support this option.
 TasksMax=infinity
 OOMScoreAdjust=-999

 [Install]
 WantedBy=multi-user.target
6.2.3 Edit the containerd configuration file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
vi /etc/containerd/config.toml
On line 125, set SystemdCgroup = true
Change sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"
Below the registry.mirrors line, add the following two lines:
     [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
           endpoint = ["https://9916w1ow.mirror.aliyuncs.com"]


Note: on Ubuntu 22.04, if the config is left at SystemdCgroup = false, pods will restart endlessly and kubeadm init will fail; it must be changed to SystemdCgroup = true. On Ubuntu 20.04, SystemdCgroup = false works. The error you may hit on Ubuntu 22.04 when the setting is not changed:
root@k8s-master-test:~# kubectl get nodes
The connection to the server 192.168.1.120:6443 was refused - did you specify the right host or port?
6.2.4 Start containerd
systemctl start containerd
systemctl enable containerd

7 Installing runc on Ubuntu

cd /usr/local/src/ && wget https://github.com/opencontainers/runc/releases/download/v1.1.5/runc.amd64
chmod +x runc.amd64
mv runc.amd64 /usr/bin/runc
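To confirm the downloaded binary runs on this host:
 runc --version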

8 Installing and Using containerd Client Tools

8.1 Using the ctr client command
Show help: ctr --help
Pull an image (the full registry path is required): ctr images pull docker.io/library/alpine:latest
List images: ctr images list
List images in a specific namespace: ctr -n k8s.io images ls
Run a container from an image: ctr run -t --net-host --rm docker.io/library/alpine:latest test-container sh
List containers: ctr containers list
8.2 Installing and using the nerdctl client command
8.2.1 Install nerdctl
cd /usr/local/src/
wget https://github.com/containerd/nerdctl/releases/download/v1.3.0/nerdctl-1.3.0-linux-amd64.tar.gz
tar zxvf nerdctl-1.3.0-linux-amd64.tar.gz -C /usr/local/bin/
8.2.2 Configure nerdctl
mkdir /etc/nerdctl
vi /etc/nerdctl/nerdctl.toml

 namespace    = "k8s.io"
 debug        = false
 debug_full   = false
 insecure_registry = true
8.2.3 Using nerdctl (the commands mirror docker's)
List containers: nerdctl -n k8s.io ps
List images: nerdctl images
Show help: nerdctl --help

9 Installing the CNI Plugins

cd /usr/local/src
wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
mkdir /opt/cni/bin -pv
tar xvf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin/
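The plugin binaries (bridge, host-local, loopback, and so on) should now be in place:
 ls /opt/cni/bin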

10 Installing k8s 1.26.3 on Ubuntu 20.04 with kubeadm

10.1 Install containerd, runc, nerdctl, and the CNI plugins on all nodes
10.2 Prepare the k8s apt source on all nodes
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-cache madison kubeadm
10.3 Install kubeadm, kubectl, and kubelet on all nodes
apt install kubeadm=1.26.3-00 kubelet=1.26.3-00 kubectl=1.26.3-00 -y
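To keep a routine apt upgrade from moving these packages to a different version, they can be pinned:
 apt-mark hold kubeadm kubelet kubectl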
10.4 Get the list of images kubeadm needs for this k8s version (run on the master)
kubeadm config images list --kubernetes-version v1.26.3 > get-image.sh


 registry.k8s.io/kube-apiserver:v1.26.3
 registry.k8s.io/kube-controller-manager:v1.26.3
 registry.k8s.io/kube-scheduler:v1.26.3
 registry.k8s.io/kube-proxy:v1.26.3
 registry.k8s.io/pause:3.9
 registry.k8s.io/etcd:3.5.6-0
 registry.k8s.io/coredns/coredns:v1.9.3
10.5 Replace registry.k8s.io in the list above with the Aliyun registry address registry.cn-hangzhou.aliyuncs.com/google_containers and pull the images; worker nodes only need the kube-proxy and pause images
vi get-image.sh

Master script:
 #!/bin/bash
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Node script:
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
 nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
10.6 Kernel parameter tuning on all nodes
10.6.1 vi /etc/sysctl.conf
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
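These settings are applied at boot; to load them immediately without waiting for the reboot in 10.7, run the following (the net.bridge.* and nf_conntrack keys only take effect once the matching modules from 10.6.2 are loaded):
 sysctl -p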
10.6.2 vi /etc/modules-load.d/modules.conf
ip_vs
ip_vs_lc
ip_vs_lblc
ip_vs_lblcr
ip_vs_rr
ip_vs_wrr
ip_vs_sh
ip_vs_dh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_tables
ip_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
xt_set
br_netfilter
nf_conntrack
overlay
10.6.3 vi /etc/security/limits.conf
root            soft    core            unlimited
root            hard    core            unlimited
root            soft    nproc           1000000
root            hard    nproc           1000000
root            soft    nofile           1000000
root            hard    nofile           1000000
root            soft    memlock          32000
root            hard    memlock          32000
root            soft    msgqueue         8192000
root            hard    msgqueue         8192000
10.7 On all nodes, check whether a swap entry is enabled in /etc/fstab and comment it out if so (a sketch follows), then reboot the node.
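A minimal sketch for turning swap off (the sed pattern assumes a standard fstab swap line):
 swapoff -a                                # disable swap for the running system
 sed -ri '/\sswap\s/s/^/#/' /etc/fstab     # comment it out so it stays off after reboot
After the reboot, verify that the required modules were loaded: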
lsmod |grep br_netfilter


 br_netfilter           28672  0
 bridge                176128  1 br_netfilter
10.8 Run kubeadm on the master node to initialize the cluster
10.8.1 In production, plan the pod-network-cidr and service-cidr address ranges carefully in advance
kubeadm init --apiserver-advertise-address=192.168.1.80 --apiserver-bind-port=6443 --kubernetes-version=v1.26.3 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
10.8.2 If the network add-on will be calico, run the following three commands on all nodes (the config file can be copied from the master to the worker nodes): the calico-kube-controllers pod looks for this config file by default and will not start without it. If the add-on is flannel, running them on the master alone is enough.
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
10.9 Run the kubeadm join command on the worker nodes to join the cluster
kubeadm join 192.168.1.80:6443 --token dt24yw.u5krdxqqk0wcem7z \
    --discovery-token-ca-cert-hash sha256:cb0198348a7e9bdd27b55e1eb4fdb961b907c9b35a8dad3f9f82946b4f69c246
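The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired, generate a fresh join command on the master:
 kubeadm token create --print-join-command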
10.10 Install the calico network add-on (without it, nodes will never become Ready)
Reference: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Reference: https://www.tigera.io/project-calico/

Download calico-ipip_ubuntu2004-k8s-1.26.x and modify the following values:
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16" # must match the pod CIDR used during kubeadm init
- name: CALICO_IPV4POOL_BLOCK_SIZE
  value: "24" # prefix length of the per-node pod blocks
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33" # the node's network interface name

kubectl apply -f calico-ipip_ubuntu2004-k8s-1.26.x
10.11 Check that all pods are in the Running state
kubectl get pods -A
10.12 Edit the kube-proxy config and change the default iptables mode to ipvs
kubectl edit configmap -n kube-system kube-proxy
The mode value is empty by default; change it to ipvs, save, and then restart the nodes for the change to take effect


 metricsBindAddress: ""
 mode: "ipvs"

11 Deploying nginx and tomcat Pods as a Test

kubectl apply -f nginx.yaml 
kubectl apply -f tomcat.yaml 
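The two manifests are not reproduced in these notes; a minimal sketch of what nginx.yaml might contain (the name, image tag, and NodePort 30080 are illustrative assumptions):
 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: nginx-test
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: nginx-test
   template:
     metadata:
       labels:
         app: nginx-test
     spec:
       containers:
       - name: nginx
         image: nginx:1.24
         ports:
         - containerPort: 80
 ---
 # expose the deployment on a NodePort for testing from outside the cluster
 apiVersion: v1
 kind: Service
 metadata:
   name: nginx-test
 spec:
   type: NodePort
   selector:
     app: nginx-test
   ports:
   - port: 80
     targetPort: 80
     nodePort: 30080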

12 Installing and Using the Dashboard

12.1 Pull the images the dashboard pods need
nerdctl pull kubernetesui/dashboard:v2.7.0
nerdctl pull kubernetesui/metrics-scraper:v1.0.8
12.2 Deploy the dashboard pods
kubectl apply -f dashboard-v2.7.0.yaml 
kubectl apply -f admin-user.yaml 
kubectl apply -f admin-secret.yaml
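admin-user.yaml and admin-secret.yaml are not reproduced here; a sketch of what they typically contain, matching the admin-user ServiceAccount and dashboard-admin-user secret shown in 12.3 (the cluster-admin binding is an assumption based on common dashboard setups):
 # admin-user.yaml: ServiceAccount plus a cluster-admin binding
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: admin-user
   namespace: kubernetes-dashboard
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: admin-user
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: admin-user
   namespace: kubernetes-dashboard
 ---
 # admin-secret.yaml: long-lived token for the ServiceAccount (needed on k8s 1.24+,
 # which no longer auto-creates ServiceAccount token secrets)
 apiVersion: v1
 kind: Secret
 metadata:
   name: dashboard-admin-user
   namespace: kubernetes-dashboard
   annotations:
     kubernetes.io/service-account.name: admin-user
 type: kubernetes.io/service-account-token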
12.3 Get the access token
12.3.1
kubectl get secret -A |grep admin

 kubernetes-dashboard   dashboard-admin-user              kubernetes.io/service-account-token   3      8m37s
12.3.2
kubectl describe secret -n kubernetes-dashboard dashboard-admin-user

 Name:         dashboard-admin-user
 Namespace:    kubernetes-dashboard
 Labels:       <none>
 Annotations:  kubernetes.io/service-account.name: admin-user
               kubernetes.io/service-account.uid: 2f4ffe46-7ea1-4c86-824e-08bd713e5fb9

 Type:  kubernetes.io/service-account-token

 Data
 ====
 ca.crt:     1099 bytes
 namespace:  20 bytes
 token:      eyJhbGciOiJSUzI1NiIImtpZCI6InVkVXA4UDgwYUdJODhDekp3aExZRGdHTkdWLVFuMFM1SzVoZ0xKVURyYmMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmY0ZmZlNDYtN2VhMS00Yzg2LTgyNGUtMDhiZDcxM2U1ZmI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.prNYRyXeF0GGG4PRGxr5QHccCB1kjDa5uAdwhDpxntkr0thjnNBD9a1XnrccDaZxMzI6jUiLTzYS927YAi9n87xFIXLRATe06CzowAG1lJthUYq8OYlLwkfh3ucLXIaYrXEo6HG_pHDBA2axZWH7p3_u1UgbiP9lcOTj2Wu6md4N7HF1KrpfHqiH-kvfdK8xn7z5j813L1N7uwaMZhrFAI6RNdXiqEXvhtCKbh1fXdyAncbfn6kfLtSvmgZhY0jCFrZD9mYokOVqlFkPdnzwLJPWqZDoErPvW353IBoyv74RDkZM24m3ogrB5iNC8JEEYiC3owF9_fqlyFhKApdLyQ
12.4 Log in to the dashboard with the token; 30000 is the NodePort defined in dashboard-v2.7.0.yaml


12.5 The dashboard UI after a successful login

