Deploying a Highly Available Kubernetes 1.22.8 Cluster

1. Server Planning

IP             Hostname          Role
10.64.128.160  SPHQOPENK8SMS01   K8S master node 1 (Master and etcd)
10.64.128.161  SPHQOPENK8SMS02   K8S master node 2 (Master and etcd)
10.64.128.162  SPHQOPENK8SMS03   K8S master node 3 (Master and etcd)
10.64.128.166  SPHQOPENK8SND01   K8S worker node 1
10.64.128.167  SPHQOPENK8SND02   K8S worker node 2
10.64.128.168  SPHQOPENK8SND03   K8S worker node 3
10.64.128.169  HA VIP            Virtual IP, served by the HA01 and HA02 hosts
10.64.128.172  SPHQOPENK8SHA01   K8S master access entry point 1, providing HA and load balancing
10.64.128.173  SPHQOPENK8SHA02   K8S master access entry point 2, providing HA and load balancing

2. Kubernetes Cluster Deployment

2.1 Server Initialization

  • (1) Set the static hostname on each machine (run the matching command on each host) and configure /etc/hosts resolution
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SMS01
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SMS02
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SMS03
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SND01
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SND02
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SND03
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SHA01
[root@localhost ~]# hostnamectl set-hostname --static SPHQOPENK8SHA02
  • (2) Add host resolution on all K8S cluster nodes:
[root@SPHQOPENK8SMS01 ~]# cat >> /etc/hosts << EOF
10.64.128.160 SPHQOPENK8SMS01
10.64.128.161 SPHQOPENK8SMS02
10.64.128.162 SPHQOPENK8SMS03
10.64.128.166 SPHQOPENK8SND01
10.64.128.167 SPHQOPENK8SND02
10.64.128.168 SPHQOPENK8SND03
10.64.128.172 SPHQOPENK8SHA01
10.64.128.173 SPHQOPENK8SHA02
EOF
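
If passwordless SSH from MS01 is in place (for example via ssh-copy-id), a small loop such as this sketch pushes the same file to the remaining nodes; the address list simply mirrors the planning table above:
[root@SPHQOPENK8SMS01 ~]# for ip in 10.64.128.{161,162,166,167,168,172,173}; do
    scp /etc/hosts root@${ip}:/etc/hosts    # assumes passwordless SSH; overwrites the target file
done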
  • (3) Disable swap
[root@SPHQOPENK8SMS01 ~]# swapoff -a
swapoff -a only lasts until the next reboot; for the change to persist, also comment out the swap entry in /etc/fstab:
[root@SPHQOPENK8SMS01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
  • (4) Disable SELinux
[root@SPHQOPENK8SMS01 ~]# getenforce
[root@SPHQOPENK8SMS01 ~]# setenforce 0
[root@SPHQOPENK8SMS01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
  • (5) Stop firewalld, flush iptables, and disable NetworkManager
[root@SPHQOPENK8SMS01 ~]# systemctl stop firewalld
[root@SPHQOPENK8SMS01 ~]# systemctl disable firewalld
[root@SPHQOPENK8SMS01 ~]# iptables -F
[root@SPHQOPENK8SMS01 ~]# systemctl stop NetworkManager
[root@SPHQOPENK8SMS01 ~]# systemctl disable NetworkManager
  • (6) Set the correct time zone and enable time synchronization
[root@SPHQOPENK8SMS01 ~]# timedatectl set-timezone Asia/Shanghai
[root@SPHQOPENK8SMS01 ~]# systemctl start chronyd.service
[root@SPHQOPENK8SMS01 ~]# systemctl status chronyd.service
  • (7) Kernel parameter tuning
[root@SPHQOPENK8SMS01 ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@SPHQOPENK8SMS01 ~]# modprobe  br_netfilter
[root@SPHQOPENK8SMS01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
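
Note that modprobe does not persist across reboots. A minimal sketch to load the module at boot via the standard systemd modules-load.d mechanism, plus a verification of the sysctl values:
[root@SPHQOPENK8SMS01 ~]# cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
[root@SPHQOPENK8SMS01 ~]# lsmod | grep br_netfilter
[root@SPHQOPENK8SMS01 ~]# sysctl net.bridge.bridge-nf-call-iptables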

2.2 Deploying keepalived

[root@SPHQOPENK8SHA01 ~]# yum install keepalived -y
[root@SPHQOPENK8SHA02 ~]# yum install keepalived -y

2.2.1 Edit keepalived.conf

[root@SPHQOPENK8SHA01 ~]# cd /etc/keepalived
[root@SPHQOPENK8SHA01 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     123456@qq.com
   }
   notification_email_from xxxx@xxxxx.com.cn
   smtp_server smtp.qq.com
   smtp_connect_timeout 30
   router_id SPHQOPENK8SHA01 # router_id; set to SPHQOPENK8SHA02 on HA02
}

vrrp_instance VI_1 {
    state MASTER		# BACKUP on HA02
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 66	# virtual router ID; must be identical on HA01 and HA02
    priority 100		# 80 on HA02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456		# authentication password; must match on HA01 and HA02
    }
    virtual_ipaddress {
        10.64.128.169/24 dev eth0  label eth0:1		# the VIP; identical on HA01 and HA02
    }
}

[root@SPHQOPENK8SHA02 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     123456@qq.com
   }
   notification_email_from it-notification@qq.com.cn
   smtp_server smtp.12345.com.cn
   smtp_connect_timeout 30
   router_id SPHQOPENK8SHA02
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 66
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.64.128.169/24 dev eth0  label eth0:1
    }
}

2.2.2 Start keepalived

[root@SPHQOPENK8SHA01 keepalived]# systemctl start keepalived
[root@SPHQOPENK8SHA01 keepalived]# systemctl enable keepalived
[root@SPHQOPENK8SHA01 keepalived]# systemctl status keepalived

[root@SPHQOPENK8SHA02 keepalived]# systemctl start keepalived
[root@SPHQOPENK8SHA02 keepalived]# systemctl enable keepalived
[root@SPHQOPENK8SHA02 keepalived]# systemctl status keepalived

Verify that the VIP is present on HA01:
[root@SPHQOPENK8SHA01 keepalived]# hostname -I
10.64.128.172 10.64.128.169
[root@SPHQOPENK8SHA01 keepalived]# ifconfig
eth0      Link encap:Ethernet  HWaddr FE:FC:FE:BD:3C:98  
          inet addr:10.64.128.172  Bcast:10.64.128.255  Mask:255.255.255.0
          inet6 addr: fe80::fcfc:feff:febd:3c98/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:437789 errors:0 dropped:34097 overruns:0 frame:0
          TX packets:145704 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:336515646 (320.9 MiB)  TX bytes:12426129 (11.8 MiB)

eth0:1    Link encap:Ethernet  HWaddr FE:FC:FE:BD:3C:98  
          inet addr:10.64.128.169  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:192 errors:0 dropped:0 overruns:0 frame:0
          TX packets:192 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:16704 (16.3 KiB)  TX bytes:16704 (16.3 KiB)
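
Optionally, run a failover smoke test from HA01. Since HA01 is MASTER with the higher priority, it should preempt the VIP back once keepalived restarts (this sketch assumes root SSH between the HA nodes):
[root@SPHQOPENK8SHA01 keepalived]# systemctl stop keepalived
[root@SPHQOPENK8SHA01 keepalived]# ssh root@10.64.128.173 'hostname -I'    # the VIP should now appear on HA02
[root@SPHQOPENK8SHA01 keepalived]# systemctl start keepalived
[root@SPHQOPENK8SHA01 keepalived]# hostname -I                             # the VIP should return to HA01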

2.3 Deploying HAProxy

2.3.1 Adjust kernel parameters

net.ipv4.ip_nonlocal_bind = 1 allows haproxy on the standby node to bind to the VIP even while the address is held by the other node:

[root@SPHQOPENK8SHA01 ~]# cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
[root@SPHQOPENK8SHA02 ~]# cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_nonlocal_bind = 1
EOF
[root@SPHQOPENK8SHA01 ~]# sysctl -p
[root@SPHQOPENK8SHA02 ~]# sysctl -p

2.3.2 Install haproxy

[root@SPHQOPENK8SHA01 ~]# yum install haproxy -y
[root@SPHQOPENK8SHA02 ~]# yum install haproxy -y

2.3.3 Configure haproxy identically on both HA nodes

[root@SPHQOPENK8SHA01 ~]# cd /etc/haproxy
[root@SPHQOPENK8SHA01 haproxy]# cat haproxy.cfg
global
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

listen stats
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /status
    stats auth admin:123456
    
listen  kubernetes-api-6443
    bind 10.64.128.169:6443	# bind to the VIP
    mode tcp 
    server SPHQOPENK8SMS01 10.64.128.160:6443 check inter 3s fall 3 rise 3 
    server SPHQOPENK8SMS02 10.64.128.161:6443 check inter 3s fall 3 rise 3 
    server SPHQOPENK8SMS03 10.64.128.162:6443 check inter 3s fall 3 rise 3
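
Then start and enable haproxy on both nodes:
[root@SPHQOPENK8SHA01 haproxy]# systemctl start haproxy && systemctl enable haproxy
[root@SPHQOPENK8SHA02 haproxy]# systemctl start haproxy && systemctl enable haproxy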

Browse to http://10.64.128.169:9999/status to monitor the availability of the backend apiservers (username admin, password 123456).
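A quick command-line check of the same page (credentials from the stats auth line above):
[root@SPHQOPENK8SHA01 haproxy]# curl -su admin:123456 http://10.64.128.169:9999/status | head -n 5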

2.4 Install and configure Docker on all master and node machines

Add the docker-ce yum repository:
[root@SPHQOPENK8SMS01 ~]# cat >  /etc/yum.repos.d/docker.repo  <<EOF
[docker]
name=docker
gpgcheck=0
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/
EOF

List the available docker-ce versions, then install and configure the daemon:
[root@SPHQOPENK8SMS01 ~]# yum list docker-ce --showduplicates
[root@SPHQOPENK8SMS01 ~]# yum install docker-ce -y
[root@SPHQOPENK8SMS01 ~]# systemctl start docker
[root@SPHQOPENK8SMS01 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.cn-hangzhou.aliyuncs.com"],
  "insecure-registries":["10.64.128.149:80"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@SPHQOPENK8SMS01 ~]# systemctl restart docker
[root@SPHQOPENK8SMS01 ~]# systemctl enable docker
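
kubeadm 1.22 defaults the kubelet to the systemd cgroup driver, which is why daemon.json sets native.cgroupdriver=systemd; confirm Docker reports the same driver after the restart:
[root@SPHQOPENK8SMS01 ~]# docker info 2>/dev/null | grep -i 'cgroup driver'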

2.5 Configure the Kubernetes yum repository on all nodes

[root@SPHQOPENK8SMS01 ~]# cat << EOF >  /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
[root@SPHQOPENK8SMS01 ~]# yum clean all
[root@SPHQOPENK8SMS01 ~]# yum makecache

2.6 Install kubeadm, kubelet, and kubectl on the master nodes

Install the pinned 1.22.8 packages, then start kubelet and enable it at boot (it will crash-loop until kubeadm init supplies its configuration; this is expected):
[root@SPHQOPENK8SMS01 ~]# yum list kubeadm --showduplicates
[root@SPHQOPENK8SMS01 ~]# yum install kubeadm-1.22.8-0 kubelet-1.22.8-0 kubectl-1.22.8-0 -y
[root@SPHQOPENK8SMS01 ~]# systemctl start kubelet
[root@SPHQOPENK8SMS01 ~]# systemctl enable kubelet

2.7 Install kubeadm and kubelet on the worker nodes

Start kubelet and enable it at boot (it will likewise crash-loop until kubeadm join runs):
[root@SPHQOPENK8SND01 ~]# yum install kubeadm-1.22.8-0 kubelet-1.22.8-0 -y
[root@SPHQOPENK8SND01 ~]# systemctl start kubelet
[root@SPHQOPENK8SND01 ~]# systemctl enable kubelet

2.8 Initialize the Kubernetes Cluster

List the images required for this Kubernetes version:
[root@SPHQOPENK8SMS01 ~]# kubeadm config images list --kubernetes-version=v1.22.8

Pre-pull the images on the three master and three worker nodes; the workers only need the pause and kube-proxy images (a trimmed script for them follows the full one below).
[root@SPHQOPENK8SMS01 ~]# vim k8s_image_v1.22.8.sh
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4

[root@SPHQOPENK8SMS01 ~]# chmod +x k8s_image_v1.22.8.sh
[root@SPHQOPENK8SMS01 ~]# ./k8s_image_v1.22.8.sh
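
On the worker nodes, a trimmed version of the same script (name chosen here for illustration) is enough:
[root@SPHQOPENK8SND01 ~]# cat > k8s_node_image_v1.22.8.sh << 'EOF'
#!/bin/bash
# Worker nodes only run the kube-proxy and pause images.
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.8
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
EOF
[root@SPHQOPENK8SND01 ~]# chmod +x k8s_node_image_v1.22.8.sh && ./k8s_node_image_v1.22.8.sh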

[root@SPHQOPENK8SMS01 ~]# kubeadm init --control-plane-endpoint 10.64.128.169:6443 \
	--kubernetes-version=v1.22.8 --pod-network-cidr 172.16.0.0/16 \
	--service-cidr 10.96.0.0/12 --service-dns-domain cluster.local \
	--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --token-ttl=0
[init] Using Kubernetes version: v1.22.8
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local sphqopenk8sms01] and IPs [10.96.0.1 10.64.128.160 10.64.128.169]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost sphqopenk8sms01] and IPs [10.64.128.160 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost sphqopenk8sms01] and IPs [10.64.128.160 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.541994 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node sphqopenk8sms01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node sphqopenk8sms01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: p374bq.d2mtexzw4ra7yih6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 \
	--discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb \
	--control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 \
	--discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb

[root@SPHQOPENK8SMS01 ~]# mkdir -p $HOME/.kube
[root@SPHQOPENK8SMS01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@SPHQOPENK8SMS01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@SPHQOPENK8SMS01 ~]# kubectl get node
NAME              STATUS     ROLES                  AGE   VERSION
sphqopenk8sms01   NotReady   control-plane,master   60s   v1.22.8

2.9 Install the Calico network plugin

The calico.yaml manifest can be found on GitHub (the projectcalico project); a sketch of fetching and checking it follows.
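
One detail worth checking before applying it: the pool CIDR in the manifest should match the --pod-network-cidr passed to kubeadm init (172.16.0.0/16 here). A minimal sketch, assuming the standard manifest URL for the Calico release in use:
[root@SPHQOPENK8SMS01 ~]# curl -LO https://docs.projectcalico.org/manifests/calico.yaml
[root@SPHQOPENK8SMS01 ~]# grep -A1 CALICO_IPV4POOL_CIDR calico.yaml
Uncomment CALICO_IPV4POOL_CIDR in the calico-node env section and set its value to "172.16.0.0/16" before applying.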

[root@SPHQOPENK8SMS01 ~]# kubectl apply -f calico.yaml
[root@SPHQOPENK8SMS01 k8s-cluster]# kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
sphqopenk8sms01   Ready    control-plane,master   3h    v1.22.8

2.10 Join the worker nodes to the cluster

[root@SPHQOPENK8SND01 ~]# kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 --discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb

[root@SPHQOPENK8SND02 ~]# kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 --discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb

[root@SPHQOPENK8SND03 ~]# kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 --discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb
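
Because kubeadm init was run with --token-ttl=0, this token does not expire; on clusters that keep the default 24-hour TTL, a fresh join command can be generated on any master:
[root@SPHQOPENK8SMS01 ~]# kubeadm token create --print-join-command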

[root@SPHQOPENK8SMS01 k8s-cluster]# kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
sphqopenk8sms01   Ready    control-plane,master   30h   v1.22.8
sphqopenk8snd01   Ready    <none>                 29h   v1.22.8
sphqopenk8snd02   Ready    <none>                 29h   v1.22.8
sphqopenk8snd03   Ready    <none>                 29h   v1.22.8

2.11 Join Master02 and Master03 to the cluster

The initial kubeadm init did not upload the control-plane certificates (note the "[upload-certs] Skipping phase" line in its output), so upload them first and record the resulting certificate key:

[root@SPHQOPENK8SMS01 k8s-cluster]# kubeadm init phase upload-certs --upload-certs
I0412 11:00:49.288531  124476 version.go:255] remote version is much newer: v1.23.5; falling back to: stable-1.22
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
583f10e2fec13353bc230f092dfda6069be09bf14941e3bfb2c92a2cf7f9b149

[root@SPHQOPENK8SMS02 ~]# kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 \
	--discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb \
	--control-plane --certificate-key 583f10e2fec13353bc230f092dfda6069be09bf14941e3bfb2c92a2cf7f9b149

[root@SPHQOPENK8SMS03 ~]# kubeadm join 10.64.128.169:6443 --token p374bq.d2mtexzw4ra7yih6 \
	--discovery-token-ca-cert-hash sha256:5336e39d7d7a6cf09522183c4fcfabff2e967a5da23a8ebb8b1cc7e11956e4eb \
	--control-plane --certificate-key 583f10e2fec13353bc230f092dfda6069be09bf14941e3bfb2c92a2cf7f9b149

[root@SPHQOPENK8SMS01 ~]# kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
sphqopenk8sms01   Ready    control-plane,master   30h   v1.22.8
sphqopenk8sms02   Ready    control-plane,master   29h   v1.22.8
sphqopenk8sms03   Ready    control-plane,master   29h   v1.22.8
sphqopenk8snd01   Ready    <none>                 29h   v1.22.8
sphqopenk8snd02   Ready    <none>                 29h   v1.22.8
sphqopenk8snd03   Ready    <none>                 29h   v1.22.8
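
As a final check, confirm that the control-plane components (apiserver, controller-manager, scheduler, and etcd on each master) plus the calico and coredns pods are all Running:
[root@SPHQOPENK8SMS01 ~]# kubectl get pods -n kube-system -o wide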