k8s notes: deploying k8s with kubeadm

Reference: https://blog.csdn.net/networken/article/details/84991940

 

# k8s tooling deployment plan

 

# 1. Cluster planning

 

| **Server** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Quantity** | >1 (modules are allocated according to the servers actually provided) |
| **Specs** | 16 cores / 32 GB memory / 300 GB disk / 50 Mbit/s bandwidth |
| **Operating system** | CentOS Linux 7.2; the master node needs internet access |
| **File system** | the 300 GB disk is mounted at /data |
| **Other requirements** | at least one node has internet access |

 

| Node | Hostname | IP address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | VM_0_1_centos | 192.168.0.1 | CentOS 7.2 |
| node1 | VM_0_2_centos | 192.168.0.2 | CentOS 7.2 |
| node2 | VM_0_3_centos | 192.168.0.3 | CentOS 7.2 |

 

# 2. Basic environment configuration

 

## 2.1 Hostname configuration (optional)

 

**1) Change the hostnames**

 

**Run as root on 192.168.0.1:**

 

hostnamectl set-hostname VM_0_1_centos

 

**Run as root on 192.168.0.2:**

 

hostnamectl set-hostname VM_0_2_centos

 

**Run as root on 192.168.0.3:**

 

hostnamectl set-hostname VM_0_3_centos

 

**2) Add host mappings**

 

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

 

vim /etc/hosts

 

192.168.0.1 VM_0_1_centos

 

192.168.0.2 VM_0_2_centos

 

192.168.0.3 VM_0_3_centos

 

## 2.2 Disable SELinux (optional)

 

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

 

sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config

 

setenforce 0

 

## 2.3 Raise the maximum number of open files

 

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

 

vim /etc/security/limits.conf

 

* soft nofile 65536

 

* hard nofile 65536
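
**These limits take effect for new login sessions; a quick check after logging in again:**

ulimit -n

ulimit -Hn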

 

## 2.4 Disable the firewall (optional)

 

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

 

systemctl disable firewalld.service

 

systemctl stop firewalld.service

 

systemctl status firewalld.service

 

## 2.5 Software environment initialization

 

**1) Initialize the servers**

 

groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-devel

 

**2) Configure sudo**

 

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

 

vim /etc/sudoers.d/app

 

app ALL=(ALL) ALL

 

app ALL=(ALL) NOPASSWD: ALL

 

Defaults !env_reset

 

**3) Configure passwordless SSH login**

 

**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

 

su app

 

ssh-keygen -t rsa

 

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

 

chmod 600 ~/.ssh/authorized_keys

 

**b. Merge the id_rsa.pub files**

 

**Run as the app user on 192.168.0.1**

 

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

 

Enter the app user's password

 

**Run as the app user on 192.168.0.2**

 

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

 

scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh

 

**Run as the app user on 192.168.0.3**

 

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

 

scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh

 

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

 

**c. Test SSH as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

 

ssh app@192.168.0.1

 

ssh app@192.168.0.2

 

ssh app@192.168.0.3
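
**Alternatively, instead of merging authorized_keys by hand, ssh-copy-id (from openssh-clients) can push each node's key to all three hosts; a hedged sketch to run once on every node as the app user:**

for host in 192.168.0.1 192.168.0.2 192.168.0.3; do ssh-copy-id app@$host; done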

 

## 2.6 sysctl parameters

 

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

 

**vim /etc/sysctl.conf**

 

net.bridge.bridge-nf-call-iptables=1

 

net.bridge.bridge-nf-call-ip6tables=1

 

net.ipv4.ip_forward=1

 

net.ipv4.tcp_tw_recycle=0

 

vm.swappiness=0

 

vm.overcommit_memory=1

 

vm.panic_on_oom=0

 

fs.inotify.max_user_watches=89100

 

fs.file-max=52706963

 

fs.nr_open=52706963

 

net.ipv6.conf.all.disable_ipv6=1

 

net.netfilter.nf_conntrack_max=2310720

 

**# Apply the settings**

 

sysctl -p
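
**The two net.bridge.bridge-nf-call-* keys only exist once the br_netfilter kernel module is loaded, so load it now and on every boot (the k8s.conf file name below is an arbitrary choice):**

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
# re-apply the sysctl settings after loading the module
sysctl -p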

 

## 2.7 ntpd configuration

 

**1) Server configuration**

 

**Run as root on 192.168.0.1**

 

yum install -y ntp ntpdate

 

**Edit /etc/ntp.conf**

 

**Comment out all existing server and restrict lines**

 

**Add:**

 

server 0.cn.pool.ntp.org

 

server 0.asia.pool.ntp.org

 

server 3.asia.pool.ntp.org

 

 

 

restrict 0.cn.pool.ntp.org nomodify notrap noquery

 

restrict 0.asia.pool.ntp.org nomodify notrap noquery

 

restrict 3.asia.pool.ntp.org nomodify notrap noquery

 

 

 

server 127.127.1.0 # local clock

 

fudge 127.127.1.0 stratum 10

 

 

 

systemctl enable ntpd

 

systemctl disable chronyd

 

systemctl restart ntpd

 

**Check the NTP servers on the network**

 

ntpq -p

 

**2) Client configuration**

 

**Run as root on 192.168.0.2 and 192.168.0.3**

 

yum install -y ntp ntpdate

 

**Add to /etc/ntp.conf**

 

server 192.168.0.1 prefer

 

 

 

systemctl enable ntpd

 

systemctl disable chronyd

 

systemctl restart ntpd

 

**Sync**

 

ntpdate -u 192.168.0.1

 

Run hwclock --systohc to write the system time to the hardware clock (BIOS).

 


 

# 3. Configure a CentOS repository

 

**Run as root on 192.168.0.1; internet access is required**

 

**1) Install the plugins**

 

yum install -y yum-plugin-downloadonly createrepo rsync

 

**2) Create the directory**

 

mkdir -p /data/mirrors/centos

 

**3) Download or upload packages**

 

yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos
Alternatively, download rpm packages yourself and place them in /data/mirrors/centos.

 

**4) Create the repo metadata**

 

createrepo /data/mirrors/centos

 

**5) Install nginx**

 

yum -y install nginx
cd /etc/nginx/conf.d

 

**vim mirrors.conf**

 

server {
listen 88;
server_name localhost;
root /data/mirrors/;
location / {
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
}
**Start the service**
nginx -t
systemctl enable nginx
systemctl start nginx
# after later changes to the config, reload with: nginx -s reload
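
**A quick check that the mirror is actually being served (adjust the IP for your environment):**

curl -s http://192.168.0.1:88/centos/ | head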

 

**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3)**

 

**sudo vim /etc/yum.repos.d/mirrors.repo**
[webase]
name=webank-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
# verify
sudo yum clean all && sudo yum makecache
sudo yum repoinfo webase

 

**7) Verify (on any machine)**

 

yum -y install <package-name>-<version>

 

**8) Sync from the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
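
**To keep the mirror current, the sync commands can be saved as a script and scheduled with cron, refreshing the repo metadata after each run; a hedged example assuming the script above is saved as /data/mirrors/sync.sh:**

chmod +x /data/mirrors/sync.sh
(crontab -l 2>/dev/null; echo "0 3 * * * /data/mirrors/sync.sh && /usr/bin/createrepo --update /data/mirrors/centos") | crontab -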

 

**9) Sync the Aliyun k8s packages (run as root on 192.168.0.1)**

 

These have to be downloaded manually from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and then copied to /data/mirrors/centos.
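
**One possible way to automate this, assuming the master has internet access: add the Aliyun kubernetes repo temporarily and fetch the RPMs with yumdownloader (from yum-utils). The repo file name and version pins below are illustrative:**

yum install -y yum-utils
cat > /etc/yum.repos.d/kubernetes-aliyun.repo <<'EOF'
[kubernetes-aliyun]
name=Kubernetes (Aliyun mirror)
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yumdownloader --resolve --destdir=/data/mirrors/centos kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 kubernetes-cni cri-tools
createrepo --update /data/mirrors/centos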

 

# 4. Install Docker

 

yum list docker-ce.x86_64 --showduplicates |sort -r

 

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

 

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm

 

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm

 

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm

 

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

 

wget https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/docker-compose

 

rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm

 

rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

 

rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm

 

rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm

 

systemctl enable docker

 

usermod -aG docker app

 

systemctl start docker

 

systemctl stop docker

 

sleep 5

 

mkdir -p /data/docker

 

mv /var/lib/docker/* /data/docker

 

cd /var/lib

 

rm -rf docker

 

ln -s /data/docker /var/lib/docker

 

ls -l docker

 

systemctl start docker

 

usermod -aG docker app

 

systemctl restart docker.service

 

docker ps -a

 

cp docker-compose /usr/local/bin

 

chmod +x /usr/local/bin/docker-compose
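
**As an alternative to moving /var/lib/docker and symlinking it as above, the Docker data directory can be set in /etc/docker/daemon.json ("data-root" is supported since Docker 17.05); a sketch, and the key can sit alongside the registry settings added in section 5:**

systemctl stop docker
cat > /etc/docker/daemon.json <<'EOF'
{
"data-root": "/data/docker"
}
EOF
systemctl start docker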

 

 

 

# 5. Private registry configuration

 

**1) Set up the private registry**

 

**Run as the app user on 192.168.0.1 (internet access required)**

 

docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1

 

docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1

 

docker pull registry

 

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

 

**Run as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

 

sudo vim /etc/docker/daemon.json and add:

 

{

"registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],

"insecure-registries": ["192.168.0.1:5000"]

}

 

sudo systemctl daemon-reload

 

sudo systemctl restart docker

 

**# Workaround if images cannot be pulled**

 

docker login 192.168.0.1:5000  # enter the username and password: wb / 123

 

cat ~/.docker/config.json  # check the stored credentials
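
**Note that docker login only succeeds if the registry itself was started with authentication; a hedged sketch of re-running it with htpasswd basic auth (htpasswd comes from httpd-tools, and wb / 123 match the credentials used above):**

yum install -y httpd-tools
mkdir -p /data/registry/auth
htpasswd -Bbc /data/registry/auth/htpasswd wb 123
# remove the unauthenticated registry container first
docker rm -f registry
docker run -d -v /data/registry:/var/lib/registry -v /data/registry/auth:/auth -p 5000:5000 --restart=always --name registry -e REGISTRY_AUTH=htpasswd -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd registry:latest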

 

**Create the secret**

 

/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123

 

**Check the created dockercfg-192 secret**

 

/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
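
**The secret is only used when a pod or its service account references it; a hedged example of attaching it to the default service account so that pods in that namespace can pull from the private registry:**

/data/projects/common/kubernetes/bin/kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockercfg-192"}]}'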

 

**2) Push images to the private registry**

 

**Run as the app user on 192.168.0.1**

 

**a. Re-tag**

 

docker tag f32a97de94e1 192.168.0.1:5000/registry:latest

 

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

 

**b. Push**

 

docker push 192.168.0.1:5000/registry:latest

 

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

 

**c. Pull**

 

**Run as the app user on 192.168.0.2 and 192.168.0.3**

 

docker pull 192.168.0.1:5000/registry:latest

 

docker tag f32a97de94e1 registry:latest

 

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

 

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

 

docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1

 

# 6. Install the k8s management tools

 

**Install as root on 192.168.0.1, 192.168.0.2 and 192.168.0.3**

 

rpm -ivh cri-tools-1.13.0-0.x86_64.rpm

 

yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes

 

systemctl daemon-reload

 

systemctl enable kubelet

 

swapoff -a
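
**swapoff -a only disables swap until the next reboot; to make it permanent, the swap entry in /etc/fstab can be commented out as well (a sketch):**

sed -ri 's/.*swap.*/#&/' /etc/fstab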

 

# 7. Deploy the k8s components

 

**1) List the required images (run as root on the master node, 192.168.0.1)**

 

#kubeadm config images list

 

k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

 

**2) Download the images (run as root on the master node, 192.168.0.1)**

 

**cat kubeadm.sh**

 

#!/bin/bash

 

set -e

 

KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

 

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

 

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

 

for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done

 

**Run the script**

 

bash kubeadm.sh

 

**3) Initialize (master node)**

 

kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

 

After (re)initializing the cluster, nohup /usr/local/bin/tiller & needs to be run again (see section 8).

 

**If output like the following is returned, initialization succeeded**

 

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b

 

**# Add the nodes (run on all worker nodes)**

 

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all

 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

**# Allow pods to be scheduled on the master node (remove the master taint)**

 

kubectl taint nodes --all node-role.kubernetes.io/master-

 

Copy /etc/kubernetes/admin.conf from the master node to the same path on each node, then run on the nodes:

 

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

 

**# Export the kubeconfig (fixes the "connection refused on port 8080" error)**

 

Add the following to /etc/profile, then run source /etc/profile:

 

export KUBECONFIG=/etc/kubernetes/kubelet.conf

 

export KUBECONFIG=/etc/kubernetes/admin.conf

 

**4) Install the flannel plugin (all nodes)**

 

**# Download the images**

 

**vim flanneld.sh**

 

#!/bin/bash

 

set -e

 

FLANNEL_VERSION=v0.11.0

 

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

 

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

 

for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done

 

**Run the script**

 

bash flanneld.sh

 

**# Create the flannel resources**

 

git clone https://github.com/coreos/flannel.git

 

cd flannel/Documentation

 

kubectl apply -f kube-flannel.yml

 

**# Verify the node installation**

 

kubectl get componentstatus

 

kubectl get node

 

**Push the k8s component images to the private registry**

 

**# Run on the master node**

 

#!/bin/bash

 

wip=192.168.0.1

 

docker tag k8s.gcr.io/kube-proxy:v1.16.1 ${wip}:5000/k8s.gcr.io/kube-proxy:v1.16.1

 

docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 ${wip}:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

 

docker tag k8s.gcr.io/kube-apiserver:v1.16.1 ${wip}:5000/k8s.gcr.io/kube-apiserver:v1.16.1

 

docker tag k8s.gcr.io/kube-scheduler:v1.16.1 ${wip}:5000/k8s.gcr.io/kube-scheduler:v1.16.1

 

docker tag k8s.gcr.io/coredns:1.3.1 ${wip}:5000/k8s.gcr.io/coredns:1.3.1

 

docker tag k8s.gcr.io/etcd:3.3.10 ${wip}:5000/k8s.gcr.io/etcd:3.3.10

 

docker tag k8s.gcr.io/pause:3.1 ${wip}:5000/k8s.gcr.io/pause:3.1

 

docker tag quay.io/coreos/flannel:v0.11.0-s390x ${wip}:5000/quay.io/coreos/flannel:v0.11.0-s390x

 

docker tag quay.io/coreos/flannel:v0.11.0-ppc64le ${wip}:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

 

docker tag quay.io/coreos/flannel:v0.11.0-arm64 ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm64

 

docker tag quay.io/coreos/flannel:v0.11.0-arm ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm

 

docker tag quay.io/coreos/flannel:v0.11.0-amd64 ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64

 

docker push ${wip}:5000/k8s.gcr.io/kube-proxy:v1.16.1

 

docker push ${wip}:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

 

docker push ${wip}:5000/k8s.gcr.io/kube-apiserver:v1.16.1

 

docker push ${wip}:5000/k8s.gcr.io/kube-scheduler:v1.16.1

 

docker push ${wip}:5000/k8s.gcr.io/coredns:1.3.1

 

docker push ${wip}:5000/k8s.gcr.io/etcd:3.3.10

 

docker push ${wip}:5000/k8s.gcr.io/pause:3.1

 

docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-s390x

 

docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

 

docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm64

 

docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm

 

docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64
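
**The tag-and-push sequence above can also be written as a loop; an equivalent sketch (only the amd64 flannel image is shown for brevity):**

for img in kube-proxy:v1.16.1 kube-controller-manager:v1.16.1 kube-apiserver:v1.16.1 kube-scheduler:v1.16.1 coredns:1.3.1 etcd:3.3.10 pause:3.1; do
docker tag k8s.gcr.io/$img ${wip}:5000/k8s.gcr.io/$img
docker push ${wip}:5000/k8s.gcr.io/$img
done
docker tag quay.io/coreos/flannel:v0.11.0-amd64 ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64
docker push ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64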

 

**Run on the worker nodes**

 

#!/bin/bash

 

wip=192.168.0.1

 

docker pull ${wip}:5000/k8s.gcr.io/kube-proxy:v1.16.1

 

docker pull ${wip}:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

 

docker pull ${wip}:5000/k8s.gcr.io/kube-apiserver:v1.16.1

 

docker pull ${wip}:5000/k8s.gcr.io/kube-scheduler:v1.16.1

 

docker pull ${wip}:5000/k8s.gcr.io/coredns:1.3.1

 

docker pull ${wip}:5000/k8s.gcr.io/etcd:3.3.10

 

docker pull ${wip}:5000/k8s.gcr.io/pause:3.1

 

docker pull ${wip}:5000/quay.io/coreos/flannel:v0.11.0-s390x

 

docker pull ${wip}:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

 

docker pull ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm64

 

docker pull ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm

 

docker pull ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64

 

docker tag ${wip}:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1

 

docker tag ${wip}:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1

 

docker tag ${wip}:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1

 

docker tag ${wip}:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1

 

docker tag ${wip}:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

 

docker tag ${wip}:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10

 

docker tag ${wip}:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1

 

docker tag ${wip}:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x

 

docker tag ${wip}:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le

 

docker tag ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64

 

docker tag ${wip}:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm

 

docker tag ${wip}:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

 

# 8. Install helm

 

**Install on all nodes**

 

wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

 

tar xvf helm-v2.14.3-linux-amd64.tar.gz

 

sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

 

sudo yum install -y socat

 

sudo yum install -y *rhsm*

 

sudo yum -y install bridge*

 

sudo nohup /usr/local/bin/tiller &

 

sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile

 

source /etc/profile

 

helm version
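
**If helm version complains that the local helm configuration is missing, initializing only the client side (tiller already runs locally via nohup above) should be enough; a hedged sketch:**

helm init --client-only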

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 
