Kubernetes Installation and Testing

Installing Single-Node Kubernetes

OS: CentOS 7.6
Docker: 1.13.1

Install etcd and Kubernetes via yum (Docker is pulled in automatically as a dependency)

yum install etcd kubernetes -y

Modify the configuration files

  1. In the Docker configuration file /etc/sysconfig/docker, set OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'
  2. In the Kubernetes apiserver configuration file /etc/kubernetes/apiserver, remove ServiceAccount from the --admission-control parameter

Start the services

systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
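
Once all services are running, the single-node setup can be sanity-checked; kubectl talks to http://localhost:8080 by default when no kubeconfig is configured:

kubectl get nodes
# the local host should appear with status Ready
kubectl get componentstatuses
# scheduler, controller-manager, and etcd should all report Healthy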

Installing a Kubernetes Cluster

OS: CentOS 7
Docker: 1.13.1
etcd: 3.3.11
Kubernetes: v1.5.2
apiserver: v1beta1

Test Environment

2 hosts for the master cluster, etcd cluster, and registry cluster
2 hosts as nodes
192.168.181.146 master1
192.168.181.150 master2
192.168.181.149 node1
192.168.181.147 node2

Pre-installation Preparation

  1. Edit the /etc/hosts file
    192.168.181.146 master1 etcd1 keep1 k8s-n1
    192.168.181.149 node1 k8s-n2
    192.168.181.147 node2 k8s-n3
    192.168.181.150 master2 etcd2 keep2 k8s-n4
  2. Synchronize time across all nodes (example commands after this list)
  3. Stop the firewall service and make sure it will not start on boot
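
A sketch of steps 2 and 3, assuming chrony for time synchronization and firewalld as the firewall service (the CentOS 7 defaults):

yum install -y chrony
systemctl start chronyd
systemctl enable chronyd      # keep node clocks in sync across reboots
systemctl stop firewalld
systemctl disable firewalld   # make sure it does not start on boot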

Installing the etcd Cluster

  1. Install etcd on master1 and master2 (via yum)
    yum install etcd -y
  2. Modify the configuration file
    vi /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.181.146:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.181.146:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.181.146:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.181.146:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.181.146:2380,etcd2=http://192.168.181.150:2380"
ETCD_INITIAL_CLUSTER_STATE="new"

On the etcd2 host, the same file is used; change etcd1 to etcd2 and substitute master2's IP:

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.181.150:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.181.150:2379,http://127.0.0.1:2379"
ETCD_NAME="etcd2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.181.150:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.181.150:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.181.146:2380,etcd2=http://192.168.181.150:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
  3. Start the service
    systemctl start etcd

  4. Test the service
    etcdctl member list
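
If both members are listed as started, the cluster is formed. The v2 etcdctl shipped with this package also offers a quick health check:

etcdctl cluster-health
# both members should report healthy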

Installing a Private Registry with Harbor and Cephfs

Reference: https://cloud.tencent.com/developer/article/1433266

  1. Harbor & Cephfs overview
    Harbor is an enterprise-grade Docker Registry management project open-sourced by VMware. It provides role-based access control (RBAC), LDAP integration, audit logging, a management UI, user self-registration, image replication, and Chinese language support, which covers our needs for a private image registry well. Cephfs is the file-storage service of the Ceph distributed storage system: it is highly reliable, easy to manage, and scales easily to PB- and even EB-scale data. Using Cephfs as Harbor's underlying distributed storage improves the availability of the Harbor cluster.

Installing a Simple Private Registry

  1. yum install -y docker-distribution
  2. systemctl start docker-distribution
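
docker-distribution serves a registry on port 5000 by default. A quick smoke test, assuming the --insecure-registry master1:5000 Docker option that is configured later in this post:

docker pull busybox
docker tag busybox master1:5000/busybox
docker push master1:5000/busybox
curl http://master1:5000/v2/_catalog
# should list busybox in the repositories array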

Installing the K8s Cluster

  1. Install the master components on master1 and master2
    yum install -y kubernetes-master
  2. Modify the configuration files
    vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.181.146:2379,http://192.168.181.150:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

vi /etc/kubernetes/config

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.181.146:8080"
  3. Start the services
    systemctl start kube-apiserver.service
    systemctl start kube-controller-manager.service
    systemctl start kube-scheduler.service
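
Note that on master2, KUBE_MASTER in /etc/kubernetes/config should point at its own apiserver (http://master2:8080) so its controller-manager and scheduler do not depend on master1. Once the services are up, the apiserver answers on the insecure port configured above:

curl http://master1:8080/version
# should return a JSON object reporting v1.5.2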

Deploying a Highly Available LB with nginx and keepalived

  1. Install nginx and keepalived
    yum -y install keepalived nginx
  2. Add the load-balancing configuration
    vi /etc/nginx/conf.d/kube.conf
upstream kube-master {
  ip_hash;
  server master1:8080 weight=3; 
  server master2:8080 weight=2;
}
server {
  listen 8001;
  server_name _;
  location / {
    proxy_pass http://kube-master;
  }
}
  3. Modify the keepalived configuration

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id node1
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance ka {
    state MASTER
    interface ens37
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.181.151
    }
}

virtual_server 192.168.181.151 8001 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.181.146 8001 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.181.150 8001 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
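
master2 uses the same keepalived.conf except that its instance must yield the VIP to master1; a minimal sketch of the lines that differ (priority 90 is only an example, anything below master1's 100 works):

    state BACKUP
    priority 90

After starting both daemons on both masters (systemctl start nginx keepalived), the VIP should proxy to the apiservers:

curl http://192.168.181.151:8001/version

If the VIP never comes up, removing vrrp_strict from global_defs is a common fix, since strict RFC mode conflicts with the PASS authentication used here.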

Deploying the Nodes

  1. Install Docker on the node servers
    yum -y install docker
    systemctl start docker.service
  2. Install kubelet and the Kubernetes network proxy (kube-proxy)
    yum install -y kubernetes-node
  3. Modify the kubelet configuration file /etc/kubernetes/kubelet (on node2, set --hostname-override=node2)
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_API_SERVER="--api-servers=http://master1:8080,http://master2:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
  4. Start the services and test
    systemctl start kubelet kube-proxy

Test on the master

kubectl get nodes
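
kube-proxy reads its master address from KUBE_MASTER in /etc/kubernetes/config on each node. Pointing it at the keepalived VIP rather than a single master keeps the nodes working through a master failover (a sketch, assuming the LB from the previous section):

KUBE_MASTER="--master=http://192.168.181.151:8001"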

  5. Switch Docker's image source to the local registry
    vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
ADD_REGISTRY='--add-registry master1:5000'
INSECURE_REGISTRY='--insecure-registry master1:5000'
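
Docker must be restarted for the new registry options to take effect; after that, unqualified image names resolve against master1:5000 first:

systemctl restart docker.service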

Testing

Create pods

kubectl run -i -t bbox --image=busybox

# Check the deployment
kubectl get deployments
# Check the pods
kubectl get pods
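
A few more standard kubectl checks to confirm scheduling, plus cleanup of the test deployment when finished:

kubectl get pods -o wide
# shows which node each pod landed on and its pod IP
kubectl delete deployment bbox
# removes the test deployment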

Troubleshooting

  1. registry.access.redhat.com/rhel7/pod-infrastructure:latest cannot be pulled, or authentication fails
    Cause: the file /etc/rhsm/ca/redhat-uep.pem is missing
    Solution: run docker search pod-infrastructure, pull a copy from docker.io, and push it into the local registry (a command sketch follows this list)
  2. A newly created pod ends up in CrashLoopBackOff status
    Cause: the container has no long-running foreground process, much like plain docker run busybox, so it exits as soon as it starts
    Solution: give the container a long-running start command; for an image built from a Dockerfile, add something like CMD ["httpd","-f"] or start an HTTP service
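
A sketch of the workaround for the first issue, assuming the local registry at master1:5000; <user> is a placeholder for whichever mirror the search returns:

docker search pod-infrastructure
docker pull docker.io/<user>/pod-infrastructure
docker tag docker.io/<user>/pod-infrastructure master1:5000/pod-infrastructure:latest
docker push master1:5000/pod-infrastructure:latest

Then point KUBELET_POD_INFRA_CONTAINER at master1:5000/pod-infrastructure:latest and restart kubelet.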

Deploying flannel

  1. Install flannel
    yum install -y flannel
  2. Configure flannel on the masters and the nodes
    vi /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://master1:2379,http://master2:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

  3. Configure the address pool flanneld will use, in etcd
    etcdctl mk /atomic.io/network/config '{"Network":"10.99.0.0/16"}'

  4. Start the flanneld service on the masters
    systemctl start flanneld.service

  5. Start the flanneld service on the nodes, then restart Docker to finish the configuration
    systemctl start flanneld.service
    systemctl restart docker.service

  6. Test (see the commands below)
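
A minimal sketch of the checks, assuming the 10.99.0.0/16 pool configured above (with flannel's default udp backend the interface is flannel0):

ip addr show flannel0
# should carry an address from 10.99.0.0/16
ip addr show docker0
# should carry the node's leased subnet, e.g. a 10.99.x.1/24 address
etcdctl ls /atomic.io/network/subnets
# lists the subnet leased to each node

Pods on different nodes should now be able to ping each other's pod IPs.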

Installing the Kubernetes Dashboard

  1. Download the yaml file
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.5.1/src/deploy/kubernetes-dashboard.yaml
  2. Modify the file contents

image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
# Note: use an IP address here, never a domain name
- --apiserver-host=http://192.168.181.146:8080
  3. Create it
    kubectl create -f kubernetes-dashboard.yaml
  4. Access it at
    http://192.168.181.146:8080/ui
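
To confirm the dashboard pod came up (the v1.5.1 manifest deploys into the kube-system namespace):

kubectl get pods --namespace=kube-system
# the kubernetes-dashboard pod should reach Running status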

Deploying K8s Monitoring

  1. Heapster and Grafana