Kubernetes Installation Notes

I. Installation Environment

  Alibaba Cloud: CentOS 7.3

  Master node: public IP (116.62.205.90), private IP (172.16.223.200)

  Node: public IP (116.62.212.174), private IP (172.16.223.201)

 

II. Master Node Installation Steps

1. Install etcd on the master node

Note: etcd is a distributed, consistent key-value (KV) store for shared configuration and service discovery, similar to ZooKeeper and Consul.

  Run: yum -y install etcd

  Edit /etc/etcd/etcd.conf; the main changes are as follows:

ETCD_LISTEN_PEER_URLS="http://172.16.223.200:2380"
ETCD_LISTEN_CLIENT_URLS="http://127.0.0.1:2379,http://172.16.223.200:2379"

ETCD_ADVERTISE_CLIENT_URLS="http://172.16.223.200:2379"
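
  Once etcd is started (step 4 below), you can optionally confirm that it answers on the advertised client address. A minimal check, assuming the etcdctl shipped with the same etcd package:

etcdctl --endpoints=http://172.16.223.200:2379 cluster-health
etcdctl --endpoints=http://172.16.223.200:2379 member list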

2. Install kubernetes-master on the master node

  Run: yum -y install kubernetes-master

  Edit the configuration file /etc/kubernetes/apiserver:

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://172.16.223.200:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

  Edit the configuration file /etc/kubernetes/config; the main change is as follows:

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://116.62.205.90:8080"

  Edit the configuration file /etc/kubernetes/controller-manager; the main change is as follows:

KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"

3. Add the network configuration entry to etcd on the master node

  Run: etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'

  This address range will be handed out by flanneld; it apparently does not work if it overlaps with the host's LAN subnet.
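
  To confirm the key was written, you can read it back with:

etcdctl get /coreos.com/network/config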

4. Start the relevant services on the master node

  Run: systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
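
  Once these services are running, a quick sanity check is to ask the apiserver for the component statuses; etcd, the scheduler and the controller-manager should all report Healthy (a minimal check, assuming the local insecure port 8080 from the apiserver config):

kubectl -s http://127.0.0.1:8080 get componentstatuses

  To have the services come back after a reboot, you can also run: systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager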

 

III. Node Installation Steps

1. Install kubernetes-node on the node

  Run: yum -y install kubernetes-node

  Edit /etc/kubernetes/config; the main parameter to change is as follows:

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://116.62.205.90:8080"

  Edit the configuration file /etc/kubernetes/kubelet; the main parameters to change are as follows:

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.223.201"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://116.62.205.90:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

2. Install flannel on the node

  Note: Flannel is a network fabric designed by the CoreOS team for Kubernetes. In short, it gives the Docker containers created on different nodes in the cluster virtual IP addresses that are unique across the whole cluster.

  Run: yum -y install flannel

  Edit the configuration file /etc/kubernetes/flanneld:

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://172.16.223.200:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

Note: the coreos.com prefix here must match the key prefix stored in etcd on the master server.

3. Start the services on the node:

  Run:

  systemctl start flanneld

  systemctl start docker

  systemctl start kubelet

  systemctl start kube-proxy
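
  If everything started correctly, flanneld records the subnet it leased from etcd in /run/flannel/subnet.env (the path used by the CentOS flannel package; verify on your system), and the docker0 bridge should receive an address inside that subnet:

cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0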

 

IV. Verifying the Installation and Basic Usage

1. Verify that the installation succeeded:

  Run: kubectl get nodes — this lists the currently available nodes, whose status should be Ready.

  Visiting port 8080 on the master in a browser should return the list of API paths exposed by the apiserver.
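
  For example, from any machine that can reach the master's public IP (assuming the insecure port 8080 is open in the security group):

curl http://116.62.205.90:8080/
curl http://116.62.205.90:8080/version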

2. Use Kubernetes for container orchestration:

  1) First, pull the required images on the node server.
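
  For example, pre-pulling the Tomcat image used in the YAML in step 2 below, along with the pod-infrastructure image referenced in the kubelet config, so the first pod start does not block on downloads:

docker pull daocloud.io/library/tomcat
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest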

  2) On the master server, create a YAML file (tomcat.yaml, used in step 3 below) with the following content:

apiVersion: v1  
kind: Service  
metadata:  
  name: fred-srv-2 
spec:  
  type: NodePort  
  ports:  
    - port: 8080  
      nodePort: 31006 
  selector:  
    app: fred-web-2

---

apiVersion: v1
kind: ReplicationController  
metadata:  
  name: fred-web-2 
spec:  
  replicas: 1  
  template:  
    metadata:  
      labels:  
        app: fred-web-2 
    spec:  
      containers:  
        - name: test-tomcat  
          image: daocloud.io/library/tomcat 
          imagePullPolicy: IfNotPresent  
          ports:  
            - containerPort: 8080

  3) Run: kubectl create -f tomcat.yaml

  4) After it completes, check the results as follows:

    1) Run kubectl get rc; you should see the created rc fred-web-2

    2) Run kubectl get svc; you should see the created svc fred-srv-2

    3) Run kubectl get po; you should see the created pod fred-web-2-XXXX. Since replicas is set to 1, only one pod is created

    4) Visit the node server's public IP on port 31006; you should reach the Tomcat ROOT page served by that pod.
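
  As a simple follow-up experiment (standard kubectl usage, not part of the original steps), you can scale the ReplicationController and watch additional pods appear:

kubectl scale rc fred-web-2 --replicas=3
kubectl get po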

 

V. Other Notes:

  1. You can run kubectl delete -f tomcat.yaml to delete the resources that were created.

  2. Use journalctl to view Kubernetes' own error logs.
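
  For example, to follow the kubelet log on a node, or to review recent apiserver messages on the master (standard journalctl flags):

journalctl -u kubelet -f
journalctl -u kube-apiserver --since "1 hour ago"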

  3. My initial impression is that Kubernetes is a cluster for orchestrating Docker containers: the master node creates Docker containers in batches on the nodes according to the resource file definitions.

Recently I read a book that presents Kubernetes as a microservices framework, comparable to Spring Cloud and the like; I still cannot quite understand this, and I have not yet seen how Kubernetes manages the business interfaces that each microservice exposes.

posted @ 2017-08-25 17:32 Fredric_2013