k8s-calico
0. Calico installation
Reference: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/
If flannel was installed previously, run kubeadm reset and then reboot the server; otherwise the flannel.1 interface will be left behind and cause problems.
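If a reboot is not convenient, the leftover interfaces can also be removed by hand; a minimal sketch, assuming the usual flannel interface names:
ip link delete flannel.1
ip link delete cni0   # only if the flannel-created bridge is also left behind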
0.1 Kubernetes cluster init
kubeadm init --pod-network-cidr=192.168.0.0/16
Configure kubeconfig so kubectl can talk to the cluster; the standard commands from the kubeadm init output are reproduced below.
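These are the usual commands printed at the end of kubeadm init, run as the user that will call kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config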
0.2 Deploy Calico
wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
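In the downloaded manifest the pod CIDR is set by the CALICO_IPV4POOL_CIDR environment variable on the calico-node container; in the v3.9 manifest the relevant fragment looks roughly like this (indentation shortened):
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"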
If you are not using the 192.168.0.0/16 subnet, change CALICO_IPV4POOL_CIDR in calico.yaml to match your pod CIDR, then apply:
# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
This creates a pile of resources: the CRDs, the calico-node DaemonSet, and the calico-kube-controllers Deployment.
Once all of them are up and Running, the installation is done.
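A typical check is to list the pods in kube-system, which is where the v3.9 manifest installs Calico:
# kubectl get pods -n kube-system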
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-dc6cb64cb-287n2   1/1     Running   1          120m
calico-node-dzn76                         1/1     Running   1          120m
calico-node-g54xv                         1/1     Running   1          120m
calico-node-gldpm                         1/1     Running   1          116m
coredns-5644d7b6d9-qnfhp                  1/1     Running   1          121m
coredns-5644d7b6d9-rv762                  1/1     Running   1          121m
etcd-node1.k8s                            1/1     Running   1          121m
kube-apiserver-node1.k8s                  1/1     Running   1          121m
kube-controller-manager-node1.k8s         1/1     Running   1          120m
kube-proxy-44dhz                          1/1     Running   1          121m
kube-proxy-5sqmq                          1/1     Running   1          116m
kube-proxy-95kx6                          1/1     Running   1          121m
kube-scheduler-node1.k8s                  1/1     Running   1          120m
1. Network topology
https://docs.projectcalico.org/v3.9/reference/architecture/
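To connect the architecture doc to a running cluster, the routes programmed by Calico's BGP daemon (BIRD) and the IP-in-IP tunnel device can be inspected on any node; a quick sketch (tunl0 only exists when IPIP encapsulation is enabled, which is the default in the manifest):
ip route show proto bird
ip addr show dev tunl0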
2. calicoctl
Download calicoctl and chmod +x it (a download sketch is given below), then set the environment variables:
export DATASTORE_TYPE=kubernetes
export KUBECONFIG=/etc/kubernetes/admin.conf
After that, calicoctl works normally.
The environment variables are documented at https://docs.projectcalico.org/v3.9/getting-started/calicoctl/configure/etcd
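A sketch of the download step mentioned above, assuming the Linux amd64 binary from the calicoctl GitHub releases page; the v3.9.1 version in the URL is an assumption, so match it to the installed Calico release:
curl -L -o /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.9.1/calicoctl   # version is an assumption
chmod +x /usr/local/bin/calicoctl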
# calicoctl get node
NAME
ecs-docker-test-0001
node1.k8s
node2.k8s
node3.k8s
# calicoctl get ippool
NAME                  CIDR             SELECTOR
default-ipv4-ippool   192.168.0.0/16   all()
# calicoctl get workloadendpoints
WORKLOAD                     NODE                   NETWORKS             INTERFACE
iperftest-6cff884cbd-cfkh9   node1.k8s              192.168.112.1/32     cali54d3368dbfc
iperftest-6cff884cbd-lmbm7   node2.k8s              192.168.166.137/32   cali4b802c9f732
iperftest-6cff884cbd-pr88l   node3.k8s              192.168.83.1/32      cali18452e0e0c0
nginx1-6c86cb56b8-5hjt6      node1.k8s              192.168.112.3/32     cali7a3cc606203
nginx1-6c86cb56b8-5p7t9      node2.k8s              192.168.166.141/32   cali1899571599b
nginx1-6c86cb56b8-dldsd      node3.k8s              192.168.83.6/32      cali8303429f07e
nginx1-6c86cb56b8-jlfqv      ecs-docker-test-0001   192.168.161.193/32   cali65cc73a222d
4. network policy
Skipped here, apart from the minimal sketch below.
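A minimal sketch of a Kubernetes NetworkPolicy that Calico would enforce against the workloads listed above; the app labels and the policy name are assumptions about how the nginx1 and iperftest Deployments are labeled:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-iperftest-to-nginx1   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx1          # assumed label on the nginx1 pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: iperftest   # assumed label on the iperftest pods
    ports:
    - protocol: TCP
      port: 80
Apply it with kubectl apply -f and confirm that only the iperftest pods can still reach nginx1 on port 80.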