Building a Basic Kubernetes Cluster

1. First, prepare three machines running CentOS 7.

My machines are:

10.0.0.11   k8s-master

10.0.0.12   k8s-node-1

10.0.0.13   k8s-node-2

2. On all three machines, stop the firewall and put SELinux into permissive mode:

systemctl stop firewalld

systemctl disable firewalld.service

setenforce 0
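
Note: setenforce 0 only lasts until the next reboot. To keep SELinux permissive across reboots as well, you can also update /etc/selinux/config (a small extra step beyond the original recipe):

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config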

3. Edit /etc/hosts on all three machines:

[root@k8s-master ~]# vim /etc/hosts
 
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 master
10.0.0.11 etcd
10.0.0.11 registry
10.0.0.12 node-1
10.0.0.13 node-2
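
A quick sanity check that the new names resolve (run on any of the three machines):

ping -c 1 etcd
ping -c 1 node-1
ping -c 1 node-2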

Etcd is a highly available key-value store, used mainly for shared configuration and service discovery. It relies on the Raft consensus algorithm for log replication to guarantee strong consistency, so you can think of it as a highly available, strongly consistent store for service discovery.

In a Kubernetes cluster, etcd is used mainly for configuration sharing and service discovery.

Etcd primarily addresses data consistency in distributed systems. Data in such systems falls into control data and application data; etcd is intended for control data, though it can also handle small amounts of application data.

4. Install etcd on the master node:

[root@localhost ~]# yum install etcd -y

4.1 Edit the etcd configuration file (typically /etc/etcd/etcd.conf on CentOS 7):

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="master"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
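
Only ETCD_DATA_DIR, ETCD_LISTEN_CLIENT_URLS, ETCD_NAME, and ETCD_ADVERTISE_CLIENT_URLS differ from the package defaults. Once the service is started in the next step, you can confirm that etcd is listening on both client ports:

ss -tlnp | grep -E '2379|4001'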

4.2 Start the etcd service and test it:

[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set testdir/testkey0 0
0
[root@localhost ~]# etcdctl get testdir/testkey0
0
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy
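
The test key only proves that reads and writes work; it can be cleaned up afterwards with the v2 etcdctl commands:

etcdctl rm testdir/testkey0
etcdctl rmdir testdir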

5. Install Docker on the master and node machines, then enable and start the Docker service:

yum -y install docker
systemctl enable docker
systemctl restart docker
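
To verify that Docker came up correctly on each machine:

systemctl is-active docker   # should print "active"
docker version               # client and server versions should both be shown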

6. Install Kubernetes on the master and node machines:

yum -y install kubernetes
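
On CentOS 7 this package comes from the extras repository and typically pulls in the master components (kube-apiserver, kube-controller-manager, kube-scheduler) as well as the node components (kubelet, kube-proxy) and kubectl. You can list what was actually installed:

rpm -qa | grep -i kube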

6.1 Edit the apiserver configuration file on the master node:

[root@k8s-master ~]# vim /etc/kubernetes/apiserver
 
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
 
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
 
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
 
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
 
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
 
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
 
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
 
# Add your own!
KUBE_API_ARGS=""

6.2 Edit the shared Kubernetes config file on the master node:

[root@k8s-master ~]# vim /etc/kubernetes/config
 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
 
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
 
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
 
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.11:8080"

6.3 Enable and start the services on the master node:

[root@localhost ~]# systemctl enable kube-apiserver.service
[root@localhost ~]# systemctl restart kube-apiserver.service
[root@localhost ~]# systemctl enable kube-controller-manager.service
[root@localhost ~]# systemctl restart kube-controller-manager.service
[root@localhost ~]# systemctl enable kube-scheduler.service
[root@localhost ~]# systemctl restart kube-scheduler.service
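
A quick health check of the control plane (scheduler, controller-manager, and etcd should all report Healthy):

kubectl -s http://10.0.0.11:8080 get componentstatuses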

6.4 Edit the configuration files and start the services on the node machines. Perform this step on every node.

[root@localhost ~]# vim /etc/kubernetes/config
 
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
 
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
 
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
 
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://10.0.0.11:8080"

[root@localhost ~]# vim /etc/kubernetes/kubelet
 
###
# kubernetes kubelet (minion) config
 
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
 
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
 
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=10.0.0.13"  # change this to the IP of the node being configured
 
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
 
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
 
# Add your own!
KUBELET_ARGS=""
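
Since --hostname-override must match the node being configured, one way to set it per node is a quick in-place edit (a sketch; on node-1, 10.0.0.12, for example):

sed -i 's#--hostname-override=10.0.0.13#--hostname-override=10.0.0.12#' /etc/kubernetes/kubelet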

systemctl enable kubelet.service
systemctl restart kubelet.service
systemctl enable kube-proxy.service
systemctl restart kube-proxy.service

7. From the master node, check whether the nodes are registered and alive:

[root@localhost ~]# kubectl -s http://10.0.0.11:8080 get node
NAME        STATUS    AGE
10.0.0.12   Ready     55s
10.0.0.13   Ready     1m
[root@localhost ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.0.12   Ready     1m
10.0.0.13   Ready     2m

At this point the basic Kubernetes cluster is up, but it still lacks a network component; continue with the steps below to add one.

8. Install flannel on the master and node machines:

yum -y install flannel

8.1 Edit the flannel configuration file on the master node:

[root@k8s-master ~]# vim /etc/sysconfig/flanneld
 
# Flanneld configuration options 
 
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
 
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
 
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

8.2 Write the flannel network configuration to etcd, then enable and start the flanneld service; after flannel comes up, Docker and the other components must be restarted. Note that the overlay network range must not overlap the hosts' own network (10.0.0.0/24 here), so a non-overlapping range such as 172.16.0.0/16 is used below:

etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
{ "Network": "172.16.0.0/16" }
 
systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
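
To check that flannel is working on the master (with the default UDP backend, flanneld typically creates a flannel0 interface and records its subnet lease in etcd):

etcdctl ls /atomic.io/network/subnets   # one lease per flannel host
cat /run/flannel/subnet.env             # the subnet assigned to this host
ip addr show flannel0                   # the overlay interface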

8.3 Edit the flannel configuration file on the node machines (the same settings as on the master):

[root@localhost ~]# vim /etc/sysconfig/flanneld
 
# Flanneld configuration options 
 
# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"
 
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
 
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

8.4 Enable and start the services on the node machines; again, restart Docker and the other components after flannel starts:

systemctl enable flanneld.service
systemctl restart flanneld.service
systemctl restart docker
systemctl restart kubelet.service
systemctl restart kube-proxy.service
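
Finally, on each node you can confirm that the overlay is wired into Docker: after the restart, docker0 should sit inside the FLANNEL_SUBNET this host leased, and containers on different nodes should be able to reach each other:

cat /run/flannel/subnet.env
ip addr show flannel0
ip addr show docker0   # should be inside this host's FLANNEL_SUBNET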