Go Etcd
Official site:
https://etcd.io/docs/v3.5/quickstart/
https://github.com/etcd-io/etcd
What is etcd
- etcd is a key-value store that focuses on guaranteeing data consistency in a cluster environment.
- Redis is also a key-value store, but it focuses on fast reads and writes. When you need strong data consistency across a cluster, use etcd.
etcd is an open-source, highly available distributed key-value store developed in Go. It can be used for configuration sharing and for service registration and discovery.
Features
Fully replicated: every node in the cluster has access to the complete data set
Highly available: etcd can be used to avoid single points of hardware failure and network problems
Consistent: every read returns the most recent write across all hosts
Simple: a well-defined, user-facing API (gRPC)
Secure: automatic TLS with optional client certificate authentication
Fast: benchmarked at 10,000 writes per second
Reliable: uses the Raft algorithm to provide a strongly consistent, highly available store
Applications
Service discovery
Service discovery means finding out whether any process in the cluster is listening on a UDP or TCP port, and being able to look it up and connect to it by name.
Configuration center
Put configuration data into etcd for centralized management.
An application actively fetches its configuration from etcd once at startup,
and at the same time registers a Watcher on the etcd key and waits;
from then on, every time the configuration is updated, etcd notifies the subscriber in real time, so the application always has the latest configuration.
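As a rough sketch of this pattern, assuming the clientv3 Go client introduced later in this article, a local endpoint, and a hypothetical key /config/app:

package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        panic(err)
    }
    defer cli.Close()
    // 1. fetch the configuration once at startup
    resp, err := cli.Get(context.Background(), "/config/app")
    if err != nil {
        panic(err)
    }
    for _, kv := range resp.Kvs {
        fmt.Printf("initial config %s = %s\n", kv.Key, kv.Value)
    }
    // 2. register a watcher and block; etcd pushes every later update
    for wresp := range cli.Watch(context.Background(), "/config/app") {
        for _, ev := range wresp.Events {
            fmt.Printf("config updated: %s = %s\n", ev.Kv.Key, ev.Kv.Value)
        }
    }
}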
Installing etcd 3.5
A look through the official site shows there is, surprisingly, no Docker install method. What? Did I misread?
See for yourself: https://etcd.io/docs/v3.5/install/
The official site offers only two methods:
- installing pre-built binaries
- building from source with Go
So let's set it up in Docker ourselves, since my end goal is to use Docker to simulate a cluster environment.
Create the Go containers
Docker Hub has an official golang image, which can be pulled directly:
docker pull golang
The image is based on Debian (a Linux distribution).
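If you want to match the Go version used below to build etcd v3.5.0, you can pull a tagged image instead; golang:1.17 is an existing Docker Hub tag:
docker pull golang:1.17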
Create three containers:
# this container publishes a port to the host
docker run -itd -p 22379:2379 --name etcd1 golang
docker run -itd --name etcd2 golang
docker run -itd --name etcd3 golang
Check the Go version:
go version
Clone etcd in each container
Open three terminals:
docker exec -it etcd1 bash
docker exec -it etcd2 bash
docker exec -it etcd3 bash
Clone inside each container:
# the latest 3.6 code requires Go 1.19
git clone https://github.com/etcd-io/etcd.git
# the v3.5.0 tag builds with Go 1.17.5; that is the one I use
git clone -b v3.5.0 https://github.com/etcd-io/etcd.git
Build etcd
The build script pulls down a number of Go libraries, so it is well worth setting up a GOPROXY first.
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
The following needs to be run in every container.
cd etcd
./build.sh
After the build finishes there is a new bin directory containing two executables, etcd and etcdctl: the server and the client, respectively. The cluster is built with the etcd binary.
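You can sanity-check the result; the --version flag prints the build information:
bin/etcd --version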
Make sure the three containers can reach each other
For background on Docker networking, see:
https://www.cnblogs.com/makalochen/p/14242125.html
Before writing the etcd configuration files we need to know each node's IP and whether the nodes can reach one another.
Find the IPs of the three containers
# list the running containers to get their IDs
docker ps
# query the IP of one or more containers
# docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-1> <container-id-2>
docker inspect --format '{{ .NetworkSettings.IPAddress }}' 1080bf1030f6 f2a07b90daa1 24db14135e82
# we did not specify a network when creating the containers, so they join the default bridge
# confirm that they have joined it
docker network inspect bridge
From this output we can tell that the three containers are on the same network, so traffic between them should be fine. The image does not ship with a ping tool, so I will skip that test; it should be OK.
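If you do want to verify connectivity, the image is Debian-based, so you can install ping inside a container first (iputils-ping is the Debian package name):
apt-get update && apt-get install -y iputils-ping
ping -c 3 172.17.0.3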
Write the configuration files
The etcd source directory contains a sample configuration file, etcd.conf.yml.sample. Copy it to serve as the working configuration:
cp etcd.conf.yml.sample etcd.conf.yml
The container has no vi or other editor, so for convenience I edit the files through VS Code with its Docker extension.
Only the parts of the configuration file that need to be changed are listed below:
# node name; an alias for humans
name: 'etcd3'
# directory where the data files are stored
data-dir: /var/lib/etcd
# comma-separated list of URLs to listen on for traffic from other nodes
# 172.17.0.4 below is the Docker container IP; it differs on every node, so remember to change it
listen-peer-urls: http://172.17.0.4:2380
# comma-separated list of URLs to listen on for client traffic
listen-client-urls: http://172.17.0.4:2379,http://localhost:2379
# list of peer URLs to advertise to the other nodes; the same port as the peer listener is fine
initial-advertise-peer-urls: http://172.17.0.4:2380
# list of client URLs to advertise publicly
advertise-client-urls: http://172.17.0.4:2379
# initial cluster configuration: a list of node-name=url pairs covering every node
initial-cluster: infra1=http://172.17.0.2:2380,infra2=http://172.17.0.3:2380,infra3=http://172.17.0.4:2380
# initial cluster token; configure the same value on every node
initial-cluster-token: 'etcd-cluster-1'
etcd1 configuration file
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: 'etcd1'
# Path to the data directory.
data-dir: /var/lib/etcd
# Path to the dedicated wal directory.
wal-dir:
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://172.17.0.2:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://172.17.0.2:2379,http://localhost:2379
# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5
# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5
# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://172.17.0.2:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://172.17.0.2:2379
# Discovery URL used to bootstrap the cluster.
discovery:
# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'
# HTTP proxy to use for traffic to discovery service.
discovery-proxy:
# DNS domain used to bootstrap initial cluster.
discovery-srv:
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.17.0.2:2380,etcd2=http://172.17.0.3:2380,etcd3=http://172.17.0.4:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster1'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false
# Accept etcd V2 client requests
enable-v2: true
# Enable runtime profiling data via HTTP server
enable-pprof: true
# Valid values include 'on', 'readonly', 'off'
proxy: 'off'
# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000
# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000
# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000
# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000
# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0
client-transport-security:
# Path to the client server TLS cert file.
cert-file:
# Path to the client server TLS key file.
key-file:
# Enable client cert authentication.
client-cert-auth: false
# Path to the client server TLS trusted CA cert file.
trusted-ca-file:
# Client TLS using generated certificates
auto-tls: false
peer-transport-security:
# Path to the peer server TLS cert file.
cert-file:
# Path to the peer server TLS key file.
key-file:
# Enable peer client cert authentication.
client-cert-auth: false
# Path to the peer server TLS trusted CA cert file.
trusted-ca-file:
# Peer TLS using generated certificates.
auto-tls: false
# Enable debug-level logging for etcd.
log-level: debug
logger: zap
# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]
# Force to create a new one member cluster.
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
etcd2 configuration file
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: 'etcd2'
# Path to the data directory.
data-dir: /var/lib/etcd
# Path to the dedicated wal directory.
wal-dir:
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://172.17.0.3:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://172.17.0.3:2379,http://localhost:2379
# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5
# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5
# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://172.17.0.3:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://172.17.0.3:2379
# Discovery URL used to bootstrap the cluster.
discovery:
# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'
# HTTP proxy to use for traffic to discovery service.
discovery-proxy:
# DNS domain used to bootstrap initial cluster.
discovery-srv:
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.17.0.2:2380,etcd2=http://172.17.0.3:2380,etcd3=http://172.17.0.4:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster1'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false
# Accept etcd V2 client requests
enable-v2: true
# Enable runtime profiling data via HTTP server
enable-pprof: true
# Valid values include 'on', 'readonly', 'off'
proxy: 'off'
# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000
# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000
# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000
# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000
# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0
client-transport-security:
# Path to the client server TLS cert file.
cert-file:
# Path to the client server TLS key file.
key-file:
# Enable client cert authentication.
client-cert-auth: false
# Path to the client server TLS trusted CA cert file.
trusted-ca-file:
# Client TLS using generated certificates
auto-tls: false
peer-transport-security:
# Path to the peer server TLS cert file.
cert-file:
# Path to the peer server TLS key file.
key-file:
# Enable peer client cert authentication.
client-cert-auth: false
# Path to the peer server TLS trusted CA cert file.
trusted-ca-file:
# Peer TLS using generated certificates.
auto-tls: false
# Enable debug-level logging for etcd.
log-level: debug
logger: zap
# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]
# Force to create a new one member cluster.
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
etcd3 configuration file
# This is the configuration file for the etcd server.
# Human-readable name for this member.
name: 'etcd3'
# Path to the data directory.
data-dir: /var/lib/etcd
# Path to the dedicated wal directory.
wal-dir:
# Number of committed transactions to trigger a snapshot to disk.
snapshot-count: 10000
# Time (in milliseconds) of a heartbeat interval.
heartbeat-interval: 100
# Time (in milliseconds) for an election to timeout.
election-timeout: 1000
# Raise alarms when backend size exceeds the given quota. 0 means use the
# default quota.
quota-backend-bytes: 0
# List of comma separated URLs to listen on for peer traffic.
listen-peer-urls: http://172.17.0.4:2380
# List of comma separated URLs to listen on for client traffic.
listen-client-urls: http://172.17.0.4:2379,http://localhost:2379
# Maximum number of snapshot files to retain (0 is unlimited).
max-snapshots: 5
# Maximum number of wal files to retain (0 is unlimited).
max-wals: 5
# Comma-separated white list of origins for CORS (cross-origin resource sharing).
cors:
# List of this member's peer URLs to advertise to the rest of the cluster.
# The URLs needed to be a comma-separated list.
initial-advertise-peer-urls: http://172.17.0.4:2380
# List of this member's client URLs to advertise to the public.
# The URLs needed to be a comma-separated list.
advertise-client-urls: http://172.17.0.4:2379
# Discovery URL used to bootstrap the cluster.
discovery:
# Valid values include 'exit', 'proxy'
discovery-fallback: 'proxy'
# HTTP proxy to use for traffic to discovery service.
discovery-proxy:
# DNS domain used to bootstrap initial cluster.
discovery-srv:
# Initial cluster configuration for bootstrapping.
initial-cluster: etcd1=http://172.17.0.2:2380,etcd2=http://172.17.0.3:2380,etcd3=http://172.17.0.4:2380
# Initial cluster token for the etcd cluster during bootstrap.
initial-cluster-token: 'etcd-cluster1'
# Initial cluster state ('new' or 'existing').
initial-cluster-state: 'new'
# Reject reconfiguration requests that would cause quorum loss.
strict-reconfig-check: false
# Accept etcd V2 client requests
enable-v2: true
# Enable runtime profiling data via HTTP server
enable-pprof: true
# Valid values include 'on', 'readonly', 'off'
proxy: 'off'
# Time (in milliseconds) an endpoint will be held in a failed state.
proxy-failure-wait: 5000
# Time (in milliseconds) of the endpoints refresh interval.
proxy-refresh-interval: 30000
# Time (in milliseconds) for a dial to timeout.
proxy-dial-timeout: 1000
# Time (in milliseconds) for a write to timeout.
proxy-write-timeout: 5000
# Time (in milliseconds) for a read to timeout.
proxy-read-timeout: 0
client-transport-security:
# Path to the client server TLS cert file.
cert-file:
# Path to the client server TLS key file.
key-file:
# Enable client cert authentication.
client-cert-auth: false
# Path to the client server TLS trusted CA cert file.
trusted-ca-file:
# Client TLS using generated certificates
auto-tls: false
peer-transport-security:
# Path to the peer server TLS cert file.
cert-file:
# Path to the peer server TLS key file.
key-file:
# Enable peer client cert authentication.
client-cert-auth: false
# Path to the peer server TLS trusted CA cert file.
trusted-ca-file:
# Peer TLS using generated certificates.
auto-tls: false
# Enable debug-level logging for etcd.
log-level: debug
logger: zap
# Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
log-outputs: [stderr]
# Force to create a new one member cluster.
force-new-cluster: false
auto-compaction-mode: periodic
auto-compaction-retention: "1"
Run the nodes with their config files to form the cluster
Run in each container:
bin/etcd --config-file etcd.conf.yml
The cluster should now be up and running.
Wait... look at the log carefully.
OK, startup failed. The log says it cannot find etcd1.
Look closely: this is where the config file comes in, and sure enough, there it is:
# before
initial-cluster: infra1=http://172.17.0.2:2380,infra2=http://172.17.0.3:2380,infra3=http://172.17.0.4:2380
# after
initial-cluster: etcd1=http://172.17.0.2:2380,etcd2=http://172.17.0.3:2380,etcd3=http://172.17.0.4:2380
Connect to etcd with etcdctl
Now open another terminal to take a look.
# option 1
# set environment variables
export ETCDCTL_API=3
HOST_1=172.17.0.2
HOST_2=172.17.0.3
HOST_3=172.17.0.4
ENDPOINTS=$HOST_1:2379,$HOST_2:2379,$HOST_3:2379
# connect
./etcdctl --endpoints=$ENDPOINTS member list
# option 2: pass the endpoints explicitly
./etcdctl --endpoints=http://172.17.0.2:2379,http://172.17.0.3:2379,http://172.17.0.4:2379 endpoint status -w table
Being able to query the status means our cluster has been set up successfully.
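As an extra sanity check, you can write and read a key through the cluster (foo and bar are arbitrary example values):
./etcdctl --endpoints=$ENDPOINTS put foo bar
./etcdctl --endpoints=$ENDPOINTS get foo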
Using Go to operate etcd
We can already operate the etcd cluster with the etcdctl client, but in real use nobody drives it that way; normally you write Go code against it.
Officially listed libraries and tools:
https://etcd.io/docs/v3.5/integrations/
Here we use the Go client, etcd/clientv3.
GitHub: https://github.com/etcd-io/etcd/tree/main/client/v3
Official documentation: https://pkg.go.dev/go.etcd.io/etcd/client/v3#section-readme
go etcd hello world
Learning anything starts with hello world, so here we write "hello world" into the etcd cluster and read it back out.
First, of course, create a project: make a folder named etcd_test.
# initialize the module
go mod init etcd_test
# install the etcd/client/v3 package
go get go.etcd.io/etcd/client/v3
Create main.go and fill it with the following content:
package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        // note: this is the container port published to the local machine
        Endpoints:   []string{"127.0.0.1:22379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        fmt.Printf("failed to connect: %+v", err)
        return
    }
    defer cli.Close()
    // put
    // set a timeout
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    _, err = cli.Put(ctx, "test", "hello world")
    cancel()
    if err != nil {
        fmt.Printf("put failed, err:%v\n", err)
        return
    }
    // get
    ctx, cancel = context.WithTimeout(context.Background(), time.Second)
    resp, err := cli.Get(ctx, "test")
    cancel()
    if err != nil {
        fmt.Printf("get failed, err:%v\n", err)
        return
    }
    for _, ev := range resp.Kvs {
        fmt.Printf("%s:%s\n", ev.Key, ev.Value)
    }
}
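Run it; with the cluster up, the program writes the key and prints it straight back:
go run main.go
# output:
# test:hello world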
put, get, delete, watch methods
The hello world example above already used the put and get methods. These methods do exactly what their names suggest:
- put: create or update a value
- get: read a value
- delete: delete a value
- watch: watch for changes
Next, a combined example:
watch the key test2 for changes to its value.
Example
File: main.go
package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:22379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        fmt.Printf("failed to connect: %+v", err)
        return
    }
    defer cli.Close()
    // watch test2
    // post a sentry that keeps watching the key for changes (create, update, delete)
    ch := cli.Watch(context.Background(), "test2")
    // read the watch notifications off the channel
    for wresp := range ch {
        for _, evt := range wresp.Events {
            fmt.Printf("type:%v; key:%v; value:%v \n", evt.Type, string(evt.Kv.Key), string(evt.Kv.Value))
        }
    }
}
File: main2.go
package main

import (
    "context"
    "fmt"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:22379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        fmt.Printf("failed to connect: %+v", err)
        return
    }
    defer cli.Close()
    // a context that lives for 2 seconds
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    // put: create
    _, err = cli.Put(ctx, "test2", "create")
    cancel()
    if err != nil {
        fmt.Printf("put failed, err:%v\n", err)
        return
    }
    ctx, cancel = context.WithTimeout(context.Background(), time.Second)
    // put: update
    _, err = cli.Put(ctx, "test2", "update")
    cancel()
    if err != nil {
        fmt.Printf("put failed, err:%v\n", err)
        return
    }
    ctx, cancel = context.WithTimeout(context.Background(), time.Second)
    // get
    resp, err := cli.Get(ctx, "test2")
    cancel()
    if err != nil {
        fmt.Printf("get failed, err:%v\n", err)
        return
    }
    for _, ev := range resp.Kvs {
        fmt.Printf("%s:%s\n", ev.Key, ev.Value)
    }
    ctx, cancel = context.WithTimeout(context.Background(), time.Second)
    // delete
    _, err = cli.Delete(ctx, "test2")
    cancel()
    if err != nil {
        panic(err.Error())
    }
}
Run each in its own terminal:
go run main.go
go run main2.go
Result
As you can see, the watcher picked up the create, the update, and the delete.
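Given the code in main2.go, the watcher terminal should print something close to this (a delete event carries an empty value):
type:PUT; key:test2; value:create
type:PUT; key:test2; value:update
type:DELETE; key:test2; value: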
Service registration and discovery with etcd
Method summary
- clientv3.New: create an etcd v3 client (func New(cfg Config) (*Client, error))
- clientv3.Config: the configuration used when creating a client
- Grant: create a new lease (Grant(ctx context.Context, ttl int64) (*LeaseGrantResponse, error))
- Put: register the service, binding the key to the lease
- KeepAlive: keep the lease alive by periodically sending renewal requests (KeepAlive(ctx context.Context, id LeaseID) (<-chan *LeaseKeepAliveResponse, error))
- Revoke: revoke the lease
- Get: fetch the registered services
- Watch: watch the services for changes
Implementation flow
On the registration side: grant a lease with a TTL, put the service key bound to that lease, and keep the lease alive in the background; on shutdown, revoke the lease so the key disappears. On the discovery side: get all keys under a prefix once, then watch the prefix for changes.
Service registration example
package main

import (
    "context"
    "log"
    "time"

    clientv3 "go.etcd.io/etcd/client/v3"
)

// ServiceRegister registers a service under a lease
type ServiceRegister struct {
    cli     *clientv3.Client // etcd client
    leaseID clientv3.LeaseID // lease ID
    // channel for lease keepalive responses
    keepAliveChan <-chan *clientv3.LeaseKeepAliveResponse
    key           string // key
    val           string // value
}

// NewServiceRegister creates a new service registration
func NewServiceRegister(endpoints []string, key, val string, lease int64) (*ServiceRegister, error) {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   endpoints,
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    ser := &ServiceRegister{
        cli: cli,
        key: key,
        val: val,
    }
    // grant a lease, register the service, and start the keepalive
    if err := ser.putKeyWithLease(lease); err != nil {
        return nil, err
    }
    return ser, nil
}

// putKeyWithLease registers the key under a lease
func (s *ServiceRegister) putKeyWithLease(lease int64) error {
    // grant a lease with the given TTL
    resp, err := s.cli.Grant(context.Background(), lease)
    if err != nil {
        return err
    }
    // register the service, binding the key to the lease
    _, err = s.cli.Put(context.Background(), s.key, s.val, clientv3.WithLease(resp.ID))
    if err != nil {
        return err
    }
    // keep the lease alive by periodically sending renewal requests
    leaseRespChan, err := s.cli.KeepAlive(context.Background(), resp.ID)
    if err != nil {
        return err
    }
    s.leaseID = resp.ID
    log.Println(s.leaseID)
    s.keepAliveChan = leaseRespChan
    log.Printf("Put key:%s val:%s success!", s.key, s.val)
    return nil
}

// ListenLeaseRespChan listens for lease renewal responses
func (s *ServiceRegister) ListenLeaseRespChan() {
    for leaseKeepResp := range s.keepAliveChan {
        log.Println("lease renewed", leaseKeepResp)
    }
    log.Println("keepalive channel closed")
}

// Close revokes the lease and unregisters the service
func (s *ServiceRegister) Close() error {
    // revoke the lease
    if _, err := s.cli.Revoke(context.Background(), s.leaseID); err != nil {
        return err
    }
    log.Println("lease revoked")
    return s.cli.Close()
}

func main() {
    // cluster endpoints
    var endpoints = []string{"127.0.0.1:22379"}
    ser, err := NewServiceRegister(endpoints, "/web", "172.17.0.3:2379", 5)
    if err != nil {
        log.Fatalln(err)
    }
    // listen for lease renewal responses
    go ser.ListenLeaseRespChan()
    // revoke the lease after 30 seconds
    <-time.After(30 * time.Second)
    ser.Close()
}
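With the cluster running, the log should look roughly like this (a sketch: timestamps dropped, and the lease ID is just an opaque number printed by the code); after 30 seconds the lease is revoked and the key disappears:
<lease ID>
Put key:/web val:172.17.0.3:2379 success!
lease renewed ...
lease renewed ...
lease revoked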
Service discovery example
package main

import (
    "context"
    "log"
    "sync"
    "time"

    mvccpb "go.etcd.io/etcd/api/v3/mvccpb"
    clientv3 "go.etcd.io/etcd/client/v3"
)

// ServiceDiscovery discovers services
type ServiceDiscovery struct {
    cli        *clientv3.Client  // etcd client
    serverList map[string]string // service list
    lock       sync.Mutex
}

// NewServiceDiscovery creates a new service discovery instance
func NewServiceDiscovery(endpoints []string) *ServiceDiscovery {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   endpoints,
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    return &ServiceDiscovery{
        cli:        cli,
        serverList: make(map[string]string),
    }
}

// WatchService initializes the service list and starts watching
func (s *ServiceDiscovery) WatchService(prefix string) error {
    // fetch all existing keys under the prefix
    resp, err := s.cli.Get(context.Background(), prefix, clientv3.WithPrefix())
    if err != nil {
        return err
    }
    for _, ev := range resp.Kvs {
        s.SetServiceList(string(ev.Key), string(ev.Value))
    }
    // watch the prefix and apply changes to the server list
    go s.watcher(prefix)
    return nil
}

// watcher watches the prefix
func (s *ServiceDiscovery) watcher(prefix string) {
    rch := s.cli.Watch(context.Background(), prefix, clientv3.WithPrefix())
    log.Printf("watching prefix:%s now...", prefix)
    for wresp := range rch {
        for _, ev := range wresp.Events {
            switch ev.Type {
            case mvccpb.PUT: // create or update
                s.SetServiceList(string(ev.Kv.Key), string(ev.Kv.Value))
            case mvccpb.DELETE: // delete
                s.DelServiceList(string(ev.Kv.Key))
            }
        }
    }
}

// SetServiceList adds a service address
func (s *ServiceDiscovery) SetServiceList(key, val string) {
    s.lock.Lock()
    defer s.lock.Unlock()
    s.serverList[key] = val
    log.Println("put key :", key, "val:", val)
}

// DelServiceList removes a service address
func (s *ServiceDiscovery) DelServiceList(key string) {
    s.lock.Lock()
    defer s.lock.Unlock()
    delete(s.serverList, key)
    log.Println("del key:", key)
}

// GetServices returns the known service addresses
func (s *ServiceDiscovery) GetServices() []string {
    s.lock.Lock()
    defer s.lock.Unlock()
    addrs := make([]string, 0)
    for _, v := range s.serverList {
        addrs = append(addrs, v)
    }
    return addrs
}

// Close closes the client
func (s *ServiceDiscovery) Close() error {
    return s.cli.Close()
}

func main() {
    var endpoints = []string{"127.0.0.1:22379"}
    ser := NewServiceDiscovery(endpoints)
    defer ser.Close()
    _ = ser.WatchService("/web")
    // calling time.Tick inside a select in a loop would create a new
    // ticker on every iteration, so create one ticker and reuse it
    ticker := time.NewTicker(10 * time.Second)
    for range ticker.C {
        log.Println(ser.GetServices())
    }
}
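To see the two programs work together, start the discovery program first, then run the registration program from the previous section in another terminal (the file names below are just whatever you saved the two examples as):
go run discovery.go
go run register.go
The discovery log should show a put for /web when registration starts, the address in the GetServices output for about 30 seconds, and a del once the lease is revoked.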