Fixing "gpg: no valid OpenPGP data found"
The installation gets stuck at this step:

```shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
```

Google is unreachable from inside China, so curl hangs here. The solution, naturally, is to tunnel out through a proxy.
I have Shadowsocks (SS) running on a server outside China. If you don't have an SS server, a plain ssh tunnel should work too. With SS, you need ss-local running on the local machine; I suggest running it in Docker so you can throw it away when you're done. The image I use is https://hub.docker.com/r/mritd/shadowsocks/.

Once the overseas server is ready, install tsocks on the domestic server:
```shell
apt install -y tsocks
```
Then edit /etc/tsocks.conf:
```
server = 127.0.0.1
server_type = 5
server_port = 1080
```
Here `server` is the local IP, `server_type = 5` means SOCKS5, and `server_port` is the local proxy port (use the same port ss-local listens on).
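Before moving on, it's worth confirming that traffic actually flows through the proxy. A minimal check, assuming ss-local is listening on 127.0.0.1:1080 as configured above:

```shell
# Fetch only the response headers of the key file through tsocks;
# a prompt HTTP status line means the SOCKS5 tunnel is working.
tsocks curl -sI https://packages.cloud.google.com/apt/doc/apt-key.gpg | head -n 1
# If this hangs like the bare curl did, tsocks is not picking up
# /etc/tsocks.conf, or ss-local is not running.
```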
With the tunnel ready, install the packages:
```shell
apt-get update && apt-get install -y apt-transport-https
tsocks curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
tsocks apt-get update
tsocks apt-get install -y kubelet kubeadm kubectl
```
One thing to watch out for: if the domestic machine is an Aliyun server, its apt sources may point at the internal-network mirror, in which case running apt update through tsocks fails with errors like:

```
W: The repository 'http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates Release' does not have a Release file.
```

or

```
Err:9 http://mirrors.cloud.aliyuncs.com/ubuntu xenial Release
  Connection failed
```

In that case, edit /etc/apt/sources.list.d/source.aliyun.list and replace every http://mirrors.cloud.aliyuncs.com with http://mirrors.aliyun.com; apt update through the proxy will then work. Back the file up before changing it, and switch back to the internal mirror when you're done.
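The swap and the backup can be scripted. A sketch, assuming the file name source.aliyun.list mentioned above (check what actually exists in your /etc/apt/sources.list.d/):

```shell
LIST=/etc/apt/sources.list.d/source.aliyun.list
if [ -f "$LIST" ]; then
    # Keep a backup of the internal-mirror version.
    cp "$LIST" "$LIST.bak"
    # Point every entry at the public mirror, reachable through the proxy.
    sed -i 's#http://mirrors.cloud.aliyuncs.com#http://mirrors.aliyun.com#g' "$LIST"
fi
# When done, restore the internal mirror:
#   mv "$LIST.bak" "$LIST"
```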
With the packages installed, kubeadm init is the next thing to hang, and running tsocks kubeadm init does not help:

```
unable to get URL "https://dl.k8s.io/release/stable-1.9.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.9.txt: dial tcp 172.217.160.112:443: i/o timeout
```

We can skip this version lookup by pinning the Kubernetes version explicitly:

```shell
kubeadm init --kubernetes-version=1.9.3
```
If the images have not been prepared in advance, init usually stalls here:

```
[init] This might take a minute or longer if the control plane images have to be pulled.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
        - There is no internet connection, so the kubelet cannot pull the following control plane images:
                - gcr.io/google_containers/kube-apiserver-amd64:v1.9.3
                - gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3
                - gcr.io/google_containers/kube-scheduler-amd64:v1.9.3
```
So we need to prepare the images ahead of time. My approach: on the overseas server, pull the images and push them to hub.docker.com, then pull them from hub.docker.com onto the domestic server.
```shell
#!/bin/bash
ARCH=amd64
version=v1.9.3
username=<username>
# https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
images=(kube-apiserver-${ARCH}:${version} \
    kube-controller-manager-${ARCH}:${version} \
    kube-scheduler-${ARCH}:${version} \
    kube-proxy-${ARCH}:${version} \
    etcd-${ARCH}:3.1.11 \
    pause-${ARCH}:3.0 \
    k8s-dns-sidecar-${ARCH}:1.14.7 \
    k8s-dns-kube-dns-${ARCH}:1.14.7 \
    k8s-dns-dnsmasq-nanny-${ARCH}:1.14.7 \
)
docker login -u $username -p <password>
for image in "${images[@]}"
do
    docker pull k8s.gcr.io/${image}
    docker tag k8s.gcr.io/${image} ${username}/${image}
    docker push ${username}/${image}
    docker rmi k8s.gcr.io/${image}
    docker rmi ${username}/${image}
done
unset ARCH version images username
```
Run this script on the overseas server, replacing <username> and <password> with your hub.docker.com credentials.

Then run the following script on the domestic server:
```shell
#!/bin/bash
ARCH=amd64
version=v1.9.3
username=<username>
# https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
images=(kube-apiserver-${ARCH}:${version} \
    kube-controller-manager-${ARCH}:${version} \
    kube-scheduler-${ARCH}:${version} \
    kube-proxy-${ARCH}:${version} \
    etcd-${ARCH}:3.1.11 \
    pause-${ARCH}:3.0 \
    k8s-dns-sidecar-${ARCH}:1.14.7 \
    k8s-dns-kube-dns-${ARCH}:1.14.7 \
    k8s-dns-dnsmasq-nanny-${ARCH}:1.14.7 \
)
for image in "${images[@]}"
do
    docker pull ${username}/${image}
    #docker tag ${username}/${image} k8s.gcr.io/${image}
    docker tag ${username}/${image} gcr.io/google_containers/${image}
    docker rmi ${username}/${image}
done
unset ARCH version images username
```
With that, all the images Kubernetes needs are in place, and running kubeadm init again succeeds.
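Before re-running init, a quick sanity check is to print the image names the scripts above prepare and confirm each is in the local Docker cache. This is just a sketch reusing the same list; the docker command in the trailing comment is one way to consume the output:

```shell
#!/bin/bash
# Print the gcr.io image names kubeadm v1.9.3 expects,
# matching the list used in the scripts above.
ARCH=amd64
version=v1.9.3
images=(kube-apiserver-${ARCH}:${version}
        kube-controller-manager-${ARCH}:${version}
        kube-scheduler-${ARCH}:${version}
        kube-proxy-${ARCH}:${version}
        etcd-${ARCH}:3.1.11
        pause-${ARCH}:3.0
        k8s-dns-sidecar-${ARCH}:1.14.7
        k8s-dns-kube-dns-${ARCH}:1.14.7
        k8s-dns-dnsmasq-nanny-${ARCH}:1.14.7)
for image in "${images[@]}"; do
    echo "gcr.io/google_containers/${image}"
done
# Check each one is present locally, e.g.:
#   ./print-images.sh | xargs -n1 docker image inspect >/dev/null && echo all present
```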
One more small tip: while init is running, open a second terminal and run

```shell
journalctl -f -u kubelet.service
```

to see exactly what it is stuck on.