failed to pull image k8s.gcr.io/kube-controller-manager

 

root@ubuntu:~# kubeadm init --kubernetes-version=v1.18.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.10.16.82  --cri-socket /run/containerd/containerd.sock 
W1014 12:00:18.348953   26276 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.1: output: time="2020-10-14T12:02:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-apiserver:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-apiserver:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-apiserver/manifests/v1.18.1: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.1: output: time="2020-10-14T12:05:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-controller-manager:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-controller-manager:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-controller-manager/manifests/v1.18.1: dial tcp 108.177.97.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.1: output: time="2020-10-14T12:07:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-scheduler:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-scheduler:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-scheduler/manifests/v1.18.1: dial tcp 64.233.188.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.1: output: time="2020-10-14T12:10:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.18.1\": failed to resolve reference \"k8s.gcr.io/kube-proxy:v1.18.1\": failed to do request: Head https://k8s.gcr.io/v2/kube-proxy/manifests/v1.18.1: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: time="2020-10-14T12:12:48+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/pause:3.2\": failed to resolve reference \"k8s.gcr.io/pause:3.2\": failed to do request: Head https://k8s.gcr.io/v2/pause/manifests/3.2: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: time="2020-10-14T12:15:18+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/etcd:3.4.3-0\": failed to resolve reference \"k8s.gcr.io/etcd:3.4.3-0\": failed to do request: Head https://k8s.gcr.io/v2/etcd/manifests/3.4.3-0: dial tcp 108.177.125.82:443: i/o timeout"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: time="2020-10-14T12:17:49+08:00" level=fatal msg="pulling image failed: rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/coredns:1.6.7\": failed to resolve reference \"k8s.gcr.io/coredns:1.6.7\": failed to do request: Head https://k8s.gcr.io/v2/coredns/manifests/1.6.7: dial tcp 74.125.204.82:443: i/o timeout"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
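The i/o timeouts above mean the node cannot reach k8s.gcr.io at all. Besides mirroring images by hand (the schemes below), kubeadm itself can be pointed at a different registry via its --image-repository flag. A sketch, assuming the Aliyun repository that appears later in this post; any registry hosting the control-plane images under one path will do:

```shell
# Same init as above, but with all control-plane pulls redirected to a
# reachable mirror. The Aliyun path is an assumption taken from later
# in this post.
kubeadm init --kubernetes-version=v1.18.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=10.10.16.82 \
  --cri-socket /run/containerd/containerd.sock \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
```

Note that since this node uses containerd as the CRI (--cri-socket), images pulled with the docker CLI would land in Docker's image store and not be visible to the kubelet anyway; letting kubeadm do the pulls against a mirror avoids that pitfall.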

 

 

Mirror options for downloading k8s images inside China

 

As is well known, pulling images from k8s.gcr.io is difficult from inside mainland China.

Option 1: the Azure China mirror site

http://mirror.azure.cn/help/gcr-proxy-cache.html

Upstream registry        Azure China mirror (GlobalProxy in China)
docker.io (Docker Hub)   dockerhub.azk8s.cn
gcr.io                   gcr.azk8s.cn
k8s.gcr.io               gcr.azk8s.cn/google-containers
quay.io                  quay.azk8s.cn
# These two commands are equivalent
docker pull  k8s.gcr.io/kube-apiserver:v1.15.2
docker pull  gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.2

# These two are equivalent as well
docker pull quay.io/xxx/yyy:zzz
docker pull quay.azk8s.cn/xxx/yyy:zzz
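The substitution rules above can be captured in a small helper. A sketch; azk8s_ref is a name made up for this post, not part of any tool:

```shell
# Map an upstream image reference to its Azure China mirror equivalent,
# following the table above. Pure string rewriting; nothing is pulled.
azk8s_ref() {
    case "$1" in
        k8s.gcr.io/*) echo "gcr.azk8s.cn/google-containers/${1#k8s.gcr.io/}" ;;
        gcr.io/*)     echo "gcr.azk8s.cn/${1#gcr.io/}" ;;
        quay.io/*)    echo "quay.azk8s.cn/${1#quay.io/}" ;;
        *)            echo "dockerhub.azk8s.cn/$1" ;;  # plain docker.io names
    esac
}

azk8s_ref k8s.gcr.io/kube-apiserver:v1.15.2
# → gcr.azk8s.cn/google-containers/kube-apiserver:v1.15.2
```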

Option 2: pull the images synced by the Docker Hub user mirrorgooglecontainers

At the time of writing, the Docker Hub user mirrorgooglecontainers keeps copies of all the latest k8s images; download from there first, then fix the tag.

https://hub.docker.com/u/mirrorgooglecontainers

# These two are equivalent as well
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.2
docker pull k8s.gcr.io/kube-scheduler:v1.15.2

Option 3: batch download via script

The names of the images to download can be obtained with the kubeadm config images list command:

[root@node-1 yum.repos.d]# kubeadm config images list --kubernetes-version=v1.15.2
k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/kube-controller-manager:v1.15.2
k8s.gcr.io/kube-scheduler:v1.15.2
k8s.gcr.io/kube-proxy:v1.15.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
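That list can also be fed straight into a pipeline that prints the pull/tag commands for review before running them. A sketch; gen_pull_cmds is a throwaway name for this post:

```shell
# Turn `kubeadm config images list` output into docker pull/tag commands
# against the Azure China mirror. Commands are printed, not executed;
# pipe the result to `sh` once it looks right.
gen_pull_cmds() {
    while IFS= read -r img; do
        name=${img#k8s.gcr.io/}   # strip the registry prefix
        echo "docker pull gcr.azk8s.cn/google-containers/$name"
        echo "docker tag gcr.azk8s.cn/google-containers/$name $img"
    done
}

# Normally: kubeadm config images list --kubernetes-version=v1.15.2 | gen_pull_cmds
gen_pull_cmds <<'EOF'
k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/pause:3.1
EOF
```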

Script 1: download via the Azure China mirror

#!/bin/bash
# download k8s 1.15.2 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.15.2'
# gcr.azk8s.cn/google-containers == k8s.gcr.io

images=(
kube-apiserver:v1.15.2
kube-controller-manager:v1.15.2
kube-scheduler:v1.15.2
kube-proxy:v1.15.2
pause:3.1
etcd:3.3.10
coredns:1.3.1
)

for imageName in ${images[@]};do
	docker pull gcr.azk8s.cn/google-containers/$imageName  
	docker tag  gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName  
	docker rmi  gcr.azk8s.cn/google-containers/$imageName
done

Script 2: download via the Azure China mirror; the Kubernetes version is passed as an argument

#!/bin/bash
# download k8s images for a given version
# get image-list by 'kubeadm config images list --kubernetes-version=<version>'
# gcr.azk8s.cn/google-containers == k8s.gcr.io

if [ $# -ne 1 ];then
    echo "usage: ./`basename $0` KUBERNETES-VERSION"
    exit 1
fi
version=$1

images=`kubeadm config images list --kubernetes-version=${version} |awk -F'/' '{print $2}'`

for imageName in ${images[@]};do
    docker pull gcr.azk8s.cn/google-containers/$imageName
    docker tag  gcr.azk8s.cn/google-containers/$imageName k8s.gcr.io/$imageName
    docker rmi  gcr.azk8s.cn/google-containers/$imageName
done
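The awk step in the script simply takes the part after the registry name; for a v1.15 image list every reference has exactly one slash, so field 2 is the name:tag part:

```shell
# Field 2 of a '/'-separated image reference is the name:tag part.
echo "k8s.gcr.io/kube-apiserver:v1.15.2" | awk -F'/' '{print $2}'
# → kube-apiserver:v1.15.2
```

Newer releases publish coredns as k8s.gcr.io/coredns/coredns:…, which has two slashes; there, prefix stripping with `${img#k8s.gcr.io/}` would be the safer choice.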

Script 3: download from the images that the user mirrorgooglecontainers publishes on Docker Hub

#!/bin/bash
# download k8s 1.15.2 images
# get image-list by 'kubeadm config images list --kubernetes-version=v1.15.2'

images=(
kube-apiserver:v1.15.2
kube-controller-manager:v1.15.2
kube-scheduler:v1.15.2
kube-proxy:v1.15.2
pause:3.1
etcd:3.3.10
)

for imageName in ${images[@]};do
	docker pull mirrorgooglecontainers/$imageName  
	docker tag  mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName  
	docker rmi  mirrorgooglecontainers/$imageName
done


# coredns is published under its own Docker Hub organization, so it is handled separately
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1

 

The same approach works on arm64 nodes (v1.18.1 here), using the mirrorgcrio images on Docker Hub:

docker pull mirrorgcrio/pause-arm64:3.2
docker pull mirrorgcrio/kube-proxy-arm64:v1.18.1
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.18.1
docker pull mirrorgcrio/kube-apiserver-arm64:v1.18.1
docker pull mirrorgcrio/kube-scheduler-arm64:v1.18.1
docker pull mirrorgcrio/etcd-arm64:3.4.3-0
docker pull coredns/coredns:coredns-arm64


docker tag mirrorgcrio/kube-apiserver-arm64:v1.18.1 k8s.gcr.io/kube-apiserver:v1.18.1
docker tag mirrorgcrio/kube-scheduler-arm64:v1.18.1 k8s.gcr.io/kube-scheduler:v1.18.1
docker tag mirrorgcrio/kube-controller-manager-arm64:v1.18.1 k8s.gcr.io/kube-controller-manager:v1.18.1
docker tag mirrorgcrio/kube-proxy-arm64:v1.18.1 k8s.gcr.io/kube-proxy:v1.18.1
docker tag mirrorgcrio/pause-arm64:3.2 k8s.gcr.io/pause:3.2
docker tag mirrorgcrio/etcd-arm64:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag coredns/coredns:coredns-arm64 k8s.gcr.io/coredns:1.6.7

Then install the matching packages and initialize the cluster:

apt-get install kubeadm=1.18.1-00 kubectl=1.18.1-00 kubelet=1.18.1-00
kubeadm init --kubernetes-version=v1.18.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=14.14.18.6

------------------------------------------------------------------------
flannel images:
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-amd64
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm64
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-ppc64le
docker pull registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-s390x


docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm64 quay.io/coreos/flannel:v0.12.0-arm64
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-arm quay.io/coreos/flannel:v0.12.0-arm
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-ppc64le quay.io/coreos/flannel:v0.12.0-ppc64le
docker tag registry.cn-shanghai.aliyuncs.com/yijindami/flannel:v0.12.0-s390x quay.io/coreos/flannel:v0.12.0-s390x
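The five pull/tag pairs above differ only in the architecture suffix, so they can be generated in one loop. A sketch; gen_flannel_cmds is a made-up name, and the commands are printed so they can be reviewed and then piped to sh:

```shell
# Emit the flannel pull/tag commands for every architecture, using the
# Aliyun repository from this post as the source.
gen_flannel_cmds() {
    src=registry.cn-shanghai.aliyuncs.com/yijindami/flannel
    dst=quay.io/coreos/flannel
    for arch in amd64 arm64 arm ppc64le s390x; do
        echo "docker pull $src:v0.12.0-$arch"
        echo "docker tag $src:v0.12.0-$arch $dst:v0.12.0-$arch"
    done
}

gen_flannel_cmds        # review the output, then: gen_flannel_cmds | sh
```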

kubectl create -f kube-flannel.yml

 

 

Solution 1

Query the image list:
kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.17.9
k8s.gcr.io/kube-controller-manager:v1.17.9
k8s.gcr.io/kube-scheduler:v1.17.9
k8s.gcr.io/kube-proxy:v1.17.9
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:v3.3.12
k8s.gcr.io/coredns:1.6.9
Download the images:
images=(
  kube-apiserver:v1.17.9
  kube-controller-manager:v1.17.9
  kube-scheduler:v1.17.9
  kube-proxy:v1.17.9
  pause:3.1
  etcd:v3.3.12
  coredns:1.6.9
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

 

 

 

Solution 2: Docker Hub mirrors Google's containers; the relevant images can be pulled with the following commands:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
 

Adjust the version numbers to match your environment, then change the image tags with docker tag:

docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 k8s.gcr.io/kube-proxy-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.3 k8s.gcr.io/kube-scheduler-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.3 k8s.gcr.io/kube-apiserver-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.3 k8s.gcr.io/kube-controller-manager-amd64:v1.11.3
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18  k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3  k8s.gcr.io/coredns:1.1.3
 

posted on 2020-10-14 12:45 by tycoon3