Kubernetes CoreDNS installation and configuration (1.10.13)


Preface:

Kubernetes keeps all of its add-ons together under .../cluster/addons.
Looking at early releases, the .../cluster/addons directory was introduced in the Kubernetes 0.8.x series in January 2015. addons holds plugins that were split out of the core.
The earliest entries placed in the addons directory were dns and cluster-monitoring (see Kubernetes 0.8.0).
In 0.8.0 the DNS add-on was still named skydns.

E:\k8s源码\kubernetes-0.8.0\cluster\addons\dns\README.md
## How does it work? SkyDNS depends on etcd for what to serve, but it doesn't really need all of what etcd offers in the way we use it. For simplicity, we run etcd and SkyDNS together in a pod, and we do not try to link etcd instances across replicas. A helper container called `kube2sky` also runs in the pod and acts as a bridge between Kubernetes and SkyDNS. It finds the Kubernetes master through the `kubernetes-ro` service, it pulls service info from the master, and it writes that to etcd for SkyDNS to find.
SkyDNS was used at the time; the project lives at https://github.com/skynetservices/skydns
Its Makefile was as follows:
all: skydns

skydns:
	CGO_ENABLED=0 go build -a --ldflags '-w' github.com/skynetservices/skydns

container: skydns
	docker build -t kubernetes/skydns .

push:
	docker push kubernetes/skydns

clean:
	rm -f skydns
// SkyDNS was packaged and run as a Docker container, and depended on etcd.
// So the early DNS service consisted of kube2sky, SkyDNS, and etcd, all deployed as Docker containers.
E:\k8s源码\kubernetes-0.8.0\kubernetes-0.8.0\cluster\addons\dns\kube2sky\Dockerfile
E:\k8s源码\kubernetes-0.8.0\kubernetes-0.8.0\cluster\addons\dns\skydns\Dockerfile
For details, see E:\k8s源码\kubernetes-0.8.0\kubernetes-0.8.0\cluster\addons\dns\README.md

SkyDNS remained in use for several releases; in Kubernetes 1.3.0 the dns add-on disappeared from addons.

When DNS reappeared in 1.4.0, the name kube-dns appeared (SkyDNS was formally renamed), although the file names still said skydns.

From 1.6.0 on there is no more skydns; everything is kube-dns.
E:\k8s源码\kubernetes-1.6.0\kubernetes-1.6.0\cluster\addons\dns

kube-dns changed substantially in 1.6: several Pods now provide the DNS service together.
kube-dns is composed mainly of three containers: dnsmasq, kube-dns, and sidecar (the three files below are those components).
//E:\k8s源码\kubernetes-1.6.0\kubernetes-1.6.0\cluster\addons\dns\kubedns-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists

//E:\k8s源码\kubernetes-1.6.0\kubernetes-1.6.0\cluster\addons\dns\kubedns-controller.yaml.base
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1  # download the tar package to use
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=__PILLAR__DNS__DOMAIN__.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        __PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/__PILLAR__DNS__DOMAIN__/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.__PILLAR__DNS__DOMAIN__,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      serviceAccountName: kube-dns

//E:\k8s源码\kubernetes-1.6.0\kubernetes-1.6.0\cluster\addons\dns\kubedns-svc.yaml.base
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __PILLAR__DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

//E:\k8s源码\kubernetes-1.6.0\kubernetes-1.6.0\cluster\addons\dns\kubedns-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-dns
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

CoreDNS support was formally added in 1.9, so for a while kube-dns and CoreDNS coexisted.

E:\k8s源码\kubernetes-1.10.13\kubernetes-1.10.13\cluster(sh and yaml)\addons\dns\README.md
// This passage is also interesting: it states very clearly that the DNS service is provided as Pods plus a Service.
`kube-dns` schedules DNS Pods and Service on the cluster, other pods in cluster can use the DNS Service's IP to resolve DNS names.
* [Administrators guide](http://kubernetes.io/docs/admin/dns/)
* [Code repository](http://www.github.com/kubernetes/dns)  // a standalone sub-project
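Concretely, kubelet injects the DNS Service's cluster IP as the nameserver in each Pod's /etc/resolv.conf. With the cluster DNS Service IP used later in this post (172.17.0.2) and the default cluster.local domain, a Pod's resolver config would look roughly like this (an illustrative sketch, not copied from a real Pod):

```
# /etc/resolv.conf inside a Pod (values depend on the cluster)
nameserver 172.17.0.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```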

E:\k8s源码\kubernetes-1.10.13\kubernetes-1.10.13\cluster\addons\dns\coredns.yaml.base
// the image used is k8s.gcr.io/coredns:1.0.6

 


https://github.com/coredns/coredns (GitHub can be unstable to reach, so the page sometimes fails to load)

Official site: https://coredns.io/



What follows explains what CoreDNS is (early Kubernetes bundled its own kube-dns):

CoreDNS is a DNS server that chains plugins

CoreDNS is a DNS server/forwarder, written in Go, that chains plugins. Each plugin performs a (DNS) function.

CoreDNS is a Cloud Native Computing Foundation graduated project.

CoreDNS is a fast and flexible DNS server. The key word here is flexible: with CoreDNS you are able to do what you want with your DNS data by utilizing plugins. If some functionality is not provided out of the box you can add it by writing a plugin.

CoreDNS can listen for DNS requests coming in over UDP/TCP (go'old DNS), TLS (RFC 7858), also called DoT, DNS over HTTP/2 - DoH - (RFC 8484) and gRPC (not a standard).

Currently CoreDNS is able to:

  • Serve zone data from a file; both DNSSEC (NSEC only) and DNS are supported (file and auto).
  • Retrieve zone data from primaries, i.e., act as a secondary server (AXFR only) (secondary).
  • Sign zone data on-the-fly (dnssec).
  • Load balancing of responses (loadbalance).
  • Allow for zone transfers, i.e., act as a primary server (file + transfer).
  • Automatically load zone files from disk (auto).
  • Caching of DNS responses (cache).
  • Use etcd as a backend (replacing SkyDNS) (etcd).
  • Use k8s (kubernetes) as a backend (kubernetes).
  • Serve as a proxy to forward queries to some other (recursive) nameserver (forward).
  • Provide metrics (by using Prometheus) (prometheus).
  • Provide query (log) and error (errors) logging.
  • Integrate with cloud providers (route53).
  • Support the CH class: version.bind and friends (chaos).
  • Support the RFC 5001 DNS name server identifier (NSID) option (nsid).
  • Profiling support (pprof).
  • Rewrite queries (qtype, qclass and qname) (rewrite and template).
  • Block ANY queries (any).
  • Provide DNS64 IPv6 Translation (dns64).

And more. Each of the plugins is documented. See coredns.io/plugins for all in-tree plugins, and coredns.io/explugins for all out-of-tree plugins.
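As an illustration of plugin chaining, here is a minimal Corefile sketch (the zone name and file path are made up); each query flows, in order, through the plugins configured for the server block that matches it:

```
# Serve example.org from a zone file; cache and forward everything else.
example.org:53 {
    file /etc/coredns/db.example.org
    errors
}
.:53 {
    errors
    cache 30                 # cache responses for 30 seconds
    forward . 8.8.8.8        # forward the rest to an upstream resolver
    prometheus :9153         # export metrics
}
```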

 

For a more detailed account of how CoreDNS works as a Kubernetes plugin, see https://coredns.io/plugins/kubernetes/

CoreDNS also has its own plugin extension mechanism, which makes further development easy. The list of CoreDNS plugin projects: https://coredns.io/plugins/ (CoreDNS is a small ecosystem in its own right.)
These plugins all strengthen and extend CoreDNS itself, e.g. load balancing, gRPC, etcd (SkyDNS), and so on.

 

I. The CoreDNS binary

1. Install from a binary release
 Downloads: https://github.com/coredns/coredns/releases
 For example: https://github.com/coredns/coredns/releases/download/v1.8.5/coredns_1.8.5_linux_arm64.tgz
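The release asset names follow a fixed pattern, coredns_&lt;version&gt;_&lt;os&gt;_&lt;arch&gt;.tgz, so the download URL can be composed mechanically. A small helper (hypothetical; the function name is mine) under that naming assumption:

```shell
# Compose the GitHub release URL for a given CoreDNS version/platform.
# Assumes the coredns_<ver>_<os>_<arch>.tgz naming convention holds.
coredns_url() {
  ver="$1"; os="$2"; arch="$3"
  echo "https://github.com/coredns/coredns/releases/download/v${ver}/coredns_${ver}_${os}_${arch}.tgz"
}

coredns_url 1.8.5 linux arm64   # produces the example URL quoted above
```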

2. Build from source
Compilation from Source
To compile CoreDNS, we assume you have a working Go setup. See various tutorials if you don't have that already configured.
First, make sure your Go version is 1.17 or higher, since go mod support and other APIs are needed. See here for go mod details.
Then, check out the project and run make to compile the binary:
 [root@k2 src]# git clone https://github.com/coredns/coredns -b v1.8.5  // clone a specific tag, here CoreDNS 1.8.5

[root@k2 src]# git clone https://github.com/coredns/coredns  // latest
[root@k2 src]# cd coredns
[root@k2 coredns]# make

 

II. Configuration (CoreDNS runs as Pods)
1. coredns.yaml (Kubernetes 1.10.13)
[root@k2 dns]# cat coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.1.1  # image name and version
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 172.17.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

2. Check the tar package (download the CoreDNS image tar, then import it with docker load -i ...)

[root@k2 dns]# docker images | grep core
coredns/coredns                  1.1.1               68af89c45ded        4 years ago         46.3MB


Reference:
https://github.com/coredns/deployment/tree/master/kubernetes
Write a deploy.sh script and run it; it substitutes the relevant values into coredns.yaml.
1. coredns.yaml.sed

[root@k2 dns]# vi coredns.yaml.sed
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.8.7
        imagePullPolicy: IfNotPresent  # use the local image, loaded into docker images beforehand
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

Pull the image

[root@k2 tarcoredns]# docker pull coredns/coredns
[root@k2 tarcoredns]# docker images | grep core
coredns/coredns     latest    51e6b70358b2    12 days ago    48.7MB
// To pull a specific version, check the releases at https://github.com/coredns/coredns/releases/

 [root@k2 tarcoredns]# docker pull coredns/coredns:1.8.7
 1.8.7: Pulling from coredns/coredns
 d92bdee79785: Already exists
 224e0372b0f6: Pull complete
 Digest: sha256:58508c172b14716350dc5185baefd78265a703514281d309d1d54aa1b721ad68
 Status: Downloaded newer image for coredns/coredns:1.8.7
// You can then re-tag the image, or export it, so you have your own tar package of that version.

[root@k2 tarcoredns]# docker images | grep core
coredns/coredns                                             1.8.7              51e6b70358b2        12 days ago         48.7MB

2. deploy.sh

[root@k2 dns]# vi deploy.sh
#!/bin/bash
# Deploys CoreDNS to a cluster currently running kube-dns.
REVERSE_CIDRS=${1:-"in-addr.arpa ip6.arpa"}
CLUSTER_DNS_IP=${2:-172.17.0.2}
CLUSTER_DOMAIN=${3:-cluster.local}
YAML_TEMPLATE=${4:-`pwd`/coredns.yaml.sed}
# Substitute every placeholder that actually appears in coredns.yaml.sed
# (CLUSTER_DNS_IP, CLUSTER_DOMAIN, REVERSE_CIDRS).
sed -e "s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g" -e "s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g" -e "s?REVERSE_CIDRS?$REVERSE_CIDRS?g" "$YAML_TEMPLATE" > coredns.yaml
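To see what the sed pass does, here is a self-contained demonstration on a one-line sample of the template (the temporary file path is arbitrary). Note that the kubernetes stanza's REVERSE_CIDRS placeholder must be substituted too, not only CLUSTER_DNS_IP and CLUSTER_DOMAIN:

```shell
# Substitute the coredns.yaml.sed placeholders on a one-line sample.
printf 'kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {\n' > /tmp/corefile.tmpl

CLUSTER_DOMAIN=cluster.local
REVERSE_CIDRS="in-addr.arpa ip6.arpa"

# '?' is used as the sed delimiter where the value may contain '/'
sed -e "s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g" \
    -e "s?REVERSE_CIDRS?$REVERSE_CIDRS?g" /tmp/corefile.tmpl
```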


参考: https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh

#!/bin/bash

# Deploys CoreDNS to a cluster currently running Kube-DNS.

show_help () {
cat << USAGE
usage: $0 [ -r REVERSE-CIDR ] [ -i DNS-IP ] [ -d CLUSTER-DOMAIN ] [ -t YAML-TEMPLATE ]
    -r : Define a reverse zone for the given CIDR. You may specify this option more
         than once to add multiple reverse zones. If no reverse CIDRs are defined,
         then the default is to handle all reverse zones (i.e. in-addr.arpa and ip6.arpa)
    -i : Specify the cluster DNS IP address. If not specified, the IP address of
         the existing "kube-dns" service is used, if present.
    -s : Skips the translation of kube-dns configmap to the corresponding CoreDNS Corefile configuration.
USAGE
exit 0
}

# Simple Defaults
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null && pwd )"
CLUSTER_DOMAIN=cluster.local
YAML_TEMPLATE="$DIR/coredns.yaml.sed"
STUBDOMAINS=""
UPSTREAM=\\/etc\\/resolv\.conf

# Translates the kube-dns ConfigMap to equivalent CoreDNS Configuration.
function translate-kube-dns-configmap {
    kube-dns-upstreamnameserver-to-coredns
    kube-dns-stubdomains-to-coredns
}

function kube-dns-upstreamnameserver-to-coredns {
  up=$(kubectl -n kube-system get configmap kube-dns  -ojsonpath='{.data.upstreamNameservers}' 2> /dev/null | tr -d '[",]')
  if [[ ! -z ${up} ]]; then
    UPSTREAM=${up}
  fi
}

function kube-dns-stubdomains-to-coredns {
  STUBDOMAIN_TEMPLATE='
    SD_DOMAIN:53 {
      errors
      cache 30
      loop
      forward . SD_DESTINATION {
        max_concurrent 1000
      }
    }'

  function dequote {
    str=${1#\"} # delete leading quote
    str=${str%\"} # delete trailing quote
    echo ${str}
  }

  function parse_stub_domains() {
    sd=$1

  # get keys - each key is a domain
  sd_keys=$(echo -n $sd | jq keys[])

  # For each domain ...
  for dom in $sd_keys; do
    dst=$(echo -n $sd | jq '.['$dom'][0]') # get the destination

    dom=$(dequote $dom)
    dst=$(dequote $dst)

    sd_stanza=${STUBDOMAIN_TEMPLATE/SD_DOMAIN/$dom} # replace SD_DOMAIN
    sd_stanza=${sd_stanza/SD_DESTINATION/$dst} # replace SD_DESTINATION
    echo "$sd_stanza"
  done
}

  sd=$(kubectl -n kube-system get configmap kube-dns  -ojsonpath='{.data.stubDomains}' 2> /dev/null)
  STUBDOMAINS=$(parse_stub_domains "$sd")
}


# Get Opts
while getopts "hsr:i:d:t:k:" opt; do
    case "$opt" in
    h)  show_help
        ;;
    s)  SKIP=1
        ;;
    r)  REVERSE_CIDRS="$REVERSE_CIDRS $OPTARG"
        ;;
    i)  CLUSTER_DNS_IP=$OPTARG
        ;;
    d)  CLUSTER_DOMAIN=$OPTARG
        ;;
    t)  YAML_TEMPLATE=$OPTARG
        ;;
    esac
done

# Conditional Defaults
if [[ -z $REVERSE_CIDRS ]]; then
  REVERSE_CIDRS="in-addr.arpa ip6.arpa"
fi
if [[ -z $CLUSTER_DNS_IP ]]; then
  # Default IP to kube-dns IP
  CLUSTER_DNS_IP=$(kubectl get service --namespace kube-system kube-dns -o jsonpath="{.spec.clusterIP}")
  if [ $? -ne 0 ]; then
      >&2 echo "Error! The IP address for DNS service couldn't be determined automatically. Please specify the DNS-IP with the '-i' option."
      exit 2
  fi
fi

if [[ "${SKIP}" -ne 1 ]] ; then
    translate-kube-dns-configmap
fi

orig=$'\n'
replace=$'\\\n'
sed -e "s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g" \
    -e "s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g" \
    -e "s?REVERSE_CIDRS?$REVERSE_CIDRS?g" \
    -e "s@STUBDOMAINS@${STUBDOMAINS//$orig/$replace}@g" \
    -e "s/UPSTREAMNAMESERVER/$UPSTREAM/g" \
    "${YAML_TEMPLATE}"
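The orig/replace pair near the end is a bash trick worth noting: sed replacements are single-line, so every literal newline in the multi-line STUBDOMAINS value is first rewritten as backslash-newline, which sed then treats as a line continuation. A minimal, standalone illustration of the same substitution:

```shell
#!/bin/bash
# Escape newlines as backslash-newline so a multi-line value survives
# being spliced into a sed replacement (the same trick deploy.sh uses).
orig=$'\n'
replace=$'\\\n'

STUBDOMAINS=$'line1\nline2'
echo "${STUBDOMAINS//$orig/$replace}"
```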

 

Install and deploy it with kubectl apply ...

https://coredns.io/plugins/loop#troubleshooting:

A note on CrashLoopBackOff

When a CoreDNS Pod deployed in Kubernetes detects a loop, the CoreDNS Pod will start to “CrashLoopBackOff”. This is because Kubernetes will try to restart the Pod every time CoreDNS detects the loop and exits.
A common cause of forwarding loops in Kubernetes clusters is an interaction with a local DNS cache on the host node (e.g. systemd-resolved). For example, in certain configurations systemd-resolved will put the loopback address 127.0.0.53 as a nameserver into /etc/resolv.conf. Kubernetes (via kubelet) by default will pass this /etc/resolv.conf file to all Pods using the default dnsPolicy rendering them unable to make DNS lookups (this includes CoreDNS Pods). CoreDNS uses this /etc/resolv.conf as a list of upstreams to forward requests to. Since it contains a loopback address, CoreDNS ends up forwarding requests to itself.
There are many ways to work around this issue, some are listed here:

  • Add the following to your kubelet config yaml: resolvConf: (or via command line flag --resolv-conf deprecated in 1.10). Your “real” resolv.conf is the one that contains the actual IPs of your upstream servers, and no local/loopback address. This flag tells kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the “real” resolv.conf, although this can be different depending on your distribution.
  • Disable the local DNS cache on host nodes, and restore /etc/resolv.conf to the original.
  • A quick and dirty fix is to edit your Corefile, replacing forward . /etc/resolv.conf with the IP address of your upstream DNS, for example forward . 8.8.8.8. But this only fixes the issue for CoreDNS; kubelet will continue to forward the invalid resolv.conf to all default dnsPolicy Pods, leaving them unable to resolve DNS.
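The "quick and dirty" fix from the last bullet looks like this in the Corefile (a sketch; 8.8.8.8 stands in for whatever upstream resolver the cluster should actually use):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    forward . 8.8.8.8   # was: forward . /etc/resolv.conf
    cache 30
    loop
}
```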

 

Note:
https://github.com/coredns/deployment provides installation methods for different environments, including debian, docker, and kubernetes.
See: https://github.com/coredns/deployment/tree/master/kubernetes

Description

CoreDNS can run in place of the standard Kube-DNS in Kubernetes. Using the kubernetes plugin, CoreDNS will read zone data from a Kubernetes cluster.

It implements the spec defined for Kubernetes DNS-Based service discovery:

https://github.com/kubernetes/dns/blob/master/docs/specification.md

This passage shows that CoreDNS's discovery mechanism is built on Services.
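Per that specification, a Service named svc in namespace ns resolves at &lt;svc&gt;.&lt;ns&gt;.svc.&lt;cluster-domain&gt;. A throwaway shell helper (mine, not part of any tool) that builds such names:

```shell
# Build the cluster-internal FQDN of a Service, following the
# Kubernetes DNS-based service discovery specification.
svc_fqdn() {
  service="$1"; namespace="$2"; zone="${3:-cluster.local}"
  echo "${service}.${namespace}.svc.${zone}"
}

svc_fqdn kube-dns kube-system   # -> kube-dns.kube-system.svc.cluster.local
```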



After installing with kubectl apply -f coredns.yaml:

[root@k2 dns]# kubectl get ServiceAccount -n kube-system | grep coredns
coredns                   1         3y

[root@k2 dns]# kubectl get ClusterRole | grep 'system:coredns'
system:coredns 3y

[root@k2 dns]# kubectl get ClusterRoleBinding | grep 'system:coredns'
system:coredns 3y

[root@k2 dns]# kubectl get ConfigMap -n kube-system | grep 'coredns'
coredns 1 3y
// image: coredns/coredns:1.1.1
[root@k2 dns]# kubectl get Deployment -n kube-system | grep 'coredns'
coredns 1 1 1 1 3y

[root@k2 dns]# kubectl get svc -n kube-system | grep 'kube-dns'
kube-dns ClusterIP 172.17.0.2 <none> 53/UDP,53/TCP 3y

 

Configure the kube-apiserver with the service IP range that the internal DNS service address comes from

[root@k2 conf]# vi /etc/kubernetes/conf/kubeapiserver

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=10.129.55.112 --bind-address=10.129.55.112"

# The port on the local server to listen on.
KUBE_API_PORT="--secure-port=6443"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://10.129.55.111:2379,https://10.129.55.112:2379,https://10.129.55.113:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=172.17.0.0/16"

# default admission control policies
KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"

# Add your own!
KUBE_API_ARGS=" --anonymous-auth=false \
                --apiserver-count=3 \
                --audit-log-maxage=30 \
                --audit-log-maxbackup=3 \
                --audit-log-maxsize=100 \
                --audit-log-path=/var/log/kube-audit/audit.log \
                --audit-policy-file=/etc/kubernetes/conf/audit-policy.yaml \


Code analysis (CoreDNS is placed under Kubernetes' unified third-party add-on directory, addons)
E:\k8s源码\kubernetes-1.10.13\kubernetes-1.10.13\cluster\addons\dns
As we can see, CoreDNS was made a third-party plugin and placed in the directory above.
I have a translation of the official add-ons page here:
https://www.cnblogs.com/aozhejin/p/16270418.html

References:
https://github.com/kubernetes/dns/blob/master/docs/specification.md
https://github.com/coredns/deployment/tree/master/kubernetes
https://github.com/containernetworking/cni/blob/main/SPEC.md#network-configuration-lists
https://github.com/containernetworking/cni/blob/main/CONVENTIONS.md
// version mapping between Kubernetes and CoreDNS
https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md 
https://kubernetes.io/docs/concepts/cluster-administration/addons/
posted @ 2023-02-15 10:58 jinzi