03 - Deployment Operations


1. Structs

1.1 DeploymentList

Package: "k8s.io/api/apps/v1"

type DeploymentList struct {
    v1.TypeMeta `json:",inline"`
    v1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
    Items           []Deployment `json:"items" protobuf:"bytes,2,rep,name=items"`
}

Each element of Items is a Deployment struct, defined as follows:

1.2 Deployment

Package: "k8s.io/api/apps/v1"

type Deployment struct {
    v1.TypeMeta   `json:",inline"`
    v1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
    Spec              DeploymentSpec   `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
    Status            DeploymentStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}

The member structs are as follows:

1.3 TypeMeta

Package: "k8s.io/apimachinery/pkg/apis/meta/v1"

type TypeMeta struct {
    Kind       string `json:"kind,omitempty" protobuf:"bytes,1,opt,name=kind"`
    APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,2,opt,name=apiVersion"`
}

This corresponds to the following part of the Kubernetes YAML file:

apiVersion: apps/v1
kind: Deployment
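
When building a Deployment object in Go you can fill in TypeMeta explicitly, although the typed clientset used later in this article does not strictly require it. A minimal sketch (using the appsV1 / metaV1 import aliases from the examples below):

	deployment := &appsV1.Deployment{
		TypeMeta: metaV1.TypeMeta{
			Kind:       "Deployment",
			APIVersion: "apps/v1",
		},
		// ObjectMeta and Spec follow, as described in the next sections
	}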

1.4 ObjectMeta(metadata)

Package: "k8s.io/apimachinery/pkg/apis/meta/v1"

type ObjectMeta struct {
    Name                       string               `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`
    GenerateName               string               `json:"generateName,omitempty" protobuf:"bytes,2,opt,name=generateName"`
    Namespace                  string               `json:"namespace,omitempty" protobuf:"bytes,3,opt,name=namespace"`
    SelfLink                   string               `json:"selfLink,omitempty" protobuf:"bytes,4,opt,name=selfLink"`
    UID                        types.UID            `json:"uid,omitempty" protobuf:"bytes,5,opt,name=uid,casttype=k8s.io/kubernetes/pkg/types.UID"`
    ResourceVersion            string               `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`
    Generation                 int64                `json:"generation,omitempty" protobuf:"varint,7,opt,name=generation"`
    CreationTimestamp          Time                 `json:"creationTimestamp,omitempty" protobuf:"bytes,8,opt,name=creationTimestamp"`
    DeletionTimestamp          *Time                `json:"deletionTimestamp,omitempty" protobuf:"bytes,9,opt,name=deletionTimestamp"`
    DeletionGracePeriodSeconds *int64               `json:"deletionGracePeriodSeconds,omitempty" protobuf:"varint,10,opt,name=deletionGracePeriodSeconds"`
    Labels                     map[string]string    `json:"labels,omitempty" protobuf:"bytes,11,rep,name=labels"`
    Annotations                map[string]string    `json:"annotations,omitempty" protobuf:"bytes,12,rep,name=annotations"`
    OwnerReferences            []OwnerReference     `json:"ownerReferences,omitempty" patchStrategy:"merge" patchMergeKey:"uid" protobuf:"bytes,13,rep,name=ownerReferences"`
    Finalizers                 []string             `json:"finalizers,omitempty" patchStrategy:"merge" protobuf:"bytes,14,rep,name=finalizers"`
    ManagedFields              []ManagedFieldsEntry `json:"managedFields,omitempty" protobuf:"bytes,17,rep,name=managedFields"`
}

This corresponds to the following part of the YAML file:

metadata:
  ……

As when creating a deployment from a YAML file, the fields we mainly care about here are Name and Namespace.
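
A minimal sketch (the names are taken from the YAML example in section 1.7; note that in the Create example later the namespace is instead passed to Deployments(namespaceName)):

	ObjectMeta: metaV1.ObjectMeta{
		Name:      "nginx",
		Namespace: "liubei",
	},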

1.5 DeploymentSpec(spec)

Package: "k8s.io/api/apps/v1"

type DeploymentSpec struct {
    Replicas                *int32                `json:"replicas,omitempty" protobuf:"varint,1,opt,name=replicas"`
    Selector                *v1.LabelSelector `json:"selector" protobuf:"bytes,2,opt,name=selector"`
    Template                v1.PodTemplateSpec    `json:"template" protobuf:"bytes,3,opt,name=template"`
    Strategy                DeploymentStrategy    `json:"strategy,omitempty" patchStrategy:"retainKeys" protobuf:"bytes,4,opt,name=strategy"`
    MinReadySeconds         int32                 `json:"minReadySeconds,omitempty" protobuf:"varint,5,opt,name=minReadySeconds"`
    RevisionHistoryLimit    *int32                `json:"revisionHistoryLimit,omitempty" protobuf:"varint,6,opt,name=revisionHistoryLimit"`
    Paused                  bool                  `json:"paused,omitempty" protobuf:"varint,7,opt,name=paused"`
    ProgressDeadlineSeconds *int32                `json:"progressDeadlineSeconds,omitempty" protobuf:"varint,9,opt,name=progressDeadlineSeconds"`
}

This corresponds to the following part of the YAML file:

spec:
  ……

As when creating a deployment from a YAML file, this is the main information we need to pass in.

1) Replicas(spec.replicas)

Replicas is a *int32 (pointer to int32), so to set it we first define an int32 variable and then assign its address to the field, as sketched below.
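
A minimal sketch (deploymentSpec is just an illustrative variable here; the Create example later uses the same pattern with a replicas function parameter):

	replicas := int32(1) // define an int32 variable first
	deploymentSpec := appsV1.DeploymentSpec{
		Replicas: &replicas, // then assign its address to the pointer field
	}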

2)LabelSelector(spec.selector)

Package: "k8s.io/apimachinery/pkg/apis/meta/v1"

type LabelSelector struct {
    MatchLabels      map[string]string          `json:"matchLabels,omitempty" protobuf:"bytes,1,rep,name=matchLabels"`
    MatchExpressions []LabelSelectorRequirement `json:"matchExpressions,omitempty" protobuf:"bytes,2,rep,name=matchExpressions"`
}

As with YAML-based deployments, we mainly use matchLabels to select pods by label:

spec:
  selector:
    matchLabels:
      app: nginx

In code, this looks like the following:

			Selector: &metaV1.LabelSelector{
				MatchLabels: map[string]string{
					"app": "nginx",
				},
			},

3)PodTemplateSpec(spec.template)

Package: "k8s.io/api/core/v1"

type PodTemplateSpec struct {
    v1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
    Spec              PodSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}

This corresponds to the following part of the YAML file; as the name suggests, it defines the pod template.

spec:
  ……
  template:
    ……

4)ObjectMeta (spec.template.metadata)

Package: "k8s.io/apimachinery/pkg/apis/meta/v1"

The ObjectMeta struct here is the same one shown in section 1.4.

It corresponds to the following part of the YAML file:

spec:
  ……
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx

In code, this looks like the following:

				ObjectMeta: metaV1.ObjectMeta{
					Labels: map[string]string{
						"app": deploymentName,
					},
				}

5)Spec(spec.template.spec)

Package: "k8s.io/api/core/v1"

type PodSpec struct {
    Volumes                       []Volume                   `json:"volumes,omitempty" patchStrategy:"merge,retainKeys" patchMergeKey:"name" protobuf:"bytes,1,rep,name=volumes"`
    InitContainers                []Container                `json:"initContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,20,rep,name=initContainers"`
    Containers                    []Container                `json:"containers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,2,rep,name=containers"`
    EphemeralContainers           []EphemeralContainer       `json:"ephemeralContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,34,rep,name=ephemeralContainers"`
    RestartPolicy                 RestartPolicy              `json:"restartPolicy,omitempty" protobuf:"bytes,3,opt,name=restartPolicy,casttype=RestartPolicy"`
    TerminationGracePeriodSeconds *int64                     `json:"terminationGracePeriodSeconds,omitempty" protobuf:"varint,4,opt,name=terminationGracePeriodSeconds"`
    ActiveDeadlineSeconds         *int64                     `json:"activeDeadlineSeconds,omitempty" protobuf:"varint,5,opt,name=activeDeadlineSeconds"`
    DNSPolicy                     DNSPolicy                  `json:"dnsPolicy,omitempty" protobuf:"bytes,6,opt,name=dnsPolicy,casttype=DNSPolicy"`
    NodeSelector                  map[string]string          `json:"nodeSelector,omitempty" protobuf:"bytes,7,rep,name=nodeSelector"`
    ServiceAccountName            string                     `json:"serviceAccountName,omitempty" protobuf:"bytes,8,opt,name=serviceAccountName"`
    DeprecatedServiceAccount      string                     `json:"serviceAccount,omitempty" protobuf:"bytes,9,opt,name=serviceAccount"`
    AutomountServiceAccountToken  *bool                      `json:"automountServiceAccountToken,omitempty" protobuf:"varint,21,opt,name=automountServiceAccountToken"`
    NodeName                      string                     `json:"nodeName,omitempty" protobuf:"bytes,10,opt,name=nodeName"`
    HostNetwork                   bool                       `json:"hostNetwork,omitempty" protobuf:"varint,11,opt,name=hostNetwork"`
    HostPID                       bool                       `json:"hostPID,omitempty" protobuf:"varint,12,opt,name=hostPID"`
    HostIPC                       bool                       `json:"hostIPC,omitempty" protobuf:"varint,13,opt,name=hostIPC"`
    ShareProcessNamespace         *bool                      `json:"shareProcessNamespace,omitempty" protobuf:"varint,27,opt,name=shareProcessNamespace"`
    SecurityContext               *PodSecurityContext        `json:"securityContext,omitempty" protobuf:"bytes,14,opt,name=securityContext"`
    ImagePullSecrets              []LocalObjectReference     `json:"imagePullSecrets,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,15,rep,name=imagePullSecrets"`
    Hostname                      string                     `json:"hostname,omitempty" protobuf:"bytes,16,opt,name=hostname"`
    Subdomain                     string                     `json:"subdomain,omitempty" protobuf:"bytes,17,opt,name=subdomain"`
    Affinity                      *Affinity                  `json:"affinity,omitempty" protobuf:"bytes,18,opt,name=affinity"`
    SchedulerName                 string                     `json:"schedulerName,omitempty" protobuf:"bytes,19,opt,name=schedulerName"`
    Tolerations                   []Toleration               `json:"tolerations,omitempty" protobuf:"bytes,22,opt,name=tolerations"`
    HostAliases                   []HostAlias                `json:"hostAliases,omitempty" patchStrategy:"merge" patchMergeKey:"ip" protobuf:"bytes,23,rep,name=hostAliases"`
    PriorityClassName             string                     `json:"priorityClassName,omitempty" protobuf:"bytes,24,opt,name=priorityClassName"`
    Priority                      *int32                     `json:"priority,omitempty" protobuf:"bytes,25,opt,name=priority"`
    DNSConfig                     *PodDNSConfig              `json:"dnsConfig,omitempty" protobuf:"bytes,26,opt,name=dnsConfig"`
    ReadinessGates                []PodReadinessGate         `json:"readinessGates,omitempty" protobuf:"bytes,28,opt,name=readinessGates"`
    RuntimeClassName              *string                    `json:"runtimeClassName,omitempty" protobuf:"bytes,29,opt,name=runtimeClassName"`
    EnableServiceLinks            *bool                      `json:"enableServiceLinks,omitempty" protobuf:"varint,30,opt,name=enableServiceLinks"`
    PreemptionPolicy              *PreemptionPolicy          `json:"preemptionPolicy,omitempty" protobuf:"bytes,31,opt,name=preemptionPolicy"`
    Overhead                      ResourceList               `json:"overhead,omitempty" protobuf:"bytes,32,opt,name=overhead"`
    TopologySpreadConstraints     []TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty" patchStrategy:"merge" patchMergeKey:"topologyKey" protobuf:"bytes,33,opt,name=topologySpreadConstraints"`
    SetHostnameAsFQDN             *bool                      `json:"setHostnameAsFQDN,omitempty" protobuf:"varint,35,opt,name=setHostnameAsFQDN"`
    OS                            *PodOS                     `json:"os,omitempty" protobuf:"bytes,36,opt,name=os"`
    HostUsers                     *bool                      `json:"hostUsers,omitempty" protobuf:"bytes,37,opt,name=hostUsers"`
}

This corresponds to the following part of the YAML file; it is the spec of the deployment's pod template.

spec:
  ……
  template:
    spec:
      ……

This is usually the part we spend the most time filling in. It contains a lot of fields, so we will not go through them all here; we will explain them in detail when we need them later.
For now, here is a simple example in code:

				Spec: coreV1.PodSpec{
					Containers: []coreV1.Container{
						{
							Name:  deploymentName,
							Image: image,
							Ports: []coreV1.ContainerPort{
								{
									ContainerPort: portNum,
								},
							},
						},
					},
				}

1.6 DeploymentStatus

Package: "k8s.io/api/apps/v1"

type DeploymentStatus struct {
    ObservedGeneration  int64                 `json:"observedGeneration,omitempty" protobuf:"varint,1,opt,name=observedGeneration"`
    Replicas            int32                 `json:"replicas,omitempty" protobuf:"varint,2,opt,name=replicas"`
    UpdatedReplicas     int32                 `json:"updatedReplicas,omitempty" protobuf:"varint,3,opt,name=updatedReplicas"`
    ReadyReplicas       int32                 `json:"readyReplicas,omitempty" protobuf:"varint,7,opt,name=readyReplicas"`
    AvailableReplicas   int32                 `json:"availableReplicas,omitempty" protobuf:"varint,4,opt,name=availableReplicas"`
    UnavailableReplicas int32                 `json:"unavailableReplicas,omitempty" protobuf:"varint,5,opt,name=unavailableReplicas"`
    Conditions          []DeploymentCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,6,rep,name=conditions"`
    CollisionCount      *int32                `json:"collisionCount,omitempty" protobuf:"varint,8,opt,name=collisionCount"`
}

This is the deployment's status. You do not need to pay much attention to it when getting started; a small readiness check is sketched below.
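
If you do want to use the status, a common check is whether a rollout has completed. A minimal sketch (assuming deployment is a *appsV1.Deployment returned by Get, and Spec.Replicas is non-nil, which it is for objects read back from the API server):

	rolledOut := deployment.Status.ObservedGeneration >= deployment.Generation &&
		deployment.Status.UpdatedReplicas == *deployment.Spec.Replicas &&
		deployment.Status.ReadyReplicas == *deployment.Spec.Replicas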

1.7 YAML example for comparison

Below is a deployment from a native Kubernetes cluster; you can use it to map the YAML fields onto the structs above.

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    field.cattle.io/publicEndpoints: '[{"addresses":["10.10.117.53"],"port":30051,"protocol":"TCP","serviceName":"liubei:nginx","allNodes":true}]'
  creationTimestamp: "2022-09-28T06:07:39Z"
  generation: 6
  name: nginx
  namespace: liubei
  resourceVersion: "19656054"
  selfLink: /apis/apps/v1/namespaces/liubei/deployments/nginx
  uid: dcbf8aac-1c23-4be5-943e-c9f2b3e7c767
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: harbocto.xxx.com.cn/public/nginx:1.19.2-alpine
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2022-09-28T06:07:44Z"
    lastUpdateTime: "2022-09-28T06:07:44Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2022-09-28T06:07:39Z"
    lastUpdateTime: "2022-09-28T13:01:17Z"
    message: ReplicaSet "nginx-b6588447f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 6
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

2. List Deployments

Syntax

  • Syntax
func (DeploymentInterface) List(ctx context.Context, opts v1.ListOptions) (*v1.DeploymentList, error)
  • Example
deploymentList,err = clientSet.AppsV1().Deployments(namespaceName).List(context.TODO(), metaV1.ListOptions{})

Full example

  • Define the function
package crowK8S

import (
	"context"
	appsV1 "k8s.io/api/apps/v1"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func GetDeploymentList(clientSet *kubernetes.Clientset,namespaceName string)(deploymentList *appsV1.DeploymentList,err error)  {
	deploymentList,err = clientSet.AppsV1().Deployments(namespaceName).List(context.TODO(), metaV1.ListOptions{})
	if err != nil{
		return nil, err
	}
	return deploymentList, err
}

Note: if namespaceName is an empty string, deployments in all namespaces are listed. You can also filter the result with a label selector, as sketched below.
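
If you only want deployments carrying a particular label, ListOptions also accepts a label selector string. A minimal sketch (app=nginx is just an example value):

	deploymentList, err := clientSet.AppsV1().Deployments(namespaceName).List(context.TODO(), metaV1.ListOptions{
		LabelSelector: "app=nginx", // only deployments whose labels match this selector
	})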

  • Call the function
package main

import (
	"fmt"
	"go-k8s/crowK8S"
)

func main()  {
	clientSet,err := crowK8S.ConnectK8s()
	if err !=nil {
		fmt.Println(err)
	}

    deploymentList,err := crowK8S.GetDeploymentList(clientSet,"kube-system")
	if err != nil {
		fmt.Println(err)
	}
	for _,deployment := range deploymentList.Items{
		fmt.Printf("%+v\n",deployment.Name)
	}
}

3. Get Deployment

Syntax
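
Get follows the same pattern as List; the signature below is client-go's DeploymentInterface method, and the example line matches the function defined in the full example that follows.

  • Syntax
func (DeploymentInterface) Get(ctx context.Context, name string, opts v1.GetOptions) (*v1.Deployment, error)
  • Example
deploymentInfo,err = clientSet.AppsV1().Deployments(namespaceName).Get(context.TODO(), deploymentName,metaV1.GetOptions{})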

Full example

  • Define the function
package crowK8S

import (
	"context"
	appsV1 "k8s.io/api/apps/v1"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func GetDeployment(clientSet *kubernetes.Clientset,namespaceName string,deploymentName string)(deploymentInfo *appsV1.Deployment,err error)  {
	deploymentInfo,err = clientSet.AppsV1().Deployments(namespaceName).Get(context.TODO(), deploymentName,metaV1.GetOptions{})
	if err != nil{
		return nil, err
	}
	return deploymentInfo, err
}
  • Call the function
package main

import (
	"fmt"
	"go-k8s/crowK8S"
)

func main()  {
	clientSet,err := crowK8S.ConnectK8s()
	if err !=nil {
		fmt.Println(err)
	}

    deploymentInfo,err := crowK8S.GetDeployment(clientSet,"kube-system","coredns")
	if err != nil {
		fmt.Println(err)
	}
    fmt.Printf("%+v\n",deploymentInfo)

}
  • Output

Note: the output is printed as one long Go struct dump; below is the result after I ran it through a formatter:

{
	ObjectMeta: {
		coredns kube - system / apis / apps / v1 / namespaces / kube - system / deployments / coredns e6bfc631 - fe3a - 4863 - b202 - a96188c4c3f4 5154836 1 2022 - 07 - 12 16: 41: 38 + 0800 CST < nil > < nil > map[k8s - app: kube - dns] map[deployment.kubernetes.io / revision: 1][][][{
			kubeadm Update apps / v1 2022 - 07 - 12 16: 41: 38 + 0800 CST FieldsV1 {
				"f:metadata": {
					"f:labels": {
						".": {},
						"f:k8s-app": {}
					}
				},
				"f:spec": {
					"f:progressDeadlineSeconds": {},
					"f:replicas": {},
					"f:revisionHistoryLimit": {},
					"f:selector": {},
					"f:strategy": {
						"f:rollingUpdate": {
							".": {},
							"f:maxSurge": {},
							"f:maxUnavailable": {}
						},
						"f:type": {}
					},
					"f:template": {
						"f:metadata": {
							"f:labels": {
								".": {},
								"f:k8s-app": {}
							}
						},
						"f:spec": {
							"f:containers": {
								"k:{\"name\":\"coredns\"}": {
									".": {},
									"f:args": {},
									"f:image": {},
									"f:imagePullPolicy": {},
									"f:livenessProbe": {
										".": {},
										"f:failureThreshold": {},
										"f:httpGet": {
											".": {},
											"f:path": {},
											"f:port": {},
											"f:scheme": {}
										},
										"f:initialDelaySeconds": {},
										"f:periodSeconds": {},
										"f:successThreshold": {},
										"f:timeoutSeconds": {}
									},
									"f:name": {},
									"f:ports": {
										".": {},
										"k:{\"containerPort\":53,\"protocol\":\"TCP\"}": {
											".": {},
											"f:containerPort": {},
											"f:name": {},
											"f:protocol": {}
										},
										"k:{\"containerPort\":53,\"protocol\":\"UDP\"}": {
											".": {},
											"f:containerPort": {},
											"f:name": {},
											"f:protocol": {}
										},
										"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}": {
											".": {},
											"f:containerPort": {},
											"f:name": {},
											"f:protocol": {}
										}
									},
									"f:readinessProbe": {
										".": {},
										"f:failureThreshold": {},
										"f:httpGet": {
											".": {},
											"f:path": {},
											"f:port": {},
											"f:scheme": {}
										},
										"f:periodSeconds": {},
										"f:successThreshold": {},
										"f:timeoutSeconds": {}
									},
									"f:resources": {
										".": {},
										"f:limits": {
											".": {},
											"f:memory": {}
										},
										"f:requests": {
											".": {},
											"f:cpu": {},
											"f:memory": {}
										}
									},
									"f:securityContext": {
										".": {},
										"f:allowPrivilegeEscalation": {},
										"f:capabilities": {
											".": {},
											"f:add": {},
											"f:drop": {}
										},
										"f:readOnlyRootFilesystem": {}
									},
									"f:terminationMessagePath": {},
									"f:terminationMessagePolicy": {},
									"f:volumeMounts": {
										".": {},
										"k:{\"mountPath\":\"/etc/coredns\"}": {
											".": {},
											"f:mountPath": {},
											"f:name": {},
											"f:readOnly": {}
										}
									}
								}
							},
							"f:dnsPolicy": {},
							"f:nodeSelector": {
								".": {},
								"f:kubernetes.io/os": {}
							},
							"f:priorityClassName": {},
							"f:restartPolicy": {},
							"f:schedulerName": {},
							"f:securityContext": {},
							"f:serviceAccount": {},
							"f:serviceAccountName": {},
							"f:terminationGracePeriodSeconds": {},
							"f:tolerations": {},
							"f:volumes": {
								".": {},
								"k:{\"name\":\"config-volume\"}": {
									".": {},
									"f:configMap": {
										".": {},
										"f:defaultMode": {},
										"f:items": {},
										"f:name": {}
									},
									"f:name": {}
								}
							}
						}
					}
				}
			}
		} {
			kube - controller - manager Update apps / v1 2022 - 08 - 05 16: 36: 43 + 0800 CST FieldsV1 {
				"f:metadata": {
					"f:annotations": {
						".": {},
						"f:deployment.kubernetes.io/revision": {}
					}
				},
				"f:status": {
					"f:availableReplicas": {},
					"f:conditions": {
						".": {},
						"k:{\"type\":\"Available\"}": {
							".": {},
							"f:lastTransitionTime": {},
							"f:lastUpdateTime": {},
							"f:message": {},
							"f:reason": {},
							"f:status": {},
							"f:type": {}
						},
						"k:{\"type\":\"Progressing\"}": {
							".": {},
							"f:lastTransitionTime": {},
							"f:lastUpdateTime": {},
							"f:message": {},
							"f:reason": {},
							"f:status": {},
							"f:type": {}
						}
					},
					"f:observedGeneration": {},
					"f:readyReplicas": {},
					"f:replicas": {},
					"f:updatedReplicas": {}
				}
			}
		}]
	},
	Spec: DeploymentSpec {
		Replicas: * 2,
		Selector: & v1.LabelSelector {
			MatchLabels: map[string] string {
				k8s - app: kube - dns,
			},
			MatchExpressions: [] LabelSelectorRequirement {},
		},
		Template: {
			{
				0 0001 - 01 - 01 00: 00: 00 + 0000 UTC < nil > < nil > map[k8s - app: kube - dns] map[][][][]
			} {
				[{
					config - volume {
						nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource {
							LocalObjectReference: LocalObjectReference {
								Name: coredns,
							},
							Items: [] KeyToPath {
								KeyToPath {
									Key: Corefile,
									Path: Corefile,
									Mode: nil,
								},
							},
							DefaultMode: * 420,
							Optional: nil,
						}
						nil nil nil nil nil nil nil nil nil nil
					}
				}][][{
					coredns registry.aliyuncs.com / google_containers / coredns: v1 .8 .0[][-conf / etc / coredns / Corefile][{
						dns 0 53 UDP
					} {
						dns - tcp 0 53 TCP
					} {
						metrics 0 9153 TCP
					}][][] {
						map[memory: {
							{
								178257920 0
							} { < nil >
							}
							170 Mi BinarySI
						}] map[cpu: {
								{
									100 - 3
								} { < nil >
								}
								100 m DecimalSI
							}
							memory: {
								{
									73400320 0
								} { < nil >
								}
								70 Mi BinarySI
							}]
					}[{
						config - volume true / etc / coredns < nil >
					}][] & Probe {
						ProbeHandler: ProbeHandler {
							Exec: nil,
							HTTPGet: & HTTPGetAction {
								Path: /health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,} &Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,
								Port: {
									0 8181
								},
								Host: ,
								Scheme: HTTP,
								HTTPHeaders: [] HTTPHeader {},
							},
							TCPSocket: nil,
							GRPC: nil,
						},
						InitialDelaySeconds: 0,
						TimeoutSeconds: 1,
						PeriodSeconds: 10,
						SuccessThreshold: 1,
						FailureThreshold: 3,
						TerminationGracePeriodSeconds: nil,
					}
					nil nil / dev / termination - log File IfNotPresent & SecurityContext {
						Capabilities: & Capabilities {
							Add: [NET_BIND_SERVICE],
							Drop: [all],
						},
						Privileged: nil,
						SELinuxOptions: nil,
						RunAsUser: nil,
						RunAsNonRoot: nil,
						ReadOnlyRootFilesystem: * true,
						AllowPrivilegeEscalation: * false,
						RunAsGroup: nil,
						ProcMount: nil,
						WindowsOptions: nil,
						SeccompProfile: nil,
					}
					false false false
				}][] Always 0xc000517f88 < nil > Default map[kubernetes.io / os: linux] coredns coredns < nil > false false false < nil > & PodSecurityContext {
					SELinuxOptions: nil,
					RunAsUser: nil,
					RunAsNonRoot: nil,
					SupplementalGroups: [],
					FSGroup: nil,
					RunAsGroup: nil,
					Sysctls: [] Sysctl {},
					WindowsOptions: nil,
					FSGroupChangePolicy: nil,
					SeccompProfile: nil,
				}[] nil
				default -scheduler[{
					CriticalAddonsOnly Exists < nil >
				} {
					node - role.kubernetes.io / master NoSchedule < nil >
				} {
					node - role.kubernetes.io / control - plane NoSchedule < nil >
				}][] system - cluster - critical < nil > nil[] < nil > < nil > < nil > map[][] < nil > nil < nil >
			}
		},
		Strategy: DeploymentStrategy {
			Type: RollingUpdate,
			RollingUpdate: & RollingUpdateDeployment {
				MaxUnavailable: 1,
				MaxSurge: 25 % ,
			},
		},
		MinReadySeconds: 0,
		RevisionHistoryLimit: * 10,
		Paused: false,
		ProgressDeadlineSeconds: * 600,
	},
	Status: DeploymentStatus {
		ObservedGeneration: 1,
		Replicas: 2,
		UpdatedReplicas: 2,
		AvailableReplicas: 2,
		UnavailableReplicas: 0,
		Conditions: [] DeploymentCondition {
			DeploymentCondition {
				Type: Progressing,
				Status: True,
				Reason: NewReplicaSetAvailable,
				Message: ReplicaSet "coredns-59d64cd4d4"
				has successfully progressed.,
				LastUpdateTime: 2022 - 07 - 12 16: 44: 35 + 0800 CST,
				LastTransitionTime: 2022 - 07 - 12 16: 41: 49 + 0800 CST,
			}, DeploymentCondition {
				Type: Available,
				Status: True,
				Reason: MinimumReplicasAvailable,
				Message: Deployment has minimum availability.,
				LastUpdateTime: 2022 - 08 - 05 16: 36: 39 + 0800 CST,
				LastTransitionTime: 2022 - 08 - 05 16: 36: 39 + 0800 CST,
			},
		},
		ReadyReplicas: 2,
		CollisionCount: nil,
	},
}

4. Create Deployment

Syntax example

deployment,err := clientset.AppsV1().Deployments(namespace).Create(context.TODO(),deployment,metaV1.CreateOptions{})

Full example

  • YAML file for comparison

We take the following YAML file as the example and write code that reproduces it:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: liubei
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbocto.xxx.com.cn/public/nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80

  • Define the function
package crowK8S

import (
	"context"
	appsV1 "k8s.io/api/apps/v1"
	coreV1 "k8s.io/api/core/v1"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)


func CreateSimpleDeployment(clientSet *kubernetes.Clientset,namespaceName string,deploymentName string,image string,portNum int32,replicas int32)(deploymentInfo *appsV1.Deployment,err error)  {

	namespace := namespaceName
	// This struct has exactly the same shape as the YAML file used to create a deployment in native k8s; just follow the YAML structure.
	deployment := &appsV1.Deployment{
		ObjectMeta: metaV1.ObjectMeta{
			Name: deploymentName,
		},
		Spec: appsV1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metaV1.LabelSelector{
				MatchLabels: map[string]string{
					"app":deploymentName,
				},
			},
			Template: coreV1.PodTemplateSpec{
				ObjectMeta: metaV1.ObjectMeta{
					Labels: map[string]string{
						"app": deploymentName,
					},
				},
				Spec: coreV1.PodSpec{
					Containers: []coreV1.Container{
						{
							Name:  deploymentName,
							Image: image,
							Ports: []coreV1.ContainerPort{
								{
									ContainerPort: portNum,
								},
							},
						},
					},
				},
			},
		},
	}
	deploymentInfo,err = clientSet.AppsV1().Deployments(namespace).Create(context.TODO(),deployment,metaV1.CreateOptions{})
    if err != nil {
		return deploymentInfo,err
	}
	return deploymentInfo,nil
}
  • Call the function
package main

import (
	"fmt"
	"go-k8s/crowK8S"
)

func main()  {
	clientSet,err := crowK8S.ConnectK8s()
	if err !=nil {
		fmt.Println(err)
	}

    deploymentInfo,err := crowK8S.CreateSimpleDeployment(clientSet,"liubei","nginx","harbocto.xxx.com.cn/public/nginx",80,1)
	if err != nil {
		fmt.Println(err)
	}
    fmt.Printf("%+v\n",deploymentInfo)
}
  • Output
&Deployment{ObjectMeta:{nginx  liubei /apis/apps/v1/namespaces/liubei/deployments/nginx dcbf8aac-1c23-4be5-943e-c9f2b3e7c767 19114656 1 2022-09-28 14:07:39 +0800 CST <nil> <nil> map[] map[] [] [] [{___6go_build_main_go.exe Update apps/v1 2022-09-28 14:07:39 +0800 CST FieldsV1 {"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: nginx,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:nginx] map[] [] [] []} {[] [] [{nginx harbocto.xxx.com.cn/public/nginx [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File Always nil false false false}] [] Always 0xc000560040 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

5. PUT Deployment

Syntax

  • Syntax
func (DeploymentInterface) Update(ctx context.Context, deployment *v1.Deployment, opts v1.UpdateOptions) (*v1.Deployment, error)
  • Example
deploymentInfo,err = clientSet.AppsV1().Deployments(namespaceName).Update(context.TODO(),deployment,metaV1.UpdateOptions{})

Full example (changing the image)

  • Define the function
package crowK8S

import (
	"context"
	appsV1 "k8s.io/api/apps/v1"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ApplyDeploymentByImage(clientSet *kubernetes.Clientset,namespaceName string,deploymentName string,image string)(deploymentInfo *appsV1.Deployment,err error)  {

	deployment,err := clientSet.AppsV1().Deployments(namespaceName).Get(context.TODO(),deploymentName,metaV1.GetOptions{})
	if err !=nil {
		return deploymentInfo,err
	}

	//deployment.Spec.Replicas = &replicas
	deployment.Spec.Template.Spec.Containers[0].Image = image
	deploymentInfo,err = clientSet.AppsV1().Deployments(namespaceName).Update(context.TODO(),deployment,metaV1.UpdateOptions{})
	if err !=nil {
		return deploymentInfo,err
	}
	return deploymentInfo,nil
}
  • Call the function
package main

import (
	"fmt"
	"go-k8s/crowK8S"
)

func main()  {
	clientSet,err := crowK8S.ConnectK8s()
	if err !=nil {
		fmt.Println(err)
	}

    deploymentInfo,err := crowK8S.ApplyDeploymentByImage(clientSet,"liubei","nginx","harbocto.xxx.com.cn/public/nginx:1.19.2-alpine")
	if err != nil {
		fmt.Println(err)
	}
    fmt.Printf("%+v\n",deploymentInfo)

}
  • Output
&Deployment{ObjectMeta:{nginx  liubei /apis/apps/v1/namespaces/liubei/deployments/nginx dcbf8aac-1c23-4be5-943e-c9f2b3e7c767 19190799 4 2022-09-28 14:07:39 +0800 CST <nil> <nil> map[] map[deployment.kubernetes.io/revision:3] [] [] [{___6go_build_main_go.exe Update apps/v1 2022-09-28 14:07:39 +0800 CST FieldsV1 {"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:app":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-09-28 21:00:18 +0800 CST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: nginx,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:nginx] map[] [] [] []} {[] [] [{nginx harbocto.boe.com.cn/public/nginx:1.19.2-alpine [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File Always nil false false false}] [] Always 0xc00043a180 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-09-28 14:07:44 +0800 CST,LastTransitionTime:2022-09-28 14:07:44 +0800 CST,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "nginx-86d45dbbf4" has successfully progressed.,LastUpdateTime:2022-09-28 21:00:18 +0800 CST,LastTransitionTime:2022-09-28 14:07:39 +0800 CST,},},ReadyReplicas:1,CollisionCount:nil,},}

Full example (changing the replica count)

  • Define the function
func ApplyDeploymentByReplicas(clientSet *kubernetes.Clientset,namespaceName string,deploymentName string,replicas int32)(deploymentInfo *appsV1.Deployment,err error)  {

	deployment,err := clientSet.AppsV1().Deployments(namespaceName).Get(context.TODO(),deploymentName,metaV1.GetOptions{})
	if err !=nil {
		return deploymentInfo,err
	}

	deployment.Spec.Replicas = &replicas
	deploymentInfo,err = clientSet.AppsV1().Deployments(namespaceName).Update(context.TODO(),deployment,metaV1.UpdateOptions{})
	if err !=nil {
		return deploymentInfo,err
	}
	//fmt.Println(err,deployment)
	return deploymentInfo,nil
}
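
Update can fail with a Conflict error if someone else modified the deployment between our Get and Update. A minimal sketch of handling this with client-go's retry helper (ScaleDeploymentWithRetry is a hypothetical name, not part of the functions above):

package crowK8S

import (
	"context"

	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// ScaleDeploymentWithRetry re-reads the deployment and retries the update
// whenever the API server reports a conflict.
func ScaleDeploymentWithRetry(clientSet *kubernetes.Clientset, namespaceName string, deploymentName string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Get the latest version on every attempt.
		deployment, err := clientSet.AppsV1().Deployments(namespaceName).Get(context.TODO(), deploymentName, metaV1.GetOptions{})
		if err != nil {
			return err
		}
		deployment.Spec.Replicas = &replicas
		_, err = clientSet.AppsV1().Deployments(namespaceName).Update(context.TODO(), deployment, metaV1.UpdateOptions{})
		return err
	})
}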

6. Delete Deployment

Syntax

  • Syntax
func (DeploymentInterface) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
  • Example
err = clientSet.AppsV1().Deployments(namespaceName).Delete(context.TODO(),deploymentName,metaV1.DeleteOptions{})
  • metaV1.DeleteOptions
type DeleteOptions struct {
    TypeMeta           `json:",inline"`
    GracePeriodSeconds *int64               `json:"gracePeriodSeconds,omitempty" protobuf:"varint,1,opt,name=gracePeriodSeconds"`
    Preconditions      *Preconditions       `json:"preconditions,omitempty" protobuf:"bytes,2,opt,name=preconditions"`
    OrphanDependents   *bool                `json:"orphanDependents,omitempty" protobuf:"varint,3,opt,name=orphanDependents"`
    PropagationPolicy  *DeletionPropagation `json:"propagationPolicy,omitempty" protobuf:"varint,4,opt,name=propagationPolicy"`
    DryRun             []string             `json:"dryRun,omitempty" protobuf:"bytes,5,rep,name=dryRun"`
}

GracePeriodSeconds: the duration in seconds before the object should be deleted. The value must be a non-negative integer; zero means delete immediately. If the value is nil, the default grace period for the resource type is used.

Preconditions: conditions that must be satisfied before the operation (update, delete, etc.) is carried out. If they cannot be met, a 409 (Conflict) status is returned.

preconditions.resourceVersion (string): the target resourceVersion.
preconditions.uid (string): the target UID.

PropagationPolicy: whether and how garbage collection will be performed.

Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in metadata.finalizers and the resource-specific default policy. Acceptable values are: Orphan - orphan the dependents; Background - let the garbage collector delete the dependents in the background; Foreground - a cascading policy that deletes all dependents in the foreground.

DryRun: when present, indicates that the modifications should not be persisted.

An invalid or unrecognized dryRun directive results in an error response, and the request is not processed further. The only valid value is: All - all dry-run stages are processed.
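
As a minimal sketch (not from the original functions), here is a Delete call that sets a grace period and uses foreground cascading, so the ReplicaSet and Pods are removed before the Deployment object itself disappears; the namespace and name reuse the earlier nginx example:

	gracePeriod := int64(30)
	propagation := metaV1.DeletePropagationForeground
	err = clientSet.AppsV1().Deployments("liubei").Delete(context.TODO(), "nginx", metaV1.DeleteOptions{
		GracePeriodSeconds: &gracePeriod,
		PropagationPolicy:  &propagation,
	})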

Full example

  • Define the function
package crowK8S

import (
	"context"
	metaV1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func DeleteDeployment(clientSet *kubernetes.Clientset,namespaceName string,deploymentName string)(err error)  {
	err = clientSet.AppsV1().Deployments(namespaceName).Delete(context.TODO(),deploymentName,metaV1.DeleteOptions{})
	if err != nil {
		return err
	}
	return nil
}
  • Call the function

package main

import (
	"fmt"
	"go-k8s/crowK8S"
)

func main()  {
	clientSet,err := crowK8S.ConnectK8s()
	if err !=nil {
		fmt.Println(err)
	}

	err = crowK8S.DeleteDeployment(clientSet,"liubei","nginx")
    if err != nil {
		fmt.Println(err)
	}else {
		fmt.Println("deleted successfully")
	}
}
