Kubernetes Applications

1. Overview

An application delivers a complete piece of business functionality to its users and is made up of one or more components, each with a specific role. In general, depending on what the application does and how it communicates with the outside world, it consists of one or more Kubernetes workloads (such as Deployments, StatefulSets, and DaemonSets), Services, CRDs, and other resource types.

The Application resource type is a CRD defined by the open-source application project under the Kubernetes Special Interest Groups organization (kubernetes-sigs). The project's current stable release is v0.8.3 (published June 10, 2020). The project is fairly small: it contains only the Application CRD and its controller, its functionality is essentially complete, and the master branch has not been updated for 15 months. To use the Application resource type in a Kubernetes cluster, you must first install this CRD into the cluster and run the controller defined in the project's application_controller.go.

The application project mainly provides the following features:

  • An application maintains a cascading relationship with its components: deleting the application cascades the deletion to all of its components (see the sketch after this list)
  • Application-level health checks
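
As a quick illustration of the cascading-deletion point, the sketch below (not part of the upstream project) deletes an Application through the client-go dynamic client; because the controller stamps an ownerReference to the Application onto every component, the Kubernetes garbage collector then removes the components as well. The namespace and name come from the example later in this article, and a recent client-go (one where the dynamic client takes a context) is assumed; `kubectl delete application test-app -n zmc-test` has the same effect.

// A minimal sketch (not from the upstream project): deleting an Application
// via the dynamic client. The components are then garbage-collected through
// their ownerReferences (default background cascading deletion).
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	appGVR := schema.GroupVersionResource{
		Group:    "app.k8s.io",
		Version:  "v1beta1",
		Resource: "applications",
	}
	// Delete the Application; its Service/Deployment components are then
	// removed by the garbage collector via their ownerReferences.
	if err := dyn.Resource(appGVR).Namespace("zmc-test").
		Delete(context.TODO(), "test-app", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}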

2. The applications.app.k8s.io Data Structure

The data structure of the application resource type is defined in application_types.go.
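
The full type definitions are not reproduced here, but the fields exercised by the controller later in this article can be sketched roughly as follows. This is a partial, illustrative reconstruction: field and JSON tag names are inferred from how the controller code and the example output use them, and the authoritative definitions in application_types.go contain additional fields (for example the descriptor referenced by the CRD's printer columns).

// Partial, illustrative sketch only; see application_types.go in the
// repository for the real definitions.
package v1beta1 // real package: sigs.k8s.io/application/api/v1beta1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type ApplicationSpec struct {
	// Resource GroupKinds that may appear as components of this application.
	ComponentGroupKinds []metav1.GroupKind `json:"componentKinds,omitempty"`
	// Label selector used to find the component resource instances.
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
	// When true, the controller stamps an ownerReference onto every component.
	AddOwnerRef bool `json:"addOwnerRef,omitempty"`
	// ... descriptor and other fields omitted ...
}

type ApplicationStatus struct {
	ObservedGeneration int64       `json:"observedGeneration,omitempty"`
	Conditions         []Condition `json:"conditions,omitempty"`
	ComponentsReady    string      `json:"componentsReady,omitempty"`
	ComponentList      `json:",inline"`
}

type ComponentList struct {
	Objects []ObjectStatus `json:"components,omitempty"`
}

type ObjectStatus struct {
	Group  string `json:"group,omitempty"`
	Kind   string `json:"kind,omitempty"`
	Name   string `json:"name,omitempty"`
	Link   string `json:"link,omitempty"`
	Status string `json:"status,omitempty"`
}

type ConditionType string

const (
	Ready ConditionType = "Ready"
	Error ConditionType = "Error"
)

type Condition struct {
	Type               ConditionType          `json:"type"`
	Status             corev1.ConditionStatus `json:"status"`
	LastUpdateTime     metav1.Time            `json:"lastUpdateTime,omitempty"`
	LastTransitionTime metav1.Time            `json:"lastTransitionTime,omitempty"`
	Reason             string                 `json:"reason,omitempty"`
	Message            string                 `json:"message,omitempty"`
}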

The corresponding CRD manifest is app.k8s.io_applications.yaml. From it you can read the resource's apiGroups, apiVersions, and resources, as well as its scope; note that the scope is Namespaced.

# Copyright 2020 The Kubernetes Authors.
# SPDX-License-Identifier: Apache-2.0
 
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    api-approved.kubernetes.io: https://github.com/kubernetes-sigs/application/pull/2
    controller-gen.kubebuilder.io/version: v0.4.0
  creationTimestamp: null
  name: applications.app.k8s.io
spec:
  group: app.k8s.io
  names:
    categories:
    - all
    kind: Application
    listKind: ApplicationList
    plural: applications
    shortNames:
    - app
    singular: application
  scope: Namespaced
  versions:
  - additionalPrinterColumns:
    - description: The type of the application
      jsonPath: .spec.descriptor.type
      name: Type
      type: string
    - description: The creation date
      jsonPath: .spec.descriptor.version
      name: Version
      type: string
    - description: The application object owns the matched resources
      jsonPath: .spec.addOwnerRef
      name: Owner
      type: boolean
    - description: Numbers of components ready
      jsonPath: .status.componentsReady
      name: Ready
      type: string
    - description: The creation date
      jsonPath: .metadata.creationTimestamp
      name: Age
      type: date
    name: v1beta1
    schema:
    ........
kubectl apply -f app.k8s.io_applications.yaml

3. A Worked Example of applications.app.k8s.io

3.1 Creating an Application

Create an application named test-app in the zmc-test namespace. Pay attention to the app.kubernetes.io/version and app.kubernetes.io/name labels in the application's metadata: they are especially important for the Application resource type, because the association between an application and its components is maintained entirely through these two labels. If the addOwnerRef field is true, the application maintains a cascading-deletion relationship with all of its components. The componentKinds field restricts the application's components to the resource types listed there.

{
    "apiVersion": "app.k8s.io/v1beta1",
    "kind": "Application",
    "metadata": {
        "name": "test-app",
        "namespace": "zmc-test",
        "labels": {   //通过判断当前namespace下所有属于componentKinds的资源实例是否包含以下两个标签与应用维护关联关系
            "app.kubernetes.io/version": "v1",
            "app.kubernetes.io/name": "test-app"
        }
    },
    "spec": {
        "selector": {
            "matchLabels": {
                "app.kubernetes.io/version": "v1",
                "app.kubernetes.io/name": "test-app"
            }
        },
        "addOwnerRef": true,   //维护和其所有组件的级联删除关系
        "componentKinds": [    //当前应用只能包含如下资源类型
            {
                "group": "",
                "kind": "Service"
            },
            {
                "group": "apps",
                "kind": "Deployment"
            },
            {
                "group": "apps",
                "kind": "StatefulSet"
            },
            {
                "group": "extensions",
                "kind": "Ingress"
            },
            {
                "group": "servicemesh.zmc.io",
                "kind": "Strategy"
            },
            {
                "group": "servicemesh.zmc.io",
                "kind": "ServicePolicy"
            }
        ]
    }
}

3.2 Creating the Application's Components

Create a Service and a Deployment instance in zmc-test, each carrying the following labels in its metadata. The creation itself is straightforward and is omitted here.

"app.kubernetes.io/version": "v1",
"app.kubernetes.io/name": "test-app"

3.3 How application_controller.go Maintains the Relationship Between an Application and Its Components

First, you need to run the application project in the Kubernetes cluster, or add application_controller.go to an existing controller-manager project; the details of running it are omitted here (a minimal wiring sketch follows the source listing below). Let's walk through the source of application_controller.go, which depends on condition.go and status.go.

 application_controller.go

Main logic:

1. Maintain the cascading relationship between the current application and its components
2. Assemble the current application's status (the status of all components, the number of ready components, and the application's conditions)

Note: below, "components" always refers to the components of the current application. For the example application test-app there are two: one Deployment instance and one Service instance.

// Copyright 2020 The Kubernetes Authors.
// SPDX-License-Identifier: Apache-2.0
 
package controllers
 
import (
    "context"
    "fmt"
 
    "github.com/go-logr/logr"
    "k8s.io/apimachinery/pkg/api/equality"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/apimachinery/pkg/types"
    utilerrors "k8s.io/apimachinery/pkg/util/errors"
    "k8s.io/client-go/util/retry"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
 
    appv1beta1 "sigs.k8s.io/application/api/v1beta1"
)
 
const (
    loggerCtxKey = "logger"
)
 
// ApplicationReconciler reconciles a Application object
type ApplicationReconciler struct {
    client.Client
    Mapper meta.RESTMapper
    Log    logr.Logger
    Scheme *runtime.Scheme
}
 
// +kubebuilder:rbac:groups=app.k8s.io,resources=applications,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=app.k8s.io,resources=applications/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=*,resources=*,verbs=list;get;update;patch;watch
 
func (r *ApplicationReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
    rootCtx := context.Background()
    logger := r.Log.WithValues("application", req.NamespacedName)
    ctx := context.WithValue(rootCtx, loggerCtxKey, logger)
 
    var app appv1beta1.Application
    err := r.Get(ctx, req.NamespacedName, &app)
    if err != nil {
        if apierrors.IsNotFound(err) {
            return ctrl.Result{}, nil
        }
        return ctrl.Result{}, err
    }
 
    // Application is in the process of being deleted, so no need to do anything.
    if app.DeletionTimestamp != nil {
        return ctrl.Result{}, nil
    }
 
    // Update the application's components (maintain the cascading relationship on each component resource instance)
    resources, errs := r.updateComponents(ctx, &app)
    // Assemble the application's status (number of ready components, status of every component, application conditions)
    newApplicationStatus := r.getNewApplicationStatus(ctx, &app, resources, &errs)
 
    newApplicationStatus.ObservedGeneration = app.Generation
    if equality.Semantic.DeepEqual(newApplicationStatus, &app.Status) {
        return ctrl.Result{}, nil
    }
    // Update the application's status
    err = r.updateApplicationStatus(ctx, req.NamespacedName, newApplicationStatus)
    return ctrl.Result{}, err
}
 
// updateComponents maintains the cascading relationship on the current application's component resource instances
func (r *ApplicationReconciler) updateComponents(ctx context.Context, app *appv1beta1.Application) ([]*unstructured.Unstructured, []error) {
    var errs []error
    // Fetch the component resource instances that match the selector
    resources := r.fetchComponentListResources(ctx, app.Spec.ComponentGroupKinds, app.Spec.Selector, app.Namespace, &errs)
 
    if app.Spec.AddOwnerRef {
        ownerRef := metav1.NewControllerRef(app, appv1beta1.GroupVersion.WithKind("Application"))
        *ownerRef.Controller = false
        if err := r.setOwnerRefForResources(ctx, *ownerRef, resources); err != nil {
            errs = append(errs, err)
        }
    }
    return resources, errs
}
 
// getNewApplicationStatus assembles the application's status (status of every component, number of ready components, application conditions)
func (r *ApplicationReconciler) getNewApplicationStatus(ctx context.Context, app *appv1beta1.Application, resources []*unstructured.Unstructured, errList *[]error) *appv1beta1.ApplicationStatus {
    // Compute the status of each of the application's component resource instances
    objectStatuses := r.objectStatuses(ctx, resources, errList)
    errs := utilerrors.NewAggregate(*errList)
 
    // Whether the application as a whole is ready, and how many components are ready
    aggReady, countReady := aggregateReady(objectStatuses)
 
    newApplicationStatus := app.Status.DeepCopy()
    newApplicationStatus.ComponentList = appv1beta1.ComponentList{
        Objects: objectStatuses,
    }
    newApplicationStatus.ComponentsReady = fmt.Sprintf("%d/%d", countReady, len(objectStatuses))
    if errs != nil {
        setReadyUnknownCondition(newApplicationStatus, "ComponentsReadyUnknown", "failed to aggregate all components' statuses, check the Error condition for details")
    } else if aggReady {
        setReadyCondition(newApplicationStatus, "ComponentsReady", "all components ready")
    } else {
        setNotReadyCondition(newApplicationStatus, "ComponentsNotReady", fmt.Sprintf("%d components not ready", len(objectStatuses)-countReady))
    }
 
    if errs != nil {
        setErrorCondition(newApplicationStatus, "ErrorSeen", errs.Error())
    } else {
        clearErrorCondition(newApplicationStatus)
    }
 
    return newApplicationStatus
}
 
// fetchComponentListResources lists component resource instances matching the selector for each declared GroupKind
func (r *ApplicationReconciler) fetchComponentListResources(ctx context.Context, groupKinds []metav1.GroupKind, selector *metav1.LabelSelector, namespace string, errs *[]error) []*unstructured.Unstructured {
    logger := getLoggerOrDie(ctx)
    var resources []*unstructured.Unstructured
 
    if selector == nil {
        logger.Info("No selector is specified")
        return resources
    }
 
    for _, gk := range groupKinds {
        mapping, err := r.Mapper.RESTMapping(schema.GroupKind{
            Group: appv1beta1.StripVersion(gk.Group),
            Kind:  gk.Kind,
        })
        if err != nil {
            logger.Info("NoMappingForGK", "gk", gk.String())
            continue
        }
 
        list := &unstructured.UnstructuredList{}
        list.SetGroupVersionKind(mapping.GroupVersionKind)
        if err = r.Client.List(ctx, list, client.InNamespace(namespace), client.MatchingLabels(selector.MatchLabels)); err != nil {
            logger.Error(err, "unable to list resources for GVK", "gvk", mapping.GroupVersionKind)
            *errs = append(*errs, err)
            continue
        }
 
        for _, u := range list.Items {
            resource := u
            resources = append(resources, &resource)
        }
    }
    return resources
}
 
// setOwnerRefForResources stamps the application's ownerReference onto each component resource instance
func (r *ApplicationReconciler) setOwnerRefForResources(ctx context.Context, ownerRef metav1.OwnerReference, resources []*unstructured.Unstructured) error {
    logger := getLoggerOrDie(ctx)
    for _, resource := range resources {
        ownerRefs := resource.GetOwnerReferences()
        ownerRefFound := false
        for i, refs := range ownerRefs {
            if ownerRef.Kind == refs.Kind &&
                ownerRef.APIVersion == refs.APIVersion &&
                ownerRef.Name == refs.Name {
                ownerRefFound = true
                if ownerRef.UID != refs.UID {
                    ownerRefs[i] = ownerRef
                }
            }
        }
 
        if !ownerRefFound {
            ownerRefs = append(ownerRefs, ownerRef)
        }
        resource.SetOwnerReferences(ownerRefs)
        err := r.Client.Update(ctx, resource)
        if err != nil {
            // We log this error, but we continue and try to set the ownerRefs on the other resources.
            logger.Error(err, "ErrorSettingOwnerRef", "gvk", resource.GroupVersionKind().String(),
                "namespace", resource.GetNamespace(), "name", resource.GetName())
        }
    }
    return nil
}
 
// objectStatuses computes a status entry for each component resource instance
func (r *ApplicationReconciler) objectStatuses(ctx context.Context, resources []*unstructured.Unstructured, errs *[]error) []appv1beta1.ObjectStatus {
    logger := getLoggerOrDie(ctx)
    var objectStatuses []appv1beta1.ObjectStatus
    for _, resource := range resources {
        os := appv1beta1.ObjectStatus{
            Group: resource.GroupVersionKind().Group,
            Kind:  resource.GetKind(),
            Name:  resource.GetName(),
            Link:  resource.GetSelfLink(),
        }
        s, err := status(resource)
        if err != nil {
            logger.Error(err, "unable to compute status for resource", "gvk", resource.GroupVersionKind().String(),
                "namespace", resource.GetNamespace(), "name", resource.GetName())
            *errs = append(*errs, err)
        }
        os.Status = s
        objectStatuses = append(objectStatuses, os)
    }
    return objectStatuses
}
 
// aggregateReady counts ready components and reports whether all components are ready
func aggregateReady(objectStatuses []appv1beta1.ObjectStatus) (bool, int) {
    countReady := 0
    for _, os := range objectStatuses {
        if os.Status == StatusReady {
            countReady++
        }
    }
    if countReady == len(objectStatuses) {
        return true, countReady
    }
    return false, countReady
}
 
func (r *ApplicationReconciler) updateApplicationStatus(ctx context.Context, nn types.NamespacedName, status *appv1beta1.ApplicationStatus) error {
    if err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
        original := &appv1beta1.Application{}
        if err := r.Get(ctx, nn, original); err != nil {
            return err
        }
        original.Status = *status
        if err := r.Client.Status().Update(ctx, original); err != nil {
            return err
        }
        return nil
    }); err != nil {
        return fmt.Errorf("failed to update status of Application %s/%s: %v", nn.Namespace, nn.Name, err)
    }
    return nil
}
 
func (r *ApplicationReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&appv1beta1.Application{}).
        Complete(r)
}
 
func getLoggerOrDie(ctx context.Context) logr.Logger {
    logger, ok := ctx.Value(loggerCtxKey).(logr.Logger)
    if !ok {
        panic("context didn't contain logger")
    }
    return logger
}
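
As mentioned above, the reconciler only does something once it is wired into a controller-runtime manager. Below is a minimal wiring sketch; it is not the upstream project's actual main.go, and it assumes a controller-runtime version compatible with the Reconcile signature shown above, so details such as flags, metrics, and leader election handled by the real entry point are left out.

// A minimal wiring sketch (not the upstream main.go): register the built-in
// and Application schemes and the reconciler with a controller-runtime
// manager, then start it.
package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	ctrl "sigs.k8s.io/controller-runtime"

	appv1beta1 "sigs.k8s.io/application/api/v1beta1"
	"sigs.k8s.io/application/controllers"
)

func main() {
	scheme := runtime.NewScheme()
	// Built-in types (Deployments, Services, ...) plus the Application CRD types.
	_ = clientgoscheme.AddToScheme(scheme)
	_ = appv1beta1.AddToScheme(scheme)

	logger := ctrl.Log.WithName("application-controller")

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		logger.Error(err, "unable to create manager")
		os.Exit(1)
	}

	if err := (&controllers.ApplicationReconciler{
		Client: mgr.GetClient(),
		Mapper: mgr.GetRESTMapper(), // used by fetchComponentListResources to resolve GroupKinds
		Log:    logger,
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		logger.Error(err, "unable to set up the Application controller")
		os.Exit(1)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		logger.Error(err, "manager exited with error")
		os.Exit(1)
	}
}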

status.go

// Copyright 2020 The Kubernetes Authors.
// SPDX-License-Identifier: Apache-2.0
 
package controllers
 
import (
    "strings"
 
    appsv1 "k8s.io/api/apps/v1"
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    policyv1beta1 "k8s.io/api/policy/v1beta1"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime"
)
 
// Constants defining labels
const (
    StatusReady      = "Ready"
    StatusInProgress = "InProgress"
    StatusUnknown    = "Unknown"
    StatusDisabled   = "Disabled"
)
 
// status computes a coarse readiness status for an arbitrary object
func status(u *unstructured.Unstructured) (string, error) {
    gk := u.GroupVersionKind().GroupKind()
    switch gk.String() {
    case "StatefulSet.apps":
        return stsStatus(u)
    case "Deployment.apps":
        return deploymentStatus(u)
    case "ReplicaSet.apps":
        return replicasetStatus(u)
    case "DaemonSet.apps":
        return daemonsetStatus(u)
    case "PersistentVolumeClaim":
        return pvcStatus(u)
    case "Service":
        return serviceStatus(u)
    case "Pod":
        return podStatus(u)
    case "PodDisruptionBudget.policy":
        return pdbStatus(u)
    case "ReplicationController":
        return replicationControllerStatus(u)
    case "Job.batch":
        return jobStatus(u)
    default:
        return statusFromStandardConditions(u)
    }
}
 
// Status from standard conditions
func statusFromStandardConditions(u *unstructured.Unstructured) (string, error) {
    condition := StatusReady
 
    // Check Ready condition
    _, cs, found, err := getConditionOfType(u, StatusReady)
    if err != nil {
        return StatusUnknown, err
    }
    if found && cs == corev1.ConditionFalse {
        condition = StatusInProgress
    }
 
    // Check InProgress condition
    _, cs, found, err = getConditionOfType(u, StatusInProgress)
    if err != nil {
        return StatusUnknown, err
    }
    if found && cs == corev1.ConditionTrue {
        condition = StatusInProgress
    }
 
    return condition, nil
}
 
// Statefulset
func stsStatus(u *unstructured.Unstructured) (string, error) {
    sts := &appsv1.StatefulSet{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, sts); err != nil {
        return StatusUnknown, err
    }
 
    if sts.Status.ObservedGeneration == sts.Generation &&
        sts.Status.Replicas == *sts.Spec.Replicas &&
        sts.Status.ReadyReplicas == *sts.Spec.Replicas &&
        sts.Status.CurrentReplicas == *sts.Spec.Replicas {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// Deployment
func deploymentStatus(u *unstructured.Unstructured) (string, error) {
    deployment := &appsv1.Deployment{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, deployment); err != nil {
        return StatusUnknown, err
    }
 
    replicaFailure := false
    progressing := false
    available := false
 
    for _, condition := range deployment.Status.Conditions {
        switch condition.Type {
        case appsv1.DeploymentProgressing:
            if condition.Status == corev1.ConditionTrue && condition.Reason == "NewReplicaSetAvailable" {
                progressing = true
            }
        case appsv1.DeploymentAvailable:
            if condition.Status == corev1.ConditionTrue {
                available = true
            }
        case appsv1.DeploymentReplicaFailure:
            if condition.Status == corev1.ConditionTrue {
                replicaFailure = true
                break
            }
        }
    }
 
    if deployment.Status.ObservedGeneration == deployment.Generation &&
        deployment.Status.Replicas == *deployment.Spec.Replicas &&
        deployment.Status.ReadyReplicas == *deployment.Spec.Replicas &&
        deployment.Status.AvailableReplicas == *deployment.Spec.Replicas &&
        deployment.Status.Conditions != nil && len(deployment.Status.Conditions) > 0 &&
        (progressing || available) && !replicaFailure {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// Replicaset
func replicasetStatus(u *unstructured.Unstructured) (string, error) {
    rs := &appsv1.ReplicaSet{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, rs); err != nil {
        return StatusUnknown, err
    }
 
    replicaFailure := false
    for _, condition := range rs.Status.Conditions {
        switch condition.Type {
        case appsv1.ReplicaSetReplicaFailure:
            if condition.Status == corev1.ConditionTrue {
                replicaFailure = true
                break
            }
        }
    }
    if rs.Status.ObservedGeneration == rs.Generation &&
        rs.Status.Replicas == *rs.Spec.Replicas &&
        rs.Status.ReadyReplicas == *rs.Spec.Replicas &&
        rs.Status.AvailableReplicas == *rs.Spec.Replicas && !replicaFailure {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// Daemonset
func daemonsetStatus(u *unstructured.Unstructured) (string, error) {
    ds := &appsv1.DaemonSet{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, ds); err != nil {
        return StatusUnknown, err
    }
 
    if ds.Status.ObservedGeneration == ds.Generation &&
        ds.Status.DesiredNumberScheduled == ds.Status.NumberAvailable &&
        ds.Status.DesiredNumberScheduled == ds.Status.NumberReady {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// PVC
func pvcStatus(u *unstructured.Unstructured) (string, error) {
    pvc := &corev1.PersistentVolumeClaim{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, pvc); err != nil {
        return StatusUnknown, err
    }
 
    if pvc.Status.Phase == corev1.ClaimBound {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// Service
func serviceStatus(u *unstructured.Unstructured) (string, error) {
    service := &corev1.Service{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, service); err != nil {
        return StatusUnknown, err
    }
    stype := service.Spec.Type
 
    if stype == corev1.ServiceTypeClusterIP || stype == corev1.ServiceTypeNodePort || stype == corev1.ServiceTypeExternalName ||
        stype == corev1.ServiceTypeLoadBalancer && isEmpty(service.Spec.ClusterIP) &&
            len(service.Status.LoadBalancer.Ingress) > 0 && !hasEmptyIngressIP(service.Status.LoadBalancer.Ingress) {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
// Pod
func podStatus(u *unstructured.Unstructured) (string, error) {
    pod := &corev1.Pod{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, pod); err != nil {
        return StatusUnknown, err
    }
 
    for _, condition := range pod.Status.Conditions {
        if condition.Type == corev1.PodReady && (condition.Reason == "PodCompleted" || condition.Status == corev1.ConditionTrue) {
            return StatusReady, nil
        }
    }
    return StatusInProgress, nil
}
 
// PodDisruptionBudget
func pdbStatus(u *unstructured.Unstructured) (string, error) {
    pdb := &policyv1beta1.PodDisruptionBudget{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, pdb); err != nil {
        return StatusUnknown, err
    }
 
    if pdb.Status.ObservedGeneration == pdb.Generation &&
        pdb.Status.CurrentHealthy >= pdb.Status.DesiredHealthy {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
func replicationControllerStatus(u *unstructured.Unstructured) (string, error) {
    rc := &corev1.ReplicationController{}
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, rc); err != nil {
        return StatusUnknown, err
    }
 
    if rc.Status.ObservedGeneration == rc.Generation &&
        rc.Status.Replicas == *rc.Spec.Replicas &&
        rc.Status.ReadyReplicas == *rc.Spec.Replicas &&
        rc.Status.AvailableReplicas == *rc.Spec.Replicas {
        return StatusReady, nil
    }
    return StatusInProgress, nil
}
 
func jobStatus(u *unstructured.Unstructured) (string, error) {
    job := &batchv1.Job{}
 
    if err := runtime.DefaultUnstructuredConverter.FromUnstructured(u.Object, job); err != nil {
        return StatusUnknown, err
    }
 
    if job.Status.StartTime == nil {
        return StatusInProgress, nil
    }
 
    return StatusReady, nil
}
 
func hasEmptyIngressIP(ingress []corev1.LoadBalancerIngress) bool {
    for _, i := range ingress {
        if isEmpty(i.IP) {
            return true
        }
    }
    return false
}
 
func isEmpty(s string) bool {
    return len(strings.TrimSpace(s)) == 0
}
 
func getConditionOfType(u *unstructured.Unstructured, conditionType string) (string, corev1.ConditionStatus, bool, error) {
    conditions, found, err := unstructured.NestedSlice(u.Object, "status", "conditions")
    if err != nil || !found {
        return "", corev1.ConditionFalse, false, err
    }
 
    for _, c := range conditions {
        condition, ok := c.(map[string]interface{})
        if !ok {
            continue
        }
        t, found := condition["type"]
        if !found {
            continue
        }
        condType, ok := t.(string)
        if !ok {
            continue
        }
        if condType == conditionType {
            reason := condition["reason"].(string)
            conditionStatus := condition["status"].(string)
            return reason, corev1.ConditionStatus(conditionStatus), true, nil
        }
    }
    return "", corev1.ConditionFalse, false, nil
}
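
For kinds that status() does not handle explicitly, for example the servicemesh.zmc.io CRDs listed in componentKinds earlier, readiness falls back to statusFromStandardConditions, which inspects the standard status.conditions array. The test-style sketch below is illustrative only; it assumes it lives in the same controllers package (these helpers are unexported), and the apiVersion used for the Strategy resource is hypothetical.

package controllers

import (
	"testing"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// Illustrative only: a custom resource with a standard Ready=True condition is
// reported as Ready by the default branch of status().
func TestStatusFallsBackToStandardConditions(t *testing.T) {
	u := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "servicemesh.zmc.io/v1", // hypothetical version for the example CRD
		"kind":       "Strategy",
		"metadata": map[string]interface{}{
			"name":      "demo",
			"namespace": "zmc-test",
		},
		"status": map[string]interface{}{
			"conditions": []interface{}{
				map[string]interface{}{
					"type":   "Ready",
					"status": "True",
					"reason": "AllGood",
				},
			},
		},
	}}

	s, err := status(u)
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if s != StatusReady {
		t.Fatalf("expected %q, got %q", StatusReady, s)
	}
}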

condition.go

// Copyright 2020 The Kubernetes Authors.
// SPDX-License-Identifier: Apache-2.0
 
package controllers
 
import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    appv1beta1 "sigs.k8s.io/application/api/v1beta1"
)
 
func setReadyCondition(appStatus *appv1beta1.ApplicationStatus, reason, message string) {
    setCondition(appStatus, appv1beta1.Ready, corev1.ConditionTrue, reason, message)
}
 
// NotReady - shortcut to set ready condition to false
func setNotReadyCondition(appStatus *appv1beta1.ApplicationStatus, reason, message string) {
    setCondition(appStatus, appv1beta1.Ready, corev1.ConditionFalse, reason, message)
}
 
// Unknown - shortcut to set ready condition to unknown
func setReadyUnknownCondition(appStatus *appv1beta1.ApplicationStatus, reason, message string) {
    setCondition(appStatus, appv1beta1.Ready, corev1.ConditionUnknown, reason, message)
}
 
// setErrorCondition - shortcut to set error condition
func setErrorCondition(appStatus *appv1beta1.ApplicationStatus, reason, message string) {
    setCondition(appStatus, appv1beta1.Error, corev1.ConditionTrue, reason, message)
}
 
// clearErrorCondition - shortcut to set error condition
func clearErrorCondition(appStatus *appv1beta1.ApplicationStatus) {
    setCondition(appStatus, appv1beta1.Error, corev1.ConditionFalse, "NoError", "No error seen")
}
 
func setCondition(appStatus *appv1beta1.ApplicationStatus, ctype appv1beta1.ConditionType, status corev1.ConditionStatus, reason, message string) {
    var c *appv1beta1.Condition
    for i := range appStatus.Conditions {
        if appStatus.Conditions[i].Type == ctype {
            c = &appStatus.Conditions[i]
        }
    }
    if c == nil {
        addCondition(appStatus, ctype, status, reason, message)
    } else {
        // check message ?
        if c.Status == status && c.Reason == reason && c.Message == message {
            return
        }
        now := metav1.Now()
        c.LastUpdateTime = now
        if c.Status != status {
            c.LastTransitionTime = now
        }
        c.Status = status
        c.Reason = reason
        c.Message = message
    }
}
 
func addCondition(appStatus *appv1beta1.ApplicationStatus, ctype appv1beta1.ConditionType, status corev1.ConditionStatus, reason, message string) {
    now := metav1.Now()
    c := appv1beta1.Condition{
        Type:               ctype,
        LastUpdateTime:     now,
        LastTransitionTime: now,
        Status:             status,
        Reason:             reason,
        Message:            message,
    }
    appStatus.Conditions = append(appStatus.Conditions, c)
}

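The main subtlety in setCondition is the split between LastUpdateTime and LastTransitionTime: the former is refreshed on every change, while the latter only moves when the condition's status actually flips. The test-style sketch below is illustrative only and assumes it lives in the same controllers package.

package controllers

import (
	"testing"

	appv1beta1 "sigs.k8s.io/application/api/v1beta1"
)

// Illustrative only: updating a Ready condition with a new reason refreshes
// LastUpdateTime but leaves LastTransitionTime untouched while the status
// stays True.
func TestSetConditionKeepsTransitionTime(t *testing.T) {
	st := &appv1beta1.ApplicationStatus{}

	setReadyCondition(st, "ComponentsReady", "all components ready")
	firstTransition := st.Conditions[0].LastTransitionTime

	// Same status (True), different reason: no transition has happened.
	setReadyCondition(st, "StillReady", "all components ready")

	if !st.Conditions[0].LastTransitionTime.Equal(&firstTransition) {
		t.Fatalf("LastTransitionTime should not change while the status stays True")
	}
}
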
3.4 Viewing the Application and Its Components

Application details:

Notice that the application now maintains its status, including the status of every component, the number of ready components, and the application's conditions.

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  creationTimestamp: "2021-12-27T01:50:54Z"
  generation: 1
  labels:
    app.kubernetes.io/name: test-app
    app.kubernetes.io/version: v1
  managedFields:
    .....
    manager: cb-controller-manager
    operation: Update
    time: "2021-12-27T01:50:54Z"
  name: test-app
  namespace: zmc-test
  resourceVersion: "1873685"
  selfLink: /apis/app.k8s.io/v1beta1/namespaces/zmc-test/applications/test-app
  uid: f9c4d23a-b5a8-40a9-b046-eea3a40e11dc
spec:
  addOwnerRef: true
  componentKinds:
  - group: ""
    kind: Service
  - group: apps
    kind: Deployment
  - group: apps
    kind: StatefulSet
  - group: extensions
    kind: Ingress
  - group: servicemesh.zmc.io
    kind: Strategy
  - group: servicemesh.zmc.io
    kind: ServicePolicy
  selector:
    matchLabels:
      app.kubernetes.io/name: test-app
      app.kubernetes.io/version: v1
status: 
  components:
  - kind: Service
    link: /api/v1/namespaces/zmc-test/services/nginx
    name: nginx
    status: Ready
  - group: apps
    kind: Deployment
    link: /apis/apps/v1/namespaces/zmc-test/deployments/nginx-v1
    name: nginx-v1
    status: Ready
  componentsReady: 2/2
  conditions:
  - lastTransitionTime: "2021-12-27T01:50:58Z"
    lastUpdateTime: "2021-12-27T01:50:58Z"
    message: all components ready
    reason: ComponentsReady
    status: "True"
    type: Ready
  - lastTransitionTime: "2021-12-27T01:50:54Z"
    lastUpdateTime: "2021-12-27T01:50:54Z"
    message: No error seen
    reason: NoError
    status: "False"
    type: Error
  observedGeneration: 1

Component details:

Inspecting all of the application's components (the Deployment and the Service) shows that each carries the cascading relationship in its metadata via ownerReferences; deleting the application therefore deletes all of its components.

........
ownerReferences:
  - apiVersion: app.k8s.io/v1beta1
    blockOwnerDeletion: true
    controller: false
    kind: Application
    name: app-1227-v1
    uid: f9c4d23a-b5a8-40a9-b046-eea3a40e11dc
....... 

Source code from: https://github.com/kubernetes-sigs/application
