Kubernetes in Practice 2_Commands_Configure Pods and Containers

# Output pod / configmap / service / ingress / deployment as YAML:
kubectl get pod platform-financeapi-deployment-6d9ff7dc8f-l774l -n alpha --output=yaml

kubectl get configmap platform-website-config -n alpha --output=yaml

kubectl get service platform-website -n alpha --output=yaml

kubectl get ingress platform-website-ingress -n alpha --output=yaml

kubectl get deployment platform-website-deployment -n alpha --output=yaml

 

# Describe pods / deployments / services / configmaps / ingresses

kubectl describe pods platform-website-deployment-5bbb9b7976-fjrnk -n alpha

kubectl describe deployment platform-website-deployment -n alpha

kubectl describe service platform-website -n alpha

kubectl describe configmap platform-website-config -n alpha

kubectl describe ingress platform-website-ingress -n alpha

  

OOMKilled: the Container was terminated because it ran out of memory (OOM), i.e. it tried to use more memory than its limit allows.

  

Memory units

The memory resource is measured in bytes.

You can express memory as a plain integer or a fixed-point integer with one of these suffixes: E, P, T, G, M, K, Ei, Pi, Ti, Gi, Mi, Ki

128974848, 129e6, 129M, 123Mi — the last three are roughly the same amount: 123Mi = 123 × 1024² = 128974848 bytes, while 129M = 129 × 10⁶ bytes.

  

 

memory requests and limits

By configuring memory requests and limits for the Containers that run in your cluster, you can make efficient use of the memory resources available on your cluster’s Nodes.

 

By keeping a Pod’s memory request low, you give the Pod a good chance of being scheduled.

 

By having a memory limit that is greater than the memory request, you accomplish two things:

  • The Pod can have bursts of activity where it makes use of memory that happens to be available.
  • The amount of memory a Pod can use during a burst is limited to some reasonable amount.
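The two points above can be put into a manifest directly; here is a minimal sketch (the Pod name and the specific values are illustrative, not taken from this cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo           # illustrative name
spec:
  containers:
  - name: memory-demo-ctr
    image: nginx
    resources:
      requests:
        memory: "100Mi"       # low request: easy to schedule
      limits:
        memory: "200Mi"       # limit > request: bursts allowed, but capped
```

If the Container exceeds the 200Mi limit, it becomes a candidate for termination and you will see OOMKilled in its status.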

 

 

CPU units

The CPU resource is measured in cpu units. One cpu, in Kubernetes, is equivalent to:

  • 1 AWS vCPU
  • 1 GCP Core
  • 1 Azure vCore
  • 1 Hyperthread on a bare-metal Intel processor with Hyperthreading

 

Fractional values are allowed. A Container that requests 0.5 cpu is guaranteed half as much CPU as a Container that requests 1 cpu.

 

You can use the suffix m to mean milli. For example, 100m cpu, 100 millicpu, and 0.1 cpu are all the same amount of CPU. Precision finer than 1m is not allowed.

 

CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core machine.

 

 

 

CPU requests and limits

By configuring the CPU requests and limits of the Containers that run in your cluster, you can make efficient use of the CPU resources available on your cluster’s Nodes.

 

By keeping a Pod’s CPU request low, you give the Pod a good chance of being scheduled.

 

By having a CPU limit that is greater than the CPU request, you accomplish two things:

  • The Pod can have bursts of activity where it makes use of CPU resources that happen to be available.
  • The amount of CPU resources a Pod can use during a burst is limited to some reasonable amount.
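As with memory, this translates directly into the resources block of a manifest; a minimal sketch (name and values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo              # illustrative name
spec:
  containers:
  - name: cpu-demo-ctr
    image: nginx
    resources:
      requests:
        cpu: "250m"           # 0.25 cpu; a low request helps scheduling
      limits:
        cpu: "500m"           # limit > request: bursts allowed, capped at half a core
```

Unlike memory, exceeding a CPU limit does not kill the Container; the Container is throttled instead.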

 

 

Quality of Service (QoS) classes

Kubernetes uses QoS classes to make decisions about scheduling and evicting Pods.

 

When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:

  • Guaranteed
  • Burstable
  • BestEffort

 

For a Pod to be given a QoS class of Guaranteed:

  • Every Container in the Pod must have a memory limit and a memory request, and they must be the same.
  • Every Container in the Pod must have a cpu limit and a cpu request, and they must be the same.
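Applying these criteria, a Pod that would be classed as Guaranteed might look like this (a sketch; name and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo   # illustrative name
spec:
  containers:
  - name: qos-guaranteed-ctr
    image: nginx
    resources:
      requests:
        memory: "200Mi"
        cpu: "700m"
      limits:
        memory: "200Mi"       # equal to the memory request
        cpu: "700m"           # equal to the cpu request
```

You can verify the assigned class with `kubectl get pod <name> --output=yaml` and look at the status.qosClass field.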

 

A Pod is given a QoS class of Burstable if:

  • The Pod does not meet the criteria for QoS class Guaranteed.
  • At least one Container in the Pod has a memory or cpu request.

 

For a Pod to be given a QoS class of BestEffort,

  • the Containers in the Pod must not have any memory or cpu limits or requests.
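Conversely, a Pod that sets no resources at all is classed as BestEffort (sketch, illustrative name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-besteffort-demo   # illustrative name
spec:
  containers:
  - name: qos-besteffort-ctr
    image: nginx
    # no resources section: no requests, no limits
```

BestEffort Pods are the first to be evicted when a Node comes under resource pressure.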

 

 

Create a Pod that has two Containers

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-4
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-4-ctr-1
    image: nginx
    resources:
      requests:
        memory: "200Mi"
  - name: qos-demo-4-ctr-2
    image: redis

Notice that this Pod meets the criteria for QoS class Burstable.

That is, it does not meet the criteria for QoS class Guaranteed, and one of its Containers has a memory request. 
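You can confirm the class Kubernetes assigned (this assumes the qos-demo-4 Pod above has been created in the qos-example namespace):

```shell
# Inspect the status.qosClass field of the running Pod
kubectl get pod qos-demo-4 --namespace=qos-example --output=yaml | grep qosClass
# should print a line like: qosClass: Burstable
```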

 

 

Assign an extended resource to a Pod

To request an extended resource, include the resources.requests field in your Container manifest.

 

Extended resources are fully qualified with any domain outside of *.kubernetes.io/.

 

Valid extended resource names have the form example.com/foo where example.com is replaced with your organization’s domain and foo is a descriptive resource name.
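Before a Pod can request example.com/dongle, some Node has to advertise it. The Kubernetes docs show doing this by sending a JSON-Patch to the Node's status through `kubectl proxy` (a sketch; `<your-node-name>` is a placeholder, and the value 4 matches the four-dongle scenario below — note `~1` is the JSON-Pointer escape for `/`):

```shell
# In one terminal, open a proxy to the API server
kubectl proxy

# In another terminal, patch the Node's capacity to advertise 4 dongles
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1dongle", "value": "4"}]' \
  http://localhost:8001/api/v1/nodes/<your-node-name>/status
```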

 

apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 3
      limits:
        example.com/dongle: 3

In the configuration file, you can see that the Container requests 3 dongles.  

apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo-2
spec:
  containers:
  - name: extended-resource-demo-2-ctr
    image: nginx
    resources:
      requests:
        example.com/dongle: 2
      limits:
        example.com/dongle: 2

Kubernetes will not be able to satisfy this request for two dongles: with the Node advertising a capacity of four example.com/dongle, the first Pod has already used three of them, leaving only one available.

 

 

Configure a Pod to Use a Volume for Storage

A Container’s file system lives only as long as the Container does, so when a Container terminates and restarts, changes to the filesystem are lost.

 

For more consistent storage that is independent of the Container, you can use a Volume.

This is especially important for stateful applications, such as key-value stores and databases.

For example, Redis is a key-value cache and store.

 

Configure a volume for a Pod

This Pod has a Volume of type emptyDir that lasts for the life of the Pod, even if the Container terminates and restarts.

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: redis-storage
      mountPath: /data/redis
  volumes:
  - name: redis-storage
    emptyDir: {}
#pod:redis
kubectl get pod redis --watch
kubectl exec -it redis -- /bin/bash

#go to /data/redis, and create a file:
root@redis:/data# cd /data/redis/
root@redis:/data/redis# echo Hello > test-file

#list the running processes
root@redis:/data/redis# ps aux

#kill the redis process
#where <pid> is the redis process ID (PID)
root@redis:/data/redis# kill <pid>


#In your original terminal, watch for changes to the redis Pod
kubectl get pod redis --watch


#At this point, the Container has terminated and restarted. This is because the redis Pod has a restartPolicy of Always.
kubectl exec -it redis -- /bin/bash


#go to /data/redis and verify that test-file is still there
root@redis:/data# cd /data/redis/

  

  

 

posted @ 2018-05-31 11:34  PanPan003