[Go Cloud Native] Kubernetes Pod
Kubernetes Pod
Pod Definition
A Pod is the smallest deployable unit that Kubernetes creates and manages. It packages a group of one or more containers that share network and storage, are equal in standing, and are scheduled together, much like a "logical host".
Pod phases (status)
Pending: the Pod has been accepted by Kubernetes, but one or more of its containers have not yet been created or started.
Running: all containers have been created, and at least one container is still running or is in the process of starting or restarting.
Succeeded: all containers in the Pod have terminated successfully and will not be restarted.
Failed: all containers have terminated, and at least one container terminated in failure.
Unknown: the Pod's state cannot be obtained for some reason, usually because communication with the node hosting the Pod failed.
Container states
Waiting: a container that is in neither the Running nor the Terminated state is in the Waiting state.
Running: the container is executing and no problems have occurred.
Terminated: the container started execution and then either finished normally or failed for some reason.
Both the Pod phase and the per-container states can be read directly from the Pod's status, as shown below.
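A minimal sketch for inspecting these fields; the Pod name myhello-pod-demo is the one defined later in this article:

```bash
# Print the Pod phase (Pending / Running / Succeeded / Failed / Unknown)
kubectl get pod myhello-pod-demo -o jsonpath='{.status.phase}{"\n"}'

# Print each container's current state (waiting / running / terminated)
kubectl get pod myhello-pod-demo -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.state}{"\n"}{end}'
```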
Defining a Pod
Definition via configuration file
myhello-pod-demo.yaml
Normally we would not define a standalone Pod by itself.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-pod-demo
  labels:
    name: myhello-pod-demo
    env: dev
spec:
  restartPolicy: Always
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    command: ["./app"]
    args: ["--param1=k8s-p1", "--param2=k8s-p2"]
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
Keywords
restartPolicy: the restart policy for the Pod's containers
imagePullPolicy: the image pull policy
command: the container's startup command
args: arguments for the startup command
resources: resource settings (requests and limits) for the container
Typical values for these fields are summarized in the annotated sketch below.
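A minimal annotated sketch of the common values for these keywords; the image name example/app:1.0.0 is hypothetical and used only for illustration:

```yaml
spec:
  restartPolicy: Always              # Always | OnFailure | Never
  containers:
  - name: app
    image: example/app:1.0.0         # hypothetical image, for illustration only
    imagePullPolicy: IfNotPresent    # Always | IfNotPresent | Never
    command: ["./app"]               # overrides the image ENTRYPOINT
    args: ["--flag=value"]           # overrides the image CMD
    resources:
      requests:                      # what the scheduler reserves for the container
        cpu: 100m
        memory: 200Mi
      limits:                        # hard caps enforced at runtime
        memory: 200Mi
```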
Using Pods
Create and access a Pod

```bash
kubectl create -f myhello-pod-demo.yaml
kubectl get pod
kubectl describe -f myhello-pod-demo.yaml
kubectl port-forward pod/myhello-pod-demo 5000:80
curl http://localhost:5000/print/env
curl http://localhost:5000/print/startup
```
Multiple applications
Definition
myhello-mult-pod-demo.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-mult-pod-demo
  labels:
    name: myhello-mult-pod-demo
    env: dev
spec:
  restartPolicy: Always
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    command: ["./app"]
    args: ["--param1=k8s-p1", "--param2=k8s-p2"]
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
  - name: myredis
    image: redis
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
```bash
kubectl create -f myhello-mult-pod-demo.yaml
kubectl get -f myhello-mult-pod-demo.yaml
kubectl port-forward pod/myhello-mult-pod-demo 5001:80 5002:6379 --address 192.168.239.142 --address localhost
curl http://localhost:5001/ping

kubectl run myredis --image=redis
kubectl exec -it pod/myredis -- /bin/bash
> redis-cli -h 192.168.239.142 -p 5002
> redis-cli -h 10.244.1.155 -p 6379
```
Init containers
An init container is a special kind of container that runs before the application containers in a Pod start. It can contain utilities and setup scripts that are not present in the application image.
Differences from regular containers:
Init containers always run to completion: rather than serving requests through a long-running daemon, an init container starts, performs its task, and exits once the task is done.
Each init container must complete successfully before the next one starts; multiple init containers run sequentially in the order they are defined.
Only after all init containers have completed does Kubernetes initialize the application containers of the Pod.
Usage:
Example 1
myhello-init-pod-demo.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-init-pod-demo
  namespace: default
  labels:
    name: myhello-init-pod-demo
    env: dev
spec:
  restartPolicy: Always
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    command: ["./app"]
    args: ["--param1=k8s-p1", "--param2=k8s-p2"]
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
  - name: myredis
    image: redis
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 6379
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
```bash
kubectl apply -f myhello-init-pod-demo.yaml
kubectl describe -f myhello-init-pod-demo.yaml

vim myservice.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
```

```bash
kubectl create -f myservice.yaml
kubectl describe -f myhello-init-pod-demo.yaml

vim mydb.yaml
```

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
```

```bash
kubectl create -f mydb.yaml
kubectl describe -f myhello-init-pod-demo.yaml
```
Example 2
nginx-init-demo.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - https://www.baidu.com/
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
```
```bash
kubectl create -f nginx-init-demo.yaml
kubectl port-forward pod/nginx-init-demo 5003:80 --address 192.168.239.142 --address localhost
curl http://localhost:5003
```
Container lifecycle handlers
Handlers: the postStart and preStop hooks. postStart runs right after a container is created; preStop runs just before the container is terminated.
lifecycle-demo.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: nginx
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit; while killall -0 nginx; do sleep 1; done"]
```
```bash
kubectl create -f lifecycle-demo.yaml
kubectl exec -it lifecycle-demo -- /bin/bash
cat /usr/share/message
```
Container probes
Probe types
livenessProbe: liveness probe; on failure the kubelet kills the container and restarts it according to the restart policy
readinessProbe: readiness probe; on failure the Pod is removed from Service endpoints until the probe succeeds again
startupProbe: startup probe; until it succeeds, the other probes are disabled, which protects slow-starting containers
Probe mechanisms
exec
httpGet
tcpSocket
grpc
Probe results
Success
Failure
Unknown
Examples

liveness-exec.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```

```bash
kubectl create -f liveness-exec.yaml
kubectl describe pod liveness-exec
```

liveness-http.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.2
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
```

```bash
kubectl create -f liveness-http.yaml
kubectl describe pod liveness-http
```

startup-tcp.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: startup
  name: startup-tcp
spec:
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.2
    startupProbe:
      tcpSocket:
        port: 80
      failureThreshold: 30
      periodSeconds: 3
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      failureThreshold: 1
      periodSeconds: 3
```

```bash
kubectl create -f startup-tcp.yaml
kubectl describe pod startup-tcp
```
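The examples above cover the exec, httpGet, and tcpSocket mechanisms. For completeness, a grpc probe (available in recent Kubernetes versions) would look roughly like the following sketch; port 9090 is a hypothetical gRPC port, and the application is assumed to implement the standard gRPC health checking service:

```yaml
livenessProbe:
  grpc:
    port: 9090            # hypothetical port; the app must expose the gRPC health service
  initialDelaySeconds: 5
  periodSeconds: 10
```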
Deploying Pods
Pods can be deployed to a Kubernetes cluster through workload controllers such as Deployment, DaemonSet, StatefulSet, and Job.
Deployment
myapp-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
      - name: myhello
        image: xlhmzch/hello:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
      - name: myredis
        image: redis
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        env:
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
```
```bash
kubectl create -f myapp-deployment.yaml
kubectl get -f myapp-deployment.yaml
kubectl get pod
vim myapp-deployment.yaml
kubectl apply -f myapp-deployment.yaml
kubectl rollout history deployment myapp-deployment
kubectl rollout pause deployment myapp-deployment
# scale and autoscale require target values; the numbers below are example values
kubectl scale deployment myapp-deployment --replicas=3
kubectl autoscale deployment myapp-deployment --min=2 --max=10 --cpu-percent=80
```
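Since the commands above already record rollout history, a rollback is only one more step; this is a hypothetical follow-up that is not part of the original walkthrough:

```bash
# Roll the Deployment back to the previous revision and watch the rollout finish
kubectl rollout undo deployment myapp-deployment
kubectl rollout status deployment myapp-deployment
```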
DaemonSet
cadvisor-ds.yaml

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: default
  labels:
    app: cadvisor
spec:
  selector:
    matchLabels:
      app: cadvisor
  template:
    metadata:
      labels:
        app: cadvisor
    spec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      containers:
      - name: cadvisor
        image: google/cadvisor:v0.32.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: rootfs
          mountPath: /rootfs
        - name: run
          mountPath: /var/run
        - name: sys
          mountPath: /sys
        - name: varlibdocker
          mountPath: /var/lib/docker/
        - name: devdisk
          mountPath: /dev/disk
          readOnly: true
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys/
      - name: varlibdocker
        hostPath:
          path: /var/lib/docker/
      - name: devdisk
        hostPath:
          path: /dev/disk
      - name: localtime
        hostPath:
          path: /etc/localtime
```
```bash
kubectl get node
kubectl get pod -o wide | grep master
kubectl create -f cadvisor-ds.yaml
kubectl get -f cadvisor-ds.yaml
kubectl get pod -o wide | grep cadvisor
kubectl rollout history ds cadvisor
```
Orphan Pods and Labels
An orphan Pod has no availability guarantee: once it fails, there is no controller to ensure it keeps running.
Upgrades, rollbacks, scaling, and similar operations are inconvenient.
If its node has a problem and needs maintenance, such a Pod cannot be automatically rescheduled to another node.
Replication Controller
myhello-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-pod
  labels:
    name: myhello-pod
spec:
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
myhello-rc.yaml

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myhello-rc
  labels:
    name: myhello-rc
spec:
  replicas: 5
  selector:
    name: myhello-pod
  template:
    metadata:
      labels:
        name: myhello-pod
    spec:
      containers:
      - name: myhelloworld
        image: xlhmzch/hello:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
```
```bash
kubectl create -f myhello-pod.yaml
kubectl create -f myhello-rc.yaml
kubectl get pod
# adjust spec.replicas in the editor and watch how the set of Pods changes
kubectl edit rc myhello-rc
kubectl get pod
kubectl edit rc myhello-rc
kubectl get pod
```
Deployment
myhello-pod-demo.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-pod-demo
  labels:
    name: myhello
spec:
  restartPolicy: Always
  containers:
  - name: myapp
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
myapp-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    name: myapp
spec:
  replicas: 5
  selector:
    matchLabels:
      name: myapp
  template:
    metadata:
      labels:
        name: myapp
    spec:
      containers:
      - name: myhello
        image: xlhmzch/hello:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        env:
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
      - name: myredis
        image: redis
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 6379
        env:
        - name: env1
          value: "k8s-env1"
        - name: env2
          value: "k8s-env2"
```
```bash
kubectl create -f myhello-pod-demo.yaml
kubectl create -f myapp-deployment.yaml
kubectl get pod
# the ReplicaSet name suffix is randomly generated; substitute the name shown by `kubectl get rs`
kubectl get rs myapp-deployment-noqiethopqewh -o yaml
```
Conclusions
An orphan Pod can be captured (adopted) by a controller through its labels.
RC and Deployment behave differently here because a Deployment automatically creates a ReplicaSet and gives it a randomly generated name. The Pods managed by that ReplicaSet receive an additional label (the pod-template-hash), and the ReplicaSet identifies its Pods by both labels. A Deployment manages the ReplicaSet directly rather than the Pods themselves, so the label selector in the Deployment effectively applies to the ReplicaSet. Both labels can be inspected directly, as shown below.
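A minimal sketch to verify this with the myapp-deployment from above; --show-labels prints the extra pod-template-hash label that the ReplicaSet adds:

```bash
# List the ReplicaSet the Deployment created, together with its labels
kubectl get rs -l name=myapp --show-labels

# List the Pods it manages; note the extra pod-template-hash label
kubectl get pod -l name=myapp --show-labels
```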
Static Pods
A static Pod is defined in a manifest file on a specific node and is managed directly by the kubelet on that node; it is not handled by the scheduler.

```bash
vim /etc/kubernetes/kubelet.conf
vim /etc/kubernetes/manifests/myhello-static-pod.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myhello-static-pod
  namespace: default
  labels:
    name: myhello-static-pod
    env: dev
spec:
  containers:
  - name: myhello
    image: xlhmzch/hello:1.0.0
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    command: ["./app"]
    args: ["--param1=k8s-p1", "--param2=k8s-p2"]
    resources:
      limits:
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:
    - name: env1
      value: "k8s-env1"
    - name: env2
      value: "k8s-env2"
```
```bash
kubectl get pod
# deleting the mirror Pod has no lasting effect; the kubelet recreates it from the manifest
kubectl delete pod myhello-static-pod-k8s-master1
kubectl get pod
# removing the manifest file under /etc/kubernetes/manifests/ on the node is what actually stops the static Pod
rm myhello-static-pod.yaml
```