Deploying Stateful Applications on Kubernetes
Distinguishing stateful from stateless applications:
Ask whether the application keeps its own independent storage and data. If it produces data but stores it in an external database, it is a stateless application; if it produces data and keeps it in its own storage, it is a stateful application.
Applications should be designed to be stateless whenever possible, because stateless applications are easier to manage and to scale.
Deploying an etcd Cluster with a StatefulSet
StatefulSet use cases — an application that needs one or more of the following:
Stable, unique network identifiers
Stable, persistent storage
Ordered, graceful deployment and scaling
Ordered, automated rolling updates
Pod identity
Ordinal index: for a StatefulSet with N replicas, each Pod is assigned an integer ordinal that is unique within that StatefulSet.
Start ordinal: ordinals start at 0 by default; the starting ordinal can be configured through the StatefulSet's .spec.ordinals.start field.
Stable network ID: each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the Pod's ordinal, in the form $(StatefulSet name)-$(ordinal). The headless Service gives each Pod a matching DNS entry.
Stable storage: for every volumeClaimTemplate defined in the StatefulSet, each Pod receives its own PersistentVolumeClaim, and each Pod gets a PersistentVolume provisioned from the referenced StorageClass.
Pod name label: when the StatefulSet controller creates a Pod, it adds the label statefulset.kubernetes.io/pod-name, whose value is set to the Pod's name. These guarantees can be observed directly, as shown in the example below.
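A quick way to observe these identity guarantees once the etcd StatefulSet from the next section is running (names below assume that manifest: namespace etcd-ns, StatefulSet etcd-sts, headless Service etcd-svc, volumeClaimTemplate etcd-data):

```bash
# Ordinals and the statefulset.kubernetes.io/pod-name label
kubectl -n etcd-ns get pods -l app=etcd --show-labels

# Stable network ID: $(StatefulSet name)-$(ordinal), resolvable via the headless Service
kubectl -n etcd-ns run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup etcd-sts-0.etcd-svc.etcd-ns.svc.cluster.local

# Stable storage: one PVC per Pod per volumeClaimTemplate, named <template>-<pod>
kubectl -n etcd-ns get pvc        # e.g. etcd-data-etcd-sts-0
```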
Deploy etcd

```bash
mkdir sts
cd ./sts
vim etcd-sts.yaml
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: etcd-ns
---
apiVersion: v1
kind: Service
metadata:
  name: etcd-svc
  namespace: etcd-ns
spec:
  ports:
  - port: 2379
    name: client
  - port: 2380
    name: peer
  clusterIP: None
  selector:
    app: etcd
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: etcd-sts
  namespace: etcd-ns
spec:
  selector:
    matchLabels:
      app: etcd
  serviceName: etcd-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: etcd
    spec:
      containers:
      - name: etcd
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_INDEX
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['apps.kubernetes.io/pod-index']
        image: quay.io/coreos/etcd:v3.5.7
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2379
          name: client
          protocol: TCP
        - containerPort: 2380
          name: peer
          protocol: TCP
        command:
        - etcd
        - --name=$(POD_NAME)
        - --listen-client-urls=http://0.0.0.0:2379
        - --listen-peer-urls=http://0.0.0.0:2380
        - --initial-advertise-peer-urls=http://$(POD_NAME).etcd-svc.etcd-ns.svc.cluster.local:2380
        - --advertise-client-urls=http://$(POD_NAME).etcd-svc.etcd-ns.svc.cluster.local:2379
        - --initial-cluster=etcd-sts-0=http://etcd-sts-0.etcd-svc.etcd-ns.svc.cluster.local:2380,etcd-sts-1=http://etcd-sts-1.etcd-svc.etcd-ns.svc.cluster.local:2380,etcd-sts-2=http://etcd-sts-2.etcd-svc.etcd-ns.svc.cluster.local:2380
        - --initial-cluster-state=new
        - --initial-cluster-token=token123456
        volumeMounts:
        - name: etcd-data
          mountPath: /var/lib/etcd
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: etcd-data
    spec:
      storageClassName: ceph-rbd
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
```

```bash
# Create and inspect the cluster
kubectl create -f etcd-sts.yaml
kubectl get -f etcd-sts.yaml
kubectl -n etcd-ns logs -f pod/etcd-sts-0
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl endpoint status --cluster -w table

# Write and read a few keys
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl put key1 value1
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl put key2 value2
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl get key --prefix

# Scale out and back in
kubectl -n etcd-ns scale --replicas 4 statefulset etcd-sts
kubectl -n etcd-ns scale --replicas 3 statefulset etcd-sts

# Announce the new member to the cluster, update the manifest, and re-apply
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl member add etcd-sts-3 --peer-urls="http://etcd-sts-3.etcd-svc.etcd-ns.svc.cluster.local:2380"
vim etcd-sts.yaml
kubectl apply -f etcd-sts.yaml
```
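Scaling an etcd cluster that was bootstrapped with --initial-cluster-state=new is order-sensitive: the commands above show the StatefulSet being scaled before the member was added and the manifest updated. A hedged sketch of a safer order (names follow the manifest above):

```bash
# 1. Announce the future member to the running cluster first.
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl member add etcd-sts-3 \
  --peer-urls="http://etcd-sts-3.etcd-svc.etcd-ns.svc.cluster.local:2380"

# 2. Edit etcd-sts.yaml: append etcd-sts-3 to --initial-cluster and set
#    --initial-cluster-state=existing, then apply the change.
kubectl apply -f etcd-sts.yaml

# 3. Only then create the new Pod.
kubectl -n etcd-ns scale statefulset etcd-sts --replicas 4
```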
The new node fails to join the etcd cluster; the logs show `request cluster ID mismatch`:

```bash
kubectl -n etcd-ns logs -f pod/etcd-sts-0
```
Cause: a mistake in the add-node procedure (or a wrong parameter) made the new node bootstrap a single-node cluster of its own, generating a new cluster ID. Because a StatefulSet does not reclaim PVs by default, the Pod kept mounting the old PV even after the configuration was corrected, so the error persisted.
Fix: reverse the operation — shrink the cluster back to its size before the expansion, delete the PVC and PV that were created for the new node, and then redo the procedure in the correct order. A sketch of the rollback is shown below.
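A minimal rollback sketch, assuming the stray member is etcd-sts-3 and the PVC follows the volumeClaimTemplate naming etcd-data-<pod>:

```bash
# Back to the pre-expansion size
kubectl -n etcd-ns scale statefulset etcd-sts --replicas 3

# Remove the half-joined member from the cluster, if it was registered
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl member list
kubectl -n etcd-ns exec etcd-sts-0 -- etcdctl member remove <member-id>

# Drop the stale state so the next attempt bootstraps cleanly
kubectl -n etcd-ns delete pvc etcd-data-etcd-sts-3
kubectl delete pv <pv-bound-to-that-pvc>
```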
Deploying a Redis Enterprise Cluster with an Operator

```bash
kubectl create namespace redis-op
kubectl config set-context --current --namespace=redis-op
kubectl config get-contexts
kubectl apply -f https://raw.githubusercontent.com/RedisLabs/redis-enterprise-k8s-docs/v7.2.4-12/bundle.yaml
```
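Before creating a cluster, it is worth confirming that the operator itself came up. A minimal check, assuming the bundle keeps its default deployment name redis-enterprise-operator:

```bash
kubectl get deployment redis-enterprise-operator
kubectl get pods
```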
Using local storage

```bash
vim redis-op.yaml
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv1
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/redis1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv2
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/redis2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv3
  labels:
    type: local
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/redis3"
---
apiVersion: "app.redislabs.com/v1"
kind: "RedisEnterpriseCluster"
metadata:
  name: my-rec
spec:
  nodes: 3
  persistentSpec:
    storageClassName: local-storage
    volumeSize: 10Gi
  redisEnterpriseNodeResources:
    limits:
      cpu: 2000m
      memory: 4Gi
    requests:
      cpu: 2000m
      memory: 4Gi
  podTolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```

```bash
mkdir /mnt/redis1 /mnt/redis2 /mnt/redis3
chmod 777 /mnt/redis1 /mnt/redis2 /mnt/redis3
```
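Because the StorageClass uses WaitForFirstConsumer and the PVs are hostPath volumes, the /mnt/redisN directories must exist on every node that may run a Redis Enterprise pod, and the PVs stay Available until the pods are scheduled. A quick check after applying the manifest:

```bash
kubectl apply -f redis-op.yaml
kubectl get sc local-storage
kubectl get pv          # Available until the REC pods are scheduled, then Bound
kubectl get pvc,pods
```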
Using Ceph

```bash
vim redis-ceph-op.yaml
```

```yaml
apiVersion: "app.redislabs.com/v1"
kind: "RedisEnterpriseCluster"
metadata:
  name: my-rec
spec:
  nodes: 3
  persistentSpec:
    storageClassName: ceph-rbd
    volumeSize: 10Gi
  redisEnterpriseNodeResources:
    limits:
      cpu: 2000m
      memory: 4Gi
    requests:
      cpu: 2000m
      memory: 4Gi
  podTolerations:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```
```bash
kubectl apply -f redis-ceph-op.yaml
kubectl describe RedisEnterpriseCluster my-rec
```
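The cluster takes a few minutes to converge; its progress can be followed on the custom resource and on the StatefulSet the operator creates (rec is the short name registered by the CRD; the status field queried below is an assumption about where the operator reports its state):

```bash
kubectl get rec my-rec
kubectl get sts,pvc,pods
kubectl get rec my-rec -o jsonpath='{.status.state}'
```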
Enabling the webhook admission controller

```bash
kubectl get secret admission-tls
CERT=`kubectl get secret admission-tls -o jsonpath='{.data.cert}'`
kubectl label ns redis-op namespace-name=redis-op
kubectl get svc
vim webhook.yaml
```

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: redis-enterprise-admission
  labels:
    app: redis-enterprise
webhooks:
- name: redisenterprise.admission.redislabs
  failurePolicy: Fail
  matchPolicy: Exact
  sideEffects: None
  timeoutSeconds: 30
  rules:
  - apiGroups: ["app.redislabs.com"]
    apiVersions: ["v1alpha1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["redisenterprisedatabases", "redisenterpriseactiveactivedatabases", "redisenterpriseremoteclusters"]
  clientConfig:
    service:
      namespace: redis-op
      name: admission
      path: /admission
    caBundle: $CERT
  admissionReviewVersions: ["v1beta1"]
```

```bash
sed "s/\$CERT/$CERT/g" webhook.yaml | kubectl create -f -

# Verify the webhook rejects an invalid database spec
kubectl apply -f - << EOF
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  evictionPolicy: illegal
EOF
```
Using redis-operator

```bash
vim mydb.yaml
```

```yaml
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: mydb
spec:
  memorySize: 1GB
```

```bash
kubectl apply -f mydb.yaml
kubectl edit redb mydb
kubectl delete redb mydb

kubectl get redb mydb -o jsonpath="{.spec.databaseSecretName}"
kubectl get secret redb-mydb -o jsonpath="{.data.password}" | base64 --decode
kubectl get secret redb-mydb -o jsonpath="{.data.port}" | base64 --decode
kubectl get secret redb-mydb -o jsonpath="{.data.service_names}" | base64 --decode

kubectl run myredis --image redis
kubectl exec -it myredis -- bash
> redis-cli -h mydb -p 18124
> keys *
```
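The database requires the generated password when connecting; a small follow-up sketch that reuses the myredis pod and the secret values read above (the port 18124 and the mydb service name come from this particular deployment):

```bash
PASS=$(kubectl get secret redb-mydb -o jsonpath="{.data.password}" | base64 --decode)
kubectl exec -it myredis -- redis-cli -h mydb -p 18124 -a "$PASS" ping
```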
Deploying a MySQL NDB Cluster with OLM
Install operator-sdk

```bash
export ARCH=$(case $(uname -m) in x86_64) echo -n amd64 ;; aarch64) echo -n arm64 ;; *) echo -n $(uname -m) ;; esac)
export OS=$(uname | awk '{print tolower($0)}')
export OPERATOR_SDK_DL_URL=https://github.com/operator-framework/operator-sdk/releases/download/v1.33.0
curl -LO ${OPERATOR_SDK_DL_URL}/operator-sdk_${OS}_${ARCH}
gpg --keyserver keyserver.ubuntu.com --recv-keys 365189qrgionr
curl -LO ${OPERATOR_SDK_DL_URL}/checksums.txt    # needed by the verification below
grep operator-sdk_${OS}_${ARCH} checksums.txt | sha256sum -c -
chmod +x operator-sdk_${OS}_${ARCH} && mv operator-sdk_${OS}_${ARCH} /usr/local/bin/operator-sdk
```
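A quick sanity check that the binary is on the PATH:

```bash
operator-sdk version
```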
Install OLM
Install via operator-sdk

```bash
operator-sdk olm install
operator-sdk olm uninstall
```
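The SDK can also report whether an existing OLM installation is healthy:

```bash
operator-sdk olm status
```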
Install via manifests

```bash
wget https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.26.0/crds.yaml
wget https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.26.0/olm.yaml
kubectl create -f crds.yaml
kubectl create -f olm.yaml

# To uninstall:
kubectl delete -f olm.yaml
kubectl delete apiservices.apiregistration.k8s.io v1.packages.operators.coreos.com
kubectl delete -f crds.yaml
```
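Whichever install path is used, OLM lands in the olm and operators namespaces; a quick check that it is running:

```bash
kubectl get ns olm operators
kubectl -n olm get pods
kubectl -n olm get catalogsource
kubectl -n olm get csv
```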
Installing an Operator with OLM

```bash
kubectl get packagemanifest -n olm
kubectl get packagemanifest -n olm project-quay -o yaml

cat operatorgroup.yaml
```

```yaml
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: og-single
  namespace: default
spec:
  targetNamespaces:
  - default
```

```bash
kubectl apply -f operatorgroup.yaml
kubectl get operatorgroups -n operators

cat subscriptions.yaml
```

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay
  namespace: operators
spec:
  channel: stable-3.8
  installPlanApproval: Automatic
  name: project-quay
  source: operatorhubio-catalog
  sourceNamespace: olm
  startingCSV: quay-operator.v3.8.1
```

```bash
kubectl apply -f subscriptions.yaml
```
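Once the Subscription is applied, OLM resolves it into an InstallPlan and a ClusterServiceVersion; their progress can be followed with:

```bash
kubectl -n operators get subscription quay
kubectl -n operators get installplan
kubectl -n operators get csv
kubectl -n operators get pods
```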
Deploying MySQL NDB with OLM

```bash
kubectl create -f https://operatorhub.io/install/ndb-operator.yaml
kubectl -n operators get sub
kubectl -n operators get pod
kubectl get csv -n operators

vim ndb.yaml
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ndb
---
apiVersion: v1
kind: Secret
metadata:
  name: ndbop-mysql-secret
  namespace: ndb
type: Opaque
stringData:
  password: "123465"
---
apiVersion: mysql.oracle.com/v1
kind: NdbCluster
metadata:
  name: my-ndb
  namespace: ndb
spec:
  redundancyLevel: 3
  dataNode:
    nodeCount: 3
    ndbPodSpec:
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
    config:
      DataMemory: 1000M
      MaxNoOfTables: 1024
      MaxNoOfConcurrentOperations: 409600
    pvcSpec:
      storageClassName: ceph-rbd
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
  mysqlNodes:
    nodeCount: 2
    rootPasswordSecretName: ndbop-mysql-secret
    myCnf: |
      [mysqld]
      max-user-connections=42
      ndb-extra-logging=10
    pvcSpec:
      storageClassName: ceph-rbd
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
```

```bash
kubectl apply -f ndb.yaml
kubectl -n ndb get pod
kubectl -n ndb get pvc

kubectl -n ndb exec -it pod/my-ndb-mysqld-0 -- bash
> ndb_mgm -c my-ndb-mgmd
> show
> exit
> mysql --protocol=tcp -h my-ndb-mysqld -u root -p
> show databases;
> create database test;
> use test;
> create table cities(
    id int primary key auto_increment,
    name char(50),
    population int
  ) engine ndb;
> insert into cities(name, population) values
  ('Bengaluru', 851656),
  ('Chennai', 9189849),
  ('Mysuru', 98198),
  ('Tirunelveli', 98197197);
```
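The operator reports readiness on the NdbCluster resource and creates one Service per node type; a quick look before connecting (ndbclusters is assumed to be the CRD's plural resource name):

```bash
kubectl -n ndb get ndbclusters my-ndb
kubectl -n ndb get svc
```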
Accessing the MySQL cluster from outside Kubernetes

```bash
kubectl -n ndb get services -l mysql.oracle.com/v1=my-ndb    # list the services created for the cluster
# Enable LoadBalancer services for the management and mysql nodes (the Services can also be modified directly — see the sketch below)
kubectl -n ndb patch ndb my-ndb --type='merge' -p '{"spec":{"managementNode":{"enableLoadBalancer": true},"mysqlNode":{"enableLoadBalancer": true}}}'
kubectl get svc -n ndb
```
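As an alternative to patching the NdbCluster, the Service can be exposed directly; a hedged sketch, assuming the MySQL service created by the operator is named my-ndb-mysqld:

```bash
# Expose the mysqld Service as a LoadBalancer (or NodePort) directly
kubectl -n ndb patch svc my-ndb-mysqld -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n ndb get svc my-ndb-mysqld
```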
```bash
# From an external machine
kubectl -n ndb exec -it pod/my-ndb-mysqld-0 -- bash
> mysql -h 192.168.239.142 -P 30094 -u root -p
```