After I renewed the Kubernetes certificates, I found that scaling no longer works in my cluster (and when I delete a pod, a replacement pod is not created automatically either). When I changed the desired replicas from 1 to 2, this is the output of `kubectl describe deployment`:
Name:                   texhub-server-service
Namespace:              reddwarf-pro
CreationTimestamp:      Thu, 04 Jul 2024 00:07:20 +0800
Labels:                 app=texhub-server-service
                        k8slens-edit-resource-version=v1
Annotations:            deployment.kubernetes.io/revision: 209
                        kubernetes.io/change-cause:
                          kubectl set image deployment/texhub-server-service texhub-server-service=registry.cn-hongkong.aliyuncs.com/reddwarf-pro/texhub-server:982b...
                        promtail.io/scrape: true
Selector:               app=texhub-server-service
Replicas:               2 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       app=texhub-server-service
  Annotations:  kubectl.kubernetes.io/restartedAt: 2025-10-24T22:46:07+08:00
                promtail.io/scrape: true
  Containers:
   texhub-server-service:
    Image:      registry.cn-hongkong.aliyuncs.com/reddwarf-pro/texhub-server:982bd7b0c7b728ba773ab9a55a84d026905185c3
    Port:       8000/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     500m
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  35Mi
    Liveness:  http-get http://:8000/texhub/actuator/liveness delay=15s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TEXHUB_REDIS_URL:  Optional: false
      REDIS_URL:         Optional: false
      TEX_DATABASE_URL:  Optional: false
      MEILI_MASTER_KEY:  Optional: false
      ENV:               Optional: false
      INFRA_URL:         Optional: false
    Mounts:
      /opt/data from texhub-server-service-persistent-storage (rw)
  Volumes:
   texhub-server-service-persistent-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  texhub-server-service-pv-claim-qingdao
    ReadOnly:   false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  texhub-server-service-5485b6f458 (0/0 replicas created), texhub-server-service-7b9d84c58c (0/0 replicas created), texhub-server-service-56d6c9bd84 (0/0 replicas created), texhub-server-service-677bf954c9 (0/0 replicas created), texhub-server-service-77d9f59c84 (0/0 replicas created), texhub-server-service-756dc69747 (0/0 replicas created), texhub-server-service-8c5d5447 (0/0 replicas created), texhub-server-service-76976f77c8 (0/0 replicas created), texhub-server-service-78bc7b9976 (0/0 replicas created), texhub-server-service-6b9d49fd8f (0/0 replicas created)
NewReplicaSet:   texhub-server-service-69564b488d (1/1 replicas created)
Events:
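Since the missing pod would be created through the Deployment's new ReplicaSet, one way I thought of to narrow this down is to look at that ReplicaSet and at recent events directly (a sketch, using the namespace and ReplicaSet name from the describe output above):

```shell
# The Deployment reports NewReplicaSet: texhub-server-service-69564b488d (1/1),
# so check what replica count the ReplicaSet was actually given.
kubectl -n reddwarf-pro get rs texhub-server-service-69564b488d -o wide

# Events on the ReplicaSet should show failed pod creations
# (quota, admission, or authorization errors).
kubectl -n reddwarf-pro describe rs texhub-server-service-69564b488d

# Recent events across the namespace, oldest first.
kubectl -n reddwarf-pro get events --sort-by=.metadata.creationTimestamp
```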
Am I missing something? Why is the new pod not created? I checked the kubelet and kube-controller-manager processes:
[root@iZm5e2jhfbrshckqh6qdbuZ ~]# ps aux|grep "kubelet"
root 424333 2.4 3.0 2075804 112328 ? Ssl 10:56 0:41 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
root 444475 0.0 0.0 11804 1192 pts/0 S+ 11:24 0:00 grep --color=auto kubelet
nobody 516738 0.0 0.2 1241440 10120 ? Ssl 2024 5:14 /bin/node_exporter --path.sysfs=/host/sys --path.rootfs=/host/root --no-collector.wifi --no-collector.hwmon --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/pods/.+)($|/) --collector.netclass.ignored-devices=^(veth.*)$
root 2635383 2.6 10.8 1685816 402932 ? Ssl Nov24 235:57 kube-apiserver --advertise-address=172.31.227.20 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
[root@iZm5e2jhfbrshckqh6qdbuZ ~]# ps aux|grep "controller"
root 441528 0.0 0.0 11804 1192 pts/0 S+ 11:20 0:00 grep --color=auto controller
root 3004315 0.1 0.5 1333488 22048 ? Ssl Nov24 8:59 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true
[root@iZm5e2jhfbrshckqh6qdbuZ ~]#
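Before concluding anything from `ps` alone, I assume the component logs could show authentication errors caused by the renewed certificates; a sketch of what I could check (the controller-manager pod name is node-specific, so the suffix below is just this node's hostname and may need adjusting):

```shell
# kube-controller-manager runs as a static pod on kubeadm nodes; its pod name
# is kube-controller-manager-<node-name> (suffix here is a guess from the prompt).
kubectl -n kube-system logs kube-controller-manager-iZm5e2jhfbrshckqh6qdbuZ --tail=100

# If kubectl itself is rejected, the container is also reachable via the runtime.
crictl ps --name kube-controller-manager

# kubelet logs often surface certificate problems explicitly.
journalctl -u kubelet --since "1 hour ago" | grep -i -E "certificate|unauthorized"
```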
Both processes appear to be running fine. What should I do to find out the cause? This is the version info:
[root@iZm5e2jhfbrshckqh6qdbuZ ~]# kubectl version
Client Version: v1.30.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.0
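Since the problem started right after the certificate renewal, I suspect the controller-manager may still be holding pre-renewal credentials (its process started on Nov24, which may predate the renewal). Assuming this is a kubeadm cluster (the paths under `/etc/kubernetes/pki` suggest so), a sketch of what I was planning to verify:

```shell
# Check the expiry of all control-plane certificates (kubeadm clusters).
kubeadm certs check-expiration

# The controller-manager authenticates with a client certificate embedded in
# its kubeconfig; decode it and check its validity window.
grep 'client-certificate-data' /etc/kubernetes/controller-manager.conf \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -dates

# If certificates were renewed but the static pods never restarted, they keep
# the old credentials in memory; moving the manifest out and back forces the
# kubelet to recreate the pod.
mv /etc/kubernetes/manifests/kube-controller-manager.yaml /tmp/ \
  && sleep 20 \
  && mv /tmp/kube-controller-manager.yaml /etc/kubernetes/manifests/
```

Would that restart be the right way to make the controller-manager pick up the renewed certificates, or is there something else to check first?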