"volume node affinity conflict" when deploying a pod with a Cinder-backed dynamic PV to OpenStack Zun via virtual-kubelet in Kubernetes

Phedra · Updated 2024-11-10

1. Environment information

Same as in "Deploying a pod with a dynamic PV to OpenStack Zun via k8s + virtual-kubelet".

2. Creating a pod with a Cinder-backed dynamic PV

See "Deploying a pod with a dynamic PV to OpenStack Zun via k8s + virtual-kubelet" for the detailed creation steps.
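For reference, a minimal sketch of what the testpvc.yaml applied below might contain. The pod name (testpvcpod), PVC name (testpvc), StorageClass (standard), size (2Gi), and access mode (RWO) are taken from the kubectl outputs later in this article; the container image and mount path are only illustrative placeholders:

```yaml
# Sketch of testpvc.yaml; image and mountPath are hypothetical
apiVersion: v1
kind: Pod
metadata:
  name: testpvcpod
spec:
  containers:
    - name: testpvcpod
      image: nginx            # illustrative image
      volumeMounts:
        - name: testpvc-vol
          mountPath: /data    # illustrative mount path
  volumes:
    - name: testpvc-vol
      persistentVolumeClaim:
        claimName: testpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: standard  # the Cinder-backed dynamic provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```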

3. Pod creation fails

The pod stays in the Pending state.

3.1. Pod status

List the pods:

```shell
# kubectl get pods -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
testpvcpod   0/1     Pending   0          22s
```

3.2. Pod details

Describe the failing pod. The problem is in scheduling, and the events show two reasons for the scheduling failure.

3.2.1. 4 node(s) had taints that the pod didn't tolerate

This happens because the 4 master nodes and the 1 regular node all carry taints. Since this article only needs the pod to run on the two virtual-kubelet nodes, the taints are left as they are.
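(If the pod did need to land on one of those tainted nodes, the usual fix would be a toleration in the pod spec. The taint key and effect below are only an illustration; the actual values depend on how the cluster's nodes were tainted:)

```yaml
# Hypothetical pod-spec fragment; key/effect must match the node's real taint
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```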

3.2.2. 2 node(s) had volume node affinity conflict

This is the problem this article addresses.

```shell
# kubectl describe pods testpvcpod
...
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling        default-scheduler  persistentvolumeclaim "testpvc" not found
  Warning  FailedScheduling        default-scheduler  persistentvolumeclaim "testpvc" not found
  Warning  FailedScheduling        default-scheduler  0/6 nodes are available: 2 node(s) had volume node affinity conflict, 4 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling        default-scheduler  0/6 nodes are available: 2 node(s) had volume node affinity conflict, 4 node(s) had taints that the pod didn't tolerate.
```

3.3. Check the PV and PVC

3.3.1. List the PV and PVC

```shell
# kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
testpvc   Bound    pvc-c9e46b6b-cd52-474d-9835-8f5757ac5ec7   2Gi        RWO            standard       26m
[root@k8s-m1 test]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
pvc-c9e46b6b-cd52-474d-9835-8f5757ac5ec7   2Gi        RWO            Delete           Bound    default/testpvc   standard                26m
```

Both the PV and the PVC were created normally.

3.3.2. PV details

The PV details show that the PV carries a node affinity requirement:

```shell
# kubectl describe pv pvc-c9e46b6b-cd52-474d-9835-8f5757ac5ec7
Name:            pvc-c9e46b6b-cd52-474d-9835-8f5757ac5ec7
Labels:          failure-domain.beta.kubernetes.io/zone=nova
Annotations:     kubernetes.io/createdby: cinder-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/cinder
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    standard
Status:          Bound
Claim:           default/testpvc
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        2Gi
Node Affinity:
  Required Terms:
    Term 0:  failure-domain.beta.kubernetes.io/zone in [nova]
Message:
Source:
    Type:       Cinder (a Persistent Disk resource in OpenStack)
    VolumeID:   9ac04a59-146c-4300-9030-58c1e153c2f1
    FSType:
    ReadOnly:   false
    SecretRef:  nil
Events:
```

4. Fixing the problem

As the analysis in section 3 shows, the PV requires nodes in zone nova, so the virtual-kubelet nodes need to be given the matching label.
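The check that fails here can be sketched in a few lines. This is a simplified model of the scheduler's volume node affinity logic, not the actual kube-scheduler code: a PV's required node selector terms are ORed, the match expressions within one term are ANDed, and an `In` expression requires the node's label value to appear in the expression's value list.

```python
# Simplified model of the volume-node-affinity check (not real scheduler code).
def node_satisfies_pv(node_labels, required_terms):
    """True if the node's labels satisfy at least one required term
    (terms are ORed; matchExpressions within a term are ANDed)."""
    for term in required_terms:
        if all(
            expr["operator"] == "In"
            and node_labels.get(expr["key"]) in expr["values"]
            for expr in term["matchExpressions"]
        ):
            return True
    return False

# The PV in this article requires: failure-domain.beta.kubernetes.io/zone in [nova]
required = [{"matchExpressions": [
    {"key": "failure-domain.beta.kubernetes.io/zone",
     "operator": "In",
     "values": ["nova"]},
]}]

# Before the fix the virtual-kubelet node has no zone label -> conflict
print(node_satisfies_pv({}, required))  # False
# After labeling the node with failure-domain.beta.kubernetes.io/zone=nova
print(node_satisfies_pv(
    {"failure-domain.beta.kubernetes.io/zone": "nova"}, required))  # True
```

This is why labeling the virtual-kubelet nodes with the zone from the PV's `Node Affinity` section, as done below, makes the pod schedulable.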

4.1. Label the node

```shell
# kubectl label node virtual-kubelet failure-domain.beta.kubernetes.io/zone=nova
node/virtual-kubelet labeled
```

4.2. Redeploy the pod

```shell
# kubectl delete -f testpvc.yaml
pod "testpvcpod" deleted
# kubectl apply -f testpvc.yaml
pod/testpvcpod created
persistentvolumeclaim/testpvc created
```

4.3. Pod deployed successfully

```shell
# kubectl get pods
NAME         READY   STATUS    RESTARTS   AGE
testpvcpod   1/1     Running   0          4m29s
```
Author: vxlinux2019



