
不背锅运维 (No-Blame Ops): K8s Taints and Tolerations

Posted: 2020-02-15 06:39:33


Opening Remarks

This post covers Kubernetes taints and tolerations. Thanks to everyone who keeps following along. If you are interested in Kubernetes certification, feel free to get in touch; for details on the 《K8s CKA+CKS认证实战班》 class, see: https://mp./s/h1bjcIwy2enVD203o-ntlA

Taints and Tolerations

What is a taint?

Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or as a hard requirement). A taint (Taint) is the opposite: it lets a node repel a set of Pods. In other words, a taint keeps Pods from being scheduled onto a particular node; the node actively tells an ordinary Pod, "I refuse this assignment."

What is a toleration?

Tolerations are applied to Pods. A toleration allows the scheduler to place a Pod onto a node carrying a matching taint; it permits scheduling but does not guarantee it. In other words, adding a toleration means the Pod accepts the taint and **may** be scheduled onto the tainted node (if you want a Pod to be eligible for a tainted node, add a tolerations field to the Pod spec).

Taints and tolerations (Toleration) work together to keep Pods away from unsuitable nodes. Each node can carry one or more taints, and a node will not accept any Pod that does not tolerate its taints.

In plain terms: label-based assignment takes the Pod's point of view, adding attributes to the Pod to decide which node it should land on. You can also take the node's point of view: adding a taint to the node keeps unsuitable Pods away from it.
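The repel/tolerate relationship described above can be sketched as a small predicate. This is a simplified illustration of the matching rules, not the real kube-scheduler code; the function names and data shapes are invented for this sketch, and only the Equal operator with the NoSchedule effect is modeled.

```python
# Simplified sketch of taint/toleration matching (illustrative only).
# A node rejects a Pod if the node carries any NoSchedule taint that
# the Pod does not tolerate.

def tolerates(toleration, taint):
    """Equal-operator matching: key, value, and effect must all line up."""
    return (toleration["key"] == taint["key"]
            and toleration["value"] == taint["value"]
            and toleration["effect"] == taint["effect"])

def node_accepts(pod_tolerations, node_taints):
    for taint in node_taints:
        if taint["effect"] == "NoSchedule":
            if not any(tolerates(t, taint) for t in pod_tolerations):
                return False  # an untolerated NoSchedule taint repels the Pod
    return True

taint = {"key": "disktype", "value": "sas", "effect": "NoSchedule"}
toleration = {"key": "disktype", "value": "sas", "effect": "NoSchedule"}

print(node_accepts([], [taint]))            # False: a plain Pod is repelled
print(node_accepts([toleration], [taint]))  # True: the taint is tolerated
```

A Pod with no tolerations is repelled by the tainted node, while the same Pod with a matching toleration becomes eligible for it, which is exactly the behavior the experiments below demonstrate.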

Syntax

Adding a taint to a node:

```
kubectl taint node xxxx key=value:[effect]
```

effect values:

- NoSchedule: Pods that do not tolerate the taint will not be scheduled onto the node
- PreferNoSchedule: the scheduler tries to avoid the node, but may still use it
- NoExecute: new Pods are not scheduled, and Pods already running on the node are evicted

Removing a taint:

```
kubectl taint node xxxx key=value:[effect]-
```

Hands-on

Taint the node test-b-k8s-node02, then deploy 10 Pods without any scheduling constraints and let Kubernetes schedule them by its own algorithm. Will any of the 10 Pods land on the tainted node?

```
# Add the taint
kubectl taint node test-b-k8s-node02 disktype=sas:NoSchedule

# Check the Taints field in the node details
tantianran@test-b-k8s-master:~$ kubectl describe node test-b-k8s-node02 | grep Taint
Taints:             disktype=sas:NoSchedule
```

goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
goweb-demo-b98869456-84p4b   1/1     Running   0          18s   10.244.240.50   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-cjjj8   1/1     Running   0          18s   10.244.240.13   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-fxgjf   1/1     Running   0          18s   10.244.240.12   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-jfdvl   1/1     Running   0          18s   10.244.240.43   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-k6krp   1/1     Running   0          18s   10.244.240.41   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-kcpsz   1/1     Running   0          18s   10.244.240.6    test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-lrkzc   1/1     Running   0          18s   10.244.240.49   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-nqr2j   1/1     Running   0          18s   10.244.240.33   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-pt5zk   1/1     Running   0          18s   10.244.240.28   test-b-k8s-node01   <none>           <none>
goweb-demo-b98869456-s9rt5   1/1     Running   0          18s   10.244.240.42   test-b-k8s-node01   <none>           <none>
tantianran@test-b-k8s-master:~/goweb-demo$
```

All ten Pods landed on test-b-k8s-node01. test-b-k8s-node02 carries the taint, so it refused to accept any of them.

test-b-k8s-node02 already has the taint. Now let's force Pods toward it with a nodeSelector and see whether they end up on the tainted node.

goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      nodeSelector:
        disktype: sas
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

Check how the Pods were created:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a
NAME                          READY   STATUS    RESTARTS   AGE
goweb-demo-54bc765fff-2gb98   0/1     Pending   0          20s
goweb-demo-54bc765fff-67c56   0/1     Pending   0          20s
goweb-demo-54bc765fff-6fdvx   0/1     Pending   0          20s
goweb-demo-54bc765fff-c2bgd   0/1     Pending   0          20s
goweb-demo-54bc765fff-d55mw   0/1     Pending   0          20s
goweb-demo-54bc765fff-dl4x4   0/1     Pending   0          20s
goweb-demo-54bc765fff-g4vb2   0/1     Pending   0          20s
goweb-demo-54bc765fff-htjkp   0/1     Pending   0          20s
goweb-demo-54bc765fff-s76rh   0/1     Pending   0          20s
goweb-demo-54bc765fff-vg6dn   0/1     Pending   0          20s
tantianran@test-b-k8s-master:~/goweb-demo$
```

The awkward outcome is plain to see: the node has a taint, yet we insisted on targeting it, so every Pod is stuck in Pending, waiting to be assigned. So what if you really do need to place Pods on a tainted node? See the next example.

To place Pods on the tainted node, keep the nodeSelector and simply add a toleration. Will the Pods now definitely be scheduled onto the tainted node? Let's test and find out. goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      nodeSelector:
        disktype: sas
      tolerations:
      - key: "disktype"
        operator: "Equal"
        value: "sas"
        effect: "NoSchedule"
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

Check the Pods' status:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a
NAME                          READY   STATUS    RESTARTS   AGE
goweb-demo-68cf558b74-6qddp   0/1     Pending   0          109s
goweb-demo-68cf558b74-7g6cm   0/1     Pending   0          109s
goweb-demo-68cf558b74-f7g6t   0/1     Pending   0          109s
goweb-demo-68cf558b74-kcs9j   0/1     Pending   0          109s
goweb-demo-68cf558b74-kxssv   0/1     Pending   0          109s
goweb-demo-68cf558b74-pgrvb   0/1     Pending   0          109s
goweb-demo-68cf558b74-ps5dn   0/1     Pending   0          109s
goweb-demo-68cf558b74-rb2w5   0/1     Pending   0          109s
goweb-demo-68cf558b74-tcnj4   0/1     Pending   0          109s
goweb-demo-68cf558b74-txqfs   0/1     Pending   0          109s
```

In the YAML above, the tolerations field is the toleration. The test answers the question we just asked: with the nodeSelector kept and a toleration added, will the Pods definitely land on the tainted node? The answer: no. The reason is that taints and node labels are independent things: kubectl taint adds a taint, not a label, so no node in the cluster carries the label disktype=sas and the nodeSelector can never be satisfied. The Pods stay Pending even though the taint itself is now tolerated.
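The two independent gates can be sketched as a feasibility check. This is a simplified illustration with invented helper names, not scheduler code: a node is feasible only if its labels satisfy the Pod's nodeSelector AND all of its NoSchedule taints are tolerated, and tainting a node changes only the taints, never the labels.

```python
# Sketch: scheduling feasibility = label match AND taint toleration.
# Illustrative only; function names are invented for this example.

def selector_matches(node_labels, node_selector):
    # nodeSelector is checked against node *labels* only
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def taints_tolerated(node_taints, tolerations):
    # every NoSchedule taint must be tolerated (Equal-operator matching)
    def tolerated(taint):
        return any(t["key"] == taint["key"] and t["value"] == taint["value"]
                   and t["effect"] == taint["effect"] for t in tolerations)
    return all(tolerated(t) for t in node_taints if t["effect"] == "NoSchedule")

def feasible(node_labels, node_taints, node_selector, tolerations):
    return (selector_matches(node_labels, node_selector)
            and taints_tolerated(node_taints, tolerations))

# node02 as in the article: tainted disktype=sas, but with *no* disktype label
node_labels = {"kubernetes.io/hostname": "test-b-k8s-node02"}
node_taints = [{"key": "disktype", "value": "sas", "effect": "NoSchedule"}]
tol = [{"key": "disktype", "value": "sas", "effect": "NoSchedule"}]

print(feasible(node_labels, node_taints, {"disktype": "sas"}, tol))  # False: label gate fails, Pod stays Pending
print(feasible(node_labels, node_taints, {}, tol))                   # True: once the selector is dropped
```

The first call mirrors the experiment above (toleration present, nodeSelector unsatisfiable, result Pending); the second mirrors the next experiment, where dropping the nodeSelector makes the tainted node feasible.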

So what now? Keep testing: remove the nodeSelector and keep only the toleration. Will the Pods now have a chance of landing on the tainted node?

goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      tolerations:
      - key: "disktype"
        operator: "Equal"
        value: "sas"
        effect: "NoSchedule"
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

Check the Pods:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME                          READY   STATUS    RESTARTS   AGE    IP              NODE                NOMINATED NODE   READINESS GATES
goweb-demo-55ff5cd68c-287vw   1/1     Running   0          110s   10.244.222.57   test-b-k8s-node02   <none>           <none>
goweb-demo-55ff5cd68c-7s7zb   1/1     Running   0          110s   10.244.222.24   test-b-k8s-node02   <none>           <none>
goweb-demo-55ff5cd68c-84jww   1/1     Running   0          110s   10.244.240.24   test-b-k8s-node01   <none>           <none>
goweb-demo-55ff5cd68c-b5l9m   1/1     Running   0          110s   10.244.240.15   test-b-k8s-node01   <none>           <none>
goweb-demo-55ff5cd68c-c2gfp   1/1     Running   0          110s   10.244.222.3    test-b-k8s-node02   <none>           <none>
goweb-demo-55ff5cd68c-hpjn4   1/1     Running   0          110s   10.244.240.62   test-b-k8s-node01   <none>           <none>
goweb-demo-55ff5cd68c-j5bvc   1/1     Running   0          110s   10.244.222.43   test-b-k8s-node02   <none>           <none>
goweb-demo-55ff5cd68c-r95f6   1/1     Running   0          110s   10.244.240.16   test-b-k8s-node01   <none>           <none>
goweb-demo-55ff5cd68c-rhvmw   1/1     Running   0          110s   10.244.240.60   test-b-k8s-node01   <none>           <none>
goweb-demo-55ff5cd68c-rl8nh   1/1     Running   0          110s   10.244.222.8    test-b-k8s-node02   <none>           <none>
```

As the result shows, some Pods were scheduled onto the tainted node test-b-k8s-node02: the Pods carry the toleration, so the taint no longer repels them.

One more small experiment: make the Pods tolerate any taint. The master node carries a taint by default (clusters built from binaries excepted), so could Pods end up on the master? Let's test and see.

First check the master's taint:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl describe node test-b-k8s-master | grep Taint
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
```

goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      tolerations:
      - effect: "NoSchedule"
        operator: "Exists"
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

Check the Pods:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME                          READY   STATUS             RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
goweb-demo-65bbd7b49c-5qb5m   0/1     ImagePullBackOff   0          20s   10.244.82.55    test-b-k8s-master   <none>           <none>
goweb-demo-65bbd7b49c-7qqw8   1/1     Running            0          20s   10.244.222.13   test-b-k8s-node02   <none>           <none>
goweb-demo-65bbd7b49c-9tflk   1/1     Running            0          20s   10.244.240.27   test-b-k8s-node01   <none>           <none>
goweb-demo-65bbd7b49c-dgxhx   1/1     Running            0          20s   10.244.222.44   test-b-k8s-node02   <none>           <none>
goweb-demo-65bbd7b49c-fbmn5   1/1     Running            0          20s   10.244.240.1    test-b-k8s-node01   <none>           <none>
goweb-demo-65bbd7b49c-h2nnz   1/1     Running            0          20s   10.244.240.39   test-b-k8s-node01   <none>           <none>
goweb-demo-65bbd7b49c-kczsp   1/1     Running            0          20s   10.244.240.40   test-b-k8s-node01   <none>           <none>
goweb-demo-65bbd7b49c-ms768   1/1     Running            0          20s   10.244.222.45   test-b-k8s-node02   <none>           <none>
goweb-demo-65bbd7b49c-pbwht   0/1     ErrImagePull       0          20s   10.244.82.56    test-b-k8s-master   <none>           <none>
goweb-demo-65bbd7b49c-zqxlt   1/1     Running            0          20s   10.244.222.18   test-b-k8s-node02   <none>           <none>
```

Sure enough, two Pods landed on the master node (test-b-k8s-master). The master has no local copy of the image and the harbor server is powered off, so the image pulls failed, but that is beside the point. The point is that Pods reached the master, because of this Kubernetes rule: a toleration whose key is empty and whose operator is Exists matches every key, value, and effect, meaning it tolerates any taint. Once you know how Exists works, it's clear why the Pods ended up on the master.
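The Exists rule can be sketched by extending the toleration matcher. Again a simplified illustration, not scheduler code: with operator Exists the value is ignored, an empty key matches any key, and an empty effect matches any effect.

```python
# Sketch of toleration matching including the Exists operator (illustrative).
# Per the Kubernetes rules quoted above: operator Exists ignores the value;
# an empty key with Exists matches every taint key; an empty effect matches
# all effects.

def toleration_matches(tol, taint):
    if tol.get("operator", "Equal") == "Exists":
        key_ok = tol.get("key", "") in ("", taint["key"])
        value_ok = True  # Exists never compares values
    else:  # Equal
        key_ok = tol.get("key") == taint["key"]
        value_ok = tol.get("value") == taint["value"]
    effect_ok = tol.get("effect", "") in ("", taint["effect"])
    return key_ok and value_ok and effect_ok

# The toleration used in the last Deployment: no key, operator Exists
tol = {"operator": "Exists", "effect": "NoSchedule"}

master_taint = {"key": "node-role.kubernetes.io/control-plane",
                "value": "", "effect": "NoSchedule"}
disk_taint = {"key": "disktype", "value": "sas", "effect": "NoSchedule"}

print(toleration_matches(tol, master_taint))  # True: even the master taint is tolerated
print(toleration_matches(tol, disk_taint))    # True: any NoSchedule taint matches
```

Because the keyless Exists toleration matches every NoSchedule taint, the master's control-plane taint no longer repels these Pods, which is exactly why two of them were scheduled there.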

Warning: the master is tainted by default because it is the management node; for safety reasons, ordinary Pods (that is, business workloads) should not run on the master.

Is there any way to force Pods onto a tainted node regardless? Let's try.

Node test-b-k8s-node02 is tainted:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl describe node test-b-k8s-node02 | grep Taint
Taints:             disktype=sas:NoSchedule
```

goweb-demo.yaml

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-a
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  replicas: 10
  selector:
    matchLabels:
      app: goweb-demo
  template:
    metadata:
      labels:
        app: goweb-demo
    spec:
      nodeName: test-b-k8s-node02
      containers:
      - name: goweb-demo
        image: 192.168.11.247/web-demo/goweb-demo:1229v3
---
apiVersion: v1
kind: Service
metadata:
  name: goweb-demo
  namespace: test-a
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8090
  selector:
    app: goweb-demo
  type: NodePort
```

Note the nodeName field in the config above: nodeName pins the Pod to a named node, and its mechanism **bypasses the scheduler** entirely.

Check the Pods:

```
tantianran@test-b-k8s-master:~/goweb-demo$ kubectl get pod -n test-a -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP              NODE                NOMINATED NODE   READINESS GATES
goweb-demo-dd446d4b9-2zdnx   1/1     Running   0          13s   10.244.222.39   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-4qbg9   1/1     Running   0          13s   10.244.222.6    test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-67cpl   1/1     Running   0          13s   10.244.222.63   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-fhsgf   1/1     Running   0          13s   10.244.222.53   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-gp9gj   1/1     Running   0          13s   10.244.222.49   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-hzvs2   1/1     Running   0          13s   10.244.222.9    test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-px598   1/1     Running   0          13s   10.244.222.22   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-rkbm4   1/1     Running   0          13s   10.244.222.40   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-vr9mq   1/1     Running   0          13s   10.244.222.17   test-b-k8s-node02   <none>           <none>
goweb-demo-dd446d4b9-wnfqc   1/1     Running   0          13s   10.244.222.16   test-b-k8s-node02   <none>           <none>
```

All ten Pods were assigned to test-b-k8s-node02. Why didn't some go to test-b-k8s-node01? Because nodeName never goes through the scheduler at all. Use the nodeName field sparingly in production: with every Pod on one node, you have a single point of failure. In a test environment, though, it's perfectly fine.

Closing

That's it for this post; more small case studies are on the way, so stay tuned. For the K8s certification class, see: https://mp./s/h1bjcIwy2enVD203o-ntlA

Reposted from: https://mp./s/qJ8gr4xyuTjXkA6p9Yrp7g
