All of the commands below have been verified. If you know a better way, feel free to leave a comment.
CKA Exercises and Real Exam Questions — Index
CKA exam experience: registration and syllabus
CKA: December 2019 English questions and score weights
- CKA exercises: Kubernetes basics — API objects
- CKA exercises: scheduling — nodeAffinity, podAffinity, Taints
- CKA exercises: logging, monitoring, and application management
- CKA exercises: networking — Pod networking, Ingress, DNS
- CKA exercises: storage — volumes, PV, PVC
- CKA exercises: security — NetworkPolicy, ServiceAccount, ClusterRole
- CKA exercises: Kubernetes troubleshooting
- CKA real exam: questions and solutions, parts 1–6
- CKA real exam: manually configuring TLS bootstrap
5. Creating a pod with multiple containers

Set configuration context: $ kubectl config use-context k8s
Create a pod named kucc4 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul
Answer:
First generate a simple pod template from the command line, write it to a YAML file, and then edit it to match the task:

```shell
kubectl run kucc4 --image=nginx --generator=run-pod/v1 --restart=Never --dry-run -o yaml > ./5pod.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
status: {}
```
```shell
kubectl apply -f ./5pod.yaml
```
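Before applying, a quick offline sanity check can confirm the edited manifest really lists all four containers — a minimal sketch that rebuilds 5pod.yaml with a heredoc (same content as above) and counts the image entries with grep:

```shell
# Recreate the four-container manifest and count its container images.
cat > 5pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
EOF
# Each container entry starts with "- image:"; expect exactly 4.
grep -c -- '- image:' 5pod.yaml
```

This catches the common editing mistake of mis-indenting a container entry so that it silently drops out of the `containers` list.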
Note: if the task specifies a busybox image, be sure to give it a sleep command; otherwise the container exits immediately and the pod keeps restarting.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
```
Official reference: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/
6. Pod scheduling

Set configuration context: $ kubectl config use-context k8s
Schedule a Pod as follows:
Name:nginx-kusc00101
Image:nginx
Nodeselector:disk=ssd
Answer:
My steps in the exam (copying the YAML from the docs is actually faster):

```shell
kubectl run nginx-kusc00101 --image=nginx --restart=Never --dry-run -o yaml > 6pod.yaml
# add the nodeSelector field
vi 6pod.yaml
kubectl apply -f 6pod.yaml
```
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00101
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
```
Verification: confirm the pod was scheduled successfully.
Before creating the pod, run kubectl get node --show-labels to see which node carries the disk=ssd label; after creating it, run kubectl get pod nginx-kusc00101 -o wide to check that the pod landed on that node.

Official reference: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
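The same placement constraint can also be expressed with nodeAffinity instead of nodeSelector — a sketch of an equivalent manifest, where requiredDuringSchedulingIgnoredDuringExecution behaves like a hard nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00101
spec:
  containers:
  - name: nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
```

nodeSelector is simpler and sufficient for this task; nodeAffinity becomes useful when you need operators such as In, NotIn, or Exists.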
7. Updating a Deployment

Set configuration context: $ kubectl config use-context k8s
Create a deployment as follows
Name:nginx-app
Using container nginx with version 1.11.9-alpine
The deployment should contain 3 replicas
Next, deploy the app with new version 1.12.0-alpine by performing a rolling update and record that update.
Finally, rollback that update to the previous version 1.11.9-alpine
Answer:
```shell
kubectl run nginx-app --image=nginx:1.11.9-alpine --replicas=3
kubectl set image deployment/nginx-app nginx-app=nginx:1.12.0-alpine --record=true
kubectl rollout undo deployment/nginx-app
# Watch rolling update status of "nginx-app" deployment until completion
kubectl rollout status -w deployment nginx-app
# Check the history of deployments including the revision
kubectl rollout history deployment/nginx-app
```
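Note that `kubectl run --replicas` was removed in newer kubectl versions, where `kubectl run` only creates pods. On a recent cluster the same deployment can be created declaratively — a sketch, assuming the label `run: nginx-app` (the label the old `kubectl run` would have generated):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx-app
  template:
    metadata:
      labels:
        run: nginx-app
    spec:
      containers:
      - name: nginx-app
        image: nginx:1.11.9-alpine
```

On recent kubectl versions, `kubectl create deployment nginx-app --image=nginx:1.11.9-alpine --replicas=3` is the imperative equivalent.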
Official reference: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Export the final deployment spec:
```shell
kubectl get deployment nginx-app -o=jsonpath='{.spec}{"\n"}' > 7.txt
```
An alternative using custom columns:

```shell
$ kubectl get deployment nginx-app -o=custom-columns=NAME:spec > 7.txt
$ cat 7.txt
NAME
map[progressDeadlineSeconds:600 replicas:3 revisionHistoryLimit:10 selector:map[matchLabels:map[run:nginx-app]] strategy:map[rollingUpdate:map[maxSurge:25% maxUnavailable:25%] type:RollingUpdate] template:map[metadata:map[creationTimestamp: labels:map[run:nginx-app]] spec:map[containers:[map[image:nginx:1.11.9-alpine imagePullPolicy:IfNotPresent name:nginx-app resources:map[] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30]]]
```
8. Service

Set configuration context: $ kubectl config use-context k8s
Create and configure the service front-end-service so it's accessible through NodePort/ClusterIP and routes to the existing pod named front-end.
Answer:
```shell
[root@vms31 ~]# kubectl expose pod front-end --name=front-end-service --type='NodePort' --port=80
service/front-end-service exposed
[root@vms31 ~]# kubectl get svc
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
front-end-service   NodePort    10.96.19.240   <none>        80:31028/TCP   10s
kubernetes          ClusterIP   10.96.0.1      <none>        443/TCP        174d
```
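For reference, a declarative equivalent of the `kubectl expose` command above — a sketch that assumes the front-end pod carries the label `run: front-end` (the label `kubectl run` would have set; confirm with `kubectl get pod front-end --show-labels` before relying on it):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  type: NodePort
  selector:
    run: front-end
  ports:
  - port: 80
    targetPort: 80
```

`kubectl expose` copies the pod's labels into the service selector automatically, which is why the one-liner is usually faster in the exam.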
Official reference: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Author: 琦彦