Chapter 2: Exploring the Kubernetes Cluster

xiaoxiao · 2022-06-27

Table of Contents

Chapter 2: Exploring the Kubernetes Cluster
  2-1 Roadmap for Exploring the K8s Cluster
  2-2 kubeadm init Step by Step
    2-2.1 Pre-flight Checks
    2-2.2 Generating Private Keys and Certificates
    2-2.3 Generating kubeconfig Files for the Control-Plane Components
    2-2.4 Generating Control-Plane Component Manifest Files
    2-2.5 Pulling Images and Waiting for the Control Plane to Start
    2-2.6 Saving the MasterConfiguration
    2-2.7 Marking the Master Node
    2-2.8 Configuring TLS-Based Secure Bootstrap
    2-2.9 Installing the DNS and kube-proxy Add-ons
  2-3 kubeadm join Step by Step
  2-4 Kubernetes Core Components in Detail
    2-4.1 kubelet: the Node's Pod Steward
    2-4.2 kube-apiserver: the Cluster Management Entry Point
    2-4.3 etcd: the Configuration Store
    2-4.4 kube-controller-manager: the Management and Control Center
    2-4.5 kube-scheduler: the Scheduler
    2-4.6 kube-proxy: the Service Abstraction Implementation
  2-5 kubectl in Detail
    2-5.1 Cluster Access Configuration
    2-5.2 Cluster Control
    2-5.3 Cluster Inspection and Debugging

Chapter 2: Exploring the Kubernetes Cluster

2-1 Roadmap for Exploring the K8s Cluster

What kubeadm actually does; the role of each core component: kubelet, kube-apiserver, etcd, kube-controller-manager, kube-scheduler, kube-proxy; and how to use kubectl.

2-2 kubeadm init Step by Step

2-2.1 Pre-flight Checks

kubeadm checks the environment; if any condition is not met (for example, swap is enabled), initialization is aborted.

kubeadm init pre-flight check:

- the kubeadm version is compared against the Kubernetes version to be installed
- system requirements for installing Kubernetes are checked: the docker service, cgroups, and so on
- other checks:
  - user: whether the current user is root
  - host: whether the hostname is legal (it must not contain underscores) and resolves, either through a hosts-file binding, as a publicly resolvable domain, or as a DNS subdomain
  - ports: whether 6443 (bound by the apiserver) and 10250/10251/10252 (kubelet, kube-scheduler, kube-controller-manager) are free; whether ip, iptables, and mount exist and are on PATH
  - swap: the swap partition must be disabled
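The host-related checks above are easy to mimic by hand. The sketch below is a simplified stand-in for kubeadm's real pre-flight code, not the actual implementation; the function names are my own:

```shell
# simplified re-implementation of two pre-flight checks (illustrative only)

# hostname check: kubeadm rejects hostnames containing underscores
check_hostname() {
  case "$1" in
    *_*) echo "invalid: underscore in hostname" ;;
    *)   echo "ok" ;;
  esac
}

# tool check: the required binaries must exist on PATH
check_tools() {
  for t in "$@"; do
    command -v "$t" >/dev/null || { echo "missing: $t"; return 1; }
  done
  echo "ok"
}

check_hostname "db_k8s-master"   # rejected: contains an underscore
check_hostname "dbk8s-master"    # accepted
check_tools ip iptables mount
```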

2-2.2 Generating Private Keys and Certificates

Generates the certificates and private keys the components use to communicate with each other; by default they live under /etc/kubernetes/pki/.

kubeadm builds its own CA, producing ca.key and ca.crt; ca.crt is a standard x509 certificate, which you can inspect with openssl:

```
$ pwd
/etc/kubernetes/pki
$ ls
apiserver.crt              apiserver.key                 ca.crt  front-proxy-ca.crt      front-proxy-client.key
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.key      sa.key
apiserver-etcd-client.key  apiserver-kubelet-client.key  etcd    front-proxy-client.crt  sa.pub
$ openssl x509 -in ca.crt -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: May 18 04:38:46 2019 GMT
            Not After : May 15 04:38:46 2029 GMT
        Subject: CN = kubernetes
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:cc:7f:f9:b3:47:92:0f:f3:42:73:65:f0:ac:53:
                    46:08:ce:1d:8e:a1:3f:16:9b:fe:e5:fb:83:e0:1e:
                    ... ...
                    d5:9f:d8:35:24:4d:ca:ef:81:c7:de:44:e9:37:f4:
                    6e:ac:e0:16:b5:bf:72:20:60:71:b3:fa:fa:8b:34:
                    bc:b5
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
         09:56:89:45:52:47:ff:36:ea:34:c9:ca:bd:d8:8d:c2:6f:4e:
         52:b0:cf:51:1a:3e:a7:b7:bd:0a:f8:a8:f0:cb:2e:fc:f7:53:
         ... ...
         62:f5:48:37:1d:68:ed:a6:17:8f:5d:3c:79:35:9e:37:b0:fa:
         e5:af:f2:29
```

To check whether apiserver-etcd-client.crt was signed by the cluster root certificate (ca.crt):

```
$ openssl verify -CAfile ca.crt ./apiserver-etcd-client.crt
O = system:masters, CN = kube-apiserver-etcd-client
error 7 at 0 depth lookup: certificate signature failure
error ./apiserver-etcd-client.crt: verification failed
# verification fails here
140224101396544:error:0407008A:rsa routines:RSA_padding_check_PKCS1_type_1:invalid padding:../crypto/rsa/rsa_pk1.c:67:
140224101396544:error:04067072:rsa routines:rsa_ossl_public_decrypt:padding check failed:../crypto/rsa/rsa_ossl.c:573:
140224101396544:error:0D0C5006:asn1 encoding routines:ASN1_item_verify:EVP lib:../crypto/asn1/a_verify.c:171:

# it was in fact signed by etcd's own CA certificate (etcd/ca.crt)
$ openssl verify -CAfile etcd/ca.crt apiserver-etcd-client.crt
apiserver-etcd-client.crt: OK
```

The generated material includes:

- the apiserver's private key and certificate
- the client private key and certificate the apiserver uses to reach the kubelet
- the serviceaccount key pair: sa.key and sa.pub
- etcd's private keys and certificates
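The signed-by relationship is easy to reproduce with a throwaway CA. The sketch below builds its own CA and client certificate in a temporary directory (the paths and subject names are made up for the demo, not the real kubeadm ones) and then runs the same `openssl verify` check:

```shell
# demo: create a toy CA, sign a client certificate with it, verify the chain
tmp=$(mktemp -d)

# 1. self-signed CA, analogous to kubeadm's ca.key / ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=kubernetes" 2>/dev/null

# 2. client key + CSR, then sign the CSR with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/client.key" -out "$tmp/client.csr" \
  -subj "/CN=kube-apiserver-etcd-client" 2>/dev/null
openssl x509 -req -days 1 -in "$tmp/client.csr" \
  -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" -CAcreateserial \
  -out "$tmp/client.crt" 2>/dev/null

# 3. the same check used above against the kubeadm PKI; succeeds with "... OK"
openssl verify -CAfile "$tmp/ca.crt" "$tmp/client.crt"
```

Verifying against any other CA would fail exactly as shown for apiserver-etcd-client.crt above.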

2-2.3 Generating kubeconfig Files for the Control-Plane Components

Generates the configuration files the components use when communicating with each other.

"kubeconfig file" is a collective name for a family of configuration files:

- the component kubeconfig files under /etc/kubernetes/*.conf
- ~/.kube/config, which the output of a successful initialization tells you to set up
- whatever the KUBECONFIG environment variable points to

A kubeconfig contains cluster, user, and context entries. To view the configuration:

```
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://172.16.81.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
```

Contexts allow kubectl to switch quickly between clusters, which makes multi-cluster management easy.

2-2.4 Generating Control-Plane Component Manifest Files

The generated manifest files are read on the master node to start the control-plane components and keep them running.

Component manifest files, under /etc/kubernetes/manifests:

- etcd.yaml
- kube-apiserver.yaml
- kube-controller-manager.yaml
- kube-scheduler.yaml

The control-plane components run as Static Pods. A Static Pod is run and managed directly by the kubelet on the node rather than being created through the apiserver (the apiserver only sees read-only "mirror" pods for them). The kubelet watches the manifests directory and starts and stops the control-plane component Pods as those files change.
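As a sketch, a static pod manifest is just an ordinary Pod spec dropped into that directory; the file and pod names below are hypothetical, purely for illustration:

```yaml
# hypothetical example: /etc/kubernetes/manifests/hello.yaml
# the kubelet would pick this up and run it without any apiserver involvement
apiVersion: v1
kind: Pod
metadata:
  name: hello-static
  namespace: kube-system
spec:
  containers:
  - name: hello
    image: nginx:alpine
    ports:
    - containerPort: 80
```

Deleting the file is enough to stop the pod; the kubelet reconciles the directory contents with what is running.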

2-2.5 Pulling Images and Waiting for the Control Plane to Start

By default, images are pulled from Google's official registry; inside China you need to pull the images in advance and tag them appropriately.

- the component images are pulled from k8s.gcr.io
- kubeadm probes the localhost:6443/healthz endpoint and waits for it to return success
- after a while, the main control-plane components are all up
- the dns and kube-proxy add-ons are installed; kube-proxy is deployed as a DaemonSet:

```
$ kubectl get daemonset -n kube-system
NAME         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-proxy   3         3         3         3            3           <none>          1d
weave-net    3         3         3         3            3           <none>          1d
```

- kube-dns is deployed (core-dns can be used instead); the dns add-on stays in the Pending state until the cluster network is ready
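The "wait for /healthz" behavior is essentially a retry loop. The helper below is my own sketch of the idea, not kubeadm's actual code:

```shell
# poll a command until it succeeds or the attempts run out
wait_for() {
  tries=$1; shift
  while [ "$tries" -gt 0 ]; do
    "$@" && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# kubeadm effectively does the equivalent of:
#   wait_for 60 curl -k -sf https://localhost:6443/healthz
# here the helper is demonstrated with a command that succeeds immediately
wait_for 3 true && echo "control plane ready"
```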

2-2.6 Saving the MasterConfiguration

Saves the MasterConfiguration settings.

2-2.7 Marking the Master Node

Marks the current node as a master node; by default, ordinary workloads are not scheduled onto master nodes.

2-2.8 Configuring TLS-Based Secure Bootstrap

2-2.9 Installing the DNS and kube-proxy Add-ons

After installation the DNS add-on sits in the Pending state; it only switches to the normal Running state once the network add-on has been installed.

2-3 kubeadm join Step by Step

join adds a node to the cluster:

- pre-join checks
- discovery-token-ca-cert-hash: lets the Node verify the Master's identity. The hash is computed from the CA certificate's public key:

```
$ openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -pubkey | openssl rsa -pubin -outform DER 2> /dev/null | sha256sum | cut -d ' ' -f 1
59012afebb4d66d6204f27598d634d53a94d2f296b5c49df670c6e47ed255799

# the officially documented variant
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
59012afebb4d66d6204f27598d634d53a94d2f296b5c49df670c6e47ed255799
```

- token: lets the Master verify the Node's identity
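To convince yourself the two pipelines above are equivalent, you can run both against any certificate. The sketch below generates a throwaway CA, so it does not need the real /etc/kubernetes/pki/ca.crt, and checks that the two hash commands agree:

```shell
# generate a disposable CA certificate to hash
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -subj "/CN=kubernetes" 2>/dev/null

# variant 1: sha256sum over the DER-encoded public key
h1=$(openssl x509 -in "$tmp/ca.crt" -noout -pubkey \
     | openssl rsa -pubin -outform DER 2>/dev/null \
     | sha256sum | cut -d ' ' -f 1)

# variant 2: openssl dgst, as in the official documentation
h2=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
     | openssl rsa -pubin -outform der 2>/dev/null \
     | openssl dgst -sha256 -hex | sed 's/^.* //')

[ "$h1" = "$h2" ] && echo "hashes match: $h1"
```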

2-4 Kubernetes Core Components in Detail

2-4.1 kubelet: the Node's Pod Steward

Runs on every node of the cluster.

The kubelet on each node is started by the OS init system (e.g. systemd). Its service file and startup configuration:

```
/lib/systemd/system/kubelet.service
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # the main configuration file
```

After changing the configuration, apply it with:

```
$ systemctl daemon-reload
$ systemctl restart kubelet
```

2-4.2 kube-apiserver: the Cluster Management Entry Point

kube-apiserver:

- a static pod started by the kubelet (not by kubectl)
- the apiserver's pod spec: /etc/kubernetes/manifests/kube-apiserver.yaml
- the kubelet watches the files under /etc/kubernetes/manifests/ and automatically restarts the apiserver pod when its manifest changes

2-4.3 etcd: the Configuration Store

- a static pod started by the kubelet
- the apiserver talks to etcd over TLS
- etcd mounts the master node's local path /var/lib/etcd for its runtime data; if you plan to back up etcd, this is the path to watch:

```
$ tree /var/lib/etcd
/var/lib/etcd
└── member
    ├── snap
    │   ├── 0000000000000007-000000000000c355.snap
    │   ├── 0000000000000007-000000000000ea66.snap
    │   ├── 0000000000000007-0000000000011177.snap
    │   ├── 0000000000000008-0000000000013888.snap
    │   ├── 0000000000000008-0000000000015f99.snap
    │   └── db
    └── wal
        ├── 0000000000000000-0000000000000000.wal
        ├── 0000000000000001-000000000000f5dc.wal
        └── 0.tmp

3 directories, 9 files
```
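A crude cold backup is just an archive of that directory. The helper below is my own sketch (`backup_dir` is a made-up name); for a consistent online backup you would normally use `etcdctl snapshot save` instead:

```shell
# tar up a data directory into a dated archive, e.g.:
#   backup_dir /var/lib/etcd /backups
backup_dir() {
  src=$1; dest=$2
  tar czf "$dest/$(basename "$src")-$(date +%F).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}

# demo on a throwaway directory instead of the real /var/lib/etcd
demo=$(mktemp -d)
mkdir -p "$demo/etcd/member" && touch "$demo/etcd/member/db"
backup_dir "$demo/etcd" "$demo"
ls "$demo" | grep '\.tar\.gz$'
```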

2-4.4 kube-controller-manager: the Management and Control Center

Responsible for managing Nodes, Pod replicas, service Endpoints, namespaces, ServiceAccounts, resource quotas, and more inside the cluster; it runs as a static pod started by the kubelet.

2-4.5 kube-scheduler: the Scheduler

The Scheduler does one thing: schedule Pods.

Following its configured scheduling policies and algorithms, it binds each pending Pod to a suitable Node in the cluster and writes the binding information back. It runs as a static pod started by the kubelet; its configuration file is /etc/kubernetes/manifests/kube-scheduler.yaml.

2-4.6 kube-proxy: the Service Abstraction Implementation

kube-proxy runs on every node of the Kubernetes cluster.

- a DaemonSet controller starts exactly one kube-proxy instance per node
- configuration file: /var/lib/kube-proxy/config.conf (inside the pod's container)

```
$ kubectl -n kube-system get pods -o wide | grep 'kube-proxy'
kube-proxy-2bj4c   1/1   Running    2   2d   172.16.81.162   dbk8s-node-01
kube-proxy-bf7lf   1/1   NodeLost   1   2d   172.16.81.163   dbk8s-node-02
kube-proxy-p7n5s   1/1   Running    7   2d   172.16.81.161   dbk8s-master

# taking one of the kube-proxy pods as an example, exec runs a command inside it
$ kubectl -n kube-system exec kube-proxy-2bj4c -- cat /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 5
clusterCIDR: 192.168.1.0/24
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
```

2-5 kubectl in Detail

kubectl is currently the most powerful tool for managing a k8s cluster.

2-5.1 Cluster Access Configuration

    usage: kubectl config

View the configuration:

```
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://172.16.81.161:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
```

Create a cluster entry:

```
$ kubectl config set-cluster k8s1 --server=https://1.2.3.4
Cluster "k8s1" set.
```

List the clusters:

```
$ kubectl config get-clusters
NAME
kubernetes
k8s1
```

Delete a cluster:

```
$ kubectl config delete-cluster k8s1
deleted cluster k8s1 from /root/.kube/config
```

Create a context entry:

```
$ kubectl config set-context admin1@k8s1 --user=admin1
Context "admin1@k8s1" created.
```

List the contexts:

```
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          admin1@k8s1                                admin1
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
```

Select the default context:

```
$ kubectl config use-context admin1@k8s1
Switched to context "admin1@k8s1".
```

Show the current context:

```
$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         admin1@k8s1                                admin1
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
$ kubectl config current-context
admin1@k8s1
```

Delete a context:

```
$ kubectl config delete-context admin1@k8s1
```

Create credentials:

```
$ kubectl config set-credentials testing-01 --username=testing-01 --password=abcdef@
User "testing-01" set.
# the new user is now present
$ kubectl config view | tail -4
- name: testing-01
  user:
    password: abcdef@
    username: testing-01
```

2-5.2 Cluster Control

```
kubectl [ create | apply | delete | label | edit | expose | scale ]
```

Create the nginx-deployment configuration file:

```
$ cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
        - name: web-root
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
      - name: web-root
        hostPath:
          path: /var/www/html
```

Create from the configuration file:

```
$ kubectl create -f nginx-deployment.yaml
deployment.apps "nginx-deployment-example" created
```

Check the newly created deployment:

```
$ kubectl get pods --show-labels | grep deployment
nginx-deployment-example-7945d4c5f-cg85r   0/1   ContainerCreating   0   14m   app=nginx,pod-template-hash=350180719
```

Label the pod:

```
$ kubectl label pod/nginx-deployment-example-7945d4c5f-cg85r status=health
pod "nginx-deployment-example-7945d4c5f-cg85r" labeled
$ kubectl get pods --show-labels | grep deployment
nginx-deployment-example-7945d4c5f-cg85r   0/1   ContainerCreating   0   16m   app=nginx,pod-template-hash=350180719,status=health
```

Modify the deployment's configuration with kubectl edit:

```
root@dbk8s-master:~# kubectl get pods
NAME                                       READY     STATUS              RESTARTS   AGE
nginx-deployment-example-7945d4c5f-cg85r   0/1       ContainerCreating   0          20m
root@dbk8s-master:~# kubectl get deployment
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-example   1         1         1            0           20m
root@dbk8s-master:~# kubectl edit deployment nginx-deployment-example
##################
# the edited part is shown below: replicas: 3, i.e. the number of replicas to create
spec:
  progressDeadlineSeconds: 600
  replicas: 3
##################
deployment.extensions "nginx-deployment-example" edited

# list the existing deployments
root@dbk8s-master:~# kubectl get deployment
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-example   3         3         3            0           21m
# there are now 3 replicas
root@dbk8s-master:~# kubectl get pods
NAME                                       READY     STATUS              RESTARTS   AGE
nginx-deployment-example-7945d4c5f-cg85r   0/1       ContainerCreating   0          21m
nginx-deployment-example-7945d4c5f-j9pr7   0/1       ContainerCreating   0          10s
nginx-deployment-example-7945d4c5f-s7rc8   0/1       ContainerCreating   0          10s
```

Scale out with kubectl scale:

```
# scale to 10 replicas
root@dbk8s-master:~# kubectl scale --replicas=10 deployment nginx-deployment-example
deployment.extensions "nginx-deployment-example" scaled
# 10 replicas are being created
root@dbk8s-master:~# kubectl get pods
NAME                                       READY     STATUS              RESTARTS   AGE
nginx-deployment-example-7945d4c5f-777rp   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-cg85r   0/1       ContainerCreating   0          25m
nginx-deployment-example-7945d4c5f-dwwt6   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-gcc8m   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-j9pr7   0/1       ContainerCreating   0          4m
nginx-deployment-example-7945d4c5f-jnfmh   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-rqq6h   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-s7rc8   0/1       ContainerCreating   0          4m
nginx-deployment-example-7945d4c5f-ssxgt   0/1       ContainerCreating   0          3s
nginx-deployment-example-7945d4c5f-xn87h   0/1       ContainerCreating   0          3s
# the summary view
root@dbk8s-master:~# kubectl get deployment
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-example   10        10        10           0           26m
```

Restore the original configuration with kubectl apply:

```
root@dbk8s-master:~# kubectl apply -f nginx-deployment.yaml
deployment.apps "nginx-deployment-example" configured
root@dbk8s-master:~# kubectl get deployment
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment-example   1         1         1            0           28m
```

Delete the deployment:

```
root@dbk8s-master:~# kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment-example" deleted
# the pod is terminating
root@dbk8s-master:~# kubectl get pods --show-labels
NAME                                       READY     STATUS        RESTARTS   AGE       LABELS
nginx-deployment-example-7945d4c5f-cg85r   0/1       Terminating   0          29m       app=nginx,pod-template-hash=350180719,status=health
# and now it is gone
root@dbk8s-master:~# kubectl get pods --show-labels
No resources found.
```

2-5.3 Cluster Inspection and Debugging

```
kubectl [ get | describe | logs | exec | attach ]

# list pods
$ kubectl get pods --all-namespaces -o wide
# show detailed information about a pod
$ kubectl -n kube-system describe pods weave-net-4mwv6
# follow a pod's logs
$ kubectl -n kube-system logs -f weave-net-4mwv6
# run a command inside a pod
$ kubectl -n kube-system exec monitoring-influxdb-cc95575b9-7d9xj -- uptime
 13:27:47 up 50 min,  0 users,  load average: 0.04, 0.10, 0.08
# attach to a container with kubectl attach; any log output the container
# produces is printed in real time
$ kubectl -n kube-system attach monitoring-influxdb-cc95575b9-7d9xj
Defaulting container name to influxdb.
Use 'kubectl describe pod/monitoring-influxdb-cc95575b9-7d9xj -n kube-system' to see all of the containers in this pod.
If you don't see a command prompt, try pressing enter.
```
