Chapter 4: Building a Highly Available Kubernetes Cluster


Table of Contents

Chapter 4: Building a Highly Available Kubernetes Cluster
  4.1 What Is a Highly Available Kubernetes Cluster
    4.1.1 Primary Considerations for Running Kubernetes in Production
    4.1.2 Considerations for Kubernetes Control-Plane High Availability
  4.2 A Highly Available Kubernetes Cluster Scheme
    4.2.1 Kubernetes Control-Plane HA Scheme: Prerequisites
    4.2.2 Kubernetes Control-Plane HA Scheme: Steps
      4.2.2.1. Set up a highly available etcd cluster
      4.2.2.2. Set up the load balancer
      4.2.2.3. Create client certificates for accessing the etcd cluster
      4.2.2.4. Install the control plane on master0
      4.2.2.5. Install the control plane on master1 and master2
      4.2.2.6. Finishing touches


4.1 What Is a Highly Available Kubernetes Cluster

4.1.1 Primary Considerations for Running Kubernetes in Production

The common and foremost concern is the high availability of the Kubernetes cluster itself:

- High availability of a Kubernetes cluster primarily means high availability of its control plane.
- The control plane of a cluster bootstrapped with kubeadm's defaults is a single point of failure.

4.1.2 Considerations for Kubernetes Control-Plane High Availability

Running more than one master node raises three issues:

- High availability of the etcd cluster database.
- How kube-scheduler and kube-controller-manager coordinate when multiple masters run them (leader election; a quick way to inspect the current leader is sketched below).
- The kube-apiserver instances on the masters must be exposed behind a single entry point, i.e. a load balancer.
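Only one replica of kube-scheduler (and likewise of kube-controller-manager) is active at any time; the others stand idle until the leader's lock lease expires. A minimal sketch, assuming a Kubernetes generation that stores the lock as an annotation on an Endpoints object, to see which master currently leads:

    # prints the leader-election record, including holderIdentity
    $ kubectl -n kube-system get endpoints kube-scheduler \
        -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'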

4.2 A Highly Available Kubernetes Cluster Scheme

4.2.1 Kubernetes Control-Plane HA Scheme: Prerequisites

The cluster is bootstrapped with kubeadm, and:

- at least 3 nodes are dedicated to running the master components;
- the master nodes preferably carry no regular workloads;
- the etcd cluster is preferably deployed on 3 clean, dedicated nodes.

An example topology satisfying these prerequisites is sketched below.
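A hedged example layout, reusing the IPs that appear in the certificate steps later in this chapter (all hostnames and the VIP are hypothetical). kubeadm taints masters by default; the same effect can be applied manually:

    #   etcd0..etcd2      172.16.81.162-164   dedicated etcd nodes
    #   master0..master2  172.16.81.165-167   control-plane nodes, no workloads
    #   lb-vip            172.16.81.100       virtual IP on the load balancer
    # keep a master workload-free (kubeadm applies this taint by default)
    $ kubectl taint nodes master0 node-role.kubernetes.io/master=:NoSchedule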

4.2.2 Kubernetes Control-Plane HA Scheme: Steps

4.2.2.1. Set up a highly available etcd cluster

Steps to set up the etcd cluster:

Create a dedicated CA for etcd, then issue peer.crt, peer.key, server.crt, and server.key.
ref: https://www.centos.bz/2017/09/k8s部署之使用cfssl创建证书/

A CSR template; $HOSTNAME and PRIVATE_IP are placeholders for each node's hostname and private IP:

    $ cat config.json
    {
        "CN": "$HOSTNAME",
        "hosts": [
            "$HOSTNAME",
            "PRIVATE_IP"
        ],
        "key": {
            "algo": "ecdsa",
            "size": 256
        },
        "names": [
            {
                "C": "US",
                "ST": "CA",
                "L": "San Francisco"
            }
        ]
    }

Install the cfssl tools (ref: https://blog.51cto.com/11448017/2048609):

    $ wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    $ chmod +x cfssl_linux-amd64
    $ mv cfssl_linux-amd64 /usr/local/bin/cfssl
    $ wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    $ chmod +x cfssljson_linux-amd64
    $ mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    $ wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    $ chmod +x cfssl-certinfo_linux-amd64
    $ mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

Generate a default signing configuration:

    $ cfssl print-defaults config > ca-config.json
    # then edit it to read:
    {
        "signing": {
            "default": {
                "expiry": "43800h"
            },
            "profiles": {
                "server": {
                    "expiry": "43800h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth"
                    ]
                },
                "client": {
                    "expiry": "43800h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                },
                "peer": {
                    "expiry": "43800h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                }
            }
        }
    }

Generate a default CA CSR:

    $ cfssl print-defaults csr > ca-csr.json
    # then edit it to read:
    {
        "CN": "k8s-node-01",
        "hosts": [
            "$HOST",
            "www.k8s-node-01.net"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "SH",
                "O": "Netease",
                "ST": "San Francisco",
                "OU": "OT"
            }
        ]
    }

Generate the CA certificate and private key. This produces ca.pem, ca.csr, and ca-key.pem (the CA private key; guard it carefully):

    $ cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Issue the server certificate:

    $ cfssl print-defaults csr > server.json
    # then edit it to read:
    {
        "CN": "Server",
        "hosts": [
            "172.16.81.162"
        ],
        "key": {
            "algo": "ecdsa",
            "size": 256
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "ST": "San Francisco"
            }
        ]
    }
    # generate the server certificate and private key
    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server

Issue a peer certificate and private key for node member1:

    $ cfssl print-defaults csr > member1.json
    # then edit it to read:
    {
        "CN": "Member1",
        "hosts": [
            "172.16.81.163"
        ],
        "key": {
            "algo": "ecdsa",
            "size": 256
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "ST": "San Francisco"
            }
        ]
    }
    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1

Alternatively, issue the server and peer certificates straight from the per-node config.json template shown above:

    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server config.json | cfssljson -bare server
    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer config.json | cfssljson -bare peer
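As a quick sanity check, the issued certificates can be inspected with cfssl-certinfo, which prints the parsed certificate as JSON:

    $ cfssl-certinfo -cert server.pem
    $ cfssl-certinfo -cert member1.pem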

Distribute the certificate files to every etcd node, then start the etcd cluster.
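For reference, a minimal sketch of starting one member with the certificates generated above; the member names, IPs, and file paths are assumptions, and in practice the same flags would live in a systemd unit:

    $ etcd --name member1 \
        --data-dir /var/lib/etcd \
        --cert-file=/etc/etcd/pki/server.pem --key-file=/etc/etcd/pki/server-key.pem \
        --peer-cert-file=/etc/etcd/pki/member1.pem --peer-key-file=/etc/etcd/pki/member1-key.pem \
        --trusted-ca-file=/etc/etcd/pki/ca.pem --peer-trusted-ca-file=/etc/etcd/pki/ca.pem \
        --client-cert-auth --peer-client-cert-auth \
        --listen-client-urls https://172.16.81.163:2379 \
        --advertise-client-urls https://172.16.81.163:2379 \
        --listen-peer-urls https://172.16.81.163:2380 \
        --initial-advertise-peer-urls https://172.16.81.163:2380 \
        --initial-cluster member0=https://172.16.81.162:2380,member1=https://172.16.81.163:2380,member2=https://172.16.81.164:2380 \
        --initial-cluster-state new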

4.2.2.2. Set up the load balancer

Several load-balancing approaches are in common use, such as LVS + keepalived; their detailed setup is out of scope here, but a minimal keepalived sketch follows.
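A minimal sketch, assuming keepalived floats a virtual IP across two load-balancer nodes in front of the apiservers; the interface name, VIP, and priority are assumptions, and the standby node would use state BACKUP with a lower priority:

    $ cat > /etc/keepalived/keepalived.conf << 'EOF'
    vrrp_instance VI_1 {
        state MASTER             # BACKUP on the standby node
        interface eth0           # NIC that carries the VIP (assumption)
        virtual_router_id 51
        priority 100             # lower value on the standby node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass k8s-ha
        }
        virtual_ipaddress {
            172.16.81.100        # the <lb-virtual-ip>, hypothetical
        }
    }
    EOF
    $ systemctl restart keepalived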

4.2.2.3. Create client certificates for accessing the etcd cluster

Create the certificate that clients will use to reach the etcd cluster:

    # generate the client certificate and private key
    $ cfssl print-defaults csr > client.json
    # then edit it to read:
    {
        "CN": "Client",
        "hosts": [],
        "key": {
            "algo": "ecdsa",
            "size": 256
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "ST": "San Francisco"
            }
        ]
    }
    $ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client

Note that cfssljson writes ca.pem, client.pem, and client-key.pem; rename (or copy) them to the .crt/.key names used below, then copy the client certificate files to every master node:

    $ cp ca.pem ca.crt && cp client.pem client.crt && cp client-key.pem client.key
    $ scp ca.crt client.crt client.key node-master:/etc/kubernetes/pki/etcd/
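Before wiring these files into kubeadm, it is worth verifying that the client certificate actually works against the cluster; a sketch using the etcd v3 CLI, where the endpoint IP is an assumption:

    $ ETCDCTL_API=3 etcdctl \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/client.crt \
        --key=/etc/kubernetes/pki/etcd/client.key \
        --endpoints=https://172.16.81.162:2379 endpoint health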

4.2.2.4. Install the control plane on master0

Bootstrapping is still driven by kubeadm, this time reading a configuration file:

    $ cat > config.yaml << EOF
    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    api:
      advertiseAddress: <lb-virtual-ip>
      controlPlaneEndpoint: <lb-virtual-ip>
    etcd:
      endpoints:
      - https://<etcd0-ip-address>:2379
      - https://<etcd1-ip-address>:2379
      - https://<etcd2-ip-address>:2379
      caFile: /etc/kubernetes/pki/etcd/ca.crt
      certFile: /etc/kubernetes/pki/etcd/client.crt
      keyFile: /etc/kubernetes/pki/etcd/client.key
    networking:
      podSubnet: <podCIDR>
    apiServerCertSANs:
    - <lb-virtual-ip>
    - <private-ip>
    apiServerExtraArgs:
      apiserver-count: "3"
    EOF
    $ kubeadm init --config=config.yaml
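Once kubeadm init finishes, a quick check that the apiserver is reachable through the load balancer; the VIP placeholder is the same one used in the config above, and -k skips TLS verification for a fast probe:

    # /healthz should return "ok"
    $ curl -k https://<lb-virtual-ip>:6443/healthz
    $ kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes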

4.2.2.5. Install the control plane on master1 and master2

Copy the CA and service-account key files to master1 and master2:

    $ for host in master-1 master-2; do
        scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub} ${host}:/etc/kubernetes/pki/
    done

Then run kubeadm init --config=config.yaml on master1 and master2, and add both nodes to the load balancer's backend pool.
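With all three masters up, a brief sanity check; kubectl get componentstatuses was the idiomatic probe in this Kubernetes generation:

    $ kubectl get nodes
    $ kubectl get componentstatuses
    $ kubectl -n kube-system get pods -o wide | grep -E 'apiserver|scheduler|controller'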

4.2.2.6. Finishing touches

- Install a CNI network plugin.
- Join the worker nodes to the highly available cluster.
- Configure the node components so that kubelet and kube-proxy talk to the apiserver through the LB.

A hedged sketch of these steps follows.
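The flannel manifest URL below is one common CNI choice, and the token and CA-cert hash come from the kubeadm init output on master0; workers join through the load balancer VIP rather than any individual master:

    # install a CNI plugin, e.g. flannel (pick the manifest matching your version)
    $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # join each worker through the VIP, not an individual master
    $ kubeadm join <lb-virtual-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>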