K8s Environment Deployment (v1.12.2)

    xiaoxiao · 2022-07-05

    1.1 Environment Preparation

    First, upgrade the Linux kernel to 4.x and update the local packages:

    yum update -y
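The guide assumes a 4.x kernel, but `yum update` alone will not change the kernel major version; the exact upgrade path depends on your distribution (on CentOS 7, the ELRepo `kernel-lt` package is a common choice). A quick sanity check before continuing might look like this (a sketch, not part of the original steps):

```shell
# Check that the running kernel is at least 4.x before installing Kubernetes.
major=$(uname -r | cut -d. -f1)
if [ "$major" -lt 4 ]; then
  echo "kernel $(uname -r) is older than 4.x; upgrade it first" >&2
fi
```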

    1.2 Installing Docker

    1.2.1 Uninstall old versions

    First, clean up the server: if Docker was installed previously, uninstall it with the following command:

    sudo yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

    1.2.2 Install Docker CE

    Install yum-utils (which provides yum-config-manager), together with device-mapper-persistent-data and lvm2:

    sudo yum install -y yum-utils device-mapper-persistent-data lvm2

    Use the following command to set up the stable repository:

    sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    Optionally, you can enable the edge and test repositories. They are included in the Docker repo file but are disabled by default:

    sudo yum-config-manager --enable docker-ce-edge
    sudo yum-config-manager --enable docker-ce-test

    You can disable the edge or test repository again by running yum-config-manager with the --disable flag, and re-enable it with --enable. The following command disables the edge repository:

    yum-config-manager --disable docker-ce-edge

    Finally, install the latest version of Docker CE:

    yum install docker-ce -y

    You can also install a specific version of Docker CE:

    yum install docker-ce-<VERSION STRING>

    Check the Docker version to verify that the installation succeeded:

    docker --version

    The output looks like:

    Docker version 18.09.3, build 774a1f4

    1.2.3 Start Docker

    systemctl daemon-reload && systemctl restart docker

    1.3 Install kubeadm, kubelet, and kubectl

    1.3.1 Configure the Kubernetes repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    Disable SELinux:

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
    setenforce 0

    Start the installation and enable kubelet:

    yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2 kubernetes-cni-0.6.0
    systemctl enable kubelet

    1.4 Configuration on the master

    1.4.1 Configure kernel parameters

    cat <<EOF > /etc/sysctl.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl -p

    1.4.2 Disable swap

    swapoff -a
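Note that swapoff -a only disables swap until the next reboot. To keep swap off permanently, you can additionally comment out the swap entries in /etc/fstab (a sketch, assuming a standard fstab layout):

```shell
# Comment out any fstab line containing a "swap" field so it survives
# reboots; a .bak backup of the original file is kept.
sed -i.bak '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab
```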

    1.4.3 Initialize Kubernetes

    kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16
    # start kubelet
    sudo systemctl start kubelet

    Since https://k8s.gcr.io/v2/ cannot be reached, pull the required images from a mirror first:

    docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
    docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.2
    docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.12.2
    docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.12.2
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
    docker pull coredns/coredns:1.2.2

    Tag the images so they match the names kubeadm expects:

    docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.12.2 k8s.gcr.io/kube-proxy:v1.12.2
    docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2 k8s.gcr.io/kube-apiserver:v1.12.2
    docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.2 k8s.gcr.io/kube-controller-manager:v1.12.2
    docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.12.2 k8s.gcr.io/kube-scheduler:v1.12.2
    docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
    docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

    1.4.4 Configure the KUBECONFIG environment variable

    cp -f /etc/kubernetes/admin.conf $HOME/
    chown $(id -u):$(id -g) $HOME/admin.conf
    export KUBECONFIG=$HOME/admin.conf
    echo "export KUBECONFIG=$HOME/admin.conf" >> ~/.bash_profile

    To allow the master to run pods (i.e. to use the master as a worker as well), execute:

    kubectl taint nodes --all node-role.kubernetes.io/master-

    1.4.5 Check component health

    [root@host-192-100-36-176 ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}

    1.4.6 View node information

    [root@host-192-100-36-176 ~]# kubectl get nodes
    NAME                  STATUS     ROLES    AGE   VERSION
    host-192-100-36-176   NotReady   master   40m   v1.12.2

    The status here is NotReady because no network plugin (such as flannel) has been installed yet. See https://github.com/coreos/flannel for the flannel project on GitHub.

    1.4.7 Install flannel

    [root@host-192-100-36-176 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    podsecuritypolicy.extensions/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created

    Once the flannel image has been pulled, the pod starts up and the node transitions to Ready:

    [root@host-192-100-36-176 ~]# kubectl get nodes
    NAME                  STATUS   ROLES    AGE   VERSION
    host-192-100-36-176   Ready    master   47m   v1.12.2

    List all running pods in the kube-system namespace:

    [root@host-192-100-36-176 ~]# kubectl get pods -n kube-system
    NAME                                          READY   STATUS    RESTARTS   AGE
    coredns-576cbf47c7-k6x2q                      1/1     Running   0          47m
    coredns-576cbf47c7-ndj54                      1/1     Running   0          47m
    etcd-host-192-100-36-176                      1/1     Running   0          46m
    kube-apiserver-host-192-100-36-176            1/1     Running   0          46m
    kube-controller-manager-host-192-100-36-176   1/1     Running   0          46m
    kube-flannel-ds-amd64-zzmtt                   1/1     Running   0          100s
    kube-proxy-l7hp2                              1/1     Running   0          47m
    kube-scheduler-host-192-100-36-176            1/1     Running   0          46m

    List the namespaces:

    [root@host-192-100-36-176 ~]# kubectl get ns
    NAME          STATUS   AGE
    default       Active   48m
    kube-public   Active   48m
    kube-system   Active   48m

    1.5 Add nodes

    1.5.1 Pull the image files on each node (same as in section 1.4.3)
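Rather than typing the pull and tag commands again on every node, they can be scripted. A sketch of a loop equivalent to the commands in section 1.4.3 (the image list and the -amd64 stripping are taken from that section):

```shell
# Pull the v1.12.2 images from the mirrorgooglecontainers mirror and
# re-tag them under k8s.gcr.io, stripping the -amd64 suffix that
# kubeadm's expected image names do not include.
for img in kube-apiserver-amd64:v1.12.2 \
           kube-controller-manager-amd64:v1.12.2 \
           kube-scheduler-amd64:v1.12.2 \
           kube-proxy-amd64:v1.12.2 \
           pause:3.1 \
           etcd-amd64:3.2.24; do
  docker pull "mirrorgooglecontainers/$img"
  docker tag "mirrorgooglecontainers/$img" "k8s.gcr.io/$(echo "$img" | sed 's/-amd64//')"
done
# coredns lives under its own org on Docker Hub.
docker pull coredns/coredns:1.2.2
docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
```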

    1.5.2 Install flannel

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    1.5.3 Join the node to the cluster

    kubeadm join 192.100.36.176:6443 --token q13c4x.0tsn1i3tj0zozbhc --discovery-token-ca-cert-hash sha256:14bdc1a457aeb34d43cebef2cfde34abd4cbaa3c5f8d3c05d55c24438e613926
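The token and hash above are specific to this cluster, and bootstrap tokens expire after 24 hours by default. If you need a fresh join command, or want to recompute the CA certificate hash, you can run the following on the master (standard kubeadm/openssl usage, not taken from the original text):

```shell
# Print a complete, up-to-date "kubeadm join ..." command (creates a new token).
kubeadm token create --print-join-command

# Recompute the sha256 discovery hash from the cluster CA certificate.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```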

    1.5.4 Check node status

    [root@host-192-100-36-176 ~]# kubectl get nodes
    NAME                  STATUS   ROLES    AGE    VERSION
    host-192-100-36-176   Ready    master   112m   v1.12.2
    host-192-100-36-180   Ready    <none>   43m    v1.12.2
    host-192-100-36-189   Ready    <none>   49m    v1.12.2

    1.6 Install the dashboard

    Dashboard releases are published at https://github.com/kubernetes/dashboard/releases

    1.6.1 Install dashboard v1.10.0

    docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0
    docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml

    Check the deployment:

    kubectl get pods -n kube-system
    NAME                                   READY   STATUS    RESTARTS   AGE
    coredns-78fcdf6894-nmcmz               1/1     Running   1          54d
    coredns-78fcdf6894-p5pfm               1/1     Running   1          54d
    etcd-k8s-master                        1/1     Running   2          54d
    kube-apiserver-k8s-master              1/1     Running   9          54d
    kube-controller-manager-k8s-master     1/1     Running   5          54d
    kube-flannel-ds-n5c86                  1/1     Running   1          54d
    kube-flannel-ds-nrcw2                  1/1     Running   1          52d
    kube-flannel-ds-pgpr7                  1/1     Running   5          54d
    kube-proxy-glzth                       1/1     Running   1          52d
    kube-proxy-rxlt7                       1/1     Running   2          54d
    kube-proxy-vxckf                       1/1     Running   4          54d
    kube-scheduler-k8s-master              1/1     Running   3          54d
    kubernetes-dashboard-767dc7d4d-n4clq   1/1     Running   0          3s

    kubectl get svc -n kube-system
    NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   54d
    kubernetes-dashboard   ClusterIP   10.105.204.4   <none>        443/TCP         30m

    # patch the dashboard service to change how it is accessed
    kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system
    service/kubernetes-dashboard patched

    kubectl get svc -n kube-system
    NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    kube-dns               ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP   54d
    kubernetes-dashboard   NodePort    10.105.204.4   <none>        443:32645/TCP   31m

    Then open the dashboard in a browser at the node IP and NodePort assigned above, e.g. https://192.168.158.88:30849/#!/login

    1.6.2 Service account

    Create a service account with the cluster-admin role, plus a clusterrolebinding, so that it can access all Kubernetes resources:

    kubectl create serviceaccount cluster-admin-dashboard-sa
    kubectl create clusterrolebinding cluster-admin-dashboard-sa \
      --clusterrole=cluster-admin \
      --serviceaccount=default:cluster-admin-dashboard-sa

    1.6.3 Find the token

    Copy the generated token and use it to log in to the dashboard:

    [root@host-192-100-36-176 ~]# kubectl get secret | grep cluster-admin-dashboard-sa
    cluster-admin-dashboard-sa-token-sswrb   kubernetes.io/service-account-token   3   2m6s
    [root@host-192-100-36-176 ~]# kubectl describe secrets/cluster-admin-dashboard-sa-token-sswrb
    Name:         cluster-admin-dashboard-sa-token-sswrb
    Namespace:    default
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: cluster-admin-dashboard-sa
                  kubernetes.io/service-account.uid: b7c2c5db-7c6d-11e9-91b8-fa163e5187e6
    Type:  kubernetes.io/service-account-token
    Data
    ====
    namespace:  7 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhLXRva2VuLXNzd3JiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXItYWRtaW4tZGFzaGJvYXJkLXNhIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjdjMmM1ZGItN2M2ZC0xMWU5LTkxYjgtZmExNjNlNTE4N2U2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2x1c3Rlci1hZG1pbi1kYXNoYm9hcmQtc2EifQ.jqxyUOdpUn2XaLX6Sd43q_oaAHhlaiV_JTWbd0vfL5W7o4UX_GUCXcCgG87yXxQdNX5ojjDENnkpp3mE6TJrergR7iKrTKvtAk5XsqSZ5L4CWJY3eJXH0MuovKtEUcV7Cwgq01fau7ZAOM85hukbLB4PXBT6XabhZH4Vks1pyIDzbnMLdZEChXu6XQQ3XTVj2c9FX_IaEfb_6GPZeP76R7VTO1euqaYqeHfTVSLP6pOgwjVFr-lKE5qLI-3E7BieKCjdeuQrNZp6-pyOHhinrQ3UHuNGlZGTZVhzfeLxq3h4a9G7w9uCBBqWeyTRpjP_uszhyY-5qiJJUzoO8aFWuA
    ca.crt:     1025 bytes

    Paste the token into the token field on the login page to sign in and reach the dashboard.
