Installing Kubernetes with Kubespray on CentOS 7
Ansible preparation
1. Disable SELinux and firewalld
$ sudo -i
# setenforce 0
# sed -i "s/^SELINUX\=enforcing/SELINUX\=disabled/g" /etc/selinux/config
# systemctl disable firewalld; systemctl stop firewalld; systemctl mask firewalld
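The `sed` edit above can be sanity-checked on a scratch copy before touching the real `/etc/selinux/config`; a minimal sketch (the temp file stands in for the config):

```shell
# Try the SELINUX edit on a scratch copy first (same substitution as above)
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$tmp"
result=$(grep '^SELINUX=' "$tmp")
echo "$result"   # prints: SELINUX=disabled
rm -f "$tmp"
```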
2. Install the packages needed for Git and Ansible
# yum update
# yum install git
# yum install epel-release
# yum install python-pip
# pip install --upgrade pip
3. Clone the Kubespray repository with Git and install the requirements
# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray
###Install requirements
# pip install -r requirements.txt
###Copy ``inventory/sample`` as ``inventory/k8scluster``
# cp -rfp inventory/sample inventory/k8scluster
4. Generate an SSH key and copy it to every VM that will be part of the K8s cluster
# ssh-keygen -t rsa
# ssh-copy-id -p 2324 admin@{ip of K8s node}
...
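For many nodes, the `ssh-copy-id` commands can be generated in a loop instead of typed one by one; a sketch that only prints the commands (the IP list here is an example, substitute your real node addresses):

```shell
# Hypothetical helper: emit one ssh-copy-id command per node IP
node_ips="10.233.247.64 10.233.247.65 10.233.247.66"   # example addresses
for ip in $node_ips; do
  echo "ssh-copy-id -p 2324 admin@$ip"
done
```

Drop the `echo` (and quotes) to actually run the commands.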
5. Prepare the inventory hosts file
# cd $HOME/kubespray/inventory/k8scluster
# cp inventory.ini hosts.ini
hosts.ini
# cd $HOME/kubespray/inventory/k8scluster
# vi hosts.ini
---
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
k8sm901 ansible_host=10.233.247.64 ip=10.30.2.25
k8sm902 ansible_host=10.233.247.65 ip=10.30.2.26
k8sm903 ansible_host=10.233.247.66 ip=10.30.2.27
k8sw901 ansible_host=10.233.247.67 ip=10.30.2.28
k8sw902 ansible_host=10.233.247.68 ip=10.30.2.29
k8sw903 ansible_host=10.233.247.69 ip=10.30.2.30
k8sw904 ansible_host=10.233.247.61 ip=10.30.2.22
k8sw905 ansible_host=10.233.247.62 ip=10.30.2.23
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
k8sm901
k8sm902
k8sm903
[etcd]
k8sm901
k8sm902
k8sm903
[kube-node]
k8sm901
k8sm902
k8sm903
k8sw901
k8sw902
k8sw903
k8sw904
k8sw905
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
[all:vars]
ansible_ssh_user=admin
ansible_ssh_port=2324
---
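Groups in this INI inventory can also be read from the shell, which is handy for scripting against the same hosts.ini; a sketch with awk (it uses a trimmed copy of the inventory at an example path, point it at the real file in practice):

```shell
# List the hosts in the [kube-master] group of an Ansible INI inventory
cat > /tmp/hosts_demo.ini <<'EOF'
[all]
k8sm901 ansible_host=10.233.247.64 ip=10.30.2.25
k8sm902 ansible_host=10.233.247.65 ip=10.30.2.26
[kube-master]
k8sm901
k8sm902
k8sm903
[etcd]
k8sm901
EOF
# f=1 inside the wanted section, reset at the next [section] header
masters=$(awk '/^\[kube-master\]/{f=1;next} /^\[/{f=0} f&&NF{print $1}' /tmp/hosts_demo.ini)
echo "$masters"
```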
6. Ansible ping test
# cd $HOME/kubespray
# ansible -i inventory/k8scluster/hosts.ini -m ping all
k8sw901 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sm902 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sm903 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sm901 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sw903 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sw904 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sw905 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sw906 | SUCCESS => {
"changed": false,
"ping": "pong"
}
k8sw902 | SUCCESS => {
"changed": false,
"ping": "pong"
}
Prepare the K8s nodes
7. On each K8s node, remove any existing Docker and Kubernetes packages and install jq
$ sudo -i
# docker ps -a -q | xargs -r docker rm
# docker images -q | xargs -r docker rmi
# kubeadm reset
# yum remove kubeadm kubectl kubelet kubernetes-cni kube* -y
# yum autoremove
# rm -rf ~/.kube
###Install jq
# (yum install epel-release -y; yum install jq -y)
Set up the K8s cluster with Kubespray
8. Setup
$ sudo -i
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become
#Download the kubeconfig file from one of the master nodes to the bastion VM
# mkdir -p ~/.kube
# ssh -p 2324 admin@k8sm901 'sudo cat /etc/kubernetes/admin.conf' > ~/.kube/config
#Check that every node's status is Ready
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 2d21h v1.20.0
k8sm902 Ready control-plane,master 2d21h v1.20.0
k8sm903 Ready control-plane,master 2d21h v1.20.0
k8sw901 Ready <none> 2d21h v1.20.0
k8sw902 Ready <none> 2d21h v1.20.0
k8sw903 Ready <none> 2d21h v1.20.0
k8sw904 Ready <none> 2d21h v1.20.0
k8sw905 Ready <none> 2d21h v1.20.0
k8sw906 Ready <none> 2d21h v1.20.0
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8sm901 Ready control-plane,master 2d21h v1.20.0 10.30.2.25 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sm902 Ready control-plane,master 2d21h v1.20.0 10.30.2.26 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sm903 Ready control-plane,master 2d21h v1.20.0 10.30.2.27 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw901 Ready <none> 2d21h v1.20.0 10.30.2.28 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw902 Ready <none> 2d21h v1.20.0 10.30.2.29 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw903 Ready <none> 2d21h v1.20.0 10.30.2.30 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw904 Ready <none> 2d21h v1.20.0 10.30.2.22 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw905 Ready <none> 2d21h v1.20.0 10.30.2.23 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
k8sw906 Ready <none> 2d21h v1.20.0 10.30.2.24 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 docker://19.3.14
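The Ready check above can be automated by filtering the STATUS column; a sketch with awk (it reads a captured sample file for illustration; in practice pipe `kubectl get node` straight into the awk command):

```shell
# Flag any node whose STATUS column is not Ready
cat > /tmp/nodes.txt <<'EOF'
NAME      STATUS     ROLES                  AGE     VERSION
k8sm901   Ready      control-plane,master   2d21h   v1.20.0
k8sw901   NotReady   <none>                 2d21h   v1.20.0
EOF
# NR>1 skips the header row; $2 is the STATUS column
not_ready=$(awk 'NR>1 && $2!="Ready"{print $1}' /tmp/nodes.txt)
echo "$not_ready"   # prints: k8sw901
```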
Frequently asked questions
Removing a node from the cluster
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 2d22h v1.20.0
k8sm902 Ready control-plane,master 2d22h v1.20.0
k8sm903 Ready control-plane,master 2d22h v1.20.0
k8sw901 Ready <none> 2d22h v1.20.0
k8sw902 Ready <none> 2d22h v1.20.0
k8sw903 Ready <none> 2d22h v1.20.0
k8sw904 Ready <none> 2d22h v1.20.0
k8sw905 Ready <none> 2d22h v1.20.0
k8sw906 Ready <none> 2d22h v1.20.0
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini remove-node.yml -e "node=k8sw906" --become
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 2d23h v1.20.0
k8sm902 Ready control-plane,master 2d23h v1.20.0
k8sm903 Ready control-plane,master 2d23h v1.20.0
k8sw901 Ready <none> 2d23h v1.20.0
k8sw902 Ready <none> 2d23h v1.20.0
k8sw903 Ready <none> 2d23h v1.20.0
k8sw904 Ready <none> 2d23h v1.20.0
k8sw905 Ready <none> 2d23h v1.20.0
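To my understanding, Kubespray's remove-node.yml also accepts a comma-separated list in the `node` variable, so several workers can be removed in one run; a sketch that only prints the command rather than executing it (the node names are examples):

```shell
# Build (but do not run) a remove-node command for several workers
nodes="k8sw905,k8sw906"   # example node names
cmd="ansible-playbook -i inventory/k8scluster/hosts.ini remove-node.yml -e node=$nodes --become"
echo "$cmd"
```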
Adding a new node to the cluster
$ sudo -i
# vi $HOME/kubespray/inventory/k8scluster/hosts.ini
--------
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
k8sm901 ansible_host=10.233.247.64 ip=10.30.2.25
k8sm902 ansible_host=10.233.247.65 ip=10.30.2.26
k8sm903 ansible_host=10.233.247.66 ip=10.30.2.27
k8sw901 ansible_host=10.233.247.67 ip=10.30.2.28
k8sw902 ansible_host=10.233.247.68 ip=10.30.2.29
k8sw903 ansible_host=10.233.247.69 ip=10.30.2.30
k8sw904 ansible_host=10.233.247.61 ip=10.30.2.22
k8sw905 ansible_host=10.233.247.62 ip=10.30.2.23
# new node
k8sw906 ansible_host=10.233.247.63 ip=10.30.2.24
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
k8sm901
k8sm902
k8sm903
[etcd]
k8sm901
k8sm902
k8sm903
[kube-node]
k8sm901
k8sm902
k8sm903
k8sw901
k8sw902
k8sw903
k8sw904
k8sw905
# new node
k8sw906
[calico-rr]
[k8s-cluster:children]
kube-master
kube-node
calico-rr
[all:vars]
ansible_ssh_user=admin
ansible_ssh_port=2324
--------
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 2d23h v1.20.0
k8sm902 Ready control-plane,master 2d23h v1.20.0
k8sm903 Ready control-plane,master 2d23h v1.20.0
k8sw901 Ready <none> 2d23h v1.20.0
k8sw902 Ready <none> 2d23h v1.20.0
k8sw903 Ready <none> 2d23h v1.20.0
k8sw904 Ready <none> 2d23h v1.20.0
k8sw905 Ready <none> 2d23h v1.20.0
$ cd $HOME/kubespray
$ ansible-playbook -i inventory/k8scluster/hosts.ini scale.yml --become
$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 47m v1.20.0
k8sm902 Ready control-plane,master 46m v1.20.0
k8sm903 Ready control-plane,master 46m v1.20.0
k8sw901 Ready <none> 45m v1.20.0
k8sw902 Ready <none> 45m v1.20.0
k8sw903 Ready <none> 45m v1.20.0
k8sw904 Ready <none> 45m v1.20.0
k8sw905 Ready <none> 45m v1.20.0
k8sw906 Ready <none> 66s v1.20.0
Resetting the cluster
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini reset.yml --become
# kubectl get node
The connection to the server 10.30.2.25:6443 was refused - did you specify the right host or port?
#
Upgrading the Kubernetes version on all cluster nodes
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8sm901 Ready control-plane,master 4m53s v1.20.0
k8sm902 Ready control-plane,master 4m22s v1.20.0
k8sm903 Ready control-plane,master 4m12s v1.20.0
k8sw901 Ready <none> 3m12s v1.20.0
k8sw902 Ready <none> 3m12s v1.20.0
k8sw903 Ready <none> 3m12s v1.20.0
k8sw904 Ready <none> 3m12s v1.20.0
k8sw905 Ready <none> 3m12s v1.20.0
k8sw906 Ready <none> 3m4s v1.20.0
#Edit kube_version
# vi $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml
...
kube_version: v1.20.0 #edit to v1.20.2
...
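The version bump above can be scripted with sed instead of edited in vi; a sketch that tries it on a scratch copy of the file (swap in the real k8s-cluster.yml path in practice):

```shell
# Script the kube_version bump on a scratch copy of k8s-cluster.yml
tmp=$(mktemp)
echo 'kube_version: v1.20.0' > "$tmp"
sed -i 's/^kube_version: .*/kube_version: v1.20.2/' "$tmp"
new_version=$(cat "$tmp")
echo "$new_version"   # prints: kube_version: v1.20.2
rm -f "$tmp"
```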
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini upgrade-cluster.yml --become
# watch -x kubectl get node,pod -o wide
NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node/k8sm901   Ready    control-plane,master   100m   v1.20.2   10.30.2.25    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
node/k8sm902   Ready    control-plane,master   100m   v1.20.2   10.30.2.26    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
...
Renewing certificates
Client certificates generated by kubeadm expire after one year, so they need to be renewed.
By upgrading Kubernetes
Kubespray is built on kubeadm, so upgrading the Kubernetes version automatically renews the certificates. Upgrading the cluster frequently is also the best way to stay secure.
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/
Alternatively, set force_certificate_regeneration and re-run the cluster playbook:
# vi $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml
...
force_certificate_regeneration: true
--------
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become
Changing the container runtime
Kubernetes deprecated Docker as a container runtime in v1.20, and currently plans to remove Docker runtime support in the 1.22 release in late 2021 (almost a year away!).
Example: changing from docker to containerd
##Edit
# cd $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/
# cp k8s-cluster.yml k8s-cluster.yml.bk
# vi k8s-cluster.yml
...
container_manager: docker #Change from docker to containerd
...
# cd $HOME/kubespray/inventory/k8scluster/group_vars
# cp etcd.yml etcd.yml.bk
# vi etcd.yml
...
etcd_deployment_type: docker #Change from docker to host
...
# cd $HOME/kubespray/inventory/k8scluster/group_vars/all
# cp containerd.yml containerd.yml.bk
## Uncomment the config
# vi containerd.yml
...
--------
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options
# Example: define registry mirror for docker hub
containerd_config:
grpc:
max_recv_message_size: 16777216
max_send_message_size: 16777216
debug:
level: ""
registries:
"docker.io":
- "https://mirror.gcr.io"
- "https://registry-1.docker.io"
max_container_log_line_size: -1
# metrics:
# address: ""
# grpc_histogram: false
--------
## Apply the new Container runtime
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become
## Check container runtime
# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8sm901 Ready control-plane,master 11h v1.20.2 10.30.2.25 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sm902 Ready control-plane,master 11h v1.20.2 10.30.2.26 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sm903 Ready control-plane,master 11h v1.20.2 10.30.2.27 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw901 Ready <none> 11h v1.20.2 10.30.2.28 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw902 Ready <none> 11h v1.20.2 10.30.2.29 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw903 Ready <none> 11h v1.20.2 10.30.2.30 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw904 Ready <none> 11h v1.20.2 10.30.2.22 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw905 Ready <none> 11h v1.20.2 10.30.2.23 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
k8sw906 Ready <none> 11h v1.20.2 10.30.2.24 <none> CentOS Linux 7 (Core) 3.10.0-1160.11.1.el7.x86_64 containerd://1.4.3
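Checking that no node is still on the Docker runtime can be scripted by inspecting the CONTAINER-RUNTIME (last) column of the wide output; a sketch using a captured sample file (in practice pipe `kubectl get node -o wide` into the awk command):

```shell
# Count nodes still reporting a docker:// runtime
cat > /tmp/nodes_wide.txt <<'EOF'
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8sm901   Ready    control-plane,master   11h   v1.20.2   10.30.2.25    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw901   Ready    <none>                 11h   v1.20.2   10.30.2.28    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
EOF
# $NF is the last (CONTAINER-RUNTIME) column; NR>1 skips the header
docker_left=$(awk 'NR>1 && $NF ~ /^docker:/' /tmp/nodes_wide.txt | wc -l)
echo "$docker_left"   # prints: 0
```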
https://github.com/kubernetes-sigs/kubespray/blob/master/docs/containerd.md
Reference
Original article: https://dev.to/kittisuw/installing-kubernetes-with-kubespray-on-cenos-7-guide-4ee3 (the text may be freely shared or copied, but please keep that URL as the reference).