Deploying a Kubernetes v1.13.4 Cluster
Deployment environment
Host node list

Server name   IP address      etcd   K8S server   K8s node
node01        172.16.50.111   Y      Y
node02        172.16.50.113   Y      Y
node03        172.16.50.115   Y      Y
node04        172.16.50.116                       Y
node05        172.16.50.118                       Y
node06        172.16.50.120                       Y
node07        172.16.50.128                       Y
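The node names above are referenced later (the etcd --initial-cluster list, the apiserver certSANs), so each host should be able to resolve them. If the names are not in DNS, /etc/hosts entries along these lines would work (an assumption; the original does not show this step):

```
172.16.50.111 node01
172.16.50.113 node02
172.16.50.115 node03
172.16.50.116 node04
172.16.50.118 node05
172.16.50.120 node06
172.16.50.128 node07
```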
Version information
# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
# uname -r
3.10.0-957.1.3.el7.x86_64
# docker info | grep 'Server Version'
Server Version: 1.13.1
Pre-installation preparation
# vim /etc/sysconfig/selinux
SELINUX=disabled
# setenforce 0    # the config change above only takes effect after a reboot
# systemctl stop firewalld && \
systemctl disable firewalld
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# sysctl -p /etc/sysctl.d/k8s.conf
# yum install git -y && \
mkdir /data && \
cd /data && \
git clone https://github.com/fandaye/Deploy-Kubernetes.git
# cd /data/Deploy-Kubernetes
# git checkout v1.13.4
# cd /data/Deploy-Kubernetes/image
# for i in `ls *.zip` ; do \
unzip $i ; \
done
# for i in `ls *tar` ; do \
docker load -i $i ; \
done
Deploying Docker
# yum install docker -y
# cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://8ph3bzw4.mirror.aliyuncs.com"],
"graph": "/data/docker"
}
EOF
# mkdir /data/docker -p && \
systemctl start docker && systemctl enable docker
Installing the kube components
Add the kube repository
# cat > /etc/yum.repos.d/kube.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum install /data/Deploy-Kubernetes/pkg/*.rpm -y
# systemctl enable kubelet
Deploying the etcd cluster
Install the package
# cd /data/Deploy-Kubernetes/pkg && \
tar -zxf etcd-v3.3.11-linux-amd64.tar.gz && \
cp etcd-v3.3.11-linux-amd64/{etcd,etcdctl} /usr/bin/ && \
rm -rf etcd-v3.3.11-linux-amd64
Generate the certificates etcd needs, using kubeadm
[root@node01 ~]# mkdir /etc/kubernetes/pki/etcd -p
# kubeadm init phase certs etcd-ca && \
kubeadm init phase certs apiserver-etcd-client && \
kubeadm init phase certs etcd-healthcheck-client && \
kubeadm init phase certs etcd-peer && \
kubeadm init phase certs etcd-server
[root@node01 ~]# for node in 113 115 ; do \
scp /etc/kubernetes/pki/etcd/{ca.crt,ca.key} \
[email protected].${node}:/etc/kubernetes/pki/etcd/ && \
scp /etc/kubernetes/pki/{apiserver-etcd-client.crt,apiserver-etcd-client.key} \
[email protected].${node}:/etc/kubernetes/pki ; \
done
[root@node02 ~]# kubeadm init phase certs etcd-healthcheck-client && \
kubeadm init phase certs etcd-peer && \
kubeadm init phase certs etcd-server
[root@node03 ~]# kubeadm init phase certs etcd-healthcheck-client && \
kubeadm init phase certs etcd-peer && \
kubeadm init phase certs etcd-server
Etcd startup script
/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
# WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
--advertise-client-urls=https://{{NodeIP}}:2379 \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--client-cert-auth=true \
--data-dir=/var/lib/etcd \
--initial-advertise-peer-urls=https://{{NodeIP}}:2380 \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--listen-client-urls=https://127.0.0.1:2379,https://{{NodeIP}}:2379 \
--listen-peer-urls=https://{{NodeIP}}:2380 \
--name={{NodeName}} \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-client-cert-auth=true \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--snapshot-count=10000 \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=node01=https://172.16.50.111:2380,node02=https://172.16.50.113:2380,node03=https://172.16.50.115:2380 \
--initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Replace {{NodeIP}} and {{NodeName}} with each node's own IP address and node name.
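The substitution can be scripted with sed. A sketch with node01's values; on a real node UNIT would be /usr/lib/systemd/system/etcd.service, but a throwaway copy of the template is used here so the snippet is self-contained:

```shell
# Fill in the {{NodeIP}}/{{NodeName}} placeholders for one node (node01 shown).
UNIT=$(mktemp)
cat > "$UNIT" << 'EOF'
--advertise-client-urls=https://{{NodeIP}}:2379 \
--name={{NodeName}} \
EOF
NODE_IP=172.16.50.111
NODE_NAME=node01
# Replace every occurrence of both placeholders in place.
sed -i "s/{{NodeIP}}/${NODE_IP}/g; s/{{NodeName}}/${NODE_NAME}/g" "$UNIT"
cat "$UNIT"
```

Run the same substitution on each of node01, node02, and node03 with that node's own IP and name.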
Start the etcd cluster
# systemctl enable etcd && systemctl start etcd
Verify the cluster is running
# for i in 111 113 115 ; do \
etcdctl \
--endpoints=https://172.16.50.$i:2379 \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--cert-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key-file=/etc/kubernetes/pki/etcd/healthcheck-client.key \
member list \
; done
Output:
... peerURLs=https://172.16.50.115:2380 clientURLs=https://172.16.50.115:2379 isLeader=true
... peerURLs=https://172.16.50.113:2380 clientURLs=https://172.16.50.113:2379 isLeader=false
... peerURLs=https://172.16.50.111:2380 clientURLs=https://172.16.50.111:2379 isLeader=false
Initializing the Kubernetes cluster
Edit the configuration file
/data/Deploy-Kubernetes/config/config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
certSANs:
- "node01"
- "node02"
- "node03"
- "172.16.50.111"
- "172.16.50.113"
- "172.16.50.115"
- "172.16.50.190"
controlPlaneEndpoint: "172.16.50.190:6443"
etcd:
external:
endpoints:
- https://172.16.50.111:2379
- https://172.16.50.113:2379
- https://172.16.50.115:2379
caFile: /etc/kubernetes/pki/etcd/ca.crt
certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
172.16.50.190 is used as the load-balancer IP
Add the IP address
[root@node01 ~]# ifconfig eth0:0 172.16.50.190 netmask 255.255.255.0 up
This is taking a shortcut; keepalived could be used instead to provide automatic failover of this address.
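As a sketch of that failover setup (an assumption, not part of the original deployment), a minimal /etc/keepalived/keepalived.conf on node01 might look like this, with state BACKUP and a lower priority on node02/node03:

```
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby nodes
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority on the backups
    advert_int 1
    virtual_ipaddress {
        172.16.50.190/24
    }
}
```

With this in place, the manual `ifconfig eth0:0 ...` alias above is no longer needed; keepalived moves 172.16.50.190 to a surviving node when the holder fails.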
Initialize
[root@node01 ~]# kubeadm init --config /data/Deploy-Kubernetes/config/config.yaml
Check the cluster status
[root@node01 ~]# mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node01 NotReady master 6m4s v1.13.4
[root@node01 ~]# kubectl get pod --all-namespaces # -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-ckc8f 0/1 Pending 0 8m3s
kube-system coredns-86c58d9df4-wt2dp 0/1 Pending 0 8m3s
kube-system kube-apiserver-node01 1/1 Running 0 7m5s
kube-system kube-controller-manager-node01 1/1 Running 0 7m4s
kube-system kube-proxy-q2449 1/1 Running 0 8m3s
kube-system kube-scheduler-node01 1/1 Running 0 7m1s
CoreDNS will start running once the network add-on is installed.
Install the network add-on
[root@node01 ~]# kubectl create -f /data/Deploy-Kubernetes/config/weave-net.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
[root@node01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node01 Ready master 10m v1.13.4
[root@node01 ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-ckc8f 1/1 Running 0 10m
kube-system coredns-86c58d9df4-wt2dp 1/1 Running 0 10m
kube-system kube-apiserver-node01 1/1 Running 0 9m36s
kube-system kube-controller-manager-node01 1/1 Running 0 9m35s
kube-system kube-proxy-q2449 1/1 Running 0 10m
kube-system kube-scheduler-node01 1/1 Running 0 9m32s
kube-system weave-net-6znrt 2/2 Running 0 91s
Copy the certificates to the node02 and node03 nodes
[root@node01 ~]# for node in 113 115 ; do \
scp /etc/kubernetes/pki/{ca.crt,ca.key,sa.key,sa.pub,front-proxy-ca.crt,front-proxy-ca.key} \
[email protected].$node:/etc/kubernetes/pki/ && \
scp /etc/kubernetes/admin.conf [email protected].$node:/etc/kubernetes \
; done
Join node02 to the cluster
[root@node02 pkg]# kubeadm join 172.16.50.190:6443 \
--token xoy1bv.tniobqdvl7r70f3j \
--discovery-token-ca-cert-hash sha256:cc5d9b58a0482dc77bde9656946e04b0eb40ca8522b752839fa8bb449fb21a3f \
--experimental-control-plane
Join node03 to the cluster
[root@node03 pkg]# kubeadm join 172.16.50.190:6443 \
--token xoy1bv.tniobqdvl7r70f3j \
--discovery-token-ca-cert-hash sha256:cc5d9b58a0482dc77bde9656946e04b0eb40ca8522b752839fa8bb449fb21a3f \
--experimental-control-plane
The --experimental-control-plane flag makes the node join the cluster as a control-plane (master) node; omit this flag when joining worker nodes.
Check the cluster info again
[root@node01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node01 Ready master 16m v1.13.4
node02 Ready master 2m50s v1.13.4
node03 Ready master 72s v1.13.4
[root@node01 ~]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-ckc8f 1/1 Running 0 16m
kube-system coredns-86c58d9df4-wt2dp 1/1 Running 0 16m
kube-system kube-apiserver-node01 1/1 Running 0 15m
kube-system kube-apiserver-node02 1/1 Running 0 3m20s
kube-system kube-apiserver-node03 1/1 Running 0 103s
kube-system kube-controller-manager-node01 1/1 Running 0 15m
kube-system kube-controller-manager-node02 1/1 Running 0 3m20s
kube-system kube-controller-manager-node03 1/1 Running 0 103s
kube-system kube-proxy-ls95l 1/1 Running 0 103s
kube-system kube-proxy-q2449 1/1 Running 0 16m
kube-system kube-proxy-rk4rf 1/1 Running 0 3m20s
kube-system kube-scheduler-node01 1/1 Running 0 15m
kube-system kube-scheduler-node02 1/1 Running 0 3m20s
kube-system kube-scheduler-node03 1/1 Running 0 103s
kube-system weave-net-6znrt 2/2 Running 0 7m49s
kube-system weave-net-r5299 2/2 Running 1 3m20s
kube-system weave-net-xctmb 2/2 Running 1 103s
Join the node04, node05, node06, and node07 nodes to the cluster
kubeadm join 172.16.50.190:6443 --token xoy1bv.tniobqdvl7r70f3j --discovery-token-ca-cert-hash sha256:cc5d9b58a0482dc77bde9656946e04b0eb40ca8522b752839fa8bb449fb21a3f
Check the cluster info again
[root@node01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
node01 Ready master 89m v1.13.4
node02 Ready master 76m v1.13.4
node03 Ready master 74m v1.13.4
node04 Ready <none> 60m v1.13.4
node05 Ready <none> 61m v1.13.4
node06 Ready <none> 61m v1.13.4
node07 Ready <none> 61m v1.13.4
How to view the token
[root@node01 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES
xoy1bv.tniobqdvl7r70f3j 23h 2019-03-14T02:37:38-04:00 .....
How to view the sha256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
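The same pipeline can be exercised without a cluster node by running it against a throwaway self-signed certificate (the temporary files below are stand-ins for /etc/kubernetes/pki/ca.crt):

```shell
# Generate a demo CA certificate, then compute its kubeadm-style discovery hash:
# extract the public key, DER-encode it, and take the hex sha256 digest.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```

The printed value has the same shape as the --discovery-token-ca-cert-hash argument used in the join commands above.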
Deploying the dashboard
Create
[root@node01 config]# kubectl create -f /data/Deploy-Kubernetes/config/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created
Record the token used for login authentication
[root@node01 ~]# kubectl describe secrets/`kubectl get secrets -n kube-system | \
grep kubernetes-dashboard-admin | awk '{print $1}'` -n kube-system | grep "token:"
Export the certificate
[root@node01 ~]# cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.crt && \
cat /etc/kubernetes/admin.conf | grep client-key-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.key
[root@node01 ~]# openssl pkcs12 -export -inkey /etc/kubernetes/pki/client.key -in /etc/kubernetes/pki/client.crt -out /etc/kubernetes/pki/client.pfx
Import the certificate into the browser
The Firefox browser is recommended.