K8S 1.10 High-Availability Cluster Setup in Practice

The server details are as follows:

Hostname   IP             Role
node01     10.150.27.51   master and etcd
node02     10.150.27.65   master and etcd
node03     10.150.27.66   node
VIP        10.150.27.99
Software versions:
docker-ce-17.03.2  socat-1.7.3.2-2.el7.x86_64  kubelet-1.10.0-0.x86_64  kubernetes-cni-0.6.0-0.x86_64  kubectl-1.10.0-0.x86_64  kubeadm-1.10.0-0.x86_64
Reference: https://github.com/cookeem/kubeadm-ha/blob/master/README_CN.md

1: Environment initialization


1: Set the hostname on each of the three hosts
hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03

2: Configure the host mappings
cat <<EOF > /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## k8s
10.150.27.51 node01
10.150.27.65 node02
10.150.27.66 node03
EOF

3: Set up passwordless SSH login from node01 to the other nodes
ssh-keygen  # press Enter at the prompts to accept the defaults
ssh-copy-id  node02
ssh-copy-id  node03
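
Optionally verify that passwordless login works:
ssh node02 hostname   # should print node02 without asking for a password
ssh node03 hostname   # should print node03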

4: On all three hosts: stop the firewall, disable swap, disable SELinux, set the kernel parameters, add the Kubernetes yum repository, install the dependency packages, and configure ntp (rebooting after these changes is recommended)

systemctl stop firewalld
systemctl disable firewalld

swapoff -a 
sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce  0 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux 
sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config  

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
ls /proc/sys/net/bridge

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl 

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

2: Install and configure keepalived (master nodes)


1: Install keepalived
yum install -y keepalived
systemctl enable keepalived

==keepalived.conf on node01==
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

# Health check: probe the K8S API server through the VIP every 3 seconds.
vrrp_script CheckK8sMaster {
    script "curl -k https://10.150.27.99:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {  ## VRRP instance definition
    state MASTER  ## initial keepalived role on this node
    interface eno16780032 ## NIC the VIP is bound to
    virtual_router_id 61  ## VRID, must match on both nodes
    priority 100  ## higher priority wins the master election
    advert_int 1  ## VRRP advertisement interval in seconds
    mcast_src_ip 10.150.27.51
    nopreempt ## do not take the VIP back automatically after recovery
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj  ## must be identical on both nodes
    }
    unicast_peer {
        10.150.27.65
    }
    virtual_ipaddress {
        10.150.27.99/24 ## VIP
    }
    track_script {
        CheckK8sMaster ## run the health-check script defined above
    }

}
EOF

==keepalived.conf on node02==
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://10.150.27.99:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16780032
    virtual_router_id 61
    priority 90
    advert_int 1
    mcast_src_ip 10.150.27.65
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        10.150.27.51
    }
    virtual_ipaddress {
        10.150.27.99/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

2: Start keepalived
systemctl restart keepalived

You can see that the VIP is now bound to node01:
2: eno16780032:  mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:50:56:aa:5b:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.150.27.51/24 brd 10.150.27.255 scope global eno16780032
       valid_lft forever preferred_lft forever
    inet 10.150.27.99/24 scope global secondary eno16780032
       valid_lft forever preferred_lft forever

3: Create the etcd certificates (running this on node01 is sufficient)


1: Set up the cfssl environment
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

2: Generate the CA and etcd certificate files (the hosts list is made up of the etcd node IPs below)
mkdir /root/ssl
cd /root/ssl
Create ca-config.json, ca-csr.json, and etcd-csr.json in this directory, then generate ca.pem, etcd.pem, and etcd-key.pem with cfssl, as sketched below.
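
A minimal sketch of the three files and the cfssl commands, following the conventions of the referenced kubeadm-ha guide; the profile name "kubernetes" and the C/ST/L/O/OU values are placeholders you can adjust, while the hosts list holds the two etcd node IPs:

cat <<EOF > ca-config.json
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat <<EOF > ca-csr.json
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Shanghai", "L": "Shanghai", "O": "k8s", "OU": "System" }]
}
EOF

cat <<EOF > etcd-csr.json
{
  "CN": "etcd",
  "hosts": ["127.0.0.1", "10.150.27.51", "10.150.27.65"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "Shanghai", "L": "Shanghai", "O": "k8s", "OU": "System" }]
}
EOF

# Generate the CA, then sign the etcd server/peer certificate with it.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

This leaves ca.pem, ca-key.pem, etcd.pem, and etcd-key.pem in /root/ssl.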

3: Distribute the node01 etcd certificates to node02
mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n node02 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem node02:/etc/etcd/ssl/
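
Optionally confirm the certificates arrived on node02:
ssh node02 "ls /etc/etcd/ssl"   # should list ca.pem, etcd.pem and etcd-key.pem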

4: Install and configure etcd (on the two master nodes). For production use, etcd should have an odd number of members.


1: Install etcd
yum install etcd -y
mkdir -p /var/lib/etcd

==etcd.service on node01==
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd   \
    --name node01   \
    --cert-file=/etc/etcd/ssl/etcd.pem   \
    --key-file=/etc/etcd/ssl/etcd-key.pem   \
    --peer-cert-file=/etc/etcd/ssl/etcd.pem   \
    --peer-key-file=/etc/etcd/ssl/etcd-key.pem   \
    --trusted-ca-file=/etc/etcd/ssl/ca.pem   \
    --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem   \
    --initial-advertise-peer-urls https://10.150.27.51:2380   \
    --listen-peer-urls https://10.150.27.51:2380   \
    --listen-client-urls https://10.150.27.51:2379,http://127.0.0.1:2379   \
    --advertise-client-urls https://10.150.27.51:2379   \
    --initial-cluster-token etcd-cluster-0   \
    --initial-cluster node01=https://10.150.27.51:2380,node02=https://10.150.27.65:2380   \
    --initial-cluster-state new   \
    --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

==etcd.service on node02==
cat <<EOF > /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd   \
    --name node02   \
    --cert-file=/etc/etcd/ssl/etcd.pem   \
    --key-file=/etc/etcd/ssl/etcd-key.pem   \
    --peer-cert-file=/etc/etcd/ssl/etcd.pem   \
    --peer-key-file=/etc/etcd/ssl/etcd-key.pem   \
    --trusted-ca-file=/etc/etcd/ssl/ca.pem   \
    --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem   \
    --initial-advertise-peer-urls https://10.150.27.65:2380   \
    --listen-peer-urls https://10.150.27.65:2380   \
    --listen-client-urls https://10.150.27.65:2379,http://127.0.0.1:2379   \
    --advertise-client-urls https://10.150.27.65:2379   \
    --initial-cluster-token etcd-cluster-0   \
    --initial-cluster node01=https://10.150.27.51:2380,node02=https://10.150.27.65:2380   \
    --initial-cluster-state new   \
    --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

2: Enable etcd at boot (the etcd cluster only starts once at least two members are up; if startup fails, check the messages log)
 systemctl daemon-reload
 systemctl enable etcd
 systemctl start etcd
 systemctl status etcd

3: Run the health check command on both etcd nodes
etcdctl --endpoints=https://10.150.27.51:2379,https://10.150.27.65:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health

The status shows the cluster is healthy:
member 753a005b7804171f is healthy: got healthy result from https://10.150.27.65:2379
member e8aa5c83cd4f744a is healthy: got healthy result from https://10.150.27.51:2379
cluster is healthy
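
As an optional smoke test, write and read back a key with the same TLS flags (this etcdctl defaults to the v2 API):
etcdctl --endpoints=https://10.150.27.51:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem set /test ok
etcdctl --endpoints=https://10.150.27.65:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem get /test   # should return "ok" from the other member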

5: Install and configure Docker on all nodes


1: Install Docker (the newest Docker release currently supported by kubeadm is 17.03.x)
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y
yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm  -y

Edit /usr/lib/systemd/system/docker.service (for example with vim) and change the ExecStart line:
ExecStart=/usr/bin/dockerd   -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock  --registry-mirror=https://ms3cfraz.mirror.aliyuncs.com

Start Docker:
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker
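
Optionally confirm the daemon options took effect:
docker version
docker info | grep -A1 "Registry Mirrors"   # should show the Aliyun mirror configured above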

6: Install and configure kubeadm


1: Install kubelet, kubeadm, and kubectl on all nodes
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet 

2: Modify the kubelet configuration on all nodes
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# use the same cgroup driver as Docker (cgroupfs)
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# verbose logging, tolerate enabled swap, and pull the pause image from the Aliyun registry
Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

3: After modifying the configuration, every node must reload it
systemctl daemon-reload
systemctl enable kubelet

4: Command completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

7: Initialize the cluster


1: Add the cluster initialization config on node01 and node02 (the same config file on both masters)
cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://10.150.27.51:2379
  - https://10.150.27.65:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "10.150.27.99"
token: "b99a00.a144ef80536d4344"
tokenTTL: "0s"  ## token  
apiServerCertSANs:
- node01
- node02
- node03
- 10.150.27.51
- 10.150.27.65
- 10.150.27.66
- 10.150.27.99
featureGates:
  CoreDNS: true
imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"
EOF

2: Initialize the cluster on node01 first
The config file defines the pod network as 10.244.0.0/16.
As kubeadm init --help shows, the default service network is 10.96.0.0/12.
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf sets the default DNS address cluster-dns=10.96.0.10.
kubeadm init --config config.yaml 

== What to do if initialization fails ==
kubeadm reset
# clean up the generated configs, manifests, and containers
rm -rf /etc/kubernetes/*.conf
rm -rf /etc/kubernetes/manifests/*.yaml
docker ps -a |awk '{print $1}' |xargs docker rm -f
systemctl  stop kubelet

== A successful initialization ends like this ==
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 10.150.27.99:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315

3: Run the following commands on node01
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
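
kubectl should now reach the API server through the VIP:
kubectl cluster-info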

4: Distribute the certificates and keys generated by kubeadm to node02
scp -r /etc/kubernetes/pki  node02:/etc/kubernetes/
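
Optionally verify that node02 received the files:
ssh node02 "ls /etc/kubernetes/pki"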

5: Deploy the flannel network; running this on node01 only is enough
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# image: quay.io/coreos/flannel:v0.10.0-amd64

kubectl create -f  kube-flannel.yml

Check the result:
[root@node01 ~]# kubectl   get node
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    31m       v1.10.0
[root@node01 ~]# kubectl   get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-4x7mg         1/1       Running   0          29m
kube-system   coredns-7997f8864c-zfcck         1/1       Running   0          29m
kube-system   kube-apiserver-node01            1/1       Running   0          29m
kube-system   kube-controller-manager-node01   1/1       Running   0          30m
kube-system   kube-flannel-ds-hw2xb            1/1       Running   0          1m
kube-system   kube-proxy-s265b                 1/1       Running   0          29m
kube-system   kube-scheduler-node01            1/1       Running   0          30m

6: Deploy the dashboard
kubectl create -f kubernetes-dashboard.yaml

Get the token used to log in:
 kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Access the dashboard through Firefox and enter the token to log in:
https://10.150.27.99:30000/
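
The dashboard Service in the manifest below is exposed as a NodePort on 30000, which you can confirm with:
kubectl -n kube-system get svc kubernetes-dashboard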

The contents of kubernetes-dashboard.yaml are as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f 

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

7: Install heapster
[root@node01 ~]# kubectl create -f kube-heapster/influxdb/
deployment.extensions "monitoring-grafana" created
service "monitoring-grafana" created
serviceaccount "heapster" created
deployment.extensions "heapster" created
service "heapster" created
deployment.extensions "monitoring-influxdb" created
service "monitoring-influxdb" created
[root@node01 ~]#  kubectl create -f kube-heapster/rbac/
clusterrolebinding.rbac.authorization.k8s.io "heapster" created
[root@node01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   coredns-7997f8864c-4x7mg                1/1       Running   0          1h
kube-system   coredns-7997f8864c-zfcck                1/1       Running   0          1h
kube-system   heapster-647b89cd4b-wmvmw               1/1       Running   0          39s
kube-system   kube-apiserver-node01                   1/1       Running   0          1h
kube-system   kube-controller-manager-node01          1/1       Running   0          1h
kube-system   kube-flannel-ds-hw2xb                   1/1       Running   0          49m
kube-system   kube-proxy-s265b                        1/1       Running   0          1h
kube-system   kube-scheduler-node01                   1/1       Running   0          1h
kube-system   kubernetes-dashboard-7b44ff9b77-26fkj   1/1       Running   0          44m
kube-system   monitoring-grafana-74bdd98b7d-szvqg     1/1       Running   0          40s
kube-system   monitoring-influxdb-55bbd4b96-95tw7     1/1       Running   0          40s

Visit https://10.150.27.99:30000/#!/ and, after logging in, you can see the monitoring information.
The heapster files:
[root@node01 ~]# tree kube-heapster/
kube-heapster/
├── influxdb
│   ├── grafana.yaml
│   ├── heapster.yaml
│   └── influxdb.yaml
└── rbac
    └── heapster-rbac.yaml

grafana.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: grafana
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-grafana-amd64:v4.4.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  # type: NodePort
  ports:
  - port: 80
    targetPort: 3000
  selector:
    k8s-app: grafana

heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-amd64:v1.4.2
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

influxdb.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/k8sth/heapster-influxdb-amd64:v1.3.3
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb

heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

9: Perform the initialization on node02
kubeadm init --config config.yaml
# then run the same kubectl setup as on node01
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

10: Check the node information
[root@node01 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    1h        v1.10.0
node02    Ready     master    1h        v1.10.0
[root@node01 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE       IP             NODE
kube-system   coredns-7997f8864c-cr725                 1/1       Running   0          40d       10.244.0.2     node01
kube-system   coredns-7997f8864c-qp79g                 1/1       Running   0          40d       10.244.0.3     node01
kube-system   elasticsearch-logging-1                  1/1       Running   0          7d        10.244.0.18    node01
kube-system   heapster-647b89cd4b-pmlwh                1/1       Running   0          11d       10.244.0.15    node01
kube-system   kube-apiserver-node02                    1/1       Running   1          7d        10.150.27.65   node02
kube-system   kube-apiserver-node01                    1/1       Running   0          40d       10.150.27.51   node01
kube-system   kube-controller-manager-node02           1/1       Running   2          7d        10.150.27.65   node02
kube-system   kube-controller-manager-node01           1/1       Running   1          40d       10.150.27.51   node01
kube-system   kube-flannel-ds-7f67k                    1/1       Running   1          40d       10.150.27.65   node02
kube-system   kube-flannel-ds-mjl2d                    1/1       Running   0          40d       10.150.27.51   node01
kube-system   kube-proxy-75t65                         1/1       Running   1          40d       10.150.27.65   node02
kube-system   kube-proxy-mtnnw                         1/1       Running   0          40d       10.150.27.51   node01
kube-system   kube-scheduler-node02                    1/1       Running   1          7d        10.150.27.65   node02
kube-system   kube-scheduler-node01                    1/1       Running   1          40d       10.150.27.51   node01
kube-system   kubernetes-dashboard-7b44ff9b77-zx448    1/1       Running   0          40d       10.244.0.4     node01
kube-system   monitoring-grafana-74bdd98b7d-2grhz      1/1       Running   0          11d       10.244.0.16    node01
kube-system   monitoring-influxdb-55bbd4b96-xxfrr      1/1       Running   0          11d       10.244.0.17    node01

11: Allow the masters to run pods as well (by default the master does not schedule pods)
kubectl taint nodes --all node-role.kubernetes.io/master-
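
To confirm the taint was removed:
kubectl describe node node01 | grep -i taint   # should show <none> once the master taint is gone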

8: Add node03 to the cluster


Run the following command on node03 to join it to the cluster
kubeadm join 10.150.27.99:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847f967dc58a6296911892662b98b1315
[root@node01 ~]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
node01    Ready     master    45m       v1.10.0
node02    Ready     master    15m       v1.10.0
node03    Ready     <none>    13m       v1.10.0

12: Dashboard example: https://10.150.27.99:30000/
With this, the dual-master high-availability setup for K8S 1.10 is complete. If you shut down the network interface on node01 to simulate a failure, the VIP automatically moves over to node02.
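
A simple way to exercise this failover (use console access to node01, since the command drops its network):
# on node01: simulate a failure
ifdown eno16780032
# on node02: the VIP should appear within a few seconds
ip addr show eno16780032 | grep 10.150.27.99
# on node01: restore the interface afterwards
ifup eno16780032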
