Installing the Kubernetes flannel Plugin

With the docker service installed on each node, the network interface information shows that the docker0 bridge on every node has the same IP, 172.17.0.1.
[root@wecloud-test-k8s-4 ~]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:8e:7c:23:ea  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.99.196  netmask 255.255.255.0  broadcast 192.168.99.255
        inet6 fe80::f816:3eff:feb1:afe9  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:b1:af:e9  txqueuelen 1000  (Ethernet)
        RX packets 10815343  bytes 1108180112 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6551758  bytes 933543908 (890.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 32212  bytes 1680632 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32212  bytes 1680632 (1.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

This raises the question of how containers on different nodes communicate with one another. Since k8s does not ship its own multi-node networking solution, third-party solutions such as flannel, calico, and weave fill the gap. This article covers the flannel approach.
The flannel documentation is available at https://coreos.com/flannel/docs/latest/
Deployment steps
If you have no special requirements for the flannel version, you can install it directly with yum on CentOS 7:
[root@wecloud-test-k8s-2 ~]# yum install flannel -y

The systemd unit file for the flannel service is /usr/lib/systemd/system/flanneld.service, with the following content:
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
  $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

The unit file references /etc/sysconfig/flanneld, which needs to be configured with the following settings:
# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"

If a host has multiple NICs, specify the external-facing NIC in FLANNEL_OPTIONS.
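A minimal sketch of such a configuration, assuming eth0 is the externally routable NIC (flanneld accepts an -iface option for this):
# /etc/sysconfig/flanneld — eth0 is an assumption; use your external NIC
FLANNEL_OPTIONS="-iface=eth0 -etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
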
Creating the network configuration in etcd
Run the following commands to allocate an IP address range for docker:
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
> --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> mkdir /kube-centos/network
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
> --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
>   --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

This creates the subnet address range and sets the backend type to vxlan. Because flannel's vxlan backend performs relatively poorly, host-gw is recommended for production environments (just replace vxlan with host-gw).
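For example, switching the backend to host-gw could look like the following sketch (note that host-gw requires all nodes to share a layer-2 network, and flanneld must be restarted on every node afterwards):
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 \
>   --ca-file=/etc/kubernetes/ssl/ca.pem \
>   --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
>   --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
>   set /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
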
Starting the flannel service
Start the flannel service on each of the three nodes and enable it to start at boot:
[root@wecloud-test-k8s-2 ~]# systemctl daemon-reload
[root@wecloud-test-k8s-2 ~]# systemctl enable flanneld.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@wecloud-test-k8s-2 ~]# systemctl start flanneld.service 
[root@wecloud-test-k8s-2 ~]# systemctl status flanneld.service 
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-04-13 09:48:57 CST; 4s ago
  Process: 24392 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
 Main PID: 24378 (flanneld)
   CGroup: /system.slice/flanneld.service
           └─24378 /usr/bin/flanneld -etcd-endpoints=https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379 ...

Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld[24378]: warning: ignoring ServerName for user-provided CA for backwards compa...cated
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.594025   24378 main.go:132] Installing signal handlers
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.594196   24378 manager.go:136] Determining IP ad...rface
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.594522   24378 manager.go:149] Using interface w...9.189
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.594547   24378 manager.go:166] Defaulting extern....189)
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.954118   24378 local_manager.go:179] Picking sub...255.0
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.995655   24378 manager.go:250] Lease acquired: 1....0/24
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.996165   24378 network.go:58] Watching for L3 misses
Apr 13 09:48:56 wecloud-test-k8s-2.novalocal flanneld-start[24378]: I0413 09:48:56.996192   24378 network.go:66] Watching for new s...eases
Apr 13 09:48:57 wecloud-test-k8s-2.novalocal systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
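
The ExecStartPost step (mk-docker-opts.sh) writes docker's network options to /run/flannel/docker so that docker can start containers on this node's flannel subnet. The exact values vary per node; an illustrative result:
[root@wecloud-test-k8s-2 ~]# cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=172.30.93.1/24 --ip-masq=true --mtu=1450"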

The flannel service must be started on all three nodes.
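To save typing, a loop like the following works, assuming passwordless root ssh and that the node hostnames resolve (hostnames are this cluster's; adjust as needed):
for node in wecloud-test-k8s-2 wecloud-test-k8s-3 wecloud-test-k8s-4; do
    ssh root@${node} "systemctl daemon-reload && systemctl enable flanneld && systemctl start flanneld"
done
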
You can then see that each node has been assigned a flannel IP:
[root@wecloud-test-k8s-2 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:08:db:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.189/24 brd 192.168.99.255 scope global dynamic eth0
       valid_lft 76224sec preferred_lft 76224sec
    inet6 fe80::f816:3eff:fe08:db33/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:76:5e:fb:fa brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 36:99:fa:cc:37:60 brd ff:ff:ff:ff:ff:ff
    inet 172.30.93.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3499:faff:fecc:3760/64 scope link 
       valid_lft forever preferred_lft forever


[root@wecloud-test-k8s-3 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:7d:65:65 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.185/24 brd 192.168.99.255 scope global dynamic eth0
       valid_lft 62802sec preferred_lft 62802sec
    inet6 fe80::f816:3eff:fe7d:6565/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:0c:11:31:e1 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 3e:14:5e:a1:81:5d brd ff:ff:ff:ff:ff:ff
    inet 172.30.26.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3c14:5eff:fea1:815d/64 scope link 
       valid_lft forever preferred_lft forever


[root@wecloud-test-k8s-4 ~]# ip addr list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether fa:16:3e:b1:af:e9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.196/24 brd 192.168.99.255 scope global dynamic eth0
       valid_lft 81961sec preferred_lft 81961sec
    inet6 fe80::f816:3eff:feb1:afe9/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:8e:7c:23:ea brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 3e:ec:21:e5:e4:df brd ff:ff:ff:ff:ff:ff
    inet 172.30.81.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::3cec:21ff:fee5:e4df/64 scope link 
       valid_lft forever preferred_lft forever

The three flannel IPs are 172.30.93.0 (node1), 172.30.26.0 (node2), and 172.30.81.0 (node3). From the node holding 172.30.93.0, ping the other two to verify that the flannel networks can reach one another:
[root@wecloud-test-k8s-2 ~]# ping 172.30.26.0 
PING 172.30.26.0 (172.30.26.0) 56(84) bytes of data.
64 bytes from 172.30.26.0: icmp_seq=1 ttl=64 time=0.820 ms
64 bytes from 172.30.26.0: icmp_seq=2 ttl=64 time=0.616 ms
64 bytes from 172.30.26.0: icmp_seq=3 ttl=64 time=0.637 ms
^C
--- 172.30.26.0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.616/0.691/0.820/0.091 ms
[root@wecloud-test-k8s-2 ~]# ping 172.30.81.0
PING 172.30.81.0 (172.30.81.0) 56(84) bytes of data.
64 bytes from 172.30.81.0: icmp_seq=1 ttl=64 time=2.70 ms
64 bytes from 172.30.81.0: icmp_seq=2 ttl=64 time=0.675 ms
64 bytes from 172.30.81.0: icmp_seq=3 ttl=64 time=0.612 ms
^C
--- 172.30.81.0 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.612/1.329/2.700/0.969 ms
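
You can also inspect the routes flannel installs. Depending on the flannel version and backend, this is either one route covering the whole flannel network or one route per remote subnet; on this setup (vxlan backend, /32 address on flannel.1) the output typically looks like the following, though treat it as illustrative:
[root@wecloud-test-k8s-2 ~]# ip route | grep flannel
172.30.0.0/16 dev flannel.1 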

Flannel registers its state in the etcd cluster under the prefix set in the configuration file, so it can be queried from etcd:
[root@wecloud-test-k8s-2 ~]# ETCD_ENDPOINTS="https://192.168.99.189:2379,https://192.168.99.185:2379,https://192.168.99.196:2379"
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
> --ca-file=/etc/kubernetes/ssl/ca.pem \
>   --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
>   --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
> ls /kube-centos/network/subnets
/kube-centos/network/subnets/172.30.93.0-24
/kube-centos/network/subnets/172.30.26.0-24
/kube-centos/network/subnets/172.30.81.0-24
[root@wecloud-test-k8s-2 ~]# etcdctl --endpoints=${ETCD_ENDPOINTS} \
>   --ca-file=/etc/kubernetes/ssl/ca.pem \
>   --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
>   --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
>   get /kube-centos/network/config
{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}

Summary
The flannel service provides the network communication between k8s nodes; beyond it, k8s also supports other networking solutions (calico, weave). Networking is a major area where container platforms need optimization, and choosing a specific solution requires testing against your actual environment.
