Docker 1.12: Now with Built-in Orchestration! (a hands-on test)

Docker 1.12: Now with Built-in Orchestration! | Docker Blog
[Reference translation] Docker 1.12: orchestration is finally built in! | Pocketstudio Technology Log
[Breaking] Docker builds orchestration into the latest Docker Engine, enabling cluster operation without external tools. DockerCon 16 - Publickey

Word is that orchestration now works with Docker alone.


So I gave it a try.


Version

  • Version 1.12.0-rc2-beta16 (build: 9493)
  • OS X El Capitan 10.11.5 (15F34)
  • I wanted to use Docker for Mac as the Docker host.

    $ docker node ls
    ID                           NAME  MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
    5cqlhnd928r9pbklk3n7r9nb2 *  moby  Accepted    Ready   Active        Leader
    $
    
    That much comes back, but:
    $ docker-machine ip moby
    Host does not exist: "moby"
    $
    
    Since I can't get at the VM's IP this way, I'll switch to VirtualBox.
    (Apparently Docker for Mac is configured via DHCP, which makes it a poor fit for clustering with swarm.)
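
    If you just want to see which address the built-in node actually advertised, one option is to inspect the node itself. A sketch only: the `self` shorthand and the `ManagerStatus.Addr` field path are assumptions based on the 1.12-era CLI and may differ by build.
    $ # print the address this manager listens on (field path is an assumption)
    $ docker node inspect self --format '{{ .ManagerStatus.Addr }}'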

    Trying a multi-node setup

  • 3 managers
  • 3 workers
  • Build the setup with VirtualBox
    Create the docker machines on VirtualBox
    $ docker-machine create swarm-manager01 --driver virtualbox
    $ docker-machine create swarm-manager02 --driver virtualbox
    $ docker-machine create swarm-manager03 --driver virtualbox
    
    $ docker-machine create swarm-worker01 --driver virtualbox
    $ docker-machine create swarm-worker02 --driver virtualbox
    $ docker-machine create swarm-worker03 --driver virtualbox
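
    The same six machines in loop form (a sketch; identical names and driver as the commands above):
    $ for role in manager worker; do for i in 01 02 03; do docker-machine create "swarm-${role}${i}" --driver virtualbox; done; done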
    
    They're named manager/worker, but at this point each host is just running on its own.
    Check the host info
    $ docker-machine ls
    NAME              ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
    swarm-manager01   *        virtualbox   Running   tcp://192.168.99.103:2376           v1.12.0-rc2
    swarm-manager02   -        virtualbox   Running   tcp://192.168.99.104:2376           v1.12.0-rc2
    swarm-manager03   -        virtualbox   Running   tcp://192.168.99.105:2376           v1.12.0-rc2
    swarm-worker01    -        virtualbox   Running   tcp://192.168.99.106:2376           v1.12.0-rc2
    swarm-worker02    -        virtualbox   Running   tcp://192.168.99.107:2376           v1.12.0-rc2
    swarm-worker03    -        virtualbox   Running   tcp://192.168.99.108:2376           v1.12.0-rc2
    $
    
  • swarm-manager01 : 192.168.99.103:2376
  • swarm-manager02 : 192.168.99.104:2376
  • swarm-manager03 : 192.168.99.105:2376
  • swarm-worker01 : 192.168.99.106:2376
  • swarm-worker02 : 192.168.99.107:2376
  • swarm-worker03 : 192.168.99.108:2376
  • Manager setup


    Run init on swarm-manager01
    $ eval $(docker-machine env swarm-manager01)
    $ docker swarm init --listen-addr 192.168.99.103:2377
    $ # add --auto-accept none or the like if you don't want nodes accepted automatically
    
    Join swarm-manager02 as a manager
    $ eval $(docker-machine env swarm-manager02)
    $ docker swarm join --manager --listen-addr 192.168.99.104:2377 192.168.99.103:2377
    This node joined a Swarm as a manager.
    $ 
    
    Join swarm-manager03 as a manager
    $ eval $(docker-machine env swarm-manager03)
    $ docker swarm join --manager --listen-addr 192.168.99.105:2377 192.168.99.103:2377
    This node joined a Swarm as a manager.
    $ 
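
    The commands above hard-code the IPs; the same flow can pull them from docker-machine instead. A sketch, using only the flags already shown:
    $ MANAGER01_IP=$(docker-machine ip swarm-manager01)
    $ eval $(docker-machine env swarm-manager01)
    $ docker swarm init --listen-addr ${MANAGER01_IP}:2377
    $ for i in 02 03; do eval $(docker-machine env swarm-manager${i}); docker swarm join --manager --listen-addr $(docker-machine ip swarm-manager${i}):2377 ${MANAGER01_IP}:2377; done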
    

    Worker setup


    Run join without the --manager option and the node comes in as a worker.
    Join swarm-worker01
    $ eval $(docker-machine env swarm-worker01)
    $ docker swarm join  --listen-addr 192.168.99.106:2377 192.168.99.103:2377
    This node joined a Swarm as a worker.
    $
    
    Join swarm-worker02
    $ eval $(docker-machine env swarm-worker02)
    $ docker swarm join  --listen-addr 192.168.99.107:2377 192.168.99.103:2377
    This node joined a Swarm as a worker.
    $
    
    Join swarm-worker03
    $ eval $(docker-machine env swarm-worker03)
    $ docker swarm join  --listen-addr 192.168.99.108:2377 192.168.99.103:2377
    This node joined a Swarm as a worker.
    $
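
    The three joins above can also run as one loop (a sketch; same flags, with docker-machine ip supplying each address):
    $ for i in 01 02 03; do eval $(docker-machine env swarm-worker${i}); docker swarm join --listen-addr $(docker-machine ip swarm-worker${i}):2377 192.168.99.103:2377; done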
    

    Try launching Nginx

  • Launch 5 nginx replicas under the service name frontend
    Run these on a manager node
    $ eval $(docker-machine env swarm-manager01)
    
    Create
    $ docker service create --name frontend --replicas 5 -p 80:80/tcp nginx:latest
    0ktteptq3v85p499fsd6hjeah
    $
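
    You can also dump the service definition before checking tasks; treat this as a sketch, since the --pretty flag here is taken from the 1.12 CLI as I understand it:
    $ docker service inspect --pretty frontend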
    
    Check
    Managers also double as workers, so tasks land on them too
    $ docker service tasks frontend
    ID                         NAME        SERVICE   IMAGE         LAST STATE         DESIRED STATE  NODE
    64ikgt2aogz4yvf2420087eeu  frontend.1  frontend  nginx:latest  Running 3 minutes  Running        swarm-manager01
    5pirlmlbm29tagjx7t7ty96o2  frontend.2  frontend  nginx:latest  Running 3 minutes  Running        swarm-manager03
    8rebiu9bf74g2fe971z03d4m7  frontend.3  frontend  nginx:latest  Running 3 minutes  Running        swarm-worker01
    3codqzxy7qui6t62hbu1axu8r  frontend.4  frontend  nginx:latest  Running 3 minutes  Running        swarm-manager02
    0zi5xg7qecxjhgrfhm1bh6q4t  frontend.5  frontend  nginx:latest  Running 3 minutes  Running        swarm-worker03
    
    Scale down from 5 to 2!
    $ docker service scale frontend=2
    frontend scaled to 2
    $ 
    
    $ docker service tasks frontend
    ID                         NAME        SERVICE   IMAGE         LAST STATE         DESIRED STATE  NODE
    3codqzxy7qui6t62hbu1axu8r  frontend.4  frontend  nginx:latest  Running 8 minutes  Running        swarm-manager02
    0zi5xg7qecxjhgrfhm1bh6q4t  frontend.5  frontend  nginx:latest  Running 8 minutes  Running        swarm-worker03
    $
    
    In this state you can still connect via swarm-manager01's IP.
    The worker IPs connect too; requests are load-balanced across the nodes.
    (Responses are sluggish and take a while. Maybe I spun up too many VMs?)
    http://192.168.99.103/
    http://192.168.99.104/
    http://192.168.99.105/
    http://192.168.99.106/
    http://192.168.99.107/
    http://192.168.99.108/
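
    A quick way to hit each node once and confirm the routing mesh answers everywhere (a sketch; plain curl, nothing swarm-specific):
    $ for ip in 192.168.99.{103..108}; do curl -s -o /dev/null -w "%{http_code}  ${ip}\n" "http://${ip}/"; done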

    Scale from 2 up to 50

    $ docker service scale frontend=50 ; while true; do date; docker service tasks frontend; sleep 1; done
    frontend scaled to 50
    2016年 6月21日 火曜日 18時08分32秒 JST
    ID                         NAME         SERVICE   IMAGE         LAST STATE                   DESIRED STATE  NODE
    53zq623h6pydgwz8gibbyg0ma  frontend.1   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    b2342f9gr6ddzhgxkv4ehao4w  frontend.2   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    5mofgkdru0g62y80g3yzr8po4  frontend.3   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    3codqzxy7qui6t62hbu1axu8r  frontend.4   frontend  nginx:latest  Running 16 minutes           Running        swarm-manager02
    0zi5xg7qecxjhgrfhm1bh6q4t  frontend.5   frontend  nginx:latest  Running 16 minutes           Running        swarm-worker03
    bpt3lqq15jh1rqjli7pejcafc  frontend.6   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    892j9gm1zjfprktjd0h8v65zp  frontend.7   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    93f0xh7jo99leq5qfcmv3fwg2  frontend.8   frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    bd028u32w2dn0qra7qdougim2  frontend.9   frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    3lu7qnvlpv7hcet94ex3mkktr  frontend.10  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    704qhs8fafrkqu892bp1mfuvu  frontend.11  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    60q9h6739t6quwhb0s3ry5vic  frontend.12  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    2gn9go7e8vspa1tlfrs81bmtv  frontend.13  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    edph6viy94ojhcup2k5hkj6wa  frontend.14  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    4rfworr36efy3xdblknyt0afc  frontend.15  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    1m22vxsnzq5ycs7thwo5bxqsf  frontend.16  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    7pgdqys387811krx3hnkyeo10  frontend.17  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    b5wjgv6qhe0bq124xkrjjqj5p  frontend.18  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    926l77zjfvoegwalopmxoto98  frontend.19  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    dyt78zxvpx5nefgipbpp2f6bt  frontend.20  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    d67a68cysagkezm5s8uzjcv6c  frontend.21  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    cmd6wqb5o6h7vhggr0oc1agyc  frontend.22  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    7wdzv2opjk3vamx9vzz7b8jtb  frontend.23  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    7fh7k7lp1p8g45x8giud05rsi  frontend.24  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    9pgcixsz2i2j9wfhqnomu78bk  frontend.25  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    aw0dlr1n4ncddwfnkkls3qpxu  frontend.26  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    7swtukjp7cjxir3cj9kghwaca  frontend.27  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    d344xub98wzp7mqfuvgrj9kgr  frontend.28  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    4le0yrl7xzaijna82ffff72xp  frontend.29  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    48vkny4d13u2fjskty1pfro1i  frontend.30  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    6g03xa6lgfd8gqctpszpk3qey  frontend.31  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    eoan69u29a6ah0vi05ihl8ccp  frontend.32  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    9ie6s3b7viybtbyc7g32lsddj  frontend.33  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    4t7n6owvqbd3wd2fyhb6p033t  frontend.34  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    6adu64lap9xp8if0f32e6uvqd  frontend.35  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    09v5jml347eor4a83xfh7ng6a  frontend.36  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    aie6px9sg75ju3pbnh5vs8pyt  frontend.37  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    3k3qtoszxtwcm3junsjuo9bxg  frontend.38  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    8wikdzvj293fo0qe7q477pkcp  frontend.39  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    2jbil2dexyjd6jh85182m5n15  frontend.40  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    29bphav3mp67yuimkqpdvmx18  frontend.41  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    0dt1tc8707uf519ua2gc07qcj  frontend.42  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker01
    6vtc5qkftqtpac4xegzs07qlp  frontend.43  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager03
    aq2ya45sturcn6ekr1ejeggj7  frontend.44  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    5e4mv8ap983bnd0e8vxz4o0p8  frontend.45  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    4e46ffegsx2j750hyehjdnlht  frontend.46  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker03
    7dpfq95k7o4qm9nyranb6r1yh  frontend.47  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    8xvdmtxrd9ucprlusf7vn7f9j  frontend.48  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager02
    8d5t4d52kdx2btmzv8givx818  frontend.49  frontend  nginx:latest  Assigned Less than a second  Running        swarm-manager01
    10wg1ljjavdem11u4vxsnv122  frontend.50  frontend  nginx:latest  Assigned Less than a second  Running        swarm-worker02
    $
    

    It spins up in seconds!


    Tasks run evenly across the nodes
    $ docker service tasks frontend | grep -v NODE |awk '{print $NF}' | sort | uniq -c
       9 swarm-manager01
       8 swarm-manager02
       8 swarm-manager03
       9 swarm-worker01
       8 swarm-worker02
       8 swarm-worker03
    $
    

    What happens when a worker dies?


    Hurry up and orchestrate!


    kill swarm-worker03

    $ docker-machine kill swarm-worker03
    

    About 2 seconds later?


    (everything is on a single Mac, after all)
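
    To watch the rescheduling as it happens, the one-liner pattern from the scale-up works here too (a sketch reusing the commands above):
    $ while true; do docker service tasks frontend | grep -v NODE | awk '{print $NF}' | sort | uniq -c; sleep 1; done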

    $ # DESIRED STATE flips to Accepted and the tasks move to other nodes
    $ docker service tasks frontend | grep -v NODE |awk '{print $NF}' | sort | uniq -c
      10 swarm-manager01
      10 swarm-manager02
      10 swarm-manager03
      10 swarm-worker01
      10 swarm-worker02
    $ 
    

    Evenly balanced
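
    As a cross-check, docker node ls on a manager should now show swarm-worker03 as down (the exact status wording in this build is an assumption):
    $ docker node ls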


    Wrapping up
