Installing a Kafka cluster automatically with an Ansible playbook
1. Host information
172.21.184.43  kafka, zk
172.21.184.44  kafka, zk
172.21.184.45  kafka, zk
172.21.244.7   ansible
2. Software versions
OS: CentOS Linux release 7.5.1804 (Core)
kafka: kafka_2.11-2.2.0
zookeeper: 3.4.8
ansible: 2.7.10
2. Preparing the configuration
1. Write the playbook configuration files. The full directory layout, as shown by tree:
tree
.
├── kafka
│   ├── group_vars
│   │   └── kafka
│   ├── hosts
│   ├── kafkainstall.yml
│   └── templates
│       ├── server.properties-1.j2
│       ├── server.properties-2.j2
│       ├── server.properties-3.j2
│       └── server.properties.j2
└── zookeeper
    ├── group_vars
    │   └── zook
    ├── hosts
    ├── templates
    │   └── zoo.cfg.j2
    └── zooKeeperinstall.yml
2. Create the required directories
mkdir /chj/ansibleplaybook/kafka/group_vars -p
mkdir /chj/ansibleplaybook/kafka/templates
mkdir /chj/ansibleplaybook/zookeeper/group_vars -p
mkdir /chj/ansibleplaybook/zookeeper/templates
3. ZooKeeper configuration files
A. Write ZooKeeper's group_vars file
vim /chj/ansibleplaybook/zookeeper/group_vars/zook
---
zk01server: 172.21.184.43
zk02server: 172.21.184.44
zk03server: 172.21.184.45
zookeeper_group: work
zookeeper_user: work
zookeeper_dir: /chj/data/zookeeper
zookeeper_appdir: /chj/app/zookeeper
zk01myid: 43
zk02myid: 44
zk03myid: 45
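The myid values above are not arbitrary: the playbook later writes each node's myid by taking the last octet of its IP address (`hostname -i | cut -d '.' -f 4`), so zk01myid–zk03myid must equal those octets. A quick local sanity check of that relationship:

```shell
# Derive myid from each IP's last octet, the same way the playbook does.
for ip in 172.21.184.43 172.21.184.44 172.21.184.45; do
  echo "$ip -> myid $(echo "$ip" | cut -d '.' -f 4)"
done
# -> 172.21.184.43 -> myid 43  (and likewise 44, 45)
```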
B. ZooKeeper's templates file
vim /chj/ansibleplaybook/zookeeper/templates/zoo.cfg.j2
tickTime=2000
initLimit=500
syncLimit=20
dataDir={{ zookeeper_dir }}
dataLogDir=/chj/data/log/zookeeper/
clientPort=10311
maxClientCnxns=1000000
server.{{ zk01myid }}={{ zk01server }}:10301:10331
server.{{ zk02myid }}={{ zk02server }}:10302:10332
server.{{ zk03myid }}={{ zk03server }}:10303:10333
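Each `server.N=host:portA:portB` line names a quorum member: N is that node's myid, the first port is for follower-to-leader traffic and the second for leader election. With the group_vars above, the template should render to exactly the three lines below (a hand-substituted check, not an actual Jinja render):

```shell
# Hand-substituted render of the three quorum lines the template should
# produce; grep confirms all three server entries are present.
f="$(mktemp)"
cat <<'EOF' > "$f"
server.43=172.21.184.43:10301:10331
server.44=172.21.184.44:10302:10332
server.45=172.21.184.45:10303:10333
EOF
grep -c '^server\.' "$f"   # -> 3
```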
C. ZooKeeper's hosts file
vim /chj/ansibleplaybook/zookeeper/hosts
[zook]
172.21.184.43
172.21.184.44
172.21.184.45
D. ZooKeeper's installation yml file
vim /chj/ansibleplaybook/zookeeper/zooKeeperinstall.yml
---
- hosts: "zook"
  gather_facts: no
  tasks:
  - name: Create zookeeper group
    group:
      name: '{{ zookeeper_group }}'
      state: present
    tags:
      - zookeeper_group
  - name: Create zookeeper user
    user:
      name: '{{ zookeeper_user }}'
      group: '{{ zookeeper_group }}'
      state: present
      createhome: no
    tags:
      - zookeeper_user
  - name: Check whether zookeeper is already installed
    stat:
      path: /chj/app/zookeeper
    register: node_files
  - debug:
      msg: "{{ node_files.stat.exists }}"
  - name: Install the JDK if it is missing
    shell: |
      if [ ! -f /usr/local/jdk/bin/java ]; then
        curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
        tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/
        mv /usr/local/jdk1.8.0_121 /usr/local/jdk
        ln -s /usr/local/jdk/bin/java /sbin/java
      fi
  - name: Unpack zookeeper
    unarchive:
      src: http://ops.chehejia.com:9090/pkg/zookeeper.tar.gz
      dest: /chj/app/
      copy: no
    when: not node_files.stat.exists
    register: unarchive_msg
  - debug:
      msg: "{{ unarchive_msg }}"
  - name: Create zookeeper data and log directories
    shell: |
      if [ ! -d /chj/data/zookeeper ] && [ ! -d /chj/data/log/zookeeper ]; then
        mkdir -p /chj/data/{zookeeper,log/zookeeper}
      fi
  - name: Fix ownership of the data and app directories
    shell: chown work:work -R /chj/{data,app}
    when: not node_files.stat.exists
  - name: Write the zk myid file (last octet of the host IP)
    shell: "hostname -i | cut -d '.' -f 4 | awk '{print $1}' > /chj/data/zookeeper/myid"
  - name: Config zookeeper service
    template:
      src: zoo.cfg.j2
      dest: /chj/app/zookeeper/conf/zoo.cfg
      mode: 0755
  - name: Reload systemd
    command: systemctl daemon-reload
  - name: Restart ZooKeeper service
    shell: sudo su - work -c "/chj/app/zookeeper/console start"
  - name: Status ZooKeeper service
    shell: "sudo su - work -c '/chj/app/zookeeper/console status'"
    register: zookeeper_status_result
    ignore_errors: True
  - debug:
      msg: "{{ zookeeper_status_result }}"
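Several tasks above guard expensive steps with an existence check: the stat of /chj/app/zookeeper gates the unarchive via `when`, and the `[ ! -d ... ]` tests make the shell steps safe to rerun. The same idempotency pattern in plain shell, run against a throwaway temp directory so it can be tried anywhere:

```shell
# Idempotent directory creation: mkdir only runs when the target layout
# is missing, mirroring the playbook's stat + when guard.
d="$(mktemp -d)"
if [ ! -d "$d/zookeeper" ] && [ ! -d "$d/log/zookeeper" ]; then
  mkdir -p "$d/zookeeper" "$d/log/zookeeper"
fi
ls "$d"   # both directories now exist; rerunning the block changes nothing
```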
4. Kafka configuration files
A. Write Kafka's group_vars file
vim /chj/ansibleplaybook/kafka/group_vars/kafka
---
kafka01: 172.21.184.43
kafka02: 172.21.184.44
kafka03: 172.21.184.45
kafka_group: work
kafka_user: work
log_dir: /chj/data/kafka
brokerid1: 1
brokerid2: 2
brokerid3: 3
zk_addr: 172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka
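The trailing /kafka in zk_addr is a ZooKeeper chroot: all brokers keep their metadata under that znode, so every broker must be given exactly the same suffix. A small shell check that splits the chroot off the connection string:

```shell
# Split the ZooKeeper chroot (everything after the last '/') off zk_addr.
zk_addr="172.21.184.43:10311,172.21.184.44:10311,172.21.184.45:10311/kafka"
chroot_path="${zk_addr##*/}"
hosts_part="${zk_addr%/*}"
echo "chroot: $chroot_path"   # -> chroot: kafka
echo "hosts:  $hosts_part"
```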
B. Kafka's templates file
vim /chj/ansibleplaybook/kafka/templates/server.properties-1.j2
broker.id={{ brokerid1 }}  ## in server.properties-2.j2 / server.properties-3.j2 use brokerid2 / brokerid3
auto.create.topics.enable=false
auto.leader.rebalance.enable=true
broker.rack=/default-rack
compression.type=snappy
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
fetch.message.max.bytes=10485760
fetch.purgatory.purge.interval.requests=10000
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
host.name={{ kafka01 }}  ## in server.properties-2.j2 / server.properties-3.j2 use kafka02 / kafka03
listeners=PLAINTEXT://{{ kafka01 }}:9092  ## in server.properties-2.j2 / server.properties-3.j2 use kafka02 / kafka03
log.cleanup.interval.mins=1200
log.dirs={{ log_dir }}
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=10000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=1
offsets.topic.segment.bytes=104857600
port=9092
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=10485760
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect={{ zk_addr }}
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
group.initial.rebalance.delay.ms=10000
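The three per-broker templates differ only in the broker.id, host.name, and listeners lines. Rather than maintaining them by hand, the -2 and -3 variants can be generated from -1; a hypothetical sed sketch, run here against a tiny stand-in file so it is safe to test:

```shell
# Generate a "-2" template from a "-1" stand-in by swapping the two
# per-broker variables; the real files would use the same substitutions.
tmp="$(mktemp -d)"
printf 'broker.id={{ brokerid1 }}\nlisteners=PLAINTEXT://{{ kafka01 }}:9092\n' \
  > "$tmp/server.properties-1.j2"
sed -e 's/brokerid1/brokerid2/' -e 's/kafka01/kafka02/' \
    "$tmp/server.properties-1.j2" > "$tmp/server.properties-2.j2"
cat "$tmp/server.properties-2.j2"
```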
C. Kafka's hosts file
vim /chj/ansibleplaybook/kafka/hosts
[kafka]
172.21.184.43
172.21.184.44
172.21.184.45
D. Kafka's installation yml file
vim /chj/ansibleplaybook/kafka/kafkainstall.yml
---
- hosts: "kafka"
  gather_facts: yes
  tasks:
  - name: Obtain eth0 ipv4 address
    debug: msg={{ ansible_default_ipv4.address }}
    when: ansible_default_ipv4.alias == "eth0"
  - name: Create kafka group
    group:
      name: '{{ kafka_group }}'
      state: present
    tags:
      - kafka_group
  - name: Create kafka user
    user:
      name: '{{ kafka_user }}'
      group: '{{ kafka_group }}'
      state: present
      createhome: no
    tags:
      - kafka_user
  - name: Check whether kafka is already installed
    stat:
      path: /chj/app/kafka
    register: node_files
  - debug:
      msg: "{{ node_files.stat.exists }}"
  - name: Install the JDK if it is missing
    shell: |
      if [ ! -f /usr/local/jdk/bin/java ]; then
        curl -o /usr/local/jdk1.8.0_121.tar.gz http://download.pkg.chj.cloud/chj_jdk1.8.0_121.tar.gz
        tar xf /usr/local/jdk1.8.0_121.tar.gz -C /usr/local/
        mv /usr/local/jdk1.8.0_121 /usr/local/jdk
        ln -s /usr/local/jdk/bin/java /sbin/java
      fi
  - name: Unpack kafka
    unarchive:
      src: http://ops.chehejia.com:9090/pkg/kafka.tar.gz
      dest: /chj/app/
      copy: no
    when: not node_files.stat.exists
    register: unarchive_msg
  - debug:
      msg: "{{ unarchive_msg }}"
  - name: Create kafka data and log directories
    shell: |
      if [ ! -d /chj/data/kafka ] && [ ! -d /chj/data/log/kafka ]; then
        mkdir -p /chj/data/{kafka,log/kafka}
      fi
  - name: Fix ownership of the data and app directories
    shell: chown work:work -R /chj/{data,app}
    when: not node_files.stat.exists
  - name: Config kafka01 service
    template:
      src: server.properties-1.j2
      dest: /chj/app/kafka/config/server.properties
      mode: 0755
    when: ansible_default_ipv4.address == "172.21.184.43"
  - name: Config kafka02 service
    template:
      src: server.properties-2.j2
      dest: /chj/app/kafka/config/server.properties
      mode: 0755
    when: ansible_default_ipv4.address == "172.21.184.44"
  - name: Config kafka03 service
    template:
      src: server.properties-3.j2
      dest: /chj/app/kafka/config/server.properties
      mode: 0755
    when: ansible_default_ipv4.address == "172.21.184.45"
  - name: Reload systemd
    command: systemctl daemon-reload
  - name: Restart kafka service
    shell: sudo su - work -c "/chj/app/kafka/console start"
  - name: Status kafka service
    shell: "sudo su - work -c '/chj/app/kafka/console status'"
    register: kafka_status_result
    ignore_errors: True
  - debug:
      msg: "{{ kafka_status_result }}"
PS: Replace the download URLs for the JDK, Kafka, and ZooKeeper binary packages with addresses you can actually reach.
3. Deployment
1. Deploy the ZooKeeper cluster first.
cd /chj/ansibleplaybook/zookeeper/
ansible-playbook -i hosts zooKeeperinstall.yml -b
2. Then build the Kafka cluster.
cd /chj/ansibleplaybook/kafka/
ansible-playbook -i hosts kafkainstall.yml -b