How to always use the same version of Prometheus (without the image version changing)
1 Introduction
I want to start Prometheus under the following condition: the image version must stay fixed (here it is pinned to quay.io/coreos/prometheus:0.19.2). To guarantee this, the image is saved to a tar file once, distributed to each node with Ansible, loaded with docker load, and then deployed with kubectl, instead of being pulled from a registry every time.
2 Test environment
+---- master1 ----+  +---- master2 ----+
| (CentOS7.2)     |  | (CentOS7.2)     |
|                 |  |                 |
| prometheus      |  |                 |
| node-exporter   |  | node-exporter   |
|                 |  |                 |
+----- eth0 ------+  +----- eth0 ------+
         |                    |
+--------------------------------------+
|          VMware Workstation          |
+--------------------------------------+
3 Preparation
Download the images used for the test in advance. Only master1 needs them.
[root@master1 prometheus]# docker pull quay.io/coreos/prometheus:0.19.2
[root@master1 prometheus]# docker pull docker.io/prom/node-exporter
Check the docker images.
[root@master1 prometheus]# docker images |grep prom
docker.io/prom/node-exporter latest 7faa2f21a307 8 weeks ago 14.56 MB
quay.io/coreos/prometheus 0.19.2 1adebd6630d9 7 months ago 43.17 MB
Convert the prometheus docker image to a tar file.
[root@master1 ansible]# docker save quay.io/coreos/prometheus:0.19.2 > prometheus.tar
[root@master1 ansible]# ls prometheus.tar
prometheus.tar
Convert the node-exporter docker image to a tar file as well.
[root@master1 ansible]# docker save docker.io/prom/node-exporter > node-exporter.tar
[root@master1 ansible]# ls node-exporter.tar
node-exporter.tar
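The two docker save commands above follow the same naming convention: the last path component of the image reference plus `.tar`. A small sketch of that mapping, using only the two image references from this article (the loop itself is a hypothetical convenience, not part of the original procedure):

```shell
# Derive the tar filename from each pinned image reference.
# This only prints the names; the actual export would be
#   docker save "$image" > "${name}.tar"
for image in quay.io/coreos/prometheus:0.19.2 docker.io/prom/node-exporter
do
    repo="${image%%:*}"    # drop the ":tag" suffix, if present
    name="${repo##*/}"     # keep the last path component
    echo "${name}.tar"
done
```

This prints `prometheus.tar` and `node-exporter.tar`, matching the files created above.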
After converting the images to tar files, delete the images (for testing purposes).
At this point, neither the prometheus image nor the node-exporter image exists on master1.
[root@master1 prometheus]# docker rmi quay.io/coreos/prometheus:0.19.2
[root@master1 prometheus]# docker rmi docker.io/prom/node-exporter
[root@master1 prometheus]# docker images |grep prom
[root@master1 prometheus]#
Check the files. The created tar files are there. The contents of the yaml files are shown later.
[root@master1 prometheus]# ls
node-exporter.tar node-exporter.yaml prometheus-configmap-1.yaml prometheus-deployment.yaml prometheus.tar
[root@master1 prometheus]#
4 Creating the playbook
To keep things simple, the playbook uses the tar files already present on master1 instead of downloading them from a server.
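If the archives were instead hosted on an internal file server, the copy tasks below could be swapped for downloads. A minimal sketch using the stock get_url module, written in the same key=value style as the rest of the playbook (the URL is hypothetical):

```yaml
- name: downloading prometheus.tar
  get_url: url=http://repo.example.local/prometheus.tar dest=/tmp/prometheus.tar
```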
Create the inventory file.
[root@master1 ansible]# vi hosts
[root@master1 ansible]# cat hosts
[master]
master1
master2
Create the playbook file.
[root@master1 ansible]# vi test.yml
[root@master1 ansible]# cat test.yml
- hosts: master1
  tasks:
  - name: copying prometheus.tar
    copy: src=/root/prometheus/prometheus.tar dest=/tmp/
  - name: copying node-exporter.tar
    copy: src=/root/prometheus/node-exporter.tar dest=/tmp/
  - name: unarchiving prometheus.tar
    shell: docker load -i /tmp/prometheus.tar
  - name: unarchiving node-exporter.tar
    shell: docker load -i /tmp/node-exporter.tar
  - name: removing files
    file: path=/tmp/prometheus.tar state=absent
  - name: removing files
    file: path=/tmp/node-exporter.tar state=absent

- hosts: master2
  tasks:
  - name: copying node-exporter.tar
    copy: src=/root/prometheus/node-exporter.tar dest=/tmp/
  - name: unarchiving node-exporter.tar
    shell: docker load -i /tmp/node-exporter.tar
  - name: removing files
    file: path=/tmp/node-exporter.tar state=absent

- hosts: master1
  tasks:
  - name: copying
    copy: src=/root/prometheus/node-exporter.yaml dest=/tmp/
  - name: copying
    copy: src=/root/prometheus/prometheus-configmap-1.yaml dest=/tmp/
  - name: copying
    copy: src=/root/prometheus/prometheus-deployment.yaml dest=/tmp/
  - name:
    shell: kubectl create -f /tmp/prometheus-configmap-1.yaml
  - name:
    shell: kubectl create -f /tmp/prometheus-deployment.yaml
  - name:
    shell: kubectl create -f /tmp/node-exporter.yaml
  - name: removing files
    file: path=/tmp/node-exporter.yaml state=absent
  - name: removing files
    file: path=/tmp/prometheus-configmap-1.yaml state=absent
  - name: removing files
    file: path=/tmp/prometheus-deployment.yaml state=absent
[root@master1 ansible]#
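One caveat: the shell: kubectl create tasks above are not idempotent; re-running the play fails once the objects already exist. If re-runs are needed, one option (a sketch only, not part of the recorded run below) is kubectl apply, which creates or updates:

```yaml
- name: applying manifests (create or update)
  shell: kubectl apply -f /tmp/{{ item }}
  with_items:
  - prometheus-configmap-1.yaml
  - prometheus-deployment.yaml
  - node-exporter.yaml
```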
Check the created files.
[root@master1 ansible]# ls
hosts test.yml
5 Running the playbook
Run the playbook.
[root@master1 ansible]# ansible-playbook -i hosts test.yml
PLAY [master1] *****************************************************************
TASK [setup] *******************************************************************
ok: [master1]
TASK [copying prometheus.tar] **************************************************
changed: [master1]
TASK [copying node-exporter.tar] ***********************************************
changed: [master1]
TASK [unarchiving prometheus.tar] **********************************************
changed: [master1]
TASK [unarchiving node-exporter.tar] *******************************************
changed: [master1]
TASK [removing files] **********************************************************
changed: [master1]
TASK [removing files] **********************************************************
changed: [master1]
PLAY [master2] *****************************************************************
TASK [setup] *******************************************************************
ok: [master2]
TASK [copying node-exporter.tar] ***********************************************
changed: [master2]
TASK [unarchiving node-exporter.tar] *******************************************
changed: [master2]
TASK [removing files] **********************************************************
changed: [master2]
PLAY [master1] *****************************************************************
TASK [setup] *******************************************************************
ok: [master1]
TASK [copying] *****************************************************************
changed: [master1]
TASK [copying] *****************************************************************
changed: [master1]
TASK [copying] *****************************************************************
changed: [master1]
TASK [command] *****************************************************************
changed: [master1]
TASK [command] *****************************************************************
changed: [master1]
TASK [command] *****************************************************************
changed: [master1]
TASK [removing files] **********************************************************
changed: [master1]
TASK [removing files] **********************************************************
changed: [master1]
TASK [removing files] **********************************************************
changed: [master1]
PLAY RECAP *********************************************************************
master1 : ok=17 changed=15 unreachable=0 failed=0
master2 : ok=4 changed=3 unreachable=0 failed=0
[root@master1 ansible]#
6 Checking the playbook results
Check the Pod status. Both prometheus and node-exporter have started.
[root@master1 ansible]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE NODE
node-exporter-qfsp3 1/1 Running 0 1m master1
node-exporter-xe044 1/1 Running 0 1m master2
prometheus-1402422302-nicfk 1/1 Running 0 1m master1
[root@master1 ansible]#
Check the images on master1. The ansible-playbook run has loaded both images.
[root@master1 ansible]# docker images |grep prom
docker.io/prom/node-exporter latest 7faa2f21a307 8 weeks ago 14.56 MB
quay.io/coreos/prometheus 0.19.2 1adebd6630d9 7 months ago 43.17 MB
[root@master1 ansible]#
Check the images on master2 as well. The ansible-playbook run has loaded the node-exporter image.
[root@master2 ~]# docker images |grep prom
docker.io/prom/node-exporter latest 7faa2f21a307 8 weeks ago 14.56 MB
[root@master2 ~]#
7 Accessing Prometheus
Open http://192.168.0.10:30900/ in a browser. The Service in prometheus-deployment.yaml below publishes Prometheus on NodePort 30900, so it is reachable via the master1 address (192.168.0.10).
8 YAML files used for the test
There are three in total.
--------------------------
1. node-exporter.yaml
--------------------------
[root@master1 prometheus]# cat node-exporter.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: node-exporter
    name: node-exporter
  name: node-exporter
spec:
  clusterIP: None
  ports:
  - name: scrape
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  template:
    metadata:
      labels:
        app: node-exporter
        name: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: scrape
      hostNetwork: true
      hostPID: true
--------------------------------
2. prometheus-configmap-1.yaml
--------------------------------
[root@master1 prometheus]# cat prometheus-configmap-1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
    # etcd is living outside of our cluster and we configure
    # it directly.
    - job_name: 'etcd'
      target_groups:
      - targets:
        - 192.168.0.10:2379 # mod (172.17.4.51:2379 => 192.168.0.10:2379)
    - job_name: 'node_exporter' # new add
      target_groups:            # new add
      - targets:                # new add
        - 192.168.0.10:9100     # master1=192.168.0.10
        - 192.168.0.20:9100     # master2=192.168.0.20
    - job_name: 'kubernetes_components'
      kubernetes_sd_configs:
      - api_servers:
        - 'https://kubernetes'
        in_cluster: true
      # This configures Prometheus to identify itself when scraping
      # metrics from Kubernetes cluster components.
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      # Prometheus provides meta labels for each monitoring target. We use
      # these to select targets we want to monitor and to modify labels attached
      # to scraped metrics.
      relabel_configs:
      # Only scrape apiserver and kubelets.
      - source_labels: [__meta_kubernetes_role]
        action: keep
        regex: (?:apiserver|node)
      # Redefine the Prometheus job based on the monitored Kubernetes component.
      - source_labels: [__meta_kubernetes_role]
        target_label: job
        replacement: kubernetes_$1
      # Attach all node labels to the metrics scraped from the components running
      # on that node.
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
[root@master1 prometheus]#
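Incidentally, target_groups is the pre-1.0 configuration syntax that the pinned 0.19.2 image expects; Prometheus 1.0 renamed it to static_configs. This kind of drift between versions is exactly why pinning the image version matters. The same node_exporter job on a newer Prometheus would read (a sketch, not used in this article):

```yaml
- job_name: 'node_exporter'
  static_configs:
  - targets:
    - 192.168.0.10:9100
    - 192.168.0.20:9100
```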
--------------------------------
3. prometheus-deployment.yaml
--------------------------------
[root@master1 prometheus]# cat prometheus-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    name: prometheus
  name: prometheus
spec:
  selector:
    app: prometheus
  type: NodePort
  ports:
  - name: prometheus
    protocol: TCP
    port: 9090
    nodePort: 30900
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      name: prometheus
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: quay.io/coreos/prometheus:0.19.2
        args:
        - '-storage.local.retention=6h'
        - '-storage.local.memory-chunks=500000'
        - '-config.file=/etc/prometheus/prometheus.yml'
        ports:
        - name: web
          containerPort: 9090
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
      nodeSelector:                     # new add
        kubernetes.io/hostname: master1 # new add
      volumes:
      - name: config-volume
        configMap:
          name: prometheus
[root@master1 prometheus]#
9 Images required to start skydns
gcr.io/google_containers/exechealthz 1.0 82a141f5d06d 10 months ago 7.116 MB
gcr.io/google_containers/kube2sky 1.14 a4892326f8cf 10 months ago 27.8 MB
gcr.io/google_containers/etcd-amd64 2.2.1 3ae398308ded 12 months ago 28.19 MB
gcr.io/google_containers/skydns 2015-10-13-8c72f8c 718809956625 15 months ago 40.55 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 15 months ago 350.2 kB
Reference
Original article (Japanese): https://qiita.com/hana_shin/items/bdbd9a76ded1e294e475