Handling problems that can come up when installing elasticsearch-7.1.1-linux
ERROR: [6] bootstrap checks failed
[1]: max file descriptors [1024] for elasticsearch process is too low, increase to at least [65535]
[2]: memory locking requested for elasticsearch process but memory is not locked
[3]: max number of threads [1024] for user [es] is too low, increase to at least [4096]
[4]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[5]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
[6]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Solution:
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
es soft memlock unlimited
es hard memlock unlimited
vim /etc/security/limits.d/90-nproc.conf
* soft nproc 4096
root soft nproc unlimited
vim /etc/sysctl.conf
vm.max_map_count=262144
sysctl -p
vim /opt/elasticsearch-7.1.1/config/elasticsearch.yml
bootstrap.system_call_filter: false
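After editing these files, confirm that the new limits actually apply to the account that runs Elasticsearch; the limits.conf changes only take effect in a new login session. A minimal check, run as the es user:
ulimit -n                  # max open files, should now report 65536
ulimit -u                  # max user processes/threads, should be at least 4096
ulimit -l                  # max locked memory, should report unlimited
sysctl vm.max_map_count    # should report vm.max_map_count = 262144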
2. Running ES 7 with Docker on CentOS 7:
docker pull elasticsearch:7.1.1
docker run -itd -p 9200:9200 -p 9300:9300 --name es1 elasticsearch:7.1.1
This fails with the error "the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured". The fix is to run the container in single-node mode:
docker run -itd -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" --name es1 elasticsearch:7.1.1
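A quick way to confirm the single-node container came up properly, assuming port 9200 is reachable from the Docker host:
# give the container a few seconds to start, then query it
curl http://localhost:9200                             # should return the node/version JSON
curl "http://localhost:9200/_cluster/health?pretty"    # status should be green or yellow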
3. When setting up an ES cluster, a mistake in the cluster configuration produces errors like the following:
[2019-06-17T21:04:33,627][INFO ][o.e.c.c.ClusterBootstrapService] [node-3] skipping cluster bootstrapping as local node does not match bootstrap requirements: [node-1, node-2]
[2019-06-17T21:04:43,631][WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-3] master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes [node-1, node-2] to bootstrap a cluster: have discovered []; discovery will continue using [10.127.158.45:9300, 10.128.126.189:9300] from hosts providers and [{node-3}{1n0BbUZAQv-BCFFqfVDKAg}{q3dOwtwKSPGT7mLnTqrlGA}{10.127.158.47}{10.127.158.47:9300}{ml.machine_memory=270443114496, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 0, last-accepted version 0 in term 0
The cluster forms correctly as long as the options are filled in following the default configuration file, for example:
[root@node189 config]# egrep -v "^#" elasticsearch.yml
cluster.name: logs-es
node.name: node-1
bootstrap.memory_lock: true
network.host: 10.128.126.189
discovery.seed_hosts: ["10.127.158.45", "10.128.126.189", "10.127.158.47"]
cluster.initial_master_nodes: ["node-1","node-2"]
bootstrap.system_call_filter: false
http.cors.allow-origin: "*"
http.cors.enabled: true
discovery.zen.ping_timeout: 30s
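Once every node has been started with a matching configuration, cluster formation can be checked from any of them (addresses from the example above; security is not yet enabled at this point):
curl "http://10.128.126.189:9200/_cat/nodes?v"              # all three nodes should be listed
curl "http://10.128.126.189:9200/_cluster/health?pretty"    # status turns green once shards are assigned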
4. Configuration for sending system log data straight into ES 7.1.1 with Filebeat 7.1.1:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
output.elasticsearch:
  hosts: ["10.127.158.47:9200"]
  indices:
    - index: "filebeat-10.127.158.47syslogs-%{+yyyy.MM.dd}"
processors:
  - add_locale: ~
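Before starting Filebeat, the configuration can be validated with Filebeat's own test subcommands:
# run from the Filebeat directory
./filebeat test config -c filebeat.yml    # validates the configuration file
./filebeat test output -c filebeat.yml    # checks connectivity to the configured Elasticsearch output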
However, it turned out that the timestamps of the data shipped by Filebeat had 8 hours added to them once they reached ES, as can be seen in Filebeat's own logs:
2019-06-20T16:09:21.457+0800 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":150,"time":{"ms":9},"value":150},"user":{"ticks":120,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":180023}},"memstats":{"gc_next":7123280,"memory_alloc":4971472,"memory_total":15742824}},"filebeat":{"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"read":{"bytes":2355},"write":{"bytes":847}},"pipeline":{"clients":3,"events":{"active":25,"retry":21}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.44,"15":2.43,"5":2.39,"norm":{"1":0.0381,"15":0.038,"5":0.0373}}}}}}
2019-06-20T16:09:51.457+0800 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":30,"time":{"ms":3}},"total":{"ticks":160,"time":{"ms":9},"value":160},"user":{"ticks":130,"time":{"ms":6}}},"handles":{"limit":{"hard":999999,"soft":999999},"open":9},"info":{"ephemeral_id":"9d9c7b26-ee8a-4622-a955-234dd74b1582","uptime":{"ms":210023}},"memstats":{"gc_next":7123280,"memory_alloc":5686592,"memory_total":16457944}},"filebeat":{"events":{"active":3,"added":3},"harvester":{"open_files":1,"running":1}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"pipeline":{"clients":3,"events":{"active":28,"published":3,"total":3}}},"registrar":{"states":{"current":19}},"system":{"load":{"1":2.48,"15":2.43,"5":2.41,"norm":{"1":0.0388,"15":0.038,"5":0.0377}}}}}}
The times come out with the +0800 offset applied. Following the steps in https://wyp0596.github.io/2018/04/25/Common/elk_tz/, the pipeline is not recreated automatically once it has been deleted, and ES can no longer receive the data sent by Filebeat. Solution: put Logstash in the middle, so the data flows filebeat -> logstash -> es. Filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/secure
  include_lines: [".*Failed.*", ".*Accepted.*"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.logstash:
  hosts: ["localhost:5044"]
processors:
  - add_locale: ~
Logstash configuration:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.128.126.189securelog-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
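Logstash can check the pipeline file before it is started for real; a quick sketch (beats-to-es.conf is just a placeholder name for wherever the configuration above was saved):
# run from the Logstash directory
bin/logstash -f beats-to-es.conf --config.test_and_exit    # prints "Configuration OK" if the syntax is valid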
At this point Filebeat's logs still show the timestamps with 8 hours added, but the timestamps of the log entries seen in Kibana are the current time. As mentioned in https://wyp0596.github.io/2018/04/25/Common/elk_tz/, modify the date entry in Filebeat's pipeline configuration.
5. Configuration for collecting ES log data with Filebeat and sending it to Logstash: edit elasticsearch.yml under Filebeat's modules.d directory:
# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.1/filebeat-module-elasticsearch.html
- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*.log"]
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  gc:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/elasticsearch_work/logs/*_index_search_slowlog.log","/home/elasticsearch_work/logs/*_index_indexing_slowlog.log"]
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  deprecation:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
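The module file only takes effect once the module itself is enabled; this can be done and verified from the Filebeat directory:
./filebeat modules enable elasticsearch    # renames modules.d/elasticsearch.yml.disabled to elasticsearch.yml
./filebeat modules list                    # the elasticsearch module should now appear under Enabled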
6. Set a username and password for ES. 1. Stop every ES node, then run the following on one of the nodes:
[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-certutil cert -out config/elastic-certificates.p12 -pass ""
This tool assists you in the generation of X.509 certificates and certificate
signing requests for use with SSL/TLS in the Elastic stack.
The 'cert' mode generates X.509 certificate and private keys.
* By default, this generates a single certificate and key for use
on a single instance.
* The '-multiple' option will prompt you to enter details for multiple
instances and will generate a certificate and key for each one
* The '-in' option allows for the certificate generation to be automated by describing
the details of each instance in a YAML file
* An instance is any piece of the Elastic Stack that requires a SSL certificate.
Depending on your configuration, Elasticsearch, Logstash, Kibana, and Beats
may all require a certificate and private key.
* The minimum required value for each instance is a name. This can simply be the
hostname, which will be used as the Common Name of the certificate. A full
distinguished name may also be used.
* A filename value may be required for each instance. This is necessary when the
name would result in an invalid file or directory name. The name provided here
is used as the directory name (within the zip) and the prefix for the key and
certificate files. The filename is required if you are prompted and the name
is not displayed in the prompt.
* IP addresses and DNS names are optional. Multiple values can be specified as a
comma separated string. If no IP addresses or DNS names are provided, you may
disable hostname verification in your SSL configuration.
* All certificates generated by this tool will be signed by a certificate authority (CA).
* The tool can automatically generate a new CA for you, or you can provide your own with the
-ca or -ca-cert command line options.
By default the 'cert' mode produces a single PKCS#12 output file which holds:
* The instance certificate
* The private key for the instance certificate
* The CA certificate
If you specify any of the following options:
* -pem (PEM formatted output)
* -keep-ca-key (retain generated CA key)
* -multiple (generate multiple certificates)
* -in (generate certificates from an input file)
then the output will be be a zip file containing individual certificate/key files
Certificates written to /opt/elasticsearch-7.1.1/config/elastic-certificates.p12
This file should be properly secured as it contains the private key for
your instance.
This file is a self contained file and can be copied and used 'as is'
For each Elastic product that you wish to configure, you should copy
this '.p12' file to the relevant configuration directory
and then follow the SSL configuration instructions in the product guide.
Send the generated certificate elastic-certificates.p12 to the config directory of every other node (a copy sketch follows the settings below). 2. On every node, edit config/elasticsearch.yml and add:
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
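A minimal way to distribute the certificate, assuming the same /opt/elasticsearch-7.1.1 layout on every node and SSH access as the es user (host addresses are the ones from the cluster example above):
# run on the node where the certificate was generated (10.128.126.189)
scp config/elastic-certificates.p12 es@10.127.158.45:/opt/elasticsearch-7.1.1/config/
scp config/elastic-certificates.p12 es@10.127.158.47:/opt/elasticsearch-7.1.1/config/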
3. Start each ES node so the cluster forms, then run:
[es@node189 elasticsearch-7.1.1]$ ./bin/elasticsearch-setup-passwords interactive
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:            # six 1s, i.e. 111111
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana]:
Reenter password for [kibana]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
Running this command on any one ES node sets the passwords; the other ES nodes are synchronized automatically. Alternatively, bin/elasticsearch-setup-passwords auto can be run, which generates random passwords for the different built-in users. 4. Test logging in:
[es@Antiy45 elasticsearch-7.1.1]$ curl -u "elastic:111111" 10.127.158.45:9200/_cat/health?v
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1561354439 05:33:59 logs-es green 3 3 42 21 0 0 0 0 - 100.0%
5. Configure Kibana: open the config/kibana.yml file, find the lines #elasticsearch.username: "user" and #elasticsearch.password: "pass", set the username to kibana and the password to the six 1s, then start Kibana. When logging into the Kibana web page from a browser, the account to use is elastic, not kibana.
6. Configure Logstash: setting the username in the output section of the Logstash configuration file to logstash_system produces the error [2019-06-24T13:49:25,827][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>403, :url=>"http://10.127.158.47:9200/_bulk"}; the user has to be elastic. Change
output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
to:
output {
  elasticsearch {
    hosts => ["http://10.127.158.47:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-10.127.158.47syslog-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "111111"
  }
}
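If keeping the password in the pipeline file in plain text is a concern, Logstash's keystore can hold it instead and the pipeline can reference it as a variable; a minimal sketch (the key name ES_PWD is arbitrary):
# run from the Logstash directory
bin/logstash-keystore create       # create the keystore once
bin/logstash-keystore add ES_PWD   # prompts for the value; enter the elastic password
The output section then uses password => "${ES_PWD}" instead of the literal value.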
7. Install the Cerebro plugin. After installing ES successfully on an RHEL 6.2 system, download cerebro-0.8.4.tgz and extract it, then edit the ES url and the login username and password at the end of the application.conf file under the conf directory (a sample of that block is sketched at the end of this section). Running ./bin/cerebro --version reports the bug "bad root path"; this problem does not need to be fixed. Following the readme, run:
bin/cerebro -Dhttp.port=1234 -Dhttp.address=10.128.126.189
Accessing port 1234 directly shows that the plugin runs successfully. This problem was not observed on a CentOS 7.6 system.
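For reference, the ES address and login set at the end of conf/application.conf take roughly this shape (a sketch; the name label is arbitrary, and the address and credentials are the ones used earlier in this article):
hosts = [
  {
    host = "http://10.127.158.47:9200"
    name = "logs-es"
    auth = {
      username = "elastic"
      password = "111111"
    }
  }
]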