ES Log Index Configuration

1. Index configuration for routine services:

  • elasticsearch version 6.5.3
  • logstash version 6.5.3
  • filebeat version 5.5.3
  • kafka version 2.11-0.10.1.1
  • zookeeper version 3.4.6
  • kibana query index: logstash-flume-fusion-*
1.1 Log consumer configuration: Logstash reads from Kafka and writes to ES
cat logstash-6.5.3-es/conf.d/logstash-flume-fusion

input {
  kafka {
    bootstrap_servers => "kafka1.feiersmart.local:9092,kafka2.feiersmart.local:9092,kafka3.feiersmart.local:9092"
    topics => ["logstash-flume-fusion"]
    codec => "json"
  }
}

output {
  if [type] == "flume-fusion" {
    elasticsearch {
      hosts => ["bj-es1.feiersmart.local:9200","bj-es2.feiersmart.local:9200","bj-es3.feiersmart.local:9200"]
      index => "logstash-flume-fusion-%{+YYYY.MM.dd}"
      document_id => "%{type}-%{+YYYY.MM.dd}-%{host}-%{offset}"
    }
  }
}
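The daily index name and the deterministic document_id in the output above can be illustrated with a small sketch (Python purely for illustration, not part of the pipeline; the type/host/offset fields come from the event as in the config). Because the id is derived from the event instead of auto-generated, replaying the same Kafka messages overwrites the same documents rather than creating duplicates:

```python
from datetime import datetime

def index_name(prefix: str, ts: datetime) -> str:
    # mirrors index => "logstash-flume-fusion-%{+YYYY.MM.dd}"
    return f"{prefix}-{ts.strftime('%Y.%m.%d')}"

def document_id(event: dict, ts: datetime) -> str:
    # mirrors document_id => "%{type}-%{+YYYY.MM.dd}-%{host}-%{offset}"
    return f"{event['type']}-{ts.strftime('%Y.%m.%d')}-{event['host']}-{event['offset']}"

ts = datetime(2019, 3, 11)
evt = {"type": "flume-fusion", "host": "app1", "offset": 12345}
print(index_name("logstash-flume-fusion", ts))  # logstash-flume-fusion-2019.03.11
print(document_id(evt, ts))                     # flume-fusion-2019.03.11-app1-12345
```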
1.2 Log producer configuration: Logstash receives filebeat logs and forwards them to Kafka
cat logstash-6.5.3-kafka/conf.d/logstash-flume-fusion

output {
  if [type] == "flume-fusion" {
    #stdout {
    #  codec => rubydebug
    #}
    kafka {
      bootstrap_servers => "kafka1.feiersmart.local:9092,kafka2.feiersmart.local:9092,kafka3.feiersmart.local:9092"
      topic_id => "logstash-flume-fusion"
      codec => "json"
      compression_type => "gzip"
    }
  }
}
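The compression_type => "gzip" setting trades a little producer CPU for much smaller payloads on the wire, and repetitive JSON log events compress very well. A rough illustration (Python, with a made-up event; actual ratios depend on the data):

```python
import gzip
import json

# Hypothetical log event; real events arrive from filebeat as in the config above.
event = {"type": "flume-fusion", "host": "app1",
         "message": "cache_ex_log_detail key=42 hit=false " * 50}
raw = json.dumps(event).encode()
packed = gzip.compress(raw)
print(len(raw), len(packed))  # the gzip payload is substantially smaller
```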
1.3 filebeat configuration
cat filebeat/filebeat.yml

filebeat.prospectors:
- input_type: log
  paths:
    - /work/admin/logs/outerlogs/accservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/deviceservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/noeservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/openbizservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/pushservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/resservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/smartservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/userservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/commerceservicelogs/outer_visit_log.log
    - /work/admin/logs/outerlogs/configservicelogs/outer_visit_log.log
  document_type: outer-context

- input_type: log
  paths:
    - /work/admin/logs/cacheexlogs/*/cache_ex_log_detail.log
  document_type: flume-fusion

output.logstash:
  hosts: ["logstash.feiersmart.com:5044"]
output.console:
  pretty: true

2. Collecting kubernetes container stdout logs

  • filebeat version 6.2.4
  • kibana query index: zz-kubernetes-stdout-*
2.1 Log consumer configuration: Logstash reads from Kafka and writes to ES
cat logstash-6.5.3-es/conf.d/zz-kubernetes-stdout.conf

input {
  kafka {
    bootstrap_servers => "kafka1.feiersmart.local:9092,kafka2.feiersmart.local:9092,kafka3.feiersmart.local:9092"
    topics => ["zz-kubernetes-stdout"]
    codec => "json"
  }
}

output {
  if [fields][service] == "zz-kubernetes-stdout" {
    #stdout {
    #  codec => rubydebug
    #}
    elasticsearch {
      hosts => ["bj-es1.feiersmart.local:9200","bj-es2.feiersmart.local:9200","bj-es3.feiersmart.local:9200"]
      index => "zz-kubernetes-stdout-%{+YYYY.MM.dd}"
      # note: filebeat 6.x removed document_type, so [type] may be unset here
      # and "%{type}" can end up literally in the document id
      document_id => "%{type}-%{+YYYY.MM.dd}-%{host}-%{offset}"
    }
  }
}
2.2 Log producer configuration: Logstash receives filebeat logs and forwards them to Kafka
cat logstash-6.5.3-kafka/conf.d/zz-kubernetes-stdout.conf

filter {
  # drop events from the filebeat-service container itself; the original
  # condition compared two string literals ("name" == "filebeat-service"),
  # which never matches - a field reference is needed (add_docker_metadata
  # puts the container name in [docker][container][name])
  if [docker][container][name] == "filebeat-service" {
    drop {}
  }
}

output {
  if [fields][service] == "zz-kubernetes-stdout" {
    #stdout {
    #  codec => rubydebug
    #}
    kafka {
      bootstrap_servers => "kafka1.feiersmart.local:9092,kafka2.feiersmart.local:9092,kafka3.feiersmart.local:9092"
      topic_id => "zz-kubernetes-stdout"
      codec => "json"
      compression_type => "gzip"
      max_request_size => 20000120
    }
  }
}
2.3 filebeat configuration (versions 6.0 and later can parse container logs)
cat filebeat/filebeat.yml

filebeat.prospectors:
- type: log
  scan_frequency: 10s
  paths:
    - '/work/admin/docker/containers/*/*.log'
  multiline:
    pattern: '^\{'
    negate: true
    match: after
  exclude_lines: ['INFO','info','DBG','debug','filebeat-service','kubernetes-dashboard','peer$','code$','"default"']
  json.message_key: log
  json.keys_under_root: true
  json.add_error_key: true
  json.ignore_decoding_error: true
  processors:
    - add_docker_metadata: ~
  fields:
    service: zz-kubernetes-stdout

output.logstash:
  hosts: ["logstash.feiersmart.com:5044"]
  bulk_max_size: 204800

Here multiline merges the collected container log lines into single events, and exclude_lines then filters out lines we do not want to ship, such as the stdout of the filebeat-service container itself. This is not airtight, though: exclude_lines only matches text inside the raw log line, so lines from that container that do not contain any of the listed strings still get shipped, which is why the Logstash producer in 2.2 also carries a drop filter for filebeat-service.
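A sketch of what those two settings do (Python for illustration only; the pattern and a subset of the exclusion list mirror the config above). With pattern: '^\{', negate: true, match: after, any line that does not start with { is appended to the previous event; exclude_lines then drops events matching any of the listed regexes:

```python
import re

def merge_multiline(lines):
    # negate: true + match: after -> a line NOT matching ^\{ is glued to the previous event
    events = []
    for line in lines:
        if re.match(r'^\{', line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

# subset of the exclude_lines list above; entries are regexes matched against the line text
EXCLUDE = ['INFO', 'info', 'DBG', 'debug', 'filebeat-service']

def keep(event):
    return not any(re.search(p, event) for p in EXCLUDE)

lines = ['{"log":"ERROR boom"}', 'stack trace line', '{"log":"INFO fine"}']
print([e for e in merge_multiline(lines) if keep(e)])
# the INFO event is dropped; the trace line stays glued to the ERROR event
```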

2.4 Extension: run filebeat as a kubernetes DaemonSet to collect logs
cat filebeat-stdout.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat-stdout
  namespace: kube-public
  labels:
    app: filebeat-stdout
spec:
  selector:
    matchLabels:
      k8s-app: filebeat-stdout
  template:
    metadata:
      labels:
        k8s-app: filebeat-stdout
    spec:
      nodeSelector:
        kubernetes.io/role: node
      containers:
      - name: filebeat-stdout
        image: harbor.feiersmart.local/public/filebeat-stdout-test:v0.2
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 250m
            memory: "128Mi"
        volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
          readOnly: true
        - name: containers-logs
          mountPath: /work/admin/docker/containers/
        - name: filebeat-data
          mountPath: /work/admin/filebeat/data/
      imagePullSecrets:
      - name: harbor-pub
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: containers-logs
        hostPath:
          path: /work/admin/docker/containers/
      - name: filebeat-data
        hostPath:
          path: /work/admin/filebeat/data2/

Finally, a screenshot of the results after collection:

(image)

3. Common operations

Check cluster health:

curl '127.0.0.1:9200/_cluster/health?pretty'

Delete a day's log index from ES:

curl -XDELETE localhost:9200/logstash-xy-msp-general-2019.03.11

Check the status of each index:

curl -XGET "http://localhost:9200/_cat/indices?v"

Delete all indices:

curl -XDELETE -u elastic:changeme gdg-dev:9200/_all

Delete a specific index:

curl -XDELETE -u elastic:changeme gdg-dev:9200/test-2019.01.08

Delete custom templates:

curl -XDELETE 'master:9200/_template/temp*'

View the template_1 template:

curl -XGET master:9200/_template/template_1

Get details for a specific index:

curl -XGET 'master:9200/system-syslog-2018.12?pretty'

Get details for all indices:

curl -XGET 'master:9200/_mapping?pretty'
curl 127.0.0.1:9200/_cat/indices

Check index replica counts:

curl -XGET 'localhost:9200/_cat/indices?v'

Modify the number of index replicas (via an index template):

curl -XPUT 'http://localhost:9200/_template/logstash-job-admin-*' -H 'Content-Type: application/json' -d'{
  "index_patterns": ["logstash-devicelog-gather-*"],
  "order": 0,
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'
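Note that an index template only applies to indices created after it is installed; to change replicas on an index that already exists, use its _settings endpoint (PUT <index>/_settings with index.number_of_replicas) instead. A sketch of the two request bodies (Python only to keep the JSON honest; the replica counts are illustrative):

```python
import json

# body for PUT /_template/<name>: affects indices created from now on
template_body = {
    "index_patterns": ["logstash-devicelog-gather-*"],
    "order": 0,
    "settings": {"number_of_shards": 5, "number_of_replicas": 1},
}

# body for PUT /<existing-index>/_settings: changes an index that already exists
settings_body = {"index": {"number_of_replicas": 2}}

print(json.dumps(template_body))
print(json.dumps(settings_body))
```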

Check whether Kafka holds messages for a given type (consume the topic from the beginning):

./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic logstash-ms-service-context --from-beginning