
ELK Installation and Usage

2018-01-17

Our company was recently hit by a DDoS attack, which exposed just how outdated our logging setup was, so I started evaluating several log analysis systems.
The goal was to avoid reinventing the wheel, so I surveyed the following options:

Alibaba Cloud Log Service: a big-name vendor. Logs can be collected either with their own collection agent (which we did not dare to use) or through an API; alerting and monitoring are separate features. Well suited to customers already deployed on Alibaba Cloud. Paid.

Qiniu Cloud log analysis: not yet publicly available.

ELK: mature and open source; relatively complex to operate, but comparatively flexible.

GoAccess: simple, but single-purpose.

Various other script-based log analyzers.

After comparing them, we ultimately chose ELK. Below I walk through deploying and using it step by step.

I. Introduction

ELK is an acronym for Elasticsearch, Logstash, and Kibana.

Elasticsearch is a distributed, JSON-based search and analytics engine designed for horizontal scalability, high availability, and ease of management.

Logstash is a dynamic data collection pipeline with an extensible plugin ecosystem that works hand in hand with Elasticsearch.

Kibana presents data visually in charts and provides an extensible user interface for configuring and managing the Elastic Stack.

II. Installation

1. Install Elasticsearch

/usr/sbin/groupadd es
/usr/sbin/useradd -g es es
su - es
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.1.1.tar.gz
tar zxvf elasticsearch-6.1.1.tar.gz
mv elasticsearch-6.1.1 es-6.1.1
cd es-6.1.1
mkdir data
mkdir logs
chown -R es:es ./*

Edit the configuration file (config/elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/es/es-6.1.1/data
#
# Path to log files:
#
path.logs: /home/es/es-6.1.1/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.142
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.1.142:9300"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
#
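Because network.host above is bound to a non-loopback address, Elasticsearch 6.x runs its bootstrap checks in enforcing mode and will refuse to start if certain kernel and ulimit settings are too low. A minimal sketch of the two settings that most commonly need raising (run as root; the values are the documented minimums):

```shell
# Raise the mmap count limit that Lucene needs (max_map_count bootstrap check).
sysctl -w vm.max_map_count=262144
echo "vm.max_map_count=262144" >> /etc/sysctl.conf   # persist across reboots

# Let the es user open enough file descriptors (file descriptor bootstrap check).
cat >> /etc/security/limits.conf <<'EOF'
es soft nofile 65536
es hard nofile 65536
EOF
```

The limits.conf change takes effect on the es user's next login, so apply it before `su - es`.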

echo "#!/bin/bash

nohup /home/es/es-6.1.1/bin/elasticsearch &" > start.sh

chmod +x start.sh

Start Elasticsearch:
./start.sh

Test whether it started successfully:

curl http://192.168.1.142:9200

A response like the following indicates success:

{
  "name" : "node-1",
  "cluster_name" : "es",
  "cluster_uuid" : "NNoxUpPjQUq2-PHqjfV54g",
  "version" : {
    "number" : "6.1.1",
    "build_hash" : "bd92e7f",
    "build_date" : "2017-12-17T20:23:25.338Z",
    "build_snapshot" : false,
    "lucene_version" : "7.1.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
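Beyond the banner above, a quick way to confirm the node is actually serving requests is the cluster health API (host and port assumed from the config in this post):

```shell
# Query overall cluster state; ?pretty formats the JSON for reading.
curl -s 'http://192.168.1.142:9200/_cluster/health?pretty'
# A "status" of "green" or "yellow" means the cluster is up; "yellow" is
# normal for a single-node cluster, because replica shards have no second
# node to be allocated to.
```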

2. Install Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.1.1-linux-x86_64.tar.gz
tar zxvf kibana-6.1.1-linux-x86_64.tar.gz
mv kibana-6.1.1-linux-x86_64 /home/pubsrv/kibana-6.1.1

Edit the configuration file (config/kibana.yml):

cd /home/pubsrv/kibana-6.1.1

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.1.142"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://192.168.1.142:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "1qazxsw2!@#$"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"

Create a startup script:

echo "#!/bin/bash

nohup /home/pubsrv/kibana-6.1.1/bin/kibana &" > start.sh

chmod +x start.sh

3. Install X-Pack into Elasticsearch

su - es
cd /home/es/es-6.1.1
./bin/elasticsearch-plugin install x-pack
./start.sh
./bin/x-pack/setup-passwords auto

Record the generated passwords.

4. Install X-Pack into Kibana

Switch back to the root user:

cd /home/pubsrv/kibana-6.1.1
./bin/kibana-plugin install x-pack

Add the credentials to the kibana.yml file:

elasticsearch.username: "kibana"
elasticsearch.password: "<pwd>"

Start Kibana:

./start.sh
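Besides opening it in a browser, you can verify that Kibana came up by hitting its status API; host and port are the ones configured above:

```shell
# Returns a JSON document describing Kibana's overall state and its plugins.
curl -s 'http://192.168.1.142:5601/api/status'
# With X-Pack security enabled you may need to authenticate, e.g. add:
#   -u elastic:<pwd>
```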

5. Configure an Nginx Proxy

server {
    listen 80;
    server_name elk.sharecmd.com;
    access_log logs/elk_proxy.log main1;
    error_log logs/elk_proxy_error.log;
    charset utf8;
    location / {
        proxy_pass http://192.168.1.142:5601;
        proxy_set_header Host $host;
    }
}
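Before reloading, it is worth letting nginx validate the new server block; `nginx -t` parses the configuration and reports errors without disturbing the running server:

```shell
# Test the configuration, and only reload the running nginx if it passes.
nginx -t && nginx -s reload
```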

6. Install Logstash

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.1.2.tar.gz
tar zxvf logstash-6.1.2.tar.gz
mv logstash-6.1.2 /home/pubsrv/logstash-6.1.2
cd /home/pubsrv/logstash-6.1.2/
echo "#!/bin/bash

nohup /home/pubsrv/logstash-6.1.2/bin/logstash -f /home/pubsrv/logstash-6.1.2/config/logstash.conf &" > start.sh
chmod +x start.sh

Add a configuration file at /home/pubsrv/logstash-6.1.2/config/logstash.conf:

# For the detailed structure of this file
# see: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  file {
    type => "messagelog"
    path => ["/home/mosh/logs/nginx/qipai.test-access.log","/tmp/20170601.log.5"]
    start_position => "beginning"
  }

  http {
    host => "192.168.1.142"
    port => "30000"
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detailed config of elasticsearch as an output,
  # see: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"              # the operation on ES
    hosts => "192.168.1.142:9200"  # Elasticsearch host, can be an array
    index => "applog"              # the index to write data to
    user => "elastic"
    password => "1qazxsw2!@#$"
  }
}
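Two quick ways to smoke-test this pipeline (paths and ports are the ones used above). The `-t` flag only checks configuration syntax without starting the pipeline, and the `curl` relies on the http input's default behavior of turning any request body into an event:

```shell
# Syntax-check the pipeline configuration.
/home/pubsrv/logstash-6.1.2/bin/logstash -t -f /home/pubsrv/logstash-6.1.2/config/logstash.conf

# With Logstash running, post a test event to the http input; it should
# shortly appear in the "applog" index in Elasticsearch/Kibana.
curl -XPOST 'http://192.168.1.142:30000' -d 'test event from curl'
```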

At this point the ELK stack is fully installed!
