The goal is to run Suricata as an IDS monitoring the office network's egress traffic, with an ELK (Elasticsearch + Logstash + Kibana) cluster for data storage and visualization.
Preparation: configure port mirroring on the office egress core switch and connect the mirror port to a server. Everything below is done on that server.
Update the base system:
yum update
yum upgrade
Install the base dependencies:
yum -y install gcc libpcap-devel pcre-devel libyaml-devel file-devel zlib-devel jansson-devel nss-devel libcap-ng-devel libnet-devel tar make libnetfilter_queue-devel lua-devel
For any other base components, just install whatever turns out to be missing later.
Software to install: suricata, LuaJIT, Hyperscan, elasticsearch (cluster), elasticsearch-head, logstash, filebeat, kibana.
elasticsearch, logstash, filebeat, and kibana must all be the same version.
Target machines: 192.168.1.101 (primary), 192.168.1.102, 192.168.1.103
Note: for performance reasons, none of the monitoring components should be installed via Docker.
Deployment target: 192.168.1.101
Suricata official site:
Suricata and ELK traffic detection:
Install the build dependencies:
yum install wget libpcap-devel libnet-devel pcre-devel gcc-c++ automake autoconf libtool make libyaml-devel zlib-devel file-devel jansson-devel nss-devel epel-release lz4-devel rustc cargo
Download, build, and install Suricata:
wget https://www.openinfosecfoundation.org/download/suricata-6.0.2.tar.gz
tar -xvf suricata-6.0.2.tar.gz
cd suricata-6.0.2
# Note: avoid changing these default paths; otherwise later troubleshooting becomes painful
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-geoip --enable-luajit --with-libluajit-includes=/usr/local/include/luajit-2.0/ --with-libluajit-libraries=/usr/local/lib/ --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib/ --enable-profiling
The configure summary:
Suricata Configuration:
  AF_PACKET support:                       yes
  eBPF support:                            no
  XDP support:                             no
  PF_RING support:                         no
  NFQueue support:                         no
  NFLOG support:                           no
  IPFW support:                            no
  Netmap support:                          no
  DAG enabled:                             no
  Napatech enabled:                        no
  WinDivert enabled:                       no
  Unix socket enabled:                     yes
  Detection enabled:                       yes
  Libmagic support:                        yes
  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  hiredis support:                         no
  hiredis async with libevent:             no
  Prelude support:                         no
  PCRE jit:                                yes
  LUA support:                             yes, through luajit
  libluajit:                               yes
  GeoIP2 support:                          yes
  Non-bundled htp:                         no
  Hyperscan support:                       yes
  Libnet support:                          yes
  liblz4 support:                          yes
  Rust support:                            yes
  Rust strict mode:                        no
  Rust compiler path:                      /usr/bin/rustc
  Rust compiler version:                   rustc 1.50.0 (Red Hat 1.50.0-1.el7)
  Cargo path:                              /usr/bin/cargo
  Cargo version:                           cargo 1.50.0
  Cargo vendor:                            yes
  Python support:                          yes
  Python path:                             /usr/bin/python3
  Python distutils                         yes
  Python yaml                              yes
  Install suricatactl:                     yes
  Install suricatasc:                      yes
  Install suricata-update:                 yes
  Profiling enabled:                       yes
  Profiling locks enabled:                 no
  Plugin support (experimental):           yes

Development settings:
  Coccinelle / spatch:                     no
  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no

Generic build parameters:
  Installation prefix:                     /usr
  Configuration directory:                 /etc/suricata/
  Log directory:                           /var/log/suricata/
  --prefix                                 /usr
  --sysconfdir                             /etc
  --localstatedir                          /var
  --datarootdir                            /usr/share
  Host:                                    x86_64-pc-linux-gnu
  Compiler:                                gcc (exec name) / g++ (real)
  GCC Protect enabled:                     no
  GCC march native enabled:                yes
  GCC Profile enabled:                     no
  Position Independent Executable enabled: no
  CFLAGS                                   -g -O2 -std=gnu99 -march=native -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS
  SECCFLAGS
Run the install:
make && make install
Install the remaining pieces:
# This one target is enough — it also installs the configuration, rules, and so on
make install-full
Edit the configuration file:
vim /etc/suricata/suricata.yaml
# Part 1: adjust these variables and comment out the unused ones
vars:
  address-groups:
    HOME_NET: "[10.10.11.0/24,172.16.10.0/24]"
    DNS_NET: "[10.10.10.100,10.10.10.101,10.10.10.102]"
    #HOME_NET: "[10.0.0.0/8]"
    #HOME_NET: "[172.16.0.0/12]"
    #HOME_NET: "any"
    EXTERNAL_NET: "!$HOME_NET"
    #EXTERNAL_NET: "any"
    HTTP_SERVERS: "$HOME_NET"
    #SMTP_SERVERS: "$HOME_NET"
    #SQL_SERVERS: "$HOME_NET"
    DNS_SERVERS: "$DNS_NET"
    #TELNET_SERVERS: "$HOME_NET"
    #AIM_SERVERS: "$EXTERNAL_NET"
    #DC_SERVERS: "$HOME_NET"
    #DNP3_SERVER: "$HOME_NET"
    #DNP3_CLIENT: "$HOME_NET"
    #MODBUS_CLIENT: "$HOME_NET"
    #MODBUS_SERVER: "$HOME_NET"
    #ENIP_CLIENT: "$HOME_NET"
    #ENIP_SERVER: "$HOME_NET"
  port-groups:
    HTTP_PORTS: "80,443"
    #SHELLCODE_PORTS: "!80"
    #ORACLE_PORTS: 1521
    SSH_PORTS: 22
    #DNP3_PORTS: 20000
    #MODBUS_PORTS: 502
    #FILE_DATA_PORTS: "[$HTTP_PORTS,110,143]"
    FTP_PORTS: 21
    #GENEVE_PORTS: 6081
    #VXLAN_PORTS: 4789
    #TEREDO_PORTS: 3544
# Part 2: move the log directory. The /home partition is usually hundreds of GB, while / is typically only tens of GB
default-log-dir: /home/suricata/log/suricata/
# Part 3: adjust some of the output parameters
  - http:
      extended: yes     # enable this for extended logging information
      custom: [ accept, accept_charset, accept_datetime, accept_encoding, accept_language,
                accept_range, age, allow, authorization, cache_control, connection,
                content_encoding, content_language, content_length, content_location,
                content_md5, content_range, content_type, cookie, date, dnt, etag, expires,
                from, last_modified, link, location, max_forwards, org_src_ip, origin,
                pragma, proxy_authenticate, proxy_authorization, range, referrer, refresh,
                retry_after, server, set_cookie, te, trailer, transfer_encoding,
                true_client_ip, upgrade, vary, via, warning, www_authenticate,
                x_authenticated_user, x_bluecoat_via, x_flash_version, x_forwarded_proto,
                x_requested_with ]
      dump-all-headers: [both]
  - dns:
      enabled: yes
      version: 1
      requests: yes
      responses: yes
  - tls:
      extended: yes
      session-resumption: yes
      custom: [ subject, issuer, session_resumed, serial, fingerprint, sni, version,
                not_before, not_after, certificate, chain, ja3 ]
# Part 4: search globally and replace the interface name with your own.
# My NIC is p2p1, so every eth0/eth1/eth2... becomes p2p1
- interface: p2p1
# Part 5: search globally and disable all checksum validation; otherwise there are
# huge numbers of false positives and they eat disk space
checksum-checks: no
# Part 6: enable only the rules we use and comment out the rest
default-rule-path: /var/lib/suricata/rules
# Copy the rules from /usr/share/suricata/rules into that directory first
rule-files:
  # - suricata.rules
  # - app-layer-events.rules
  # - decoder-events.rules
  # - dhcp-events.rules
  # - dnp3-events.rules
  - dns-events.rules
  - files.rules
  - http-events.rules
  # - ipsec-events.rules
  # - kerberos-events.rules
  # - modbus-events.rules
  # - nfs-events.rules
  # - ntp-events.rules
  # - smb-events.rules
  # - smtp-events.rules
  # - stream-events.rules
  - tls-events.rules
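As a quick sanity check on the rules kept in the rule-files list, a small script can list the signature messages and SIDs enabled in a .rules file. This is a minimal sketch, not Suricata's own parser — the simplified regexes and the sample rule text are only illustrative:

```python
import re

def parse_rules(text):
    """Extract (sid, msg) pairs from Suricata rule text.

    Simplified sketch: skips blank and commented-out lines and
    ignores multi-line rules.
    """
    sigs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        msg = re.search(r'msg:"([^"]*)"', line)
        sid = re.search(r'sid:(\d+)', line)
        if msg and sid:
            sigs.append((int(sid.group(1)), msg.group(1)))
    return sigs

sample = '''
# disabled rule
#alert tcp any any -> any any (msg:"OLD"; sid:1;)
alert dns any any -> any any (msg:"SURICATA DNS malformed request"; sid:2240001; rev:2;)
'''
print(parse_rules(sample))  # [(2240001, 'SURICATA DNS malformed request')]
```

Pointing something like this at the files under /var/lib/suricata/rules makes it easy to confirm which signatures are actually active.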
Update the rule set:
pip install --upgrade suricata-update
suricata-update
Test the configuration; if no errors are reported it succeeded:
/usr/bin/suricata -T
Start normally (with --prefix=/usr the binary lands in /usr/bin):
/usr/bin/suricata -c /etc/suricata/suricata.yaml -i p2p1 --init-errors-fatal
It can also be run under the supervisord daemon:
vim /etc/supervisord.d/suricata.conf
[program:suricata]
directory=/usr/bin
command=suricata -c /etc/suricata/suricata.yaml -i p2p1 --init-errors-fatal
autostart=true
autorestart=false
#stderr_logfile=/tmp/test_stderr.log
#stdout_logfile=/tmp/test_stdout.log
user=root
Deployment target: 192.168.1.101
About:
LuaJIT is an interpreter for Lua code written in C; it aims to preserve the essence of Lua — lightweight, efficient, and extensible.
Install:
wget http://luajit.org/download/LuaJIT-2.0.5.tar.gz
tar -zxf LuaJIT-2.0.5.tar.gz
cd LuaJIT-2.0.5/
sudo make && make install
Edit the loader configuration:
vim /etc/ld.so.conf
# Add this path, then save and exit
/usr/local/lib
Reload the linker cache:
sudo ldconfig
Deployment target: 192.168.1.101
About:
Hyperscan is a high-performance multiple regular expression matching library. In Suricata it can perform multi-pattern matching. Hyperscan is designed for deployment in scenarios such as DPI/IPS/IDS/FW and is already in production use in many network security products worldwide.
Using Hyperscan as Suricata's MPM (multi-pattern matcher, the mpm-algo setting) can greatly improve performance, especially fast pattern matching. Hyperscan also takes depth and offset into account during fast pattern matching.
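To make the MPM idea concrete: the matcher scans each payload once for every rule's "fast pattern" and only the rules whose fast pattern appears need full evaluation. Hyperscan does this with compiled automata; this naive Python sketch (with made-up rule IDs and patterns) just illustrates the prefilter concept:

```python
# Hypothetical fast patterns keyed by rule id — illustration only
rules = {
    1001: b"cmd.exe",
    1002: b"/etc/passwd",
    1003: b"SELECT * FROM",
}

def prefilter(payload, fast_patterns):
    """Return the ids of rules whose fast pattern occurs in the payload.

    A real MPM (Hyperscan, Aho-Corasick) matches all patterns in a
    single pass; this naive loop only shows the filtering semantics.
    """
    return [rid for rid, pat in fast_patterns.items() if pat in payload]

payload = b"GET /../../etc/passwd HTTP/1.1"
print(prefilter(payload, rules))  # [1002]
```

Only rule 1002 would then go through full signature evaluation, which is where the performance win comes from.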
Install the dependencies:
yum install cmake ragel libtool python-devel GeoIP-devel
yum install boost boost-devel boost-doc
yum install libquadmath libquadmath-devel bzip2-devel
Install:
wget http://downloads.sourceforge.net/project/boost/boost/1.66.0/boost_1_66_0.tar.gz
tar xvzf boost_1_66_0.tar.gz
cd boost_1_66_0/
./bootstrap.sh --prefix=/home/suricata/boost-1.66
./b2 install
# stay in this directory
git clone https://github.com/intel/hyperscan.git
cd hyperscan
cmake -DBUILD_STATIC_AND_SHARED=1 -DBOOST_ROOT=/home/suricata/boost-1.66
make
make install
Edit the loader configuration:
vim /etc/ld.so.conf
# Add this path, then save and exit
/usr/local/lib64
Reload the linker cache:
sudo ldconfig
The resulting file structure:
Deployment targets: 192.168.1.101, 192.168.1.102, 192.168.1.103
About:
Why an ES cluster needs at least 3 nodes:
Elasticsearch (ES) is a real-time distributed search and analytics engine for full-text search, structured search, and analytics. It is built on the full-text search engine library Apache Lucene and written in Java.
Base configuration, identical on all three machines:
vim /etc/security/limits.conf
# Don't skimp on these values — if startup fails you'll just have to come back and raise them
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
vim /etc/sysctl.conf
vm.max_map_count=655360
Install Java, identical on all three machines:
tar zxf jdk-8u271-linux-x64.tar.gz
mv jdk1.8.0_271/ /usr/local/java
vim /etc/profile
export JAVA_HOME=/usr/local/java
export JRE_HOME=/usr/local/java/jre
export PATH=$PATH:/usr/local/java/bin
export CLASSPATH=./:/usr/local/java/lib:/usr/local/java/jre/lib
# Apply the environment variables
source !$
java -version
# It's best to symlink java straight into /bin; otherwise later logstash errors are very hard to trace
which java
ln -s /usr/local/java/bin/* /bin
Install elasticsearch, identical on all three machines:
tar zxf elasticsearch-7.5.1.tar.gz
mv elasticsearch-7.5.1 /usr/local/elasticsearch
mkdir /usr/local/elasticsearch/data
chown -R admin:admin /usr/local/elasticsearch
Edit the configuration on 192.168.1.101:
vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: ELK                          # cluster name; must match on every node in the cluster
node.name: es-1                            # node name; arbitrary
transport.tcp.compress: true
path.data: /usr/local/elasticsearch/data   # data path
path.logs: /usr/local/elasticsearch/logs   # log path
network.host: 192.168.1.101                # listen address
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
cluster.initial_master_nodes: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
network.publish_host: 192.168.1.101
node.master: true                          # eligible to become master
node.data: true                            # eligible to hold data
#xpack.security.enabled: true              # better left unset; enabling it causes a lot of trouble
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000
Edit the configuration on 192.168.1.102:
vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: ELK                          # cluster name; must match on every node in the cluster
node.name: es-2                            # node name; arbitrary
transport.tcp.compress: true
path.data: /usr/local/elasticsearch/data   # data path
path.logs: /usr/local/elasticsearch/logs   # log path
network.host: 192.168.1.102                # listen address
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
cluster.initial_master_nodes: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
network.publish_host: 192.168.1.102
node.master: true                          # eligible to become master
node.data: true                            # eligible to hold data
#xpack.security.enabled: true              # better left unset; enabling it causes a lot of trouble
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000
Edit the configuration on 192.168.1.103:
vim /usr/local/elasticsearch/config/elasticsearch.yml
cluster.name: ELK                          # cluster name; must match on every node in the cluster
node.name: es-3                            # node name; arbitrary
transport.tcp.compress: true
path.data: /usr/local/elasticsearch/data   # data path
path.logs: /usr/local/elasticsearch/logs   # log path
network.host: 192.168.1.103                # listen address
http.port: 9200
transport.tcp.port: 9300
discovery.seed_hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
cluster.initial_master_nodes: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
network.publish_host: 192.168.1.103
node.master: true                          # eligible to become master
node.data: true                            # eligible to hold data
#xpack.security.enabled: true              # better left unset; enabling it causes a lot of trouble
http.cors.enabled: true
http.cors.allow-origin: "*"
indices.query.bool.max_clause_count: 8192
search.max_buckets: 100000
Unless a machine is heavily loaded, make every node both master-eligible and a data node.
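The "at least 3 nodes" advice above comes from majority-based master election: a master can only be elected when a strict majority of master-eligible nodes agree. A tiny sketch of the quorum arithmetic shows why two nodes buy no fault tolerance:

```python
def quorum(master_eligible_nodes):
    """Minimum number of master-eligible nodes that must agree
    to elect a master: a strict majority."""
    return master_eligible_nodes // 2 + 1

# With 2 nodes the quorum is 2, so losing either node halts the cluster;
# with 3 nodes the quorum is still 2, so the cluster survives one failure.
for n in (1, 2, 3):
    print(n, "nodes -> quorum", quorum(n))
```

This is why the three-node layout above keeps all three machines master-eligible.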
Adjust log rotation, identical on all three machines:
vim /usr/local/elasticsearch/config/log4j2.properties
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
# keep only 7 days of logs
appender.rolling.strategy.action.condition.nested_condition.age = 7D
Start the elasticsearch cluster, identical on all three machines:
cd /usr/local/elasticsearch/bin
# start in the background
./elasticsearch -d
# start in the foreground, mainly for debugging
./elasticsearch
Check the cluster health:
# curl '192.168.1.101:9200/_cluster/health?pretty'
# curl '192.168.1.102:9200/_cluster/health?pretty'
# curl '192.168.1.103:9200/_cluster/health?pretty'
{
  "cluster_name" : "ELK",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
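For periodic monitoring, the health response can be reduced to the two things that matter here: the status color and whether all three nodes are present. A minimal sketch, assuming the JSON shape shown above (the helper name is my own):

```python
import json

def summarize_health(body, expected_nodes=3):
    """Reduce a _cluster/health response to (status, all_nodes_present)."""
    doc = json.loads(body)
    return doc["status"], doc["number_of_nodes"] == expected_nodes

# Abbreviated response like the one returned by the curl calls above
body = '{"cluster_name":"ELK","status":"green","number_of_nodes":3,"number_of_data_nodes":3}'
print(summarize_health(body))  # ('green', True)
```

Anything other than ('green', True) on this cluster is worth investigating — "yellow" means unassigned replicas, "red" means missing primary shards.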
Run the following as root, identical on all three machines; otherwise errors appear after the cluster has been running for a while:
# substitute your own IP
curl -XPUT -H 'Content-Type: application/json' http://192.168.1.101:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
Deployment target: 192.168.1.101
head gives a very direct view of ES status, but beyond that it is of little use; installing it is optional.
Install node.js:
The head plugin is written in node.js, so it needs this runtime.
tar -Jxf node-v14.15.4-linux-x64.tar.xz
mv node-v14.15.4-linux-x64/ /usr/local/node
vim /etc/profile
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
export NODE_PATH=$NODE_HOME/lib/node_modules:$PATH
source !$
node -v
Install the head plugin:
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
unzip master.zip
mv elasticsearch-head-master/ /usr/local/elasticsearch-head
cd /usr/local/elasticsearch-head
npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install -g grunt-cli
cnpm install -g grunt
cnpm install grunt-contrib-clean
cnpm install grunt-contrib-concat
cnpm install grunt-contrib-watch
cnpm install grunt-contrib-connect
cnpm install grunt-contrib-copy
cnpm install grunt-contrib-jasmine   # if this errors, just run it again
vim /usr/local/elasticsearch-head/Gruntfile.js
connect: {
  server: {
    options: {
      hostname: '0.0.0.0',   // add just this line — don't forget the trailing comma
      port: 9100,
      base: '.',
      keepalive: true
    }
  }
}
Start in the background:
cd /usr/local/elasticsearch-head
nohup grunt server &
eval "cd /usr/local/elasticsearch-head/ ; nohup npm run start >/dev/null 2>&1 & "
The web UI is then at: http://192.168.1.101:9100/
Deployment target: 192.168.1.101
Install logstash:
Logstash troubleshooting:
synesis_lite_suricata project:
About:
Logstash is a data collection engine with real-time pipelining capability. It collects data (e.g. by reading text files), parses and filters it, and ships it to ES.
Because we need to load data templates, it is best to install via yum.
Install logstash via yum:
vim /etc/yum.repos.d/logstash.repo
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Import the signing key, or the download will fail:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install logstash-7.5.1
Install x-pack:
./logstash-plugin install x-pack
Edit the configuration files:
vim /etc/logstash/jvm.options
-Xms4g
-Xmx4g
vim /etc/logstash/log4j2.properties
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:ls.logs}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:ls.logs}/logstash-${sys:ls.log.format}
appender.rolling.strategy.action.condition.nested_condition.type = IfLastModified
appender.rolling.strategy.action.condition.nested_condition.age = 7D
vim /etc/logstash/logstash.yml
http.host: "172.16.10.248"
http.port: 9600
path.data: /usr/share/logstash/data2
path.logs: /usr/share/logstash/logs
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "172.16.10.248:9200","10.10.11.33:9200","192.168.150.134:9200" ]
vim /etc/logstash/pipelines.yml
This step uses the synesis_lite_suricata template configuration, which is fairly expensive and hurts data freshness. If real-time data matters more to you, point the directory at your own configuration file instead.
- pipeline.id: synlite_suricata
  path.config: "/etc/logstash/synlite_suricata/conf.d/*.conf"
# or, with your own configuration file, for example:
- pipeline.id: synlite_suricata
  path.config: "/etc/logstash/logstash.conf"
Import the synesis_lite_suricata data template:
Project home page:
In practice, do NOT run the following step — it fails with errors whose cause cannot be traced; it cost me an entire weekend:
cd /usr/share/logstash
./logstash-plugin update logstash-filter-dns
Move the synlite_suricata directory from synesis_lite_suricata into the logstash configuration directory:
mv synesis_lite_suricata/logstash/synlite_suricata /etc/logstash
The directory structure at this point (logstash.conf is unused — it is only there for my testing; of course, it becomes relevant if you use a custom configuration file):
vim logstash.service.d/synlite_suricata.conf
[Service]
# Synesis Lite for Suricata global configuration
Environment="SYNLITE_SURICATA_DICT_PATH=/etc/logstash/synlite_suricata/dictionaries"
Environment="SYNLITE_SURICATA_TEMPLATE_PATH=/etc/logstash/synlite_suricata/templates"
Environment="SYNLITE_SURICATA_GEOIP_DB_PATH=/etc/logstash/synlite_suricata/geoipdbs"
Environment="SYNLITE_SURICATA_GEOIP_CACHE_SIZE=8192"
Environment="SYNLITE_SURICATA_GEOIP_LOOKUP=true"
Environment="SYNLITE_SURICATA_ASN_LOOKUP=true"
Environment="SYNLITE_SURICATA_CLEANUP_SIGS=false"
# Name resolution option
Environment="SYNLITE_SURICATA_RESOLVE_IP2HOST=false"
Environment="SYNLITE_SURICATA_NAMESERVER=127.0.0.1"
Environment="SYNLITE_SURICATA_DNS_HIT_CACHE_SIZE=25000"
Environment="SYNLITE_SURICATA_DNS_HIT_CACHE_TTL=900"
Environment="SYNLITE_SURICATA_DNS_FAILED_CACHE_SIZE=75000"
Environment="SYNLITE_SURICATA_DNS_FAILED_CACHE_TTL=3600"
# Elasticsearch connection settings
Environment="SYNLITE_SURICATA_ES_HOST=[192.168.1.101:9200, 192.168.1.102:9200, 192.168.1.103:9200]"
# With open-source ES the username and password below are ignored automatically
Environment="SYNLITE_SURICATA_ES_USER=elastic"
Environment="SYNLITE_SURICATA_ES_PASSWD=changeme"
# Beats input
Environment="SYNLITE_SURICATA_BEATS_HOST=172.16.10.248"
Environment="SYNLITE_SURICATA_BEATS_PORT=5044"
Move synlite_suricata.conf into place:
mv logstash.service.d/synlite_suricata.conf /etc/systemd/system/logstash.service.d/synlite_suricata.conf
# reload systemd units
systemctl daemon-reload
# start logstash
systemctl start logstash
Validate a configuration file (this is what I used for testing):
./logstash --path.settings /etc/logstash/ -f /etc/logstash/logstash.conf --config.test_and_exit
Deployment target: 192.168.1.101
Download and install:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.5.1-linux-x86_64.tar.gz
tar -zxvf filebeat-7.5.1-linux-x86_64.tar.gz
mv filebeat-7.5.1-linux-x86_64 filebeat
cd filebeat
Edit the configuration file:
vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  # must match the log path configured in /etc/suricata/suricata.yaml
  paths:
    - /home/suricata/log/suricata/eve.json
  fields:
    event.type: suricata
  json.keys_under_root: true
  json.overwrite_keys: true
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
  host: ["192.168.1.101:5601"]
output.logstash:
  hosts: ["192.168.1.101:5044"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
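Each line Filebeat ships from eve.json is a standalone JSON event with an event_type field. As a sketch of what the pipeline receives — useful for tallying noisy signatures offline before they ever reach ES — here is a small alert counter (the sample events are illustrative, not real captures):

```python
import json
from collections import Counter

def count_alerts(lines):
    """Tally alert events by signature from eve.json-style lines
    (one JSON object per line, as Filebeat ships them)."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        if event.get("event_type") == "alert":
            counts[event["alert"]["signature"]] += 1
    return counts

sample = [
    '{"event_type":"alert","alert":{"signature":"SURICATA STREAM bad window update"}}',
    '{"event_type":"dns","dns":{"type":"query"}}',
    '{"event_type":"alert","alert":{"signature":"SURICATA STREAM bad window update"}}',
]
print(count_alerts(sample))
```

Running something like this against /home/suricata/log/suricata/eve.json quickly reveals which signatures dominate and should perhaps be disabled.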
Run:
./filebeat -c ./filebeat.yml
It can also be run under the supervisord daemon:
vim /etc/supervisord.d/filebeat.conf
[program:filebeat]
directory=/home/suricata/filebeat
command=/home/suricata/filebeat/filebeat -e -c /home/suricata/filebeat/filebeat.yml
autostart=true
autorestart=false
stderr_logfile=/tmp/test_stderr.log
stdout_logfile=/tmp/test_stdout.log
user=root
Deployment target: 192.168.1.101
Kibana provides an analytics and visualization web platform for Elasticsearch. It searches and interacts with the data in Elasticsearch indices and produces tables and charts across many dimensions.
Install kibana via yum:
vim /etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Import the signing key, or the download will fail:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum install kibana-7.5.1
Edit the configuration file:
vi /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.1.101:9200","http://192.168.1.102:9200","http://192.168.1.103:9200"]
logging.dest: /usr/share/kibana/logs/kibana.log
kibana.index: ".kibana"
i18n.locale: "zh-CN"
Start kibana:
systemctl start kibana
The web UI is then at: http://192.168.1.101:5601/
Import the dashboard template file: /kibana/synlite_suricata.kibana.7.1.x.json
Import location:
Create the index patterns — at least these two: suricata-* and suricata_stats-*.
Data display
Deleting junk data
Once Suricata is running on an internal network, it generates a flood of alerts in no time, so the rules need tuning: rules we don't care about can be disabled, after which no new alerts are generated for them. But how do we delete the alerts for such a rule that already exist in ES? They can be deleted directly from Kibana, using the Dev Tools panel.
Deletion when the alert volume is small:
POST logstash-suricata_log-*/_delete_by_query
{
  "query": {
    "match": {
      "alert.signature": "SURICATA STREAM 3way handshake wrong seq wrong ack"
    }
  }
}
With a large alert volume that call times out; delete asynchronously instead:
POST logstash-suricata_log-*/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match": {
      "alert.signature": "SURICATA STREAM bad window update"
    }
  }
}
On success this returns a task id; check whether the purge has finished:
GET _tasks/NQtjLxAaTiig6ZDZ3nK-cw:126846320
When the deletion is complete, the response shows "completed": true.
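The two _delete_by_query calls above differ only in the wait_for_completion flag; if you script these cleanups, a small helper can build the request path and body (the helper name is my own, but the path and body match the Elasticsearch _delete_by_query API as used above):

```python
def delete_by_signature(signature, async_task=False):
    """Build the (path, body) pair for a _delete_by_query request that
    removes all alerts carrying the given signature. For large volumes,
    async_task=True appends wait_for_completion=false so ES returns a
    task id instead of blocking until the purge finishes."""
    path = "logstash-suricata_log-*/_delete_by_query"
    if async_task:
        path += "?wait_for_completion=false"
    body = {"query": {"match": {"alert.signature": signature}}}
    return path, body

path, body = delete_by_signature("SURICATA STREAM bad window update", async_task=True)
print(path)
print(body)
```

The returned pair can be fed to any HTTP client pointed at one of the cluster nodes.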
Deleting index data from ES
This one is very simple, as shown below.