ELK Log Server Installation

  • Introduction
  • Installation & Configuration
  • Alerts
  • Firewall

Introduction

ELK is a collection of open-source software produced by Elastic which allows you to search, analyze, and visualize logs generated from any source in any format, a practice known as centralized logging.

Centralized logging can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place. It is also beneficial because it helps you to recognize problems that span several servers over a single time frame by correlating their logs.

The Elastic Stack has four main components:

  • Elasticsearch
  • Logstash
  • Kibana
  • Beats

Installation & Configuration

Step 1 — Installing and Configuring Elasticsearch

  • wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
  • echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
  • sudo apt update
  • sudo apt install elasticsearch
  • sudo nano /etc/elasticsearch/elasticsearch.yml
  • Find the line that specifies network.host, uncomment it, and replace its value with localhost
  • sudo systemctl start elasticsearch
  • sudo systemctl enable elasticsearch
  • curl -X GET "localhost:9200"
  • This verifies that Elasticsearch was installed correctly. If the curl command returns basic information about your local node, Elasticsearch is up and running.
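If the node is healthy, the curl command returns JSON similar to the following. The node name, UUID, and version values below are illustrative placeholders, not values you should expect verbatim:

```
{
  "name" : "elk-server",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "...",
  "version" : {
    "number" : "7.17.0",
    ...
  },
  "tagline" : "You Know, for Search"
}
```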

Step 2 — Installing and Configuring the Kibana Dashboard

  • sudo apt install kibana
  • sudo systemctl enable kibana
  • sudo systemctl start kibana
  • echo "kibanaadmin:$(openssl passwd -apr1)" | sudo tee -a /etc/nginx/htpasswd.users
  • sudo nano /etc/nginx/sites-available/example.com
  • Delete all the existing content in the file before adding the following:

server {
    listen 80;

    server_name my_example_elk.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

  • sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
  • sudo nginx -t
  • sudo systemctl restart nginx
  • sudo ufw allow 'Nginx Full'
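The kibanaadmin credential line written to htpasswd.users can also be built step by step; a minimal sketch, assuming openssl is installed and using a placeholder password:

```shell
# Hash a placeholder password with the Apache apr1 (MD5) scheme that nginx's
# auth_basic understands, then assemble the "user:hash" line for htpasswd.users.
HASH=$(openssl passwd -apr1 'changeme')
echo "kibanaadmin:$HASH"
# The printed line is what gets appended to /etc/nginx/htpasswd.users.
```

Running `openssl passwd -apr1` without a password argument prompts for it interactively instead, which avoids leaving the password in your shell history.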

Step 3 — Installing and Configuring Logstash

  • sudo apt install logstash
  • sudo nano /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044
  }
}
  • sudo nano /etc/logstash/conf.d/10-syslog-filter.conf


filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE"=> "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
  • sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
  • sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
  • sudo systemctl start logstash
  • sudo systemctl enable logstash
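The index option in the Elasticsearch output above yields one index per Beat per day. A minimal sketch of how the name is assembled, with the beat name and version as placeholder values:

```shell
# Logstash fills in the beat name and version from event metadata plus the
# event date; the same name can be reproduced with plain shell variables.
BEAT="filebeat"      # corresponds to %{[@metadata][beat]}    - placeholder
VERSION="7.17.0"     # corresponds to %{[@metadata][version]} - placeholder
echo "${BEAT}-${VERSION}-$(date +%Y.%m.%d)"
```

Per-day indices keep each index small and make it easy to delete or archive old log data by date.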

Step 4 — Installing and Configuring Filebeat

  • sudo apt install filebeat
  • sudo nano /etc/filebeat/filebeat.yml


#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]


  • sudo filebeat modules enable system
  • sudo filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
  • sudo filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601
  • sudo systemctl start filebeat
  • sudo systemctl enable filebeat

Send Windows Logs to Elastic Stack Using Winlogbeat and Sysmon

  • cd 'C:\Program Files\Winlogbeat'
  • .\install-service-winlogbeat.ps1
  • PowerShell.exe -ExecutionPolicy UnRestricted -File .\install-service-winlogbeat.ps1
  • cd C:\Program Files\Sysmon
  • sysmon -i -accepteula -h md5,sha256,imphash -l -n

The main configuration file for Winlogbeat is C:\Program Files\Winlogbeat\winlogbeat.yml with the reference config file being C:\Program Files\Winlogbeat\winlogbeat.reference.yml.

To edit this file, you can use Notepad++.

By default, Winlogbeat is set to monitor application, security, and system logs, and logs from Sysmon.

...
winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h

  - name: System

  - name: Security
    processors:
      - script:
          lang: javascript
          id: security
          file: ${path.home}/module/security/config/winlogbeat-security.js

  - name: Microsoft-Windows-Sysmon/Operational
    processors:
      - script:
          lang: javascript
          id: sysmon
          file: ${path.home}/module/sysmon/config/winlogbeat-sysmon.js
...

If you need to see more event types, you can execute the command Get-EventLog * in PowerShell.

Under the general settings, we are going to set up the optional name of the Beat and the tags associated with the events.

...
#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: winlogbeat

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
tags: ["windows_systems"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
...

Next, set up the Winlogbeat output. In this demo, we are sending the logs directly to Elasticsearch nodes.

...
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
  hosts: ["your-ip-address:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"


#----------------------------- Logstash output --------------------------------

# Alternatively, ship to Logstash instead of Elasticsearch (only one output
# may be enabled at a time):
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  #hosts: ["192.168.43.104:5044"]

...

If Elasticsearch and Kibana are not running on the same host and you want to use the Kibana Winlogbeat dashboards, you can specify the Kibana URL. Kibana must be reachable on a non-loopback address. For example:

...
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  host: "192.168.43.104:5601"
...
  • cd 'C:\Program Files\Winlogbeat'
  • .\winlogbeat.exe test config -c .\winlogbeat.yml -e
  • If there are no errors in the configuration, the output ends with "Config OK".

Alerts

Alerts can be created using the Kibana user interface. There are three main sections to configure:

  • General alert details
  • Alert type and conditions
  • Action type and action details

General alert details

All alerts share the following four properties:

  • Name: The name of the alert. While this name does not have to be unique, the name can be referenced in actions and also appears in the searchable alert listing in the management UI. A distinctive name can help identify and find an alert.
  • Tags: A list of tag names that can be applied to an alert. Tags can help you organize and find alerts, because tags appear in the alert listing in the management UI which is searchable by tag.
  • Check every: This value determines how frequently the alert conditions below are checked.
  • Notify every: This value limits how often actions are repeated when an alert instance remains active across alert checks.

Firewall

The Uncomplicated Firewall (ufw) is a frontend for iptables and is especially suitable for host-based firewalls. It provides a command-line interface for controlling the firewall on Linux systems. To allow the ELK server to collect logs while the firewall is active, the ports it uses must be added as allowed communication ports in the firewall rules.

  • sudo ufw default deny incoming
  • sudo ufw default allow outgoing

These commands set incoming rejection defaults and allow outgoing connections. For a personal computer, these firewall defaults alone might suffice, but servers usually need to respond to incoming requests from outside users.

  • sudo ufw allow ssh
  • Allow rules must also be added for the ELK-specific ports used for log collection and connections. These ports depend on how the stack was configured and can be changed by the person doing the configuration.
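The allow rules described above can be sketched as follows, assuming the default ports used in this guide (5044 for the Beats input, 5601 for Kibana, 9200 for Elasticsearch); adjust them to match your own configuration:

```shell
# Allow the ELK ports used in this guide; run with root privileges.
# Where possible, restrict 9200 and 5601 to trusted source addresses,
# e.g. "sudo ufw allow from 192.168.43.0/24 to any port 9200".
sudo ufw allow 5044/tcp    # Logstash Beats input
sudo ufw allow 5601/tcp    # Kibana (only if accessed directly, not via nginx)
sudo ufw allow 9200/tcp    # Elasticsearch HTTP (keep internal if you can)
sudo ufw enable
```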
