



Logstash


Logstash is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.

Overview

  • Slides from TechTalk on ELK by Bryan Davis
  • Wikipedia request flow
  • Slides from TechTalk on Kibana4 by Bryan Davis

Various Wikimedia applications send log events to Logstash, which gathers the messages, converts them into JSON documents, and stores them in an OpenSearch cluster. Wikimedia uses OpenSearch Dashboards as a front-end client to filter and display messages from the OpenSearch cluster. These are the core components of our ELK stack, but since we use additional components as well, we refer to our stack as "ELK+".

(OpenSearch and OpenSearch Dashboards were forked from Elasticsearch and Kibana when those became non-free; the stack is still called "ELK" after its original components.)

OpenSearch

OpenSearch is a distributed, multi-node search and analytics engine built on Lucene.

Logstash

Logstash is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins, including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data, it can be processed by any number of filters which modify and annotate the event data. Finally, Logstash routes events to output plugins which can forward the events to a variety of external programs, local files, and several message bus implementations.
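
To make the input → filter → output model concrete, here is a minimal, hypothetical sketch of a Logstash pipeline configuration; the Kafka broker, topic, field names, and index pattern are invented and do not reflect our production setup:

input {
  # Consume JSON log events from a hypothetical Kafka topic.
  kafka {
    bootstrap_servers => "kafka-logging.example.wmnet:9092"
    topics => ["rsyslog-example"]
    codec => "json"
  }
}

filter {
  # Trivial enrichment: annotate every event with a static field.
  mutate {
    add_field => { "pipeline" => "example" }
  }
}

output {
  # Index into OpenSearch, one index per day.
  opensearch {
    hosts => ["https://localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}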

OpenSearch Dashboards

OpenSearch Dashboards is a browser-based analytics and search interface for OpenSearch.

Kafka

Apache Kafka is a distributed streaming system. In our ELK stack, Kafka buffers the stream of log messages produced by rsyslog (on behalf of applications) for consumption by Logstash. Nothing should output logs to Logstash directly; logs should always be sent by way of Kafka.

Rsyslog

Rsyslog is the "rocket-fast system for log processing". In our ELK stack, rsyslog serves as the per-host "log agent". Rsyslog ingests log messages in various formats and over various protocols, normalizes them, and outputs them to Kafka.
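
As a rough, hypothetical sketch of that role using rsyslog's omkafka output module (the file name, broker, topic, and template below are invented, not our production configuration):

# Hypothetical /etc/rsyslog.d/30-kafka-example.conf -- not our production configuration
module(load="omkafka")

# Broker address, topic, and template name below are invented placeholders.
action(type="omkafka"
       broker=["kafka-logging.example.wmnet:9092"]
       topic="rsyslog-example"
       template="RSYSLOG_SyslogProtocol23Format")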

OpenSearch quick intro

To learn how to use the dashboards at logstash.wikimedia.org, refer to OpenSearch Dashboards.

That's also where you can find how to access the Beta Cluster Logstash: OpenSearch Dashboards#Beta Cluster Logstash.

Systems feeding into logstash

See 2015-08 Tech talk slides

Writing new filters is easy.

Supported log shipping protocols & formats ("interfaces")

Support for logs shipped directly from applications to Logstash has been deprecated.

Please see Logstash/Interface for details regarding long-term supported log shipping interfaces.

Kubernetes

Kubernetes-hosted services are taken care of directly by the Kubernetes infrastructure, which ships their logs via rsyslog into the Logstash pipeline. All a Kubernetes service needs to do is log in a JSON structured format (e.g. bunyan for Node.js services) to standard output/standard error. Note that some field names might end up being a bit tricky; see Logstash/Common Logging Schema for an effort to standardize them.
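
For illustration only, a structured log line written to standard output might look roughly like this (all field names and values are invented; see the Common Logging Schema page for the actual conventions):

{"@timestamp": "2023-10-01T12:00:00.000Z", "level": "ERROR", "service": "example-service", "message": "upstream request timed out", "request_id": "abc123"}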

Systems not feeding into logstash

  • EventLogging (of program-defined events with schemas), despite its name, uses a different pipeline.
  • Varnish logs: the billions of pageviews of WMF wikis would require a lot more hardware, so we use Kafka to feed web requests into Hadoop instead. A notable exception to this rule: Varnish user-facing errors (HTTP status 500-599) are sent to Logstash to make debugging easier.
  • While most MediaWiki logs go to Logstash (and mwlog files), a few channels go exclusively to mwlog files. You can check which ones via $wmgMonologChannels in InitialiseSettings.php (a hypothetical sketch of its shape follows below).
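
A hypothetical sketch of that structure, with invented channel names (check InitialiseSettings.php itself for the real entries):

'wmgMonologChannels' => [
	'default' => [
		// Hypothetical channel logged to Logstash at the given level.
		'example-channel' => 'debug',
		// Hypothetical channel with Logstash disabled: it only reaches the mwlog files.
		'example-mwlog-only' => [ 'logstash' => false ],
	],
],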

Writing & testing filters

When writing new Logstash filters, take a look at what already exists in Puppet. Each filter must be tested to avoid regressions; we use logstash-filter-verifier, and existing tests can be found in the tests/ directory. To write or run tests you will need logstash-filter-verifier and Logstash installed locally, or you can use docker/podman and the puppet repository:

# From the base dir of operations/puppet.
$ cd modules/profile/files/logstash/

# The Makefile recognizes if one of podman or docker is installed
# and then it uses it.
$ make test-local
/usr/bin/docker run --rm --workdir /src -v $(pwd):/src:Z -v $(pwd)/templates:/etc/logstash/templates:Z -v $(pwd)/filter_scripts:/etc/logstash/filter_scripts:Z --entrypoint make docker-registry.wikimedia.org/releng/logstash-filter-verifier:latest
logstash-filter-verifier --diff-command="diff -u --color=always" --sockets tests/ filters/*.conf
Use Unix domain sockets.
[...cut...]

Each filter has a corresponding test named after it in tests/. Within the test file, the fields map lists the fields common to all testcases; these are used to trigger a specific filter's "if" conditions. The ignore key usually contains only @timestamp, since that field is bound to change across invocations and can be safely ignored. The remainder of a test file is a list of testcases in the form of input/expected pairs. For "input" it is recommended to use a YAML folded block scalar (>) to include verbatim JSON, whereas "expected" is usually plain YAML, although it can also be verbatim JSON if more convenient.
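
A minimal sketch of what such a test file might look like; the filter name, fields, and messages below are invented:

# tests/filter-example.yaml -- hypothetical test for a hypothetical filter-example.conf
fields:
  # Common fields, chosen to match the filter's "if" conditions.
  program: "example-program"
ignore:
  - "@timestamp"
testcases:
  - input:
      - >
        { "message": "something broke", "host": "example1001" }
    expected:
      - message: "something broke"
        host: "example1001"
        level: "ERROR"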

Getting logs from misc systems into logstash

Please see Logstash/Interface#Tailing Log Files.

Production Logstash Architecture

As of FY2019 Logstash infrastructure is owned by SRE. See also Logstash/SRE onboard for more information on how to migrate services/applications.

Architecture Diagram



Web interface

https://logstash.wikimedia.org

Authentication

Log into https://logstash.wikimedia.org/ with your Wikimedia Developer account (sometimes known as your "LDAP" account).

Access to Logstash is restricted to users who are members of one of these LDAP groups:

  • wmf (for most Foundation staff),
  • nda (for volunteers),
  • ops (for SRE).

Configuration

The cluster contains the following types of nodes, configured by Puppet:

  • role::logging::opensearch::collector manages the Logstash "collector" instances. These run Logstash, an OpenSearch indexing node, and an Apache vhost serving OpenSearch Dashboards. The Apache vhosts perform LDAP-based authentication to restrict access to the potentially sensitive log information.
  • role::logging::opensearch::data configures an OpenSearch data node providing storage for log data.
  • role::kafka::logging configures a Kafka broker for producers to publish log data to and for Logstash to consume from. This is a buffering layer to absorb log spikes and queue log events when maintenance is being performed on the logging cluster.

Hostnames

In October 2023, the Observability team decided to rename the Logstash cluster hosts to better align hostnames with the role of each host. The previous naming convention assigned every node the hostname logstash, regardless of role or configuration, so operators had to know each host's role, function, and class from its numeric ID (1001, etc.) alone.

The criteria for the new naming convention included:

  • a concise name that can be expanded as needed
  • a distinction between stateful and stateless components
  • no reference to the underlying technology deployed
  • an incremental improvement over role assignment based on node ID

The October 2023 logging cluster naming convention:

  • logging-hd - OpenSearch data node: HDD Class
  • logging-sd - OpenSearch data node: SSD Class
  • logging-fe - OpenSearch API (logs-api), OpenSearch Dashboards (kibana7), Logstash Collector node

Load Balancing and TLS

The "misc" Varnish cluster is being used to provide ssl termination and load balancing support for the Kibana application.

Common Logging Schema

See: Logstash/Common Logging Schema.

API

The OpenSearch (Elasticsearch-compatible) API is accessible at https://logs-api.svc.eqiad.wmnet or by SSH-tunneling port 9200 from an OpenSearch node.

Note: The _search endpoint can only be used without a request body (see task T174960). Use _msearch instead for complex queries that need a request body.
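
For example, a query that needs a request body can be sent through _msearch, which takes newline-delimited JSON (a header line naming the index pattern followed by the query body). The index pattern, host, and query below are only illustrative:

 curl -s 'https://logs-api.svc.eqiad.wmnet/_msearch' \
   -H 'Content-Type: application/x-ndjson' \
   --data-binary $'{"index": "logstash-*"}\n{"query": {"query_string": {"query": "host:example1001 AND level:ERROR"}}, "size": 10}\n'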

Extract data from Logstash (OpenSearch) with curl and jq

logstash-server:~$ cat search.sh
curl -XGET 'localhost:9200/_search?pretty&size=10000' -H 'Content-Type: application/json' -d '
{
    "query": {
        "query_string" : {
            "query" : "facility:19,local3 AND host:csw2-esams AND @timestamp:[2019-08-04T03:00 TO 2019-08-04T03:15] NOT program:mgd"
        }
    },
    "sort": ["@timestamp"]
} '
logstash-server:~$ bash search.sh | jq '.hits.hits[]._source | {timestamp,host,level,message}' | head -20
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (newsyslog)"
}
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (   /usr/libexec/atrun)"
}
{
  "timestamp": "2019-08-04T03:01:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (adjkerntz -a)"
}
$ bash search.sh | jq -r '.hits.hits[]._source | {timestamp,host,level,program,message} | map(.) | @csv' > asw2-d2-eqiad-crash.csv

Plugins

Logstash plugins are fetched and compiled into a Debian package for distribution and installation on Logstash servers.

The plugin git repository is located at https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/logstash/plugins

Plugin build process

The build can be run on the production builder host. See README for up-to-date build steps.

Deployment
  • Add package to reprepro and install on the host normally.
Package installation will not restart Logstash. This must be done manually in a rolling fashion, and it is strongly suggested to do so in lockstep with the plugin deployment.

Gotchas

GELF transport

Make sure logging events sent to the GELF input don't have a "type" or "_type" field set, or if set, that it contains the value "gelf". The gelf/Logstash config discards any events that have a different value set for "type" or "_type". The final "type" seen in OpenSearch/Dashboards will be taken from the "facility" element of the original GELF packet. The application sending the log data to Logstash should set "facility" to a reasonably unique value that identifies your application.
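
For illustration, a well-behaved GELF payload might look roughly like this (hostname and facility are invented); note the reasonably unique "facility" value and the absence of a conflicting "type"/"_type" field:

{
  "version": "1.1",
  "host": "example1001",
  "short_message": "something went wrong",
  "timestamp": 1696161600.0,
  "level": 3,
  "facility": "my-application"
}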

Documents

Troubleshooting

Kafka consumer lag

For a host of reasons it might happen that there's a buildup of messages on Kafka. For example:

OpenSearch is refusing to index messages, so Logstash can't consume properly from Kafka.
The usual reason for indexing failures is conflicting fields; see bug T150106 for a detailed discussion of the problem. The solution is to find which programs are generating the conflicts and drop their messages in Logstash accordingly; see also bug T228089.
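
One way to gauge the backlog is to ask Kafka for the consumer group's lag, assuming the stock Kafka CLI is available on a broker; the broker hostname and consumer group name below are hypothetical:

 # The LAG column shows how far behind (in messages) each partition is.
 kafka-consumer-groups.sh --bootstrap-server kafka-logging1001.eqiad.wmnet:9092 \
   --describe --group logstash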

Using the dead letter queue

The Logstash dead letter queue (DLQ) is not normally enabled, but it comes in handy when debugging indexing failures where the problematic log entries don't show up in the Logstash logs.

With Puppet disabled, enable the DLQ in /etc/logstash/logstash.yml:

dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue/"

Then run systemctl restart logstash. The DLQ will start filling up as soon as unindexable logs are received. Later, the DLQ can be dumped with (running as the logstash user):

$ /usr/share/logstash/bin/logstash -e '
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue/" 
    commit_offsets => false 
    pipeline_id => "main" 
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}
' 2>&1 | less

Once debugging is complete, clear the queue with rm /var/lib/logstash/dead_letter_queue/main/*.log, and re-enable Puppet.

Operations

Configuration changes

After merging your configuration change, Puppet will automatically restart Logstash. To force this to run immediately:

 cumin -b1 -s60 'O:logging::opensearch::collector' 'run-puppet-agent -q'

Test a configuration snippet before merge

Copy your ready-to-merge snippet (e.g. modules/profile/files/logstash/filter-syslog-network.conf) to a Logstash host, then run:

 sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f <myfile>

It should return "Configuration OK".

Indexing errors

Have a look at the Dead Letter Queue Dashboard. The original message that caused the error is in the log.original field.

We alert on errors that Logstash gets from OpenSearch whenever there's an "indexing conflict" between fields of the same index (see also bug T236343). The usual cause is two applications sending logs with the same field name but two different types, e.g. response sent as a string in one case but as a nested object in another. Bug T239458 is a good example of this, where different parts of MediaWiki send logs formatted in different ways.
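
For illustration, a conflict of this kind arises when two producers send documents like the following into the same daily index (program names and field values invented):

{"program": "service-a", "response": "HTTP 200 OK"}
{"program": "service-b", "response": {"status": 200, "body_bytes": 5123}}

Whichever shape is indexed first determines the mapping for response in that index; documents with the other shape are then rejected.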

No logs indexed

This alert is based on the rate of incoming logs per second indexed by OpenSearch. During normal operation there is a baseline of ~1k logs/s (July 2020), and anything significantly lower than that is an unexpected condition. Check the Logstash dashboard attached to the alert for signs of root causes. Most likely Logstash has stopped sending logs to OpenSearch.

Drop spammy logs

Occasionally producers will outpace Logstash's ingestion capabilities, most often with what's considered "log spam" (e.g. dumping whole requests/responses in debug logs). In these cases one solution is to drop the offending logs in Logstash (ideally the producer is also fixed so it stops spamming). The simplest such filter is installed before most/all other filters; it matches a few fields and then drops the message:

filter {
  if [program] == "producer" and [nested][field] == "offending value" {
    drop {}
  }
}

See also this Gerrit change for a real-world example.

UDP packet loss

Logstash 5 locks up from time to time, causing UDP packet loss on the host it is running on. The fix in this case is to restart logstash.service on the host in question.

Replace failed disk and rebuild RAID

The storage drives on Logstash data nodes are configured in an mdraid RAID0 array. OpenSearch handles data redundancy, so the rest of the cluster will absorb the impact of the downed node.

Once the disk is replaced, the RAID will have to be rebuilt:

First stop opensearch and disable puppet.

Copy the disk partition layout from a good disk to the new disk:

 sfdisk -d /dev/sdb | sfdisk /dev/sdi

Determine the md device mounted at /srv (/dev/md2 for example) and check mdstat:

 cat /proc/mdstat

Get array information and make a note of the remaining array members; we'll need this information when rebuilding:

 mdadm --query --detail /dev/md2

Stop and remove the RAID0 array:

 mdadm --stop /dev/md2 && mdadm --remove /dev/md2

Remove traces of the previous array from the old partitions:

 mdadm --zero-superblock /dev/sdb4
 mdadm --zero-superblock /dev/sdc4
 # ... etc
 mdadm --zero-superblock /dev/sdh4

Create the new RAID0 array (WARNING: disks may be different):

 mdadm --create --verbose /dev/md/2 --level=0 --raid-devices=8 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4 /dev/sdf4 /dev/sdg4 /dev/sdh4 /dev/sdi4

Make the filesystem:

 mkfs.ext4 /dev/md2

Work around systemd mount management by commenting out the old array mount in fstab and issuing a daemon reload:

 vim /etc/fstab
 systemctl daemon-reload

Add the mount point back in with the new UUID and mount it:

 vim /etc/fstab
 mount /srv

Check that the disk is mounted, then add the new array definition to /etc/mdadm/mdadm.conf and remove the old definition:

 mdadm --detail --scan
 vim /etc/mdadm/mdadm.conf

Update the initramfs:

 update-initramfs -u

Check the other arrays for failed partitions and add the new disk's partitions to them:

 mdadm --manage /dev/md0 --add /dev/sdi2
 mdadm --manage /dev/md1 --add /dev/sdi3

Make the OpenSearch data directory:

 mkdir /srv/opensearch && chown opensearch:opensearch /srv/opensearch

Re-enable and run Puppet. OpenSearch should start up, join the cluster, and immediately start rebalancing shards.

Restore Dashboards from backup

From a single collector node, delete all .kibana indexes and restart opensearch-dashboards. Check that the restart created .kibana_1 and aliased it with .kibana. Fetch and unzip the backup and run:

 BACKUP_FILE=<myfile>.ndjson; curl -s -X POST http://localhost:5601/api/saved_objects/_import?createNewCopies=false -H "osd-xsrf: true" --form file=@$BACKUP_FILE > response.json

Check the response for problems and navigate into OpenSearch Dashboards to ensure expected saved objects are present.

Stats

Documents and bytes counts

The OpenSearch cat API provides a simple way to extract general statistics about log storage, e.g. total logs and bytes (not including replication):

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ {b+=$10; d+=$7} END {print d; print b}'

Or logs per day (change $7 to $10 to get bytes sans replication):

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort

Or logs per month:

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); gsub(/\.[0-9][0-9]$/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort

Data Retention

Logs are retained in Logstash for a maximum of 90 days by default in accordance with our Privacy Policy and Data Retention Guidelines.

Extended Retention

See Logstash/Extended Retention

See also