I knew the garden was growing well — there are tons of bean flowers, cucumber flowers, and tomato flowers. The buckwheat has sprouted, some of the corn is about a foot high. But it’s really cool to see a few veggies forming!
Author: Lisa
Did you know … you can chat with yourself in Teams?
Logstash, JRuby, and Private Temp
There’s a long-standing bug in logstash where the private temp folder created for jruby isn’t cleaned up when the logstash process exits. To avoid filling up the temp disk space, I put together a quick script to check the PID associated with each jruby temp folder, see if it’s an active process, and remove the temp folder if the associated process doesn’t exist.
When a PID has been re-used, we end up with an extra /tmp/jruby-### folder hanging about … but each folder is only 10 meg, so that is not a big deal. The real issue is when we’ve restarted logstash a thousand times and a thousand ten-meg folders are hanging about.
This script can be cron’d to run periodically or it can be run when the logstash service launches.
import subprocess
import re
from shutil import rmtree

# List everything in /tmp and look for leftover jruby-<PID> temp folders
strResult = subprocess.check_output("ls /tmp", shell=True)
for strLine in strResult.decode("utf-8").split('\n'):
    if len(strLine) > 0 and strLine.startswith("jruby-"):
        # Pull the PID out of the folder name
        listSplitFileNames = re.split("jruby-([0-9]*)", strLine)
        if listSplitFileNames[1]:
            try:
                # If the PID is still running, ps/grep succeeds and the folder is left alone
                strCheckPID = subprocess.check_output(f"ps -efww | grep {listSplitFileNames[1]} | grep -v grep", shell=True)
                #print(f"PID check result is {strCheckPID}")
            except subprocess.CalledProcessError:
                # No matching process -- the folder is orphaned, so remove it
                print(f"I am deleting |{strLine}|")
                rmtree(f"/tmp/{strLine}")
Duck Egg Spaetzle
Course: Sides
Difficulty: Easy
Servings: 4
Prep time: 15 minutes
Cook time: 5 minutes
Ingredients
1 cup all purpose flour
1 Tbsp buttermilk
1 tsp salt
2 large duck eggs
Method
- Combine dry ingredients in a bowl and mix.
- Form a depression in the flour, then add the eggs and buttermilk to the hole.
- Mix the egg and buttermilk together, then fold in flour to create a wet dough.
- Drop small pieces into boiling water — when they float to the top, they are done.
Logstash
General Info
Logstash is a pipeline-based data processing service. Data comes into Logstash, is manipulated, and is sent elsewhere. The source is maintained on GitHub by ElasticCo.
Installation
Logstash was downloaded from ElasticCo and installed from a gzipped tar archive to the /opt/elk/logstash folder.
Configuration
The Logstash server is configured using the logstash.yml file.
Logstash uses Log4J 2 for logging. Logging configuration is maintained in the log4j2.properties file.
Logstash is java-based, and the JVM settings are maintained in the jvm.options file – this includes min heap space, garbage collection configuration, JRuby settings, etc.
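As an example, the heap settings in jvm.options are the standard JVM flags – the sizes below are placeholder values, not our production settings:
-Xms2g
-Xmx2g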
Logstash loads the pipelines defined in /opt/elk/logstash/config/pipelines.yml – each pipeline needs an ID and a path to its configuration. The path can be to a config file or to a folder of config files for the pipeline. The number of workers for the pipeline defaults to the number of CPUs, so we normally define a worker count as well – this can be increased as load dictates.
- pipeline.id: LJR
  pipeline.workers: 2
  path.config: "/opt/elk/logstash/config/ljr.conf"
Each pipeline is configured in an individual config file that defines the input, any data manipulation to be performed, and the output.
Testing Configuration
As we have it configured, you must reload Logstash to implement any configuration changes. As errors in pipeline definitions will prevent the pipeline from loading, it is best to test the configuration prior to restarting Logstash.
/opt/elk/logstash/bin/logstash --config.test_and_exit -f ljr_firewall_logs_wip.conf
Some warnings or errors may be printed along the way – as long as the test ends with “Configuration OK”, the configuration is good.
Automatic Config Reload
The configuration can automatically be reloaded when changes to config files are detected. This doesn’t give you the opportunity to test a configuration before it goes live on the server (once the file is saved, it will be loaded … or fail to load if there’s an error).
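If automatic reload is wanted, it is enabled in logstash.yml with the standard config.reload settings – the interval shown here is just an example value:
config.reload.automatic: true
config.reload.interval: 30s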
Input
Input tells Logstash what data the pipeline will receive and in what format – is JSON data being sent to the pipeline, is syslog sending log data to the pipeline, or does data come from STDIN? The types of data that can be received are defined by the input plugins, and each input has its own configuration parameters. We use Beats, syslog, JSON (a codec rather than an input plugin), and Kafka, among others.
The input configuration also indicates which port to use for the pipeline – this needs to be unique!
Input for a pipeline on port 5055 receiving JSON formatted data
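A minimal sketch of what that input looks like – the tcp input plugin with the json codec (only the port number comes from the description above):
input {
  tcp {
    port  => 5055
    codec => json
  }
}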
Input for a pipeline on port 5100 (both TCP and UDP) receiving syslog data
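And a sketch of the syslog version listening on both protocols – tcp and udp inputs tagged with a “syslog” type (again, only the port comes from the description above):
input {
  tcp {
    port => 5100
    type => "syslog"
  }
  udp {
    port => 5100
    type => "syslog"
  }
}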
Output
Output is similarly simple – various output plugins define the systems to which data can be shipped. Each output has its own configuration parameters – ElasticSearch, Kafka, and file are the three output plug-ins we currently use.
ElasticSearch
Most of the data we ingest into logstash is processed and sent to ElasticSearch. The data is indexed and available to users through ES and Kibana.
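A sketch of an elasticsearch output – the hostname, index name, and credentials here are placeholders rather than our actual values:
output {
  elasticsearch {
    hosts    => ["https://elasticsearch.example.com:9200"]
    index    => "ljr-%{+YYYY.MM.dd}"
    user     => "logstash_writer"
    password => "${ES_PASSWORD}"
  }
}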
Kafka
Some data is sent to Kafka basically as a holding queue. It is then picked up by the “aggregation” logstash server, processed some more, and relayed to the ElasticSearch system.
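A sketch of a kafka output used as that holding queue – the broker and topic names are placeholders:
output {
  kafka {
    bootstrap_servers => "kafka01.example.com:9092"
    topic_id          => "ljr-logs"
    codec             => json
  }
}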
File
File output is generally used for debugging – seeing the output data allows you to verify your data manipulations are working properly (as well as to confirm that data is transiting the pipeline without resorting to tcpdump!).
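A sketch of a file output for that sort of debugging – the path is arbitrary, and the rubydebug codec pretty-prints each event:
output {
  file {
    path  => "/tmp/ljr_debug_output.txt"
    codec => rubydebug
  }
}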
Filter
Filtering allows data to be removed, attributes to be added to records, and parses data into fields. The types of filters that can be applied are defined by the filter plugins. Each plugin has its own documentation. Most of our data streams are filtered using Grok – see below for more details on that.
Conditional rules can be used in filters. This example filters out messages that contain the string “FIREWALL”, “id=firewall”, or “FIREWALL_VRF” as the business need does not require these messages, so there’s no reason to waste disk space and I/O processing, indexing, and storing these messages.
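A sketch of that sort of conditional – dropping any event whose message contains one of those strings:
filter {
  if [message] =~ /FIREWALL/ or [message] =~ /id=firewall/ or [message] =~ /FIREWALL_VRF/ {
    drop { }
  }
}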
This example adds a field, ‘sourcetype’, with a value that is based on the log file path.
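A sketch of that sort of rule – the [log][file][path] field and the sourcetype values below are illustrative assumptions:
filter {
  if [log][file][path] =~ /secure/ {
    mutate { add_field => { "sourcetype" => "linux_secure" } }
  } else if [log][file][path] =~ /maillog/ {
    mutate { add_field => { "sourcetype" => "linux_mail" } }
  }
}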
Grok
The grok filter is a Logstash plugin that is used to extract data from log records – this allows us to pull important information into distinct fields within the ElasticSearch record. Instead of having the full message in the ‘message’ field, you can have success/failure in its own field, the logon user in its own field, or the source IP in its own field. This allows more robust reporting. If the use case just wants to store data, parsing the record may not be required. But, if they want to report on the number of users logged in per hour or how much data is sent to each IP address, we need to have the relevant fields available in the document.
Patterns used by the grok filter are maintained in a Git repository – the grok-patterns file contains the base data types, like the ‘DATA’ in %{DATA:fieldname}.
The following are the ones I’ve used most frequently:
Name | Field Type | Pattern | Notes |
DATA | Text data | .*? | This does not expand to the most matching characters – looking for foo.*?bar in “foobar is not really a word, but foobar gets used a lot in IT documentation” only matches the first “foobar” |
GREEDYDATA | Text data | .* | Whereas this matches the most matching characters – foo.*bar in the same sentence matches all of “foobar is not really a word, but foobar” |
IPV4 | IPv4 address | | |
IPV6 | IPv6 address | | |
IP | IP address – either v4 or v6 | (?:%{IPV6}|%{IPV4}) | This provides some flexibility as groups move to IPv6 – but it’s a more complex pattern, so I’ve been using IPV4 with the understanding that we may need to adjust some parsing rules in the future |
LOGLEVEL | Text data | | Regex matching the list of standard log level strings – provides data validation over using DATA (i.e. if someone sets their log level to “superawful”, it won’t match) |
SYSLOGBASE | Text data | | Matches the standard start of a syslog record. Often used as “%{SYSLOGBASE} %{GREEDYDATA:msgtext}” to parse out the timestamp, facility, host, and program – the remainder of the text is mapped to “msgtext” |
URI | Text data | | protocol://stuff text is parsed into the protocol, user, host, path, and query parameters |
INT | Numeric data | (?:[+-]?(?:[0-9]+)) | Signed or unsigned integer |
NUMBER | Numeric data | | Can include a cast like %{NUMBER:fieldname:int} or %{NUMBER:fieldname:float} |
TIMESTAMP_ISO8601 | DateTime | %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}? | There are various other date patterns depending on how the string is formatted – this is the one that matches YYYY-MM-DDThh:mm:ss |
Parsing an entire log string
In a system with a set format for log data, parsing the entire line is reasonable – and, often, there will be a pre-built pattern for well-known log types. For example, if you are using the default Apache HTTPD log format, you don’t need to write a pattern for each component of the log line – just match either the HTTPD_COMBINEDLOG or HTTPD_COMMONLOG pattern.
match => { "message" => "%{HTTPD_COMMONLOG}" }
But you can create your own filter as well – internally developed applications and less common vendor applications won’t have prebuilt filter rules.
match => { "message" => "%{TIMESTAMP_ISO8601:logtime} - %{IPV4:srcip} - %{IPV4:dstip} - %{DATA:result}" }
Extracting an array of data
Instead of trying to map an entire line at once, you can extract individual data elements by matching an array of patterns within the message.
match => { "message" => ["srcip=%{IPV4:src_ip}"
    , "srcport=%{NUMBER:srcport:int}"
    , "dstip=%{IPV4:dst_ip}"
    , "dstport=%{NUMBER:dstport:int}"] }
This means the IP and port information will be extracted regardless of the order in which the fields are written in the log record. This also allows you to parse data out of log records where multiple different formats are used (as an example, the NSS Firewall logs) instead of trying to write different parsers for each of the possible string combinations.
Logstash, by default, breaks when a match is found. This means you can ‘stack’ different filters instead of using if tests. Sometimes, though, you don’t want to break when a match is found – maybe you are extracting a bit of data that gets used in another match. In these cases, you can set break_on_match to ‘false’ in the grok rule.
I have also had to set break_on_match when extracting an array of values from a message.
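A sketch of that setting within a grok filter – the patterns are just a shortened version of the array example above:
filter {
  grok {
    break_on_match => false
    match => { "message" => [ "srcip=%{IPV4:src_ip}", "dstip=%{IPV4:dst_ip}" ] }
  }
}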
Troubleshooting
Log Files
Logstash logs output to /opt/elk/logstash/logs/logstash-plain.log – the logging level is defined in the /opt/elk/logstash/config/log4j2.properties configuration file.
Viewing Data Transmitted to a Pipeline
There are several ways to confirm that data is being received by a pipeline – tcpdump can be used to verify information is being received on the port. If no data is being received, the port may be offline (if there is an error in the pipeline config, the pipeline will not load – grep /opt/elk/logstash/logs/logstash-plain.log for the pipeline name to view errors), there may be a firewall preventing communication, or the sender may not be transmitting data.
tcpdump dst port 5100 -vv
If data is confirmed to be coming into the pipeline port, add a “file” output to the pipeline so you can verify that events are making it through the filter logic.
Issues
Data from filebeat servers not received in ElasticSearch
We have encountered a scenario where data from the filebeat servers was not being transmitted to ElasticSearch. Monitoring the filebeat server did not show any data being sent. Restarting the Logstash servers allowed data to be transmitted as expected.
OHM v/s Survey Results – Word Clouds
I was extremely suspicious when the term ‘bucolic’ made it into the tag cloud presented by OHM — it’s a great word, sure. But with a thousand responses … I highly doubt a significant number of people used the term (five did, based on the raw survey results). So I generated a few word clouds of my own for comparison. This is the image presented in the Township meeting — the idea of a word cloud is that a word’s size increases with how frequently it occurs.
Compared to the tag cloud generated by separating responses on commas — not great because some people space-delimited their three words.
And separating on word boundaries — also not great because some people’s “words” were actually phrases, but it gets those who space delimited their three words.
And separating on word boundaries and aligning similar-meaning words (e.g. farm, farms, farmland => farm), which becomes a little subjective — do “wooded” and “trees” mean the same thing? Maybe and maybe not. Same with ‘country’ and ‘countryside’ … or even the difference between ‘farm’ and ‘agriculture’.
Simulating Syslog Data
After creating a syslog pipeline, it is convenient to be able to test that data is being received and parsed as expected. You can use the logger utility (from the util-linux package) using “-n” to specify the target server, -P to specify the target port, either -d for udp or -T for tcp, -i with the process name, -p with the log priority, and the message content in quotes.
As an example, this command sends a sample log record to the logstash server. If the pipeline is working properly, the document will appear in ElasticSearch.
logger -n logstash.example.com -P 5101 -d -i ljrtest -p user.notice '<date=2022-06-22 time=09:09:28 devname="fcd01" \
devid="AB123DEF45601874" eventtime=1655914168555429048 tz="-0700" logid="0001000014" type="traffic" subtype="local" \
level="notice" vd="EXAMPLE-CORP" srcip=10.4.5.10 srcport=56317 srcintf="VLAN1" srcintfrole="wan" dstip=10.2.3.212 \
dstport=61234 dstintf="EXAMPLE-CORP" dstintfrole="undefined" srccountry="United States" dstcountry="United States" sessionid=3322792 \
proto=6 action="deny" policyid=0 policytype="local-in-policy" service="tcp/61234" trandisp="noop" app="tcp/61234" duration=0 \
sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0'
Useful K8s Commands
Shell into pod
kubectl exec -it <pod_name> -- /bin/bash
Quick Docker Test
I’m building a quick image to test permissions to a folder structure for a friend … so I’m making a quick note about how I’m building this test image so I don’t have to keep re-typing it all.
FROM python:3.7-slim
RUN apt-get update && apt-get install -y curl --no-install-recommends
RUN mkdir -p /srv/ljr/test
# Create service account
RUN groupadd -g 30001 webuser && useradd --comment "my service account" -u 30001 -g 30001 --shell /sbin/nologin --no-create-home webuser
# Copy a bunch of stuff various places under /srv
# Set ownership on /srv folder tree
RUN chown -R 30001:30001 /srv
USER 30001:30001
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
Build the image
docker build -t "ljr:latest" .
Run a transient container that is deleted on exit
docker run --rm -it --entrypoint "/bin/bash" ljr
Totally Normal Party Platform
Nothing unusual here … just your everyday state GOP platform document reminding everyone that the state has a right to secede from the US
And looking to hold a referendum in 2023 to decide if they should do it or not.