This isn’t something I will notice again now that I’ve got my heap space sorted … but when you are using Kibana to add index patterns, there’s nothing that waits until you’ve stopped typing for x seconds or enforces a minimum number of characters before searching. So, after each character you type into the dialog, a search is performed against the ES server — here I wanted to create a pattern for logstash-*, which sent nine different search requests to my ES. And, if the server is operating reasonably, all of these searches deliver result sets that Kibana essentially ignores as I’m typing more characters. But, in my case, the ES server fell over before I could even type logstash!
ElasticSearch Java Heap Size — Order of Precedence
I was asked to look at a malfunctioning ElasticSearch server this week. It’s a lab sandbox, so not a huge deal … but still something they wanted online and functioning for some proof-of-concept testing. And, as a bonus, they’re willing to let us use their lab servers for our own sandboxing (IPv6 implementation, ELK upgrades). There were a handful of problems (it looks like the whole thing used to be a multi-node cluster, but the second cluster node has vanished without a trace; the entire platform was re-IP’d; vm.max_map_count wasn’t set on the Docker server; and the logstash folder had a backup of a pipeline config in /path/to/logstash/pipeline/ … which still loads and causes a continual stream of port-in-use exceptions). But the biggest problem was that any attempt at actually using the ElasticSearch server resulted in it falling over: Java crashed with an out of memory error because the heap space was exhausted. Now I’ve had really small sandboxes before — my first ES sandbox only had a gig of memory and was quite prone to this crash. But the lab server they’ve set up has 64GB of memory, so allocating a few gigs seemed like a quick solution.
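As an aside, the vm.max_map_count fix is quick, since Elasticsearch wants at least 262144. A minimal sketch, run on the Docker host (the file name under /etc/sysctl.d is my own choice):
# Set vm.max_map_count for the running kernel
sysctl -w vm.max_map_count=262144
# Persist the setting across reboots
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elasticsearch.conf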
The jvm.options file and jvm.options.d folder weren’t mounted into the container — they were the default files held within the container. Which seemed odd, and I made a mental note that it was something we’d need to either mount in or update again when the container gets updated. But no matter how much heap space I allocated, ES crashed.
I discovered that the Docker deployment set an ENV variable for ES_JAVA_OPTS — something which, per the ElasticSearch documentation, overrides all other JVM options. So no matter what I was putting into the jvm.options file, the 256 meg heap set in the ENV was actually being used.
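You can verify what the container is actually getting by inspecting its environment (the container name below is a placeholder for whatever yours is called):
# Dump the environment variables Docker passes to the container
docker inspect your-es-container --format '{{json .Config.Env}}'
# Look for an entry like "ES_JAVA_OPTS=-Xms256m -Xmx256m"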
Luckily it’s not terribly difficult to modify the ENVs within an existing container. You could, of course, redeploy the container with the new settings (and I’ll do that next time, since I’ve also got to get IPv6 enabled). But I wasn’t planning on making any other changes.
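For reference, one way to do it (sketched under the assumption of a stock Docker install, with the container ID as a placeholder) is to edit the container’s stored configuration while the Docker daemon is stopped:
# Stop the Docker daemon so the on-disk container config can be edited safely
systemctl stop docker
# Update ES_JAVA_OPTS in the "Env" array of the container's config file
vi /var/lib/docker/containers/<container-id>/config.v2.json
systemctl start docker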
ElasticSearch – Useful API Commands
In all of these examples, the copy/paste text uses localhost and port 9200. Since some of my sandboxes don’t use the default port, some of the example outputs will use a different port. Obviously, use your hostname and port. And, if your ES instance requires authentication, add the “-u” option with the user (or user:password … but that’s not a good idea outside of sandboxes, as the password is then stored in the shell history). If you are using https for the API endpoint, you may also need to add the “-k” option to allow an untrusted SSL connection (e.g. when the CA isn’t trusted by your OS).
curl -k -u elastic https://localhost...
Listing All Indices
Use the following command to list all of the indices in the ES system:
curl http://localhost:9200/_cat/indices?v
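The _cat endpoints also accept an index pattern and an “h” parameter to select specific columns, which keeps the output manageable on busy clusters, e.g.:
curl "http://localhost:9200/_cat/indices/logstash-*?v&h=index,health,docs.count,store.size"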
Listing All Templates
Use the following command to list all of the templates:
curl http://localhost:9200/_cat/templates?pretty
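As with indices, the templates endpoint can be filtered by a name pattern:
curl "http://localhost:9200/_cat/templates/logstash*?v"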
Explain Shard Allocation
I was asked to help get an ELK installation back into working order — one of the things I noticed is that all of the indices were yellow, and the log file showed allocation errors. This command reports on the allocation decision being made. In the case I was looking at, the problem became immediately obvious: it was a single-node system with 1 replica defined, and the explanation was that the replica shard could not be allocated because a copy of it already existed on that node.
curl http://localhost:9200/_cluster/allocation/explain
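The interesting bit is in the response’s allocation deciders. On my single-node system, it looked roughly like this (abridged and illustrative, not a verbatim capture):
{
  "can_allocate" : "no",
  "deciders" : [
    {
      "decider" : "same_shard",
      "decision" : "NO",
      "explanation" : "a copy of this shard is already allocated to this node"
    }
  ]
}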
If the maximum number of allocation retries has been exceeded, you can force ES to retry allocation (for example, when a disk was full for an extended period of time but space has since been cleared and everything should work now):
curl -X POST http://localhost:9200/_cluster/reroute?retry_failed=true
Set the Number of Replicas for a Single Index
Once I identified that the single-node ELK instance had indices configured with one replica, I set the replica count to zero so the shards could all be allocated:
curl -X PUT \
  http://127.0.0.1:9200/logstash-2021.05.08/_settings \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{"index" : {"number_of_replicas" : 0}}'
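Rather than adjusting one index at a time, the same settings call can take a wildcard pattern; a sketch using the same endpoint:
curl -X PUT \
  'http://127.0.0.1:9200/logstash-*/_settings' \
  -H 'content-type: application/json' \
  -d '{"index" : {"number_of_replicas" : 0}}'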
Add an Alias to an Index
To add an alias to an existing index, use PUT /<indexname>/_alias/<aliasname> — e.g.
PUT /ljr-2022.07.05/_alias/ljr
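Or, as a curl command rather than Kibana Dev Tools syntax:
curl -X PUT http://localhost:9200/ljr-2022.07.05/_alias/ljr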
ElasticSearch 7.7.0 Log4J Vulnerability Remediation
We are a little stuck with our ElasticSearch implementation — we need the OpenDistro authentication, so we either need to buy a newer ElasticSearch license or move to OpenSearch. That’s an ongoing project, but it won’t be done in time to address these log4j issues.
To address the existing Log4J issues in ElasticSearch, as well as, evidently, another challenge that meant a new log4j build over the weekend, I am manually replacing the Log4j jar files for ElasticSearch 7.7.0, OpenDistro Security, and the S3 backup plugin. The 2.11.1 version that was bundled with the distribution can be replaced with the 2.17.0 release from Dec 18th.
The new JARs can be downloaded from Maven Repo at:
https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-core/
https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-api/
https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-slf4j-impl/
https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-1.2-api/
To upgrade just log4j, the following script is run … well, first shard allocation is disabled:
# Run in Kibana Dev Tools: flush, then disable shard allocation
POST /_flush/synced
PUT /_cluster/settings
{ "transient" : { "cluster.routing.allocation.enable": "none" } }
Then the script is run:
export l4jver=2.17.0
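# The replacement jars from the Maven links above are assumed to have been staged in /tmp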
systemctl stop elasticsearch
# Remove old jars
rm --interactive=never /opt/elk/elasticsearch/lib/log4j-api-*.jar
rm --interactive=never /opt/elk/elasticsearch/lib/log4j-core-*.jar
rm --interactive=never /opt/elk/elasticsearch/modules/x-pack-identity-provider/log4j-slf4j-impl-*.jar
rm --interactive=never /opt/elk/elasticsearch/modules/x-pack-security/log4j-slf4j-impl-*.jar
rm --interactive=never /opt/elk/elasticsearch/plugins/opendistro_security/log4j-slf4j-impl-*.jar
rm --interactive=never /opt/elk/elasticsearch/modules/x-pack-core/log4j-1.2-api-*.jar
rm --interactive=never /opt/elk/elasticsearch/plugins/repository-s3/log4j-1.2-api-*.jar
# Copy in upgraded jars
cp /tmp/log4j-api-$l4jver*.jar /opt/elk/elasticsearch/lib/
cp /tmp/log4j-core-$l4jver*.jar /opt/elk/elasticsearch/lib/
cp /tmp/log4j-slf4j-impl-$l4jver*.jar /opt/elk/elasticsearch/modules/x-pack-identity-provider/
cp /tmp/log4j-slf4j-impl-$l4jver*.jar /opt/elk/elasticsearch/modules/x-pack-security/
cp /tmp/log4j-slf4j-impl-$l4jver*.jar /opt/elk/elasticsearch/plugins/opendistro_security/
cp /tmp/log4j-1.2-api-$l4jver*.jar /opt/elk/elasticsearch/modules/x-pack-core/
cp /tmp/log4j-1.2-api-$l4jver*.jar /opt/elk/elasticsearch/plugins/repository-s3/
# Fix permissions
chown elkadmin:elkadmin /opt/elk/elasticsearch/lib/log4j*
chown elkadmin:elkadmin /opt/elk/elasticsearch/modules/x-pack-core/log4j*
chown elkadmin:elkadmin /opt/elk/elasticsearch/modules/x-pack-identity-provider/log4j*
chown elkadmin:elkadmin /opt/elk/elasticsearch/modules/x-pack-security/log4j*
chown elkadmin:elkadmin /opt/elk/elasticsearch/plugins/repository-s3/log4j*
chown elkadmin:elkadmin /opt/elk/elasticsearch/plugins/opendistro_security/log4j*
systemctl start elasticsearch
# Clean up temp files
rm /tmp/log4j*
And finally routing is re-enabled:
PUT /_cluster/settings
{ "transient" : { "cluster.routing.allocation.enable" : "all" } }
Baked French Toast Casserole with Maple Syrup
Ingredients
- 1 loaf French bread (~16 ounces)
- 8 large eggs
- 1.5 cups heavy whipping cream
- 1.5 cups milk
- 2 tablespoons maple syrup
- 1 teaspoon vanilla extract
- 1/4 teaspoon ground cinnamon
- 1/4 teaspoon ground nutmeg
- Dash salt
Directions
Slice bread into 20 slices, 1-inch each. Arrange slices in a generously buttered 9 by 13-inch baking dish in 2 rows, overlapping the slices. In a large bowl, combine the eggs, cream, milk, maple syrup, vanilla, cinnamon, nutmeg, and salt and beat with a rotary beater or whisk until blended but not too bubbly. Pour mixture over the bread slices, making sure all are covered evenly with the milk-egg mixture. Spoon some of the mixture in between the slices. Cover with foil and refrigerate overnight.
The next day, preheat oven to 350 degrees F.
Bake for 40 minutes, until puffed and lightly golden. Serve with maple syrup, fresh fruit, and fresh whipped cream.