Author: Lisa

Adding SSL To Kafka Server

Obtain SSL Certificates for Each Server

The following process was used to enable SSL communication with the Kafka servers. First, generate certificates for each server in the environment. I am using a third-party certificate provider, Venafi. When you download the certificates, make sure to select the “PEM (OpenSSL)” format and check the box to “Extract PEM content into separate files (.crt, .key)”.

Upload each zip file to the appropriate server under /tmp/ named in the $(hostname).zip format. The following series of commands creates the files needed in the Kafka server configuration. You will be asked to set passwords for the keystore and truststore JKS files. Don’t forget what you use — we’ll need them later.

# Assumes Venafi certificates downloaded as OpenSSL zip files with separate public/private keys are present in /tmp/$(hostname).zip
mkdir /kafka/config/ssl/$(date +%Y)
cd /kafka/config/ssl/$(date +%Y)
mv /tmp/$(hostname).zip ./
unzip $(hostname).zip

# Create keystore for Kafka
openssl pkcs12 -export -in $(hostname).crt -inkey $(hostname).key -out $(hostname).p12 -name $(hostname) -CAfile ./ca.crt -caname root
keytool -importkeystore -destkeystore $(hostname).keystore.jks -srckeystore $(hostname).p12 -srcstoretype pkcs12 -alias $(hostname)

# Create truststore from CA certs
keytool -keystore kafka.server.truststore.jks -alias SectigoRoot -import -file "Sectigo RSA Organization Validation Secure Server CA.crt"
keytool -keystore kafka.server.truststore.jks -alias UserTrustRoot -import -file "USERTrust RSA Certification Authority.crt"

# Fix permissions
chown -R kafkauser:kafkagroup /kafka/config/ssl

# Create symlinks for current-year certs
cd ..
ln -s /kafka/config/ssl/$(date +%Y)/$(hostname).keystore.jks /kafka/config/ssl/kafka.keystore.jks
ln -s /kafka/config/ssl/$(date +%Y)/kafka.server.truststore.jks /kafka/config/ssl/kafka.truststore.jks

By creating symlinks to the active certs, you can renew the certificates by creating a new /kafka/config/ssl/$(date +%Y) folder and updating the symlink. No change to the configuration files is needed.
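As a sketch, next year’s renewal would look something like this — repeat the keystore/truststore steps above in a new year-stamped folder, then repoint the symlinks (ln -sfn replaces an existing symlink in place):

mkdir /kafka/config/ssl/$(date +%Y)
cd /kafka/config/ssl/$(date +%Y)
# ... unzip the new cert bundle and repeat the openssl/keytool steps above ...
ln -sfn /kafka/config/ssl/$(date +%Y)/$(hostname).keystore.jks /kafka/config/ssl/kafka.keystore.jks
ln -sfn /kafka/config/ssl/$(date +%Y)/kafka.server.truststore.jks /kafka/config/ssl/kafka.truststore.jks

Note that the broker still needs a restart to load the new keystore.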

Update Kafka server.properties to Use SSL

Append a listener prefixed with SSL:// to the existing listeners – as an example:

#2024-03-27 LJR Adding SSL port on 9095
#listeners=PLAINTEXT://kafka1587.example.net:9092
#advertised.listeners=PLAINTEXT://kafka1587.example.net:9092
listeners=PLAINTEXT://kafka1587.example.net:9092,SSL://kafka1587.example.net:9095
advertised.listeners=PLAINTEXT://kafka1587.example.net:9092,SSL://kafka1587.example.net:9095

Then add configuration values to use the keystore and truststore, specify which SSL protocols will be permitted, and set whatever client auth requirements you want:

ssl.keystore.location=/kafka/config/ssl/kafka.keystore.jks
ssl.keystore.password=<WhateverYouSetEarlier>
ssl.truststore.location=/kafka/config/ssl/kafka.truststore.jks
ssl.truststore.password=<WhateverYouSetForThisOne>
ssl.enabled.protocols=TLSv1.2,TLSv1.3
# Client auth: none, requested, or required (keep the comment on its own line -- trailing comments are not valid in properties files)
ssl.client.auth=none

Save the server.properties file and use “systemctl restart kafka” to restart the Kafka service.
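Once the broker is back up, you can sanity-check that the SSL listener is presenting the expected certificate with openssl s_client (using the example host and port from above):

openssl s_client -connect kafka1587.example.net:9095 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates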

Update Firewall Rules to Permit Traffic on New Port

firewall-cmd --add-port=9095/tcp
firewall-cmd --add-port=9095/tcp --permanent
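To confirm the rule took effect, list the open ports in the runtime configuration and check for 9095/tcp:

firewall-cmd --list-ports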

Bacon

I read this crazy way of cooking bacon — so much better than carefully spreading three slices out across the pan and cooking in batches. You take the whole package (or whatever portion thereof you wish to cook). Separate the slices, but pile them up loosely in the pan over medium heat.

This takes 15-25 minutes — just let it cook, stirring occasionally.

Then remove your beautiful, crispy, curly bacon slices to a paper towel to drain.

OpenSearch Proof of Concept In-Place Upgrade from ElasticSearch 7.7.0 to OpenSearch 2.12.0

I need to migrate my ElasticSearch installation over to OpenSearch. From reading the documentation, it isn’t really clear if that is even possible as an in-place upgrade or if I’d need to use a remote reindex or snapshot backup/restore. So I tested the process with a minimal data set. TL;DR: Yes, it works.

Create a docker instance of ElasticSearch 7.7.0

mkdir /docker/es/esdata
chmod -R g+rwx /docker/es/esdata
chgrp -R 0 /docker/es/esdata

mkdir /docker/es/esconfig

Populate configuration info into ./esconfig; ./esdata stays an empty directory for now.
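If you need a starting point for ./esconfig, one approach is to copy the image’s default configuration out of a throwaway container (estmp here is just an arbitrary name):

docker create --name estmp docker.elastic.co/elasticsearch/elasticsearch:7.7.0
docker cp estmp:/usr/share/elasticsearch/config/. /docker/es/esconfig/
docker rm estmp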

docker run --name es770 -dit -v /docker/es/esdata:/usr/share/elasticsearch/data -v /docker/es/esconfig:/usr/share/elasticsearch/config -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.7.0

Populate Data into ElasticSearch Sandbox

Use curl to populate an index with some records. You can create lifecycle policies, customize the fields, etc. This is the bare minimum to validate that data in ES7.7 can be ingested by OS2.12 (note the closing quote on its own line — the bulk API requires a trailing newline):

curl -X POST "localhost:9200/ljrtest/_bulk" -H "Content-Type: application/x-ndjson" -d'
{"index": {"_id": "1"}}
{"id": "1", "message": "Record one"}
{"index": {"_id": "2"}}
{"id": "2", "message": "Record two"}
{"index": {"_id": "3"}}
{"id": "3", "message": "Record three"}
{"index": {"_id": "4"}}
{"id": "4", "message": "Record four"}
{"index": {"_id": "5"}}
{"id": "5", "message": "Record five"}
{"index": {"_id": "6"}}
{"id": "6", "message": "Record six"}
{"index": {"_id": "7"}}
{"id": "7", "message": "Record seven"}
{"index": {"_id": "8"}}
{"id": "8", "message": "Record eight"}
{"index": {"_id": "9"}}
{"id": "9", "message": "Record nine"}
{"index": {"_id": "10"}}
{"id": "10", "message": "Record ten"}
'
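A quick count confirms the bulk load landed (the 7.7 image ships with security disabled, so plain HTTP works here); expect "count":10 in the response:

curl "localhost:9200/ljrtest/_count"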

Shut Down ElasticSearch

docker stop es770

Bring Up an OpenSearch 2.12 Host

mkdir /docker/es/osconfig

Populate the configuration data for OpenSearch in ./osconfig (the same docker-cp trick shown above works here, substituting the opensearch:2.12.0 image and /usr/share/opensearch/config).

docker run --name os212 -dit -v /docker/es/esdata:/usr/share/opensearch/data -v /docker/es/osconfig:/usr/share/opensearch/config -p 9200:9200 -p 9600:9600 -e "discovery.type=single-node" -e "OPENSEARCH_INITIAL_ADMIN_PASSWORD=P@s5w0rd-123" opensearchproject/opensearch:2.12.0

Verify Data is Still Available in OpenSearch

[root@docker es]# curl -k -u "admin:P@s5w0rd-123" https://localhost:9200/ljrtest
{"ljrtest":{"aliases":{},"mappings":{"properties":{"id":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}},"message":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}}},"settings":{"index":{"creation_date":"1710969477402","number_of_shards":"1","number_of_replicas":"1","uuid":"AO5JBoyzSJiKZA9xeA2imQ","version":{"created":"7070099","upgraded":"136337827"},"provided_name":"ljrtest"}}}}

Conclusion

Yes, a very basic data set in ElasticSearch 7.7.0 can be upgraded in-place to OpenSearch 2.12.0 — in the “real world” compatibility issues will crop up (flatten!!), but the idea is fundamentally sound.

The problem, though, is compatibility. We don’t have exotic data types in our instance, but Kibana uses “flatten” … so those rare people who use Kibana to access and visualize their data really cannot just move to OpenSearch. That’s a huge caveat. I can recreate everything manually after deleting all of the Kibana indices (and possibly more; I haven’t gone down this route to check). But if I’m going to recreate everything, why wouldn’t I recreate everything and use remote reindex to move the data? I can do this incrementally — take a week to move all the data slowly, do a catch-up reindex at t-2 days, another at t-1 days, another the day of the change, heck even one a few hours before the change. Then the change itself is a quick delta reindex, stop ElasticSearch, and swap over to OpenSearch. The backout is to just swing back to the fully functional, unchanged ElasticSearch instance.
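For reference, a remote reindex along those lines would look roughly like this. This is a sketch: es-old.example.net is a hypothetical source host, and it would need to be added to reindex.remote.allowlist in opensearch.yml first. The catch-up passes would add a timestamp range query to the source block so each run only pulls recent changes.

curl -k -u "admin:P@s5w0rd-123" -X POST "https://localhost:9200/_reindex" -H "Content-Type: application/json" -d'
{
  "source": {
    "remote": { "host": "http://es-old.example.net:9200" },
    "index": "ljrtest"
  },
  "dest": { "index": "ljrtest" }
}'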

2024 Orchard: Filling the Orchard

This is the year we fill the orchard! The plants have started arriving, and hopefully we will be able to get them in the ground this weekend (not looking good, so it will probably be next weekend).

I’ve also got plums, northern almonds, and northern pecans to plant in the pasture area. We’re excited to see what pecans and almonds that can survive USDA Growing Zone 6 taste like!

Office 365 Activation Failure

We’ve been working to lock down our workstations … not “so secure you cannot use it”, but just this side of the functional/nonfunctional line. Everything went surprisingly well except I use the Office 365 suite for work. Periodically, it has to “phone home” and verify my work account is still valid. And that didn’t seem to go through the proxy well. The authentication screen would pop up and immediately throw an error:

No internet connection. Please check your network settings and try again [2604]

I spent a whole bunch of time playing around with the firewall rules, the proxy rules … and finally went so far as to just turn off the firewall and remove the proxy. And it still didn’t work. Which was nice because it means I didn’t break it … but also meant it was going to be a lot harder to fix!

Finally found the culprit — a new Windows installation, for some reason, uses really old SSL/TLS versions. Turned on TLS 1.2 and, voila, I’ve got a sign-on screen. Sigh! Turned the firewall & proxy back on, and everything works beautifully. I think I’m going to add these settings to the domain policy so I don’t have to configure this silliness every time.
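For anyone chasing the same thing: the relevant knobs live under the standard Schannel protocol keys in the registry. A sketch of enabling TLS 1.2 from an elevated prompt (verify against your own environment before baking this into a GPO):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v Enabled /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" /v DisabledByDefault /t REG_DWORD /d 0 /f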