I made my own SCOBY without using kombucha — diced up some old apples and coated them in sugar. It worked! I just started my first batch of kombucha using tea and maple syrup.
Resetting Lost/Forgotten ElasticSearch Admin Passwords
There are a few ways to reset the password on an individual account … but they all require you to know the password for an account with sufficient privileges. What about when you don't have any good passwords? (You may be able to read a known good password out of kibana.yml, so that's a good place to check first.) Provided you have OS access, just create another superuser account using the elasticsearch-users binary:
/usr/share/elasticsearch/bin/elasticsearch-users useradd ljradmin -p S0m3pA5sw0Rd -r superuser
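You can confirm the account (and its superuser role) with the same binary:

/usr/share/elasticsearch/bin/elasticsearch-users list ljradmin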
You can then use curl against the ElasticSearch API to reset the elastic account's password:
curl -s --user ljradmin:S0m3pA5sw0Rd -XPUT "http://127.0.0.1:9200/_security/user/elastic/_password" -H 'Content-Type: application/json' -d'
{
"password" : "N3wPa5sw0Rd4ElasticU53r"
}
'
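To confirm the new password took, authenticate as elastic (a quick sanity check using the example password from above). Once everything checks out, the temporary superuser can be removed:

curl -s --user elastic:N3wPa5sw0Rd4ElasticU53r "http://127.0.0.1:9200/_security/_authenticate?pretty"
/usr/share/elasticsearch/bin/elasticsearch-users userdel ljradmin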
ElasticSearch ILM – Data Lifecycle
The following defines a simple data lifecycle policy we use for event log data.
Immediately, the data is in the “hot” phase.
After one day, it is moved to the "warm" phase, where the index is force-merged down to a single segment (lots-o-segments are good for writing, but since we're dealing with timescale stats & log data [i.e. something that's not being written to the next day], there is no need to optimize write performance; the index will be read-only, thus can be optimized for read performance). After seven days, the index is frozen (mostly moved out of memory) since, in this use case, data generally isn't used after a week; there is no need to fill up the server's memory to speed up access to unused data elements. Since freeze is deprecated and will be removed in a future version (due to improvements in memory utilization that should obsolete freezing indices), we'll need to watch our memory usage after upgrading to ES8.
Finally, after fourteen days, the data is deleted.
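As a sketch, a policy implementing that lifecycle looks something like the following (the policy name ljrlogs-policy is a placeholder, and freeze is written in its ES7 form):

PUT _ilm/policy/ljrlogs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "min_age": "0ms",
        "actions": {}
      },
      "warm": {
        "min_age": "1d",
        "actions": {
          "readonly": {},
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "cold": {
        "min_age": "7d",
        "actions": {
          "freeze": {}
        }
      },
      "delete": {
        "min_age": "14d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}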

To use the policy, set it as the template on an index:
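A sketch using the composable index template API (the template name and index pattern are assumptions based on the ljrlogs-* index names):

PUT _index_template/ljrlogs
{
  "index_patterns": ["ljrlogs-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "ljrlogs-policy"
    }
  }
}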

Upon creating a new index (ljrlogs-5), the ILM policy has been applied:
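The ILM explain API shows the policy attached to the index and its current phase:

GET ljrlogs-5/_ilm/explain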

Greenhouse Construction – Part 1
Orchard Layout
Upgrading ElasticSearch – From 7.6 to 7.17
Before upgrading to 8, you must be running at least version 7.17 … so I am first upgrading my ES7 to a new enough version that upgrading to ES8 is possible.
Environment
Non-master-eligible nodes:
a6b30865c82c.example.com
a6b30865c83c.example.com
Master-eligible nodes:
a6b30865c81c.example.com
- Disable shard allocation
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
- Stop non-essential indexing and flush
POST _flush/synced
- Upgrade the non-master-eligible nodes first, then the master-eligible nodes. One at a time, SSH to the host and upgrade ES
a. Stop ES
systemctl stop elasticsearch
b. Install the new RPM:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-x86_64.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.3-x86_64.rpm.sha512
shasum -a 512 -c elasticsearch-7.17.3-x86_64.rpm.sha512
rpm -U elasticsearch-7.17.3-x86_64.rpm
c. Update configuration for new version
vi /usr/lib/tmpfiles.d/elasticsearch.conf
vi /etc/elasticsearch/elasticsearch.yml # add action.auto_create_index as required -- "*" to allow all, or a comma-separated list of index patterns to restrict auto-creation to certain indices
d. Update unit file and start services
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch.service
- On the Kibana server, upgrade Kibana to a matching version:
systemctl stop kibana
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.3-x86_64.rpm
rpm -U kibana-7.17.3-x86_64.rpm
systemctl daemon-reload
systemctl enable kibana
systemctl start kibana
- Access the Kibana console and ensure the upgraded node is back online
- Re-enable shard allocation
PUT _cluster/settings{"persistent": {"cluster.routing.allocation.enable": null }}
Linux Disk Utilization – Reducing Size of /var/log/sa
We occasionally get alerted that our /var volume is over 80% full … which generally means /var/log has a lot of data, some of which is really useful and some of it not so useful. The application-specific log files already have the shortest retention period that is reasonable (and logs that are rotated out are compressed). Similarly, the system log files rotated through logrotate.conf and logrotate.d/* have been configured with reasonable retention.
Using du -sh /var/log/* showed that the /var/log/sa folder took half a gig of space.
This is the daily output from sar (a "daily summary of process accounting" cron'd up with /etc/cron.d/sysstat). This content doesn't get rotated out by the expected logrotate configuration; it has its own configuration at /etc/sysconfig/sysstat. Changing the number of days retained (or, in my case, compressing some of the older files) is a quick way to reduce the amount of space the sar output files consume.
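As a sketch (HISTORY and COMPRESSAFTER are the stock sysstat retention settings; the values are assumptions to tune for your environment), trim the retention in /etc/sysconfig/sysstat and compress the existing week-old files:

# /etc/sysconfig/sysstat
HISTORY=7          # days of sar data to keep
COMPRESSAFTER=2    # compress data files older than this many days

# one-off cleanup: compress existing files older than a week
find /var/log/sa -type f -mtime +7 ! -name '*.xz' -exec xz {} \;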
Proto-chickens and Proto-ducks — 8 days later
Scott and Anya candled all of the eggs tonight — of the 41 eggs, there are three that might not be developing. But all of the eggs are still in the incubator because there weren’t any obviously undeveloped eggs. If all of these eggs hatch, we’re going to have an absolute swarm of baby birds!
Certbot — Plugin Not Found
I got a certificate expiry warning this morning — an oddity because I’ve had a cron task renewing our certificates for quite some time. Running the cron’d command manually … well, that would do it! The plug-in for my DNS registrar isn’t found.
Checking the registered plugins, well … it’s not there.
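The check in question is certbot's own plugin listing:

certbot plugins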
Except it's there: running "pip install certbot-dns-porkbun" (and even trying pip3 just to make sure) tells me it's already installed. Looking around for the files, this turns out to be one of those things with an obviously right way and a quick way to solve it. For some reason, /usr/local/lib is not being searched for packages even though it's included in my PYTHONPATH. The right thing to do is figure out why that is. The quick solution? Symlink the packages into where they need to be:
ln -s /usr/local/lib/python3.10/site-packages/certbot_dns_porkbun /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/pkb_client /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/filelock /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.7/site-packages/tldextract /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/requests_file /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/certbot_dns_porkbun-0.2.1.dist-info /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/filelock-3.6.0.dist-info /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/pkb_client-1.2.dist-info /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.7/site-packages/tldextract-3.0.2.dist-info/ /usr/lib/python3.10/site-packages/
ln -s /usr/local/lib/python3.10/site-packages/requests_file-1.5.1.dist-info /usr/lib/python3.10/site-packages/
Voila, the plug-in exists again (and my cron task successfully renews the certificate).
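To make sure the cron task won't surprise me again, certbot's dry-run mode exercises the full renewal (plugin included) without saving a certificate:

certbot renew --dry-run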
Did you know … Microsoft Teams now has navigation history
You ever navigate away from a discussion and realize you needed to go back — or not quite remember where you just posted that message? Teams now has a “Back” button — in the upper left-hand corner of the Teams client, you can click back and forth to navigate between the last 12 channels/chats you’ve visited.