Category: System Administration

Quick OpenHAB2 Apt Install In Docker Ubuntu Container

# Set up docker image — exposes OpenHAB web on your port 8080
docker run -p 8080:8080 -dit --name UbuntuOH2 ubuntu:latest

# Shell into the container
docker exec -it UbuntuOH2 /bin/bash

# From within the container, run:
apt update
apt install sudo
apt install vim
apt install wget
apt install gnupg
apt install apt-transport-https

# Repo for Zulu Java
echo 'deb http://repos.azulsystems.com/debian stable main' > /etc/apt/sources.list.d/zulu.list

# Repo for OpenHAB2 stable build
wget -qO - 'https://bintray.com/user/downloadSubjectPublicKey?username=openhab' | apt-key add -
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 0xB1998361219BD9C9
echo 'deb https://dl.bintray.com/openhab/apt-repo2 stable main' | tee /etc/apt/sources.list.d/openhab2.list

apt-get update
apt-get install zulu-8
apt-get install openhab2
apt-get install openhab2-addons

/etc/init.d/openhab2 start

# OpenHAB will be accessible on your IP at 8080. E.g. http://10.10.10.123:8080.
# Stop and start the container later with: docker stop UbuntuOH2 / docker start UbuntuOH2

WordPress 5 AutoSave Issue

I upgraded to WordPress 5 back when it was still in preview, and wasn’t shocked to find some issues. When I would create a post, WordPress would go into an auto-saving loop. Now I like the idea of background saves to prevent data loss if my browser falls over … but the instant one auto-save completed, another would kick off, and the editor was basically unusable (copy content / refresh page / paste content / cross fingers that the auto-save loop didn’t happen that time). I tried setting the auto-save interval in the config file, but that had no impact.

With the 5.0.0 release, I was dismayed to find the problem lingering, but x.0.0 releases usually have some quirks. Two point releases later, though, and with a lot of frustration that auto-save has caused waaaay more data loss in six months than years’ worth of browser glitches ever managed, I started researching the problem to find a solution other than “wait for the next release to fix it”.

I came across a GitHub issue that not only reports the same problem but includes a bit of code to add to the theme’s functions.php file:

add_filter( 'block_editor_settings', 'jp_block_editor_settings', 10, 2 );

function jp_block_editor_settings( $editor_settings, $post ) {
	$editor_settings['autosaveInterval'] = 2000; // interval in seconds (default is 10)

	return $editor_settings;
}

And my WordPress is usable again!

Preventing Unauthenticated Binds in Active Directory

There is finally a Windows server-side solution to prevent “unauthenticated bind” requests (detailed in LDAP RFC 4513 section 5.1.2 with a note regarding the subsequent security considerations in section 6.3.1) from allowing unauthorized users to successfully authenticate.

It has always been possible to handle this in code (i.e. verify that the username and password are both non-null prior to communicating with the directory server), and that’s my personal preference, since a developer cannot predict how individual directory services will be configured.
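As a rough illustration, here is a minimal sketch of that guard using Python’s ldap3 library (the host name is a placeholder):

from ldap3 import Server, Connection

def authenticate(user_dn, password):
    # An empty password would be sent as an unauthenticated bind, which
    # the server may treat as a successful anonymous bind (RFC 4513
    # section 5.1.2) -- reject it before ever talking to the directory.
    if not user_dn or not password:
        return False
    conn = Connection(Server('ldap.example.com'), user=user_dn, password=password)
    result = conn.bind()
    conn.unbind()
    return result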

But for the third-party apps that don’t prevent unauthenticated binds, Windows Server 2019 added a setting to disallow unauthenticated bind operations against Active Directory: in your Configuration partition, open the properties of CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration, find the msDS-Other-Settings attribute, and add a new entry, DenyUnauthenticatedBind=1.
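If you would rather script the change than click through ADSI Edit, an ldap3 sketch along these lines should work (the host, credentials, and the DC components of the DN are placeholders for your forest):

from ldap3 import Server, Connection, MODIFY_ADD

dn = ('CN=Directory Service,CN=Windows NT,CN=Services,'
      'CN=Configuration,DC=example,DC=com')  # use your forest's DC components

conn = Connection(Server('dc01.example.com'), user='EXAMPLE\\ldapadmin',
                  password='placeholder', auto_bind=True)
conn.modify(dn, {'msDS-Other-Settings': [(MODIFY_ADD, ['DenyUnauthenticatedBind=1'])]})
print(conn.result)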

Microsoft Graph — Application Registration

The application I am registering will pull report data from Graph for use within existing company systems. I will be assigning application-level permissions and no callback URL is needed.

To register an application, log into https://portal.azure.com and select “Azure Active Directory” from the left-hand navigation column. Then select “App Registrations (preview)”.

Click on “New registration”

Provide a descriptive name for the application — tenant managers can see all of the registered applications and it’s a lot easier if you ask them to approve access for “Specific Application Name For Engineering” than “LJR Test”.

The application will be created and you will be brought to the app overview. Select “API permissions” then click the “Microsoft Graph” hyperlink.

Click on “Application permissions”

And find the permissions you need. For the script I want to run, I need Reports.Read.All. Click “Update permissions” to save your changes.

If you are a tenant admin, you can approve your own rights. Otherwise, you’ll need to contact a tenant admin and have them approve the permissions you have requested. Once the permissions have been acknowledged, you’re ready to go.

You will need the app ID and a secret for use within your code. The application ID is listed on the application “Overview”.

To create a secret, select “Certificates & secrets” then click “New client secret”. The secret value is displayed only once, so copy it into your code now.
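For a quick sanity check of the registration, here is a rough Python sketch (using the requests library; the tenant ID, application ID, and secret are placeholders for the values gathered above) that acquires an app-only token and pulls one of the usage reports:

import requests

TENANT = 'your-tenant-id'        # Directory (tenant) ID from the app Overview
APP_ID = 'your-application-id'   # Application (client) ID from the app Overview
SECRET = 'your-client-secret'    # value from Certificates & secrets

# Client credentials grant -- application permissions, no user context
resp = requests.post(
    'https://login.microsoftonline.com/%s/oauth2/v2.0/token' % TENANT,
    data={
        'grant_type': 'client_credentials',
        'client_id': APP_ID,
        'client_secret': SECRET,
        'scope': 'https://graph.microsoft.com/.default',
    },
)
token = resp.json()['access_token']

# Reports.Read.All allows pulling usage reports; data comes back as CSV
report = requests.get(
    "https://graph.microsoft.com/v1.0/reports/getOffice365ActiveUserDetail(period='D7')",
    headers={'Authorization': 'Bearer ' + token},
)
print(report.text)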

Splunk Teams Connector – Followup

We managed to use the stock Teams webhook app in Splunk; we just needed to modify the search being used. Adding “| table” with the specific fields to include (something like “| table host sourcetype _raw”) avoids having to filter the list data within the Python code.

There is still a tweak to the code that I prefer: the facts list is built by iterating over a dictionary, so its order isn’t guaranteed, and I’d like to be able to look at the same place in the Teams post to see a particular field. Sorting the facts array when it is put into the post body ensures the fields are in the same order each time.

        sections=[
            {"activityTitle": settings.get("search_name") + " was triggered"},
            {
                "title": "Details",
                # sorted() keeps the field order consistent from post to post
                "facts": sorted(facts)
            }
        ],

And I’ve got a Teams post from Splunk with a generic script; the desired fields are specified within the search, so they can easily be changed.

Splunk – Posting to Microsoft Teams via Webhooks

Using either the default webhook action or the Teams-specific webhook, Splunk searches can post data into Microsoft Teams. First, you need a webhook URL for your Teams channel: on the hamburger menu next to the channel, select “Connectors”, select “Incoming Webhook”, provide a name for the webhook, and copy the webhook URL.

If you intend to use the generic webhook app, there is no need to install the Teams-specific one. The Teams-specific app gives you prettier output and a “View in Splunk” button. To install it, download the app tgz, go into “Manage Apps”, and select “Install app from file”.

Click ‘Browse…’ and find the tgz you downloaded. Click ‘Upload’ to install the app to Splunk.

Now create a search for which you want to post data into your Teams channel. Click “Save As” and select “Alert”

Provide a title for the alert — you can use real-time or scheduled alerts. Once you’ve got the alert sorted, select “Add Actions” and select the Teams webhook action (or the generic webhook action if you intend to use that one). Paste in the URL from your Teams channel webhook and click “Save”.

You’ll see a confirmation that the alert has been saved. Close this.

Now you would think you’d be ready to use it … but wait. Neither one works out of the box. In the Splunk log, you see error 400 “Bad data” reported.
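For context, the smallest payload Teams will accept looks something like this (a sketch using the requests library; the URL is a placeholder for your channel’s webhook):

import requests

WEBHOOK_URL = 'https://outlook.office.com/webhook/...'  # your channel's webhook URL

# Teams rejects posts that have neither a summary/title nor text --
# which is exactly what the stock Splunk payloads trigger.
requests.post(WEBHOOK_URL, json={
    'summary': 'Splunk alert',
    'text': 'Test post from Splunk',
})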

For the default webhook app, edit the Python script (/opt/splunk/etc/apps/alert_webhook/bin/webhook.py in my case). Find the section where the JSON body is built. Teams requires a summary or title within the POST data. I just added a static summary, but you could do something fancier.

        body = OrderedDict(
            sid=settings.get('sid'),
            summary='LJRWebhook',  # static summary; Teams requires a summary or title
            search_name=settings.get('search_name'),
            app=settings.get('app'),
            owner=settings.get('owner'),
            results_link=settings.get('results_link'),
            result=settings.get('result')
        )

For the Teams-specific webhook, edit the Python script (/opt/splunk/etc/apps/alert_msteams/bin/teams.py in my case) and find the section where the facts list is populated. There’s too much data being sent through. There’s probably a way to filter it out in Splunk, but I don’t know how 🙂

The right way to do it is to select the most important items from settings.get('result').items() that you want displayed within Teams and populate facts with those elements. I used a new list, strWantedKeys, to determine which keys should be added to the facts list. The quick/ugly way is to just take the first n items from the result items (settings.get('result').items()[:7] gets seven; eight produced a “bad payload received by generic incoming webhook” error from Teams).

try:
    settings = json.loads(sys.stdin.read())
    print >> sys.stderr, "DEBUG Settings: %s" % settings
    url = settings['configuration'].get('url')
    facts = []
    strWantedKeys = ['sourcetype', '_raw', 'host', 'source']
    # only pass along the keys we actually want displayed in Teams
    for key, value in settings.get('result').items():
        if key in strWantedKeys:
            facts.append({"name": key, "value": value})
    body = OrderedDict(

For reference, the original facts list was:

    "facts": [{
        "name": "index",
        "value": "history"
    }, {
        "name": "_raw",
        "value": "Test push to teams 555"
    }, {
        "name": "_eventtype_color",
        "value": ""
    }, {
        "name": "host",
        "value": "10.10.15.134:8088"
    }, {
        "name": "source",
        "value": "http:Sendmail testing"
    }, {
        "name": "_si",
        "value": ["49cgc3e5e52e", "history"]
    }, {
        "name": "sourcetype",
        "value": "mysourcetype"
    }, {
        "name": "_indextime",
        "value": "1544554125"
    }, {
        "name": "punct",
        "value": "___"
    }, {
        "name": "linecount",
        "value": ""
    }, {
        "name": "_time",
        "value": "1544554125"
    }, {
        "name": "eventtype",
        "value": ""
    }, {
        "name": "_sourcetype",
        "value": "mysourcetype"
    }, {
        "name": "_kv",
        "value": "1"
    }, {
        "name": "_serial",
        "value": "15"
    }, {
        "name": "_confstr",
        "value": "source::http:Sendmail testing|host::10.10.15.134:8088|mysourcetype"
    }, {
        "name": "splunk_server",
        "value": ""
    }]

Now generate a message that matches your search — you’ll see a post created in your Teams channel.

Active Directory – Prevent Renaming and Moving OU

On my home domain, I’ve always added an access control entry to prevent this from happening … it’s really easy to double-click and end up in rename mode, or to drag and drop an OU into a new location. I’ve always considered this to be a bit of paranoia on my part; it’s not like anyone’s routinely screwing up entire OUs.

Until they are. We’ve had two significant outages at work caused by unintentional changes to Active Directory organizational unit names. Partially to avoid widespread outages due to something that’s fundamentally silly, and partially because any widespread outage requires a root cause analysis that includes some action you’ve taken to prevent it from happening again, we’re going to implement the same permission tweak at work now.

Since it’s not just me who has wanted/needed this permission, I figured I would publish how I’ve got the permissions set:

No idea why the GUI shows name and Name instead of rdn and CN respectively. But I’ve denied write for adminDisplayName, rdn, and cn.

Just like the “prevent accidental deletion” checkbox is a bit of a pain when you want to delete an OU, this is inconvenient if you want to rename or move OUs: first remove the permission, then make your change, then re-apply the permission. A slight inconvenience, but having the entire company failing LDAP authentications (because the base DN no longer finds the users) is a massive inconvenience too.

OUD 11g Failure

Ran out of disk space on the OUD partition (don’t let people turn on debug logging!). Easily fixed: stop the service, remove the big log file, start again. Unfortunately, that “start again” wasn’t as easy as it could have been. Server startup fails, saying the cryptomanager failed to publish the instance-key-pair in ADS … which, per Oracle, means you get to recreate the instance. Honestly, this isn’t a horrible process since I’ve got a backup of the directory, but I’d rather not spend the next hour messing around with the server.

The underlying problem is that the admin partition didn’t get dumped out to its storage file (full disk), so admin-backend.ldif was a 0-byte file. Now, had the directory been running when I noticed this … I could have dumped the cn=admin data partition to another volume and copied the file in once I cleared up disk space. But I have a file-level backup … and it was easy enough to pull the file from last night back. Voila, one server online in a couple of minutes. And a good reminder for the future that shutting down with a full disk is no good. I probably want to find something to free up a couple megs of space prior to shutting down the server. The extra-cautious option would be the MS Exchange approach of creating a few 5-meg files to ensure there’s space that can be freed up, as sketched below.
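A quick sketch of that placeholder-file idea, if you want it scripted (the path, file count, and size are arbitrary choices):

import os

RESERVE_DIR = '/var/oud/reserve'   # hypothetical location on the OUD volume
os.makedirs(RESERVE_DIR, exist_ok=True)
for i in range(3):
    # a few 5-meg files of zeroes that can be deleted to free space in a pinch
    with open(os.path.join(RESERVE_DIR, 'placeholder%d.bin' % i), 'wb') as f:
        f.write(b'\0' * 5 * 1024 * 1024)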

Added 06 Feb 2020: One of my colleagues encountered this problem again today, and it turns out you can copy the admin-backend.ldif from another server too. It may not be 100% (he cannot log into the server through the admin console), but he’s got a directory being served and no users complaining about an outage. It can get sorted properly later.

Cloudy ROI

I often have trouble seeing the value behind cloud offerings, since most cloud migrations I’ve seen have done a 1:1 replacement of locally hosted servers with cloud hosted servers. The first two years, the cloud hosted servers are cheaper (although that’s some dodgy accounting, as we’re assuming no workforce changes as a result of outsourcing servers, and depreciation of the owned asset is not considered). The third year, though, is a break-even point. The General Depreciation System considers computers a five-year property, but there are accounting practices to handle fully depreciated assets: the asset remains on the balance sheet at cost, and its accumulated depreciation is listed as a contra asset item. When you *do* stop using the asset, the accumulated depreciation account is debited for the full depreciated amount and the fixed asset account is credited with its full cost. The point being, I can continue using a computer asset after five years. Cloud hosted servers make financial sense for a company that tends toward “bleeding edge” implementations (buying the new whatever next year), but for a company that buys a server or application and then uses it for a decade … you’re simply turning a capital expense into a greater ongoing operating expense. Which is good this year, but bad in the long term.
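To make that break-even point concrete, here is a tiny illustrative calculation; every dollar figure below is hypothetical:

# Illustrative arithmetic only -- all dollar figures are hypothetical.
# Compare the cumulative cost of an owned server (capital expense, used
# for ten years) against a 1:1 cloud replacement (operating expense).
OWNED_PURCHASE = 9000      # up-front server cost
OWNED_ANNUAL_OPEX = 500    # power, maintenance, etc. per year
CLOUD_ANNUAL = 3600        # cloud hosting per year

for year in range(1, 11):
    owned = OWNED_PURCHASE + OWNED_ANNUAL_OPEX * year
    cloud = CLOUD_ANNUAL * year
    print("year %2d: owned $%6d   cloud $%6d" % (year, owned, cloud))

# With these numbers the cloud is cheaper through year two, the owned
# server wins from year three onward, and the gap keeps widening.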

Now, for a smaller company that doesn’t have a dedicated IT department, and that doesn’t actually need the capacity provided by a single modern server … externally hosting resources is financially beneficial. A web site, e-mail, chat-based customer service? All make sense to host externally. You don’t have to own half a dozen servers, make sure they’re backed up, etc. But I don’t see the cost benefit at the enterprise level unless (1) you want to build data centers close to customers without the expense of actually building a data center, or (2) you plan a substantial workforce reduction.

For the first: opening your services to customers in the EU, say … getting a data center set up in Germany isn’t a quick proposition. As your business grows, it may become “worth it” to invest money into a European data center, but cloud-hosted computers from some major provider who already has a presence there provide quick time-to-market and minimize up-front cost. Some countries may have a laborious process for prospective businesses too, a process the cloud hosting provider has already navigated.

For the second: if someone else is backing up, patching, and monitoring systems, you don’t need people performing those duties. Since a cloud-hosting provider is able to leverage those employees across far more servers than you’d ever need, that’s a place where scale produces a cost benefit. But, strangely, I don’t see companies reducing IT operations staff after moving to the cloud. This may be a long-term goal to ensure the enthusiasm of staff for the move (it’s not particularly enticing to put six months of work into a project that ensures my job goes away), or this may just be a thing: move to the cloud and still have twenty ops employees.