Category: System Administration

Squid Custom Error

We’ve been having a challenge with Anya getting her school work completed. Part of the problem is the school’s own fault — they provide a site where kids are encouraged to read, but don’t provide any way to ensure this reading is done after classwork has been completed. But, even if that site didn’t exist … the Internet has all sorts of fun ways you can find to waste some time.

So her computer now routes through my proxy server. I’d set up a squid server so *I* could use the Internet unfettered whilst VPN’d in to work. It’s really annoying to get told you’re a naughty hacker every time you want to see some code example on StackOverflow!

While I didn’t really care about the default messages for my use (nor did I actually block anything for it to matter), I want Anya to be able to differentiate between a “technical problem” failure to load and a “you are not allowed to be using this site now” failure to load. So I customized the Squid error message for access denied. This can quickly be done by editing /usr/share/squid/errors/en-us/ERR_ACCESS_DENIED (you’ll need to make a backup of your version & may need to put it back when upgrading squid in the future).
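A minimal sketch of the swap (assuming the error-page path above; %U is squid’s macro for the requested URL):

# back up the stock error page first
cp /usr/share/squid/errors/en-us/ERR_ACCESS_DENIED /usr/share/squid/errors/en-us/ERR_ACCESS_DENIED.orig

# drop in a kid-readable message
cat > /usr/share/squid/errors/en-us/ERR_ACCESS_DENIED <<'EOF'
<html><head><title>Not Right Now</title></head><body>
<h1>You are not allowed to be using this site now</h1>
<p>Access to %U is blocked until schoolwork is done.</p>
</body></html>
EOF

# reload squid so the new page is picked up
systemctl reload squid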


Bash – Spaces, Quotes, and String Replacement

Had to figure out how to do string replacement (Scott wanted to convert WMA files to similarly named MP3 files) and pass a single parameter that has spaces into a shell script.

${original_string/thing_to_find/replacement} does string replacement. And "$@" is the parameter set with each quoted argument kept intact as its own word, so a single argument with spaces survives the trip into the script. This wouldn’t work if we were trying to pass in more than one parameter (well, it *would* … I’d just have to custom-handle parameter expansion in the script)
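A sketch of both pieces together (ffmpeg stands in as a hypothetical converter; the quoting is the point):

#!/bin/bash
# wma2mp3.sh: convert one WMA file whose name may contain spaces
INFILE="$@"                       # with a single quoted argument, this is just that argument
OUTFILE="${INFILE/.wma/.mp3}"     # string replacement: .wma becomes .mp3
ffmpeg -i "$INFILE" "$OUTFILE"

Invoke it with the file name quoted:

./wma2mp3.sh "01 - Some Song.wma"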


SSO In Apache HTTPD – OAuth2

PingID is another external authentication source that looks to be replacing ADFS at work in the not-too-distant future. Unfortunately, I’ve not been able to get anyone to set up the “other side” of this authentication method … so the documentation is untested. There is an Apache Integration Kit available from PingID (https://www.pingidentity.com/en/resources/downloads/pingfederate.html). Documentation for setup is located at https://docs.pingidentity.com/bundle/pingfederate-apache-linux-ik/page/kxu1563994990311.html

Alternately, you can use OAuth2 through Apache HTTPD to authenticate users against PingID. To set up OAuth, you’ll need the mod_auth_openidc module (this is also available from the RedHat dnf repository). You’ll also need the client ID and secret that make up the OAuth2 client credentials. The full set of configuration parameters used in /etc/httpd/conf.d/auth_openidc.conf (or added to individual site-httpd.conf files) can be found at https://github.com/zmartzone/mod_auth_openidc/blob/master/auth_openidc.conf

As I am not able to register to use PingID, I am using an alternate OAuth2 provider for authentication. The general idea should be the same for PingID – get the metadata URL, client ID, and secret added to the oidc configuration.

Setting up Google OAuth Client:

Register OAuth on Google Cloud Platform (https://console.cloud.google.com/) – Under “APIs & Services”, select “OAuth Consent Screen”. Build a testing app – you can use URLs that don’t go anywhere interesting, but if you want to publish the app for real usage, you’ll need real stuff.

Under “APIs & Services”, select “Credentials”. Select “Create Credentials” and select “OAuth Client ID”

Select the application type “Web application” and provide a name for the connection

You don’t need any authorized JS origins. Add the authorized redirect URI(s) appropriate for your host. In this case, the internal URI is my docker host on an off port (7443). The generally used URI is my reverse proxy server. I’ve gotten redirect URI mismatch errors when the authorized list didn’t include both the trailing-slash and no-trailing-slash forms of a URI, so register each one both ways. Click “Create” to complete the operation.
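For example, I registered each URI in both forms (these are my hosts; substitute your own):

https://docker.rushworth.us:7443/protected
https://docker.rushworth.us:7443/protected/
https://www.rushworth.us/protected
https://www.rushworth.us/protected/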

You’ll see a client ID and secret – stash those as we’ll need to drop them into the openidc config file. Click “OK” and we’re ready to set up the web server.

Setting Up Apache HTTPD to use mod_auth_openidc

Clone the mod_auth_openidc repo (https://github.com/zmartzone/mod_auth_openidc.git) – I made one change to the Dockerfile. I’ve seen general guidance that using ENV to set DEBIAN_FRONTEND to noninteractive is not ideal, so I replaced that line with the transient form of the directive:

ARG DEBIAN_FRONTEND=noninteractive

I also changed the index.php file to

RUN echo "<html><head><title>Sample OAUTH Site</title><head><body><?php print $_SERVER['OIDC_CLAIM_email'] ; ?><pre><?php print_r(array_map(\"htmlentities\", apache_request_headers())); ?></pre><a href=\"/protected/?logout=https%3A%2F%2Fwww.rushworth.us%2Floggedout.html\">Logout</a></body></html>" > /var/www/html/protected/index.php

Build an image:

docker build -t openidc:latest .

Create an openidc.conf file on your file system. We’ll bind this file into the container so our config is in place instead of the default one. In my example, I have created “/opt/openidc.conf”. File content included below (although you’ll need to use your client ID and secret and your hostname). I’ve added a few claims so we have access to the name and email address (email address is the logon ID)

Then run a container using the image. My sandbox is fronted by a reverse proxy, so the port used doesn’t have to be well known.

docker run --name openidc -p 7443:443 -v /opt/openidc.conf:/etc/apache2/conf-available/openidc.conf -it openidc /bin/bash -c "source /etc/apache2/envvars && valgrind --leak-check=full /usr/sbin/apache2 -X"

* In my case, the docker host is not publicly available. I’ve also added the following lines to the reverse proxy at www.rushworth.us

ProxyPass /protected https://docker.rushworth.us:7443/protected
ProxyPassReverse /protected https://docker.rushworth.us:7443/protected
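Because the back end is HTTPS, the proxy vhost also needs SSL proxying enabled. A minimal sketch, assuming mod_ssl and mod_proxy_http are already loaded:

SSLProxyEngine On
# If the docker host presents a self-signed cert, verification may need relaxing:
# SSLProxyVerify none
# SSLProxyCheckPeerName off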

Access https://www.rushworth.us/protected/index.php (I haven’t published my app for Google’s review, so it’s locked down to use by registered accounts only … at this time, that’s only my ID. I can register others too.) You’ll be bounced over to Google to provide authentication, then handed back to my web server.

We can then use the OIDC_CLAIM_email value ($_SERVER['OIDC_CLAIM_email'] in PHP) to continue in-application authorization steps (if needed).
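If web-server-level authorization is sufficient, mod_auth_openidc can also handle the check itself with a claim-based Require rule in place of the valid-user rule shown in the config below (the address here is hypothetical):

<Location /protected>
     AuthType openid-connect
     Require claim email:someone@gmail.com
</Location>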

openidc.conf content:

LogLevel auth_openidc:debug

LoadModule auth_openidc_module /usr/lib/apache2/modules/mod_auth_openidc.so

OIDCSSLValidateServer On

OIDCProviderMetadataURL https://accounts.google.com/.well-known/openid-configuration
OIDCClientID uuid-thing.apps.googleusercontent.com
OIDCClientSecret uuid-thingU4W

OIDCCryptoPassphrase S0m3S3cr3tPhrA53
OIDCRedirectURI https://www.rushworth.us/protected
OIDCAuthNHeader X-LJR-AuthedUser
OIDCScope "openid email profile"

<Location /protected>
     AuthType openid-connect
     Require valid-user
</Location>

OIDCOAuthSSLValidateServer On
OIDCOAuthRemoteUserClaim Username

SSO In Apache HTTPD — ADFS

Active Directory Federated Services (ADFS) can be used by servers inside or outside of the company network. This makes it an especially attractive authentication option for third-party companies, as no B2B connectivity is required just to authenticate the user base. Many third-party vendors are starting to support ADFS authentication in their out-of-the-box solutions (in which case they should be able to provide config documentation), but anything hosted on Apache HTTPD can be configured using these directions:

This configuration uses the https://github.com/UNINETT/mod_auth_mellon module — I’ve built this from the repo. Once mod_auth_mellon is installed, create a directory for the configuration:

mkdir /etc/httpd/mellon

Then cd into the directory and run the config script:

/usr/libexec/mod_auth_mellon/mellon_create_metadata.sh urn:samplesite:site.example.com "https://site.example.com/auth/endpoint/"


You will now have three files in the config directory – an XML file along with a cert/key pair. You’ll also need the FederationMetadata.xml from the IT group – it should be available from the ADFS server at the standard metadata path, https://<your ADFS server>/FederationMetadata/2007-06/FederationMetadata.xml.
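The generated file names are derived from the URN, with the separators swapped to underscores; for the example command above:

ls /etc/httpd/mellon
urn_samplesite_site.example.com.cert
urn_samplesite_site.example.com.key
urn_samplesite_site.example.com.xml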

Now configure the module – e.g. a file /etc/httpd/conf.d/20-mellon.conf – with the following:

MellonCacheSize 100
MellonLockFile /var/run/mod_auth_mellon.lock
MellonPostTTL 900
MellonPostSize 1073741824
MellonPostCount 100
MellonPostDirectory "/var/cache/mod_auth_mellon_postdata"

To authenticate users through the ADFS directory, add the following to your site config:

MellonEnable "auth"
Require valid-user
AuthType "Mellon"
MellonVariable "cookie" 
MellonSPPrivateKeyFile /etc/httpd/mellon/urn_samplesite_site.example.com.key
MellonSPCertFile /etc/httpd/mellon/urn_samplesite_site.example.com.cert
MellonSPMetadataFile /etc/httpd/mellon/urn_samplesite_site.example.com.xml
MellonIdPMetadataFile /etc/httpd/mellon/FederationMetadata.xml
MellonMergeEnvVars On ":"
MellonEndpointPath /auth/endpoint


Provide the XML file and certificate to the IT team that manages ADFS to configure the relying party trust.

Porkbun DDNS API

I’ve been working on a script that updates our host names in Porkbun, but the script had a problem with A records at the zone apex (example.com itself). Updating host.example.com worked fine, but example.com became example.com.example.com

Now, in a Bind zone, you just fully qualify the record by appending the implied root dot (i.e. instead of “example.com”, you use “example.com.”), but Porkbun didn’t understand a fully qualified record. You cannot say the name is null (or “”). You cannot say the name is “example.com” or “example.com.”

In what I hope is my final iteration of the script, I now identify cases where the name matches the zone and don’t include the name parameter in the JSON data. Otherwise I include the ‘name’ as the short hostname (i.e. the fully qualified hostname minus the zone name). This appears to be working properly, so (fingers crossed, knock on wood, and all that) our ‘stuff’ won’t go offline next time our IP address changes.
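A sketch of that logic, with the endpoint and payload based on my reading of Porkbun’s v3 JSON API (treat the exact URL and field names as assumptions and verify against the current API docs; APIKEY and SECRETKEY are assumed to be set elsewhere in the script):

#!/bin/bash
# Build the JSON for a Porkbun A record, omitting 'name' at the zone apex
FQDN="$1"       # e.g. host.example.com or example.com
ZONE="$2"       # e.g. example.com
NEWIP="$3"

if [ "$FQDN" = "$ZONE" ]; then
     # apex record: no name element at all
     JSONDATA="{\"apikey\": \"$APIKEY\", \"secretapikey\": \"$SECRETKEY\", \"type\": \"A\", \"content\": \"$NEWIP\"}"
else
     # otherwise, name is the short hostname: the FQDN minus the zone and joining dot
     SHORTNAME="${FQDN%.$ZONE}"
     JSONDATA="{\"apikey\": \"$APIKEY\", \"secretapikey\": \"$SECRETKEY\", \"type\": \"A\", \"content\": \"$NEWIP\", \"name\": \"$SHORTNAME\"}"
fi

curl -s -X POST -H "Content-Type: application/json" -d "$JSONDATA" \
     "https://porkbun.com/api/json/v3/dns/create/$ZONE"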

It’s Not A DNS Problem

I used to work at a company where everything was called an Exchange problem — not that Exchange 2000 didn’t have its share of problems (store.exe silent exit failures? Yes, that’s absolutely an Exchange problem) … but the majority of the time, the site had lost its connectivity back to the corporate data center. Or, when I’d see the network guys sprinting down the hallway as the first calls started to come in … the corporate data center had some sort of meltdown.

I’m reminded of this as I see people calling the Facebook outage a “DNS problem”. Facebook’s networks dropped out of BGP routing. That means there’s no route to their DNS server, so you get a resolution failure. It doesn’t mean there’s a DNS problem. Any more than it means there’s an IP or power problem — I’m sure it’s all working as designed and either someone screwed up a config change or someone didn’t screw up and was trying to drop them off the Internet.

Saw much the same thing when Egypt dropped off of the Internet back in 2011 — their routes were withdrawn from the routing tables. That’s an initiated process — maybe accidental, but it’s not the same as a bunch of devices losing power or a huge fiber cut.

And, when there’s no route you can use to get there … whether DNS, web servers, databases, etc. are working becomes moot.

Diffing two strings

Yes, I know md5sum has a “-c” option for checking the checksum in a file … but, if I was going to screw with a file, I’d have the good sense to edit the checksum file in the archive!


#!/bin/bash
# Verify a file against an md5 hash obtained from a trusted source
# e.g.: ./diffmd5.sh dead.letter 5f3748d9c653b78c9ee7559acd423652

STRFILE="$1"
STRCHECKMD5="$2"

# quote the file name so paths with spaces work
STRMD5=$(md5sum "$STRFILE")

# md5sum separates the hash and file name with two spaces; match that format
diff -s <( printf '%s\n' "$STRCHECKMD5  $STRFILE" ) <( printf '%s\n' "$STRMD5" )

It’ll either output the file hash and the hash it was supposed to match (a problem) or indicate the files are identical (a good thing).
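Usage (the script name is whatever you saved it as; the hash should come from a trusted source rather than the archive’s own checksum file):

./diffmd5.sh dead.letter 5f3748d9c653b78c9ee7559acd423652
Files /dev/fd/63 and /dev/fd/62 are identical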

DNF — What Provides This File?

“Dependency hell” used to be a big problem — you’d download one package, attempt to install it, and find out you needed three other packages. Download one of them, attempt to install it, and learn about five other packages. Fifty-seven packages later, you forgot what you were trying to install in the first place and went home. Or, I suppose, you managed to install that first package and actually use it. The advent of repo-based deployments — where dependencies can be resolved and automatically downloaded — has mostly eliminated dependency hell. But, occasionally, you’ll have a manual install that says “oh, I cannot complete. I need libgdkglext-x11-1.0.so.0 or libminizip.so.1” … and, if there’s a package named libgdkglext-x11 or libminizip … you’re good. There’s not. Fortunately, you can use “dnf provides” to search for a package that provides a specific file — thus learning that you need the gtkglext-libs and minizip-compat packages to resolve your dependencies.
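For example (the leading */ glob lets dnf match the library anywhere on the file system; package names and versions will vary by distro and repo):

dnf provides '*/libgdkglext-x11-1.0.so.0'
dnf provides '*/libminizip.so.1'

Each query reports the package that supplies the file (gtkglext-libs and minizip-compat in this case).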