Category: System Administration

Using File System As A Git Repository

I’ve used GitLab for quite some time, and as a full featured CI/CD platform that also provides git functionality … it’s awesome. But it’s also serious overkill for someone who wants to coordinate code with another developer or two. Or just keep history for their code. Or a backup. To accomplish this, all you need is drive space. If you’ve got a file server, any folder on the share can be a git repository.

On the server, create a git repository and add something to it.

Z:\Temp\test>git init
Initialized empty Git repository in Z:/Temp/test/.git/

Z:\Temp\test>notepad testfile.txt

Z:\Temp\test>git add testfile.txt

Z:\Temp\test>git commit -m "Initial file upload"
[master (root-commit) 9a3ebe7] Initial file upload
1 file changed, 1 insertion(+)
create mode 100644 testfile.txt

Then on your client, either clone the repo from the drive path

C:\Users\lisa>git clone file://z:/temp/test
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3/3), done.

Or from the UNC path

C:\Users\lisa>git clone file://server01.example.com/data/temp/test
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (3/3), done.

I prefer to use the UNC paths – if my drive letter fails to map for some reason, the UNC path is still available.

If you’ve got pre-existing code, there’s a bit of a different process. On the server, create an empty folder and use “git init” to initialize the empty repo. On the client where the code exists, run:

git init
git add *
git commit -m "Initial code upload"
git remote add origin file://server01.example.com/data/temp/test
git push origin master
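
One caveat: if the server-side repo was created with a plain "git init", git will by default refuse the push to its currently checked-out branch ("refusing to update checked out branch"). Creating the server-side folder as a bare repository avoids this, since a bare repo has no working tree for the push to conflict with (the folder name below is just a placeholder):

Z:\Temp\newproject>git init --bare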


Finding Disabled Accounts In Active Directory

When using Active Directory (AD) as a source of user data, it’s useful to filter out disabled accounts. Unfortunately, AD has a lot of different security-related settings glommed together in the userAccountControl attribute, which means there’s no single attribute/value combination you can use to ignore disabled accounts.

The decimal value you see for userAccountControl isn’t terribly useful on its own, but display it in binary and each bit position has a meaning – the decimal value is just that number with a bunch of bits set. Numbering the bits from left to right (most significant bit first), here is what each one means.

Bit # Meaning
0 Unused – must be 0
1 Unused – must be 0
2 Unused – must be 0
3 Unused – must be 0
4 Unused – must be 0
5 ADS_UF_PARTIAL_SECRETS_ACCOUNT
6 ADS_UF_NO_AUTH_DATA_REQUIRED
7 ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION
8 ADS_UF_PASSWORD_EXPIRED
9 ADS_UF_DONT_REQUIRE_PREAUTH
10 ADS_UF_USE_DES_KEY_ONLY
11 ADS_UF_NOT_DELEGATED
12 ADS_UF_TRUSTED_FOR_DELEGATION
13 ADS_UF_SMARTCARD_REQUIRED
14 ADS_UF_MNS_LOGON_ACCOUNT
15 ADS_UF_DONT_EXPIRE_PASSWD
16 Unused – must be 0
17 Unused – must be 0
18 ADS_UF_SERVER_TRUST_ACCOUNT
19 ADS_UF_WORKSTATION_TRUST_ACCOUNT
20 ADS_UF_INTERDOMAIN_TRUST_ACCOUNT
21 Unused – must be 0
22 ADS_UF_NORMAL_ACCOUNT
23 ADS_UF_TEMP_DUPLICATE_ACCOUNT
24 ADS_UF_ENCRYPTED_TEXT_PASSWORD_ALLOWED
25 ADS_UF_PASSWD_CANT_CHANGE
26 ADS_UF_PASSWD_NOTREQD
27 ADS_UF_LOCKOUT
28 ADS_UF_HOMEDIR_REQUIRED
29 Unused – must be 0
30 ADS_UF_ACCOUNT_DISABLE
31 ADS_UF_SCRIPT

Bit #30 indicates whether the account is disabled — 1 if the account is disabled, 0 if the account is enabled. The simple and direct approach is to “and” the attribute value with 0b10 (decimal 2) to extract just the bit we care about. When the and operation returns 0, the account is enabled. When it returns 2 (0b10), the account is disabled.
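
If you are pulling the data over LDAP, the bitwise AND can be done server-side with the LDAP_MATCHING_RULE_BIT_AND matching rule (1.2.840.113556.1.4.803). A quick ldapsearch sketch, where the server, bind account, and search base are placeholders:

# Return only enabled user accounts by excluding anything with the disable bit (2) set
ldapsearch -H ldaps://dc.example.com -D "binduser@example.com" -W -b "dc=example,dc=com" \
  "(&(objectCategory=person)(objectClass=user)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))" sAMAccountName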

A list of userAccountControl values and the corresponding meaning:

userAccountControl Value Meaning
1 Logon script executes
2 Account Disabled
8 Home Directory Required
16 Lockout
32 Password Not Required
64 User cannot change password
128 Encrypted text password allowed
256 Temporary Duplicate Account
512 Normal active account
514 Normal disabled account
544 Password not required, enabled account
546 Password not required, disabled account
2048 Inter-domain trust account
4096 Workstation trust account
8192 Server trust account
65536 No password expiry
66048 Password never expires, enabled account
66050 Password never expires, disabled account
66080 Password never expires and is not required, enabled account
66082 Password never expires and is not required, disabled account
131072 MNS Login account
262144 Smartcard required
262656 Smartcard required, enabled account
262658 Smartcard required, disabled account
262688 Enabled account, password not required, smartcard required
262690 Disabled account, password not required, smartcard required
328192 Enabled account, password doesn’t expire, smartcard required
328194 Disabled account, password doesn’t expire, smartcard required
328224 Enabled account, password doesn’t expire, password not required, smartcard required
328226 Disabled account, password doesn’t expire, password not required, smartcard required
524288 Trusted for delegation
532480 Domain controller
1048576 Not delegated
2097152 Use DES key only
4194304 Don’t require pre-authorization
8388608 Password expired
16777216 Trusted to auth for delegation
67108864 Partial secrets account
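
Composite values are just the sum of the individual flags, so any userAccountControl value can be decomposed with plain arithmetic. A small bash sketch using the flag values from the table above:

# Decompose a userAccountControl value into its component flags
value=66050
for flag in 1 2 8 16 32 64 128 256 512 2048 4096 8192 65536 131072 262144 524288 1048576 2097152 4194304 8388608 16777216 67108864; do
  (( value & flag )) && echo "flag $flag is set"
done
# 66050 = 65536 (password never expires) + 512 (normal account) + 2 (disabled)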


Upgrading Oracle SQL Developer

I’ve been using an Oracle database more in my new position … which means I’ve got the Oracle SQL Developer tool installed on my computer. My first upgrade became available yesterday … and it didn’t work. Not that it threw an error – double-clicking the executable just did nothing. It silently exited.

Turns out there’s something in appdata that needs to be cleared. I don’t run multiple versions of SQL Developer, so I could just blow away “%userprofile%\appdata\roaming\SQL Developer” and “%userprofile%\appdata\roaming\sqldeveloper” to clear whatever needs to be cleared. Click the icon and the program finally runs.
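
If you would rather script the cleanup than click through Explorer, the same two folders can be removed from a command prompt (close SQL Developer first):

rd /s /q "%userprofile%\appdata\roaming\SQL Developer"
rd /s /q "%userprofile%\appdata\roaming\sqldeveloper"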

Setting up gitlab-runner

CLI on the GitLab server:

# Set up the GitLab Repo
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh | bash

# Install package
yum install gitlab-runner

# verify runner is executable
ls -l $(which gitlab-runner)
# If needed, flag it executable – shouldn’t be a problem with RPM installations, but it’s been a problem for me with manual installs
#chmod ugo+x /usr/local/bin/gitlab-runner

# Register a runner
gitlab-runner register

# use URL & token from http://<GITLABSERVER>/admin/runners
# add tags, if you want to use tags to assign runner
# executor: shell (or docker, if you want to use docker executors; the shell executor is the simplest case, so I am starting there)

# start the runner
gitlab-runner start
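
If you would rather script the registration than answer the interactive prompts, the same values can be passed as flags (the URL and token below are placeholders from the admin runners page):

# Non-interactive registration
gitlab-runner register --non-interactive \
  --url "http://<GITLABSERVER>/" \
  --registration-token "RUNNER_TOKEN" \
  --executor "shell" \
  --description "shell-runner"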

On the GitLab Web GUI:

Admin section => Overview => Runners. Click pencil to edit the runner and uncheck “Lock to current projects”, and (unless you want to use tagging) check “Run untagged jobs”

** I was getting an error in every pipeline job saying the git command was not found.

For most other commands, you can append to the path in the before_script section of your .gitlab-ci.yml

before_script:
- export PATH=$PATH:/opt/whatever/odd/path/bin

But that doesn’t work in this case because we’re not getting that far: the bootstrap “stuff” cannot fetch the project to see the before_script. Git, on my system, was part of the GitLab package, so I just created a symlink into a “normal” binary location:

root@gitlab:~# which git
/opt/gitlab/embedded/bin/git
root@gitlab:~# ln -s /opt/gitlab/embedded/bin/git /usr/bin/git
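
For reference, a minimal .gitlab-ci.yml along these lines (hypothetical example) is enough to verify the runner picks up and executes jobs:

stages:
  - test

smoke_test:
  stage: test
  script:
    - echo "runner is working"
    - git --version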

And we’ve got successful test execution.

Software Flow Control and vim

In the early 90’s, one of the things I liked about Microsoft’s ecosystem was that they developed a standard for keyboard shortcuts. In most applications, developed by Microsoft or not, you could hit ctrl-p to print or ctrl-x to exit. Or ctrl-s to save. It’s quite convenient when I’m using Windows applications, but hitting ctrl-s to save without really thinking about it hangs vim. Hangs like “go into another shell and kill vim & that ssh session”. This is because ctrl-s, in Linux, means XOFF — the software flow control command that means “hi, I’m a thing from 1968 and my buffer is getting full. chill out for a bit and let me catch up”. Recovery is simple enough, send XON — “hi, that thing from 1968 again, and I’m all caught up. send me some more stuff”. That’s ctrl-q.

But there aren’t many slow anythings involved in computing these days, which means XON/XOFF isn’t the most useful of features for most people (* if you’ve got real serial devices attached … you may not be “most people” here). Instead of remembering that ctrl-q gets vim back without killing it, just disable START/STOP flow control. The thing is, it’s not really vim that’s using flow control — it’s the terminal emulator — so the “fix” isn’t something you’ll have to do in vim. Add the following to your ~/.bashrc or ~/.bash_profile (or globally in something like /etc/profile.d/disableFlowControl.sh):

# Disable XON/XOFF flow control so ctrl-s doesn’t hang vim
stty -ixon

You can add -ixoff to stop ctrl-q from meaning XON too … but I don’t bother, since “start sending me data” doesn’t seem to hang anything.
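
To check whether flow control is currently enabled for your terminal, look for “ixon” (enabled) versus “-ixon” (disabled) in the stty output:

# Show the current terminal settings, including the flow control flags
stty -a | grep ixon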

Generating a keytab file without domain admin permissions

Most of the application owners I encountered wanted someone online with them when they had to change their Kerberos service principal password. Not because they really needed me there to generate the keytab file, but “just in case”. A warm fuzzy feeling, good thoughts being sent their way. Whatever. I was up at dark-o’clock, so I’d generate the keytab the right way and we’d all be asleep in twenty minutes. What’s the wrong way? Well, in a stand-alone AD … that’s really just mapping the UPN to the wrong thing or failing to choose the encryption type wisely. But with AD accounts managed by an identity management platform, and a notification package registered on the DCs to update said identity management platform when passwords were changed? I joined a lot of emergency calls either at 7AM following a keytab update or half an hour after the change completed. And 7AM was only because the app didn’t happen to have any 3rd shift users.

Keytab files have a key version number (kvno). Generate a keytab and set the account password, and you’ve got a file with kvno 5. Except the IDM platform picks up the password change, re-syncs the managed account, and the actual AD object’s msDS-KeyVersionNumber becomes 6. And auth on your site falls over about half an hour after you complete your change (replication time!). So what’s the right way? Don’t make changes to the account. If you’re changing the password, change the password. And then generate a keytab.


I’ve created a sample account, ljrtest, used setspn to set an SPN value for my lisa.sandbox.rushworth.us site, and configured the account to support AES 128 and 256 bit encryption.

To generate a keytab file without updating the UPN or attempting to set the account password, use:

ktpass /out ljrtest.keytab /princ HTTP/lisa.sandbox.rushworth.us@rushworth.us -SetUPN /mapuser ljrtest /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL /pass DevNull -SetPass /target dc.rushworth.us

KTPASS is part of the RSAT utilities — on Win10 with the Oct 2018 update (or newer), it is a “Feature on Demand” and can be added through “Apps & Features” by clicking “Optional features” and selecting the Active Directory (AD DS) RSAT tools.

There are a few other utilities available — ktab from the JDK or ktutil on Linux — if you cannot install the RSAT pack.
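
Whichever tool you use, it’s worth verifying the keytab before handing it over. On a Linux box with MIT Kerberos installed, klist will list the entries; the kvno and encryption types shown should match what you expect for the account:

# List keytab entries with timestamps and encryption types
klist -k -t -e ljrtest.keytab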

Docker Desktop for Windows – Bind Mounts

I’ve been trying to set up a Docker container running an older CentOS, Apache, and PHP version as a sandbox for work. This would allow me to update code on my local computer, test changes, and then pull the changes to the development server for UAT testing. Setting up the base container was easy enough — installed a VM, tar’d off the system, and imported it as a Docker image. There’s a lot of optimization that could/should be done, but I was aiming for proof of concept at this stage.

I am using bind mounts for the website configuration and code — the website conf file in conf.d, the SSL certificates, and the vhtml folder which houses the web code. This means I can tweak the site config & code in my IDE, reload Apache in Docker, and validate my changes. It worked great until I connected to the company VPN. Attempting to access the mounted data just hangs. Nothing. Drop the VPN, and the files are there again.
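
For context, the container gets started with those three bind mounts; something along these lines (the image name and host paths are placeholders, and the cert mount matches the /etc/httpd/certs entry in the mount output further down):

docker run -d --name websandbox -p 80:80 -p 443:443 ^
    -v D:\sandbox\conf.d:/etc/httpd/conf.d ^
    -v D:\sandbox\certs:/etc/httpd/certs ^
    -v D:\sandbox\vhtml:/var/www/vhtml ^
    centos-sandbox:latest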

There are two problems. First, the default VPN configuration does not allow access to local network resources – and, it seems, the Docker NAT counts as a local network resource. We use Cisco AnyConnect. In the settings, I checked “Allow local (LAN) access when using VPN (if configured)”. Note the “if configured” – the server-side settings need to allow use of local resources when connected via VPN. Fortunately, people with WiFi printers complained about having to disconnect the VPN every time they wanted to print something, and accessing local resources is permitted in our profile.

Unfortunately, I still couldn’t access files on my mount points. Docker Desktop shares out my drive, and the Docker VM mounts the CIFS share – with my domain credentials. An Active Directory domain which is most certainly not registered in the VPN DNS servers.

[root@5542506m1a5e /]# mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/QMCCTMGPBHQFW66ARPWHSQMWQL:/var/lib/docker/overlay2/l/IQ2YIH47ZXTN55PGH3BWUKFPTT,upperdir=/var/lib/docker/overlay2/d072c94532976a4196174751c57359139501739001e7b9d50de59041c768a307/diff,workdir=/var/lib/docker/overlay2/d072c94532976a4196174751c57359139501739001e7b9d50de59041c768a307/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
...
//10.0.75.1/D on /etc/httpd/certs type cifs (rw,relatime,vers=3.02,sec=ntlmsspi,cache=strict,username=myuid,domain=mydomain,uid=0,noforceuid,gid=0,noforcegid,addr=10.0.75.1,file_mode=0755,dir_mode=0777,iocharset=utf8,nounix,serverino,mapposix,nobrl,mfsymlinks,noperm,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
...
tmpfs on /sys/firmware type tmpfs (ro,relatime)

To use the share when connected via the VPN, I needed to use the credentials of a local account instead. Beyond creating a local administrator-level account, you may need to add read/write permissions for that new account on your %userprofile% directory — inheritance is generally disabled there, and only the individual user has access to the folder.
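
A sketch of the local account setup from an elevated command prompt (the account name and password are placeholders; the icacls grant covers the profile-directory permissions mentioned above):

:: Create a local account and make it an administrator
net user dockershare P@ssw0rdPlaceholder /add
net localgroup Administrators dockershare /add

:: Give the account modify rights on the profile directory (inherited by subfolders and files)
icacls "%userprofile%" /grant dockershare:(OI)(CI)M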

Once the local account is set up and working, you’ve got to tell Docker to use it. In the settings, select “Shared Drives” and use “Reset credentials” to open a prompt for the logon credentials that will be used to mount the shared volume.


Start the Docker container, VPN into the company network, and I’ve got a fully functional sandbox in a Docker container.

Cleaning Up Unused Docker Images

I’ve been using Docker for quite some time, but never had unused container images. This is partially because I installed a new hard drive and started from a blank slate, but also because I haven’t needed to use many different images to build my containers.

I’ve changed jobs recently and wanted to set up a container to mirror our web server. Which meant trying to get a CentOS 6.8 container going. Except there isn’t one from Cent anymore. And I don’t exactly trust random-dude-from-the-Internet’s OS. Download it and poke around without running it, sure … but that’s not a platform on which I can do my development.

And that means I’ve got a few images that I do not need. To view the list of images, use “docker images -a”


D:\docker>docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
openhab/openhab snapshot 8a4749c86ff3 4 weeks ago 527MB
docker4w/nsenter-dockerd latest 2f1c802f322f 9 months ago 187kB
centos/php-56-centos7 latest 92ed8b3a7cb4 15 months ago 617MB
13652604711/centos6.8-ssh latest 59ab169b5158 2 years ago 289MB

Then use “docker rmi imagename” to remove any unnecessary ones.

D:\docker>docker rmi centos/php-56-centos7
Untagged: centos/php-56-centos7:latest
Untagged: centos/php-56-centos7@sha256:f3c95020fa870fcefa7d1440d07a2b947834b87bdaf000588e84ef4a599c7546
Deleted: sha256:92ed8b3a7cb4d56d3a1c58386d966f22736010a292a81a72dddbc4ffc7cae3fd
Deleted: sha256:bdcb229c59ed69d26750cd0d24362670e1fa2ae9be6ef19aa3e7c5571a4a8503
Deleted: sha256:90eb7fca62f6c0febd9cc21544269029ff231f39f16054ba6b0ca93ec1037d97
Deleted: sha256:cdcf05e149fc6cb2801f7f93dce3acb54465fe6c46a16dd6135aa74d79bedffa
Deleted: sha256:139498a5907a4d17cf07b1400bdbdb4db5e9f1ac4e3985aac2b374eaa712d5fb
Deleted: sha256:5f0780b14e43db37e84162e0045657203ac1e9fb531cc3e879fa464eda013e79
Deleted: sha256:7e117241875497974bb56f09e6340e142a9acaa11af76917afab345acc25b5c1
Deleted: sha256:4b170488c295918f4d7618c2cd0b9b428d55ec952dd6a715593e3af34e538d94
Deleted: sha256:1e889f7360c52d1b20f93335382290445e4f257f08ccef01694837572842e95f
Deleted: sha256:43e653f84b79ba52711b0f726ff5a7fd1162ae9df4be76ca1de8370b8bbf9bb0

D:\docker>docker rmi 13652604711/centos6.8-ssh
Untagged: 13652604711/centos6.8-ssh:latest
Untagged: 13652604711/centos6.8-ssh@sha256:41bbe66ac18f199efac325d0d4bcb5d0390ec501ca82d6d1ce223df8a050be3a
Deleted: sha256:59ab169b5158a172079e2a89442936bc49292ea951f2eb9acb688a0ee34f95e1
Deleted: sha256:12d850520660ec9de87e84735a7067e663db282245502820f09dae5c937a93d2
Deleted: sha256:6b5c6954e3d511934786375730a068d0f013dcc99356a341a8c5d268a3b1cf3d
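
If you would rather not remove images one at a time, Docker can also prune unused images in bulk. Note that -a removes every image not referenced by a container, not just dangling ones, so use it deliberately:

# Remove dangling images only
docker image prune

# Remove all images not used by at least one container
docker image prune -a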

Un-killable Process

Scott had a Dolphin instance veg out on him. Not much for it other than killing the process. Except it didn’t die. Now SIGTERM (15) I don’t expect to kill a vegged-out process, but SIGKILL (9)? I’ve never seen that fail. A little research later, and I’ve discovered “uninterruptible sleep”. Which, at 11PM, is starting to sound really good to me. But not something I associate with computer programs. Essentially, processes that are waiting on I/O briefly pop into this state and pop back out when the I/O operation completes. Code needs timeouts to keep the application from getting stuck waiting on I/O. And, evidently, Scott has discovered a scenario in which Dolphin does not have a timeout.

How can you tell that your process is stuck in uninterruptible sleep? Use “ps u” (or “ps aux” for all processes) and check the “STAT” column.

From “man ps”:

PROCESS STATE CODES
Here are the different values that the s, stat and state output
specifiers (header "STAT" or "S") will display to describe the state of
a process:

D uninterruptible sleep (usually IO)
I Idle kernel thread
R running or runnable (on run queue)
S interruptible sleep (waiting for an event to complete)
T stopped by job control signal
t stopped by debugger during the tracing
W paging (not valid since the 2.6.xx kernel)
X dead (should never be seen)
Z defunct ("zombie") process, terminated but not reaped by its parent

For BSD formats and when the stat keyword is used, additional
characters may be displayed:

< high-priority (not nice to other users)
N low-priority (nice to other users)
L has pages locked into memory (for real-time and custom IO)
s is a session leader
l is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
+ is in the foreground process group
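
To pull out just the processes currently stuck in uninterruptible sleep, something like this works:

# List PID, state, and command for anything whose STAT contains "D"
ps -eo pid,stat,comm | awk '$2 ~ /D/'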

Rebooting clears the process (as does sorting out whatever is blocking the I/O operation). But, yes, there are processes that “kill -9” won’t terminate.

Recreating Grub Bootloader After the Windows Install Wipes It

Setting up the dual-boot Windows/Fedora system was straightforward on my laptop: I installed Windows, then installed Fedora, and grub2-mkconfig found Windows and included it in the menu. Scott already had Fedora, and we needed to repair his Windows installation. Which, of course, blew away grub. Easy enough to get back, provided you’ve got Live USB installation media from which to boot.

Boot the Live media and use fdisk to find the Linux partition (in our installations, /boot is contained within the root partition).

[root@fedora02 ~]# fdisk -l
Disk /dev/sda: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xd847fbc2

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 1026047 1024000 500M 83 Linux
/dev/sda2 1026048 20971519 19945472 9.5G 8e Linux LVM

Mount that partition somewhere:

mkdir /mnt/mycomputer
mount /dev/sda2 /mnt/mycomputer

Add bind mounts so /dev and /proc are in there

mount --bind /dev /mnt/mycomputer/dev
mount --bind /proc /mnt/mycomputer/proc

Chroot yourself into the mount point

chroot /mnt/mycomputer

Now you can reinstall grub. You don’t want the partition (e.g. /dev/sda2) but the disk. The following commands install the grub2 bootloader and reboot.

grub2-install /dev/sda
reboot

If you forget to pay attention on boot and thus inadvertently end up in the default operating system (<G>), edit /etc/default/grub and increase the GRUB_TIMEOUT value (e.g. GRUB_TIMEOUT=5). Then build the grub config — this should identify your Windows partition and include it in the menu:

grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot again, and you’ll be able to select between Windows and Linux.