I’m building a quick image to test permissions on a folder structure for a friend … so I’m making a quick note about how I’m building this test image so I don’t have to keep re-typing it all.
FROM python:3.7-slim
RUN apt-get update && apt-get install -y curl --no-install-recommends
RUN mkdir -p /srv/ljr/test
# Create service account
RUN groupadd -g 30001 webuser && useradd --comment "my service account" -u 30001 -g 30001 --shell /sbin/nologin --no-create-home webuser
# Copy a bunch of stuff various places under /srv
# Set ownership on /srv folder tree
RUN chown -R 30001:30001 /srv
USER 30001:30001
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
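To build the image and poke around at the resulting permissions (the tag is arbitrary, and --entrypoint bypasses the entrypoint script so you land in a shell):
# Build the test image from the Dockerfile above
docker build -t permtest .
# Get a shell as the service account to inspect the /srv tree
docker run --rm -it --entrypoint /bin/bash permtest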
I’ll start by acknowledging that, of course, you could just redeploy the container with the settings you want now. The whole point of containerized development is that anything “good” should either be part of the deployment settings or data persisted outside of the container. So, in theory, redeploying the container every day shouldn’t really be detectable. Even when you didn’t deploy the original container (i.e. you don’t have the Dockerfile or docker run command handy to tweak as needed), you can reverse engineer what you need from docker inspect. But sometimes? It’s quicker/easier/more convenient to just fix what you need to within the existing container. And it is possible to do so.
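For reference, docker inspect exposes most of what went into the original deployment (the container name here is just an example):
# Environment variables baked into the container
docker inspect --format '{{json .Config.Env}}' my_container
# Volume bindings live under HostConfig
docker inspect --format '{{json .HostConfig.Binds}}' my_container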
The trickiest part is finding the right file to edit.
# find the ID of the container you want to edit
docker ps
# Stop docker *before* editing -- dockerd rewrites config.v2.json on shutdown,
# so changes made while it is running get clobbered
systemctl stop docker
# cd into the docker container definition folder
cd /var/lib/docker/containers/
# Find the folder corresponding to the container ID
ls -al | grep bc9dc66882af
# cd into that folder
cd bc9dc66882af18f59c209faf10031fe21765571d0a2fe4a32a768a1d52cc1e53
# Edit the config.v2.json file for the container
vi config.v2.json
# And, finally, start docker back up
systemctl start docker
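One caveat: a typo in config.v2.json can keep the container from coming back up, so it’s worth verifying the file still parses as JSON before starting docker again:
# json.tool pretty-prints on success and errors out on malformed JSON
python3 -m json.tool config.v2.json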
Ideally, the definitions for Kubernetes objects are all safely stored in your code repository — you can easily revert to the previous, working iteration, you can see who changed what, and you’ve got a copy of it all available if super electromagnet man takes a stroll through the data center. Ideally.
Here, in the real world, we took over management of a k8s implementation that’s been in service for about a year now. And, fortunately, the production YAMLs are all in the repo. The development system, on the other hand, isn’t. Logic dictates that the config would be similar, but it’s always good to check.
I wrote a quick script to dump YAML files for all of the configmaps, cron jobs, deployments, horizontal pod autoscaling, jobs, persistent volumes, persistent volume claims, secrets, service accounts, services, and stateful sets.
#!/bin/bash
nsbase="namespace/"
for ns in $(kubectl get namespaces -o=name)
do
    # Strip the "namespace/" prefix so $ns is just the namespace name
    ns="${ns#${nsbase}}"
    for n in $(kubectl get --namespace="$ns" -o=name configmap,cronjob,deployment,hpa,job,pv,pvc,secret,serviceaccount,service,statefulset)
    do
        # $n looks like "configmap/some-name", so files land in ns/type/name.yaml
        yamlfile="${ns}/${n}.yaml"
        mkdir -p "$(dirname "$yamlfile")"
        kubectl get --namespace="$ns" -o=yaml "$n" > "$yamlfile"
    done
done
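Run from a host where kubectl already points at the cluster, the script produces one file per object, organized by namespace and resource type (the names below are hypothetical):
kube-system/configmap/coredns.yaml
my-app/deployment/my-app-api.yaml
my-app/secret/my-app-tls.yaml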
We’ve been moving a bunch of servers into magic cloudy land, a move which comes with a whole lot of additional security restrictions (and accompanying marketing that the cloud is so much more secure … uhh, no … any host that you can only reach through a keylogged jump server, and that has absolutely no access to the internal or external network without a specific request and firewall rule, would be equally secure. You just haven’t bothered with all of those controls before!). As such, we cannot just pull an image from Docker Hub. We also cannot just use apt-get to install/update packages.
So … how can you build a Docker image with updated software for use on these locked down boxes? Bit of a trick question — you cannot. You can, however, build such an image elsewhere and then export/import the image.
Build the image using your Dockerfile
docker build -t my_app_base .
Then export the image you created
docker save my_app_base | gzip > myapp_image.tar.gz
Download the TGZ file and transfer it to your restricted-access host. Then import the image
docker load < myapp_image.tar.gz
Verify the image loaded successfully
[user@server ~]$ docker images
REPOSITORY              TAG       IMAGE ID       CREATED             SIZE
localhost/my_app_base   latest    5z8e35z99d5z   19 minutes ago      995 MB
<none>                  <none>    5zdbcfz6393z   About an hour ago   1.01 GB
Now use docker run with your my_app_base:latest image to create a running container based on the image.
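For example (the container name is arbitrary; note that the imported image shows up as localhost/my_app_base):
# Start a container from the freshly imported image
docker run -d --name my_app localhost/my_app_base:latest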
To set up my redis sandbox in Docker, I created two folders — conf and data. The conf folder will house the SSL stuff and the configuration file. The data directory is used to store the redis data.
I first needed to generate an SSL certificate. The public and private keys of the pair are stored in a pem and key file. The certificate of the CA that signed the cert is stored in a “ca” folder.
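For a sandbox, a self-signed pair does the job. A minimal sketch (the file names match the redis config below; the subject is a placeholder):
# Create the folder structure for the certs
mkdir -p conf/ssl/ca
# Generate a self-signed certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=redis-sandbox" \
    -keyout conf/ssl/memcache.key -out conf/ssl/memcache.pem
# Self-signed, so the cert doubles as the CA cert; depending on the OpenSSL
# build, this directory may need c_rehash-style hashed names
cp conf/ssl/memcache.pem conf/ssl/ca/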
Then I created a redis configuration file — note that the paths are relative to the Docker container
################################## MODULES #####################################
################################## NETWORK #####################################
# My web server is on a different host, so I needed to bind to the public
# network interface. I think we'd *want* to bind to localhost in our
# use case.
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no
# Might want to use 0 to disable listening on the insecure port
port 6379
tcp-backlog 511
timeout 10
tcp-keepalive 300
################################# TLS/SSL #####################################
tls-port 6380
tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca
# I am not auth'ing clients for simplicity
tls-auth-clients no
# tls-auth-clients optional
tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no
# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes
################################# GENERAL #####################################
# This is for docker; we may want something like systemd here.
daemonize no
supervised no
#loglevel debug
loglevel notice
logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0
# 1 might be sufficient -- we *could* partition different apps into different databases
# But I'm thinking, if our keys are basically "user:target:service" ... then report_user:RADD:Oracle
# from any web tool would be the same cred. In which case, one database suffices.
databases 3
################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
#
dir ./
################################## SECURITY ###################################
# I wasn't setting up any sort of authentication and just using the facts that
# (1) you are on localhost and
# (2) you have the key to decrypt the stuff we stash
# to mean you are authorized.
############################## MEMORY MANAGEMENT ################################
# This is what to evict from the dataset when memory is maxed
maxmemory-policy volatile-lfu
############################# LAZY FREEING ####################################
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
############################ KERNEL OOM CONTROL ##############################
oom-score-adj no
############################## APPEND ONLY MODE ###############################
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
########################### ACTIVE DEFRAGMENTATION #######################
# Enabled active defragmentation
activedefrag no
# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10
Once I had the configuration data set up, I created the container. I’m using port 6380 for the SSL connection. For the sandbox, I also exposed the clear text port. I mapped volumes for the redis data, the SSL files, and the redis.conf file.
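A sketch of the docker run command, assuming the official redis:6 image (TLS support landed in redis 6) and the conf/data folders described above:
# Expose the TLS port and, for the sandbox, the clear-text port too
docker run -d --name redis-sandbox \
    -p 6379:6379 -p 6380:6380 \
    -v "$(pwd)/conf:/opt/redis" \
    -v "$(pwd)/data:/data" \
    redis:6 redis-server /opt/redis/redis.conf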
To bring up a Docker container running memcached with SSL enabled, create a local folder to hold the SSL key. Mount a volume to your local config folder, then point ssl_chain_cert and ssl_key at the in-container paths to the PEM and KEY files.
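A sketch of the run command, assuming the memcached in the image was built with TLS support (the container name and paths are illustrative, reusing the cert pair from above):
# -Z enables TLS; the -o options point memcached at the mounted cert and key
docker run -d --name memcached-ssl \
    -p 11211:11211 \
    -v "$(pwd)/conf/ssl:/opt/ssl" \
    memcached:latest memcached -Z \
    -o ssl_chain_cert=/opt/ssl/memcache.pem,ssl_key=/opt/ssl/memcache.key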
On the most recent iteration of Windows (20H2 build 19042.1052) and Docker Desktop (20.10.7 built Wed Jun 2 11:54:58 2021), I found myself unable to launch my Oracle container. The error indicated that the binding was forbidden.
C:\WINDOWS\system32>docker start oracleDB
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:1521: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Error: failed to start containers: oracleDB
Forbidden by whom?! Windows, it seems. Checking excluded ports using netsh:
netsh int ipv4 show excludedportrange protocol=tcp
Shows that there are all sorts of ports being forbidden — Hyper-V is grabbing a lot of ports when it starts. To keep it away from the port you need, you’ve got to add a manual port exclusion.
To reserve the port for your own use, disable Hyper-V (reboot), add a port exclusion, and enable Hyper-V (reboot)
REM Disable Hyper-V
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
REM REBOOT ... then add an exclusion for the Oracle DB Port
netsh int ipv4 add excludedportrange protocol=tcp startport=1521 numberofports=1
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
REM REBOOT again
When the shell you get into a container isn’t root and you need it to be (i.e. you don’t have the root password and sudo isn’t set up), you can pass a uid to the exec command and run the shell as root explicitly (the container name below is a placeholder):
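# -u 0 runs the command as uid 0 (root) regardless of the image's USER
docker exec -u 0 -it my_container /bin/bash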