Ideally, the definitions for Kubernetes objects are all safely stored in your code repository — you can easily revert to the previous working iteration, you can see who changed what, and you’ve got a copy of it all available if super electromagnet man takes a stroll through the data center. Ideally.
Here, in the real world, we took over management of a k8s implementation that’s been in service for about a year now. And, fortunately, the production YAMLs are all in the repo. The development system’s configs, on the other hand, aren’t. Logic dictates that the config would be similar, but it’s always good to check.
I wrote a quick script to dump YAML files for all of the configmaps, cron jobs, deployments, horizontal pod autoscaling, jobs, persistent volumes, persistent volume claims, secrets, service accounts, services, and stateful sets.
#!/bin/bash
# Dump every object of interest in every namespace to ./<namespace>/<type>/<name>.yaml
nsbase="namespace/"
for ns in $(kubectl get namespaces -o=name); do
   # Strip the "namespace/" prefix so $ns is just the namespace name
   ns=${ns#"${nsbase}"}
   for n in $(kubectl get --namespace="$ns" -o=name configmap,cronjob,deployment,hpa,job,pv,pvc,secret,serviceaccount,service,statefulset); do
      yamlfile="${ns}/${n}.yaml"
      mkdir -p "$(dirname "$yamlfile")"
      kubectl get --namespace="$ns" -o=yaml "$n" > "$yamlfile"
   done
done
We’ve been moving a bunch of servers into magic cloudy land. A move which comes with a whole lot of additional security restrictions (and accompanying marketing that the cloud is so much more secure … uhh, no … any host that you could only access through a keylogged jump server, and that had absolutely no access to the internal or external network without a specific request and firewall rule, would be equally secure. You just haven’t bothered with all of those controls before!). As such, we cannot just pull an image from Docker Hub. We also cannot just use apt-get to install/update packages.
So … how can you build a Docker image with updated software for use on these locked down boxes? Bit of a trick question — you cannot. You can, however, build such an image elsewhere and then export/import the image.
Build the image using your Dockerfile
docker build -t my_app_base .
Then export the image you created
docker save my_app_base | gzip > myapp_image.tar.gz
Download the TGZ file and transfer it to your restricted-access host. Then import the image
docker load < myapp_image.tar.gz
Verify the image loaded successfully
[user@server ~]$ docker images
REPOSITORY              TAG      IMAGE ID       CREATED             SIZE
localhost/my_app_base   latest   5z8e35z99d5z   19 minutes ago      995 MB
<none>                  <none>   5zdbcfz6393z   About an hour ago   1.01 GB
Now use docker run with your my_app_base:latest image to create a running container based on the image.
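For example (the container name here is a placeholder; add whatever port mappings your app needs):
docker run -dit --name my_app my_app_base:latest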
To set up my redis sandbox in Docker, I created two folders — conf and data. The conf folder houses the SSL files and the configuration file; the data directory stores the redis data.
I first needed to generate an SSL certificate. The public and private keys of the pair are stored in a pem and a key file, and the public key of the CA that signed the cert is stored in a “ca” folder.
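For a sandbox, a self-signed certificate is plenty; something like these openssl commands would produce the files the config below references (the file names and subject are just what I used here). Since a self-signed cert is its own issuer, a copy of the pem goes in the ca folder:
mkdir -p conf/ssl/ca data
openssl req -x509 -newkey rsa:4096 -nodes -days 365 -subj "/CN=redis-sandbox" \
   -keyout conf/ssl/memcache.key -out conf/ssl/memcache.pem
cp conf/ssl/memcache.pem conf/ssl/ca/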
Then I created a redis configuration file — note that the paths are relative to the Docker container
################################## MODULES #####################################
################################## NETWORK #####################################
# My web server is on a different host, so I needed to bind to the public
# network interface. I think we'd *want* to bind to localhost in our
# use case.
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no
# Might want to use 0 to disable listening on the insecure port
port 6379
tcp-backlog 511
timeout 10
tcp-keepalive 300
################################# TLS/SSL #####################################
tls-port 6380
tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca
# I am not auth'ing clients for simplicity
tls-auth-clients no
# tls-auth-clients optional
tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no
# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes
################################# GENERAL #####################################
# This is for docker, we may want to use something like systemd here.
daemonize no
supervised no
#loglevel debug
loglevel notice
logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0
# 1 might be sufficient -- we *could* partition different apps into different databases
# But I'm thinking, if our keys are basically "user:target:service" ... then report_user:RADD:Oracle
# from any web tool would be the same cred. In which case, one database suffices.
databases 3
################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
#
dir ./
################################## SECURITY ###################################
# I wasn't setting up any sort of authentication and just using the facts that
# (1) you are on localhost and
# (2) you have the key to decrypt the stuff we stash
# to mean you are authorized.
############################## MEMORY MANAGEMENT ################################
# This is what to evict from the dataset when memory is maxed
maxmemory-policy volatile-lfu
############################# LAZY FREEING ####################################
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
############################ KERNEL OOM CONTROL ##############################
oom-score-adj no
############################## APPEND ONLY MODE ###############################
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
########################### ACTIVE DEFRAGMENTATION #######################
# Enable active defragmentation
activedefrag no
# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10
Once I had the configuration data set up, I created the container. I’m using port 6380 for the SSL connection; for the sandbox, I also exposed the clear text port. I mapped volumes for the redis data, the SSL files, and the redis.conf file.
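The command looked something like this (the image tag and in-container paths are assumptions; adjust them to match wherever your config expects things — TLS needs redis 6.0 or later):
docker run -d --name redis-sandbox -p 6379:6379 -p 6380:6380 \
   -v "$(pwd)/data:/data" \
   -v "$(pwd)/conf/ssl:/opt/redis/ssl" \
   -v "$(pwd)/conf/redis.conf:/usr/local/etc/redis/redis.conf" \
   redis:6.0 redis-server /usr/local/etc/redis/redis.conf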
To bring up a Docker container running memcached with SSL enabled, create a local folder to hold the SSL key, mount that folder as a volume in the container, then point the ssl_chain_cert and ssl_key options to the in-container paths to the PEM and key files:
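As a rough sketch, assuming a memcached build with TLS support compiled in (the image tag, container name, and paths here are examples):
docker run -d --name memcached-ssl -p 11211:11211 \
   -v "$(pwd)/conf:/etc/memcached" \
   memcached:latest memcached -Z \
   -o ssl_chain_cert=/etc/memcached/memcache.pem \
   -o ssl_key=/etc/memcached/memcache.key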
On the most recent iteration of Windows (20H2 build 19042.1052) and Docker Desktop (20.10.7 built Wed Jun 2 11:54:58 2021), I found myself unable to launch my Oracle container. The error indicated that the binding was forbidden.
C:\WINDOWS\system32>docker start oracleDB
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:1521: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Error: failed to start containers: oracleDB
Forbidden by whom?! Windows, it seems. Checking excluded ports using netsh:
netsh int ipv4 show excludedportrange protocol=tcp
Shows that there are all sorts of ports being forbidden — Hyper-V grabs a lot of ports when it starts. To avoid that, you’ve got to manually add an exclusion for the port you want to use.
To reserve the port for your own use, disable Hyper-V (reboot), add a port exclusion, and enable Hyper-V (reboot)
REM Disable Hyper-V
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
REM REBOOT ... then add an exclusion for the Oracle DB Port
netsh int ipv4 add excludedportrange protocol=tcp startport=1521 numberofports=1
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
REM REBOOT again
When the shell you get in a container isn’t root and you need it to be (i.e. you don’t have the root password and sudo isn’t set up), you can pass a UID to the exec command and explicitly run the shell as root:
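docker exec -u 0 -it container_name /bin/bash
Here container_name is a placeholder, and 0 is root’s UID, so the shell comes up with root privileges.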
I’ve been underwhelmed with the notifications I get from Azure DevOps – there are a lot of build-centric notifications, but I don’t use ADO for builds or deployment apart from playing around. And I really don’t care if the silly test project I set up to build and deploy a website worked, failed, or whatever. I was thinking about hooking whatever they’re calling Flow this week up to ADO and building notification workflows.
Fortunately, a coworker mentioned that you can customize notifications in ADO … although I’d previously spent a few seconds poking around and hadn’t seen anything. But I spent more than a few seconds this time and happened across a little ellipsis on the card that pops up when I click the circle with my initials in it. More options!
A new menu flies out and, look, there’s “Notifications”!
Exactly as I’ve observed, there are a lot of build-centric alerts. So I created a new subscription.
Here’s a subscription that I hope will notify me when items assigned to me have updates to activity or comments.
I just started using k8s built into Docker for Windows, but I couldn’t connect because the target machine actively refused the connection.
C:\Users\lisa>kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"windows/amd64"}
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
No idea — it’s all internal traffic, but I resorted to turning off my firewall anyway just to see what would happen. Nothing. Turns out I need a KUBECONFIG environment variable pointing to the config file.
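Assuming Docker Desktop’s default location for the config file, that’s something like:
C:\Users\lisa>set KUBECONFIG=C:\Users\lisa\.kube\config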
C:\Users\lisa>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
C:\Users\lisa>kubectl proxy
Starting to serve on 127.0.0.1:8001
Working! Get the token from
kubectl -n kube-system describe secret default
And access the dashboard.
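With the proxy running, the dashboard should be reachable at the standard proxy path for the dashboard service (assuming it deployed into kube-system, as the v1.10.1 manifest does):
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/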
A reminder for myself — the totally not obvious package name for the kubeadm binary on the CHANGELOG link list is Node. Go figure!
Docker Official Images: Official images won’t have a publisher listed, and they will be tagged with “Official Image”.
Docker Official Images are a set of images curated by Docker. They’re generally recommended for new users as Docker has a team that reviews and publishes these images. Beyond Docker’s verification, you can see how an individual image was built. Navigate to the GitHub image library. Find the file corresponding with the image, and you’ll see a GitRepo line. Navigate to that URL to find the Dockerfiles that were used to build the image.
Docker Certified: Other images will be certified by Docker – these are published by someone other than Docker but have been tested & scanned for vulnerabilities, they come from a reputable source, and comply with best practice guidelines.
If you click on the hyperlinked organization name, they are listed as a verified publisher – this means someone put a little effort into ensuring “Oracle” is actually the corporation everyone thinks of when they hear “Oracle”.
Other Images: You’ll also find images that are not certified and have been published by unverified parties. Don’t use these without some investigation.
We happen to interact with OpenHAB developers and “know” the guy who builds these images. I trust him and do run this image on my home network. I also know where to go to view his Dockerfiles, and I know how his images are built. https://github.com/openhab/openhab-docker
But there are images posted by random Internet denizens – I run Docker on my personal Windows laptop and needed to access the underlying MobyVM. The image justincormack/nsenter1 will do it … but I have no idea who this person is. A quick search of the Internet yielded a Dockerfile for this image, but there’s nothing that ensures the image on Docker Hub is actually built with this file. It’s safer to use the Dockerfile to build your own version of the image.
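That is, grab the Dockerfile you found and reviewed, then build and run a locally tagged copy instead of pulling the unverified one (the directory and local tag names here are placeholders):
docker build -t local/nsenter1 ./nsenter1
docker run -it --rm --privileged --pid=host local/nsenter1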