
Building Gerbera on Fedora

There is a great deal of documentation available for building Gerbera from source on a variety of Linux flavors. Unfortunately, Fedora isn't one of them (and the package names don't match up closely enough to simply replace "apt-get" with "yum" and be done). So I am quickly documenting the process we followed to build Gerbera from source on Fedora.

The Fedora build of Gerbera has the binaries in /usr/bin, while the manual build places the gerbera binary in /usr/local/bin. The build updates the systemd unit file to reflect this change, so back up any customizations you've made to the unit file before running "make install".
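
If you have customized the packaged unit file, copy it somewhere safe first. A minimal example, assuming the Fedora package put the unit at /usr/lib/systemd/system/gerbera.service (adjust the path if your customizations live elsewhere, e.g. under /etc/systemd/system/):

sudo cp /usr/lib/systemd/system/gerbera.service ~/gerbera.service.orig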

You need the build system (cmake, g++, etc.) and the devel packages from the following table, as required by your build options; a sample dnf command follows the table.

Additional packages that we needed to install: automake, autoconf, and libtool

Library | Fedora Package | Required? | Note | Compile-time option | Default
libpupnp | libupnp-devel | XOR libnpupnp | | | pupnp
libnpupnp | Build from source (if needed) | XOR libupnp | I was only able to locate this as a source, not available from Fedora repos | WITH_NPUPNP | Disabled
libuuid | libuuid-devel | Required | Not required on *BSD | |
pugixml | pugixml-devel | Required | XML file and data support | |
libiconv | glibc-headers | Required | Charset conversion | |
sqlite3 | sqlite-devel | Required | Database storage | |
zlib | zlib-devel | Required | Data compression | |
fmtlib | fmt-devel | Required | Fast string formatting | |
spdlog | spdlog-devel | Required | Runtime logging | |
duktape | duktape-devel | Optional | Scripting Support | WITH_JS | Enabled
mysql | mariadb-devel | Optional | Alternate database storage | WITH_MYSQL | Disabled
curl | libcurl-devel | Optional | Enables web services | WITH_CURL | Enabled
taglib | taglib-devel | Optional | Audio tag support | WITH_TAGLIB | Enabled
libmagic | file-devel | Optional | File type detection | WITH_MAGIC | Enabled
libmatroska | libmatroska-devel | Optional | MKV metadata, required for MKV | WITH_MATROSKA | Enabled
libebml | libebml-devel | Optional | MKV metadata, required for MKV | WITH_MATROSKA | Enabled
ffmpeg/libav | ffmpeg-free-devel | Optional | File metadata | WITH_AVCODEC | Disabled
libexif | libexif-devel | Optional | JPEG Exif metadata | WITH_EXIF | Enabled
libexiv2 | exiv2-devel | Optional | Exif, IPTC, XMP metadata | WITH_EXIV2 | Disabled
lastfmlib | liblastfm-devel | Optional | Enables scrobbling | WITH_LASTFM | Disabled
ffmpegthumbnailer | ffmpegthumbnailer-devel | Optional | Generate video thumbnails | WITH_FFMPEGTHUMBNAILER | Disabled
inotify | glibc-headers | Optional | Efficient file monitoring | WITH_INOTIFY |
libavformat | libavformat-free-devel | Required for 2.0 | | |
libavutil | libavutil-free-devel | Required for 2.0 | | |
libavcodec | libavcodec-free-devel | Required for 2.0 | | |
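
Roughly, that translates to something like the following dnf command. This is a sketch, not a canonical list: trim or extend it to match the options you enable. libnpupnp is absent because it was built from source, and liblastfm-devel / ffmpegthumbnailer-devel are omitted because those options are disabled in the cmake line further down:

sudo dnf install cmake gcc-c++ make automake autoconf libtool \
    libupnp-devel libuuid-devel pugixml-devel glibc-headers sqlite-devel \
    zlib-devel fmt-devel spdlog-devel duktape-devel mariadb-devel libcurl-devel \
    taglib-devel file-devel libmatroska-devel libebml-devel ffmpeg-free-devel \
    libavformat-free-devel libavutil-free-devel libavcodec-free-devel \
    libexif-devel exiv2-devel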

Then follow the generalized instructions — cd into the folder where you want to run the build and run (customizing the cmake line as you wish):

git clone https://github.com/gerbera/gerbera.git
mkdir build
cd build
cmake ../gerbera -DWITH_MAGIC=1 -DWITH_MYSQL=1 -DWITH_CURL=1 -DWITH_INOTIFY=1 -DWITH_JS=1 -DWITH_TAGLIB=1 -DWITH_AVCODEC=1 -DWITH_FFMPEGTHUMBNAILER=0 -DWITH_EXIF=1 -DWITH_EXIV2=1 -DWITH_SYSTEMD=1 -DWITH_LASTFM=0 -DWITH_DEBUG=1
make -j4
sudo make install

As with the Gerbera binary, the Fedora build places the web content in /usr/share/gerbera and the manual build places the web content into /usr/local/share/gerbera — yes, you can change the paths in the build, and I’m sure you can clue Gerbera into the new web file location. I opted for the quick/easy/lazy solution of running

mv /usr/share/gerbera /usr/share/gerbera.old
ln -s /usr/local/share/gerbera /usr/share/

to symlink the location where my config expects to find the web components to the newly installed files.

On the first start of Gerbera, SQL scripts may be run to update the database. Don't stop or kill the service during this process; there's no checkpoint restart of the upgrade process. We backed up /etc/gerbera/gerbera.db prior to starting our Gerbera installation. We've also wiped the database files to start from scratch and test changes that impacted how items are ingested into the database.
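
For reference, this is the sort of backup we take before letting a new build touch the database. It assumes the service is named gerbera and the database lives at /etc/gerbera/gerbera.db as mentioned above; adjust both for your install:

sudo systemctl stop gerbera
sudo cp /etc/gerbera/gerbera.db /etc/gerbera/gerbera.db.bak
sudo systemctl start gerbera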

Fin.

Example Azure DevOps File Deployment

To automatically update files from your repository on your server, use a release pipeline. For convenience, I use deployment groups to ensure all of the servers are updated.

Creating a deployment group

Under the Project, navigate to Pipelines and “deployment groups”. Click “New” and provide a name for the deployment group.

Now click into the deployment group and select "Register".

Since I have a Linux server, I changed the "Type of target to register" drop-down to "Linux". Copy the command and run it on your server. (I don't run literally what MS provides – I break it out into individual commands so I can name the agent folder whatever I want and run only the part of the command that registers the agent as a service with systemctl.)
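
Broken out into individual commands, the registration the portal generates looks roughly like this. The folder name, agent version, organization, project, group name, and PAT below are all placeholders, and you should check the flags against the exact command the portal gives you:

mkdir deployagent && cd deployagent
wget https://vstsagentpackage.azureedge.net/agent/2.x.x/vsts-agent-linux-x64-2.x.x.tar.gz
tar zxf vsts-agent-linux-x64-2.x.x.tar.gz
./config.sh --deploymentgroup --deploymentgroupname "MyDeploymentGroup" --url https://dev.azure.com/MyOrg --projectname MyProject --auth PAT --token <your PAT>
sudo ./svc.sh install
sudo ./svc.sh start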

Run the agent – for demonstration purposes, I am using the run.sh script to launch the agent. This outputs details to my console instead of a log file.

If you have multiple servers to which you want to deploy the files, install and run an agent on each one.

Create the release pipeline

Now we will build the pipeline that actually copies files over to the agent. Under Pipelines, navigate to “Releases”. Select “New” and create a “New release pipeline”. Start with an empty job.

You’ll be asked to name the first stage of the deployment pipeline – here, I’m calling it “Deploy Files to Servers”. Close out of the Stage window to see the pipeline.

Click the "+ Add" next to Artifacts to link an artifact to the deployment.

If you have a build pipeline, you can link that as the artifact. Since I am just copying files, I selected the “Azure Repo” and configured the test project that contains the files I wish to deploy to my servers.

Click “Add”

Back in the pipeline, click the “1 job, 0 task” hyperlink to create a file deployment task.

We don't need the "Agent job", so click on it and click "Remove".

Select the hamburger menu next to "Deploy files to servers" and select "Add a deployment group job".

Click the “Deployment group” dropdown and select the deployment group that contains the servers to which you want to deploy files. You can add tags to limit deployment to a subset of the deployment group – I don’t do this, but I have seen instances where “prod” and “dev” tags were used and all servers in both the prod and dev environment were part of the same deployment group.

Click the “+” on the “Deployment group job” item to add a task.

Find the “Copy files” task and click “Add” to create a task to copy files.

Click on the "Copy files to:" item to configure the task. The source folder is the artifact pulled from the Azure repo, and the target folder is the path on the server where the files should land.
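
The exact values depend on your artifact alias and where the files belong on the server; as an example (both paths here are placeholders, not my real setup):

Source Folder: $(System.DefaultWorkingDirectory)/_MyConfigRepo
Contents: **
Target Folder: /opt/myapp/conf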

Click "Save", then click "OK" in the confirmation dialog to save the pipeline.

Now create a release – click the "Create a release" button.


When the deployment runs, the agent will show the job running.

Once the deployment completes, the files are on the server.

Scheduling Release

In the pipeline, you can click on “Schedule set” to schedule new releases.

Enable the schedule and set a time. I select the option to only schedule a release if the source or pipeline has changed … if I've not updated files in the repo, there's no need to redeploy them. Remember to save your pipeline when you add the schedule.

Manually Running a JAR File

The Java code I now maintain is normally executed through a k8s cluster, which means even testing a quick change requires running the entire deployment pipeline. Sometimes, though, I really just want to test something quickly. In such instances, you can manually run a jar file using "java -jar my_file.jar":
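
A couple of examples; the jar name is whatever your build produced, and the heap/property flags are only there to show where extra JVM options go (the Spring profile flag assumes your code uses profiles):

java -jar my_file.jar
java -Xmx512m -Dspring.profiles.active=local -jar my_file.jar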

Maven Build Certificate Error

Attempting to build some Java code, I got a lot of errors indicating a trusted certificate chain was not available:

Could not transfer artifact 
org.springframework.boot:spring-boot-starter-parent:pom:2.2.0.RELEASE 
from/to repo.spring.io (<redacted>): sun.security.validator.ValidatorException: 
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

And

[ERROR] Failed to execute goal on project errorhandler: 
Could not resolve dependencies for project com.example.npm:errorhandler:jar:0.0.1-SNAPSHOT: 
The following artifacts could not be resolved: 
org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot:jar:2.3.7.BUILD-SNAPSHOT, 
org.springframework.boot:spring-boot-configuration-processor:jar:2.3.7.BUILD-SNAPSHOT: 
Could not transfer artifact org.springframework.boot:spring-boot-starter-data-jpa:jar:2.3.7.BUILD-20201211.052207-37 
from/to spring-snapshots (https://repo.spring.io/snapshot): 
transfer failed for https://repo.spring.io/snapshot/org/springframework/boot/spring-boot-starter-data-jpa/2.3.7.BUILD-SNAPSHOT/spring-boot-starter-data-jpa-2.3.7.BUILD-20201211.052207-37.jar: 
Certificate for <repo.spring.io> doesn't match any of the subject alternative names: [] -> [Help 1]

Ideally, you could just add whatever cert(s) needed to be trusted into the cacerts file for the Java instance using keytool (.\keytool.exe -import -alias digicert-intermed -cacerts -file c:\tmp\digi-int.cer); however, the work computers are locked down such that I am unable to import certs into the Java trust store. The second error makes me think it wouldn't work anyway: if there's no matching SAN on the cert, trusting the cert wouldn't do anything.

Fortunately, there are a few flags you can add to mvn to ignore certificate errors — thus allowing the build to complete without requiring access to the cacerts file. There is, of course, a possibility that the trust failure is because your connection is being redirected maliciously … but I see enough other people getting trust failures for this spring-boot stuff (and visiting the site doesn’t show anything suspect) that I’m happy to bypass the security validation this once and just be done with the build 🙂

mvn package -DskipTests -Dmaven.wagon.http.ssl.insecure=true -Dmaven.wagon.http.ssl.allowall=true -Dmaven.wagon.http.ssl.ignore.validity.dates=true jib:build

Postgresql Through an SSH Tunnel in Python

Our production Postgresql servers have a fairly restrictive IP access control list — which means you cannot VPN in and query the server. We’ve been using DBeaver with an SSH tunnel to connect, but it’s a bit time consuming to run a query across all of the servers for monitoring and troubleshooting. To work around the restriction, I built a python script that uses an SSH tunnel to relay communications to the Postgresql servers.

import psycopg2
from sshtunnel import SSHTunnelForwarder

from config import strSSHRelayHost, iSSHRelayPort, strSSHRelayUser, strSSHAuthKeyFile, dictHost
# In the config.py, dictHost should contain the following information
# dictHost = {"host":"dbserver.example.com","port":5432,"database": "dbname", "username":"dbuser", "password":"S3cr3tPhr@5e"}

# Example query -- listing out locks 
sqlQuery = "WITH RECURSIVE l AS (  SELECT pid, locktype, mode, granted, ROW(locktype,database,relation,page,tuple,virtualxid,transactionid,classid,objid,objsubid) obj FROM pg_locks ), pairs AS ( SELECT w.pid waiter, l.pid locker, l.obj, l.mode FROM l w JOIN l ON l.obj IS NOT DISTINCT FROM w.obj AND l.locktype=w.locktype AND NOT l.pid=w.pid AND l.granted  WHERE NOT w.granted ), tree AS ( SELECT l.locker pid, l.locker root, NULL::record obj, NULL AS mode, 0 lvl, locker::text path, array_agg(l.locker) OVER () all_pids FROM ( SELECT DISTINCT locker FROM pairs l WHERE NOT EXISTS (SELECT 1 FROM pairs WHERE waiter=l.locker) ) l  UNION ALL  SELECT w.waiter pid, tree.root, w.obj, w.mode, tree.lvl+1, tree.path||'.'||w.waiter, all_pids || array_agg(w.waiter) OVER () FROM tree JOIN pairs w ON tree.pid=w.locker AND NOT w.waiter = ANY ( all_pids )) SELECT (clock_timestamp() - a.xact_start)::interval(3) AS ts_age, replace(a.state, 'idle in transaction', 'idletx') state, (clock_timestamp() - state_change)::interval(3) AS change_age, a.datname,tree.pid,a.usename,a.client_addr,lvl, (SELECT count(*) FROM tree p WHERE p.path ~ ('^'||tree.path) AND NOT p.path=tree.path) blocked, repeat(' .', lvl)||' '||left(regexp_replace(query, '\\s+', ' ', 'g'),100) query FROM tree JOIN pg_stat_activity a USING (pid) ORDER BY path"

with SSHTunnelForwarder( (strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_private_key=strSSHAuthKeyFile, local_bind_address=("localhost",55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
# Alternately, you can use password authentication
#with SSHTunnelForwarder( (strSSHRelayHost, iSSHRelayPort), ssh_username=strSSHRelayUser, ssh_password=strSSHRelayUserPass, local_bind_address=("localhost",55432), remote_bind_address=(dictHost.get('host'), dictHost.get('port'))) as server:
    if server is not None:
        server.start()
        server._check_is_started()
        #print("Tunnel server connected")
        params = {'database': dictHost.get('database'),'user': dictHost.get('username'),'password': dictHost.get('password'), 'host': server.local_bind_host, 'port': server.local_bind_port}
        conn = psycopg2.connect(**params)
        cursor = conn.cursor()
        cursor.execute(sqlQuery)
        column_names = [desc[0] for desc in cursor.description]
        print(column_names)
        rows = cursor.fetchall()
        for row in rows:
            print(row)
        cursor.close()
        if conn is not None:
            conn.close()
        server.stop()
    else:
        print("Unable to establish SSH tunnel")

Python List Joining

I’ve seen lists joined with a delimiter like this before:

strDelimiter = "\n"
strDelimiter.join(listOfStuff)

But it seems silly to allocate memory just for the delimiter … not a big deal from a resource perspective, and the delimiter variable is probably more comprehensible to future readers … but I've always wondered if you couldn't just use a string literal directly with the join method. Turns out you can:

msgContent.attach(MIMEText("\n".join(listOfStuff), 'plain'))


Where did THAT come from? (PHP class)

I’ve got some code that is cobbled together from a couple of different places & it’s got namespace collisions that wouldn’t exist if I’d been starting from scratch. But I’ve got what I’ve got … and, occasionally, new code falls over because a class has already been declared.

Luckily, there’s a way to find out from where a class was loaded:

			$strClassName = "Oracle_Cred";
			if( class_exists($strClassName) ){
				$reflector = new \ReflectionClass($strClassName);
				echo "Class $strClassName was loaded from " . $reflector->getFileName();
			}
			else{
				echo "Class $strClassName does not exist yet";
			}

Bash – Spaces, Quotes, and String Replacement

Had to figure out how to do string replacement (Scott wanted to convert WMA files to similarly named MP3 files) and pass a single parameter that has spaces into a shell script.

${original_string/thing_to_find/replacement_text} does string replacement. And "$@" expands to the positional parameters exactly as they were passed in. So this wouldn't work if we were trying to pass in more than one parameter (well, it *would* … I'd just have to custom-handle parameter expansion in the script).
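
Here's a minimal sketch of the script, assuming ffmpeg handles the actual WMA-to-MP3 conversion (substitute whatever converter you actually use):

#!/bin/bash
# wma2mp3.sh -- call with a single (quoted) file name: ./wma2mp3.sh "Some Song.wma"
strInputFile="$@"                           # the lone parameter, spaces intact
strOutputFile="${strInputFile/.wma/.mp3}"   # string replacement swaps the extension
ffmpeg -i "$strInputFile" "$strOutputFile"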


Fortify on Demand Remediation: Cookie Security: Cookie not Sent Over SSL

This is another one that might be a false positive or might be legit. If you look at the documentation for PHP’s setcookie function, you will see the sixth parameter sets a restriction so cookies are only sent over secure connections. If you are not setting this restriction, the vulnerability is legitimate and you should sort that. But … if you followed PHP’s documentation and passed 1 to the parameter? FoD is falsely reporting that the parameter is not set to true.

In this case, the solution is easy enough: change your perfectly valid 1 in the secure parameter to the boolean true.
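
A sketch of the change; the cookie name, value, and domain here are made up, and only the sixth argument (secure) is the point:

setcookie("sessionToken", $strToken, time() + 3600, "/", "www.example.com", 1, true);    // flagged by FoD
setcookie("sessionToken", $strToken, time() + 3600, "/", "www.example.com", true, true); // passes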

And voila, the vulnerability has been remediated.