Category: System Administration

Updating Weblogic Certificate For OUD Management Utility

This is the process I use to update the WebLogic SSL certificate for our OUD management web interface. 


# PRE-CHANGE VERIFICATION
# There are two environment variables set to allow this to work:
WLSTOREPASS=Wh@t3v3rY0uU53d # WLSTOREPASS is set to whatever is used for the keystore and truststore password
# OUDINST=/path/to/OUD/installation (root into which both java and OUD were installed — if you are using an OS package
# for java, your paths will be different)
# Log into https://hostname.domain.gTLD:7002/console (or whatever your WL console URL is)
# As my WebLogic instance auths users via LDAP, I log in with my UID & pwd ... you may have a generic account like 'admin'
#
# Navigate to Domain Structure => Environment => Servers
# Select "AdminServer"
#
# Keystores tab — will tell you the name of the keystore and trust store
# SSL tab — will tell you the friendly name of the certificate
# Verify the keystore and truststore are $OUDINST/Oracle/Middleware/${HOSTNAME%%.*}.jks
# Verify the friendly name of the certificate is the short hostname
#
# Verify the keystore is using the normal keystore password
# [ldap@dell115 ~]$ $OUDINST/java/jdk/bin/keytool -v -list -keystore $OUDINST/Oracle/Middleware/dell115.jks -storepass $WLSTOREPASS | grep Alias
# Alias name: dell115
# Alias name: win-we
# Alias name: win-root
# Alias name: winca1-root
# Alias name: winca1-issuing
# *** If you do not get any output, remove the "| grep Alias" part and check for errors. "keytool error: java.io.IOException: Keystore was tampered with, or password was incorrect" means the password is different.
# *** Either try to guess the password (company name or 'a' are good guesses, along with the java-typical default of changeit)
# *** so you can continue using the existing password, or you'll need to update the keystore and truststore passwords in the web GUI.
# *** Since the keystores are generated using the process below ... 99% of the time, the password matches.
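#
# If you are stuck guessing, a quick loop over a few candidate passwords beats retyping the keytool line (the candidate list here is illustrative):
for GUESS in changeit a CompanyName "$WLSTOREPASS"; do
   # keytool exits non-zero on a bad storepass, so only a match prints
   $OUDINST/java/jdk/bin/keytool -list -keystore $OUDINST/Oracle/Middleware/${HOSTNAME%%.*}.jks -storepass "$GUESS" > /dev/null 2>&1 && echo "Keystore password is: $GUESS"
done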
#
# Generate a cert with appropriate info, export public/private key as a PFX file named with the short hostname of the server (i.e. dell115.pfx here) and, as the keystore password, use whatever you've set in $WLSTOREPASS
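#
# If your CA hands you separate PEM files rather than a PFX, something like this openssl command bundles them (the PEM file names are placeholders, use whatever your CA issued; -name sets the alias up front, and the GUID lookup below still works either way):
openssl pkcs12 -export -in /tmp/ssl/host-cert.pem -inkey /tmp/ssl/host-key.pem -certfile /tmp/ssl/ca-chain.pem -name ${HOSTNAME%%.*} -out /tmp/ssl/${HOSTNAME%%.*}.pfx -password pass:$WLSTOREPASS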

 # DURING THE CHANGE, as the ldap service account on the server:

mkdir /tmp/ssl

# Put base 64 public keys for our root and web CA in /tmp/ssl as Win-Root-CA.b64.cer and Win-Web-CA.b64.cer
# Put public/private key export from above in /tmp/ssl 

# Import the keychain for your certificate
$OUDINST/java/jdk/bin/keytool -import -v -trustcacerts -alias WIN-ROOT -file /tmp/ssl/Win-Root-CA.b64.cer -keystore /tmp/ssl/${HOSTNAME%%.*}.jks -keypass $WLSTOREPASS -storepass $WLSTOREPASS

$OUDINST/java/jdk/bin/keytool -import -v -trustcacerts -alias WIN-WEB -file /tmp/ssl/Win-Web-CA.b64.cer -keystore /tmp/ssl/${HOSTNAME%%.*}.jks -keypass $WLSTOREPASS -storepass $WLSTOREPASS 

# get GUID for cert within PFX file
HOSTCERTALIAS="$($OUDINST/java/jdk/bin/keytool -v -list -storetype pkcs12 -keystore /tmp/ssl/${HOSTNAME%%.*}.pfx -storepass $WLSTOREPASS | grep Alias | cut -d: -f2-)"

# Import the private key
$OUDINST/java/jdk/bin/keytool -importkeystore -srckeystore /tmp/ssl/${HOSTNAME%%.*}.pfx -destkeystore /tmp/ssl/${HOSTNAME%%.*}.jks -srcstoretype pkcs12 -deststoretype JKS -alias $HOSTCERTALIAS -storepass $WLSTOREPASS -srcstorepass $WLSTOREPASS

# Change the alias to match what is configured in the web GUI
$OUDINST/java/jdk/bin/keytool -changealias -alias $HOSTCERTALIAS -destalias ${HOSTNAME%%.*} -keypass $WLSTOREPASS -keystore /tmp/ssl/${HOSTNAME%%.*}.jks -storepass $WLSTOREPASS
 

# Verify you have a WIN-ROOT, WIN-WEB, and hostname record

$OUDINST/java/jdk/bin/keytool -v -list -keystore /tmp/ssl/${HOSTNAME%%.*}.jks -storepass $WLSTOREPASS | grep Alias

# Stop the weblogic server

# Back up current keystore file and move new one into place
CURRENTDATE="$(date +%Y%m%d)"
mv $OUDINST/Oracle/Middleware/${HOSTNAME%%.*}.jks $OUDINST/Oracle/Middleware/$CURRENTDATE.jks
cp /tmp/ssl/${HOSTNAME%%.*}.jks $OUDINST/Oracle/Middleware/${HOSTNAME%%.*}.jks

# Start the weblogic server in the screen session, then disconnect from the screen session

# Assuming success
rm -rf /tmp/ssl

# Backout is
# stop weblogic
mv $OUDINST/Oracle/Middleware/$CURRENTDATE.jks  $OUDINST/Oracle/Middleware/${HOSTNAME%%.*}.jks
# start weblogic

Oracle Unified Directory Bug – Paged Queries

I’ve encountered a bug with paged queries to a front end (“directory proxy”) Oracle Unified Directory server. When the load balancing algorithm is configured to distribute traffic equally across the back-end servers (the proportional distribution algorithm), some queries return duplicate records. Not quite an infinite loop — for a small-ish (like 1,300 objects) result set, I usually reach the end of the returned data around 50k records. But certainly not a valid result set either. And for a large result set (like 13,000 objects), it seems endless.

The oddest thing, though, is that different filters which produce the same result set do not all return duplicate results. Our uid values are algorithmically formed — those with employeeType of RealEmployee all start with ‘A’, those with employeeType of Contractor all start with ‘B’ type of rule. The prefix is then followed by a numeric sequence number. The filter (&(|(uid=A*)(uid=B*))(sn=*)) duplicates results and seemingly runs forever. The filter (&(|(employeeType=RealEmployee)(employeeType=Contractor))(sn=*)) returns the ~11k expected results. Go figure. Although this is algorithmically quite odd, it does provide a nice work-around to the bug: I just had to try different filters until I found one that produced non-duplicated results.
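
A paged search with duplicate detection makes the bug easy to demonstrate. A sketch using the OpenLDAP client's paged-results control (host, credentials, and base DN are placeholders for your environment):

ldapsearch -x -H ldap://oudproxy.mydomain.gTLD:389 -D "cn=Directory Manager" -W \
    -b "ou=users,o=orgName" -E pr=500/noprompt \
    "(&(|(uid=A*)(uid=B*))(sn=*))" uid | grep '^uid:' | sort | uniq -d

Any output at all means duplicate entries came back.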

LDIF To Move User Accounts In Oracle Unified Directory

Since I keep wasting an hour to figure this out every time I need to move a user within OUD, I’m writing down the proper LDIF text to move a user from ou=disabled,o=orgName to ou=users,o=orgName.

dn: uid=TestUser123,ou=disabled,o=orgName
changetype: moddn
newrdn: uid=TestUser123
deleteoldrdn: 1
newSuperior: ou=users,o=orgName

For some reason, Oracle’s documentation omits the newrdn component and it all fails spectacularly.
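
Feed the LDIF to ldapmodify to perform the move. A sketch, with host, port, and credentials as placeholders:

ldapmodify -h oudhost.mydomain.gTLD -p 389 -D "cn=Directory Manager" -w "$BINDPW" -f move-user.ldif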

Git For Configuration Management

I am starting to use git to manage application server configurations — partially to ensure team members are familiarizing themselves with git and thinking about it when they update code (we’ve seen a LOT of tweaks that are not pushed to the git server), but also to reduce the administrative overhead of managing servers.

The best use case thus far has been our sendmail environment — seven servers with three configuration bases. By issuing certificates with SAN values for each host name and the VIP name, we are able to use the same cert and config file on each server in a functional group. Admins can make changes to the config offline (i.e. we’re not live-editing config files on the sendmail servers), there is history of who made the changes (and a quick means of reverting changes), and, using a cron’d pull, we can ensure changes are consistent across the environment.
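
The cron'd pull is nothing fancy. A minimal sketch, assuming the config is cloned to /etc/mail and tracks origin/master (paths, branch, and the restart step are placeholders for your setup):

#!/bin/sh
# pull the latest config; only rebuild and restart when something actually changed
cd /etc/mail || exit 1
git fetch --quiet origin
if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
    git merge --ff-only --quiet origin/master
    make -s && systemctl restart sendmail
fi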

OUD Returning Some DirectoryString Syntax Values As UTF-8 Encoded Bytes

We are still in the process of moving the last few applications from DSEE to OUD 11g so the DSEE 6.3 directory can be decommissioned. Just two to go! But the application, when pointed to the OUD servers, gets “Unable to cast object of type 'System.Byte[]' to type 'System.String'” when retrieving values for a few of our custom schema attributes with DirectoryString syntax.

This code snippet works fine with DSEE 6.3.

string strUserGivenName = (String)searchResult.Properties["givenName"][0];
string strUserSurname = (String)searchResult.Properties["sn"][0];
string strSupervisorFirstName = (String)searchResult.Properties["positionmanagernamefirst"][0];
string strSupervisorLastName = (String)searchResult.Properties["positionmanagernamelast"][0];

Direct the connection to the OUD 11g servers, and an error is returned.


The attributes use the same syntax – DirectoryString, OID 1.3.6.1.4.1.1466.115.121.1.15.

00-core.ldif:attributeTypes: ( 2.5.4.41 NAME 'name' EQUALITY caseIgnoreMatch SUBSTR caseIgnoreSubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} X-ORIGIN 'RFC 4519' )
00-core.ldif:attributeTypes: ( 2.5.4.4 NAME ( 'sn' 'surname' ) SUP name X-ORIGIN 'RFC 4519' )
00-core.ldif:attributeTypes: ( 2.5.4.42 NAME 'givenName' SUP name X-ORIGIN 'RFC 4519' )

99-user.ldif:attributeTypes: ( positionManagerNameMI-oid NAME 'positionmanagernamemi' DESC 'User Defined Attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'user defined' )
99-user.ldif:attributeTypes: ( positionManagerNameFirst-oid NAME 'positionmanagernamefirst' DESC 'User Defined Attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'user defined' )
99-user.ldif:attributeTypes: ( positionManagerNameLast-oid NAME 'positionmanagernamelast' DESC 'User Defined Attribute' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE X-ORIGIN 'user defined' )

I’ve put together a quick check to see if the returned value is an array, and if it is then get a string from the decoded byte array.

string strUserGivenName = (String)searchResult.Properties["givenName"][0];
string strUserSurname = (String)searchResult.Properties["sn"][0];

string strSupervisorFirstName = "";
string strSupervisorLastName = "";
// OUD hands these custom attributes back as byte arrays; DSEE returned strings
if (searchResult.Properties["positionmanagernamefirst"][0].GetType().IsArray){
    strSupervisorFirstName = System.Text.Encoding.UTF8.GetString((byte[])searchResult.Properties["positionmanagernamefirst"][0]);
}
else{
    strSupervisorFirstName = searchResult.Properties["positionmanagernamefirst"][0].ToString();
}

if (searchResult.Properties["positionmanagernamelast"][0].GetType().IsArray){
    strSupervisorLastName = System.Text.Encoding.UTF8.GetString((byte[])searchResult.Properties["positionmanagernamelast"][0]);
}
else{
    strSupervisorLastName = searchResult.Properties["positionmanagernamelast"][0].ToString();
}

Voila

The outstanding question is if we need to wrap *all* DirectoryString syntax attributes in this check to be safe or if there’s a reason core schema attributes like givenName and sn are being returned as strings whilst our add-on schema attributes have been encoded.
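
One way to narrow that down is to compare what each server actually returns on the wire. The OpenLDAP client base64-encodes (:: instead of :) any value it does not consider cleanly printable, so querying the same entry against a DSEE host and an OUD host shows whether the server is the one changing behavior. A sketch, with placeholder host and DN:

ldapsearch -x -H ldap://oudhost.mydomain.gTLD:389 -D "cn=Directory Manager" -W \
    -b "uid=TestUser123,ou=users,o=orgName" -s base "(objectClass=*)" \
    givenName sn positionmanagernamefirst positionmanagernamelast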

Isolated Guest Network On Merlin 380.69_2 (Asus RT-AC68R)

We finally got rid of Time Warner Cable / Spectrum / whatever they want to call themselves this week, and their overpriced Internet that includes five free outages between 1100 and 1500 each day. But the firmware on the new ISP’s router doesn’t have a facility to back up the config. And if we’re going to have static IPs for all of our speakers, printers, servers … we don’t want to have to re-enter all of that data if the router config gets reset. Same with configuring the WiFi networks. And, and, and. So instead of using the snazzy new router alone, we have our old router on .2 and the new router on .1. Everything actually connects to the old router and uses the DHCP server on the old router; the new router is only used as the default gateway. Worked fine until we tried to turn on the guest network.

I found someone in Internet-land who has the exact same configuration and wants to permit guests to use the LAN printer. His post included some ebtables rules to allow guest network clients access to his printer IP. Swapped his printer IP for our router IP and … nada.

And then I realized that the router is not the packet destination IP when the guest client attempts to communicate outside our network. The router is the destination MAC address. So you cannot add an ebtables rule to the router’s IP address and expect traffic to flow.

The first thing you need to do is figure out the upstream router’s MAC address. From the Asus, you can query the arp table. If the command says “No match found in # entries”, ping the router and try again.

root@ASUS-RT-AC68R:/tmp/home/root# arp -a 10.5.5.1
? (10.5.5.1) at a3:5e:c4:17:a3:c0 [ether] on br0

The six pairs of hex numbers separated by colons – that’s the MAC address. You have to allow bidirectional communication from the guest network interface (wl0.2 for us) with the upstream router’s MAC address. You also have to allow broadcast traffic so guest devices are able to ARP for the router’s MAC address.

To have a persistent config, enable jffs and add the config lines to something like services-start:

root@ASUS-RT-AC68R:/tmp/home/root# cat /jffs/scripts/services-start
#!/bin/sh
logger "SERVICES-START: script start"
# Prevent Echo dots from sending multicast traffic to speaker network
ebtables -I FORWARD -o wl0.1 --protocol IPv4 --ip-source 10.0.0.36 --ip-destination 239.255.255.250 -j DROP
# Guest network - allow broadcast traffic so devices can ARP for router MAC
ebtables -I FORWARD -d Broadcast -j ACCEPT
# Guest network - allow communication to and from router MAC
ebtables -I FORWARD -s a3:5e:c4:17:a3:c0 -j ACCEPT
ebtables -I FORWARD -d a3:5e:c4:17:a3:c0 -j ACCEPT
# This should be automatically added for guest network, but it goes missing sometimes so I am adding it again
ebtables -A FORWARD -o wl0.2 -j DROP
ebtables -A FORWARD -i wl0.2 -j DROP


Use -L to view your ebtables rules:

root@ASUS-RT-AC68R:/tmp/home/root# ebtables -L
Bridge table: filter

Bridge chain: INPUT, entries: 0, policy: ACCEPT

Bridge chain: FORWARD, entries: 16, policy: ACCEPT
-d a3:5e:c4:17:a3:c0 -j ACCEPT
-s a3:5e:c4:17:a3:c0 -j ACCEPT
-d Broadcast -j ACCEPT
-p IPv4 -o wl0.1 --ip-src 10.0.0.36 --ip-dst 239.255.255.250 -j DROP
-o wl0.2 -j DROP
-i wl0.2 -j DROP

Voila, guests can access the Internet & DNS on the .1 router but cannot access anything on the internal network. Of course you can add some specific IPs as allowed destinations too – like the printers in the example that started me down this path; a sketch follows.
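
Assuming a printer at 10.5.5.20 (swap in your device's IP), the rules use the same match style as the Echo rule in the script above:

ebtables -I FORWARD --protocol IPv4 --ip-destination 10.5.5.20 -j ACCEPT
ebtables -I FORWARD --protocol IPv4 --ip-source 10.5.5.20 -j ACCEPT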

DSEE 6.3 To OUD 11g Transition

There’s no direct path to replicate data from DSEE 6.3 to OUD 11g. Not unreasonable since DSEE is the Sun product based on the Netscape Directory Server and OUD is the Oracle product based on OpenDS – they weren’t exactly designed to allow easy coexistence that would permit customers to switch from one to the other. Problem is, with Oracle’s acquisition of Sun & axing of the DSEE product line … customers *need* to interoperate or do a flash cut.
Since our Identity Management (IDM) platform was not able to prep development work and implement their changes along with the directory replacement, a flash cut was right out. I’ve done flash cuts before — essentially ran two completely different directories in parallel with data fed from the Identity Management platform, tested against the new directory using quick modification to the OS hosts file, then reconfiguring the virtual IP on the load balancer to direct the existing VIP to the new service hosts. Quick/easy fail-back is to set the VIP to the old config and sort out whatever is wrong on the new hosts. A lot lower risk than a traditional ‘flash cut’ approach as long as you trust the IDM system to keep data in sync. But lacking an IDM system, flash cut is typically a non-starter anyway.
There is a migration path. Oracle put some development effort into the DSEE product line prior to discontinuing it. DSEE7 was the Sun-distributed “next version”. It was not widely deployed prior to the Oracle acquisition. Oracle took over DSEE7 development but called it DSEE11 (to match the OUD version numbering, I guess?). Regardless of the rationale, you’ll see the “next version” DSEE product referred to as both DSEE7 and DSEE11.
There’s not a direct replication between Oracle DSEE11 and Oracle OUD11. Oracle created a “replication gateway” that handles, among other things, schema name mapping (only Netscape would use attribute names like nsAccountLockout, and that nomenclature carried through to the Sun product). Oracle did a decent job of testing DSEE11<=>OUD11 Replication Gateway interoperability. I don’t know if they just assumed DSEE6 would work because DSEE11 did, or if they assumed the installation base for DSEE6 was negligible (i.e. didn’t bother to test older revisions), but we found massive bugs in the replication gateway working with DSEE6. “You cannot import the data to initialize the OUD11 directory” type of bugs, which I was willing to work around by manually editing the export file, but subsequent “updates do not get from point ‘A’ to point ‘B’” bugs too. The answer from Oracle was essentially “upgrade to DSEE11” … which, if I could flash-cut upgrade DSEE6 to DSEE11 (see: IDM platform couldn’t do that), I could just cut it to OUD11 and be done. Any non-trivial change was a non-starter, but Oracle wasn’t going to dump a bunch of development time into fixing replication between a dead product and their shiny new thing.
I worked out a path that used tested and working components — DSEE6 replicated just fine with DSEE11, DSEE11 replicated just fine with the OUD11g replication gateway, and the OUD11g replication gateway replicated fine with OUD11g. Instead of introducing additional expense and time setting up dedicated replication translation servers, I installed multiple components on the new servers: a DSEE11 directory on one of the new OUD servers, the replication gateway on another, and (of course) the OUD11g directory we actually intended to run on those new servers.
This creates additional monitoring overhead – watching replication between three different directories and ensuring all of the services are running – but allows the IDM platform to continue writing changes to the DSEE6.3 directory until they are able to develop and test changes that allow them to use OUD11g directly.
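
For the monitoring piece, a cron'd check on each hop does the job. A rough sketch using OUD's dsreplication tool (instance path, admin port, credential file, and the alert address are all placeholders; confirm the flags against dsreplication --help on your version, and the DSEE hops need their own checks):

#!/bin/sh
# alert when the replication topology reports errors
STATUS="$(/path/to/OUD/instance/bin/dsreplication status -h localhost -p 4444 \
    --adminUID admin --adminPasswordFile /path/to/pwfile -X -n 2>&1)"
echo "$STATUS" | grep -qi error && \
    echo "$STATUS" | mailx -s "Replication problem on $HOSTNAME" ldapadmins@mydomain.gTLD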

Systemd (a.k.a. where did my log files go!?!?!)

A systemd Primer For sysvinit Users

Background:

Starting in Fedora 15 and RHEL 7, systemd replaces sysvinit. This is a touchy subject among Unix folks – some people think it’s a great change, others think Linux has been ruined forever. Our personal opinions of the shift don’t matter: vendors are implementing it, WIN Linux servers use it, so we need to know it. Basically, throw “systemd violates the minimalist, modular philosophy at the core of Unix development” on the “but emacs is so awesome, why are we using vim” and “BETA outperforms VHS any day of the week” pile.

Quick terminology – services are now called units. You’ll see that word a lot. A unit is configured in a “unit file”. Additionally, “run levels” (0-6) have been replaced with the concept of “targets” that have friendly names.

What’s the difference?

Sysvinit wasn’t designed to know about your system, it was designed to run scripts on your system. Sysvinit essentially runs scripts, whereas systemd is a service manager that knows about the system. One place this becomes apparent: if you manually run the run line from a sysvinit script and then check the service status, it will show running because the binary has a PID. If you do the same with systemd, it will say the service is down. This is like Windows – if you have a Docker service that runs “C:\Program Files\Docker\Docker\com.docker.service” set to run manually, and use Start-Run to run the exact same string … the service will not show as running.
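
You can see this for yourself on a RHEL7 box (sendmail used as the example unit):

# start the daemon by hand, bypassing systemd
/usr/sbin/sendmail -bd -q1h
# the binary is running and has a PID ...
pgrep sendmail
# ... but systemd reports the unit inactive, because it did not start it
systemctl status sendmail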

Systemd manages a lot of different unit types. As application owners, we’ll use ‘service’ units. ‘Mount’ or ‘automount’ type units manage mountpoints. Socket and device unit types manage sockets (which have associated service unit files using the socket) and devices. Because systemd manages sockets, inetd/xinetd have been obsoleted.

Sysvinit scripts could run user-defined commands. If the init script for myapplication has a section called “bob”, you can run “service myapplication bob” and it will do whatever the ‘bob’ part of the script says to do. Systemd has a fixed list of directives – start, stop, restart, reload, status, enable, disable, is-enabled, list-unit-files, list-dependencies, daemon-reload. You cannot just make a new one.

Systemd may also require a system reboot for more than just kernel patches. This is really different, and I expect there will be a learning curve as to what requires a reboot.

Log files have “vanished”. If you are using a default installation, you won’t find /var/log/messages. You can use “journalctl -f” to tail the equivalent of the messages file. The systemd log files are stored in binary format – potentially corruptible, which is another aspect of the change Unix-types don’t care for.

What does systemd give me?

Systemd doesn’t just start/stop a service when run levels change. A unit can be started because it is configured to start on the runlevel (just like sysvinit scripts), if another service requires it, if the service abends, or if dbus triggers it. “If another service requires it” – that’s a dependency chain. Instead of defining an order and hoping everything you need was loaded by the time the init script ran, systemd allows you to include an “After” directive – units started before the current unit or “Before” – units that will not be started until the current unit starts. Additional directives for “Requires” – units which must be activated to activate the current unit and “Wants” – units that will be started in parallel with the current unit but failing to start these units will not fail the current unit.

A directive, “Conflicts”, allows systemd to identify other units that cannot coexist with the current unit. Conflicting units will be stopped to allow the current unit to start. In addition to the base command starting in the unit file (ExecStart), there are pre (ExecStartPre) and post (ExecStartPost) operations that are run before/after the base command. These could be related to the service itself but do not have to be. You could run a mail command line to alert an admin every time the unit starts or stops cleanly.
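
Pulling those directives together, a hypothetical unit file might look like this (the binary paths and unit names are made up for illustration):

[Unit]
Description=My Application
After=network.target
Requires=network.target
Wants=myhelper.service
Conflicts=conflictingapp.service

[Service]
ExecStartPre=/usr/local/bin/myapp-preflight
ExecStart=/usr/local/bin/myapp --foreground
ExecStartPost=/usr/local/bin/mail-admins "myapp started"
Restart=on-failure

[Install]
WantedBy=multi-user.target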

Another nice feature of systemd is user-level services – using systemctl --user will control unit files located in user-specific directories like /usr/lib/systemd/user/ and ~/.config/systemd/user/.

Using systemd: (Warning: this is going to get odd)

You use systemctl to control units, and you use journalctl to view the binary blobs that have replaced log files. Use the man pages or your favourite search engine if you want details. The general syntax for systemctl is “systemctl operation unit.type” – e.g. “systemctl restart sendmail” would restart sendmail.

Chkconfig has been completely supplanted. Use “systemctl enable unit.type” and “systemctl disable unit.type” to control whether a service auto-starts. Instead of using chkconfig --list, you can query the startup state of an individual unit with “systemctl is-enabled unit.type”.

There’s a compatibility shell script named ‘service’ that replaces the one you used on sysvinit systems. It turns the old “service something-or-other action” into “systemctl action name.service”, so the old syntax still works.

Here’s the odd part – it is quite easy to define a permitted sudo operation that allows a non-root user to control sysvinit services. Allow “service sendmail” and the user can run “service sendmail start”, “service sendmail stop”, “service sendmail status”, even “service sendmail RandomStuffITossedIntoTheFile”. Because the service name and directive are swapped around in systemctl, we would have to enumerate each individual directive that should be permitted (see the sudoers sketch below). More secure, because RandomStuffITossedIntoTheFile should not make the cut, but we haven’t done this yet. So until we go through and enumerate the reasonable actions (are there directives beyond start/stop/status that we should be running? do we have any business enabling and disabling our services?), submit the access request, confirm it’s all functioning as expected, and remove the “sudo service” access … continue using “sudo service something-or-other action”. We will advise you when the systemctl sudo access has been granted so we can start using the “new way” to control services on RHEL7 systems.
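
For reference, the enumerated sudoers entries will look something like this (group name and action list are illustrative):

%appadmins ALL=(root) NOPASSWD: /usr/bin/systemctl start sendmail, \
    /usr/bin/systemctl stop sendmail, \
    /usr/bin/systemctl restart sendmail, \
    /usr/bin/systemctl status sendmail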

Unlike init scripts, changes to systemd unit files are not immediately activated on the system. Running “systemctl daemon-reload” makes systemd aware of the config change.

Using journalctl:

Our Unix team has implemented rsyslogd to output log data to the expected files. This means you can more or less ignore journalctl – tail/grep the log file as usual. I don’t foresee this changing in the near to mid term, but if you use cloud-hosted sandbox servers (i.e. boxes that don’t have the Unix group’s standard config) … journalctl is what happened to all the log files you cannot find.

To view logs specific to an individual unit, use journalctl -u unit.type. Additionally, “systemctl status unit.type” will display the last handful of log lines from the unit.
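
A few journalctl invocations cover most day-to-day needs:

journalctl -u sendmail.service            # everything logged by one unit
journalctl -u sendmail.service -f         # tail a unit's log, like tail -f
journalctl --since "2 hours ago"          # time-bounded view of all logs
journalctl -p err -b                      # errors since the last boot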

Load Balance and Failover Sendmail Mailertable Relays

A coworker asked me today how to get the mailertable relays to load balance instead of fail over. The trick is to think beyond sendmail. The square brackets around hosts tell sendmail not to check for an MX record (you’re generally using an A record, so this saves a tiny little bit of time … not to mention *if* there is an MX record there, it creates a whole heap-o confusion). *But* the MX lookup is right useful when setting up load balanced or failover relay targets.

Single host relay in the mailertable looks like this:
yourdomain.gTLD      relay:[somehost.mydomain.gTLD]

If you want to fail over between relays (that is try #1, if it is unavailable try #2, and so on), you can stay within the mailertable and use:
yourdomain.gTLD      relay:[somehost.mydomain.gTLD]:[someotherhost.mydomain.gTLD]

Or even try direct delivery and fail back to a smart host:
yourdomain.gTLD      relay:%1:smart-host

But none of this evenly distributes traffic across multiple servers. The trick to load balancing within the mailertable is to create equal weight MX records in your domain to be used as the relay.

In ISC Bind, this looks like:
yourdomainmailrouting.mydomain.gTLD     IN MX 10 somehost.mydomain.gTLD.
yourdomainmailrouting.mydomain.gTLD     IN MX 10 someotherhost.mydomain.gTLD.

Once you have created the DNS records, simply use the MX record hostname in your mailertable:

yourdomain.gTLD      relay:yourdomainmailrouting.mydomain.gTLD

By leaving out the square brackets, sendmail will resolve an MX record for ‘yourdomainmailrouting.mydomain.gTLD’, find the equal weight MX records, and do the normal sendmail thing to use both.
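
You can sanity-check the records with dig before pointing the mailertable at them:

dig +short MX yourdomainmailrouting.mydomain.gTLD
# 10 somehost.mydomain.gTLD.
# 10 someotherhost.mydomain.gTLD.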

Sendmail In CHROOT Jail

Running our sendmail mail relay in a chroot jail, ‘make’ does not update sendmail config files with changes. While I’m certain there’s a way to sort that, it’s a lot easier to go back to the old-school way of updating sendmail.cf and sendmail’s hash files.

Modifying Sendmail Configuration (sendmail.mc) on Servers with CHROOT Jailed Sendmail

  1. SSH to server using your ID
  2. Change to the sendmail service account (e.g. sudo /bin/su - sendmail)
  3. Change directory to the jailed sendmail /etc/mail location (e.g. cd /smt00p20/sendmail/etc/mail)
  4. vi sendmail.mc
  5. Make requisite changes and save file
  6. m4 sendmail.mc > sendmail.cf
  7. Under your ID, restart sendmail using "sudo systemctl stop sendmail;sudo systemctl start sendmail"
  8. Validate changes

Modifying Sendmail Data Files on Servers with CHROOT Jailed Sendmail

  1. SSH to server using your ID
  2. Change to the sendmail service account (e.g. sudo /bin/su - sendmail)
  3. Change directory to the jailed sendmail /etc/mail location (e.g. cd /smt00p20/sendmail/etc/mail)
  4. vi filetoedit
  5. Make requisite changes and save file
  6. makemap hash ./filetoedit.db < ./filetoedit
  7. Under your ID, restart sendmail using "sudo systemctl stop sendmail;sudo systemctl start sendmail"
  8. Validate changes

Where filetoedit is the name of the data file. For example, run “makemap hash ./access.db < ./access” to update the changes to the access file into access.db
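
If several data files changed at once, a quick loop rebuilds them all using the same makemap invocation (the file list is illustrative; match it to your environment):

cd /smt00p20/sendmail/etc/mail
for FILE in access mailertable virtusertable; do
    makemap hash ./$FILE.db < ./$FILE
done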