Category: System Administration

Load Runner And Statistical Analysis Thereof

I had offhandedly mentioned a statistical analysis I had run in the process of writing and implementing a custom password filter in Active Directory. It’s a method I use for most of the major changes we implement at work – application upgrades, server replacements, significant configuration changes.

To generate the “how long did this take” statistics, I use a Perl script built on the Time::HiRes module (_loadsimAuthToCentrify.pl), which provides microsecond-resolution timing. There’s an array of test scenarios — my most recent test was Unix/Linux host authentication using pure LDAP authentication and Centrify authentication, so the array held fully qualified hostnames. Sometimes there’s an array of IDs on which to test — TestID00001, TestID00002, TestID00003, …., TestID99999. And there’s a function to perform the actual test.

I then have a loop that generates a pseudo-random number and uses it to select the test to run (and the user ID to use, if applicable):

my $iRandomNumber = int(rand() * 100);     # pseudo-random integer, 0-99
$iRandomNumber = $iRandomNumber % $iHosts; # reduce to a valid index into the host array
my $strHost = $strHosts[$iRandomNumber];   # scenario selected for this cycle

The time is recorded prior to running the function (my $t0 = [gettimeofday];) and the elapsed time is calculated when returning from the function (my $fElapsedTimeAuthentication = tv_interval ($t0, [gettimeofday]);). The test result is compared to an expected result and any mismatches are recorded.
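Pulling those pieces together, the core loop looks something like this trimmed sketch (the host list is a stand-in, and doAuthenticationTest is a stub standing in for the real scenario array and test function):

use strict;
use Time::HiRes qw(gettimeofday tv_interval);

my @strHosts = ('host01.example.com', 'host02.example.com');   # stand-in scenario array
my $iHosts = scalar @strHosts;
my $iCycles = 100000;                                          # cycles per test run

open( my $LOG, '>>', './loadsimResults.log' ) or die "Cannot open log: $!";
for( my $i = 0; $i < $iCycles; $i++ ){
        # Pseudo-randomly select the scenario for this cycle
        my $iRandomNumber = int(rand() * 100) % $iHosts;
        my $strHost = $strHosts[$iRandomNumber];

        my $t0 = [gettimeofday];                               # start time, microsecond resolution
        my $strResult = doAuthenticationTest($strHost);        # stubbed test function
        my $fElapsedTimeAuthentication = tv_interval( $t0, [gettimeofday] );

        # Record scenario, result, and elapsed time for later analysis
        print $LOG "$strHost\t$strResult\t$fElapsedTimeAuthentication\n";
}
close $LOG;

sub doAuthenticationTest{
        my ($strHost) = @_;
        return 'SUCCESS';      # the real function authenticates to $strHost and returns the result
}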

Once the cycle has completed, the test scenario, results, and time to complete are recorded to a log file. Some tests are run multi-threaded and across multiple machines – in which case the result log file is named with both the running host’s name and a thread identifier. All of the result files are concatenated into one big result log for analysis.

A baseline test is run before the change is made, then a new test is run for each variant of the change, for comparison. We then want to confirm that the general time to complete an operation has not been negatively impacted by the proposed change (or select a route based on the best performance outcome).

Each scenario’s result set is dropped into a tab on an Excel spreadsheet (CustomPasswordFilterTiming – I truncated a lot of data to avoid publishing a 35 meg file, so the numbers on the individual tabs no longer match the numbers on the summary tab). On the time column, max/min/average/stdev functions are run to summarize the result set. I then break the time range between 0 and the max time into buckets and use the countif function to determine how many results fall into each bucket (it’s easier to count the number under a range and then subtract the numbers from previous buckets than to make a combined statement to just count the occurrences in a specific bucket).
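For example, with the elapsed times in column B and hypothetical bucket boundaries at 0.5 and 1.0 seconds, the per-bucket counts fall out of cumulative COUNTIF results:

=COUNTIF(B:B,"<=0.5")
=COUNTIF(B:B,"<=1.0")-COUNTIF(B:B,"<=0.5")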

Once this information is generated for each scenario, I create a summary tab so the data can be easily compared.

And finally, a graph is built using the lower part of that summary data. Voila: a quickly viewed visual representation of several million cycles. This is what gets included in the project documentation for executive consideration. The whole spreadsheet is stored in the project document repository – showing our due diligence in validating that the user experience should not be negatively impacted, as well as providing a baseline of expected performance should the production implementation yield user experience complaints.

 

New (To Me) WordPress Spam Technique

In the past week, one particular image that I posted has received about a hundred comments. Not real comments from people who enjoyed the image, unfortunately. Spam-bot comments. I get a few spam comments a month, easily just dropped. But exponentially increasing numbers of comments were showing up on this page. The odd thing, though, is it wasn’t a page or a post. It was an image embedded in a post.

Evidently embedded pictures have their own “attachment page” — a page that includes a comment dialogue. I guess that’s useful for someone … maybe an artist who uses a gallery front-end to their media can still get comments on their pictures if the gallery itself doesn’t provide commenting. Not a problem I need solved. WordPress includes a comments_open filter that allows you to programmatically control where comments are available (provided your theme uses the filter).

How do you add a function to WordPress? I find a lot of people editing WordPress or theme files directly. Not a good idea — the next upgrade is going to blow your changes away. If you use an upgrade script, you could essentially ‘patch’ the theme during the upgrade process (append your function to the distributed file). Or you can just add your function as a plug-in. In your wp-content/plugins folder, make a folder with a good descriptive name for your plugin (i.e. don’t call it myPlugin if you have any thoughts of distributing it). In that folder, make a PHP file with the same name (e.g. my filterCommentsByType folder has a filterCommentsByType.php file).

For what I’m doing, the comment header is longer than the code! The comment header is used to populate the Plugins page in your admin console. If you omit the header component, your plugin will not show up to be activated. Add your function and save the file:

<?php
/**
* Plugin Name: Filter Comments By Type
* Plugin URI: http://lisa.rushworth.us
* Description: This plugin allows commenting to be disabled based on post type
* Version: 1.0.0
* Author: Lisa Rushworth
* Author URI: http://lisa.rushworth.us
* License: GPL2
*/
add_filter( 'comments_open', 'remove_comments_by_post_type', 10, 2 );
function remove_comments_by_post_type( $boolInitialStatus, $iPostNumber ) {
    $post = get_post( $iPostNumber );
    if( $post->post_type == 'attachment' ){ return false; }   // never allow comments on attachment pages
    else{ return $boolInitialStatus; }                        // otherwise defer to the existing setting
}
?>

When you go to your admin console’s plugins section, your filter will appear in the list, deactivated. Click to activate it.

Voila, no more comments on attachment posts. Or whatever other type of post on which you wish to restrict commenting.

Setting Up DNSSEC

Last time I played around with the DNS Security Extensions (DNSSEC), the root and .com zones were not signed. Which meant you had to manually establish trusts before there was any sort of validation happening. Since the corporate standard image didn’t support DNSSEC anyway … wasn’t much point on either the server or client side. I saw ICANN postponed a key rollover for root a few days ago, and realized hey, root is signed now. D’oh, way to keep up, huh?

So we’re going to sign the company zones and make sure our clients are actually looking at zone signatures when they exist. Step #1 – signing our test zone. I do this in a screen session because it can take a long time to generate a key. If the process gets interrupted for whatever reason, you get to start ALL OVER. I am using ISC Bind – how to do this on any other platform, well LMGTFY 🙂

# Start a screen session
screen -S LJR-DNSSEC-KeyGen
# Use dnssec-keygen to create a zone signing key (ZSK) – bit value is personal preference
dnssec-keygen -a NSEC3RSASHA1 -b 2048 -n ZONE rushworth.us
# Then use dnssec-keygen to create a key signing key (KSK) – bit value is still personal preference
dnssec-keygen -f KSK -a NSEC3RSASHA1 -b 4096 -n ZONE rushworth.us

Grab the content of the *.key files and append them to your zone file.
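From there, signing looks something like this (file names and key tags below are illustrative; use the actual K* files dnssec-keygen produced):

# Append the public keys to the zone file
cat Krushworth.us.+007+12345.key Krushworth.us.+007+54321.key >> rushworth.us.db
# Sign the zone: -3 adds an NSEC3 chain with the given salt, -N INCREMENT bumps the serial,
# and matching private keys are read from the current directory
dnssec-signzone -3 abcd1234 -N INCREMENT -o rushworth.us rushworth.us.db
# Update the zone's "file" directive to point at rushworth.us.db.signed and reload named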

Apache HTTP Sandbox With Docker

I set up a quick Apache HTTPD sandbox — primarily to test authentication configurations — in Docker today. It was an amazingly quick process.

Install an image that has an Apache HTTPD server:    docker pull httpd
Create a local file system for the Apache config files (c:\docker\httpd\httpd.conf for the main config; c:\docker\httpd\conf.d for all of the extras like ssl.conf and php.conf, plus web site configs; and c:\docker\httpd\vhtml for the web site content)
Launch the container: docker run --detach --publish 80:80 --publish 443:443 --name ApacheWebServer --restart always -v /c/docker/httpd/httpd.conf:/etc/httpd/conf/httpd.conf:ro -v /c/docker/httpd/conf.d/:/etc/httpd/conf.d/:ro -v /c/docker/httpd/vhtml/:/var/www/vhtml/:ro httpd (note: the container-side paths need to match where your image expects its config; the official httpd image reads /usr/local/apache2/conf/httpd.conf, while the /etc/httpd paths shown here follow the Red Hat layout)

Shell into it (docker exec -it ApacheWebServer bash) to look around, or just access http://localhost from the Docker host.

PPM Via Windows Authenticated Proxy

The office proxy used to use BASIC authentication. Which was terrible: transmission was done over clear text. Some years ago, they implemented a new proxy server capable of using Kerberos tickets for authentication (actually the old one could have done it too – I’ve set up the Kerberos realm on another implementation of the same product, but it wasn’t a straightforward clickity-click and you’re done). Awesome move, but it did break everything that used the HTTP_PROXY environment variable with creds included (yeah, I have a no-rights account with proxy access and put that in clear text all over the place). I just stopped using wget and curl to download files. I’d pull them to my Windows box, then scp them to the right place. But every once in a while I need a new perl module that’s available from ActiveState’s PPM. I’d have to fetch the tgz file and install it manually.

Until today — I was configuring a new Fiddler installation. Brilliant program – it’s just a web proxy that you can use for debugging purposes, but it can insert itself into HTTPS communications and provide clear text rendering of encrypted sessions too. It also proxies proxy credentials! There’s a config to allow remote hosts to connect – it’s normally bound to 127.0.0.1:8888, but it can bind to 0.0.0.0:8888 as well. If you have your web browser open & visit a site through the proxy server (i.e. you make sure the browser is authenticating fine) … set your HTTP_PROXY to http://127.0.0.1:8888 (or whatever means the specific program uses to configure a proxy). Voila, PPM hits Fiddler. Fiddler relays the request out to the proxy using the Kerberos token on your desktop. Package installs. Lot of overhead just to avoid unzipping a file … but if you are installing a package with a dozen dependencies … well, it’s a lot quicker than failing your install a dozen times and getting the next prereq!
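In practice, that’s just an environment variable and the install command (module name below is only an example):

set HTTP_PROXY=http://127.0.0.1:8888
ppm install Some-Module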

PHP: Windows Authentication to MS SQL Database

I’ve encountered several people now who have followed “the directions” to allow their IIS-hosted PHP code to authenticate to a MS SQL server using Windows authentication … only to get an error indicating some unexpected ID is unable to log into the SQL server.

Create your application pool and assign it an identity. Turn off fastcgi.impersonate in your php.ini file (snippet below). Create the web site, use the custom application pool … FAIL.
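The php.ini piece is a single directive; with impersonation off, PHP requests run as the application pool identity rather than the authenticated web user:

; php.ini
fastcgi.impersonate = 0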

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="IUSR" />
    </authentication>
  </security>
</system.webServer>

The web site still doesn’t pick up the user from the application pool. Click on Anonymous Authentication, then click “Edit” over in the actions pane. Change it to use the application pool identity here too (why wouldn’t it automatically do so when an identity is provided?? no idea!).
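If you’d rather script it than click through the GUI, the same change can be made with appcmd (site name matching the listings here):

%windir%\system32\inetsrv\appcmd.exe set config "Exchange Back End" /section:anonymousAuthentication /userName:"" /commit:apphost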

C:\Users\administrator.RUSHWORTH>%windir%\system32\inetsrv\appcmd.exe list config "Exchange Back End" /section:anonymousAuthentication
<system.webServer>
  <security>
    <authentication>
      <anonymousAuthentication enabled="true" userName="" />
    </authentication>
  </security>
</system.webServer>

I’ve always seen the null string in userName, although I’ve read that the attribute may be omitted entirely. Once the site is actually using the pool identity, PHP can authenticate to SQL Server using Windows authentication.
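As a quick test, something like this (using the sqlsrv driver; server and database names are placeholders) should connect as the pool identity, since omitting UID and PWD from the connection options selects Windows authentication:

<?php
// Hypothetical server and database names. With no "UID"/"PWD" in the
// options array, the sqlsrv driver uses Windows authentication, i.e.
// the application pool identity configured above.
$strServer = "sqlserver.rushworth.us";
$conn = sqlsrv_connect( $strServer, array( "Database" => "SampleDB" ) );
if( $conn === false ){
    die( print_r( sqlsrv_errors(), true ) );
}
echo "Connected as the pool identity";
?>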

Checking Supported TLS Versions and Ciphers

There have been a number of SSL/TLS vulnerabilities (and deprecated ciphers that should be unavailable, especially when transiting particularly sensitive information). On Linux distributions, nmap includes a script that enumerates SSL/TLS versions and, per version, the supported ciphers.

[lisa@linuxbox ~]# nmap -P0 -p 25 --script +ssl-enum-ciphers myhost.domain.ccTLD

Starting Nmap 7.40 ( https://nmap.org ) at 2017-10-13 11:36 EDT
Nmap scan report for myhost.domain.ccTLD (#.#.#.#)
Host is up (0.00012s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
25/tcp open smtp
| ssl-enum-ciphers:
| TLSv1.0:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.1:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM_8 (dh 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CCM (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_256_CBC_SHA384 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA256 (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_256_CBC_SHA (dh 2048) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 2048) - A
| TLS_DHE_RSA_WITH_CAMELLIA_128_CBC_SHA (dh 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM_8 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CCM (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_CAMELLIA_128_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
|_ least strength: A

Nmap done: 1 IP address (1 host up) scanned in 144.67 seconds

ZoneMinder After Upgrade

We recently updated from ZoneMinder 1.30 to 1.34 – easy as can be, ran the DB update script and everything came right online. Except … our home automation system hasn’t been able to access the system. OpenHAB reports that the bridge is offline. And we’re getting 404 errors in all of the /zm/api calls in access_log.

Turns out the API was offline because when the new package came down … there was a zoneminder.conf.rpmnew in the Apache conf.d directory. Can’t even say I found this intentionally – I wanted to check the Apache config file to see if it had anything about the api directory, did a directory listing, and said oooooh!

[lisa@fedora01 conf.d]# ll zone*
-rw-r--r-- 1 root root 1990 Jul 29 18:13 zoneminder.conf
-rw-r--r-- 1 root root 1990 Aug 28 22:34 zoneminder.conf.rpmnew

They’ve changed a few of the sub-directories and added components to the config. As soon as I renamed zoneminder.conf to zoneminder.conf.old, copied zoneminder.conf.rpmnew to zoneminder.conf, repeated a few config tweaks we had made to the original installation, and restarted Apache … voila, we can fetch /zm/api/host/getVersion.json and get values back. So if you’re getting odd 404 errors and CakePHP “/zm/api” not found errors, maybe you forgot to update your config with the changes from the rpmnew file.
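The whole recovery, roughly (Fedora paths assumed, per the listing above):

cd /etc/httpd/conf.d
mv zoneminder.conf zoneminder.conf.old
cp zoneminder.conf.rpmnew zoneminder.conf
diff zoneminder.conf.old zoneminder.conf    # shows which local tweaks need re-applying
systemctl restart httpd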

Cleaning Up Old OpenHAB Persistence Tables

So my husband asked for a program that would go out to the OpenHAB persistence database and identify all of the item tables that are no longer associated with active items. If you rename or delete an item from OpenHAB, the associated data is retained in the persistence database. Might be a good thing – maybe you wanted that data. But if it’s useless fluff … well, no need to keep the state changes from a door sensor that’s no longer around.

Wrote the code, and asked him how many days old he wanted the last update to be before the item table got dropped … and he told me this was a useless way to do it: maybe something really hasn’t updated in six months or three years, and age of last update is no way to identify tables to be removed. Which, yeah, then why ask for it!? So I needed to write something that takes a list of items from OpenHAB and identifies everything in the items table that does not appear in the OpenHAB list, so those item tables can be dropped. But I figured I’d post the original code too in case anyone else can use it. Both are in Perl, and neither is particularly well-written Perl. I trust the data and don’t want to protect against injection attacks.
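The openhabItemList.txt input used below is just one item name per line. Assuming OpenHAB 2’s REST interface and jq are available (host name here is a placeholder), something like this produces it:

curl -s http://openhab.rushworth.us:8080/rest/items | jq -r '.[].name' > openhabItemList.txt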

Generate DROP statements for tables whose items no longer appear in OpenHAB:

use strict;
use DBI;

my %strItemsFromOpenHAB = ();
open(INPUT, "./openhabItemList.txt") or die "Cannot open item list: $!";
while(<INPUT>){
        chomp();
        my $strCurrentItem = $_;
        $strItemsFromOpenHAB{$strCurrentItem}++;
}
close INPUT;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        my $strItemID = $row[0];
        my $strItemName = $row[1];
        if(! $strItemsFromOpenHAB{$strItemName} ){              # If the current item name is not in the list of items from OpenHAB
#               print "DELETE FROM items where ItemID = $strItemID\n";
                print "DROP TABLE Item$strItemID;  # $strItemName \n";
        }
}
$sth->finish();

$dbh->disconnect();
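Since the script only prints the DROP statements, review the output and then feed it back to MySQL (script name here is a placeholder):

perl identifyOrphanItemTables.pl > drops.sql
mysql -u DBUID -p openhabdb < drops.sql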

 

Identify tables that have not been updated in iTooOldInDays days:

use strict;
use DBI;
use Date::Parse;
use Time::Local;

my $iTooOldInDays = 365;

my $iCurrentEpochTime = time();

my @strItems = ();
my $iItems = 0;

my $dbh = DBI->connect('DBI:mysql:openhabdb;host=DBHOST', 'DBUID', 'DBPassword', { RaiseError => 1 } );

my $sth = $dbh->prepare("SELECT * FROM Items");
$sth->execute();
while (my @row = $sth->fetchrow_array) {
        $strItems[$iItems++] = $row[0];
}
$sth->finish();

for(my $i = 0; $i < $iItems; $i++){
        my $strTableName = 'Item' . $strItems[$i];
        my $sth = $dbh->prepare("SELECT * FROM $strTableName ORDER BY Time DESC LIMIT 1");
        $sth->execute();
        while (my @row = $sth->fetchrow_array) {
                my $strUpdateTime = $row[0];
                my @strDateTimeBreakout = split(/ /,$strUpdateTime);
                my $strDate = $strDateTimeBreakout[0];
                my $strTime = $strDateTimeBreakout[1];

                my @strDateBreakout = split(/-/,$strDate);
                my @strTimeBreakout = split(/:/,$strTime);

                my $iUpdateEpochTime = timelocal($strTimeBreakout[2],$strTimeBreakout[1],$strTimeBreakout[0], $strDateBreakout[2],$strDateBreakout[1]-1,$strDateBreakout[0]);
                my $iTableAge = $iCurrentEpochTime - $iUpdateEpochTime;

                if($iTableAge > ($iTooOldInDays * 86400) ){
                        print "$strTableName last updated $strUpdateTime - $iUpdateEpochTime\n";
                }
        }
        $sth->finish();
}

$dbh->disconnect();

Configuring and Using RPZ

I realized today that, while I had written about why response policy zones are useful, I never indicated how to configure one! So here’s a quick document outlining how to set it up in ISC Bind. In your named.conf file, add a response policy to your options section:

        response-policy {
                zone "rpz";
        };
Then add the correspondingly named zone at the end of the file. For testing purposes, I also set up windstream.com as a forward-only zone so I could run a network capture and see exactly what transpires when a name in the RPZ is resolved.
zone "rpz" {
      type master;
      file "rpz.db";
      allow-query { none; };
      allow-transfer { none; };
};
zone "windstream.com" {
    type forward;
    forward only;
    forwarders { 8.8.8.8; };
};
Then you just have to make an rpz.db file in the directory where you keep your zone files:
$TTL 60
$ORIGIN rpz.
@            IN    SOA  localhost. root.localhost.  (
                          2   ; serial
                          3H  ; refresh
                          1H  ; retry
                          1W  ; expiry
                          1H) ; minimum
                  IN    NS    localhost.

www.windstream.com    CNAME    www.yahoo.com.
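For reference, the RPZ convention supports more than CNAME overrides; these special targets (record names here are illustrative) return NXDOMAIN and NODATA respectively:

baddomain.windstream.com    CNAME    .
nodata.windstream.com       CNAME    *.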
I restarted named and ran "rndc flush" to avoid serving cached content instead of the RPZ host data, then ran a few tests and confirmed that the resolution configured in the rpz zone is returned:
[lisa@fedora02 named]# dig +short www.windstream.com @localhost
www.yahoo.com.
atsv2-fp.wg1.b.yahoo.com.
98.139.183.24
98.138.252.30
98.139.180.149
98.138.253.109
[lisa@fedora02 named]# dig +short dell905.windstream.com @localhost
ns4.windstream.com.
173.186.244.139
[lisa@fedora02 named]# dig +short www.google.com @localhost
216.58.218.228
In this process, I learnt something interesting about ISC’s implementation of RPZ: it still performs the query and then overrides the results. Odd waste of cycles, but the query went out to 8.8.8.8 and the response was then replaced with Yahoo’s address from the rpz zone. Looking up a windstream.com host that isn’t in my RPZ, I saw another query out to 8.8.8.8, which was expected. Querying something in neither the forward zone nor the rpz zone produced no traffic to 8.8.8.8 (because it follows my normal forwarding, which is to our ISP’s DNS).
I was curious if this meant RPZ could not be used to publish a bad hostname locally, but attempting to resolve a bad hostname (I added abadhost.windstream.com with the same CNAME to Yahoo and reloaded the zone) worked just fine.

[root@fedora02 ~]# dig abadhost.windstream.com @localhost

; <<>> DiG 9.11.1-P2-RedHat-9.11.1-2.P2.fc26 <<>> abadhost.windstream.com @localhost
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8382
;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 4, ADDITIONAL: 3

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 1aa34751c5df7f78857a921259a8706fb5e1741a46eb5352 (good)
;; QUESTION SECTION:
;abadhost.windstream.com. IN A

;; ANSWER SECTION:
abadhost.windstream.com. 5 IN CNAME www.yahoo.com.
www.yahoo.com. 1800 IN CNAME atsv2-fp.wg1.b.yahoo.com.
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.180.149
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.253.109
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.139.183.24
atsv2-fp.wg1.b.yahoo.com. 60 IN A 98.138.252.30

;; AUTHORITY SECTION:
wg1.b.yahoo.com. 172800 IN NS yf3.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf4.a1.b.yahoo.net.
wg1.b.yahoo.com. 172800 IN NS yf1.yahoo.com.
wg1.b.yahoo.com. 172800 IN NS yf2.yahoo.com.

;; ADDITIONAL SECTION:
yf1.yahoo.com. 86400 IN A 68.142.254.15
yf2.yahoo.com. 86400 IN A 68.180.130.15

;; Query time: 1204 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Aug 31 16:24:15 EDT 2017
;; MSG SIZE rcvd: 315

But a query still goes out to the upstream name server, and a ‘no such name’ result comes back before the RPZ answer is substituted. Odd.
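Turns out that upstream query is tunable: BIND 9.10 and later accept a qname-wait-recurse setting on the response-policy statement, which tells named to rewrite the answer without waiting on recursion. I haven’t re-tested with it here, but per the documentation it looks like:

        response-policy {
                zone "rpz";
        } qname-wait-recurse no;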