
Finding PCI Devices

You can use dmidecode to list all sorts of information about the system. There is a list of device types that you can select with the -t option:

   Type   Information
   ────────────────────────────────────────────
      0   BIOS
      1   System
      2   Baseboard
      3   Chassis
      4   Processor
      5   Memory Controller
      6   Memory Module
      7   Cache
      8   Port Connector
      9   System Slots
     10   On Board Devices
     11   OEM Strings
     12   System Configuration Options
     13   BIOS Language
     14   Group Associations
     15   System Event Log
     16   Physical Memory Array
     17   Memory Device
     18   32-bit Memory Error
     19   Memory Array Mapped Address
     20   Memory Device Mapped Address
     21   Built-in Pointing Device
     22   Portable Battery
     23   System Reset
     24   Hardware Security
     25   System Power Controls
     26   Voltage Probe
     27   Cooling Device
     28   Temperature Probe
     29   Electrical Current Probe
     30   Out-of-band Remote Access
     31   Boot Integrity Services
     32   System Boot
     33   64-bit Memory Error
     34   Management Device
     35   Management Device Component
     36   Management Device Threshold Data
     37   Memory Channel
     38   IPMI Device
     39   Power Supply
     40   Additional Information
     41   Onboard Devices Extended Information
     42   Management Controller Host Interface

As an example, here's the system slot information (type 9):

[lisa@fedora ~/]# dmidecode -t 9

Handle 0x0024, DMI type 9, 17 bytes
System Slot Information
        Designation: Slot6
        Type: 32-bit PCI
        Current Usage: In Use
        Length: Short
        ID: 6
        Characteristics:
                3.3 V is provided
                Opening is shared
                PME signal is supported
        Bus Address: 0000:0a:02.0

The “Bus Address” value corresponds to information from lspci:

[lisa@fedora ~/]# lspci | grep "0a:02.0"
0a:02.0 Multimedia video controller: Conexant Systems, Inc. CX23418 Single-Chip MPEG-2 Encoder with Integrated Analog Video/Broadcast Audio Decoder
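
If you already know the bus address, lspci can also be pointed at just that slot (adding -v for more detail):

lspci -s 0a:02.0 -v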

XRDP Logon Hangs on Black Screen

I’m writing it down this time — after completing the steps to set up xrdp (installed, configured, running, firewall port open), we get prompted for credentials … good so far!

And then get stuck on a black screen. This is because the account we're trying to log in as is already logged in locally on the machine. Log out locally, and the user is able to log into the remote desktop connection. Conversely, attempting to log in locally once the remote desktop connection is established just hangs on a black screen too.
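
For reference, the setup steps alluded to above are roughly the following on Fedora (the package and service names and the standard RDP port 3389 are the usual ones, but verify for your distribution):

dnf install xrdp
systemctl enable --now xrdp
firewall-cmd --permanent --add-port=3389/tcp
firewall-cmd --reload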

Using Screen to Access Console Port

We needed to console into some Cisco access points — RJ45 to USB to plug into the device console port and the laptop’s USB port? Check! OK … now what? Turns out you can use the screen command as a terminal emulator. The basic syntax is screen <port> <baud rate> — since the documentation said to use 9600 baud and the access point showed up on /dev/ttyUSB0, this means running:


screen /dev/ttyUSB0 9600

More completely, screen <port> <baud rate>,<7 or 8 bits per byte>,<enable or disable sending flow control>,<enable or disable receiving flow control>,<keep or clear the eighth bit in each byte>

screen /dev/ttyUSB0 9600,cs8,ixon,ixoff,istrip 
- or - 
screen /dev/ttyUSB0 9600,cs7,-ixon,-ixoff,-istrip
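
To get back out of the serial session, use Ctrl-a followed by k to kill it, or Ctrl-a followed by d to detach and leave it running.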

Logstash, JRuby, and Private Temp

There’s a long-standing bug in logstash where the private temp folder created for jruby isn’t cleaned up when the logstash process exits. To avoid filling up the temp disk space, I put together a quick script to check the PID associated with each jruby temp folder, see if it’s an active process, and remove the temp folder if the associated process doesn’t exist.

When the PID has been re-used, this means we’ve got an extra /tmp/jruby-### folder hanging about … but each folder is only 10 meg. The impacting issue is when we’ve restarted logstash a thousand times and a thousand ten meg folders are hanging about.

This script can be cron’d to run periodically or it can be run when the logstash service launches.

import re
import subprocess
from shutil import rmtree

# List the contents of /tmp -- each jruby-<PID> folder is named for the
# logstash (jruby) process that created it
strResult = subprocess.check_output("ls /tmp", shell=True)

for strLine in strResult.decode("utf-8").split('\n'):
        if len(strLine) > 0 and strLine.startswith("jruby-"):
                # Pull the PID out of the folder name
                listSplitFileNames = re.split("jruby-([0-9]*)", strLine)
                if listSplitFileNames[1]:
                        try:
                                # grep exits non-zero when nothing matches, so check_output
                                # raises CalledProcessError if no such process is running
                                strCheckPID = subprocess.check_output(f"ps -efww | grep {listSplitFileNames[1]} | grep -v grep", shell=True)
                                #print(f"PID check result is {strCheckPID}")
                        except subprocess.CalledProcessError:
                                # No running process owns this folder -- remove it
                                print(f"I am deleting |{strLine}|")
                                rmtree(f"/tmp/{strLine}")
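
If you go the cron route, an /etc/cron.d entry along these lines works (the script path and hourly schedule here are just examples):

# /etc/cron.d/clean-jruby-tmp
0 * * * * root /usr/bin/python3 /usr/local/bin/clean_jruby_tmp.py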

Using urandom to Generate Password

Frequently, I'll use password generator websites to create some pseudo-random string of characters for system accounts, database replication, etc. But sometimes the Internet isn't readily available … and you can create a decent password right from the Linux command line using urandom.

If you want pretty much any “normal” character, use tr to pull out all of the other characters:

'\11\12\40-\176'

Or remove anything outside of upper case, lower case, and number characters using

a-zA-Z0-9

Pass the output to head to grab however many characters you actually want. Voila — a quick password.
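
Putting the pieces together, something like this reads from /dev/urandom, keeps only alphanumeric characters, and grabs the first 20 of them (swap in the other character set or adjust the count as needed):

tr -dc 'a-zA-Z0-9' < /dev/urandom | head -c 20; echo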

Linux Disk Utilization – Reducing Size of /var/log/sa

We occasionally get alerted that our /var volume is over 80% full … which generally means /var/log has a lot of data, some of which is really useful and some of it not so useful. The application-specific log files already have the shortest retention period that is reasonable (and logs that are rotated out are compressed). Similarly, the system log files rotated through logrotate.conf and logrotate.d/* have been configured with reasonable retention.

Using du -sh /var/log/* showed the /var/log/sa folder took half a gig of space.

This is the daily output from sar (a “daily summary of process accounting” cron’d up with /etc/cron.d/sysstat). This content doesn’t get rotated out with the expected logrotate configuration. It’s got a special configuration at /etc/sysconfig/sysstat, and changing the number of days retained (or, in my case, compressing some of the older files) is a quick way to reduce the amount of space the sar output files consume.
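
The exact contents vary a bit between sysstat versions, but the relevant settings in /etc/sysconfig/sysstat look something like this (the values shown are just examples):

# How many days of sar data files to keep
HISTORY=7
# Compress sar data files older than this many days
COMPRESSAFTER=3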

Linux – Clearing Caches

I encountered some documentation at work that provided a process for clearing caches. It wasn’t wrong per se, but it showed a lack of understanding of what was being performed. I enhanced our documentation to explain what was happening and why the series of commands was redundant. Figured I’d post my revisions here in case they’re useful for someone else.

Only clean caches can be dropped — dirty ones need to be written somewhere before they can be dropped. Before dropping caches, flush the file system buffer using sync — this tells the kernel to write dirty cache pages to disk (or, well, write as many as it can). This will maximize the number of cache pages that can be dropped. You don’t have to run sync, but doing so maximizes the effectiveness of your subsequent commands.

Page cache is memory that’s held after reading a file. Linux tends to keep the files in cache on the assumption that a file that’s been read once will probably be read again. Clear the pagecache using echo 1 > /proc/sys/vm/drop_caches — this is the safest to use in production and generally a good first try.

If clearing the pagecache has not freed sufficient memory, proceed to this step. The dentries (directory cache) and inodes cache are memory held after reading file attributes (run strace and look at all of those stat() calls!). Clear the dentries and inodes using echo 2 > /proc/sys/vm/drop_caches — this is kind of a last-ditch effort for a production environment. Better than having it all fall over, but things will be a little slow as all of the in-flight processes repopulate the cached data.

You can clear the pagecache, dentries, and inodes using echo 3 > /proc/sys/vm/drop_caches — this is a good shortcut in a non-production environment. But, if you’ve already run 1 and 2 … well, 3 = 1+2, so clearing 1, 2, and then 3 is redundant.
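
Put together, the typical sequence (run as root) is just the sync followed by whichever echo is appropriate:

sync
echo 1 > /proc/sys/vm/drop_caches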


Another note from other documentation I’ve encountered — you can use sysctl to clear the caches, but this can cause a deadlock under heavy load … as such, I don’t do this. The syntax is sysctl -w vm.drop_caches=1 where the number corresponds to the 1, 2, and 3 described above.