You can pin messages … but I wouldn’t say pinning a message has the result I’d expect. First, how to do it. On any channel message, you can click the ellipsis to access a menu. Select “Pin”.
You’ll get a warning that the message will be pinned for everyone … sounds good, right? If you want everyone to read the rules of the Teams space or to read the “NDA Applies To These Discussions” notice, you want the message pinned for everyone. Click “Pin” to continue.
Aaaand … everyone sees the message highlighted (and a little pin icon). The message is not, however, pinned to the bottom of the conversation list where everyone is sure to notice it. It is not displayed at the top of the current page where everyone is sure to notice it.
But there is a way to quickly view pinned messages in a channel. In the upper right-hand corner of the channel’s conversation list, find the little info icon. It’s the one you’ve never noticed because it didn’t do anything too useful … right next to the ‘meet’ button. Click it.
Scroll past the ‘About’ and ‘Members’ sections of the info pane, and you will see any pinned posts.
Yes, I know md5sum has a “-c” option for checking the checksum in a file … but, if I were going to screw with a file, I’d have the good sense to edit the checksum file in the archive!
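For reference, the check flow looks something like this (file names hypothetical):

md5sum archive.tar.gz > archive.tar.gz.md5
md5sum -c archive.tar.gz.md5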
It would be great to error_log(var_dump($arraySomething)); but, alas, it doesn’t work like that: var_dump prints its output rather than returning it. I don’t want to bother making a function that does work (ob_start and ob_get_clean should do it) … so I use print_r, whose second parameter tells it to return the output as a string, instead:
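error_log(print_r($arraySomething, true));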
Issue: The multi-step process of retrieving credentials from CyberArk introduces noticeable latency in web tools that use multiple passwords. This latency is incurred on each execution cycle (scheduled task or user access).
Proposal: We will use a redis server to cache credentials retrieved from CyberArk. This will allow quick access to frequently used passwords and reduce latency when multiple users access a tool.
Details:
A redis server will be installed on both the production and development web servers. The redis implementation will be bound to localhost, and communication with the server will be encrypted using the same SSL certificate used on the web server.
Data stored in redis will be encrypted using libsodium. The key and nonce will be stored in a file on the application server.
All password retrievals will follow this basic process: check redis for the credential; on a hit, decrypt and use it; on a miss (or an authentication failure), retrieve the credential from CyberArk, use it, and cache the encrypted value in redis.
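A rough sketch of that flow in PHP, assuming the phpredis extension and a hypothetical getCredentialFromCyberArk() helper standing in for our existing retrieval code (the one-day expiry is a placeholder pending the retention question below):

function getCachedCredential($redis, $sodiumKey, $sodiumNonce, $strUser, $strTarget, $strService){
    # Keys are namespaced as user:target:service
    $strRedisKey = "$strUser:$strTarget:$strService";
    # Check the cache first
    $strCached = $redis->get($strRedisKey);
    if($strCached !== false){
        # Cache hit: decrypt and return
        return sodium_crypto_secretbox_open(base64_decode($strCached), $sodiumNonce, $sodiumKey);
    }
    # Cache miss: retrieve from CyberArk
    $strPassword = getCredentialFromCyberArk($strUser, $strTarget, $strService);
    # Encrypt and cache for subsequent requests
    $strCrypted = base64_encode(sodium_crypto_secretbox($strPassword, $sodiumNonce, $sodiumKey));
    $redis->setEx($strRedisKey, 86400, $strCrypted);
    return $strPassword;
}
# On an authentication failure, the caller would bypass the cache, re-fetch
# from CyberArk, and overwrite the cached entry.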
Outstanding questions:
Using a namespace for the username key increases the storage requirement. We could, instead, allocate individual ‘databases’ to specific services: use database 1 for all Oracle passwords, database 2 for all FTP passwords, and database 3 for all web service passwords. This would reduce the length of the key string.
Data retention: how long should cached data live? There’s a memory limit, and I elected to use a least-frequently-used algorithm to prune data if that limit is reached. That means a record that was used once an hour ago may well age out before a frequently used cred that’s been on the server for a few hours. There’s also FIFO pruning, but I think we will have a handful of really frequently used credentials that we want to keep around as much as possible. One option is basically infinite retention with a low memory allocation: we could significantly limit the amount of memory that can be used to store credentials and set a high (week? weeks?) expiry period on cached data. Or we could have the cache expire more quickly: a day? A few hours? The biggest drawback I see with a long expiry period is that we’re retaining bad data for some time after a password is changed. I conceptualized a process where we’d handle authentication failure by getting the password directly from CyberArk and updating the redis cache, which minimizes the risk of keeping cached data for a long time.
How do we want to encrypt/decrypt stashed data? I used libsodium because it’s something I’ve used before (and it’s simple). Does anyone have a favorite method?
Anyone have an opinion on SSL session caching?

The redis configuration from my sandbox, with notes about our use case inline:
################################## MODULES #####################################
# No additional modules are loaded
################################## NETWORK #####################################
# My web server is on a different host, so I needed to bind to the public
# network interface. I think we'd *want* to bind to localhost in our
# use case.
# bind 127.0.0.1
# Similarly, I think we'd want 'yes' here
protected-mode no
# Might want to use 'port 0' to disable listening on the insecure (non-TLS) port
port 6379
tcp-backlog 511
timeout 10
tcp-keepalive 300
################################# TLS/SSL #####################################
tls-port 6380
tls-cert-file /opt/redis/ssl/memcache.pem
tls-key-file /opt/redis/ssl/memcache.key
tls-ca-cert-dir /opt/redis/ssl/ca
# I am not auth'ing clients for simplicity
tls-auth-clients no
# tls-auth-clients optional
tls-protocols "TLSv1.2 TLSv1.3"
tls-prefer-server-ciphers yes
tls-session-caching no
# These would only be set if we were setting up replication / clustering
# tls-replication yes
# tls-cluster yes
################################# GENERAL #####################################
# This is for docker; we may want to use something like 'supervised systemd' here.
daemonize no
supervised no
#loglevel debug
loglevel notice
logfile "/var/log/redis.log"
syslog-enabled yes
syslog-ident redis
syslog-facility local0
# 1 might be sufficient -- we *could* partition different apps into different databases
# But I'm thinking, if our keys are basically "user:target:service" ... then report_user:RADD:Oracle
# from any web tool would be the same cred. In which case, one database suffices.
databases 3
################################ SNAPSHOTTING ################################
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
#
dir ./
################################## SECURITY ###################################
# I wasn't setting up any sort of authentication and just using the facts that
# (1) you are on localhost and
# (2) you have the key to decrypt the stuff we stash
# to mean you are authorized.
############################## MEMORY MANAGEMENT ################################
# This is what to evict from the dataset when memory is maxed
maxmemory-policy volatile-lfu
############################# LAZY FREEING ####################################
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
############################ KERNEL OOM CONTROL ##############################
oom-score-adj no
############################## APPEND ONLY MODE ###############################
appendonly no
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
############################### ADVANCED CONFIG ###############################
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
########################### ACTIVE DEFRAGMENTATION #######################
# Enabled active defragmentation
activedefrag no
# Minimum amount of fragmentation waste to start active defrag
active-defrag-ignore-bytes 100mb
# Minimum percentage of fragmentation to start active defrag
active-defrag-threshold-lower 10
To set up my redis sandbox in Docker, I created two folders, conf and data. The conf folder houses the SSL files and the configuration file, and the data folder stores the redis data.
I first needed to generate an SSL certificate. The certificate and its private key are stored in .pem and .key files, and the public cert of the signing CA is stored in a “ca” folder.
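For the sandbox, a self-signed certificate generated along these lines works (the subject name and validity period are arbitrary, and since the cert is self-signed it doubles as its own CA cert):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=redis.sandbox.example.com" \
    -keyout conf/ssl/memcache.key -out conf/ssl/memcache.pem
mkdir -p conf/ssl/ca
cp conf/ssl/memcache.pem conf/ssl/ca/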
Then I created a redis configuration file, the same one shown above. Note that the paths are relative to the Docker container.
Once I had the configuration data set up, I created the container. I’m using port 6380 for the SSL connection; for the sandbox, I also exposed the clear-text port. I mapped volumes for the redis data, the SSL files, and the redis.conf file.
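The invocation was roughly like the following (local paths are illustrative, and this assumes a redis image built with TLS support):

docker run -d --name redis-sandbox \
    -p 6379:6379 -p 6380:6380 \
    -v /path/to/conf/redis.conf:/usr/local/etc/redis/redis.conf \
    -v /path/to/conf/ssl:/opt/redis/ssl \
    -v /path/to/data:/data \
    redis:6.0 redis-server /usr/local/etc/redis/redis.conf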
To bring up a Docker container running memcached with SSL enabled, create a local folder to hold the SSL key, mount a volume to that local config folder, then point ssl_chain_cert and ssl_key to the in-container paths of the PEM and KEY files:
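Something like this (paths illustrative; -Z enables TLS and requires memcached built with OpenSSL support):

docker run -d --name memcached-ssl \
    -p 11211:11211 \
    -v /path/to/conf:/opt/memcached \
    memcached:latest memcached -Z \
    -o ssl_chain_cert=/opt/memcached/ssl/memcache.pem \
    -o ssl_key=/opt/memcached/ssl/memcache.key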
Quick PHP code used as a proof of concept for storing credentials in memcached: the cred is encrypted using libsodium before being sent to memcached and decrypted after being retrieved. This is done both to keep the in-memory data from being meaningful and because the PHP memcached extension doesn’t seem to support SSL communication.
<?php
# To generate a new key and nonce, create random bytes and stash the
# sodium_bin2hex() string of each:
#   $sodiumKey = random_bytes(SODIUM_CRYPTO_SECRETBOX_KEYBYTES);     // 32 bytes (256 bit)
#   $sodiumNonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES); // 24 bytes

# Stashed key and nonce strings
$strSodiumKey = 'cdce35b57cdb25032e68eb14a33c8252507ae6ab1627c1c7fcc420894697bf3e';
$strSodiumNonce = '652e16224e38da20ea818a92feb9b927d756ade085d75dab';

# Turn the hex strings back into binary data
$sodiumKey = sodium_hex2bin($strSodiumKey);
$sodiumNonce = sodium_hex2bin($strSodiumNonce);

# Instantiate a memcached object and add the sandbox server with weight 1
$memcacheD = new Memcached();
$memcacheD->addServer('127.0.0.1', 11211, 1);

$arrayDataToStore = array(
    "credValueGoesHere",
    "cred2",
    "cred3",
    "cred4",
    "cred5"
);

# Encrypt and stash each value, expiring two minutes from now
# (reusing one nonce across messages is acceptable for a throwaway POC,
# not for production)
for($i = 0; $i < count($arrayDataToStore); $i++){
    usleep(100);
    $strValue = $arrayDataToStore[$i];
    $strMemcachedKey = 'credtest' . $i;
    $strCryptedValue = base64_encode(sodium_crypto_secretbox($strValue, $sodiumNonce, $sodiumKey));
    $memcacheD->set($strMemcachedKey, $strCryptedValue, time() + 120);
}

# Retrieve each key and decrypt it
for($i = 0; $i < count($arrayDataToStore); $i++){
    $strMemcachedKey = 'credtest' . $i;
    $strValue = sodium_crypto_secretbox_open(base64_decode($memcacheD->get($strMemcachedKey)), $sodiumNonce, $sodiumKey);
    echo "The value on key $strMemcachedKey is: $strValue \n";
}
?>
There’s a convenient command to update the Windows git binary:
git update-git-for-windows
Which is useful, since Azure DevOps (ADO) likes to complain about old git clients:
remote: Storing packfile... done (51 ms)
remote: Storing index... done (46 ms)
remote: We noticed you're using an older version of Git. For the best experience, upgrade to a newer version.
We are automating a file download. It works fine when running headed, but headless execution doesn’t manage to log in. Proxying the requests through Fiddler shows that several JavaScript requests return unexpected content.
I’ve added a user-agent to the request, but I noticed that ChromeDriver also sets sec-ch-* headers … I expect the null sec-ch-ua value causes the web server to refuse our request. I don’t see any issues in the ChromeDriver repo for the sec-ch-* headers … and I don’t really want to walk back versions until I find one that doesn’t try setting this header value. Firefox’s GeckoDriver, though, doesn’t set them … so I moved the script over to Firefox instead of Chrome and am able to download the file.