Clean Slate

Due to circumstances I don't really need to dive into, I am paring back expenses and culling things like relatively unused DO droplets. The Hockey-Info site is already gone, and last to go was my generic Linux jumphost, which I used for all manner of things, from IRC/ZNC to testing quick and dirty Python to running the odd port scan. This is just a little farewell to Asgard, a droplet that served me well for 7 solid years and was finally laid to rest with more uptime than is probably secure to have.

Filesystem created:       Tue Jul  8 19:01:40 2014
 07:03:35 up 653 days,  9:35,  2 users,  load average: 0.00, 0.01, 0.05

Youtube Essential Ripping Platform

I have been a longtime user of youtube-dl to archive some things (obscure music, recordings of tech talks) and figured it was worth taking some time to build a simple, easy-to-use way to do this that others could benefit from. More simply put, I created a front-end in Python and Flask to sit on top of youtube-dl and make the process easy enough for non-technical people to use. Thus YERP was born to fill that role. I know there are tons of competing projects out there doing the exact same thing, but I wanted to take a crack at it for my own home network and get it so simplified that all you had to do was build and run it from a Dockerfile and it would spring into existence without configuration.

The project is VERY green right now and things are moving around and changing a lot (even in my head, before code is committed to the repository), so don't bank on things staying how they are. There are tons of little features I want to put in, like folder organization, backups, and flags for filetypes, and it will take quite a while to figure out how I would like to implement them. So if you do run the program, beware, and if you find something that can be done better, feel free to submit a PR. I will gladly bring other code into the project since I am only one person and not exactly a professional at this to begin with.

Simple CI with Chef

I recently needed to work out a way to deploy a script I wrote across a whole host of systems. It turns out the only option was Chef, so I had to dive into it and read a bunch of stuff. I also had to try a bunch of things, and ended up with my own Chef server in the lab to test against. Several hours of clicking and clacking later I had my task worked out, so here it is.

First we need to create a new cookbook and drop a pretty simple default recipe in; all it does is make sure git is installed, then clone a repo to /opt/nhlapi.

# Cookbook:: repo
# Recipe:: default
# Copyright:: 2018, The Authors, All Rights Reserved.

package 'git' do
  action :install
end

git '/opt/nhlapi' do
  repository 'git://'
  revision 'master'
  action :sync
end

Once we have the recipe we need a role to tell it what to do.

{
   "name": "repo-update",
   "description": "update chef from time to time",
   "json_class": "Chef::Role",
   "default_attributes": {
     "chef_client": {
       "interval": 1800,
       "splay": 60
     }
   },
   "override_attributes": {},
   "chef_type": "role",
   "run_list": [
     "recipe[chef-client::default]",
     "recipe[repo::default]"
   ],
   "env_run_lists": {}
}

Create the role with # knife role from file repo-update.json (or whatever you named the file).

Now all that is left is to assign the role to the node, so use # knife node edit itsj-cheftest.itscum.local and add the role to the run_list of the node we want.

{
  "name": "itsj-cheftest.itscum.local",
  "chef_environment": "_default",
  "normal": {
    "tags": [

    ]
  },
  "policy_name": null,
  "policy_group": null,
  "run_list": [
    "role[repo-update]"
  ]
}

That is enough to get it working; you can kick back and watch it with # while :; do knife status 'role:repo-update' --run-list; sleep 120; done and expect to see it run within about 30 minutes based on the interval and splay values. Speaking of which, interval is pretty self-explanatory, but splay not so much; splay adds a random delay so a bunch of nodes don't all run at once and overwhelm whatever system they might be checking into or otherwise digitally assaulting.
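As a quick sketch of how those two values interact (with the assumption that chef-client sleeps the fixed interval plus a random offset of up to splay seconds), the delay between runs looks roughly like this:

```python
import random

def next_run_delay(interval, splay):
    """Seconds until the next chef-client run: the fixed interval
    plus a random offset between 0 and splay, so a fleet of nodes
    doesn't converge at the exact same moment."""
    return interval + random.uniform(0, splay)

# with the role's values (interval=1800, splay=60) each node runs
# somewhere between 30 and 31 minutes after its previous run
delay = next_run_delay(1800, 60)
```

This is why the first run showed up "in about 30 minutes" rather than on an exact schedule.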

Simple Icinga2 Plugin

I’ve seen bits and pieces of the process of creating an Icinga2 (or Nagios) plugin, so here are my notes dumped straight from my brain.

First and foremost we need a script for Icinga to call; in this case I created a very simple Python script that gets the version of LibreNMS running on my monitoring system.

import argparse
import requests
import json
import sys

parser = argparse.ArgumentParser(description='Check the LibreNMS version via its API.')

parser.add_argument('-H', action="store", dest="host", help='name of host to check')

#parser.add_argument('token', metavar='token', help='API token')
token = 'yourAPItokenGOEShere'
args = parser.parse_args()

# build the API URL from the host passed with -H
host_check = 'http://' + args.host + '/api/v0/system'
headers = {'X-Auth-Token': token}
r = requests.get(host_check, headers=headers, verify=False)

json_string = r.text
parsed_json = json.loads(json_string)

system_status = parsed_json['status']
system_ver = parsed_json['system'][0]['local_ver']

# print the check output and exit with the matching plugin return code
if system_status == 'ok':
	print("status: " + system_status + " version: " + system_ver)
	sys.exit(0)
else:
	print("status: " + system_status + " version: " + system_ver)
	sys.exit(2)

This is a pretty simple script; you could call it with ./ -H to see how it works. With the script working, the next portion is done on the command line. First, create the directory that will later be referenced as CustomPluginDir:

# mkdir -p /opt/monitoring/plugins

Now we need to tell Icinga2 about the directory; this is done in a few different places.

In /etc/icinga2/constants.conf add the following:

const CustomPluginDir = "/opt/monitoring/plugins"

and in /etc/icinga2/conf.d/commands.conf we add the following block

object CheckCommand "check-lnms" {
    command = [ CustomPluginDir + "/" ]

    arguments = {
        "-H" = "$address$"
    }
}
The block above defines the custom command, points at the script we created first, and passes the correct flags. Now it's time to add the check to the hosts file, so place the following block into /etc/icinga2/conf.d/hosts.conf:

object Host "itsj-lnms" {
        address = ""
        check_command = "check-lnms"
}

And with that we wait for the next polling cycle, where we should see something like the screenshot below.

This is a highly simplistic example, but figuring it out was necessary for me: I had to port some existing code from Ruby to Python, so I wanted to know exactly how a plugin was created, what values it returned, and how it all fits together.
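One detail worth spelling out about "what values were returned": an Icinga2 (or Nagios) plugin reports its result through its exit code, 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN, plus one line of text on stdout. A minimal helper sketching that convention (the function name is my own, not part of any API):

```python
import sys

# standard Nagios/Icinga2 plugin return codes
EXIT_CODES = {'OK': 0, 'WARNING': 1, 'CRITICAL': 2, 'UNKNOWN': 3}

def plugin_exit(state, message):
    """Print the one-line status that Icinga displays in its UI,
    then exit with the return code that tells Icinga the check state."""
    print("%s: %s" % (state, message))
    sys.exit(EXIT_CODES.get(state, EXIT_CODES['UNKNOWN']))
```

The version-check script above follows this same pattern: exit 0 with a status line when the API reports ok, a non-zero code otherwise.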

Homelab: Synology failure post-mortem

I take my homelab very seriously; it's modeled after several production environments I have worked on over the years. What follows is my recap of the events over a few weeks leading up to the total failure of my central storage system: my beloved Synology DS1515 hosting 5.5TB of redundant network storage. The first signs of trouble cropped up on May 31st and culminated over the last week of June.


low tech Salt deployment

I have been tearing down and rebuilding a lot of crap in the lab lately (Kubernetes clusters, an ELK stack, etc.) and have constantly had to re-add Salt to the VMs because salt-cloud doesn't yet play nice with Xen. After about the third time of manually installing epel-release and salt-minion and then changing the config, I got tired of it and wrote perhaps the worst script ever to remotely do all that work for me, which might also be useful later when I finally get salt-cloud working with Xen.


#!/bin/bash
# usage: $0 <hostname>  (assumes ssh keys are already in place)
HOST=$1

echo "deploying salt -> $HOST"
ssh root@$HOST "yum -y install epel-release && yum -y install salt-minion"
# point the minion at the salt master (master address goes after 'master: ')
ssh root@$HOST "sed -i 's/#master: salt/master: /' /etc/salt/minion"
ssh root@$HOST "systemctl start salt-minion && systemctl enable salt-minion"
echo "salt successfully deployed on host: $HOST"

Granted, this still relies on me manually running ssh-copy-id so I don't have to keep typing in passwords, but it's a lot fewer commands. Maybe if I get the time I will add some logic to auto-accept the key in Salt so that I don't have to do that manually either.
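For reference, that auto-accept step would boil down to running salt-key on the master once the minion checks in. The snippet below just builds and prints the commands rather than running them, and the hostname is illustrative:

```shell
#!/bin/bash
# Sketch of the key-acceptance step, run on the salt master.
# HOST is illustrative; in the deploy script above it would come from "$1".
HOST="testvm01"

# salt-key -l unaccepted  lists keys waiting for approval
# salt-key -y -a <id>     accepts a single key without prompting
list_cmd="salt-key -l unaccepted"
accept_cmd="salt-key -y -a $HOST"

echo "$list_cmd && $accept_cmd"
```

Wiring that into the deploy script would need a short wait for the minion to register before the accept runs.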

Salt States for the Homelab

Over the past year or so I have been playing around with SaltStack to automate as much as I possibly can in my lab, from updates to base VM configuration to lab-wide configuration changes (such as setting up SNMP for monitoring). Here is the collection of states I currently use to carry out that baseline setup. They are all called from within my top.sls, so at highstate they are all applied, which makes running updates suck a little less and helps prevent typos from making things take longer than necessary.
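For context, the top.sls wiring that applies these at highstate looks something like the following (the individual state file names here are illustrative, not my actual ones):

```yaml
# /srv/salt/top.sls -- apply every baseline state to every minion
base:
  '*':
    - users
    - repos
    - packages
    - snmp
```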


dword:
  user.present:
    - password: $1$hud1CQZ8$eBQ/vZhwxfgIbLP/UbQzA.

/etc/sudoers:
  file.append:
    - text:
      - "# added via salt"
      - "dword ALL=(ALL)       ALL"


  pkg.installed: []
  service.running:
    - enable: True


updates:
  pkgrepo.managed:
    - humanname: CentOS-$releasever - Updates
    - baseurl:
    - gpgcheck: 1
    - gpgkey: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7


  pkg.installed: []
  pkg.installed: []


And finally, my favorite of all: a working curl from within a state to hit an API target and kick off a discovery. In this case it's a discovery within EM7, but it can easily be modified as necessary.

# this will perform a curl on the target minion
  cmd.run:
    - name: >-
        curl -k -v -H 'X-em7-beautify-response:1' -u 'dword:somepass' "" -H 'content-type:application/em7-resource-uri' --data-binary "/api/discovery_session/1"