Clean Slate

Due to circumstances I don’t really need to dive into, I am paring back expenses and thus culling things like relatively unused DO droplets. The Hockey-Info site is already gone, and last to go was my generic Linux jumphost I used for all manner of things, from IRC/ZNC to testing out quick and dirty Python or running the odd port scan. This is just a little farewell to Asgard, a droplet that served me well for 7 solid years and was finally laid to rest with more uptime than is probably secure to have.

Filesystem created:       Tue Jul  8 19:01:40 2014
 07:03:35 up 653 days,  9:35,  2 users,  load average: 0.00, 0.01, 0.05

Cleaner Python

So for the longest time I was big on writing my if/else statements in a very old school way, which for the most part was fine. However, the other day I was working with someone at work and they showed me perhaps the most glorious way to turn what is at minimum a 4-line block of code into a single line.

Here is how I used to do it prior to this enlightening pair-programming moment.

if some_value == expected_value:
    result = True
else:
    result = False

It may not seem like a huge savings, but when your files easily hit 200 lines or more, being able to condense a little bit adds up over time. So here is how to bust this down to a single line of code, and it’s fantastic:

result = True if some_value == expected_value else False

It seems so plain, but I guess being stuck in my old ways of writing things, it never occurred to me that it could be condensed so much. That being said, I don’t know if I would use this method all the time, as it could be confusing in example code intended for relatively new programmers, but for production things at work this is the bee’s knees!
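As a small aside of my own here: when the result really is just True or False, the comparison itself is enough, and the conditional expression earns its keep once the branches produce non-boolean values. A quick sketch (the variable values are made up for illustration):

```python
some_value, expected_value = 5, 5

# The one-liner conditional expression from above:
result = True if some_value == expected_value else False

# When the outcome is just a boolean, the comparison alone suffices:
result_short = some_value == expected_value

# The conditional expression pays off when the branches are not booleans:
label = "match" if some_value == expected_value else "mismatch"

print(result, result_short, label)
```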

Finding Data in the NHL API

So I decided to retake my NHL API video to try to improve things (now that I know how to get my audio better) and maybe make it a little easier to digest. Turns out even a three-and-a-half-minute video takes hours to get just right, and somehow I am still not totally satisfied with it. I think this may be something I spend more time on in the near future to try to provide people with a little bit of educational material. In the meantime, feel free to go have a watch of Finding Data in the NHL API over on Odysee.

PaaS Frustrations

So after several days working with the Support folks at Digital Ocean, they finally nailed down why my deploys were never getting new code. I am still not clear on why it was an issue to begin with, but I figured it needs to be documented to maybe save someone else the trouble.

The Details

My code is a mix of Python (and some HTML/JS) using Flask; the Python version in this specific situation was 3.8.2 (at least locally, anyway). I am using the Digital Ocean App Platform with a domain hosted through them as well (hockey-info.online). Docker version locally was 20.10.2, build 2291f61, on a Fedora 32 based system.

The Problem

No code changes I made after Jan 15th seemed to be pulled when doing a deploy; deploys would trigger properly, however they never got the correct code, just the correct commit sum. I tried manual deploys, I tried automatic, I searched the internet high, low and in between, but couldn’t figure it out.

Solution (Eventually)

I finally broke down and opened a ticket with DO on a Wednesday; after going back and forth with their support people and trying a lot of things, they finally informed me of a solution the Tuesday after. It seems that in the Dockerfile I was doing a RUN git clone https://gitlab.com/dword4/hockey-info.git . which ran the git command to pull code down outside of the methods used by the App Platform. The fix turned out to be as simple as replacing that line with COPY . /hockey-info and then pushing the code up.
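In Dockerfile terms the change looks something like the sketch below (the surrounding lines of my actual Dockerfile are omitted; only the swapped instruction matters):

```dockerfile
# Before: cloning inside the image build fetched code on its own,
# outside the App Platform's deploy process
# RUN git clone https://gitlab.com/dword4/hockey-info.git .

# After: copy in the code the platform has already checked out
COPY . /hockey-info
```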

I am still not entirely sure why it works this way; there appears to be some kind of git caching going on, but I have no real insight as to why, probably due to how the App Platform is built.

Monitoring on AWS with CloudWatch Agent and Procstat

Objective: Install CloudWatch Agent with procstat on an EC2 instance and configure a metric alarm in CloudWatch

One of the first issues I ran into was with IAM policies, or the lack thereof. Specifically, it was the managed policy CloudWatchAgentServerPolicy which needed to be added. The telltale sign that you forgot to add this policy is an error message in the Agent logs, seen below:

2020-08-17T22:46:18Z E! refresh EC2 Instance Tags failed: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: failed to find credentials in the environment.

The procstat plugin is fortunately already part of the Agent from install, but it still needs to be configured. In order to do this you have to add a configuration file specific to your monitoring needs. For old school admins, the easiest way to think of procstat is that it basically ties into the ps tool. It’s like doing a `ps -ef | grep` to find something about a running process.

[root@lab-master amazon-cloudwatch-agent.d]# pwd
/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d
[root@lab-master amazon-cloudwatch-agent.d]# cat processes
{
    "metrics": {
        "metrics_collected": {
            "procstat": [
                {
                    "pattern": "nginx: master process /usr/sbin/nginx",
                    "measurement": [
                        "pid_count"
                    ]
                }
            ]
        }
    }
}
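One gotcha worth a sanity check: a config file that isn’t valid JSON will be rejected by the agent. Here is a quick sketch of validating the structure before restarting the agent; it parses the same config inline as a string for illustration, but on the instance you would `json.load()` the real file instead:

```python
import json

# The procstat config from above, inline for illustration; on an instance,
# open and json.load() the file under amazon-cloudwatch-agent.d instead.
raw = '''
{
    "metrics": {
        "metrics_collected": {
            "procstat": [
                {
                    "pattern": "nginx: master process /usr/sbin/nginx",
                    "measurement": ["pid_count"]
                }
            ]
        }
    }
}
'''

# A parse error here means the agent would reject the file too
cfg = json.loads(raw)

# Confirm the procstat section is where the agent expects it
patterns = [p["pattern"] for p in cfg["metrics"]["metrics_collected"]["procstat"]]
print(patterns)
```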

This will get us far enough that we can now see values in the Metrics view of CloudWatch. Once we have data there, it’s time to construct a metric alarm. My goal was to use Terraform, even though it’s less painful to do in the AWS console.

resource "aws_cloudwatch_metric_alarm" "nginx-master" {
  alarm_name = "nginx master alarm"
  comparison_operator = "LessThanThreshold"
  evaluation_periods = 1
  datapoints_to_alarm = 1
  metric_name = "procstat_lookup_pid_count"
  namespace = "CWAgent"
  period = "300"
  statistic = "Average"
  threshold = "1"
  alarm_description = "Checks for the presence of an nginx-master process"
  alarm_actions = [aws_sns_topic.pagerduty_standard_alarms.arn]
  insufficient_data_actions = []
  treat_missing_data = "missing"
  dimensions = {
    "AutoScalingGroupName" = "some-ASG-YXI8VDT6MBE3"
    "ImageId"       = "some-ami"
    "InstanceId"    = "some-instance-id"
    "InstanceType"  = "t3a.large"
    "pattern"       = "nginx: master process /usr/sbin/nginx"
    "pid_finder"    = "native"
  }
}

The alarm creation proved to be a lot harder than I had expected, taking up several hours. I had to re-create things in my lab setup twice and do a Terraform import. The problem turned out to be that the dimensions {} block is not optional, despite what the Terraform docs say. Had they said the fields were all required, I probably would have saved days of time.

Polish Work

In the process of working things out I hard-coded a lot of values in the dimensions {} block. Naturally that is not good practice, especially with infrastructure-as-code, so I will need to rework it to use variables instead. Also, the alarm names should utilize the Terraform workspace values for better naming.

Hockey-Info Release v1.0

https://gitlab.com/dword4/hockey-info

This is probably as “complete” as the Hockey-Info project may ever get, so I figured it’s worth an actual blog post (and a git tag at v1.0 too). A little over a year ago I was fed up with the NHL’s mobile website and decided I wanted to create something that gave me all the information without the associated fluff and annoyance. I had already spent time documenting the NHL API and using that knowledge to build smaller things like a sopel plugin for hockey stats. So I started throwing things together with Flask and suddenly hockey-info.online came to be!

Features

  • News Headlines
  • Scores view (only shows today’s games/scores)
  • Standings (grouped by Conference/Division and sometimes showing Wildcards)
  • Schedule (day, week and month views, team specific or league wide)
  • Team Views
    • Regular Season
    • Playoffs
    • Previous Games
  • Game details
    • Box Score information
    • Goals by period with details
    • Shots on goal by period
    • Penalties by period with details

Important Details

Built in Python 3 with Flask, Requests, and various other bits and bobs. It runs either by itself with the usual process to launch a Flask application, or if you are so inclined there is a Dockerfile that can be used to launch it with a little less pain.

There is some caching going on with requests_cache, however this is by no means optimal. I would like to eventually do more work with a proper web cache, but for now this works since the site is so lightweight. I make use of CDNs for all the JavaScript, so that also helps speed things up (and more importantly, moves the burden off of wherever you choose to run it in the first place).

Timezone awareness is non-existent; I basically converted everything from whatever the NHL stores (UTC time/date) to Eastern Time, since that is where I live. I try to be very privacy conscious and I couldn’t justify the time expenditure on methods of determining user location for time conversion. If someone wants to suggest or contribute it, PRs are welcome.

Legal Considerations

I have zero affiliation with the NHL beyond sometimes buying their branded merchandise and viewing their games when I get the chance. There are no ads served by the application (or even any decent way to add them without altering all the templates) so I make no money from anything (not even the blog).

Quick Code: Repo List

So I ran into an interesting problem over the weekend: I forgot my 2FA token for Gitlab at home while I was away. My laptop’s SSH key was already loaded into Gitlab, so I knew I could clone any of my repositories if only I could remember the exact name. That of course turned out to be the problem: I couldn’t remember the name of a specific repository that I wanted to work on. I even tried throwing a bunch of things at git clone to try to guess it, and still had no luck. Enter the Gitlab API:

#!/usr/bin/env python3
import requests
from tabulate import tabulate

personal_token = 'asdfqwerzxcv1234'
user_id = 'dword4'

base_url = 'https://gitlab.com/api/v4/'
repo_url = 'users/' + user_id + '/projects'

full_url = base_url + repo_url + '?private_token=' + personal_token

res = requests.get(full_url).json()
table = []
for project in res:
    name = project['name']
    name_spaced = project['name_with_namespace']
    path = project['path']
    path_spaced = project['path_with_namespace']
    if project['description'] is None:
        description = ''
    else:
        description = project['description']
    #print(name,'|', description)
    table.append([name, description])

print(tabulate(table, headers=["name","description"]))

This is of course super simplistic and does virtually no error checking, fancy formatting, etc. However, now with a quick alias I can get a list of my repositories even when I do flake out and forget my token at home.

Terraform – Reference parent resources

Sometimes things get complicated in Terraform, like when I touch it and make a proper mess of the code. Here is a fairly straightforward example of how to reference parent resources in a child.

├── Child
│   └── main.tf
└── main.tf

1 directory, 2 files
$ pwd
/Users/dword4/Terraform

First, let’s look at what should be in the top-level main.tf file. The substance of it is not super important, other than to have a rough idea of what you want/need:

provider "aws" {
  region = "us-east-2"
  profile = "lab-profile"
}

terraform {
  backend "s3" {}
}

# lets create an ECS cluster

resource "aws_ecs_cluster" "goats" {
  name = "goat-herd"
}

output "ecs_cluster_id" {
  value = aws_ecs_cluster.goats.id
}

What this does is simply create an ECS cluster with the name “goat-herd” in us-east-2 and then output ecs_cluster_id, which contains the ID of the cluster. While we don’t necessarily need the value displayed to us, we need the output because it makes the data available to other modules, including child objects. Now let’s take a look at what should be in Child/main.tf:

provider "aws" {
  region = "us-east-2"
  profile = "lab-profile"
}

terraform {
  backend "s3" {}
}

module "res" {
  source = "../../Terraform"
}

output "our_cluster_id" {
  value = module.res.ecs_cluster_id
}

What is going on in this file is that it creates a module called res and sources it from the parent directory where the other main.tf file resides. This allows us to reference the module and the outputs it houses, enabling us to access the ecs_cluster_id value and use it within other resources as necessary.

Managing a Growing Project

I am no Project Manager in even the loosest sense of the word. Despite that, I find myself learning more and more of the processes of PM. This is especially true when projects start to expand and grow. Specifically, I am speaking about the NHL API project I started almost two years ago. This led me down the rabbit hole that is permissions and how to manage the project overall going forward. The project’s roots are very rough; even today I still generally commit directly to master. Now the repository has grown to over 70 commits, two distinct files and 17 contributors.

Balance

I am constantly trying to be cognizant of becoming overly possessive of the project. While it may have started as a one-man show, I want and enjoy contributions from others. The converse of worrying about becoming possessive is that there are times when steering is necessary. One of the instances that comes to mind is the suggestion of including example code. The goal of the project is documentation, so I declined such suggestions. Unmaintained code becomes a hindrance over time and I don’t want to add that complexity to the project.

Growth

There is often a pressure to grow projects, to make them expand and change over time. It’s a common thing for businesses to always want growth, and it seems that mentality has spread to software. Something like the NHL API is a very slow-changing thing; just looking at the commit history shows this. Weeks and months will go by without new contributions, or even me looking at the API itself. I dabbled with ideas such as using Swagger to generate more appealing documentation. Every time I tried to add something new and unique, I realized it felt forced. This ultimately forced me to accept that growth will not be happening; the project has likely reached its zenith.

Looking Forward

The next steps are likely small quality-of-life things such as the recent Gitter.im badge. Things that make it easier for people to interact but don’t change the project overall. My knowledge of the API makes for fast answers so I try to help out when I am able.

Home Gardening in the Apocalypse

So if you listen to the news and social media, we are in a very slow collapse it seems, where things are never going back to normal, but we totally shouldn’t panic just yet because they are going to devalue the living hell out of our currency with multiple massive multi-trillion-dollar stimulus packages. Well, this got me thinking that if it were to go as bad as that nagging little voice says, then perhaps it’s time to actively start sustaining myself with food. This leads me to a home garden in my super limited space at the townhouse. Between the fiancée and I, we love eggs, so we had about 3 cartons lying around that we repurposed into vessels for starting our seeds.

Redundancy is key, 3 pods of each seed type and multiple seeds per pod

This is only a portion of what we plan to plant, basically a phase one with the seeds we were able to source locally; the larger shipment of seeds has been slowly winding its way to us from across the US and should be here within a day or so. The overall plan includes a mix of common herbs such as Rosemary, Cilantro and Basil alongside edibles like Kale, Cucumbers and Tomatoes to help reduce our costs at the grocery store. Less time spent at the store means less potential exposure, and it saves us money while increasing freshness to a level not really possible from a grocery store.

The current layout of the seed starting tray

And as an aside to the garden project, there is my long-term fruit tree effort. Last year the fiancée and I bought a Key Lime tree and a Meyer Lemon tree at the local garden store and put them out front of my townhouse. They flourished since it was the middle of summer, with tons of light and regular rains, but as I moved them inside the Meyer tree took a turn for the worse, losing a lot of its leaves when I moved. I tried more intensive watering in case the dry conditions of the house were evaporating more water than I realized. I rotated it a few times in hopes that the sunlight coming through the window would pull it back to a normal vertical position, but that also failed to improve its condition. Eventually I even resorted to a boost from some fertilizer stakes a few weeks ago, but those failed to really change things. Finally I stumbled upon the plant light and timer that I had packed up when I moved, so I relocated the tree so I could point the light at it, and set the timer up for about a 12-hour sun cycle. Within a few days I was greeted with unmistakably fresh shoots in that vibrant green you can’t mistake, as well as possibly more fruit developing!

It is amazing what a little supplemental sunlight can do, and I am hoping that the 12h cycle I have the seeds on ushers forth even more green in the house, so that eventually we will have fresh herbs, veggies and fruits in a few months.