Beware The Promise Land

The JavaScript world has embraced asynchronicity wholeheartedly. It understandably began handling its asynchronous behavior with callbacks. When those became unwieldy, the language gave us promises. We now have async/await to tame difficult promise chaining. I am currently on a project that doesn’t have easy access to async/await, so promises are the way forward for now.
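To make that progression concrete, here is a small sketch of the same asynchronous call written callback-style and then wrapped in a promise. The names (fetchUser, fetchUserPromise) are invented stand-ins for any asynchronous API, not from the original:

```javascript
// Callback style: error first, result second. setTimeout stands in
// for any real asynchronous work (database call, HTTP request, etc.).
function fetchUser(id, callback) {
  setTimeout(() => callback(null, { id, name: 'user' + id }), 0);
}

// Promise style: wrap the callback API once, then chain everywhere else.
function fetchUserPromise(id) {
  return new Promise((resolve, reject) => {
    fetchUser(id, (err, user) => (err ? reject(err) : resolve(user)));
  });
}

// Chaining keeps the code flat where callbacks would nest.
fetchUserPromise(1)
  .then((user) => fetchUserPromise(user.id + 1))
  .then((user) => console.log(user.name)) // logs "user2"
  .catch((err) => console.error(err));
```

With async/await, the same chain would read as ordinary sequential code, which is exactly the convenience the post says is out of reach on this project.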

I am working toward my JavaScript Promise merit badge, and today I made good progress in that endeavor. Unfortunately, that progress came through several hours of difficulty and pain. I had a simple misunderstanding about the different ways functions can be passed to promise chains. In hindsight, it’s funny how a simple misunderstanding, under the right circumstances, can lead to big problems.
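The post doesn’t spell out the exact mistake, but a classic pitfall of this kind, sketched here with invented names, is that .then() expects a function, and it is easy to accidentally pass the result of calling one instead:

```javascript
const log = [];

function step(name) {
  log.push('ran ' + name);
  return name;
}

// Bug: step('A') executes immediately, while the chain is being built.
// .then() receives its return value ('A'), which is not a function,
// so the handler slot is silently ignored.
Promise.resolve()
  .then(step('A'));

// Fix: hand .then() the function itself, or wrap the call in an arrow,
// so the chain controls when the work actually runs.
Promise.resolve('B')
  .then(step)           // called later, with the resolved value 'B'
  .then(() => step('C'));

// Synchronously, only 'ran A' has happened; 'ran B' and 'ran C'
// run on later microtask turns.
```

The broken version often still “works” by accident when the eagerly-called function happens to do the right thing at the wrong time, which is what makes this class of bug so painful to track down.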

JavaScript Memoization

When working on any sizable project, you’ll inevitably run into a situation where you need to improve the performance of the application. Often, you’ll notice multiple redundant calls to a database or external API that load down the external resource and cause unnecessary delays. One way to solve this problem is through caching.

There are varying degrees of, and strategies for, caching. Because I come from the Ruby on Rails world, I think of memoization as the lightest form of caching and the first defense against redundant expensive operations. If you’re familiar with any Rails projects, you’ll recognize the following memoize pattern:
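The same idea, compute once on first use and return the cached value afterward, carries over directly to JavaScript. A minimal sketch, with invented names and expensiveLookup standing in for any costly call:

```javascript
let callCount = 0;

// Stand-in for a database query or external API call.
function expensiveLookup() {
  callCount += 1;
  return { answer: 42 };
}

let _cached = null;

// Analogue of the Rails `@value ||= ...` idiom: the expensive work
// happens on the first call; every later call returns the cache.
function memoizedLookup() {
  if (_cached === null) {
    _cached = expensiveLookup();
  }
  return _cached;
}
```

One caveat worth remembering in either language: this caches for the life of the object (or module), so it only suits values that are stable for that lifetime.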

Non-Unique DOM IDs

Every good web developer knows that IDs on DOM elements should be unique. I say should because browsers faithfully render pages that break this rule. That said, as of version 63, released earlier this month, Chrome shows an error message in the developer console under certain circumstances. Through the new error message, I learned how difficult it can be to follow the unique-ID rule when using my beloved Ruby on Rails.
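As a quick illustration of why the rule matters to tooling, here is a rough sketch of a duplicate-ID check over rendered HTML. A regex is not a real HTML parser, but it is enough for a smoke test; the markup and function name are invented:

```javascript
// Scan an HTML string and return any id values that appear more than once.
function findDuplicateIds(html) {
  const seen = new Map();
  const idPattern = /\bid\s*=\s*["']([^"']+)["']/g;
  let match;
  while ((match = idPattern.exec(html)) !== null) {
    const id = match[1];
    seen.set(id, (seen.get(id) || 0) + 1);
  }
  return [...seen.entries()]
    .filter(([, count]) => count > 1)
    .map(([id]) => id);
}

// Example: two elements rendered with the same id.
const html =
  '<input type="checkbox" id="task_done"><input type="checkbox" id="task_done">';
console.log(findDuplicateIds(html)); // one duplicate: 'task_done'
```

Duplicate IDs matter because APIs like document.getElementById and fragment links (#anchor) only ever resolve to the first match, so the later elements silently become unreachable by ID.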

The New Error

One of my favorite features of Chrome has always been the developer tools. I consistently have the console open when working on my web projects. Earlier this month, I was greeted with a new error message on a page of our app that I use all the time. The message looked similar to the following:

Automatically Delete Old Data From Druid

We use Druid, an open-source metrics data store, for several of our metrics-collection needs at work. The nature of our runtime environment requires us to delete old data from Druid’s deep storage (Hadoop/HDFS in our case).

This use case seems to be fairly common, and every database handles it differently. Elasticsearch uses a separate curator program. In MySQL, we run scripts from scheduled jobs to accomplish it. If memory serves, Influx has the functionality built in. Druid also has it built in, but it was surprisingly difficult to get working. Here is how I eventually accomplished it.

Prevent Test Rot

Nobody likes buggy software. Because of this, we must test our code. Whether we conduct these tests manually or automatically, they are a crucial part of the development process. All too often, testing strategies and automated test suites begin to fail because they are not given the care they need to survive.

One of the most difficult things I’ve seen development teams struggle with over the years is how to maintain a solid test suite as their application grows. It doesn’t matter what the style is (TDD or otherwise); the tests begin to rot. In order to figure out how to prevent a destructive end, we must first understand what a good suite looks like.

Everything Has A Cost

In our lives and businesses, we have finite resources. These resources can be natural, such as wood, coal, or oil. They can be less tangible, such as time and money. They can even be quite abstract, like focus, inspiration, and motivation. Because they are not infinite, these resources need to be consumed wisely. All too often, we lose sight of this fact when writing software and designing systems.

As the stewards of software, we make lots of decisions that have potentially large impacts on the businesses we work for. We choose the languages to write the code in and the databases to store company assets in. We decide how much of the cloud to utilize and how much we should host ourselves. And on and on. All of these choices have a variety of costs and benefits that come with them.

Walking With Certificate Manager

Last week, I talked about my SSL setup with Cloudflare. There were a few things I didn’t love about that setup. It caused some funkiness with my Google OAuth2 authentication redirects, but more importantly, it terminated the SSL connection before reaching my application servers. In my ongoing pursuit of experimentation, I decided to get a certificate closer to home.

There were several goals I wanted to achieve with the transition.

  1. Get an SSL certificate on my servers at AWS (either on the ELB load balancer or on the EC2 instance itself)
  2. Release my dependency on Cloudflare
  3. Maintain https always (http should redirect to https)
  4. Maintain www redirect to root domain (https://ramekintech.com)
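Goals 3 and 4 boil down to one small redirect decision, sketched here framework-agnostically and assuming an ELB in front that passes the original scheme in the X-Forwarded-Proto header (the function and its names are illustrative, not from the actual setup):

```javascript
// Decide whether a request needs a redirect to the canonical
// https://root-domain form, and if so, to what URL.
function redirectTarget(headers, host, path) {
  const proto = headers['x-forwarded-proto'] || 'http';
  const bareHost = host.replace(/^www\./, '');

  // Redirect when the request is plain http, or aimed at the www subdomain.
  if (proto !== 'https' || host !== bareHost) {
    return 'https://' + bareHost + path;
  }
  return null; // already canonical; serve the request normally
}

console.log(redirectTarget({ 'x-forwarded-proto': 'http' }, 'ramekintech.com', '/about'));
// a Location header value: https://ramekintech.com/about
```

In practice this would back a 301 response: when redirectTarget returns a URL, reply with that value in the Location header; when it returns null, pass the request through to the app.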

As is typical with any technology transition, there were bumps in the road. I didn’t find everything I needed to get it all working in any one place, so I have put my pieced-together solution here.