We live in the exciting times of microservices and single page apps, cloud solutions and performant concurrent engines, agile programming and continuous delivery frameworks that spit out green glowing charts for every successful release. And yet, in most organisations there is a dark, shameful corner somewhere on a server, hosting a legacy part of the system that is still used for those 3 things that the company cannot survive without. In our case, that legacy part is called WHMCS. This is the story of how we migrated this monolithic beast onto our Kubernetes clusters and the lessons we learned from it.
The HTML spec has always been incomplete, leaving it to the browsers to decide many critical details of how particular elements should be rendered or function. In the not-so-distant past, this experimental nature gave plenty of developers headaches and panic attacks when they needed to provide a consistent user experience across different browsers. It also left a blazing trail of hacky code snippets, sprinkled with comments like “No idea why, but IE needs this”. But then the main browsers settled their wars, the W3C improved the specs, and bit by bit the state of web UX development became less of a nightmare. It’s 2020 now, so surely browsers handle the main HTML elements consistently, right?
Angular provides several well-documented patterns for communication between components. But what can you do on the off-chance that none of those match your particular case? Time to get creative and reach for one of those tempting solutions with DANGER on the label.
Angular 8 came with a big changelog of features, deprecations, and fixes. A major change was the way Angular lazy loads its components. The standard method for that (loadChildren) stopped accepting a string parameter and switched to an import function. For a particular case we had, this change gave us some food for thought and opened a lot of possibilities.
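To make the change concrete, here’s a minimal sketch of a lazy-loaded route before and after Angular 8. The `orders` path and `OrdersModule` name are made up for illustration:

```typescript
import { Routes } from '@angular/router';

// Hypothetical route config for a lazily loaded OrdersModule.
const routes: Routes = [
  {
    path: 'orders',
    // Pre-Angular 8: a magic "path#ExportedModuleName" string,
    // resolved by the Angular CLI's own tooling (now deprecated):
    // loadChildren: './orders/orders.module#OrdersModule',

    // Angular 8+: a standard dynamic import() that any bundler understands:
    loadChildren: () =>
      import('./orders/orders.module').then((m) => m.OrdersModule),
  },
];
```

Because the new form is an ordinary function returning a promise, you are no longer limited to a static module path — which is what opened up the possibilities mentioned above.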
The year is 2019, so everyone and their grandparents are running their sophisticated microservice solutions on Kubernetes. Like many, we run our Kubernetes clusters on Amazon EC2. Striving to keep our dev and prod environments as similar as possible, we run a single-node Kubernetes cluster inside a Docker-For-Mac VM on our macOS-based dev stations. All this has its pros and cons, which is a completely separate discussion. We recently hit a minor nuisance: for a troubleshooting session we needed to run a GUI tool from inside the isolated Kubernetes network. If you want to find a few possible solutions to this (probably) common case, please read on.
As developers and engineers we’re blessed to work with (almost) completely deterministic systems. Even so, debugging them can often become overwhelming and push us to question what we know or understand. We’ll walk through a troubleshooting adventure we recently endured, highlighting the mistakes we made and how we overcame the despair.