I recently noticed a theme in technical articles debating monoliths versus microservices, and surprisingly, how some teams gravitate back to monoliths for simplicity. Yet every customer engagement I've worked on over the past couple of years has been about making processes quick, simple, and easy.
For me, it raises questions: why do teams struggle? How are there still enormous challenges in this era of on-demand cloud computing and abundant automation and configuration management tools? Why do developers still struggle to deploy code to production efficiently, given well-established continuous integration and continuous delivery methods?
I've touched before on my issues with preferring specific products over others, and the notion that some tools introduce their own demands, burdens and problems, which in some cases can outweigh the difficulties they were meant to solve in the first place.
However, it's more than that. When you begin to decompose existing approaches in today's DevOps environments, it appears that the sum of all the small things results in death by a thousand cuts.
Let me paint a scenario with one engineer and one developer. Their goal, as set by the business, is simple. The developer needs to write code, test it, and push it to a live environment. The engineer needs to provide the infrastructure, platform and facilities for the developer to do that safely and efficiently. Between them, they should have it all worked out: a well-oiled machine that needs little maintenance and delivers change to applications as and when the business demands.
On a small scale, this is comfortably achievable. The engineer uses one version control platform, one configuration management tool, one Linux distribution, and one cloud provider. The developer uses one language, one packaging tool, one test framework, and the same version control platform with its pipeline functionality. All in all, it's a lean, stable and maintainable suite of tools, and the skills to use them.
Business demand grows. The team doubles, bringing in a second developer and a second engineer. The new developer uses a different language, which brings its own subtle requirements. The new engineer insists on an additional provisioning tool and recommends a further Linux distribution to cater for the new developer's language requirements.
Double the team again to eight, with four engineers and four developers, each with slightly different approaches, opinions and preferred technologies. Competencies don't double along with headcount, yet each slight deviation, additional tool, library or requirement heaps more straw onto the camel's back. It's the sum of all these intricacies that makes the overall complexity grow exponentially.
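A rough back-of-the-envelope sketch of why this growth is exponential rather than linear (my own illustration, with made-up categories): treat each tooling category as a dimension, and every extra option in a category multiplies the number of distinct stacks the team may have to understand, document and support.

```python
# Illustrative only: each tooling category (language, distro, provisioning
# tool, cloud provider, ...) multiplies the number of distinct stack
# combinations a team may have to support.
from math import prod

def distinct_stacks(options_per_category):
    """Number of possible tool combinations, one pick per category."""
    return prod(options_per_category)

# Two people, one agreed option per category: a single stack to support.
print(distinct_stacks([1, 1, 1, 1]))  # -> 1

# Doubling the team brings a second option in each of four categories:
print(distinct_stacks([2, 2, 2, 2]))  # -> 16
```

Headcount doubled, but the space of possible combinations went up sixteen-fold, which is the "straw on the camel's back" effect in numbers.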
Take flat-pack furniture, for example. Great thought and effort have gone into making the construction process as simple as possible for customers, achieved by providing concise instructions (good documentation) and lean tools, often just a single Allen key. Sure, we've all had a frustrating experience building a wardrobe at least once, but imagine if each part were non-standard and needed a different tool. It would take longer to complete and inevitably lead to more frustration.
I believe this concept is why one team can feel like a pleasure to work with, while another feels frustrating and demotivating. Sometimes the tool choices, and the unnecessary breadth of them, can pound the life and soul out of you.
I've known this for years, learning from both bad experiences and success stories. The point is, it's critical to instil good habits in teams: making concise documentation integral to everything that is done, and having the self-discipline to keep tools and products lean and consistent.
Equally, understanding economies of scale is essential. Jumping straight in with a container platform can be like trying to tap in a drawing pin with a 14lb sledgehammer. Instead, teams need to evolve as business demands grow organically. They need to evaluate new tools carefully and consider the combined complexity they may introduce. Additional tools add additional burdens, so think replacement rather than addition.
Learn how to assess and spot the tipping points. For example, do you keep doing things the same way to cope with growth, simply making teams bigger, or have you reached the point where the teams are big enough and need to work smarter?
If your business were digging holes, and you started with one hole digger equipped with a spade, there are only so many holes they could dig in a day. Hire a second, and you can dig twice as many holes, and so on. So balance the costs against the demand for holes. There will come a point when investing in a mechanical digger and training one hole digger to operate it makes more sense than hiring more and more people with spades. Maybe scaling up looks more like a fleet of diggers, with skilled operators and a couple of mechanics to maintain them. And it would make sense for all the diggers to be the same make and model, so maintenance is easier across the fleet and any operator can operate any digger. But nobody would start at that point just to dig a few holes!
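To make that tipping point concrete, here's a minimal break-even sketch. All the figures are invented purely for illustration; the point is the shape of the comparison, not the numbers.

```python
# Purely illustrative break-even sketch; every figure here is made up.

def cost_with_spades(holes_per_day, holes_per_digger=10, daily_wage=100):
    """Daily cost of meeting demand by hiring spade diggers."""
    diggers_needed = -(-holes_per_day // holes_per_digger)  # ceiling division
    return diggers_needed * daily_wage

def cost_with_machine(holes_per_day, machine_daily_cost=450,
                      holes_per_machine=100, operator_wage=150):
    """Daily cost of meeting demand with mechanical diggers and operators."""
    machines_needed = -(-holes_per_day // holes_per_machine)
    return machines_needed * (machine_daily_cost + operator_wage)

for demand in (10, 50, 100, 200):
    spades, machine = cost_with_spades(demand), cost_with_machine(demand)
    better = "spades" if spades <= machine else "machine"
    print(f"{demand:>3} holes/day: spades {spades}, machine {machine} -> {better}")
```

With these assumed numbers, spades win at low demand and the machine wins once demand is high enough, which is exactly the kind of tipping point worth spotting before scaling a team or a toolchain.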
Is monolith versus microservices the issue? Is one tool better than another? Not really. The truth is, at the end of the day, it doesn't matter. What matters is doing what makes sense in any given scenario, keeping things lean, consistent and manageable to meet delivery demands. It means making sure everyone is clear and singing from the same hymn sheet, and documenting every architectural and operational aspect. Moreover, it means learning to identify when scaling up is the next step, requiring a fundamental change in how the overall problem is approached.