Automate to a maximum!
With application platforms that deploy and scale applications within seconds, why should data services be any different? Applications rely on data services, yet stateful data services pose a vastly different operational challenge.
Any organization on a path of digital transformation is likely to adopt modern software development paradigms, including microservice-based architectures and the data services that emerged from the NoSQL movement. Both paradigms increase the demand for both more data services and more data service instances.
As discussed later, most application platforms require significant investment and therefore need a certain size to benefit from economies of scale.
It is safe to assume, then, that the demand for data services will be many times larger and more complex than in the past.
At the same time, there's growing pressure to innovate at speed. All modern software development and operation paradigms share the same goals: accelerating innovation and reducing time to value.
For application platforms, the answer was to fully automate the entire application lifecycle. Exactly the same is required for data services.
Automating the full lifecycle of data services is a key success factor for any platform.
Systematic automation speeds up the development cadence by lowering operational friction.
With on-demand self-service, developers no longer have to wait for sysops to provision a database or message queue. This eliminates significant waste from the value chain.
Well-written, well-tested automation also reduces the human errors that result from manual execution.
Automation has become a competitive advantage. Whether it is to attract developers by offering a higher level of service or to directly benefit from increased productivity by using an application platform internally, automation sets platforms apart. It’s a major difference for developer productivity whether a database cluster is provisioned within a week, a day, an hour or a few minutes.
Therefore, the goal is to push the boundaries of data service automation. This is achieved by automating what has never been automated before.
Exploring new automation territory means fully automating the entire lifecycle of data services, a mission goal worth examining in a separate chapter. In short, the automation needs to cover all operational aspects, including:
- The on-demand provisioning of dedicated data service instances
- The creation of new automation releases when new data service versions arrive
- The delivery of new releases into customer environments
- The update of provisioned service instances
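As a sketch, the four aspects above could map onto a lifecycle contract like the following. All names here are illustrative assumptions, not a specific platform's API; a minimal in-memory fake shows how the operations compose:

```python
from abc import ABC, abstractmethod


class DataServiceLifecycle(ABC):
    """Illustrative lifecycle contract; method names are hypothetical."""

    @abstractmethod
    def provision_instance(self, plan: str) -> str:
        """On-demand provisioning of a dedicated service instance."""

    @abstractmethod
    def build_release(self, service_version: str) -> str:
        """Create a new automation release for a new data service version."""

    @abstractmethod
    def deliver_release(self, release: str, environment: str) -> None:
        """Deliver a release into a customer environment."""

    @abstractmethod
    def update_instance(self, instance_id: str, release: str) -> None:
        """Update an already provisioned service instance."""


class InMemoryLifecycle(DataServiceLifecycle):
    """Minimal in-memory fake to illustrate the flow."""

    def __init__(self):
        self.instances = {}
        self.releases = {}

    def provision_instance(self, plan):
        instance_id = f"instance-{len(self.instances) + 1}"
        self.instances[instance_id] = {"plan": plan, "release": None}
        return instance_id

    def build_release(self, service_version):
        release = f"automation-{service_version}"
        self.releases[release] = {"version": service_version, "delivered_to": []}
        return release

    def deliver_release(self, release, environment):
        self.releases[release]["delivered_to"].append(environment)

    def update_instance(self, instance_id, release):
        self.instances[instance_id]["release"] = release
```

A real implementation would sit behind a platform's service marketplace, but the shape of the contract, covering provisioning, release creation, delivery, and updates, is the point here.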
Protect your Investment!
Just like any software development, automation is an investment, and it's wise to approach it so that the resulting value lasts as long as possible.
The first rule of automation is: the more often a task is repeated, the faster automation will amortize. Automation that is executed regularly, across many scenarios, will reveal edge cases and thus improve in quality.
This is also where automation benefits from economies of scale, as scale usually leads to more repetitions.
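The amortization rule can be made concrete with a rough break-even calculation. The model and all numbers below are illustrative assumptions, not figures from any real project:

```python
import math


def break_even_runs(build_cost_hours, manual_hours_per_run,
                    automated_hours_per_run=0.0):
    """Repetitions after which automation pays off.

    Illustrative model: a one-off build cost is recovered through
    the time saved on every automated run.
    """
    savings_per_run = manual_hours_per_run - automated_hours_per_run
    if savings_per_run <= 0:
        raise ValueError("automation must save time per run to amortize")
    # Round up: the investment is only recovered after a whole run.
    return math.ceil(build_cost_hours / savings_per_run)


# Hypothetical numbers: 200 hours to automate a task that takes
# 4 hours manually and about 5 minutes as a self-service request.
print(break_even_runs(200, 4.0, 5 / 60))  # pays off after 52 runs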
But you can’t automate it all!?
In fact, it usually is possible; the real question is whether it is worth the effort and trade-offs. A recurring challenge is finding the right interaction model with the user. The goal of enabling users to always self-service means interacting with them wherever autonomous decision making is not possible or meaningful. In these cases, smart interaction with the user is important: provide the information needed to enable informed user decisions.
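One way to picture that interaction model is automation that defers to the user at defined points, attaching the context needed for an informed decision. The scenario and all names below are hypothetical sketches, not an existing product's behavior:

```python
from dataclasses import dataclass


@dataclass
class PendingDecision:
    """A point where automation defers to the user, with context attached."""
    question: str
    options: list
    context: dict  # information enabling an informed decision


def plan_upgrade(current_version: str, target_version: str):
    """Hypothetical policy: minor upgrades run autonomously, while a
    major version jump may break compatibility, so the automation
    asks the user instead of deciding on its own."""
    if current_version.split(".")[0] != target_version.split(".")[0]:
        return PendingDecision(
            question=(f"Upgrade {current_version} -> {target_version} "
                      "is a major version jump. Proceed?"),
            options=["proceed", "stay", "schedule-maintenance-window"],
            context={"breaking_changes_expected": True,
                     "downtime_estimate_minutes": 15},
        )
    return "auto-upgrade"  # minor upgrades proceed without asking
```

The design choice is that the automation never silently guesses on a risky step; it either acts autonomously where safe or surfaces a well-contextualized decision, keeping the user in a self-service flow.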
Admittedly, there's an exception to every rule, and in fact not every data service use case should be automated. The point here is that resistant, long-established employees tend to underestimate where meaningful automation can be applied successfully. But there's no doubt that there are scenarios where automation will never pay off.
As a general rule of thumb, applying the Pareto principle helps: automate for the average data service, covering around 80-90% of your database use cases.
A closer look at the remaining 10-20% may reveal the special-special-high-load-large-volume use case. Alternatively, it may represent a legacy use case: a monolithic app that holds far too many responsibilities and therefore became special-special. Spending effort to automate such a legacy use case would be wasteful; refactor the application instead.
In the upcoming article of this series, you will learn about lifecycle automation and the lifecycle of data services.