Will IT Soon Stop Caring About Infrastructure?
Infrastructure has historically been one of the most talked-about topics in IT, but will recent innovations change that? Will IT professionals eventually stop caring about infrastructure? And if businesses no longer think about infrastructure, how should they approach monitoring?
IT is one of the fastest-changing industries today. While it doesn’t grab headlines like self-driving cars, black holes, or smart speakers, insiders can vouch for how quickly responsibilities, hardware, and even software become outdated. To borrow language from the startup world, innovations regularly disrupt IT standards, and the new standards that replace them make networks more efficient to monitor and manage.
The latest innovations might even be calling into question the focus our industry places on infrastructure. Will it soon become something none of us even think—let alone care—about?
Recent IT Innovations May Be Making Infrastructure Obsolete
Many working IT experts remember when IT went virtual. We transitioned from a single bare metal server running a few applications to a single physical machine hosting many virtualized “servers.” By virtualizing the server’s underlying hardware, admins could run many servers on one box. Suddenly, we were able to scale workloads horizontally across N servers, with far less hassle.
Then we began adopting containers. Instead of virtualizing the hardware and running a full-blown operating system on each virtual machine (which can be a pain to update), we run containers atop the operating system of a host or node. As the industry began to use containers, we gained the ability to run workloads on top of a single operating system, further abstracting away the hardware resources necessary to run workloads at scale.
These nodes or hosts can themselves be virtual machines rather than bare metal. Instead of moving your entire OS and application, you can simply move or spin up new instances of the application. This, in turn, leaves a smaller footprint per workload and lets us balance load across multiple servers.
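To make that concrete, here’s a minimal sketch using the Docker SDK for Python (the real “docker” package); the image name myapp:latest and the replica count are hypothetical. It launches several instances of the same application on one host OS instead of provisioning a whole VM per workload.

```python
# Minimal sketch: run N replicas of one application image on a single host.
# Assumes a local Docker daemon and the "docker" PyPI package; the image
# name "myapp:latest" is a stand-in for your actual application image.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Rather than one OS per workload, start lightweight app instances that
# all share the host's kernel.
replicas = [
    client.containers.run(
        "myapp:latest",   # hypothetical application image
        detach=True,      # run in the background
        name=f"myapp-{i}",
    )
    for i in range(3)
]

for container in replicas:
    print(container.name, container.status)
```

In practice, an orchestrator such as Kubernetes handles this replica management for you; the point is that the unit being copied is the application, not an operating system.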
The latest shift is towards serverless computing. Containers allow for one more level of abstraction: functions as a service (FaaS). Many experts call this “serverless” because FaaS eliminates the need for someone within your organization to maintain a server at all. You can just write cloud functions and have them execute effortlessly on fully managed, on-demand infrastructure.
It feels “serverless” because no one at your organization has to attend to a server; a cloud provider, generally, will manage that for you.
FaaS allows software developers to write only their business logic and upload it to a FaaS service with a public cloud provider. From there, you set up an event-driven architecture to trigger that business logic, and that’s it: you’re done!
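As a concrete illustration, here’s a minimal sketch of such a function, following AWS Lambda’s Python handler convention; the greeting logic is a hypothetical stand-in for real business logic.

```python
# Minimal FaaS sketch using AWS Lambda's Python handler signature.
# The provider invokes handler(event, context) whenever a configured
# event source fires (an HTTP request, a queue message, a file upload).
import json

def handler(event, context):
    # "event" carries the trigger's payload; we never touch a server.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying it is essentially: upload the code, point an event source at it, and let the provider handle the servers, scaling, and patching.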
There is a rather heated debate about what is better: serverless or containers. But that’s for another article.
Instead, let’s talk about how all these recent developments impact monitoring.
It’s Time to Stop Caring About Infrastructure
This shift in innovation—away from hardware and toward the cloud—helps paint the picture of our not-so-distant future. Soon, maybe within the next few years, monitoring won’t be concerned with infrastructure anymore.
Think about it. The further we remove ourselves and our applications from bare metal, the less any of us will naturally care about it. In the same way that FaaS makes it possible to all but forget about our servers, a continued movement away from hardware means we may also forget about infrastructure.
When you run a totally serverless application on a public cloud, you can’t monitor the underlying infrastructure, even if you want to. There’s no way to access metrics from the network, servers, or containers that run your code. Instead, all you can monitor is the code’s performance.
Not to mention, DevOps teams running their application in containers across a well-built Kubernetes cluster (or a managed cluster in the cloud) shouldn’t have to think about the hardware either. That type of management is increasingly outsourced to the cloud.
Hardware is becoming a mere resource commodity. Running these systems in the cloud has become so cheap that the need for local/on-prem hardware goes down every day. Cloud providers, with dedicated infrastructure partners and teams, can run the hypervisor or container software at scale for millions of users much more efficiently than a single organization ever could.
How Will This Change Monitoring?
This brings up a big question: How should businesses monitor if they no longer think about infrastructure?
The answer is complex insofar as it may change from business to business. What we do know is that businesses should start focusing on instrumenting applications, rather than concerning themselves with the infrastructure that runs them.
The term that has recently become popular for this process change is observability.
Like the phrase “DevOps,” observability still lacks a universally agreed-upon definition. Generally speaking, the term encompasses what most of us think of as traditional monitoring, but it rests on three types of telemetry: metrics, logs, and traces, which many consider the “three pillars” of observability. Together, the pillars support quick analysis and troubleshooting, which enables us to infer the current state of a system.
To make an application observable, many argue that storing high-cardinality data, for the purpose of delving into problems as they occur, will become the new standard. The result: we can ask our systems highly specific questions and expect highly specific answers and solutions.
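As a sketch of what that instrumentation can look like, the snippet below uses the OpenTelemetry Python API (the opentelemetry-api and opentelemetry-sdk packages) to attach high-cardinality attributes, per-user and per-cart IDs, to a trace span; the span name, attribute names, and business function are all illustrative.

```python
# Minimal observability sketch with OpenTelemetry for Python.
# Assumes the "opentelemetry-api" and "opentelemetry-sdk" packages; the
# ConsoleSpanExporter just prints spans so the example is self-contained.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def handle_checkout(user_id: str, cart_id: str) -> None:
    # High-cardinality attributes (unique per user/request) are what let
    # us later ask very specific questions about a single transaction.
    with tracer.start_as_current_span("checkout") as span:
        span.set_attribute("app.user_id", user_id)
        span.set_attribute("app.cart_id", cart_id)
        # ... business logic goes here ...

handle_checkout("user-8675309", "cart-42")
```

With attributes like these stored, a backend that supports high-cardinality queries can answer “what happened to this one user’s checkout?” rather than only reporting aggregate averages.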