No, in my opinion.
Containers are the wheels of the hybrid cloud, the latest generation of a technology that has been around for a decade.
In the '70s, operating systems changed the IT world by introducing the concepts of (hardware) abstraction and multitasking. Increasing processing power was the driving force behind the “all-in-one” (server) approach.
Today, workload orchestration provided by tools like Kubernetes and Nomad is the new face of the process scheduler found inside traditional operating systems. And the Internet is the driving force that shifts the focus from single-server processing power to the fast-scaling, distributed computing platform known as “the cloud”.
Problems and limitations always exist:
- Digital divide: a slow CPU yesterday, a slow Internet connection today.
- Privacy and data security: another big concern!
To address them, the hybrid cloud splits the problem along public/private infrastructure and on-/off-premises lines.
I think we have to take the leap: learn, design and implement something based on today’s technology to shape the business the way we want. This is what FOSS communities have always done!
I’d like a hybrid-cloud platform that, with some limitations, can run even on a single server (!) yet leverages service cloning, scaling, atomic upgrades and fault tolerance out of the box.
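For a concrete sense of what “out of the box” means here: on Kubernetes, cloning, scaling and atomic (rolling) upgrades are all expressed declaratively in a single Deployment manifest. A minimal sketch, where the name `web` and the image are placeholders of my own choosing:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # cloning + fault tolerance: 3 identical copies kept alive
  strategy:
    type: RollingUpdate    # atomic upgrade: replace pods gradually, roll back on failure
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Changing `replicas` (or the image tag) and re-applying the manifest is the whole scaling/upgrade story, and the same manifest runs unchanged on a single-node cluster.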
I’m afraid that if we don’t provide a cheap-enough platform for our services by ourselves, we’ll buy it from someone else at their price.
The hybrid-cloud approach shifts the focus from the OS/distro level to service orchestration. First and foremost, we have to plan the service lifecycle (install, configure, update, upgrade). As the distro becomes less important overall, we can pick an alternative distro to build a better firewall.
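As a sketch of how that lifecycle maps onto today’s tooling (assuming Helm here; `myservice` and `./chart` are hypothetical names):

```shell
helm install myservice ./chart                 # install
helm upgrade myservice ./chart --set replicas=3  # configure / update
helm rollback myservice 1                      # undo a bad upgrade
helm uninstall myservice                       # remove
```

The point is that the whole lifecycle lives at the orchestration layer, independent of which distro sits underneath.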
In the end (I’m a developer), I’d change our project’s technology to preserve our project’s goals.