A look at the five key trends I see driving the industry over the next 12 months.
At GigaOm’s Structure 2014 conference, pretty much everybody was talking about Docker and containers. A container is essentially a private execution sandbox on a shared operating system. Containers have been around for a while, but Docker’s open platform has made them really sexy.
The current discussion is about moving individual workloads into the cloud as Linux containers and running them in large numbers on a single containerized server. Performance and scalability improve once you eliminate the need for virtual machines and the hypervisor layer. Google uses containers to run all of its services, including Gmail, and says containers have made those services more reliable. If Google, which knows a bit about scale, is singing Docker’s praises and using containers in its infrastructure, there must be something here.
Docker builds in shortcuts around the work developers normally have to do to get an application ready for deployment. It captures an application and its software dependencies in a Linux container. The components always fire up in their assigned sequence, and each component can be maintained separately. The container as a whole can be moved around like a shipping container, with standard handling methods and defined attachment points.
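To give a flavor of how an application and its dependencies get captured, here is a minimal Dockerfile sketch. It is purely illustrative; the base image, file names and start command are hypothetical, not taken from any real project:

```dockerfile
# Hypothetical Dockerfile for a small Python web service
# (image names and commands are illustrative examples)

# The base image supplies the runtime dependency
FROM python:2.7

# Bake the library dependencies into the image
COPY requirements.txt /app/
RUN pip install -r /app/requirements.txt

# Add the application itself
COPY . /app
WORKDIR /app

# The component always fires up the same way, wherever it runs
CMD ["python", "server.py"]
```

Building this (`docker build -t myservice .`) produces an image that, like a shipping container, can be moved to any Docker host and run unchanged.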
Once you do that, Mesos becomes the obvious choice for cluster management. An emerging cluster management platform, Apache Mesos tames the complexity of running clusters with distributed resources, and even entire datacenters. Mesos manages your cluster, including rebalancing the load should servers go down. Platforms like this are the next evolution in fine-grained, efficient resource management, delivering the performance today’s applications demand. As the need for performance, scalability and processing power grows, solutions like Mesos take center stage. It’s already used in production by Twitter and Airbnb, among others.
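For a concrete sense of what this looks like in practice, Marathon, a scheduler that runs on top of Mesos, accepts a short JSON description of a service and keeps the requested number of instances running across the cluster, restarting them elsewhere if a server dies. The names and numbers below are hypothetical:

```json
{
  "id": "/web-service",
  "cpus": 0.5,
  "mem": 256,
  "instances": 4,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/web-service:1.0" }
  }
}
```

Note what is absent: no host names, no capacity planning per machine. The operator states resource requirements; the platform decides where the containers land.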
All of these up-and-coming technologies underscore how virtualization has evolved over the last 10 years. When virtualization first appeared as a disruptive technology, hardware capacity was growing rapidly while applications remained relatively small. The key problem virtualization addressed was low server utilization: virtual machines let you pack multiple small applications onto those big servers.
But that’s all changed pretty dramatically. Many of today’s applications, like Hadoop and Spark, are designed as distributed systems from the get-go, while the hardware has become a commodity. The virtual machine model doesn’t make as much sense anymore. Rather than carving each machine into slices for many small applications, we have to aggregate all the machines and present them to the application as a single pool of resources.
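To make the “pool of resources” idea concrete, here is a toy sketch of first-fit scheduling over an aggregated pool of machines. The function and data names are invented for illustration; this is not the Mesos API, just the shape of the problem it solves:

```python
# Toy sketch: treat many machines as one resource pool.
# All names and numbers are illustrative; this is not any real scheduler.

def schedule(tasks, nodes):
    """Place each task on the first node with enough free CPU and memory."""
    placement = {}
    for name, (cpu, mem) in tasks.items():
        for node, free in nodes.items():
            if free["cpu"] >= cpu and free["mem"] >= mem:
                free["cpu"] -= cpu   # claim the resources on that node
                free["mem"] -= mem
                placement[name] = node
                break
        else:
            placement[name] = None   # no capacity anywhere in the pool
    return placement

# The application asks the pool for resources; it never picks machines.
nodes = {
    "node1": {"cpu": 4, "mem": 8},
    "node2": {"cpu": 4, "mem": 8},
}
tasks = {
    "web": (2, 4),
    "cache": (2, 4),
    "batch": (3, 6),
}
print(schedule(tasks, nodes))
```

A real cluster manager adds constraints, fairness and failure handling on top, but the inversion is the same: the scheduler spans the machines so the application doesn’t have to.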
Docker is disruptive, platform as a service (PaaS) is being disrupted, and Docker and Mesos are poised to become standards for developers, certainly for new clustered applications like Hadoop. The buzz at GigaOm Structure confirmed that these shifts are taking place, and once you understand how application demands have evolved in recent years, they make perfect sense.
Don’t hesitate to contact me directly with your comments and inputs via paola dot moretto at nouvola dot com. You can find me on Twitter at @paolamoretto3 or @nouvolatech.