
DevOps is here to stay: How can we help the DevOps culture succeed? 


Posted by Paola Rossaro on May 1, 2015 11:17:52 AM

Or, Why friends don’t let friends build data centers 

Five months ago I wrote about the growing consensus that DevOps is a leading trend that will disrupt IT. That view was confirmed this week at the ICEE3 Conference in Seattle, where several tech giants, from AWS and Microsoft to Google and HP, spoke about the importance of providing continuous value to customers as more and more business moves to the cloud, and about the dramatic changes that are resulting.

I spoke on a panel on Orchestration and Performance Management at the conference, but I also spent a lot of time listening to others. I heard again and again how projects that once took months or years are being completed in days, and with a fraction of the labor. Change is inevitable and change is happening. And businesses are increasingly understanding that it’s better to embrace these sweeping changes than to risk becoming irrelevant.

Michael Goetz, Chef Software’s Solutions Engineering Manager, introduced the concept of infrastructure as code, or configuration management: cloud infrastructure is programmable, and that is exactly why we need DevOps.
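
To make that idea concrete, here is a minimal sketch of the “declare the desired state, then converge” pattern, written in plain Python rather than Chef’s actual Ruby DSL. The package and service names are purely illustrative, not anything from the talk.

```python
# A minimal sketch of infrastructure as code / configuration management:
# desired state is declared as data, and an idempotent converge step applies it.
import subprocess

# Hypothetical desired state for one web server (illustrative only).
DESIRED_PACKAGES = ["nginx", "git"]
DESIRED_SERVICES = {"nginx": "running"}

def package_installed(name: str) -> bool:
    """Check whether a package is already present (Debian-style check)."""
    return subprocess.run(["dpkg", "-s", name], capture_output=True).returncode == 0

def converge() -> None:
    """Bring the machine to the declared state; safe to run repeatedly."""
    for pkg in DESIRED_PACKAGES:
        if not package_installed(pkg):
            subprocess.run(["apt-get", "install", "-y", pkg], check=True)
    for svc, state in DESIRED_SERVICES.items():
        if state == "running":
            subprocess.run(["systemctl", "start", svc], check=True)

if __name__ == "__main__":
    converge()
```

The point is that the description of the server lives in version control and can be reviewed, tested and reapplied like any other code.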

The move toward agile methodologies, continuous integration and continuous deployment – what Jason Kalich, GoDaddy’s Vice President of Cloud and Site Reliability Engineering, called continuous operations – is happening. And not just for startups and new enterprises, but also for traditional enterprises. These are profound changes that are impacting business at all levels, changing the way companies operate and build products.

Google Technical Program Manager Eric Johnson explained the significance of the cloud with this equation:

Cloud = Save + Scale + Succeed

And who wouldn’t want that sort of capability?

That’s why people at this conference were no longer debating the importance of DevOps but discussing how it can be broadly adopted. They weren’t asking, “Why should we be in the cloud?” but rather, “Why aren’t more people in the cloud?”

Several speakers pointed out how many enterprise-level companies are now moving to the cloud, either completely or, more often, with a hybrid architecture. They referred to the cloud as the “new normal” in business from both a technical and a cultural perspective. Amazon Web Services offered this recipe for a successful move to the cloud:

  1. Get buy-in from the decision maker.
  2. Educate.
  3. Experiment.

This was echoed by many others, including Karen Ng from Microsoft, who showed us how Visual Studio dropped the waterfall approach and its two-year development lifecycle in favor of a three-week agile cadence. Here’s the cultural part of that shift: entire buildings on the Microsoft campus were gutted to create open spaces where teams could communicate more directly, replacing the painstaking waterfall process with a culture of end-to-end quality.


And Todd Warren of Divergent Ventures showed us how a “two-pizza team” model, using agile methodologies from extreme programming to lean startup, can also be applied to the enterprise. A team of two developers in a garage, with a whole cloud available to them, can do in a few days what once took enterprises months or years.

So how are the cloud and a full DevOps approach affecting performance and orchestration? This was the topic of our panel discussion, where companies like Nouvola, GoDaddy, LiveAction and Midokura all agreed on the importance of performance and of finding effective tools that can help measure and optimize it. Whether you look at it from the end-to-end user perspective, supported by technologies like Nouvola’s, or from the network virtualization (Midokura) and network monitoring (LiveAction) point of view, performance is becoming product feature #1. For a business like GoDaddy, which provides technology to many small businesses that can easily switch to another solution, poor performance can trigger a very swift loss of business. There’s no time for slowdowns. That’s why DevOps is becoming the main resource for achieving better performance.

It’s also becoming clear that monitoring, while a useful first line of defense, is not enough. It can provide faithful insight into the system, but only after the fact. Synthetic testing, on the other hand, can help predict system behavior and prevent problems before they happen. It is fundamental in capacity planning, and therefore helps reduce costs. That’s why monitoring needs to go hand in hand with synthetic testing, to produce a full understanding of application performance and anticipate future behavior.
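
To illustrate what synthetic testing looks like in its simplest form (the endpoint and request count below are placeholders of my own, not anything discussed on the panel), a test drives known traffic at a system and records the latency distribution, so capacity limits show up in a test run rather than in front of customers:

```python
# A minimal, hypothetical synthetic test: generate known traffic against an
# endpoint and report latency percentiles before real users ever hit it.
import time
import statistics
import urllib.request

TARGET_URL = "https://example.com/"  # placeholder; point at the system under test
REQUESTS = 50                         # illustrative request volume

def run_synthetic_test() -> None:
    latencies = []
    for _ in range(REQUESTS):
        start = time.perf_counter()
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds

    latencies.sort()
    p50 = statistics.median(latencies)
    p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile (Python 3.8+)
    print(f"p50={p50:.1f} ms  p95={p95:.1f} ms  max={latencies[-1]:.1f} ms")

if __name__ == "__main__":
    run_synthetic_test()
```

Real tools layer much more on top of this (realistic traffic mixes, geographic distribution, trend analysis), but the principle is the same: measure under controlled load, then compare against what monitoring reports in production.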

There was less agreement on how to derive continuous value through containerization. Containers, and Docker’s approach in particular, are embraced by Shippable and Red Hat. CenturyLink sees the technology as effective in some scenarios, but still too new to be adopted so widely, especially when there are many interdependencies. Microservices are key to building an application on containers, and moving toward full containerization may require re-architecting existing solutions.
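
As a rough sketch of the microservice shape that containerization favors, here is a tiny, hypothetical service that owns one narrow API and could be packaged and deployed in its own container. The framework (Flask) and the endpoint are my own illustration, not anything prescribed at the conference.

```python
# A minimal, hypothetical microservice: one small HTTP API that would run in
# its own container and own its own data, independently of other services.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory store standing in for this service's own database.
ORDERS = {"1001": {"status": "shipped"}}

@app.route("/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    """Return a single order; users, billing, etc. live in other services."""
    order = ORDERS.get(order_id)
    if order is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Re-architecting an existing monolith into many small services like this one is exactly the work that makes full containerization a bigger step than it first appears.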

 

 

[WEBINAR REPLAY] Performance And Your Brand 

Topics: Technology, DevOps