Achieving High Efficiency DevOps – 5 Lessons Learned

If we look at the current state of software delivery, things are complex, constantly changing and demand broader and deeper skills. What worked 10 years ago is no longer relevant as we move into the age of containers, platforms and software-defined infrastructure. The target keeps moving, and many are left behind, still struggling with “the basics” of effective DevOps.

At its very core, DevOps started as a way to bring development teams and operations teams closer together in a more collaborative, efficient and empowering way. Many organizations have not been able to move past the silo approach or the control barriers inherent to their culture. This basic, continued struggle shows us that DevOps and software delivery practices are multi-dimensional by their very nature.

Lesson Learned 1: 

DevOps is a "constantly evolving" set of practices across your delivery processes, technology, people and time.

Software delivery is and always will be “constantly evolving”. As technology advances, the options expand. Drive focus to the following areas of understanding and measurement:

  • Delivery processes
  • Technology/Tools
  • People and the organizational culture
  • Time measurement for each step of the process (to identify bottlenecks; see the sketch after this list)
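
To make the time-measurement bullet concrete, the sketch below aggregates per-step durations from past pipeline runs and flags the likely bottleneck. It is a minimal illustration only; the step names and durations are invented placeholders, not data from any particular pipeline.

```python
# Minimal sketch: aggregate per-step durations from past pipeline runs and
# flag the likely bottleneck. Step names and durations are hypothetical.
from collections import defaultdict
from statistics import mean

# (step name, duration in seconds) gathered from previous runs
run_history = [
    ("checkout", 12), ("build", 340), ("unit-tests", 95), ("deploy", 60),
    ("checkout", 14), ("build", 362), ("unit-tests", 101), ("deploy", 58),
]

durations = defaultdict(list)
for step, seconds in run_history:
    durations[step].append(seconds)

averages = {step: mean(values) for step, values in durations.items()}
bottleneck = max(averages, key=averages.get)

for step, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{step:<12} avg {avg:6.1f}s")
print(f"Likely bottleneck: {bottleneck}")
```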

It is critically important, now more than ever, that we have the right people guiding our strategy. The landscape is continuously changing, so we must remain flexible, embrace an iterative approach to technology adoption and stay open to new ideas and methods.

A broader level of experience and understanding across the various dimensions will allow everyone to make better decisions that:

  • Drive greater revenue
  • Reduce cost
  • Accelerate delivery speed
  • Improve predictability and quality

Lesson Learned 2: 

For DevOps to be highly efficient, each specific area of delivery needs to be understood, visible, measured and optimized.

The current delivery landscape continues to get more complex. We are constantly adding technologies and modifying processes in an attempt to solve the problems in our delivery pipelines. Complexity increases exponentially when we add components/services that are expected to work in a specific context, only to end up replacing them with other solutions.

It is important that when we do our solution research, we keep note of all the options available to us. Intense focus on security, performance and integrations puts development pressure on smaller open source companies, on top of consumer demands for providers to document product baselines before release. Open source services/components can be notorious for this trial-and-error architectural pattern: the component or service may not have had rigorous testing to confirm its configuration settings or performance thresholds.
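
Before adopting such a component, a quick smoke test against our own thresholds can replace some of that trial and error. The sketch below is purely illustrative: the endpoint URL, sample size and latency threshold are hypothetical placeholders, not values from any particular product.

```python
# Minimal sketch: spot-check a candidate component's latency against an
# acceptance threshold before adopting it. URL, request count and threshold
# are hypothetical placeholders; replace them with your own baseline targets.
import statistics
import time
import urllib.request

CANDIDATE_URL = "http://localhost:8080/health"   # hypothetical endpoint
REQUESTS = 50                                    # small smoke-test sample
P95_THRESHOLD_MS = 200                           # assumed acceptance limit

durations_ms = []
for _ in range(REQUESTS):
    start = time.perf_counter()
    with urllib.request.urlopen(CANDIDATE_URL, timeout=5) as resp:
        resp.read()
    durations_ms.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(durations_ms, n=20)[18]   # 95th percentile
print(f"median={statistics.median(durations_ms):.1f}ms  p95={p95:.1f}ms")
print("PASS" if p95 <= P95_THRESHOLD_MS else "FAIL: exceeds assumed threshold")
```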

For each problem we are trying to solve in our DevOps/Delivery pipeline, we need to make sure we follow this approach:

Understand – Thoroughly define the problem we are trying to solve: researching the technical options, capturing our requirements, inputs and outputs, thresholds, configuration parameters, time to implement, cost, maintenance and licensing.

Visibility – In many cases we do not have proper visibility into the component, its input or output. Did it succeed or fail? Does it integrate with other areas, send parameters, trigger something? Does it have logs? Do we need it on a dashboard?

Measure – Do we know if it is performing as expected? Is it operating slowly or is it acceptable? Do we have any trended data to analyze its performance? During idle, load, spikes? Do we know its failure thresholds?

Optimize – Do we have ways to make the component or process more efficient? Have we captured and trended our options? After we invest in automation, we can begin optimizing our components and processes. Have we documented its ideal configuration? Have we load or performance tested the component?
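
As one illustration of bringing visibility and measurement together, the sketch below wraps a pipeline step so its duration and success/failure are captured as a structured record that a log shipper, dashboard or trending tool could ingest. The step names and commands are hypothetical placeholders, not a prescribed implementation.

```python
# Minimal sketch of instrumenting a pipeline step for visibility and
# measurement: capture start time, duration and success/failure, and emit
# a structured JSON record. Step names and commands are hypothetical.
import json
import subprocess
import time
from datetime import datetime, timezone

def run_step(name: str, command: list[str]) -> dict:
    """Run one delivery-pipeline step and return a structured result record."""
    started = datetime.now(timezone.utc).isoformat()
    t0 = time.perf_counter()
    proc = subprocess.run(command, capture_output=True, text=True)
    record = {
        "step": name,
        "started_at": started,
        "duration_seconds": round(time.perf_counter() - t0, 3),
        "succeeded": proc.returncode == 0,
        "exit_code": proc.returncode,
    }
    # One JSON line per step; these records can be trended on a dashboard.
    print(json.dumps(record))
    return record

if __name__ == "__main__":
    # Hypothetical steps; replace with your real build/test/deploy commands.
    run_step("unit-tests", ["echo", "running unit tests"])
    run_step("build-image", ["echo", "building container image"])
```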

Lesson Learned 3: 

To hit the "sweet spot" of delivery best practices, each area will contain specific technology, processes and people that are aligned to the desired outcome for that part of the process. Additionally, we must consider our current level of maturity for that area or capability.

There are two dimensions that are required to hit the “sweet spot” of delivery in DevOps:

  • Right tool for the job (tech, process and people)
  • Organization maturity (in this area)

Technical influencers become biased toward the tools already in their toolbox. They like to work with tools that are familiar to them, saving the time and effort of installing and configuring something new. Instead, let us constantly challenge ourselves to try new, tested and proven technologies. Another option is to put this burden on software vendors: make them stand up the technology and work through your primary use cases. At the end of the day, we need to make sure we are using the right tool for the problem we are trying to solve.

Consider the organizational maturity and capabilities in the problem area. Maybe our organization really struggles with testing or automation (in general). Do we have the right resources participating to make experience-based decisions? Are the teams operating in silos? Do the teams lack the ability to empower our developers? Are we experienced in containers and container architectures? Hybrid cloud? Do we have over-zealous security practices, locking us down so far that we can’t innovate or test new technology? Are we completely outsourced, so our teams struggle with business context and are continuously adding new resources?

Being realistic about our level of expertise and challenges will allow us to seek out guidance from others with more specialized skill and experience to help us understand our options.

Lesson Learned 4: 

Not all technologies, processes and vendors are created equal (or capable across all areas).

Likely, there will be a combination of technologies, vendors and methods, each relating to its area of expertise. This can be called a “Best of Breed” solution architecture. Companies can struggle with a Best of Breed approach. Depending on the solutions, it can seem more expensive than choosing a single vendor for multiple areas. When companies choose a single vendor, they can get significant discounts on their licensing. The flip side, however, is that they are now locked into a solution that does not fit well with the requirements and the desired outcome for an area of delivery. Our visibility into, and continuous measurement of, the value we are receiving is essential.

Whether you are choosing open source or commercial vendors, it is important to understand each solution’s “sweet spot”. We need to carefully weigh our requirements and constraints with the solution we are implementing. The process needs to be iterative.

This is very important to understand. Your architecture should not be developed using a big bang approach; it needs to be iterative. Architectures should not be rigid. You should continuously improve and test components/services/solutions. If the entire architecture is driven by cost or any other single requirement, you will likely struggle to achieve your desired outcomes. There are many factors to be considered as you assemble the combination of technologies, vendors and methods you will use. Achieving high efficiency in DevOps is truly multi-dimensional.

Lesson Learned 5: 

As DevOps continues to evolve and delivery areas continue to commoditize, new levels of maturity and automation become possible for organizations.

DevSecOps, XOps, AIOps – every year there seems to be a new term or angle on what we should all be doing. Maybe some groups were feeling left out. Be careful chasing the bleeding edge; balance the chase with taking advantage of areas of commoditization that accelerate and automate your processes. For example, Build Automation used to be an area of investment; now it is built into Continuous Integration/Continuous Delivery (CI/CD) solutions for free (and likely leveraging Jenkins in some way).

Achieving new levels of possibility: AIOps is beginning to provide significant value at the intersection of Kubernetes, Event Management and Problem/Incident Management. It uses Artificial Intelligence and Machine Learning to predict workloads and to baseline and trend system or component pressure. The system, without human intervention, has the ability to see problems and attempt to fix them BEFORE they occur, all the while sending notifications to the right Incident and Operational Groups to let them know what is happening. Amazing, powerful and extremely helpful!
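
As a toy illustration of that idea (not a real AIOps product), the sketch below baselines a metric with a rolling window and triggers hypothetical remediation and notification hooks when the metric trends well above its baseline, before a hard failure threshold is crossed. The metric stream, threshold and the remediate()/notify() hooks are all invented for illustration.

```python
# Toy illustration of the AIOps idea: baseline a metric, detect an upward
# anomaly early, and trigger remediation/notification before the failure
# threshold is reached. All values and hooks are hypothetical placeholders.
from statistics import mean, stdev

FAILURE_THRESHOLD = 90.0   # e.g., percent memory pressure (assumed)
WINDOW = 10                # samples used for the rolling baseline

def remediate(value: float) -> None:
    print(f"remediation triggered at {value:.1f} (e.g., scale out / restart pod)")

def notify(value: float) -> None:
    print(f"incident/ops groups notified: pressure trending toward {FAILURE_THRESHOLD}")

history: list[float] = []

def observe(value: float) -> None:
    # Compare the new sample against the baseline of the previous WINDOW samples.
    if len(history) >= WINDOW:
        window = history[-WINDOW:]
        baseline, spread = mean(window), stdev(window)
        if baseline + 3 * spread < value < FAILURE_THRESHOLD:
            remediate(value)
            notify(value)
    history.append(value)

# Simulated metric samples drifting upward toward the failure threshold
for sample in [40, 41, 39, 42, 40, 41, 43, 42, 41, 40, 44, 47, 55, 70, 82]:
    observe(float(sample))
```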

If your company is struggling with Hybrid Cloud Software, Enterprise Architectures or DevOps, feel free to reach out.

About Greg S. Graf:

When Greg is not working in the trenches of Fortune 100 companies’ software delivery and hybrid cloud architectures for IBM Software Group, he is helping others with their business and technical strategies.

  • Software Technology Thought-leader, Author, Speaker and Coach
  • Unique background of business and technology
  • Deep experience in driving revenue, business value, growth and go-to-market strategies
  • Certified technical architect, software specialist, Kubernetes/containers, DevOps, hybrid cloud architectures, automation, edge computing
  • 20+ years of technology and business experience (from coding and consulting to software technical leadership)
  • Known for thought leadership, having a “finger on the pulse”, seeing ahead of the market curve, innovator

Why Greg Likes Working for IBM

  • IBM invests in your success and provides guidance, strategic workshops, MVPs, and other enablement to prove out the technology
  • IBM supports the greatest depth and breadth across your delivery platform and required delivery goals/capabilities
  • IBM has been a leader in technology approach/strategy since the beginning (100+ years and counting)
  • IBM believes in open standards and avoiding vendor lock-in
  • IBM understands and supports your organization's existing investments (AWS, MS Azure, Google, etc.) and will integrate with them
  • IBM assists in creating a Cost Benefit Analysis and financial models to help build the business case for its value

