One of the most common early goals of implementing DevOps best practices is a deep understanding of your systems in a stable state. However, this objective is not a “once and done” effort. It is important to circle back in some form (a feedback loop) as changes are introduced; it’s an ongoing exercise for the entire organization as processes, tools, and teams continuously improve over time.

In many cases during these beginning stages of DevOps transformations, agreeing on a starting point is where much of our time is spent. An unfortunate consequence of this is that without confidence in understanding where to start, oftentimes we never start at all. Analysis paralysis is a very real thing, especially for big organizational changes, and those who are typically risk-averse unfortunately fall victim to this far too easily.


Be wary if ROI (return on investment) is creeping into the conversation regarding adopting incident management tools and DevOps processes. This is an immediate indication that the collective mindset of management, or at least of the decision-makers, has not yet placed continuous improvement and learning as the highest priority. The decision-making ways of the past will not empower an organization to adapt and thrive in today’s competitive business environment.

(Should you disagree with this position, the rest of this article will provide you no real value.)

“Establishing a deep understanding of our current systems to formulate a baseline and feedback loop is the foundation. From there, we improve.”

Gauging the level of confidence (or lack thereof) in current methods of software delivery and maintenance, as measured by anomalies in development and operations efforts, helps shed light on where to start. By moving focus towards a deeper understanding of our infrastructure and codebase, a starting point begins to appear. The paralysis of decision-making begins to ease, and the “managing from a distance” behaviors, such as ROI mentions, stop carrying any meaningful weight.
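To make “baseline” concrete, here is a minimal sketch (not from the original article) of establishing a rolling baseline for a single metric and flagging observations that deviate from it; the metric, window size, and tolerance are hypothetical.

```python
from collections import deque
from statistics import mean, stdev

class Baseline:
    """Rolling baseline for a single metric (e.g., request latency in ms)."""

    def __init__(self, window=100, tolerance=3.0):
        self.samples = deque(maxlen=window)  # recent observations
        self.tolerance = tolerance           # std-devs allowed before flagging

    def observe(self, value):
        """Record a sample and report whether it deviates from the baseline."""
        anomalous = False
        if len(self.samples) >= 5:           # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) > self.tolerance * sigma
        self.samples.append(value)
        return anomalous

# Hypothetical feedback loop: latency readings in milliseconds.
baseline = Baseline()
for latency_ms in [120, 118, 125, 119, 117, 900]:
    if baseline.observe(latency_ms):
        print(f"latency {latency_ms} ms deviates from the baseline -- investigate")
```

In practice the baseline would come from your monitoring stack rather than a hand-rolled class, but the feedback loop is the same: observe, compare against expected behavior, and surface the deviation.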


From there, small incremental goals or what are known as “Target Conditions” can be set to begin the process of improvement. This focus on improvement is the key to unlocking so many of the concepts brought up in DevOps conversations. Continuous Integration and Continuous Delivery are possible only as results of a focus on understanding current conditions while placing a company-wide effort on striving towards Continuous Improvement.

Thus, a good starting point for any organization dipping its toe into the DevOps pool is on-call scheduling, incident management, and monitoring improvements. Understanding your organization’s existing methods of identifying and responding to abnormalities is one of the easiest and most stimulating first steps.

The immediate benefits of modern DevOps on-call practices are easy to identify and agree on:

— Anomalies are detected in real-time.

— The correct operators and engineers are alerted to actionable issues as quickly as possible.

— Critical context on what’s taking place gives responders exactly what they need in the moment, reducing both response time and cognitive load.

— A collaborative space to discuss context, diagnosis, and repair efforts reduces Time to Repair and increases situational awareness across teams and the organization about what is happening and the current “state of systems”.
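To illustrate the “right responder, right context” idea, here is a minimal, hypothetical sketch of alert routing; the schedule, service names, and notification hook are illustrative stand-ins rather than any particular tool’s API.

```python
from datetime import datetime, timezone

# Hypothetical on-call schedule: service -> list of ((start_hour, end_hour), responder).
ON_CALL = {
    "payments": [((0, 12), "alice"), ((12, 24), "bob")],
    "search":   [((0, 24), "carol")],
}

def notify(who, service, summary, runbook_url):
    """Stand-in for a real paging integration (SMS, chat, phone, etc.)."""
    print(f"paging {who}: [{service}] {summary} -- runbook: {runbook_url}")

def route_alert(service, summary, runbook_url):
    """Page the current on-call responder with actionable context attached."""
    hour = datetime.now(timezone.utc).hour
    for (start, end), person in ON_CALL.get(service, []):
        if start <= hour < end:
            notify(person, service, summary, runbook_url)
            return person
    notify("escalation-policy", service, summary, runbook_url)  # nobody matched
    return "escalation-policy"

route_alert("payments", "error rate above baseline for 5 minutes",
            "https://wiki.example.com/runbooks/payments")
```

The point of the sketch is the shape of the workflow, not the mechanics: the alert arrives with the service, a plain-language summary, and a runbook link already attached, so the responder starts with context rather than a bare page.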

However, what about the concerns and opportunities that aren’t obvious or immediate? What else is at stake? Can more gains be made simply by improving the way we monitor and manage on-call and incident management processes?

Opportunity To Learn

If there is a large gap between identifying a problem and solving it, learning becomes difficult. Identifying contributing factors becomes increasingly problematic as time passes. The trail to identifying everything involved in a disruption of service goes cold as operators, engineers, and the systems themselves move on to new tasks. Because of this, it becomes very difficult to learn, and the opportunity for improvement is missed.


Snowball effect

What may seem like a small or non-critical problem can quickly become a large one if left alone. As time ticks away, seemingly insignificant issues accumulate and grow into large, complex problems that have dangerous long-term impacts and are much more difficult to diagnose and repair. In some situations this can happen very quickly, and a minor incident may become a “Sev-1” outage in no time at all.

Stay On Track

Many of us follow Agile Development principles and operate in short development cycles. Shortened sprints are designed and planned to prevent disruptions and context switching, which can be very detrimental to our efficiency. At the same time, sprint planning establishes targets and goals with the caveat that the team can quickly change course if the need arises. By responding to disruptions quickly, we have the greatest chance of achieving those goals.

Waiting to deal with a problem until you’ve finished the code or configuration you are currently working on may very well result in the realization that those efforts (and code) were wasted. Feedback from your current system (in the form of a problem) may be full of information indicating that the piece of code you are writing won’t work under the current conditions of your system. Or worse, that it doesn’t provide value to the service you are building.

Leveraging monitoring, alerting, and smart incident management software means having a pulse on your systems. That feedback loop is essential to staying on track for the greater good of the services you are engineering, even if that means changing course quickly and often. That is, after all, what Agile and DevOps are designed to provide.

Consistency

The quality of your service is extremely important not only to your end users, but to the business as a whole. The service you provide IS the brand of the company, and not placing quality of service as a top priority can have severe negative consequences. System resiliency and reliability as a means toward “high availability” are paramount in establishing credibility. Consumers of your product have very little tolerance for regular or lengthy outages. Communicating to your end users that quality of service is extremely important to you, yet not responding to problems as they occur, is saying one thing and doing another.

The message you are sending is inconsistent at best and indicates trouble within the organization (likely at the management level) that priorities are not in alignment. Being consistent is one of the most important things to focus on for any organization. Your customers are paying attention to that consistency. Are you?

Downstream consequences

Many of us are aware of the benefits of loosely coupled and independent processes or systems. The arguments for a microservices architecture are hard to ignore. Its approach means degraded performance in one service can have little to no impact on others. If there is a problem in one small area of the system, it doesn’t have negative consequences for the system as a whole.


However, unless your entire service is part of a distributed microservice ecosystem, services are, in fact, tightly coupled, and a problem in one area can quickly lead to problems elsewhere. The idea of a rarely used, non-value-adding part of your infrastructure or codebase taking down your entire service is frustrating for some, but it is what keeps many in Operations roles from sleeping well at night. Not being aware of or alerted to an issue may mean catastrophic failure when a small, less significant service takes out a large and critical one.
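As a hedged illustration of containing that failure mode, the sketch below wraps a non-critical dependency in a timeout and fallback so it cannot drag a critical request path down with it; the function names and limits are hypothetical.

```python
import concurrent.futures

def fetch_recommendations(user_id):
    """Non-critical dependency; imagine it occasionally hangs or errors out."""
    raise TimeoutError("recommendation service unavailable")

def handle_request(user_id):
    """Critical path: degrade gracefully instead of failing the whole request."""
    recommendations = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_recommendations, user_id)
        try:
            recommendations = future.result(timeout=0.2)  # hard cap on waiting
        except Exception:
            # Log/alert on the degraded feature, but keep serving the core response.
            print(f"recommendations degraded for user {user_id}; core path continues")
    return {"user": user_id, "recommendations": recommendations}

print(handle_request(42))
```

The design choice is to let the optional piece fail fast and loudly, so it still shows up in monitoring and generates an alert, while the core response continues to be served.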

The approach you and your organization take to incident management is a key indicator of how much you value continuous improvement. If the culture of your team or company does not place a high value on continuously learning and striving for improvements in processes, tools, and individuals, then any effort to roll out DevOps will fail. This is why the ‘culture of DevOps’ comes up so frequently, and why it frustrates many who hold strongly to ‘old-view’ methods of managing development and operations.

Continuous improvement is at the heart of it all. Empathizing with our end users and those involved in engineering and maintaining our systems means that nothing is ever “done” or “good enough”. Everything must continuously improve. Establishing a deep understanding of your systems provides insight on where to focus efforts of improvement.

Failing to place understanding and learning as the highest priority means imminent failure of the organization and the products or services it provides.