Jonathan is a platform engineer at VictorOps, responsible for system scalability and performance. This is Part 1 in a 3-part series on system visibility, the detection part of incident management.

These days, with infrastructures spanning tens, hundreds, even thousands of running instances, piping a log file into less is no longer an acceptable means of log research and debugging. Instead, sending logs to an aggregation service like Sumo, Elastic, or Splunk is commonplace because searchability is king.

Unfortunately, the pursuit of searchability can lead to undesirable side effects like unreadable, inconsistent, and just plain ugly log statements. It invades our codebases with even more custom formatting (on top of string interpolation, etc.) that’s not only distracting but also hard to do well with anything less than superhuman string-formatting skills. In short, the side effects on our log statements can be detrimental.

Logging is just the starting point

First off, let’s bring a little context to this pursuit. At VictorOps, we use logging as a research and debugging tool. However, logging isn’t, and shouldn’t be, the primary heartbeat of your systems. That’s where metrics come into the picture, which we’ll discuss in Part 3 of this series. Before going there, let’s talk about where we started with logging at VictorOps. We needed some major improvements in this first line of troubleshooting for when things go bad.

As Dave Hahn, a senior SRE from Netflix, recently shared with us, “Be willing to have a problem before you solve it.” In line with that advice, we recently identified multiple problems relating to the research and debugging done through our logs. To top it all off, I noticed that our logging interfaces were not unified, and it became clear that it was time to make both our logging interface and log output great again.

I hope that our experience at VictorOps will give you ideas on how to improve logging at your organization.

The current state: how we use logs at VictorOps

Sumo Logic is our logging platform and we use it heavily throughout the development lifecycle.

There are four primary ways we use logs:

  1. To get visibility into what’s going on during releases. Through logs, we can see if there are errors that persist after a release. If so, there is probably a hole in our alerting – some problem that we aren’t yet monitoring.
  2. To create VictorOps incidents for relevant alerts. When we know that a particular log statement indicates a problem where someone needs to get involved, we hook a scheduled Sumo search up to the VictorOps platform to create an incident from it. The goal for most of these alerts is to migrate them to a metric-based alert instead of a log-based one – more on that in our metrics discussion in Part 3 of this series.
  3. To see how something is working in production. We might want to see how a new feature is behaving in production, so we’ll review the log statements. The production environment always provides the most valuable feedback because that’s where real customers have real accounts, alerts, users, and escalation policies. A feature may look fine in staging, but if a use case we didn’t test for shows up in production, the logs capture the details.
  4. To investigate high-dimensionality information. Organization, user, and API key (and for that matter, any sort of UUID) are all great examples of metadata that typically won’t be available in a metric and thus logging (or eventing) is where we’ll find that data.

We had three main players in our logging

Our Scala backend used three different logging frameworks. Some code used the SLF4J logging framework, which is widely used and provides a rich feature set. Other code, within Akka actors, used Akka’s actor logging, which has a scaled-down interface and feature set and is configured to use SLF4J. Some of our Play code used Play’s own logging, which is extremely simplistic and is also configured to use SLF4J. All three were backed by Logback, SLF4J’s native implementation. Here are some details:

SLF4J

SLF4J is likely the most widely used Java logging facade, with multiple implementations and a massive user base. Performance depends entirely on how you configure the appender you’re using. By default, Logback uses a synchronous appender, but you can easily configure an asynchronous one. A synchronous appender uses the calling thread to actually write the log statement to file/network, whereas an asynchronous appender lightens the load on the calling thread by simply handing the log statement over to the appender to write to file/network at some point in the future.
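
For illustration, here’s a minimal sketch of typical SLF4J usage from Scala (the BillingService class and its method are hypothetical). Whether the info call below blocks on I/O or merely enqueues an event is decided entirely by the appender configuration described above:

    import org.slf4j.LoggerFactory

    class BillingService {
      private val log = LoggerFactory.getLogger(classOf[BillingService])

      def charge(accountId: String): Unit = {
        // With Logback's default synchronous appender, this write happens on
        // the calling thread; behind an AsyncAppender it's queued and written later.
        log.info("charging account {}", accountId)
      }
    }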

Akka logging

Akka’s actor-based logging is event driven and is easily configured to use SLF4J. In the actor itself, you say log.info("this message"), and behind the scenes it sends an event to the system’s event stream and it’s done. Creating the log statement adds almost no overhead at the call site because the actual write happens somewhere else.
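
A minimal sketch of what that looks like in practice (the actor and its message are ours, for illustration):

    import akka.actor.{Actor, ActorLogging}

    class OrderActor extends Actor with ActorLogging {
      def receive: Receive = {
        case orderId: String =>
          // Publishes a log event to the actor system's event stream; the
          // SLF4J-backed logger actor performs the actual write elsewhere.
          log.info("processing order {}", orderId)
      }
    }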

Play logging

Play has its own simplistic logger that’s much more stripped down than the Akka logger and by default uses SLF4J. Play’s methods accept at most two arguments: the string that you’re logging and an optional exception. The most recent version (2.6.x) has added support for SLF4J markers.
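
Basic usage looks like this (the controller is hypothetical; note the exception rides along as the second argument):

    import play.api.Logger

    class SignupController {
      private val logger = Logger(this.getClass)

      def onFailure(e: Exception): Unit =
        // Both parameters are by-name, so neither the message nor the
        // exception expression is evaluated unless the error level is enabled.
        logger.error("signup failed", e)
    }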

Why change how we do logging?

Strategy concerns

These concerns have to do with the various strategies taken by these different logging interfaces.

  • Call-site performance: All SLF4J interfaces rely on the caller to provide pre-computed strings and arguments before the logger checks whether that log level (info, debug, trace, etc.) is enabled. There are simple ways around this, like Play’s interface, which uses a by-name argument for the string. This essentially creates an anonymous function that is executed only after the log level has been checked (see the first sketch after this list). For example, without by-name arguments, the statement below requires the mkString method to execute on a potentially large collection before the info method checks whether info-level log statements are enabled. log.info(s"Team $team has users: ${users.mkString(separator)}")
  • Conflicting interfaces: The largest effect of conflicting interfaces is developer confusion and frustration. The next problem is that it leads to incorrect log statements. If logs are to save you when things go awry, then an incorrect log statement is like a carabiner with a broken gate – it looks like a useful thing but is completely useless for the intended user. For example, below are the error methods from these three interfaces. Notice how the position of the Throwable argument changes? Now imagine working in a codebase where all three of these interfaces are in use (made concrete in the second sketch after this list). A little scary.
    • SLF4J: void error(String msg, Throwable t)
    • Akka: def error(cause: Throwable, message: String): Unit
    • Play: def error(message: ⇒ String, error: ⇒ Throwable)
  • Appender performance: All three of these have configurable backends and appenders, but it’s worth noting that any interface you use will need its configuration examined. Most default appenders are synchronous and therefore write the log statement to its destination (file, network, etc.) at the call site. However, this is easily changed by configuring an asynchronous appender, which improves call-site performance by requiring only that the string be built before it’s handed off to the appender, which writes the statement out in its own time.
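
To make the by-name point concrete, here’s a minimal sketch of a lazy wrapper over SLF4J (LazyLogger is our illustrative name, not a real library class):

    import org.slf4j.{Logger, LoggerFactory}

    class LazyLogger(underlying: Logger) {
      // `msg` is by-name: the string (and any mkString buried inside it) is
      // only built when info-level logging is actually enabled.
      def info(msg: => String): Unit =
        if (underlying.isInfoEnabled) underlying.info(msg)
    }

    object LazyLogger {
      def apply(clazz: Class[_]): LazyLogger =
        new LazyLogger(LoggerFactory.getLogger(clazz))
    }

With this wrapper, the statement log.info(s"Team $team has users: ${users.mkString(separator)}") costs almost nothing when info is disabled, because the interpolation (and the mkString) never runs.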

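And to see the conflicting-interfaces hazard side by side, here’s a sketch that exercises all three error methods (the object, method, and parameter names are illustrative):

    import akka.event.LoggingAdapter
    import org.slf4j.{Logger => Slf4jLogger}
    import play.api.{Logger => PlayLogger}

    object LoggerComparison {
      // The same "log this error with its cause" intent, three ways. Note how
      // the Throwable's position flips between interfaces.
      def reportFailure(slf4j: Slf4jLogger, akka: LoggingAdapter,
                        play: PlayLogger, cause: Throwable): Unit = {
        slf4j.error("charge failed", cause) // SLF4J: Throwable last
        akka.error(cause, "charge failed")  // Akka: Throwable first
        play.error("charge failed", cause)  // Play: Throwable last, by-name
      }
    }
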
Developer concerns

How did using multiple logging libraries affect the developers?

  • Too many decisions: Choosing between three different loggers for any given class.
  • Conflicting interfaces: From a developer perspective, this causes confusion and requires you to pay more attention to your logger than you really should.
  • Inconsistency: Having more than one logger in a class, which is clearly unnecessary, and naming the logger field inconsistently from class to class, e.g. log vs. logger.

Functionality needs

What functionality do the developers need for a maintainable codebase and effective log portfolio?

  • Unified interface: A single interface allows you to add new features in one place and enables the power of easily refactoring logging on a large scale.
  • Support for log variables: Extracting specific information from a log statement is easier if it’s been given special formatting. Once standardized, this can be utilized in our Sumo queries.
  • Implicit loggers for utility classes: Utility classes lack their own identity in terms of data flow. Implicitly passing in a logger, which has identifying information from the caller (its class and log variables), provides rich log statements within utility code.
  • Further consistency: This is icing on the cake – things like a very simple Logging trait to standardize the log field name, the logger name (used when writing the log statements), and the logger identity (based on log variables). See the sketch after this list.
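
To make these ideas concrete, here’s a rough sketch (Logging, StringUtil, and UserService are illustrative names, not our actual implementation, which Part 2 covers):

    import org.slf4j.{Logger, LoggerFactory}

    // Standardizes the field name (log) and the logger identity (the concrete
    // class name) for every class that mixes it in.
    trait Logging {
      protected implicit val log: Logger = LoggerFactory.getLogger(getClass)
    }

    object StringUtil {
      // Utility code has no identity of its own in the data flow; taking the
      // logger implicitly lets its statements carry the caller's identity.
      def truncate(s: String, max: Int)(implicit log: Logger): String = {
        if (s.length > max) log.debug(s"truncating value of length ${s.length}")
        s.take(max)
      }
    }

    class UserService extends Logging {
      def rename(name: String): String = StringUtil.truncate(name, 64)
    }

A single trait like this removes the which-logger decision from every class and gives large-scale logging refactors one place to act.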

Up next

Now that we’ve set the stage, in Part 2, we’ll explore how we addressed these concerns in order to make logging great again.