Diagnose Metric Anomalies with Single-Click Machine-Learning Analytics Magic

App Insights Analytics just released Smart Diagnostics, and it is by far the best application of Machine Learning analytics in the service to date.

I’ve posted before about some ML features such as autocluster and smart alerting, but this one really takes the cake as the most powerful and useful yet:

  • It’s super-duper easy to use, despite the huge complexity of the machine-learning algorithm behind the scenes.
  • It’s fast!
  • It can give you awesome answers that save you lots of investigation time and agony.

It works by analyzing a spike in a chart and surfacing the pattern in the data that explains the sudden change.

So let’s give it a go!

Analyze a spike in dependency duration

I run a service that has all kinds of remote dependencies – calls to Azure blobs and queues, outgoing HTTP requests, etc.

Wearing my DevOps hat, I run this simple query almost daily just to gauge the health of my service – a look at the 95th percentile of call duration by dependency type:

dependencies
| where timestamp > ago(1d)
| where operation_SyntheticSource == "" // ignore synthetic traffic such as availability tests
| summarize percentile(duration, 95) by bin(timestamp, 30m), type
| render timechart

The results look like this:


Right off the bat I can see something very funky going on in my HTTP calls. I want to know exactly what’s going on, but drilling into the raw data can be a messy business.

If only there were a way to analyze that spike with just one click…!

Fortunately, there’s a small purple dot on that spike. It signifies that this spike is available for analysis with Machine Learning (aka Smart Diagnostics).

Once I click on it, the magic happens.


Smart Diagnostics just told me that the cause of the spike in call duration was:

  • name: https://../observation
  • operation_Name: HealthyFlow
  • resultCode: 409
  • success: False


Furthermore, looking at the chart I see that calls without this pattern are totally flat in this time period.

This is strong evidence that the spike was caused by failed calls to this dependency. My work here is done in seconds.
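If you want to verify the pattern by hand, it’s easy to turn it into a filter yourself. Here’s a minimal sketch that charts only the calls matching the pattern Smart Diagnostics found (same dependencies table as the query above; I’ve left out the name filter since the name is truncated in the results):

dependencies
| where timestamp > ago(1d)
| where operation_Name == "HealthyFlow" and resultCode == "409" and success == "False"
| summarize percentile(duration, 95) by bin(timestamp, 30m)
| render timechart

If that chart reproduces the spike while everything else stays flat, the diagnosis is confirmed.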


App Analytics Machine Learning: Autocluster

Don’t freak out about the title. There are powerful machine-learning algorithms at work behind the scenes, but they are also super-duper easy to use, and their output is easy to understand right in the analytics query results.

I’ll start with Autocluster(). What this operator does is take all your data and classify it into clusters – basically bunching your data into groups. This is very useful in a few scenarios:

  1. Classify request failures – easily see whether all failures share a certain response code, occur on a certain role instance, belong to a certain operation, come from a specific country, etc.
  2. Classify exceptions – see the sketch right after this list.
  3. Classify failed dependencies.
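
As a minimal sketch of scenario 2, here is what clustering recent exceptions might look like (the projected columns are standard App Insights exceptions fields; pick whichever are relevant to you):

exceptions
| where timestamp > ago(1d)
| project type, outerMessage, cloud_RoleInstance, operation_Name
| evaluate autocluster()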

This is actually the same algorithm used by the Near Real-Time Proactive Alerts feature to classify the characteristics of a request-failure spike.

Let’s get to an example.

I just deployed my service, and checking the portal I see a huge spike in failed requests:



So I know something went terribly wrong; I just don’t know what.

Now, ordinarily in a situation like this I would just grab a random failed request and try to trace the reason it specifically failed. But that approach can mislead – more than once I happened to pick a failed request that was not at all indicative of the real problem.

So this is where Autocluster() kicks in.

requests
| where success == "False"
| where timestamp > datetime("2016-06-09 14:00")
| where timestamp < datetime("2016-06-09 18:00")
| join (exceptions | project type, operation_Id) on operation_Id // attach each failed request's exception type
| project name, cloud_RoleInstance, type
| evaluate autocluster(0.85)

This is basically a query for all the failed requests in the specific timeframe, joined to their exceptions. On top of it I run the “evaluate autocluster()” operator.

The result I’m expecting is that all these records get bunched into several groups, which will help me diagnose the common characteristics of my failures.

The results look like this:



So the autocluster algorithm went over all the data and found that:

  • 71% of the requests failed due to one specific exception.
  • The exception appears on all of my instances – see the “*” in the instance column.

Autocluster just diagnosed the problem in my service, going over thousands of records, in an instant! It’s easy to see why I think this is awesome.

FYI, Autocluster can take any column as input, even custom dimensions. Ping me in the comments if you have any questions about the usage.
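
For example, here’s a minimal sketch of clustering failed requests over a custom dimension – note that “Region” is a hypothetical dimension name, so substitute one your app actually sends:

requests
| where timestamp > ago(1d)
| where success == "False"
| extend region = tostring(customDimensions["Region"]) // "Region" is a hypothetical custom dimension
| project name, resultCode, region
| evaluate autocluster()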