Back-fill Missing Dates With Zeros in a Time Chart

A common ask I’ve heard from several users is the ability to fill gaps in your data in Kusto/App Analytics/Data Explorer (lots of names these days!).

If your data has gaps in time, the default behavior of App Analytics is to “connect the dots” rather than reflect that there was no data in those periods. In many cases we’d like to fill those missing dates with zeros.

The way to handle this is the “make-series” operator. This operator exists to enable advanced time-series analysis on your data, but we’ll just use it for the simple use-case of filling missing dates with a value of 0.

The added sophistication is converting the series back into a *regular* tabular result using “mvexpand”, so we can continue to transform the data as usual.

Here’s the query (thanks Tom for helping refine it!):

let start=floor(ago(3d), 1d);
let end=floor(now(), 1d);
let interval=5m;
requests
| where timestamp > start
| make-series counter=count() default=0 
              on timestamp in range(start, end, interval)
| mvexpand timestamp, counter
| project timestamp=todatetime(timestamp), counter=toint(counter)
| render timechart


Monitoring and Scaling Azure Functions

Everybody loves Azure Functions.

My team recently deployed a production service using Azure Functions as the back-end backbone. I’d like to share some lessons and tips we learned along the way.

We’re using Azure Functions on the Consumption plan – which basically means the platform scales out and in as required, without our intervention. But that doesn’t mean you can just forget about scaling.

Monitor! Monitor! Monitor!

Azure Functions has really great integration with App Insights. It makes it really easy to get near real-time data on what’s going on in your app.

Coupled with Log Analytics, this is extremely valuable to get going right from the beginning. Skip this step at your own peril.

Here’s a little taster of what you can get – a very useful query that’ll give you a feel for your app’s performance – the 95th percentile request duration, by request name:

requests
| where timestamp > ago(7d)
| summarize percentile(duration, 95) by name, bin(timestamp, 1h)
| render timechart

You Gotta Have Context

We’re using App Insights as our complete monitoring platform – meaning we’re calling App Insights from the Function code itself – we use it to trace logs, events and dependencies.

So if all your application monitoring data is in App Insights, it’s super-duper useful to be able to correlate all the telemetry from one request (request, traces, dependencies, events) under one context.

The Azure Functions App Insights integration already sets the operation_Id field on all requests to the invocation Id of the execution context. What we did is set that operation Id on *all* telemetry items. You can’t really use a telemetry initializer because you don’t control the telemetry client instance. Here’s what we did instead – store the invocation Id, and then stamp it on every telemetry item:

using System;
using System.Configuration;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class ApplicationInsightsTracer
{
    // One shared, lazily-created client - creating a TelemetryClient per invocation is wasteful
    private static readonly Lazy<TelemetryClient> TelemetryClient =
        new Lazy<TelemetryClient>(InitTelemetryClient);

    public string OperationId { get; set; }

    private static TelemetryClient InitTelemetryClient()
    {
        var telemetryClient = new TelemetryClient(TelemetryConfiguration.Active)
        {
            InstrumentationKey =
                ConfigurationManager.AppSettings["APPINSIGHTS_INSTRUMENTATIONKEY"]
        };
        return telemetryClient;
    }

    public ApplicationInsightsTracer(Guid contextInvocationId)
    {
        this.OperationId = contextInvocationId.ToString();
    }

    public void TrackEvent(string name)
    {
        var eventTelemetry = new EventTelemetry(name);
        // Stamp the invocation Id on every item so it correlates with the request telemetry
        eventTelemetry.Context.Operation.Id = OperationId;
        TelemetryClient.Value.TrackEvent(eventTelemetry);
    }
}

Then, in the function code:

[FunctionName("MyFunc")]
public static async Task Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "My")] HttpRequestMessage req,
    TraceWriter log,
    ExecutionContext context)
{
    var tracer = new ApplicationInsightsTracer(context.InvocationId);
    ...
}
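
Once everything shares the operation Id, pulling all the telemetry for a single invocation becomes trivial in Analytics. Here’s a rough sketch (the operation Id value is a placeholder – paste in a real one):

union requests, dependencies, exceptions, traces
| where timestamp > ago(1d)
| where operation_Id == "<invocation-id>" // placeholder - paste a real operation_Id here
| project timestamp, itemType, name
| order by timestamp asc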

Also – make sure you *don’t* call Flush in your function code. In our tests it added about 200ms to every function invocation. Flushes happen periodically on their own.

Roles Matter

Our service has several different roles in it:

  • A high-usage HTTP API which is utilized with very high concurrency.
  • A job scheduling HTTP API which gets called about once an hour.
  • A service-bus queue based worker role that does long, heavy data crunching.

At first, when we were just getting started with Azure Functions, we shoved all these functions into one Azure Functions resource. Wrong!

When you put them all together, they scale together! So whenever the long-running processing scaled out to more instances, it scaled out the HTTP roles too and adversely affected their performance.

Different roles, with different scaling requirements, should be separated into separate Azure Functions resources.

If you’ve got the App Insights integration set up, here is a query that we used a lot to help us understand what exactly is scaling in our service – a distinct count of role instances per hour in our deployment:

requests
| where timestamp > ago(7d)
| summarize dcount(cloud_RoleInstance) 
            by bin(timestamp, 1h), cloud_RoleName
| render timechart


Different roles should also have different properties – things like the client affinity cookie should be disabled/enabled on a per role basis.


Cool uses for the top-nested operator

There’s a pretty nice operator in Kusto (or App Insights Analytics) called top-nested.

It basically allows you to do a hierarchical drill-down by dimensions. Sounds a bit much, but it’s much clearer when looking at an example!

So a simple use for it could be something like getting the top 5 result codes, and then, for each result code, drilling down to its top 3 request names.

requests
| where timestamp > ago(1h)
| top-nested 5 of resultCode by count(),
  top-nested 3 of name by count()

So I can easily see which operation names are generating the most 404’s for instance.

This is pretty cute, and can be handy for faceting.

But I actually find it more helpful in a couple of other scenarios.

The first is getting a chart of only the top N values. For instance, if I chart my app usage by country, I get a gazillion series for all the different countries. How can I easily filter the chart to show just my top 5 countries? Well, one way is to run the queries separately and add a bunch of where filters to the chart…

But top-nested can save me all that work:

let top_countries = view()
{
  customEvents
  | where timestamp > ago(3d)
  | top-nested 5 of client_CountryOrRegion by count()
};
top_countries
| join kind=inner
  (customEvents
    | where timestamp >= ago(3d)
   ) on client_CountryOrRegion
| summarize count() by bin(timestamp, 1h), client_CountryOrRegion
| render timechart

[Chart: usage timechart showing only the top 5 countries]

A beautiful view of just my top 5 countries…

I’ve actually used the same technique for a host of different dimensions (top countries, top pages, top errors, etc.), and it can also be useful for filtering OUT top values (such as top users skewing the numbers) by changing the join to an anti-join, as in the sketch below.
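
Here’s a rough sketch of that filter-out variant, reusing the same countries dimension so the example stays self-contained – kind=leftanti keeps only the events that are *not* in the top-5 list:

let top_countries = view()
{
  customEvents
  | where timestamp > ago(3d)
  | top-nested 5 of client_CountryOrRegion by count()
};
customEvents
| where timestamp > ago(3d)
// leftanti keeps only rows with no match in the top-5 list
| join kind=leftanti (top_countries) on client_CountryOrRegion
| summarize count() by bin(timestamp, 1h), client_CountryOrRegion
| render timechart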

The second neat scenario is calculating percentages of a whole. For instance – how do you calculate the percentage of daily traffic per result code?

Yeah, you can do this using a summarize and the (newly-added) areachart stacked100 chart kind:

requests
| where timestamp >= ago(3d)
| where isnotempty(resultCode)
| summarize count() by bin(timestamp, 1h), resultCode
| render areachart kind=stacked100

[Chart: result codes over time as a stacked 100% area chart]

But this only partially solves my problem.

Because ideally, I don’t want to look at all these 200’s crowding my chart. I would like to look at only the 40X’s and 500’s, but still as a percentage of ALL my traffic.

I could do this by adding a bunch of countif(rc=403)/count(), countif(rc=404)/count()… ad nauseam (sketched below), but this is tiresome, plus you don’t always know all the possible values when writing the query.
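
For reference, the tiresome approach would look roughly like this – just a sketch, with a few hard-coded result codes:

requests
| where timestamp > ago(14d)
// every interesting result code needs its own countif - and you have to know them all up front
| summarize pct_403 = countif(resultCode == "403") * 100.0 / count(),
            pct_404 = countif(resultCode == "404") * 100.0 / count(),
            pct_500 = countif(resultCode == "500") * 100.0 / count()
            by bin(timestamp, 1d)
| render timechart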

Here’s where top-nested comes in. Because it outputs the aggregated value at each level, creating the percentages becomes super-easy. The trick is simply making the first top-nested level the timestamp bin:

requests
| where timestamp > ago(14d)
| top-nested 14 of bin(timestamp, 1d) by count() ,
  top-nested 20 of resultCode by count()
| where resultCode !startswith("20")
| where resultCode !startswith("30")
| project pct=aggregated_resultCode * 1.0 / aggregated_timestamp, 
          timestamp, resultCode 
| render timechart

[Chart: failing result codes as a percentage of all daily traffic]

Pretty nice, no?

App Analytics: Using “Let”, and a really useful investigation query

So here’s just a small tidbit that can be useful.

First the “let” keyword – it basically allows you to bind a name to an expression or to a scalar. This of course is really useful if you plan to re-use the expression.

I’ll give an example that I use in real life – a basic investigative query into failed requests. I’m joining exceptions and failed dependencies (similar to NRT proactive detection), and I’m using the let keyword to easily modify the time range of the query.

Here it is, enjoy!


let investigationStartTime = datetime("2016-09-07");
let investigationEndTime = investigationStartTime + 1d;
requests
| where timestamp > investigationStartTime
| where timestamp < investigationEndTime
| where success == "False"
| join kind=leftouter(exceptions
   | where timestamp > investigationStartTime
   | where timestamp < investigationEndTime
   | project exception=type , operation_Id ) on operation_Id
| join kind=leftouter (dependencies
   | where timestamp > investigationStartTime
   | where timestamp < investigationEndTime
   | where success == "False"
   | project failed_dependency=name, operation_Id ) on operation_Id
| project timestamp, operation_Id , resultCode, exception, failed_dependency

App Analytics Machine Learning: Autocluster

Don’t freak out about the title. I’m going to show some powerful machine-learning algorithms working behind the scenes – but they are also super-duper easy to use, and easy to understand from the analytics query results.

I’ll start with Autocluster(). What this operator does is take all your data and classify it into clusters – we’re basically bunching your data into groups. This is very useful in a few scenarios:

  1. Classify request failures – easily see if all failures have a certain response code, are on a certain role instance, a certain operation, or from a specific country etc.
  2. Classify exceptions.
  3. Classify failed dependencies.

This is actually what the Near Real-Time Proactive Alerts feature uses to classify the characteristics of a request failure spike.

Let’s get to an example.

I just deployed my service, and checking the portal I see a huge spike in failed requests:

[Screenshot: portal chart showing a spike in failed requests]


So I know something went terribly wrong, I just don’t know what.

Now, ordinarily what I would do in a situation like this is take a random failed request and try to trace the reason it specifically failed. But this can be misleading – several times I happened to pick a failed request that was completely unrepresentative of the real problem.

So this is where Autocluster() kicks in.

requests
| where success == "False"
| where timestamp > datetime("2016-06-09 14:00")
| where timestamp < datetime("2016-06-09 18:00")
| join (exceptions | project type, operation_Id ) on operation_Id
| project name , cloud_RoleInstance , type
| evaluate autocluster(0.85)

This is basically a query of all the failed requests in the specific timeframe, joined to exceptions. On top of this query I’m running the “evaluate autocluster()” command.

The result I’m expecting is that all these records get bunched into several groups, which will help me diagnose the common characteristics of my failures.

The results look like this:

[Screenshot: autocluster results table]

!!!

So the autocluster algorithm went over all the data, and found that

  • 71% of the requests failed due to 1 specific exception.
  • The exception is found on all of my instances – see the “*” in the instance column.

Autocluster just diagnosed the problem in my service, going over thousands of records, in an instant! It’s easy to see why I think this is awesome.

FYI, Autocluster can take any column as input, even custom dimensions – something like the sketch below. Ping me in the comments if you have any questions about the usage.
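
Here’s a minimal sketch of feeding custom dimensions into autocluster – the dimension names (“Region”, “ApiVersion”) are hypothetical, so substitute your own keys:

requests
| where timestamp > ago(1d)
| where success == "False"
// extract the (hypothetical) custom dimensions into plain columns first
| extend region = tostring(customDimensions["Region"]),
         apiVersion = tostring(customDimensions["ApiVersion"])
| project name, resultCode, region, apiVersion
| evaluate autocluster(0.85)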



Cool Azure Log Analytics: Joining requests and dependencies

Another cool thing you can do with App Insights Analytics is join different data types to get a good understanding of what’s happening in your app.

A great example is remote dependencies – an out-of-the-box feature in App Insights that logs all remote dependency calls such as SQL, Azure, HTTP, etc. If you’ve got that data flowing, you can get amazing insights with just a few small queries.

Here’s a small example – let’s try to find out which resources are the real time-hogs in my service. The query I came up with: per HTTP request, get the average duration spent calling each dependency type.

requests
| where timestamp > ago(1d)
| project timestamp, operation_Id
| join (dependencies
        | where timestamp > ago(1d)
        | summarize sum(duration) by operation_Id, type 
        ) on operation_Id
| summarize avg_duration_by_type=avg(sum_duration) by type, bin(timestamp, 20m)
| render barchart

[Chart: average dependency duration by type, per 20-minute bin]

Cool AppInsights Analytics: Counting sampled data

If you’re doing the stuff you’re supposed to be doing in Analytics – like slicing and dicing requests, counting page views, etc. – then you should probably make sure you’re counting correctly.

Two big pitfalls here are:

  1. If you’re sampling your data with the App Insights 2.0 SDK, then you should obviously reflect that when counting.
  2. If you’ve got a bunch of tests set up, then you probably don’t want to count those as page views.

For #1, you need to make sure you are always summing items – do sum(itemCount) instead of a simple count().

For #2, remember to add a where clause on the synthetic source field.

Here’s an example:

requests
| where timestamp > ago(1d)
| where operation_SyntheticSource == ""
| summarize sum(itemCount) by performanceBucket

Cool AppInsights Analytics: Charting request failure rate

Here is a really cool App Analytics query over App Insights that shows the request failure ratio of your app over the last week.

I use “extend” with “iff” to create a success-indicator field I can sum, and then use “extend” again to compute the failure ratio.

requests
| where timestamp > ago(7d)
| extend isSuccess=iff(success=="True", 1, 0)
| summarize failures=sum(1-isSuccess), successes=sum(isSuccess)
            by bin(timestamp, 20m)
| extend ratio=todouble(failures) / todouble(failures+successes)
| project timestamp, failure_Percent=ratio*100
| render timechart

[Chart: request failure percentage over the last week]


Cool AppInsights Analytics: Charting common exceptions causing failed requests

Here’s a really simple but powerful query charting the most common exceptions causing requests to fail.

We do this by first getting all the failed requests, and joining them to exceptions on operation_Id.

Then we just chart it using a timechart.

requests
| where timestamp > ago(3d)
| where success == "False"
| project timestamp, duration, id, operation_Id
| join (exceptions
   | where timestamp > ago(3d)
   | project type, method, operation_Id) on operation_Id
| summarize count() by type, bin(timestamp, 1h)
| render timechart

[Chart: count of exceptions behind failed requests, by exception type]