How to Split Your Data Into a Fixed Number of Buckets

Okay, another question from Twitter (original content will have to wait till I get some more free time!)

Here’s the challenge:

So what we need to do here is somehow infer the time-range of the query, and then create a fixed set of time bins according to that range.

I think the only way to do that is by performing 2 queries – one to get the time range and convert it into a fixed interval, and a second with the actual logic.

To convert the result of the first query into a ‘variable’ we can use in the second query, I’ll use the ‘toscalar’ function.

Here we go:

let numberOfBuckets = 24;
let interval = toscalar(requests
    | summarize interval = (max(timestamp) - min(timestamp)) / numberOfBuckets
    | project floor(interval, 1m));
requests
| summarize count() by bin(timestamp, interval)
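Note that even though this looks like two queries, it runs as a single query made of several statements – toscalar evaluates its inner query once, and the result is then used as a constant in the main query.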

I use ‘floor’ here just to round the interval and make the results a bit more readable.
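To see what ‘floor’ does to a timespan, here’s a standalone snippet you can run on its own:

print floor(1h + 23m + 45s, 1m)

This returns 01:23:00 – the value rounded down to whole minutes.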

Back-fill Missing Dates With Zeros in a Time Chart

A common ask I’ve heard from several users is the ability to fill gaps in your data in Kusto/App Analytics/DataExplorer (lots of names these days!):

If your data has gaps in time, the default behavior of App Analytics is to “connect the dots” rather than reflect that there was no data at those times. In lots of cases we’d like to fill these missing dates with zeros.

The way to handle this is to use the “make-series” operator. This operator exists to enable advanced time-series analysis on your data, but we’ll just use it for the simple use-case of adding missing dates with a “0” value.

One added bit of sophistication is converting the series back into the shape of a *regular* summarize result using “mvexpand”, so we can continue to transform the data as usual.

Here’s the query (thanks Tom for helping refine it!):

let start=floor(ago(3d), 1d);
let end=floor(now(), 1d);
let interval=5m;
requests
| where timestamp > start
| make-series counter=count() default=0 
              on timestamp in range(start, end, interval)
| mvexpand timestamp, counter
| project todatetime(timestamp), toint(counter)
| render timechart
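One thing worth noting: mvexpand outputs dynamic-typed columns, which is why the query casts them back with todatetime and toint before rendering.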


Search in App Analytics

The questions I get most often about Analytics aren’t about super-complicated queries or magic ML functions.

It’s usually just about how to find specific logs in an investigation.

App Insights Analytics has a really simple way to do it – search. This will search for a keyword across all your tables, across all columns.

search "Error"

If you look at the results, the first column is called $table – it holds the name of the table each result came from.
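Since $table behaves like a regular column, you can group by it to see where the hits are coming from:

search "Error"
| summarize count() by $table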

You can also pipe search into a summarize, or any other operator you need – for example, to chart hits over time:

search "Error" 
| summarize count() by bin(timestamp, 1h)
| render timechart  

It’s also possible to search in specific tables:

search in (customEvents, dependencies) "fail"

Happy searching!

Calculating Stickiness Using AppInsights Analytics

Update:

There is a new, simpler, better way to calculate usage metrics such as stickiness, churn and return rate.


In previous posts I demonstrated some simple yet nifty tricks to get stuff done in App Insights Analytics – like extracting data from traces, or joining tables.

Those were mostly pretty simple queries, showing some basic Kusto techniques.

In this post I’m going to show something much more complex, with some advanced concepts.

We’re gonna take it slow, but be warned!

What I wanna do is calculate “Stickiness“. This is a measure of user engagement, or addiction to your app. It’s computed by dividing DAU (daily active users) by MAU (monthly active users, computed over a rolling 28-day window). It basically shows what percentage of your total user base is using your app daily.
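For example, if 1,000 distinct users are active on a given day, and 10,000 distinct users were active over the 28 days around it, stickiness for that day is 1,000 / 10,000 = 0.1 – 10% of your monthly user base shows up daily.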

Computing your DAU is pretty simple in Analytics, and can be done using a dcount aggregation:

requests
| where timestamp > ago(60d)
| summarize dcount(user_Id) by bin(timestamp, 1d)

But how do you compute a rolling 28-day window unique count of users? For this we’re gonna need to get familiar with some new Kusto operators:

hll() – HyperLogLog – calculates the intermediate state of a dcount (there’s a small sketch of how these functions fit together right after this list).

hll_merge() – used to merge together several hll intermediate results.

dcount_hll() – used to calculate the final dcount from an hll intermediate result.

range() – generates a dynamic array with equal spacing

mvexpand() – expands a list into rows

let – binds names to expressions. I’ve already shown a use for let in a past post.
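Here’s a minimal sketch of how hll and dcount_hll fit together – computing an hll state and immediately resolving it, which should give the same answer as a plain dcount over the same data:

requests
| where timestamp > ago(1d)
| summarize hllUsers = hll(user_Id)
| project dcount_hll(hllUsers)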

It’s kind of a lot, but let’s get going and see how we’re gonna use each of these along the way.

Let’s do this in steps. Our goal is to calculate a moving 28-day-window MAU. First, instead of dcount we’ll use hll to get the intermediate results:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)

With the intermediate results in place, the next phase is to think about which dates will use each intermediate result. If we take 20/1/2017 as an example, we know that each subsequent day, 28 days forward, will want to use this hll for its moving-window result. So we build a list of [21/1/2017, 22/1/2017 … 18/2/2017].

So what we do here, and this is a little dirty, is create a list of all the future dates that will need this result. We do this using the range function:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)

Now let’s turn every item in the periodKey list into a row in the table. We’ll do this with mvexpand:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)
| mvexpand periodKey

So now, when grouping by periodKey, each date in that column has a row for every daily hll that falls inside its 28-day window. We’re almost done! Let’s calculate the dcount:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)
| mvexpand periodKey
| summarize rollingUsers = dcount_hll(hll_merge(hll_user_Id)) by todatetime(periodKey)

That’s the 28 day rolling MAU right there!

Now let’s make this entire query modular, so we can calculate a rolling dcount over any window length we’d like – including a zero-day window (which is actually just DAU) – and calculate our metric:

let start = ago(60d);
let period = 1d;
let RollingDcount = (rolling:timespan)
{
    requests
    | where timestamp > start
    | summarize hll(user_Id) by bin(timestamp, period)
    | extend periodKey = range(bin(timestamp, period), timestamp + rolling, period)
    | mvexpand periodKey
    | summarize rollingUsers = dcount_hll(hll_merge(hll_user_Id)) by todatetime(periodKey)
};
RollingDcount(28d)
| join RollingDcount(0d) on periodKey
| where periodKey < now() and periodKey > start + 28d
| project Stickiness = rollingUsers1 * 1.0 / rollingUsers, periodKey
| render timechart
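A quick note on the column names: when join sees the same column on both sides, it suffixes the right-hand one with 1 – so rollingUsers is the 28-day count (the MAU) and rollingUsers1 is the zero-day count (the DAU).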

STICKINESS ON THE FLY!


App Analytics: Using “Let”, and a really useful investigation query

So here’s just a small tidbit that can be useful.

First the “let” keyword – it basically allows you to bind a name to an expression or to a scalar. This of course is really useful if you plan to re-use the expression.
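Just to make the two flavors concrete, here’s a tiny sketch – one let bound to a scalar and one bound to a tabular expression (the names errorThreshold and recentRequests are made up for illustration):

let errorThreshold = 400;
let recentRequests = requests | where timestamp > ago(1h);
recentRequests
| where toint(resultCode) >= errorThreshold
| summarize count() by resultCode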

I’ll give an example that I use in real life – a basic investigative query into failed requests. I’m joining them with exceptions and failed dependencies (similar to NRT proactive detection). I’m using the let keyword to easily modify the time range of my query.

Here it is, enjoy!


let investigationStartTime = datetime("2016-09-07");
let investigationEndTime = investigationStartTime + 1d;
requests
| where timestamp > investigationStartTime
| where timestamp < investigationEndTime
| where success == "False"
| join kind=leftouter (exceptions
    | where timestamp > investigationStartTime
    | where timestamp < investigationEndTime
    | project exception=type, operation_Id) on operation_Id
| join kind=leftouter (dependencies
    | where timestamp > investigationStartTime
    | where timestamp < investigationEndTime
    | where success == "False"
    | project failed_dependency=name, operation_Id) on operation_Id
| project timestamp, operation_Id, resultCode, exception, failed_dependency
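The leftouter joins matter here: they keep every failed request in the results even when it has no matching exception or failed dependency – those rows simply show empty values in the joined columns.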

Cool AppInsights Analytics: Extracting url host with a regular expression

Another nice feature of Kusto / Application Insights Analytics is full-on support for regular expressions via the extract function.

A very useful application of this is all manner of manipulations you can do over the “url” field in requests. A common ask is understanding how much traffic is generated by each of your different hosts.

Since Analytics only carries the full url field, we need to parse the host out using a regex. I used a really simple regex in this case, but obviously it can be much more complex.

pageViews
| where timestamp > ago(1d)
| extend urlhost=extract('^(http://|https://)([^:\\/\\s]+)', 2, url)
| summarize count() by urlhost
| render piechart

Update:

There is now a simpler method to extract all url parts – parseurl.

requests
| take 5
| extend urlParts = parseurl(url)
| project url, urlParts, urlParts.Scheme, urlParts.Host, urlParts.Path, urlParts.Port
