Back-fill Missing Dates With Zeros in a Time Chart

A common ask I’ve heard from several users is the ability to fill gaps in your data in Kusto/App Analytics/Data Explorer (lots of names these days!):

If your data has gaps in time, the default behavior of App Analytics is to “connect the dots” rather than reflect that there was no data at those times. In many cases we’d like to fill those missing dates with zeros.

The way to handle this is the “make-series” operator. It exists to enable advanced time-series analysis on your data, but we’ll use it just for the simple case of adding missing dates with a “0” value.

As added sophistication, we’ll convert the series back into a *regular* summarize-style result using “mvexpand”, so we can continue to transform the data as usual.

Here’s the query (thanks Tom for helping refine it!):

// align the time range to whole days so the series bins line up
let start=floor(ago(3d), 1d);
let end=floor(now(), 1d);
let interval=5m;
requests
| where timestamp > start
// make-series fills every missing bucket with the default value (0)
| make-series counter=count() default=0
              on timestamp in range(start, end, interval)
// expand the series arrays back into one row per time bucket
| mvexpand timestamp, counter
| project timestamp=todatetime(timestamp), counter=toint(counter)
| render timechart
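
If you need the same gap-filling split by a dimension, make-series also takes a “by” clause. Here’s a hedged sketch, assuming resultCode (a standard column on requests) as the split dimension; swap in whatever column you chart by:

let start=floor(ago(3d), 1d);
let end=floor(now(), 1d);
let interval=1h;
requests
| where timestamp > start
// one zero-filled series per result code
| make-series counter=count() default=0
              on timestamp in range(start, end, interval) by resultCode
| mvexpand timestamp, counter
| project timestamp=todatetime(timestamp), counter=toint(counter), resultCode
| render timechart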


Creating Beautiful Sankey Diagrams From App Insights Custom Events

I came across this tweet the other day:

It sounded a lot like a challenge to me, so I just couldn’t resist!!

Sankey diagrams, for those who don’t know, are an amazing tool for describing user flows AND are the basis for one of the most famous data visualizations of all time. But really, I had no idea how to create one. So I googled Sankey + Power BI and came across this fantastic Amir Netz video:

So all you need to create the diagram is a table with 3 columns:

  • Source
  • Destination
  • Count

And Power BI takes care of the rest.
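
To make the target shape concrete, here’s a tiny hypothetical datatable in the form Power BI expects (the column names source, dest and count_ match what the final query below produces; the numbers are made up):

datatable(source:string, dest:string, count_:long)
[
    "CTD_Available", "CTD_Result",  120,
    "CTD_Result",    "CTD_DrillIn",  45
]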

We already know you can take any App Insights Analytics query and “export” the data to Power BI.

So the only problem now is: how do I transform my App Insights custom events table into a Sankey events table?

Let’s go to Analytics!

First it’s best to have an idea of what we’re trying to do. Decide on 5-10 events that make sense for a user session flow. In my case, I want to see the flow of users through a new feature called “CTD”. So the events I chose are:

  • CTD_Available (feature is available for use)
  • CTD_Result (user used the feature and got a result)
  • CTD_DrillIn (user chose to drill further into the results)
  • CTD_Feedback (user chose to give feedback)

In every step, I’m interested in seeing how many users I’m “losing”, and what they’re doing next.

Ok, let’s get to work!

In the first query, we’ll:

  • Filter out only relevant events
  • Sort by timestamp asc (don’t forget that in this chart, order is important!)
  • Summarize by session_Id using makelist, to put all the events that happened in each session into an ordered list. If you’re unfamiliar with makelist, all it does is take all the values of the column and stuff them into a list (see the small standalone example after the query). The resulting lists are the ordered events that users triggered in each session.

customEvents
| where timestamp > ago(7d)
| where name=="CTD_Available" or name=="CTD_Result" or
        name=="CTD_DrillIn" or name=="CTD_Feedback"
| sort by timestamp asc
| summarize l=makelist(name) by session_Id
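
If makelist is new to you, here’s a tiny standalone illustration over hypothetical rows:

datatable(session_Id:string, name:string)
[
    "s1", "CTD_Available",
    "s1", "CTD_Result",
    "s2", "CTD_Available"
]
| summarize l=makelist(name) by session_Id
// s1 -> ["CTD_Available", "CTD_Result"], s2 -> ["CTD_Available"]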

The next step is to add an “EndSession” event to each list, just to make sure the final diagram is symmetric. You might already have such an event in your telemetry; I don’t. This step is optional, and you can choose to remove this line.

| extend l=todynamic(replace(@"\]", ',"EndSession" ]', tostring(l))) 
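
To see exactly what that replace does, here’s a quick standalone sanity check on a hypothetical list:

print l=dynamic(["CTD_Available", "CTD_Result"])
| extend l=todynamic(replace(@"\]", ',"EndSession" ]', tostring(l)))
// -> ["CTD_Available", "CTD_Result", "EndSession"]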

Next step, I’d like to create “tuples” for source and destination from each list. I want to turn:

Available -> Result -> Feedback -> EndSession

Into:

[Available, Result], [Result, Feedback], [Feedback, EndSession]

To do this, I need to chop off the first item in the list and “zip” it (like a zipper) with the original list. In C# this would be easy: list.Zip(list.Skip(1)).

Amazingly, App Analytics has a zip command! Tragically, it doesn’t have a skip… :( That means we need some more ugly regex work to chop off the first element.

| extend l_chopped=todynamic(replace(@"\[""(\w+)"",", @"[", tostring(l)))
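
Again, a quick standalone sanity check of the chop, on a hypothetical list:

print l=dynamic(["CTD_Available", "CTD_Result", "EndSession"])
| extend l_chopped=todynamic(replace(@"\[""(\w+)"",", @"[", tostring(l)))
// -> l_chopped = ["CTD_Result", "EndSession"]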

Then I zip, and use mvexpand to create one row per tuple:

| extend z=zip(l, l_chopped) 
| mvexpand z
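
Note that zip pads the shorter list with nulls, which is why a stray tuple appears at the end; a standalone hypothetical example:

print z=zip(dynamic(["CTD_Available", "CTD_Result", "EndSession"]),
            dynamic(["CTD_Result", "EndSession"]))
| mvexpand z
// -> ["CTD_Available","CTD_Result"], ["CTD_Result","EndSession"], ["EndSession",null]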

And I remove the lonely “EndSession” rows, which are useless artifacts:

| where tostring(z[0]) != "EndSession"

The last thing left to do is summarize to get the counts for each unique tuple, and remove tuples with identical source and destination.

The final query:

customEvents
| where timestamp > ago(7d)
| where name=="CTD_Available" or name=="CTD_Result" or
        name=="CTD_DrillIn" or name=="CTD_Feedback"
| sort by timestamp asc
| summarize l=makelist(name) by session_Id
| extend l=todynamic(replace(@"\]", ',"EndSession" ]', tostring(l)))
| extend l_chopped=todynamic(replace(@"\[""(\w+)"",", @"[", tostring(l)))
| extend z=zip(l, l_chopped)
| mvexpand z
| where tostring(z[0]) != "EndSession"
| summarize count() by source=tostring(z[0]), dest=tostring(z[1])
| where source != dest

We’re in business!

Now I want to get this data to Power BI…

  • Download Power BI Desktop.
  • Go to the Power BI visuals gallery, search “sankey”, and download the visual you want (my preference is “sankey with labels”).
  • Open Power BI and go to Get Data -> Blank Query.
  • In the Query Editor, go to View -> Advanced Editor.
  • Now go back to the prepared App Analytics query, and hit Export to Power BI.

[Screenshot: Export to Power BI]

  • Take the query from the downloaded text file and paste it into the query editor.
  • Press Done.

Now the data should be there! Just one more step!

  • Back in Power BI, import the visuals you previously downloaded.

[Screenshot: importing the downloaded Sankey visual]

  • Select Source -> source, Destination -> dest, Weight -> count_

WOOT!

[Screenshot: the finished Sankey diagram]

Calculating Stickiness Using AppInsights Analytics

Update:

There is a new, simpler, better way to calculate usage metrics such as stickiness, churn and return rate.


In previous posts I demonstrated some simple yet nifty tricks to get stuff done in App Insights Analytics, like extracting data from traces or joining tables.

Those were mostly pretty simple queries, showing some basic Kusto techniques.

In this post I’m going to show something much more complex, with some advanced concepts.

We’re gonna take it slow, but be warned!

What I wanna do is calculate “Stickiness”. This is a measure of user engagement, or addiction to your app. It’s computed by dividing DAU (daily active users) by MAU (monthly active users) over a rolling 28-day window. It basically shows what percentage of your total user base uses your app daily.

Computing your DAU is straightforward in Analytics, and can be done with a simple dcount aggregation:

requests
| where timestamp > ago(60d)
| summarize dcount(user_Id) by bin(timestamp, 1d)

But how do you compute a rolling 28-day unique count of users? For this we’re gonna need to get familiar with some new Kusto operators:

  • hll() – hyperloglog – calculates the intermediate result of a dcount.
  • hll_merge() – merges several hll intermediate results together.
  • dcount_hll() – calculates the final dcount from an hll intermediate result.
  • range() – generates a dynamic array with equal spacing.
  • mvexpand() – expands a list into rows.
  • let – binds names to expressions. I’ve already shown a use for let in a past post.
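
Before we put these to work, here’s a minimal sketch (over the same requests table) of how the hll pieces compose: merging hourly intermediate results and finalizing with dcount_hll should closely approximate a direct dcount over the same day:

requests
| where timestamp > ago(1d)
| summarize h=hll(user_Id) by bin(timestamp, 1h)
// merge the hourly intermediate results, then finalize the distinct count
| summarize users=dcount_hll(hll_merge(h))
// ≈ requests | where timestamp > ago(1d) | summarize dcount(user_Id)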

It’s kind of a lot, but let’s get going and see how we’re gonna use each of these along the way.

Let’s do this in steps. Our goal is to calculate MAU over a moving 28-day window. First, instead of dcount we’ll use hll to get the intermediate results:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)

With the intermediate results in place, the next step is to think about which dates will use each intermediate result. If we take 20/1/2017 as an example: that day, and each of the 28 days after it, will want to use this hll for its moving-window result. So we build the list [20/1/2017, 21/1/2017 … 17/2/2017].

So what we do here, and this is a little dirty, is create a list of all the dates that will need this result. We do this using the range operator:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)
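
To see what that range() call emits for a single day (hypothetical date):

print periodKey = range(datetime(2017-01-20), datetime(2017-01-20)+28d, 1d)
// -> a 29-element array of days: 20/1/2017 through 17/2/2017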

Now let’s turn every item in the periodKey list into a row in the table. We’ll do this with mvexpand:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)
| mvexpand periodKey

So now, when grouping by periodKey, each date in that column has a row for every day in its window (up to 29: the day itself plus the 28 before it), each carrying an hll it needs to calculate the total dcount. We’re almost done! Let’s calculate the dcount:

requests
| where timestamp > ago(60d)
| summarize hll(user_Id) by bin(timestamp, 1d)
| extend periodKey = range(bin(timestamp, 1d), timestamp+28d, 1d)
| mvexpand periodKey
| summarize rollingUsers = dcount_hll(hll_merge(hll_user_Id)) by todatetime(periodKey)

That’s the 28-day rolling MAU right there!

Now let’s make this entire query modular, so we can calculate a rolling dcount over any window we’d like, including a zero-day window (which is just DAU), and compute our metric:

let start=ago(60d);
let period=1d;
let RollingDcount = (rolling:timespan)
{
    requests
    | where timestamp > start
    | summarize hll(user_Id) by bin(timestamp, period)
    // every day pushes its hll forward to all the dates whose window includes it
    | extend periodKey = range(bin(timestamp, period), timestamp+rolling, period)
    | mvexpand periodKey
    | summarize rollingUsers = dcount_hll(hll_merge(hll_user_Id)) by todatetime(periodKey)
};
RollingDcount(28d)                      // MAU: 28-day rolling window
| join RollingDcount(0d) on periodKey   // DAU: zero-day window
| where periodKey < now() and periodKey > start + 28d   // trim the partial warm-up edges
| project Stickiness = rollingUsers1 * 1.0 / rollingUsers, periodKey
| render timechart
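
With the same let statements in place, the skeleton yields other engagement ratios too; for example, a weekly-over-monthly ratio is just a different pair of windows (a sketch reusing RollingDcount from above; WauOverMau is my name for it, not a standard metric):

RollingDcount(28d)
| join RollingDcount(6d) on periodKey
| where periodKey < now() and periodKey > start + 28d
// 6d means the day itself plus the 6 before it, i.e. a 7-day window,
// the same way 0d gives plain DAU
| project WauOverMau = rollingUsers1 * 1.0 / rollingUsers, periodKey
| render timechart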

STICKINESS ON THE FLY!

[Screenshot: the stickiness timechart]