App Analytics: Using “Let”, and a really useful investigation query

So here’s just a small tidbit that can be useful.

First, the “let” keyword – it allows you to bind a name to an expression or to a scalar, which of course is really useful if you plan to re-use that expression.
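To illustrate, here’s a minimal sketch of both uses (the names lookback and failedRequests are mine, just for illustration):

let lookback = ago(7d);                 // bind a name to a scalar
let failedRequests = requests           // bind a name to a tabular expression
   | where timestamp > lookback
   | where success == "False";
failedRequests
| summarize count() by resultCode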

I’ll give an example I use in real life – a basic investigative query into failed requests. It joins exceptions and failed dependencies (similar to NRT proactive detection), and uses the let keyword so the time range of the query is easy to modify.

Here it is, enjoy!


let investigationStartTime = datetime("2016-09-07");
let investigationEndTime = investigationStartTime + 1d;
requests
| where timestamp > investigationStartTime
| where timestamp < investigationEndTime
| where success == "False"
// pull in any exceptions thrown on the same operation
| join kind=leftouter (exceptions
   | where timestamp > investigationStartTime
   | where timestamp < investigationEndTime
   | project exception=type, operation_Id) on operation_Id
// and any failed dependency calls on the same operation
| join kind=leftouter (dependencies
   | where timestamp > investigationStartTime
   | where timestamp < investigationEndTime
   | where success == "False"
   | project failed_dependency=name, operation_Id) on operation_Id
| project timestamp, operation_Id, resultCode, exception, failed_dependency

App Insights Analytics: Extracting data from traces

I want to show two real-world examples (they really happened to me!) of extracting data from traces, and then using that data to get really great insights.

So a little context here – I have a service that reads and processes messages from an Azure Queue. This message processing can fail, causing the same message to be retried many times.

We recently introduced a bug into the service (as usual…) which caused requests to fail on a null reference exception. I wanted to know exactly how many messages were affected by this bug, but it was kind of hard to tell, because the retries throw a lot of my service metrics off.

Luckily, I emit a trace just as I begin processing a message, and it shows the message id:

Start handling message id: 0828ae20-ba09-4f83-bb46-69f4fe25b510, dequeue count: 1, message: …

So what I did was extract the message id from the trace using a simple regex, and I was then able to count distinct messages using dcount:

traces
| where timestamp > ago(1d)
| where message startswith "Start handling"
| extend messageid = tostring(extract("Start handling message id: ([^:\\/\\s]+), ", 1, message))
| summarize dcount(messageid)
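As a side note, comparing the raw count to the distinct count is a quick way to see how badly the retries are inflating your numbers – a sketch along the same lines:

traces
| where timestamp > ago(1d)
| where message startswith "Start handling"
| extend messageid = tostring(extract("Start handling message id: ([^:\\/\\s]+), ", 1, message))
| summarize attempts = count(), distinct_messages = dcount(messageid)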

And in order to count how many messages were affected by the exception, I did a double join – failed requests to the exceptions table, and then to the traces table:

requests
| where timestamp > ago(1d)
| where success == "False"
// keep only failed requests that hit the NullRef exception
| join (exceptions
   | where timestamp > ago(1d)
   | where type contains "NullRef"
   ) on operation_Id
// attach the message id extracted from the "Start handling" trace
| join (traces
   | where timestamp > ago(1d)
   | where message startswith "Start handling"
   | extend messageid = tostring(extract("Start handling message id: ([^:\\/\\s]+), ", 1, message))
   ) on operation_Id
| summarize dcount(messageid)

Voila!

The second example is similar, but this time I extracted a measurement.

Again I started from a trace – this time one that details exactly how late a message arriving in the queue is. It looks like this:

Latency: 21 minutes.

I wanted to turn these traces into measurable data that I can slice and dice. So I used the same extend + extract method as before, plus a todouble:

traces
| where timestamp > ago(1d)
| where message contains "Latency: "
| extend latency = todouble(extract("Latency: ([^:\\/\\s]+) minutes.", 1, message))
| summarize percentile(latency, 90)
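And once the latency is a number, you can slice it any way you like – for example, a sketch that trends the 90th percentile over time:

traces
| where timestamp > ago(1d)
| where message contains "Latency: "
| extend latency = todouble(extract("Latency: ([^:\\/\\s]+) minutes.", 1, message))
| summarize percentile(latency, 90) by bin(timestamp, 1h)
| render timechart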

AWESOME!

App Insights: Correlating your telemetry

So you’re on-boarded to App Insights.

But to really get the most out of it, you need to make sure that all the telemetry you’re sending – requests, views, events, exceptions – is correlated, meaning you can easily see everything that happened in a single operation in your service.

For web applications, events in App Insights are correlated out-of-the-box using operation id, meaning that everything your app does in a single operation can easily be viewed in one place. This is why on a lot of blades in App Insights you’ll see this button:

[operation button]

Operation correlates all telemetry on a server request.

If you’d like to correlate everything that happens in a user session, you can also correlate using session id.
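You can put this correlation to work yourself in Analytics too – a quick sketch (the operation id is a placeholder; swap in session_Id to pull everything from one session instead):

union requests, dependencies, exceptions, traces
| where timestamp > ago(1d)
| where operation_Id == "<operation id>"   // or: session_Id == "<session id>"
| order by timestamp asc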

Unfortunately, this does not come ready to use for worker roles.

If you want all of your telemetry correlated on a worker role, you should implement a telemetry initializer.

An initializer will run ANY TIME any piece of telemetry is sent to App Insights. In it, you can update the context of that telemetry – including the operation id.
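Once your initializer is deployed, here’s one way to sanity-check it from Analytics (just a sketch – it looks for worker-role telemetry that arrived without an operation id):

union traces, customEvents, exceptions
| where timestamp > ago(1h)
| where isempty(operation_Id)   // anything here was not stamped by the initializer
| count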