Azure Observability on Tap - End-to-End Azure Monitoring - Part 1

Setting up observability using Azure Monitor & Log Analytics. Before you can observe anything in the cloud, you’ve got to teach it how to talk. Let’s wire up Azure Monitor and its Log Analytics engine to capture the signal, not the noise.

Why This Matters

Observability isn’t just turning on metrics and logs; it’s about setting up a system that can answer questions you haven’t asked yet. Azure Monitor gives you that engine, but Log Analytics is the brain that makes it actionable.

In this post, we’re setting up the foundation for the entire “Azure Observability on Tap” series:

  • Enable Azure Monitor and connect it with Log Analytics
  • Define a workspace structure that scales
  • Wire up data sources: VMs, PaaS, AKS, and more
  • Keep costs predictable while capturing enough signal

What is Azure Monitor + Log Analytics?

Azure Monitor is the native observability platform baked into Azure. It collects and organises:

  • Metrics (numerical, structured)
  • Logs (structured/semi-structured, often from agents or control planes)
  • Traces, alerts, dashboards, and more

Log Analytics is the data engine underneath the “Logs” portion—it’s where you query, correlate, and visualise log data using Kusto Query Language (KQL). It all lives inside a Log Analytics workspace.

Think of Azure Monitor as your barista, and Log Analytics as the espresso machine. Everything routes through that workspace if you want detail, precision, and flavour.

Key Concepts

Concept                    Summary
Log Analytics Workspace    Central hub for ingesting and querying logs
Data Sources               VMs, containers, apps, PaaS services, diagnostic settings
Diagnostic Settings        Route platform logs/metrics to Log Analytics, Storage, or Event Hub
Pricing Tier               Pay-as-you-go (GB/month ingested), with cost management via sampling/filtering

Azure Portal Walkthrough

Let’s set up Azure Monitor + Log Analytics step by step:

Step 1: Create a Log Analytics Workspace

  1. Go to the Azure Portal → search for Log Analytics workspaces

  2. Click + Create

  3. Choose:

    • Subscription
    • Resource group
    • Workspace name
    • Region (ideally same as your resources)
  4. Click Review + Create → then Create

Step 2: Connect Data Sources

Connect a VM

  1. Navigate to the VM → Monitoring → Insights
  2. Click Enable → Select your Workspace
  3. Azure installs the Azure Monitor Agent automatically (older setups used the legacy Log Analytics agent)

Connect a PaaS Resource (e.g., App Service)

  1. Go to the App Service → Monitoring → Diagnostic settings
  2. Click + Add diagnostic setting
  3. Choose logs & metrics to push
  4. Route them to the Log Analytics workspace you created
ℹ️
Repeat this for resources like Azure Firewall, Application Gateway, Key Vault, etc. (a Bicep version of this step follows below).
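If you’d rather codify this step, here’s a minimal Bicep sketch for an App Service. The parameter names are placeholders, and the log categories shown are illustrative; check the Diagnostic settings blade on your resource for the categories it actually exposes.

@description('Name of the existing App Service (placeholder)')
param appServiceName string = 'myAppService'

@description('Resource ID of the Log Analytics workspace')
param workspaceId string

resource appService 'Microsoft.Web/sites@2024-04-01' existing = {
  name: appServiceName
}

// Route selected App Service logs and all platform metrics to Log Analytics
resource appDiagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${appServiceName}-diagnosticSettings'
  scope: appService
  properties: {
    workspaceId: workspaceId
    logs: [
      {
        category: 'AppServiceHTTPLogs' // often the noisiest; disable to cut ingestion
        enabled: true
      }
      {
        category: 'AppServiceConsoleLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}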

Step 3: Configure Retention Settings (Optional)

  1. Log Analytics Workspace → Usage and estimated costs
  2. Adjust data retention (default is 30 days, up to 730 days)
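Workspace-level retention applies to every table. If only a few tables need longer retention, you can override it per table; here’s a minimal Bicep sketch, assuming an existing workspace and using the built-in AzureActivity table purely as an example (totalRetentionInDays covers interactive plus long-term retention):

@description('Name of the existing Log Analytics workspace')
param workspaceName string = 'MyLogAnalyticsWorkspace'

resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2025-02-01' existing = {
  name: workspaceName
}

// Override retention for one table instead of the whole workspace
resource activityTable 'Microsoft.OperationalInsights/workspaces/tables@2022-10-01' = {
  parent: logAnalytics
  name: 'AzureActivity'
  properties: {
    retentionInDays: 90        // interactive (queryable) retention for this table
    totalRetentionInDays: 365  // interactive + long-term retention
  }
}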

Bicep: Infrastructure as Code

Here’s a basic Bicep snippet to provision the Log Analytics workspace:

@description('Name of the Log Analytics workspace')
param workspaceName string = 'MyLogAnalyticsWorkspace'

resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2025-02-01' = {
  name: workspaceName
  location: resourceGroup().location
  properties: {
    retentionInDays: 60 // interactive retention; 30-730 days on this tier
    sku: {
      name: 'PerGB2018' // pay-as-you-go pricing tier
    }
  }
}

To go further, automate diagnostic settings using Microsoft.Insights/diagnosticSettings, as in the snippet below, or onboard VMs with a VM extension (see the sketch after the snippet).

@description('Name of the Log Analytics workspace')
param logAnalyticsWorkspaceName string = 'MyLogAnalyticsWorkspace'

@description('Resource group of the Log Analytics workspace')
param logAnalyticsResourceGroup string = 'MyLogAnalyticsRG'

@description('Name of the virtual machine to monitor')
param vmName string = 'myVM'

// Reference the existing Log Analytics workspace (it may live in another resource group)
resource logAnalytics 'Microsoft.OperationalInsights/workspaces@2025-02-01' existing = {
  name: logAnalyticsWorkspaceName
  scope: resourceGroup(logAnalyticsResourceGroup)
}

// Reference the existing VM in the current resource group
resource vm 'Microsoft.Compute/virtualMachines@2024-11-01' existing = {
  name: vmName
}

// Diagnostic settings on a VM expose platform metrics only; guest OS logs
// need an agent instead (see the Azure Monitor Agent sketch below)
resource vmDiagnosticSettings 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: '${vmName}-diagnosticSettings'
  scope: vm
  properties: {
    workspaceId: logAnalytics.id
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}
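
And for the VM-extension route mentioned above, here’s a minimal sketch that installs the Azure Monitor Agent and associates the VM with a data collection rule. It assumes a Linux VM, and dcrId is a placeholder for an existing data collection rule that targets your workspace:

@description('Name of the virtual machine to onboard')
param vmName string = 'myVM'

@description('Resource ID of an existing data collection rule (placeholder)')
param dcrId string

resource vm 'Microsoft.Compute/virtualMachines@2024-11-01' existing = {
  name: vmName
}

// Install the Azure Monitor Agent (use AzureMonitorWindowsAgent for Windows VMs)
resource amaExtension 'Microsoft.Compute/virtualMachines/extensions@2024-11-01' = {
  parent: vm
  name: 'AzureMonitorLinuxAgent'
  location: resourceGroup().location
  properties: {
    publisher: 'Microsoft.Azure.Monitor'
    type: 'AzureMonitorLinuxAgent'
    typeHandlerVersion: '1.0'
    autoUpgradeMinorVersion: true
    enableAutomaticUpgrade: true
  }
}

// Associate the VM with the data collection rule that routes data to the workspace
resource dcrAssociation 'Microsoft.Insights/dataCollectionRuleAssociations@2022-06-01' = {
  name: '${vmName}-dcr-association'
  scope: vm
  properties: {
    dataCollectionRuleId: dcrId
  }
}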

Architecture Overview

Here’s a brief diagram showing which kinds of resources feed a Log Analytics workspace and how they connect:

graph TD
  VM1[Azure VM] --> LA[Log Analytics Workspace]
  AppService[App Service] --> DS1[Diagnostic Settings] --> LA
  AKS[AKS Cluster] --> DS2[Diagnostic Settings] --> LA
  AzureFunction[Function App] --> DS3[Diagnostic Settings] --> LA
  LogAnalyticsAgent[Log Analytics Agent] --> LA
  LA --> AzureMonitor[Azure Monitor Insights]
  AzureMonitor --> Dashboards
  AzureMonitor --> Alerts

Gotchas & Edge Cases

  • Agent sunset: The legacy Log Analytics agent (MMA) has been retired in favour of the Azure Monitor Agent (AMA). If anything still ships data through MMA, migrate it now.
  • Data volume: Be careful with PaaS logs (especially App Service HTTP logs). Disable noisy categories in Diagnostic Settings, or sample at the application side, to keep ingestion down.
  • Latency: Logs may take a few minutes to appear. Not real-time—always validate latency in design.
  • Cross-region costs: Sending logs from one region to a workspace in another? Expect egress costs.

Best Practices

  • Centralise to fewer workspaces where RBAC and data sovereignty allow
  • Apply naming standards (<env>-observability-la)
  • Align workspace region with workload
  • Set retention to match compliance & usage (often 30–90 days unless required longer)
  • Pull in diagnostics and platform metrics, especially for network resources
  • Use Resource Diagnostic Settings over legacy agents when possible
🍺
Brewed Insight:

Log Analytics isn’t just for performance dashboards; it’s your front line for security signals too. I usually deploy a dedicated management workspace to capture tenant-level logs like Entra ID Sign-ins, Risky Users, and Azure Activity Logs.

Yes, the “fewer workspaces” rule can get messy in the wild, but it’s still worth the effort. Nothing drains your day faster than chasing a log across five workspaces just to track down a single issue.

Learn More