Why This Matters
Observability isn’t just turning on metrics and logs; it’s about setting up a system that can answer questions you haven’t asked yet. Azure Monitor gives you that engine, but Log Analytics is the brain that makes it actionable.
In this post, we’re setting up the foundation for the entire “Azure Observability on Tap” series:
- Enable Azure Monitor and connect it with Log Analytics
- Define a workspace structure that scales
- Tear into data sources: VMs, PaaS, AKS, and more
- Keep costs predictable while capturing enough signal
What is Azure Monitor + Log Analytics?
Azure Monitor is the native observability platform baked into Azure. It collects and organises:
- Metrics (numerical, structured)
- Logs (structured/semi-structured, often from agents or control planes)
- Traces, alerts, dashboards, and more
Log Analytics is the data engine underneath the “Logs” portion—it’s where you query, correlate, and visualise log data using Kusto Query Language (KQL). It all lives inside a Log Analytics workspace.
Think of Azure Monitor as your barista, and Log Analytics as the espresso machine. Everything routes through that workspace if you want detail, precision, and flavour.
Key Concepts
| Concept | Summary |
|---|---|
| Log Analytics Workspace | Central hub for ingesting and querying logs |
| Data Sources | VMs, containers, apps, PaaS services, diagnostic settings |
| Diagnostic Settings | Route platform logs/metrics to Log Analytics, Storage, or Event Hub |
| Pricing Tier | Pay-as-you-go (billed per GB ingested), with cost management via sampling/filtering |
Azure Portal Walkthrough
Let’s set up Azure Monitor + Log Analytics step by step:
Step 1: Create a Log Analytics Workspace
- Go to Azure Portal → Search: Log Analytics workspaces
- Click + Create
- Choose:
  - Subscription
  - Resource group
  - Workspace name
  - Region (ideally same as your resources)
- Click Review + Create → then Create
Step 2: Connect Data Sources
Connect a VM
- Navigate to the VM → Monitoring → Insights
- Click Enable → Select your Workspace
- Azure installs the monitoring agent automatically (the Azure Monitor Agent on newer setups; see the sketch below and the agent note under Gotchas)
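If you’d rather codify VM onboarding, here’s a minimal Bicep sketch using the Azure Monitor Agent extension plus a data collection rule association. The VM name, the Windows agent choice, and the dataCollectionRuleId parameter are placeholder assumptions, not something from the walkthrough above:

```bicep
param location string = resourceGroup().location
param dataCollectionRuleId string // resource ID of a DCR you create separately to target your workspace

// Existing VM to onboard (placeholder name)
resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' existing = {
  name: 'vm-web-01'
}

// Install the Azure Monitor Agent (use AzureMonitorLinuxAgent for Linux VMs)
resource ama 'Microsoft.Compute/virtualMachines/extensions@2023-03-01' = {
  parent: vm
  name: 'AzureMonitorWindowsAgent'
  location: location
  properties: {
    publisher: 'Microsoft.Azure.Monitor'
    type: 'AzureMonitorWindowsAgent'
    typeHandlerVersion: '1.0'
    autoUpgradeMinorVersion: true
    enableAutomaticUpgrade: true
  }
}

// Link the VM to the data collection rule that routes its data to your workspace
resource dcrLink 'Microsoft.Insights/dataCollectionRuleAssociations@2022-06-01' = {
  name: 'vm-web-01-dcr-link'
  scope: vm
  properties: {
    dataCollectionRuleId: dataCollectionRuleId
  }
}
```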
Connect a PaaS Resource (e.g., App Service)
- Go to the App Service → Monitoring → Diagnostic settings
- Click + Add diagnostic setting
- Choose logs & metrics to push
- Route them to the Log Analytics workspace you created (sketched in Bicep below)
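To codify that same routing, here’s a minimal Bicep sketch of a diagnostic setting on an App Service. The app and workspace names and the chosen log categories are placeholder assumptions; pick the categories that match your workload:

```bicep
// Existing resources (placeholder names)
resource app 'Microsoft.Web/sites@2023-01-01' existing = {
  name: 'app-orders-prod'
}

resource law 'Microsoft.OperationalInsights/workspaces@2023-09-01' existing = {
  name: 'prod-observability-la'
}

// Route selected App Service log categories and all platform metrics to the workspace
resource diag 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'send-to-log-analytics'
  scope: app
  properties: {
    workspaceId: law.id
    logs: [
      {
        category: 'AppServiceHTTPLogs'
        enabled: true
      }
      {
        category: 'AppServiceConsoleLogs'
        enabled: true
      }
    ]
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}
```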
Step 3: Configure Retention Settings (Optional)
- Log Analytics Workspace → Usage and estimated costs
- Adjust data retention (default is 30 days, up to 730 days)
Bicep: Infrastructure as Code
Here’s a basic Bicep sketch to provision the Log Analytics workspace (the name, SKU, and retention values below are placeholders to adapt):
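```bicep
param location string = resourceGroup().location
param workspaceName string = 'prod-observability-la' // placeholder; follow your own naming standard

resource law 'Microsoft.OperationalInsights/workspaces@2023-09-01' = {
  name: workspaceName
  location: location
  properties: {
    sku: {
      name: 'PerGB2018' // pay-as-you-go pricing tier
    }
    retentionInDays: 30 // raise to match compliance needs (up to 730 days)
  }
}

output workspaceId string = law.id // handy to feed into diagnostic settings and DCR modules
```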
Add VM onboarding with a VM extension, or automate diagnostic settings using `Microsoft.Insights/diagnosticSettings` (both are sketched in the walkthrough above).
Architecture Overview
Here’s a brief overview diagram showing the kinds of resources involved and how they talk to an Azure Log Analytics workspace:
Gotchas & Edge Cases
- Agent sunset: The legacy Log Analytics agent (MMA) is being replaced by Azure Monitor Agent (AMA). Plan migration now.
- Data volume: Be careful with PaaS logs (especially App Service HTTP logs). Limit the categories you enable in Diagnostic Settings, and filter or sample where the service supports it, to reduce noise.
- Latency: Logs may take a few minutes to appear. Not real-time—always validate latency in design.
- Cross-region costs: Sending logs from one region to a workspace in another? Expect egress costs.
Best Practices
- Centralise to fewer workspaces where RBAC and data sovereignty allow
- Apply naming standards (`<env>-observability-la`)
- Align workspace region with workload
- Set retention to match compliance & usage (often 30–90 days unless required longer)
- Pull in Diagnostics + platform metrics, especially for network resources
- Use Resource Diagnostic Settings over legacy agents when possible
Log Analytics isn’t just for performance dashboards; it’s your front line for security signals too. I usually deploy a dedicated management workspace to capture tenant-level logs like Entra ID Sign-ins, Risky Users, and Azure Activity Logs.
Yes, the “fewer workspaces” rule can get messy in the wild, but it’s still worth the effort. Nothing drains your day faster than chasing a log across five workspaces just to track down a single issue.
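As a sketch of that pattern, this is roughly how you could route the subscription’s Activity Log to a dedicated management workspace with Bicep, deployed at subscription scope. The workspace ID parameter and the category list are assumptions to adapt; Entra ID sign-in and risk logs are configured separately through Entra ID’s own diagnostic settings.

```bicep
targetScope = 'subscription'

// Resource ID of the dedicated management workspace (placeholder)
param managementWorkspaceId string

// Send Azure Activity Log categories to the management workspace
resource activityLogDiag 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'activity-to-management-law'
  properties: {
    workspaceId: managementWorkspaceId
    logs: [
      {
        category: 'Administrative'
        enabled: true
      }
      {
        category: 'Security'
        enabled: true
      }
      {
        category: 'ServiceHealth'
        enabled: true
      }
      {
        category: 'Policy'
        enabled: true
      }
    ]
  }
}
```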