Most Azure networks don’t fail loudly. They drift.
Not because engineers are careless, but because intent lives in people’s heads, while configuration lives in dozens of VNets, spread across subscriptions, regions, and teams.
Azure Virtual Network Manager (AVNM) exists for one reason:
to close that gap.
This post argues a clear position:
Once your Azure network spans multiple teams or subscriptions, per‑VNet configuration becomes an operational liability, and intent‑based networking stops being a “nice to have”.
The Mental Model: From Ownership to Obligation
The assumption we need to kill
If you squint, most Azure networks are still designed like this:
- VNets own their peerings
- Subnets own their NSGs
- Teams own “their” part of the network
Automation helps, but the unit of control is still the VNet.
Why this fails in real organisations
At scale, networks aren’t owned; they’re shared.
- Platform teams define guardrails
- Application teams create VNets
- Security teams care about rules, not topology
- Nobody owns the relationships between VNets end‑to‑end
The result is predictable:
- Inconsistent connectivity
- Security drift
- Manual exceptions that never get undone
AVNM exists to move networking from ownership to obligation.
How AVNM Actually Changes the Game
This is the critical shift:
With AVNM, VNets no longer decide how they connect.
The platform decides, and VNets comply.
AVNM introduces a control plane where you declare:
- Which VNets belong to the same connectivity domain
- Which security rules are non‑negotiable
- Where that intent is allowed to apply
VNets opt in by identity (tags, scope), not by bespoke wiring.
That’s not convenience.
That’s enforceable architecture.
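In practice, opting in can be as small as a tag applied when the VNet is created. A minimal Azure CLI sketch; the tag name networkDomain and its value are illustrative, not an AVNM convention:

```bash
# An application team creates its VNet as usual; the only networking decision
# it makes is declaring which connectivity domain the VNet belongs to.
az network vnet create \
  --name vnet-app01 \
  --resource-group rg-app01 \
  --location westeurope \
  --address-prefixes 10.20.0.0/16 \
  --tags networkDomain=corp   # picked up later by a dynamic Network Group
```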
Control Plane vs Data Plane (Why This Isn’t Magic)
```mermaid
flowchart TD
    AVNM[AVNM - Control Plane]
    NG[Network Groups]
    CFG["Connectivity & Security Intent"]
    subgraph DataPlane["Azure VNets (Data Plane)"]
        Hub[VNet - Hub]
        Spoke1[VNet - Spoke]
        Spoke2[VNet - Spoke]
    end
    AVNM --> NG
    NG --> CFG
    CFG --> Hub
    CFG --> Spoke1
    CFG --> Spoke2
```
A few truths worth stating plainly:
- AVNM does not carry traffic
- It does not dynamically react to failures
- Nothing happens until you deploy a configuration
That explicit deployment step is the point.
It’s where intent becomes a controlled change, not background magic.
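Roughly what that deployment step looks like with the Azure CLI (the network manager commands live in the virtual-network-manager extension); names and IDs are placeholders:

```bash
# Creating or editing a configuration changes nothing on its own.
# Intent only reaches the data plane when you explicitly commit it to regions.
az network manager post-commit \
  --resource-group rg-avnm \
  --network-manager-name avnm-platform \
  --commit-type "Connectivity" \
  --configuration-ids "<connectivity-configuration-resource-id>" \
  --target-locations westeurope northeurope
```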
Scoping Is a Power Tool (And a Loaded One)
AVNM is scoped to a management group or subscription.
That scope defines:
- Which VNets can be grouped
- Where intent can be enforced
- Your maximum blast radius
Here’s the uncomfortable reality:
A mis‑scoped AVNM can break every VNet it governs, and it will do so exactly as configured.
Examples I’ve seen in the wild:
- A connectivity config deployed at the wrong management group, flattening carefully segmented landing zones
- Security Admin Rules unintentionally overriding workload‑specific NSGs across dozens of subscriptions
- Platform teams granting AVNM access “temporarily” and never revoking it
AVNM should be treated like:
- Azure Policy assignments
- RBAC at management group scope
Rare, deliberate, and boring, by design.
Connectivity Across Subscriptions: Where AVNM Becomes Mandatory
This is the point where AVNM stops being optional.
Without AVNM
- Cross‑subscription peering requires coordination and trust
- Consistency relies on documentation and hope
- New VNets are snowflakes until someone remembers to wire them in
With AVNM
- Network Groups span subscriptions
- Connectivity intent applies uniformly
- New VNets inherit behaviour automatically
The key change isn’t technical; it’s organisational:
Application teams no longer need permission to design the network.
They only need permission to join it.
That reduces friction and risk, which is a rare combination.
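As a sketch of what “applies uniformly” means: one hub‑and‑spoke connectivity configuration targets the Network Group, and nothing in it refers to a subscription. All names and IDs below are illustrative:

```bash
# One connectivity intent for every VNet in the group, across subscriptions.
# Membership of the Network Group, not the VNet's subscription, decides who gets it.
az network manager connect-config create \
  --resource-group rg-avnm \
  --network-manager-name avnm-platform \
  --configuration-name cc-corp-hub-and-spoke \
  --connectivity-topology "HubAndSpoke" \
  --hub resource-id="<hub-vnet-resource-id>" resource-type="Microsoft.Network/virtualNetworks" \
  --applies-to-groups network-group-id="<network-group-resource-id>" \
      group-connectivity="None" is-global=false use-hub-gateway=true \
  --delete-existing-peering false
```

Group membership, not subscription boundaries, decides which VNets receive the intent.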
Implementation Example: Opt‑In by Identity, Not Wiring
This example isn’t about syntax; it’s about behaviour.
Define AVNM at platform scope
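A minimal sketch with the Azure CLI, assuming the platform management group is the scope; every name below is a placeholder:

```bash
# The network manager is owned by the platform team and scoped to the
# platform management group: that scope is the maximum blast radius.
az network manager create \
  --resource-group rg-avnm \
  --name avnm-platform \
  --location westeurope \
  --scope-accesses "Connectivity" "SecurityAdmin" \
  --network-manager-scopes management-groups="/providers/Microsoft.Management/managementGroups/mg-platform"
```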
This establishes who is allowed to define intent.
Define a Network Group using tags
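A sketch of one way to do it: an empty Network Group plus an Azure Policy definition (mode Microsoft.Network.Data, effect addToNetworkGroup) that pulls in any VNet carrying the right tag. The tag name networkDomain and all resource names are illustrative:

```bash
# 1. The group itself is just an empty container that intent will target.
az network manager group create \
  --resource-group rg-avnm \
  --network-manager-name avnm-platform \
  --name ng-corp-spokes

# 2. Membership is dynamic: a policy adds every VNet tagged networkDomain=corp.
az policy definition create \
  --name add-corp-vnets-to-group \
  --management-group mg-platform \
  --mode "Microsoft.Network.Data" \
  --rules '{
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
        { "field": "tags[networkDomain]", "equals": "corp" }
      ]
    },
    "then": {
      "effect": "addToNetworkGroup",
      "details": { "networkGroupId": "<network-group-resource-id>" }
    }
  }'

# 3. Assign the policy at the scope the network manager governs.
az policy assignment create \
  --name add-corp-vnets-to-group \
  --policy "/providers/Microsoft.Management/managementGroups/mg-platform/providers/Microsoft.Authorization/policyDefinitions/add-corp-vnets-to-group" \
  --scope "/providers/Microsoft.Management/managementGroups/mg-platform"
```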
The important outcome:
- App teams create VNets
- They apply a tag
- The platform enforces connectivity and security
No tickets.
No peering spreadsheets.
No “did you remember to…”.
Day‑2 Reality: What Actually Changes
Design
- You design connectivity domains, not diagrams
- Architecture reviews focus on intent, not implementation detail
Operations
- Drift is corrected by redeploying intent
- Network changes become auditable events
- Rollback is a config redeploy, not a forensic exercise
Failure modes
- Bad intent propagates fast
- Good intent scales effortlessly
AVNM amplifies whatever maturity you already have.
Gotchas That Actually Matter
- AVNM configurations deploy per region
- Security Admin Rules are evaluated before NSGs: Allow rules still hand off to the NSG, while Deny and Always Allow take precedence over it
- Dynamic groups live or die by tag hygiene
- Existing peerings don’t disappear unless intent says so
The biggest risk isn’t technical.
It’s treating AVNM like a convenience feature instead of a governance boundary.
Best Practices (Hard‑Won, Not Theoretical)
- Start with connectivity, not security
- Scope narrowly, expand reluctantly
- Treat tags as API contracts
- Lock down who can deploy configurations
- Document intent like you would policy