Mapping Attacker Movement Through Azure Networks

Once an attacker is inside, your routing decisions decide how far they can go.

Most Azure networks aren’t breached because they’re badly built.
They’re breached because they’re too well connected.

Hub‑and‑spoke. Shared services. Clean routing. Minimal friction.
All sensible, until someone is already inside.

At that point, your network diagram stops being documentation and starts being a movement map.

Assume breach isn’t about stopping entry.
It’s about deciding how far someone can walk once they’re through the door.

The Mental Model

Common assumption:

“If it’s internal and peered, it’s trusted enough.”

This usually manifests as:

  • Broad hub‑to‑spoke peering
  • Shared services VNets reachable from everywhere
  • Routing optimised for convenience, not containment

Why it misleads:
Azure networking is not transitive in protocol terms, but it is effectively transitive in operational reality.

If Azure’s control plane gives you a valid route:

  • The destination exists
  • The return path exists
  • The trust decision has already been made

NSGs may block traffic, but routing defines possibility.
And possibility is all an attacker needs to plan movement.
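The gap between routed possibility and NSG enforcement can be made concrete with a small sketch. The subnet names and rule sets below are hypothetical; the point is only that the attacker's planning surface comes from the routes, not the filters:

```python
# Routed "possibility": every destination the control plane will deliver to.
routes = {
    "snet-app": {"snet-db", "snet-shared", "snet-hub"},
}

# NSG enforcement: traffic actually permitted at the packet-filter layer.
nsg_allowed = {
    "snet-app": {"snet-db"},
}

compromised = "snet-app"

# What an attacker can *plan* around is the routed set, not the allowed set.
movement_map = routes[compromised]
blocked_but_discoverable = movement_map - nsg_allowed[compromised]

print(sorted(movement_map))              # every routable destination
print(sorted(blocked_but_discoverable))  # denied today, one NSG change away
</parameter>```

Everything in `blocked_but_discoverable` is a target that exists, has a return path, and is one misconfigured or relaxed NSG rule away from being reachable.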

How It Really Works

Attackers don’t need clever tricks to move through Azure networks.
They rely on the same primitives your workloads do.

Routing Defines the Playing Field

System routes and peering expose:

  • Which address spaces are reachable
  • Which segments are adjacent
  • Which services sit “between” environments

Once a route exists, that relationship is discoverable, regardless of whether traffic is later denied.

Peering Is a Trust Boundary (Whether You Admit It or Not)

VNet peering establishes:

  • Mutual reachability
  • Automatic return paths
  • Address‑space visibility

From an assume‑breach lens, peering answers the most important question an attacker asks:

“What am I implicitly trusted to talk to?”
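That question can be answered mechanically. The sketch below uses hypothetical VNet names and models the "effectively transitive" behaviour described above: peering itself is not transitive, but a forwarding hub (an NVA or firewall plus UDRs) extends reach through it:

```python
from collections import deque

# Peering edges are mutual; names are hypothetical.
peerings = {
    ("spoke-app-1", "hub"),
    ("spoke-app-2", "hub"),
    ("shared-services", "hub"),
}

# Peering alone is not transitive, but a forwarding hub makes it
# effectively transitive in operational reality.
forwarding = {"hub"}

def implicitly_reachable(start: str) -> set[str]:
    """Everything `start` can talk to: direct peers, plus the peers of
    any forwarding node encountered along the way."""
    adjacency: dict[str, set[str]] = {}
    for a, b in peerings:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        # Only the start and forwarding nodes extend the path further.
        if node != start and node not in forwarding:
            continue
        for peer in adjacency.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen - {start}

print(sorted(implicitly_reachable("spoke-app-1")))
```

A compromised spoke is implicitly trusted to reach the hub, the other spoke, and shared services, which is exactly the answer an attacker is looking for.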

Shared Services Quietly Become Pivots

Jump hosts, automation, internal APIs, update services: these aren't neutral.
They are high‑leverage infrastructure.

When they’re broadly reachable, they stop being helpers and start being accelerators.

Real‑World Impact

This reframes how you should design and operate Azure networks.

Design: From Connectivity to Containment

The key question is no longer:

“Who needs access to this?”

It becomes:

“If this subnet is compromised, what trust does it inherit?”

That shift usually leads to:

  • Smaller, purpose‑bound VNets
  • Fewer unconditional peerings
  • Explicit containment boundaries that exist before an incident
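The effect of that shift can be quantified with a toy blast-radius check. The two topologies below are hypothetical; the function just asks what trust a compromised VNet inherits directly:

```python
def blast_radius(peerings, compromised):
    """Direct trust a compromised VNet inherits: everything it is peered to."""
    return {b for a, b in peerings if a == compromised} | \
           {a for a, b in peerings if b == compromised}

# Convenience-first: one broad shared VNet peered to every spoke.
broad = {("spoke-1", "shared"), ("spoke-2", "shared"), ("spoke-3", "shared"),
         ("spoke-1", "spoke-2")}

# Containment-first: purpose-bound peerings only.
contained = {("spoke-1", "shared-logs")}

print(sorted(blast_radius(broad, "spoke-1")))      # ['shared', 'spoke-2']
print(sorted(blast_radius(contained, "spoke-1")))  # ['shared-logs']
```

Running this comparison over a real peering inventory makes "fewer unconditional peerings" a measurable property rather than an aspiration.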

Incident Response: Optionality Matters

During an incident you can’t redesign the network.
You can only remove paths.

Architectures that collapse operations and containment onto the same peerings force bad choices:

  • Keep dangerous paths open to preserve access
  • Or cut everything and accept collateral damage

Assume breach is about preserving choice under pressure.
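Preserving choice means you can remove one path without removing all of them. The sketch below, with hypothetical node names, models severing a compromised spoke's peering while a separate management path keeps response tooling reachable:

```python
def reachable(edges, start):
    """Transitive reachability over undirected links (worst case:
    assume every hop can forward traffic)."""
    adjacency = {}
    for a, b in edges:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen, stack = {start}, [start]
    while stack:
        for peer in adjacency.get(stack.pop(), ()):
            if peer not in seen:
                seen.add(peer)
                stack.append(peer)
    return seen - {start}

edges = {
    ("spoke-app", "hub"),
    ("shared-services", "hub"),
    # A separate, containment-friendly management path:
    ("ir-bastion", "shared-services"),
}

# Cut the compromised spoke's peering...
severed = edges - {("spoke-app", "hub")}

# ...the attacker is isolated, but response tooling keeps working.
print(sorted(reachable(severed, "spoke-app")))   # []
print(sorted(reachable(severed, "ir-bastion")))  # ['hub', 'shared-services']
```

If removing the only edge that isolates the attacker also removes your access to shared services, the architecture has already made the bad choice for you.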

Implementation Examples

The Diagram That Actually Matters

Most diagrams show connectivity.
This one highlights the single link that usually breaks containment.

```mermaid
flowchart LR
    A["Compromised Workload<br/>Spoke-App-1"] -->|System Routes| B[App Subnets]
    B -->|Peering| C[Hub VNet]
    C -->|Peering| D[Shared Services VNet]
    C -->|Peering| E[Spoke-App-2]
    D --> F[Jump Hosts]
    D --> G["Automation / APIs"]
    style A fill:#ffdddd
    style D fill:#ffe0b2
    style C fill:#e3f2fd
    style B fill:#f5f5f5
    %% Emphasis
    B -->|Incident-Critical Trust Path| D
```

Judgement, not illustration:

If a single peering link enables both normal operations and lateral movement during compromise, it is the wrong trust boundary.

In most environments, that link is: Spoke → Shared Services (often via the hub).

That is the path you will want to cut first and often can’t.

Inspecting the Paths You’ve Already Allowed

You don’t need attacker tooling to see this. Azure exposes it.

```shell
az network nic show-effective-route-table \
  --resource-group rg-app1 \
  --name nic-app1-vm01
```

When you review effective routes, ask one question:

If this destination were compromised, could I afford to isolate it immediately?

If the answer is no, that’s not connectivity, it’s embedded trust.
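Automating that review is straightforward. The JSON below is illustrative and only loosely shaped like what the command above returns with `-o json` (the prefixes and hop types are made up); the filter picks out every prefix that is reachable purely because a peering exists:

```python
import json

# Illustrative effective-route output; values are hypothetical.
raw = """{
  "value": [
    {"addressPrefix": ["10.1.0.0/16"], "nextHopType": "VnetLocal",   "state": "Active"},
    {"addressPrefix": ["10.2.0.0/16"], "nextHopType": "VNetPeering", "state": "Active"},
    {"addressPrefix": ["0.0.0.0/0"],   "nextHopType": "Internet",    "state": "Active"}
  ]
}"""

routes = json.loads(raw)["value"]

# The containment question: which active routes cross a trust boundary?
peered = [p for r in routes if r["nextHopType"] == "VNetPeering"
          for p in r["addressPrefix"]]
print(peered)  # prefixes that exist solely because of a peering
```

Each prefix in that list deserves the question above: could you afford to isolate it immediately?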

Shared Services: Unacceptable vs Merely Risky

This is where most architectures quietly fail assume breach.

Shared Services Are Unacceptable When

All of the following are true:

  • Broadly reachable from multiple spokes
  • Required during incidents (access, recovery, automation)
  • Difficult to isolate without breaking everything else

In this state, compromise of any spoke plausibly exposes:

  • The tools you use to respond
  • The systems you rely on to recover

That’s not risk; it’s self‑defeating design.

Shared Services Are Merely Risky When

Risk is a conscious trade when:

  • The service is not required for containment
  • Trust is one‑way and disposable
  • Losing it degrades visibility, not control

Examples that often fall here:

  • Log ingestion
  • Metrics collection
  • Read‑only artifact distribution

The distinction isn’t shared vs not shared.
It’s optional vs non‑optional under duress.
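That distinction can be encoded as a first-pass triage rule. The inventory and attributes below are hypothetical; the logic simply separates services you need under duress from those you can afford to lose:

```python
# Hypothetical service inventory: is each one required for containment,
# and is its trust one-way and disposable?
services = {
    "jump-hosts":      {"needed_during_incident": True,  "one_way": False},
    "automation":      {"needed_during_incident": True,  "one_way": False},
    "log-ingestion":   {"needed_during_incident": False, "one_way": True},
    "artifact-mirror": {"needed_during_incident": False, "one_way": True},
}

def classify(attrs: dict) -> str:
    # Non-optional under duress: broad reachability is self-defeating.
    if attrs["needed_during_incident"]:
        return "unacceptable if broadly reachable"
    # Optional and disposable: a conscious, survivable trade.
    return "merely risky" if attrs["one_way"] else "review"

for name, attrs in services.items():
    print(f"{name}: {classify(attrs)}")
```

A rule this crude won't replace judgement, but it forces the inventory conversation before an incident rather than during one.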

Gotchas & Edge Cases

  • Firewalls and NVAs
    Centralising enforcement doesn’t remove trust; it concentrates it. Treat them as blast‑radius multipliers.

  • Private Endpoints
    They add paths. They do not remove the need to reason about trust boundaries.

  • Overlapping address spaces
    They delay containment decisions when time matters most.

Best Practices

  • Treat hub‑and‑spoke as a routing pattern, not a trust model
  • Classify services by incident criticality, not convenience
  • Minimise peerings you cannot sever quickly
  • Regularly review effective routes with a containment mindset
  • Document which links you would cut first before you need to
🍺 Brewed Insight:

Hub‑and‑spoke isn’t the problem.
Assuming that the hub, and everything that lives behind it, is always trusted is.

Design networks so you can afford to be wrong.

Learn More