Network and Identity Dependencies That Break Migrations

The stuff nobody diagrams until production goes dark

Most migrations don’t fail because Azure can’t do the thing.

They fail because something still expects to talk to something else over a path, under an identity, via a trust relationship nobody explicitly designed, and that assumption survives right up until cutover.

At that point, the migration plan stops being an execution document and starts being an incident report.

The Mental Model

Common assumption:
“If it works today, and we move it carefully, the dependencies will come along for the ride.”

Why it’s wrong:
Network and identity dependencies aren’t portable assets. They’re emergent properties of an environment that’s been evolving for years, sometimes decades, without strong boundaries.

On‑prem, those dependencies are tolerated because the environment is permissive by default.
In Azure, boundaries are explicit, evaluated continuously, and enforced whether you meant them to be or not.

Migration doesn’t create these problems. It exposes them.

How It Really Works

Most estates operate with three overlapping dependency layers:

  • Declared dependencies
    Firewall rules, documented service accounts, known trusts.

  • Inherited dependencies
    Transitive network access, group‑based permissions, domain‑wide assumptions.

  • Accidental dependencies
    Scripts, scanners, batch jobs, agents, and “temporary” access that outlived the person who added it.

Only the first layer ever makes it into migration planning.

The other two are discovered late, usually under load, usually in production.

What This Looks Like in Practice

```mermaid
graph TD
    A[Line-of-Business App] -->|SQL| B[Shared Database]
    A -->|LDAP / Kerberos| C[On-Prem AD]
    D[Security Scanner] -->|SMB / WMI| A
    E[Batch Job] -->|Integrated Auth| B
    subgraph "Rarely Documented"
        D
        E
    end
```

The migration plan covers A → B.

The outage is caused by everything else.

Real‑World Impact

1. Reachability Is a Chain, Not a Switch

In Azure, “can it talk?” depends on all of the following being true at the same time:

  • A valid route exists
  • NSGs allow the flow
  • Firewalls allow the flow
  • Private endpoints / service endpoints resolve correctly
  • The identity is authorised at the destination

A failure anywhere in that chain produces the same symptom: timeout.

If your migration validation only checks one layer, you’re not validating; you’re effectively guessing.
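The chain above can be sketched as a script that walks each layer in order. Everything here is a placeholder: the layer names and the `true`/`false` stand-ins would be real probes in practice (Network Watcher checks, DNS lookups, an authenticated request against the destination).

```shell
#!/usr/bin/env sh
# Sketch: walk the reachability chain in order and report the first
# failing layer. The 'true'/'false' stand-ins are placeholders for
# real probes; only the structure matters here.
check_chain() {
  while [ "$#" -ge 2 ]; do
    layer="$1"; probe="$2"; shift 2
    if ! eval "$probe" >/dev/null 2>&1; then
      echo "FAIL at layer: $layer"
      return 1
    fi
  done
  echo "all layers pass"
}

# One broken layer is enough; the caller only ever sees a timeout.
check_chain \
  "route exists"         "true" \
  "nsg allows flow"      "true" \
  "firewall allows"      "false" \
  "private dns resolves" "true" \
  "identity authorised"  "true" \
  || echo "symptom from the caller's side: timeout"
```

The point of the structure: whichever probe fails, the workload on the other end reports the same thing, which is why single-layer validation tells you almost nothing.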

2. Identity Coupling Is Usually the Hard Stop

In real environments, identity is often coupled to:

  • Machines instead of workloads
    Apps authenticating as computer accounts.

  • Directories instead of scopes
    Domain‑wide permissions where resource‑level access was intended.

  • Humans and systems sharing identities
    Service accounts used interactively “just this once”.

These patterns don’t degrade gracefully.
They fail the moment you introduce isolation.

If you haven’t mapped identity flows explicitly, your migration risk is undefined. That’s not a technical issue; it’s a governance one.
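Mapping identity flows doesn’t need sophisticated tooling to start; even a flat export can surface the coupling patterns above. The CSV layout, identity names, and the `domain-wide` label below are invented for illustration; a real export would come from your directory tooling.

```shell
# Sketch: flag identity flows scoped wider than the resource they serve.
# All names and the file format are illustrative, not a real export.
cat > identity-flows.csv <<'EOF'
identity,scope,used_by
svc-batch,domain-wide,nightly ETL
app-lob,resource,app tier to shared DB
scanner,domain-wide,SMB/WMI sweep
EOF

# Anything domain-wide is a migration risk until someone can explain it.
awk -F',' 'NR > 1 && $2 == "domain-wide" { print "REVIEW: " $1 " (" $3 ")" }' \
  identity-flows.csv
```

Even this crude pass turns “identity risk is undefined” into a finite review list, which is the governance fix the paragraph above is asking for.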

3. Regulated and Hybrid Environments Amplify the Damage

In regulated estates:

  • Network paths are deliberately constrained
  • Identity permissions are layered and inherited
  • Security tooling relies on privileged, out‑of‑band access

This creates two compounding problems:

  1. Automated discovery misses critical paths
  2. Teams trust the output anyway

That false confidence is worse than ignorance. It locks timelines around assumptions that won’t survive first contact with reality.

Implementation Examples

Proving Network Dependencies with Virtual Network Flow Logs

ℹ️
NSG Flow Logs are being retired (end of support June 30), and Azure’s direction is clear:
Virtual Network Flow Logs (VNet Flow Logs) are now the supported way to observe traffic patterns at the network boundary.

This matters operationally, not just administratively.

VNet Flow Logs shift the question from “did the NSG allow it?” to “what traffic actually traversed the virtual network?”, which is exactly what you need during a migration.

Example: enabling VNet Flow Logs at the virtual network level.

```bicep
// Existing resources referenced by the flow log
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: 'savnetflowlogs'
}

resource virtualNetwork 'Microsoft.Network/virtualNetworks@2021-02-01' existing = {
  name: 'vnet-production-01'
}

resource networkWatcher 'Microsoft.Network/networkWatchers@2021-02-01' existing = {
  name: 'NetworkWatcher'
}

// VNet flow log targeting the virtual network itself,
// writing JSON records to the storage account above
resource vnetFlowLogs 'Microsoft.Network/networkWatchers/flowLogs@2023-09-01' = {
  parent: networkWatcher
  name: 'vnetflowlogs'
  location: resourceGroup().location
  properties: {
    targetResourceId: virtualNetwork.id
    enabled: true
    storageId: storageAccount.id
    format: {
      type: 'JSON'
      version: 2
    }
  }
}
```

What this changes in practice:

  • You observe real traffic paths, not just rule evaluation
  • Cross‑subnet and transitive flows become visible
  • Infrequent or “forgotten” dependencies show up over time
  • Migration validation shifts from intent to evidence

This is particularly valuable in hybrid environments, where traffic often enters the VNet in ways no one diagrammed.
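Once logs are flowing, the payoff is being able to ask “who actually talks to this workload?”. The record below is a deliberately simplified stand-in, not the real VNet flow log schema; a real pipeline would parse the JSON blobs Azure writes to the storage account.

```shell
# Simplified stand-in for flow records (NOT the real VNet flow log schema):
# columns are src_ip dest_ip dest_port decision, all values invented.
cat > flows.txt <<'EOF'
10.0.1.4 10.0.2.10 1433 A
10.0.9.7 10.0.2.10 445 A
10.0.1.4 10.0.2.10 1433 A
EOF

# Distinct sources that actually reached the workload at 10.0.2.10.
# The 445/SMB talker is exactly the kind of undocumented scanner
# traffic that evidence reveals and intent-based reviews miss.
awk '$2 == "10.0.2.10" { print $1, $3 }' flows.txt | sort -u
```

The query is trivial; the value is in the data source. An allow rule tells you a flow could happen; the log tells you it did, and how often.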

Identity Reality Check: Application Permissions

Many migration blockers aren’t bugs; they’re permissions that were never questioned.

```shell
az ad app permission list \
  --id <app-registration-id>
```

Look for:

  • Directory‑wide permissions granted “temporarily”
  • Legacy APIs still in use
  • Permissions that only function because of on‑prem trust paths

If removing a permission scares the team, you’ve found a migration risk, not a feature.
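One way to make that conversation concrete is to diff what’s granted against what’s documented. The permission names and files below are illustrative; a real run would feed the output of `az ad app permission list` through a JSON filter first.

```shell
# Sketch: diff granted permissions against the documented baseline.
# File contents are invented for illustration.
printf '%s\n' "User.Read" "Directory.Read.All" "Mail.Send" | sort > granted.txt
printf '%s\n' "User.Read" "Mail.Send" | sort > baseline.txt

# Permissions present in the tenant but absent from the baseline;
# each one needs an owner and a justification before migration.
comm -23 granted.txt baseline.txt
```

Anything this prints is either missing documentation or missing a reason to exist, and both are cheaper to resolve before cutover than after.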

Gotchas & Edge Cases

  • Transitive trust assumptions
    Visibility ≠ authorisation. Azure will enforce the difference.

  • Hard‑coded allowlists
    Often embedded in third‑party services and forgotten until traffic shifts.

  • Infrequent jobs
    Monthly or quarterly processes that fail long after go‑live.

  • Flow log misinterpretation
    Logs show that traffic flowed, not why it was allowed. Context still matters.

  • Discovery tooling bias
    Encrypted traffic, conditional access, and just‑in‑time permissions routinely evade automated mapping.

Best Practices (Hard Rules)

  • If a dependency can’t be explained, treat it as high risk
  • If identity flows aren’t diagrammed, the workload isn’t ready to move
  • Validate under restricted network conditions, not permissive ones
  • Assume discovery tools are incomplete; design reviews must compensate
  • Delay migrations when identity uncertainty is high; rework costs more later
🍺
Brewed Insight: Migration isn’t about moving systems; it’s about forcing your environment to tell the truth.
Network and identity dependencies are where it usually lies the most.

Learn More