Virtual network connectivity options and spoke-to-spoke communication

Microsoft Entra ID
Azure Virtual Network
Azure VPN Gateway
Azure Virtual Network Manager

This article compares two ways to connect virtual networks in Azure: virtual network peering and virtual private network (VPN) gateways. It also explores spoke-to-spoke communication patterns for hub-and-spoke architectures to help you choose the optimal approach for your networking requirements.

Virtual networks form the foundation of these topologies. A virtual network is a private network space in Azure that provides isolation for your resources. By default, Azure doesn't route traffic between virtual networks. But you can connect virtual networks, either within a single region or across two regions, to route traffic between them.

Virtual network connection types

  • Virtual network peering connects two Azure virtual networks, which makes them appear as one for connectivity purposes. Azure routes traffic between virtual machines in the peered virtual networks through the Microsoft backbone by using private IP addresses only.

  • Subnet peering connects specific subnets between virtual networks instead of peering entire virtual networks.

  • VPN gateways are specific types of virtual network gateways that send traffic between an Azure virtual network and a cross-premises ___location over the public internet. You can also use a VPN gateway to send traffic between Azure virtual networks. Each virtual network supports only one VPN gateway.

Gateway transit

Virtual network peering and VPN gateways can also coexist through gateway transit.

Gateway transit lets you use a peered virtual network's gateway to connect to on-premises networks instead of creating a new gateway. As your workloads in Azure increase, you must scale your networks across regions and virtual networks to support that growth. Use gateway transit to share an Azure ExpressRoute or VPN gateway with all peered virtual networks and manage connectivity in one place. This method saves money and simplifies management.

When you enable gateway transit on virtual network peering, you can create a transit virtual network that contains your VPN gateway, network virtual appliance (NVA), and other shared services. As your organization adds new applications or business units and creates new virtual networks, you can connect them to your transit virtual network by using peering. This setup avoids network complexity and reduces the effort required to manage multiple gateways and appliances.
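
The following sketch shows how this kind of peering pair might be configured with the Azure SDK for Python (azure-mgmt-network). The subscription ID, resource groups, and virtual network names are placeholders, and the flag values are one reasonable combination rather than a prescribed configuration: the hub side enables gateway transit, and the spoke side uses the remote (hub) gateway.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

hub_rg, hub_vnet = "rg-hub", "vnet-hub"          # hypothetical names
spoke_rg, spoke_vnet = "rg-spoke", "vnet-spoke1"  # hypothetical names
hub_vnet_id = (f"/subscriptions/{subscription_id}/resourceGroups/{hub_rg}"
               f"/providers/Microsoft.Network/virtualNetworks/{hub_vnet}")
spoke_vnet_id = (f"/subscriptions/{subscription_id}/resourceGroups/{spoke_rg}"
                 f"/providers/Microsoft.Network/virtualNetworks/{spoke_vnet}")

# Hub side: allow the peered spoke to use the hub's VPN or ExpressRoute gateway.
client.virtual_network_peerings.begin_create_or_update(
    hub_rg, hub_vnet, "hub-to-spoke1",
    {
        "remote_virtual_network": {"id": spoke_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": True,   # share the hub gateway with the spoke
        "use_remote_gateways": False,
    },
).result()

# Spoke side: use the hub's gateway instead of deploying its own.
client.virtual_network_peerings.begin_create_or_update(
    spoke_rg, spoke_vnet, "spoke1-to-hub",
    {
        "remote_virtual_network": {"id": hub_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,
        "allow_gateway_transit": False,
        "use_remote_gateways": True,     # learn on-premises routes via the hub gateway
    },
).result()
```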

Configure connections

Virtual network peering and VPN gateways support the following connection types:

  • Virtual networks in different regions
  • Virtual networks in different Microsoft Entra tenants
  • Virtual networks in different Azure subscriptions


Comparison of virtual network peering and VPN gateway

| Feature or capability | Virtual network peering | VPN gateway |
|---|---|---|
| Limits | Up to 500 virtual network peerings per virtual network. Use the Azure Virtual Network Manager connectivity configuration feature to create up to 1,000 virtual network peerings per virtual network. | One VPN gateway per virtual network. The maximum number of tunnels per gateway depends on the gateway SKU. |
| Pricing model | Ingress and egress data transfer | Hourly gateway cost plus egress data transfer |
| Encryption | Use Azure Virtual Network encryption. | Apply a custom IPsec/IKE policy to new or existing connections. See About cryptographic requirements and Azure VPN gateways. |
| Bandwidth limitations | No bandwidth limitations | Varies based on the gateway SKU |
| Private? | Yes. Traffic routes through the Microsoft backbone and avoids the public internet. | VPN gateways use public IP addresses, but traffic routes through the Microsoft backbone when the Microsoft global network is enabled. |
| Transitive relationship | Peering connections are nontransitive. Use NVAs or gateways in the hub virtual network to enable transitive networking. See a hub-spoke network topology example. | Transitive routing is supported if you connect virtual networks via VPN gateways and enable Border Gateway Protocol (BGP) on the connections. |
| Initial setup time | Fast | About 30 minutes |
| Typical scenarios | Data replication, database failover, and other scenarios that need frequent backups of large data. Supports data policies that prevent sending any traffic over the internet. | Scenarios that aren't latency sensitive, don't need high throughput, and have data policies that allow internet traversal. |

The virtual network peering and VPN gateway technologies form the foundation for more complex networking architectures. One of the most common enterprise patterns is hub-and-spoke networking, where multiple spoke virtual networks need to communicate with each other. The choice between virtual network peering and VPN gateways significantly affects how you implement inter-spoke communication.

Spoke-to-spoke communication patterns

Inter-spoke networking refers to direct communication between workloads or workload components that run in different spoke virtual networks within hub-and-spoke architectures. Spoke-to-spoke patterns eliminate the need to route traffic through the central hub.

Inter-spoke networking provides the following benefits:

  • Better performance: Direct connections eliminate extra hops and bottlenecks.
  • Lower costs: Fewer peering connections and reduced hub infrastructure requirements.
  • Easier management: Less complex routing and fewer components to monitor.
  • Regional flexibility: Support for both single-region and cross-region communication patterns.

Most design guides focus on north-south traffic, which flows between users and applications (from on-premises networks or the internet to Azure virtual networks). These patterns focus on east-west traffic, which represents communication flows between workload components deployed in Azure virtual networks, either within a single region or across multiple regions. Ensure that your network design satisfies requirements for east-west traffic to provide performance, scalability, and resiliency to your applications that run in Azure.

Hub-and-spoke architectures provide centralized control and security, but they can become performance bottlenecks and cost centers when all workload-to-workload traffic must traverse the hub. Inter-spoke networking provides architectural flexibility to optimize for performance and cost where it makes business sense, while maintaining centralized control for security and governance needs.

Two main patterns connect spoke virtual networks to each other:

  • Pattern 1: Spokes directly connect to each other. Create virtual network peerings or VPN tunnels between the spoke virtual networks to provide direct connectivity without traversing the hub virtual network.

  • Pattern 2: Spokes communicate over a network appliance. Connect each spoke virtual network to either Azure Virtual WAN or a hub virtual network. A network appliance routes traffic between spokes. You can manage the appliance yourself, or Microsoft can manage it, as in Virtual WAN deployments.

Choose your approach

Use the following table to choose your overall approach based on your priorities.

| Your priority | Recommended pattern | Key technology |
|---|---|---|
| Maximum performance | Pattern 1 | Virtual network peering |
| Enterprise-scale management | Pattern 1 | Virtual Network Manager |
| Traffic inspection | Pattern 2 | Azure Firewall |
| Multi-region simplicity | Pattern 2 | Virtual WAN |
| Cross-cloud connectivity | Pattern 1 | VPN tunnels |

The following sections provide implementation details for each pattern.

Pattern 1: Spokes directly connect to each other

Direct connections between spokes typically provide better throughput, lower latency, and better scalability than connections that go through an NVA in a hub. Sending traffic through NVAs adds latency, especially when the NVA resides in a different availability zone and traffic must cross at least two virtual network peerings.

To connect two spoke virtual networks to each other directly, use virtual network peering, Virtual Network Manager, or VPN tunnels.

| Technology | Best for | Limitations | Management |
|---|---|---|---|
| Virtual network peering | High performance, same cloud | 500-peering limit | Manual |
| Virtual Network Manager | Enterprise scale (more than five spokes) | Learning curve | Automated |
| VPN tunnels | Cross-cloud, encryption | 1.25 Gbps per tunnel | Complex |

Virtual network peering

Virtual network peering provides the highest performance option for direct spoke-to-spoke connectivity. This method creates low-latency, high-bandwidth connections through the Microsoft backbone infrastructure, without gateways or extra hops in the path. You can also peer virtual networks across Azure regions, which is known as global peering.

Use virtual network peering for scenarios such as cross-region data replication and database failover, specifically where your data policies don't require inspection. This approach is often used between network-isolated components within a single workload. Because traffic stays private and travels only on the Microsoft backbone, virtual network peering supports strict data policies and avoids sending traffic over the public internet.
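
As a rough illustration, the following Python sketch creates both directions of a peering between two spoke virtual networks by using azure-mgmt-network. The subscription ID, resource group, and virtual network names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), sub)

def vnet_id(rg: str, name: str) -> str:
    """Build a virtual network resource ID from hypothetical names."""
    return (f"/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.Network/virtualNetworks/{name}")

# A peering is only usable after it's created in both directions.
pairs = [
    ("rg-spokes", "vnet-spoke1", "rg-spokes", "vnet-spoke2"),
    ("rg-spokes", "vnet-spoke2", "rg-spokes", "vnet-spoke1"),
]
for rg, vnet, remote_rg, remote_vnet in pairs:
    client.virtual_network_peerings.begin_create_or_update(
        rg, vnet, f"{vnet}-to-{remote_vnet}",
        {
            "remote_virtual_network": {"id": vnet_id(remote_rg, remote_vnet)},
            "allow_virtual_network_access": True,
            "allow_forwarded_traffic": False,  # direct spoke-to-spoke; no transit traffic
            "use_remote_gateways": False,
        },
    ).result()
```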

For spoke-to-spoke implementation, follow these guidelines:

  • Create peering connections directly between spoke virtual networks that need to communicate.

  • Maintain existing hub-to-spoke peerings for centralized services and for traffic that requires egress inspection.

  • Limit peering to spokes within the same environment.

  • Plan for scaling based on the number of spokes. Manual peering works for two to five spokes. Use Virtual Network Manager for larger environments.

To optimize your design, follow these best practices:

  • Monitor the 500 peering limit per virtual network.

  • Use regional peering when possible for lowest latency.

  • Use network security groups to control traffic flow between peered networks, as shown in the sketch after this list.

  • Document peering relationships for troubleshooting and compliance.

  • Test connectivity after establishing peerings to verify proper routing.

  • Limit connectivity to be unidirectional when supported by the scenario.
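
The following sketch illustrates the network security group (NSG) best practice by using azure-mgmt-network. The subnet prefixes, rule names, and region are hypothetical; adapt the rules to your own segmentation model and then associate the NSG with the relevant spoke subnets.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow only SQL traffic from spoke1's application subnet into spoke2's data subnet,
# and deny all other traffic that arrives from the peered spoke.
client.network_security_groups.begin_create_or_update(
    "rg-spokes", "nsg-spoke2-data",
    {
        "___location": "westeurope",
        "security_rules": [
            {
                "name": "allow-sql-from-spoke1-app",
                "priority": 100,
                "direction": "Inbound",
                "access": "Allow",
                "protocol": "Tcp",
                "source_address_prefix": "10.1.1.0/24",       # spoke1 app subnet (example)
                "source_port_range": "*",
                "destination_address_prefix": "10.2.2.0/24",  # spoke2 data subnet (example)
                "destination_port_range": "1433",
            },
            {
                "name": "deny-all-from-spoke1",
                "priority": 200,
                "direction": "Inbound",
                "access": "Deny",
                "protocol": "*",
                "source_address_prefix": "10.1.0.0/16",       # spoke1 address space (example)
                "source_port_range": "*",
                "destination_address_prefix": "*",
                "destination_port_range": "*",
            },
        ],
    },
).result()
```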

Subnet peering works similarly to virtual network peering but provides more granular control. You can specify which subnets at both sides of the peering can communicate with each other. This feature requires subscription registration and enforces a 400 subnet limit per peering connection. Subnet peering supports scenarios like overlapping virtual network ranges, IPv6-only connections, and selective gateway exposure.

Virtual Network Manager

Virtual Network Manager provides automated management for virtual network connectivity at scale. You can use Virtual Network Manager to build three types of topologies across subscriptions. These topologies work with both existing and new virtual networks.

  • Hub and spoke with spokes that don't connect to each other

  • Hub and spoke with spokes that directly connect to each other, without a hop in the middle

  • A meshed group of virtual networks that connect to each other

Network diagram that shows the topologies that Virtual Network Manager supports.


When you create a hub-and-spoke topology by using Virtual Network Manager and connect spokes to each other, Azure automatically creates bidirectional connectivity between spoke virtual networks in the same network group.

Use Virtual Network Manager to statically or dynamically assign spoke virtual networks to specific network groups. This assignment automatically creates virtual network connectivity.

You can create multiple network groups to isolate clusters of spoke virtual networks from direct connectivity. Each network group supports both single-region and multiregion spoke-to-spoke connections. Understand the limits for Virtual Network Manager.
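
The following rough sketch outlines what a connectivity configuration of this kind can look like when created with the Azure SDK for Python. The operation names, resource names, and field values are assumptions based on the Virtual Network Manager API rather than details from this article, and the configuration still has to be deployed (committed) to the target regions before Azure creates the connections.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

sub = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), sub)

# Assumes a network manager ("avnm-corp") and a network group that contains the
# spoke virtual networks already exist; all names and IDs are illustrative.
rg, manager = "rg-network", "avnm-corp"
group_id = (f"/subscriptions/{sub}/resourceGroups/{rg}/providers/"
            f"Microsoft.Network/networkManagers/{manager}/networkGroups/spokes-prod")
hub_id = (f"/subscriptions/{sub}/resourceGroups/rg-hub/providers/"
          f"Microsoft.Network/virtualNetworks/vnet-hub")

# Hub-and-spoke topology in which spokes in the group connect directly to each other.
client.connectivity_configurations.create_or_update(
    rg, manager, "hub-spoke-direct",
    {
        "connectivity_topology": "HubAndSpoke",
        "hubs": [{"resource_id": hub_id,
                  "resource_type": "Microsoft.Network/virtualNetworks"}],
        "applies_to_groups": [{
            "network_group_id": group_id,
            "group_connectivity": "DirectlyConnected",  # spoke-to-spoke connectivity
            "use_hub_gateway": "True",
            "is_global": "False",                       # single region in this example
        }],
        "delete_existing_peering": "True",
    },
)
# A deployment (commit) of this configuration to the target regions is still required.
```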

VPN tunnels

Configure VPN services to directly connect spoke virtual networks by using Microsoft VPN gateways or non-Microsoft VPN NVAs. VPN gateways provide limited-bandwidth connections, so they work well in scenarios that require encryption but can tolerate bandwidth restrictions and latency. To help detect and mitigate large-scale attacks, enable Azure DDoS Protection on perimeter virtual networks.

Spoke virtual networks can connect across commercial and sovereign clouds from the same cloud provider, or across different cloud providers. If each spoke virtual network includes software-defined wide area network (SD-WAN) NVAs, use the non-Microsoft provider's control plane and feature set to manage virtual network connectivity.

VPN-based connections also help meet compliance requirements for the encryption of traffic across virtual networks in a single Azure datacenter, where Media Access Control Security (MACsec) encryption doesn't apply.

This approach has the following limitations:

  • Bandwidth is limited to 1.25 Gbps per tunnel.
  • It requires virtual network gateways in both hub and spoke virtual networks.
  • Spoke virtual networks that have gateways can't connect to Virtual WAN or use hub gateways for on-premises connectivity.
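
As a hedged example of this option, the following Python sketch creates one direction of a VNet-to-VNet connection between two spoke gateways by using azure-mgmt-network. The gateway names, resource groups, region, and shared key are placeholders, and a mirror connection in the opposite direction with the same shared key is also required.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import VirtualNetworkGatewayConnection

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Both spokes are assumed to already have VPN gateways deployed.
gw1 = client.virtual_network_gateways.get("rg-spoke1", "vgw-spoke1")
gw2 = client.virtual_network_gateways.get("rg-spoke2", "vgw-spoke2")

# VNet-to-VNet tunnel from spoke1 to spoke2; create a second connection from
# spoke2 to spoke1 with the same shared key to complete the tunnel.
connection = VirtualNetworkGatewayConnection(
    ___location="westeurope",
    connection_type="Vnet2Vnet",
    virtual_network_gateway1=gw1,
    virtual_network_gateway2=gw2,
    shared_key="<pre-shared-key>",
    enable_bgp=False,
)
client.virtual_network_gateway_connections.begin_create_or_update(
    "rg-spoke1", "spoke1-to-spoke2", connection
).result()
```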

Single vs. multiple regions

The following diagram shows the network topology for a single region, regardless of the technology used for spoke virtual network connection.

Network diagram that shows a single-region hub-and-spoke design.

The following diagram shows the network topology for multiple regions. Designs that connect all spoke virtual networks to each other can extend across multiple regions. Virtual Network Manager helps reduce the administrative effort required to maintain a large number of connections.

Network diagram that shows a two-region hub-and-spoke design with spokes in the same region connected via virtual network peerings.

Note

When you use the direct connection method, whether in a single region or multiple regions, connect spoke virtual networks within the same environment. For example, connect one spoke development virtual network with another spoke development virtual network. Don't connect a spoke development virtual network with a spoke production virtual network.

When you directly connect spoke virtual networks to each other in a fully meshed topology, expect a high number of virtual network peerings. The following diagram shows this challenge. In this scenario, use Virtual Network Manager to automatically create virtual network connections.

Diagram that shows how the required number of peerings grows with the number of spokes.

Pattern 2: Spokes communicate over a network appliance

Instead of connecting spoke virtual networks directly to each other, you can use network appliances to forward traffic between spokes. Network appliances provide other network features like deep packet inspection, traffic segmentation, and monitoring. But they can introduce latency and performance bottlenecks if not properly sized. These appliances typically reside in a hub virtual network that the spokes connect to.

The following resources use a network appliance to forward traffic between spokes:

  • Virtual WAN hub router: Microsoft manages Virtual WAN. Virtual WAN contains a virtual router that attracts traffic from spokes. It routes that traffic either to another virtual network connected to Virtual WAN or to on-premises networks via ExpressRoute, site-to-site VPN, or point-to-site VPN tunnels. The Virtual WAN router scales up and down automatically. You only need to ensure that the traffic volume between spokes stays within the Virtual WAN limits.

  • Azure Firewall: Azure Firewall is a Microsoft-managed network appliance that you can deploy in hub virtual networks that you manage or in Virtual WAN hubs. It forwards IP packets, inspects them, and applies the traffic segmentation rules defined in policies. Azure Firewall automatically scales up so that it doesn't become a bottleneck, but it has limits. It supports built-in multiregion capabilities only when used with Virtual WAN. Without Virtual WAN, you must implement user-defined routes to enable cross-regional spoke-to-spoke communication.

  • Non-Microsoft NVAs: If you prefer Microsoft partner NVAs to perform routing and network segmentation, you can deploy them in either a hub-and-spoke or Virtual WAN topology. For more information, see Deploy highly available NVAs and NVAs in a Virtual WAN hub. Ensure that the NVA supports the bandwidth that the inter-spoke communications generate.

  • Azure VPN Gateway: Although you can set a VPN gateway as the next hop type of a user-defined route, don't use VPN gateways to route spoke-to-spoke traffic. These devices are designed to encrypt traffic to cross-premises sites or VPN users. For example, Azure doesn't guarantee bandwidth between spokes when traffic routes through a VPN gateway.

  • ExpressRoute: An ExpressRoute gateway can advertise routes that attract spoke-to-spoke communication. This setup sends traffic to the Microsoft edge routers, which route it to the destination spoke. This pattern, known as ExpressRoute hairpinning, must be explicitly enabled. Avoid this approach because it introduces latency by sending traffic to the Microsoft backbone edge and back. It creates a single point of failure and expands the blast radius. It also puts extra load on the ExpressRoute infrastructure, specifically the gateway and physical routers, which can result in packet drops.

In self-managed hub-and-spoke network designs that include centralized NVAs, place the appliance in the hub. Create virtual network peerings between the hub and each spoke virtual network manually, or use Virtual Network Manager to automate the configuration.

Single region deployment

The following diagram shows a single-region hub-and-spoke topology that sends traffic between spokes through an Azure firewall deployed in the hub virtual network. User-defined routes applied to the spoke subnets forward traffic to the centralized appliance in the hub.

Network diagram that shows a basic hub-and-spoke design with spokes that interconnect through a centralized NVA.

To improve scalability, you can separate the NVAs that handle spoke-to-spoke traffic from the NVAs that handle internet traffic. To separate them, follow these steps:

  • Tune the route tables in each spoke to send private address traffic, such as traffic that uses RFC 1918 prefixes like 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16, to an NVA. This appliance handles Azure-to-Azure and Azure-to-on-premises traffic, often known as east-west traffic.

  • Route internet traffic, which matches the 0.0.0.0/0 prefix, to a second NVA. This NVA manages Azure-to-internet traffic, also known as north-south traffic.

The following diagram shows this configuration.

Network diagram that shows a basic hub-and-spoke design. It has spokes connected via two centralized NVAs for internet and private traffic.
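
A minimal sketch of a spoke route table that implements this split, using azure-mgmt-network. The NVA IP addresses, names, and region are assumptions; associate the resulting route table with each spoke workload subnet.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

EAST_WEST_NVA_IP = "10.0.1.4"    # NVA for Azure-to-Azure and Azure-to-on-premises traffic
NORTH_SOUTH_NVA_IP = "10.0.2.4"  # NVA for internet-bound traffic

# Route table for spoke subnets: RFC 1918 prefixes go to the east-west NVA,
# and the default route (0.0.0.0/0) goes to the north-south NVA.
client.route_tables.begin_create_or_update(
    "rg-spokes", "rt-spoke-workloads",
    {
        "___location": "westeurope",
        "routes": [
            {"name": "rfc1918-10", "address_prefix": "10.0.0.0/8",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": EAST_WEST_NVA_IP},
            {"name": "rfc1918-172", "address_prefix": "172.16.0.0/12",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": EAST_WEST_NVA_IP},
            {"name": "rfc1918-192", "address_prefix": "192.168.0.0/16",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": EAST_WEST_NVA_IP},
            {"name": "default-internet", "address_prefix": "0.0.0.0/0",
             "next_hop_type": "VirtualAppliance", "next_hop_ip_address": NORTH_SOUTH_NVA_IP},
        ],
    },
).result()
```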

Note

Azure Firewall supports only one firewall resource per virtual network, so to deploy extra Azure Firewall resources, you need separate hub virtual networks. In contrast, you can deploy extra NVAs in a single hub virtual network.

Multiple region deployment

You can extend the same configuration to multiple regions. For example, in a self-managed hub-and-spoke design that uses Azure Firewall, apply extra route tables to the Azure Firewall subnets in each hub. These route tables support spokes in remote regions and ensure that inter-region traffic flows between the Azure firewalls in each hub virtual network. Inter-regional traffic between spoke virtual networks traverses both Azure firewalls. For more information, see Use Azure Firewall to route a multi-hub and spoke topology.

Network diagram that shows a two-region hub-and-spoke design via NVAs in the hubs.
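
The following sketch shows what one of these extra route tables might look like when created with azure-mgmt-network and associated with the AzureFirewallSubnet in the region 1 hub. The address prefix and the region 2 firewall IP address are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Private IP of the Azure Firewall in the region 2 hub (illustrative value).
REGION2_FIREWALL_IP = "10.100.1.4"

# Route table for the AzureFirewallSubnet in region 1: send prefixes that belong
# to region 2's spokes to the Azure Firewall in the region 2 hub.
client.route_tables.begin_create_or_update(
    "rg-hub-region1", "rt-azfw-region1",
    {
        "___location": "westeurope",
        "disable_bgp_route_propagation": False,  # keep learning local and on-premises routes
        "routes": [
            {"name": "to-region2-spokes", "address_prefix": "10.100.0.0/16",
             "next_hop_type": "VirtualAppliance",
             "next_hop_ip_address": REGION2_FIREWALL_IP},
        ],
    },
).result()
```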

The following diagram shows a design variation that includes separate Azure firewalls or NVAs for north-south and east-west traffic in a multiregion hub-and-spoke topology.

Network diagram that shows a two-region hub-and-spoke design. It has separate east-west and north-south firewalls in each region.

The following topology uses Virtual WAN to simplify routing. Virtual WAN manages routing in the hubs, which Microsoft manages, and in the spokes, where it can inject routes automatically without manual route table edits. The network administrator only needs to connect the spoke virtual networks to a Virtual WAN hub and doesn't need to manage traffic forwarding between regions.

Network diagram that shows a design with spokes connected via Virtual WAN.

Mixed patterns

Some scenarios require a hybrid approach that combines the two patterns. In this case, traffic between specific spokes goes over direct connections, but the rest of the spokes communicate through a central network appliance. For example, in a Virtual WAN environment, you can directly connect two specific spokes that have high-bandwidth and low-latency requirements.

Another scenario involves spoke virtual networks that are part of a single environment. For example, you might connect a spoke development virtual network directly to another spoke development virtual network, but development and production workloads communicate through the central appliance.

Network diagram that shows a two-region hub-and-spoke design. Some spokes are connected via virtual network peerings.

Another common pattern connects spokes in one region via direct virtual network peerings or Virtual Network Manager connected groups but sends inter-regional traffic across NVAs. This model reduces the number of virtual network peerings in the architecture. But compared to the first model (direct connectivity between spokes), cross-region traffic must cross more virtual network peering hops, which increases costs because each peering that's crossed is billed. This model also puts extra load on the hub NVAs because they front cross-regional traffic.

Network diagram that shows a two-region hub-and-spoke design. Spokes in a single region are connected via virtual network peerings.

The same designs apply to Virtual WAN. But direct connectivity between spoke virtual networks requires manual configuration between the virtual networks instead of through the Virtual WAN resource. Virtual Network Manager doesn't support architectures that use Virtual WAN. Consider the following diagram.

Network diagram that shows a Virtual WAN design with spokes connected via Virtual WAN and some virtual network peerings.

Note

For mixed approaches, direct connectivity via virtual network peering propagates system routes between the connected virtual networks. These system routes are usually more specific than custom routes configured via route tables, so under longest-prefix-match route selection, Azure prefers the virtual network peering path over the custom routes.

In less common scenarios, when a system route and a custom user-defined route have the same address prefix, the user-defined route overrides the system routes created by virtual network peering. This behavior causes spoke-to-spoke virtual network traffic to pass through the hub virtual network, even when a direct peering connection exists.

