You probably know Azure Virtual WAN, an Azure technology that abstracts hybrid networking by providing Microsoft-managed Virtual Hubs that use the Microsoft backbone to talk to each other. And you might know as well that those hubs can become Secured Virtual Hubs, including firewalling functionality powered by Azure Firewall.
Virtual WAN secured hubs are great, because they offer an easy-to-use abstraction for a network design that would otherwise take multiple steps to implement. My favorite benefit of this setup is that the Virtual Networks that you connect to a secured virtual hub do not need any User-Defined Routes, because Virtual WAN will inject any required routes for you. That is great from a management perspective, especially if the spoke vnet administrators are not network specialists and do not want to be bothered with boring stuff such as configuring routes and route tables.
However, life is always more difficult than it looks: today Virtual WAN does not support securing cross-region traffic. Is there anything we can do about it? Enter this post!
One possible workaround is creating one Virtual WAN per region, and interconnecting them via VPN (see my post Connect two VWANs to each other via VPN). However there are some drawbacks to this, such as the performance limits of traffic encryption as compared to native interconnection over the Microsoft backbone, as well as the lack of support for BGP (see the post for further details).
There is another possibility though. The current limitation of Virtual WAN is that traffic traversing two virtual hubs would be routed asymmetrically, meaning that it would not traverse the same firewall in each direction, and it would consequently be dropped. We need to force Virtual WAN to bypass the firewalls for that cross-hub traffic. Yes, that traffic will not go through any firewall, but at least it is not dropped, and you can still secure it by other means (like Network Security Groups).
In my test I have this topology:
So essentially we want to achieve these flows:
- Intra-hub flows all go through the firewall in each hub. For example, for hub1: spoke11-spoke12, spoke11-branch1, spoke12-branch1
- Internet traffic from all vnets and branches goes through the closest hub, where that vnet or branch is attached
- Interhub flows between vnets will be routed directly, without traversing any firewall
- Interhub flows branch-to-vnet or vnet-to-branch will be firewalled
Achieving the first two points is straightforward: you could use the standard configuration for secured virtual hubs:
- All vnet/branch connections associated to the default route table of their hub
- All vnet/branch connections propagating to the none route table
- A 0.0.0.0/0 route in the default route table of each hub pointing to the local firewall in that hub.
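A sketch of this baseline in Azure CLI could look as follows (resource group, hub, spoke, and firewall names are placeholders, not necessarily the ones from my lab; see my script linked at the end of this post for the full version):

```shell
# Placeholder names -- adjust to your environment
rg=vwan-test
hub1_default=$(az network vhub route-table show --vhub-name hub1 -g $rg \
    -n defaultRouteTable --query id -o tsv)
hub1_none=$(az network vhub route-table show --vhub-name hub1 -g $rg \
    -n noneRouteTable --query id -o tsv)
fw1_id=$(az network firewall show -n AzFW1 -g $rg --query id -o tsv)

# Associate the spoke connection to the default route table, propagate to none
az network vhub connection create -n spoke11 -g $rg --vhub-name hub1 \
    --remote-vnet spoke11 \
    --associated-route-table $hub1_default \
    --propagated-route-tables $hub1_none

# Send all traffic (0.0.0.0/0) in the default route table to the local firewall
az network vhub route-table route add -g $rg --vhub-name hub1 -n defaultRouteTable \
    --destination-type CIDR --destinations "0.0.0.0/0" \
    --next-hop-type ResourceId --next-hop $fw1_id
```

The same pattern repeats for the other spokes and for hub2 with AzFW2.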
However, this results in the asymmetric routing situation we mentioned earlier. Consider the flow from spoke11 to spoke21:
- Traffic from spoke11 to spoke21 is sent through the Azure Firewall in hub1 (let’s call it AzFW1). From there, it goes straight to spoke21
- Traffic back from spoke21 to spoke11 will traverse the Azure Firewall in hub2 (AzFW2), and then straight back to spoke11. Asymmetric routing is served.
How can we force hub1 to send spoke11-to-spoke21 traffic directly to hub2, instead of using the default route pointing to the firewall? By injecting the more specific routes for the vnets and branches of hub2 into hub1. We can do this with this configuration:
- All vnet/branch connections still associated to the default route table of their hub
- All vnet/branch connections propagating to the default route table of the other hub. For example, we would inject the routes for vnets and branches of hub2 into the default route table of hub1, so that hub1 prefers those more specific routes to the 0.0.0.0/0 pointing to the firewall
- Our 0.0.0.0/0 route is still in the default route table of each hub pointing to the local firewall in that hub, but it is less specific than the routes injected via vnet/branch propagation, so it only matches traffic those routes do not cover
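The cross-hub propagation piece could be sketched like this in Azure CLI (again, resource group, hub, and spoke names are placeholders; check the full script linked at the end of the post):

```shell
# Placeholder names -- adjust to your environment
rg=vwan-test
hub1_default=$(az network vhub route-table show --vhub-name hub1 -g $rg \
    -n defaultRouteTable --query id -o tsv)
hub2_default=$(az network vhub route-table show --vhub-name hub2 -g $rg \
    -n defaultRouteTable --query id -o tsv)

# spoke11 stays associated to hub1's default route table, but propagates
# its prefix to hub2's default route table: hub2 then prefers this more
# specific route over its 0.0.0.0/0 route pointing to AzFW2
az network vhub connection create -n spoke11 -g $rg --vhub-name hub1 \
    --remote-vnet spoke11 \
    --associated-route-table $hub1_default \
    --propagated-route-tables $hub2_default
```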
We are almost there. There is one more change we need to introduce: we will use an extra route table for the vnets, so that we can separate routes for vnets from routes for branches. The goal here is more granular control over which routes apply to vnets and which ones apply to branches. So we end up with this configuration:
As you can see in the diagram above, there are some unusual configurations:
- Vnet-Hub-Hub-Vnet traffic is not going through any firewall in this design. Please notice that this fact might make this design inappropriate for your requirements.
- Vnet-Hub-Hub-Branch traffic is not firewalled either. A variation of this design would be having branches connected to both hubs, thus eliminating those vnet-hub-hub-branch and branch-hub-hub-vnet flows.
- Vnets and branches propagate to the route tables in the other hub, but not to those of the hub where they are connected
- This configuration is easy to do in the portal for vnet connections, but for branches you will need Azure CLI or PowerShell. You can find an example script for the whole configuration in my GitHub repo here: https://github.com/erjosito/azcli/blob/master/vwan_2xshub.azcli
- We are using labels for propagation: each route table is labeled with the hub it is located in, which makes it easy to configure connections to propagate to any remote hub
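To illustrate the labels and the branch configuration, here is a rough sketch (names such as hub1VnetRT, hubvpn1, and branch1 are placeholders I am inventing for this example; branch connections take the same routing flags as vnet connections, just on the VPN gateway connection command):

```shell
# Placeholder names -- adjust to your environment
rg=vwan-test

# Label each hub's custom vnet route table with the hub it lives in
az network vhub route-table create -n hub1VnetRT -g $rg --vhub-name hub1 --labels hub1
az network vhub route-table create -n hub2VnetRT -g $rg --vhub-name hub2 --labels hub2

# Branch connections need the CLI: associate branch1 (on hub1's VPN gateway)
# to hub1's default route table, and propagate its routes to every route
# table labeled "hub2" in a single flag
hub1_default=$(az network vhub route-table show --vhub-name hub1 -g $rg \
    -n defaultRouteTable --query id -o tsv)
az network vpn-gateway connection create -n branch1 -g $rg \
    --gateway-name hubvpn1 --vpn-site branch1 \
    --associated-route-table $hub1_default \
    --labels hub2
```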
As mentioned above, have a look at my CLI script to implement this if you are curious about the exact commands. Enjoy!