A day in the life of a Packet in Azure Red Hat OpenShift (part 4)

In this part of the blog series we will have a look at how Azure Red Hat OpenShift works with Azure Private Link, as well as how DNS resolution works, including DNS forwarding to resolve on-premises private zones. You can find the other parts of the blog series here:

If you are wondering what Azure Private Link is, you have not been following the Azure networking world closely enough for the past few months, since it is a long-awaited feature that has made many dreams come true. In a few words, it allows you to access Azure PaaS services such as Azure Storage or Azure SQL Database using a private IP address.

Since our API pod is able to query databases, we will test Private Link using an Azure SQL database. You know the drill, let’s create one of those via the Azure CLI:

az sql server create -n $sql_server_name -g $rg -l $location --admin-user $sql_username --admin-password $sql_password
sql_server_fqdn=$(az sql server show -n $sql_server_name -g $rg -o tsv --query fullyQualifiedDomainName)
az sql db create -n $sql_db_name -s $sql_server_name -g $rg -e Basic -c 5 --no-wait

The previous commands create an Azure SQL Database with a public endpoint. What we can do now is expose this database on a private IP address in a new subnet of our ARO vnet:

az network vnet subnet create -n $sql_subnet_name --vnet-name $vnet_name -g $rg --address-prefixes $sql_subnet_prefix
sql_server_id=$(az sql server show -n $sql_server_name -g $rg -o tsv --query id)
az network vnet subnet update -n $sql_subnet_name -g $rg --vnet-name $vnet_name --disable-private-endpoint-network-policies true
az network private-endpoint create -n $sql_endpoint_name -g $rg --vnet-name $vnet_name --subnet $sql_subnet_name --private-connection-resource-id $sql_server_id --group-ids sqlServer --connection-name sqlConnection
sql_nic_id=$(az network private-endpoint show -n $sql_endpoint_name -g $rg --query 'networkInterfaces[0].id' -o tsv)
sql_endpoint_ip=$(az network nic show --ids $sql_nic_id --query 'ipConfigurations[0].privateIpAddress' -o tsv)
echo "The SQL Server is reachable over the private IP address ${sql_endpoint_ip}"

Perfect, our database has a private IP address. You might think the work is done here? Not really, let me explain: certain services in Azure cannot be accessed via IP address, because multiple services (like databases) may share the same IP address, and they are distinguished from each other using the FQDN in the payload. Additionally, using the IP address might break TLS for services that use digital certificates to encrypt communication.

OK, so we need to use the FQDN. Your first approach might be to overwrite the FQDN of our SQL server (“sqlserver2519.database.windows.net” in this example), and that would certainly work, but Private Link offers a nice little feature: when you configure Private Link, a new FQDN is introduced in the DNS resolution chain: sqlserver2519.privatelink.database.windows.net.
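To make the relationship between the two names concrete: the privatelink FQDN is simply the public FQDN with a “privatelink” label spliced in after the server name. A small sketch deriving one from the other (the server name is the example one from this post):

```shell
# Derive the privatelink FQDN from the public one by inserting the
# "privatelink" label after the server name (example server name):
sql_server_fqdn="sqlserver2519.database.windows.net"
private_fqdn="${sql_server_fqdn%%.*}.privatelink.${sql_server_fqdn#*.}"
echo "$private_fqdn"
# sqlserver2519.privatelink.database.windows.net
```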

This way you can change the zone privatelink.database.windows.net without breaking connectivity to other databases that are not using Private Link. The ARO pods use OpenShift DNS, hosted in a set of pods. These pods resolve cluster-internal FQDNs for service discovery, as we have seen in previous posts, and forward anything else to the node. The node itself happens to be a virtual machine in Azure that uses Azure DNS resolution, and the way to add custom DNS resolution to Azure DNS is with Azure DNS Private Zones. Let’s do this:

az network private-dns zone create -n $dns_zone_name -g $rg 
az network private-dns link vnet create -g $rg -z $dns_zone_name -n myDnsLink --virtual-network $vnet_name --registration-enabled false
az network private-dns record-set a create -n $sql_server_name -z $dns_zone_name -g $rg
az network private-dns record-set a add-record --record-set-name $sql_server_name -z $dns_zone_name -g $rg -a $sql_endpoint_ip

The previous commands create a private DNS zone in Azure for “privatelink.database.windows.net”, link it to the ARO virtual network, and configure an A record pointing to the private IP address of our SQL Server. Now DNS resolution should work, let’s check it out:
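If you want to double-check what landed in the zone, the A record lives at the name <server name>.<zone name>. A quick sketch assembling that name from example values, with the live lookup commented out since it needs a subscription:

```shell
# Assemble the FQDN of the A record we created: <server-name>.<zone-name>
# (example values; in the walkthrough these come from $sql_server_name
# and $dns_zone_name)
sql_server_name="sqlserver2519"
dns_zone_name="privatelink.database.windows.net"
record_fqdn="${sql_server_name}.${dns_zone_name}"
echo "$record_fqdn"
# With a live subscription you could confirm the record's IP address:
# az network private-dns record-set a show -n $sql_server_name -z $dns_zone_name -g $rg --query "aRecords[0].ipv4Address" -o tsv
```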

curl "http://sqlapilb-project1.apps.m50kgrxk.northeurope.aroapp.io/api/dns?fqdn=sqlserver2519.privatelink.database.windows.net"
{
  "fqdn": "sqlserver2519.privatelink.database.windows.net",
  "ip": ""
}

Fantastic! Now we should have access to our database. To verify this, let’s check the sqlsrcip endpoint of our API, which tells us which source IP address the SQL server sees us coming from:

curl "http://sqlapilb-project1.apps.m50kgrxk.northeurope.aroapp.io/api/sqlsrcip?SQL_SERVER_FQDN=sqlserver2519.privatelink.database.windows.net&SQL_SERVER_USERNAME=azure"
{
  "sql_output": ""
}

In the previous example we supplied a custom FQDN and username for the new SQL Server, since we had configured the default to point to the SQL Server running as an OpenShift pod inside the ARO cluster. The password was not required because I deployed the Azure SQL Server with the same password as the SQL Server pod.

Let’s continue with the DNS topic, since it is one you will encounter sooner rather than later. What if your pods need to access some system whose name is neither publicly resolvable (which is covered by the ARO virtual network’s default DNS) nor registered in an Azure virtual network (which is covered by Azure Private DNS), such as a system located on-premises?

Typically you would have a separate, dedicated DNS zone for your on-premises network, something like “onprem.contoso.com”. How do you tell the ARO worker nodes to send certain requests to a custom DNS server? Your first idea might be configuring a custom DNS server in the Azure virtual network that knows how to resolve the on-premises FQDNs, but this is a no-go: Azure Red Hat OpenShift cannot be deployed in a virtual network with custom DNS servers configured.

What we will do instead is create a DNS server on the test virtual machine that we installed in a previous part of this blog series, and rather than defining it as a custom DNS server for the vnet, we will instruct OpenShift to use it for specific zones. Let’s get to it. First, let’s configure a DNS server on our VM, for example dnsmasq:

ssh $vm_pip_ip "sudo apt update && sudo apt -y install dnsmasq"
ssh $vm_pip_ip "sudo sed -i \"\$ a 1.2.3.4 myserver.onprem.contoso.com\" /etc/hosts"
ssh $vm_pip_ip "cat /etc/hosts"

The previous commands install dnsmasq on our Ubuntu VM and append an entry (an IP address plus a name) to the “/etc/hosts” file, so that our DNS server will resolve the example FQDN myserver.onprem.contoso.com.
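As a reminder of why the sed line above looks the way it does: dnsmasq answers for anything in /etc/hosts, and hosts entries are “IP address, then name”. A sketch of the line we want on the VM (the IP address is a made-up example), plus a commented-out check that needs the actual VM:

```shell
# /etc/hosts entries are "<IP> <name>"; the IP here is a made-up example
hosts_line="1.2.3.4 myserver.onprem.contoso.com"
echo "$hosts_line"
# On the VM, dnsmasq should then answer for that name (requires the VM):
# ssh $vm_pip_ip "nslookup myserver.onprem.contoso.com 127.0.0.1"
```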

Now we need to instruct OpenShift to use this DNS server for the contoso.com zone. Welcome to the DNS Operator! We will modify its configuration accordingly:

oc edit dns.operator/default
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  finalizers:
  - dns.operator.openshift.io/dns-controller
  generation: 2
  name: default
spec:
  servers:
  - forwardPlugin:
      upstreams:
      - <private IP of the dnsmasq VM>
    name: testvm
    zones:
    - contoso.com

With that you have instructed the cluster to forward any DNS query for contoso.com (and its subdomains) to the IP address of our VM running dnsmasq, which answers with the sample A record from its /etc/hosts entry. Let’s verify that pods can now resolve our sample FQDN:
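If you are curious what the operator does with that configuration, it renders it into the Corefile consumed by the cluster’s CoreDNS pods. The inspection command is commented out since it needs cluster access, and the stanza below is only a rough sketch of what to expect:

```shell
# Inspect the Corefile the DNS operator generates (needs cluster access):
# oc get configmap/dns-default -n openshift-dns -o yaml
# The forwarding stanza it renders should look roughly like this sketch:
corefile_sketch=$(cat <<'EOF'
contoso.com:5353 {
    forward . <private IP of the dnsmasq VM>
}
EOF
)
echo "$corefile_sketch"
```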

curl "http://sqlapilb-project1.apps.m50kgrxk.northeurope.aroapp.io/api/dns?fqdn=myserver.onprem.contoso.com"
{
  "fqdn": "myserver.onprem.contoso.com",
  "ip": ""
}

Now, this is just an example; in a real architecture that VM would probably forward DNS queries to authoritative DNS servers located on-premises, but the mechanism to integrate that with OpenShift is exactly the same as we have seen in this post.

That brings me to the end of this post, thanks for reading and see you in the next part!
