In this fifth part of the series of posts on ARO networking we are going to create a second pair of routers, in order to expose applications both internally and externally. Other parts of this blog series include:
- Part 1: Intro and SDN Plugin
- Part 2: Internet and Intra-cluster Communication
- Part 3: Inter-Project and Vnet Communication
- Part 4: Private Link and DNS
- Part 5: Private and Public routers
Why is this important? When you create your Azure Red Hat OpenShift cluster you need to decide whether you want a private (private IP address) or a public (public IP address) router. A public router exposes your apps to the public Internet, while a private router exposes them only to your internal network.
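For reference, this decision is made at cluster creation time; a minimal sketch with the Azure CLI, where all resource names are placeholders you would replace with your own:

```
# Create an ARO cluster whose default router gets a private IP address
# (use --ingress-visibility Public for a public router instead).
# Resource group, cluster, vnet and subnet names are placeholders.
az aro create --resource-group myRG --name myAROCluster \
    --vnet myVnet --master-subnet masters --worker-subnet workers \
    --ingress-visibility Private
```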
But what if you want to use your cluster for both? We saw in previous posts how to segregate projects (aka namespaces) from each other at the network level, so you could have a combination of applications with different security postures in the same cluster.
You could obviously have dedicated clusters for internal and public applications, but besides being boring, that would be more expensive, since you would be paying for a set of 3 master node VMs (beefy D8v3 VMs) for each of the clusters.
So let's get to it: ingress controllers manage ingresses and routes in OpenShift, and you can see which ones you have using the ingress operator:
```
oc -n openshift-ingress-operator get ingresscontroller
NAMESPACE                    NAME      AGE
openshift-ingress-operator   default   5h41m
```
We only have the default ingress controller, created along with the cluster. We can see how it is configured; its spec is surprisingly simple:
```
oc -n openshift-ingress-operator get ingresscontroller/default -o json | jq '.spec'
{
  "defaultCertificate": {
    "name": "185a0dfe-ee40-4184-ae61-c838d0a7481b-ingress"
  },
  "replicas": 2
}
```
The replicas number refers to the number of pods actually serving requests, and the absence of any other configuration indicates that this is a public router. Let's check the services in the ingress namespace to be sure:
```
oc -n openshift-ingress get svc
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
router-default            LoadBalancer   172.30.68.100   40.127.249.191   80:32062/TCP,443:31367/TCP   7h13m
router-internal-default   ClusterIP      172.30.96.218   <none>           80/TCP,443/TCP,1936/TCP      7h13m
```
Public IP address, check. Now, how do we add a second ingress controller, this time with an internal IP address? The OpenShift documentation is pretty helpful there. We will send this YAML to the cluster:
```
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: internal
spec:
  domain: intapps.chyrswm9.northeurope.aroapp.io
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
  namespaceSelector:
    matchLabels:
      type: internal
```
Alright, there are a couple of things to unpack there: first, in the spec we have a new section telling OpenShift that this ingress controller is going to use an internal load balancer. Secondly, there is a namespace selector. As explained in the documentation above, you can use either a route selector (so that only routes with a certain label will use this ingress controller) or a namespace selector (all routes in matching namespaces will use this ingress controller).
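For illustration, the route selector variant would look something like this; a hedged sketch, where the label key and value are arbitrary examples:

```
# Hypothetical alternative: admit only routes labeled type=internal,
# regardless of which namespace they live in
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: internal
spec:
  domain: intapps.chyrswm9.northeurope.aroapp.io
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
  routeSelector:
    matchLabels:
      type: internal
```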
I have gone with the namespace configuration, so that project admins can create a new project, label it accordingly, and all applications there will be exposed either internally or externally.
By the way, Red Hat calls this sharding, which I find pretty confusing, since sharding typically means something else in other contexts, but let's just accept it.
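Creating the new ingress controller is then just a matter of sending the manifest to the cluster; for example (the file name is simply wherever you saved the YAML above):

```
# Create the internal ingress controller from the manifest above
# (the file name is a placeholder)
oc create -f internal-ingresscontroller.yaml
```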
Once the object is created, you will see a new ingress controller in town (I am using the flag -A, or --all-namespaces, because I am too lazy to type long namespace names):
```
oc get ingresscontroller -A
NAMESPACE                    NAME       AGE
openshift-ingress-operator   default    5h41m
openshift-ingress-operator   internal   96s
```
If we check the services, we can see that the new ingress controller has been created with a private IP address:
```
oc -n openshift-ingress get svc
NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
router-default             LoadBalancer   172.30.68.100    40.127.249.191   80:32062/TCP,443:31367/TCP   7h13m
router-internal            LoadBalancer   172.30.9.15      192.168.0.8      80:31295/TCP,443:30956/TCP   57m
router-internal-default    ClusterIP      172.30.96.218    <none>           80/TCP,443/TCP,1936/TCP      7h13m
router-internal-internal   ClusterIP      172.30.124.110   <none>           80/TCP,443/TCP,1936/TCP      57m
```
And lastly, even though we did not specify any number of replicas, two pods have been created for our new internal router:
```
oc -n openshift-ingress get pod
NAME                               READY   STATUS    RESTARTS   AGE
router-default-68d466d76d-8f7xp    1/1     Running   0          36m
router-default-68d466d76d-cljjs    1/1     Running   0          36m
router-internal-6dccb7f676-r2m8s   1/1     Running   0          47m
router-internal-6dccb7f676-xn4zl   1/1     Running   0          47m
```
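Should you ever need more (or fewer) router pods, you can set the replica count explicitly in the spec; a quick sketch using oc patch, where the number 3 is just an example:

```
# Set an explicit replica count on the internal ingress controller
oc -n openshift-ingress-operator patch ingresscontroller/internal \
    --type=merge -p '{"spec":{"replicas":3}}'
```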
Let’s now create a new project, label it as internal, and expose some app:
```
oc new-project internal
oc label namespace/internal type=internal
oc new-app --docker-image erjosito/sqlapi:0.1
oc expose svc sqlapi
```
And let’s have a look at our brand new route:
```
oc describe route/sqlapi
Name:                   sqlapi
Namespace:              internal
Created:                31 seconds ago
Labels:                 app=sqlapi
Annotations:            openshift.io/host.generated=true
Requested Host:         sqlapi-internal.apps.chyrswm9.northeurope.aroapp.io
                          exposed on router default (host apps.chyrswm9.northeurope.aroapp.io) 31 seconds ago
                          exposed on router internal (host intapps.chyrswm9.northeurope.aroapp.io) 30 seconds ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        sqlapi
Weight:         100 (100%)
Endpoints:      10.131.0.16:8080
```
Oops, the route has been exposed on both ingress controllers! That is because the default ingress controller does not have any namespaceSelector or routeSelector, hence it admits routes from all namespaces. You can change it with "oc -n openshift-ingress-operator edit ingresscontroller/default" so that it looks like this:
```
oc -n openshift-ingress-operator get ingresscontroller/default -o json | jq '.spec'
{
  "defaultCertificate": {
    "name": "185a0dfe-ee40-4184-ae61-c838d0a7481b-ingress"
  },
  "namespaceSelector": {
    "matchLabels": {
      "type": "external"
    }
  },
  "replicas": 2
}
```
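If you prefer not to go through an interactive editor, the same change could be applied with a one-liner; a sketch with oc patch:

```
# Add a namespaceSelector to the default ingress controller, so that it
# only admits routes from namespaces labeled type=external
oc -n openshift-ingress-operator patch ingresscontroller/default \
    --type=merge -p '{"spec":{"namespaceSelector":{"matchLabels":{"type":"external"}}}}'
```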
If we now delete the route and create it again, it will only be created in the internal ingress controller:
```
oc delete route/sqlapi
oc expose svc sqlapi
oc describe route/sqlapi
Name:                   sqlapi
Namespace:              internal
Created:                Less than a second ago
Labels:                 app=sqlapi
Annotations:            openshift.io/host.generated=true
Requested Host:         sqlapi-internal.apps.chyrswm9.northeurope.aroapp.io
                          exposed on router internal (host intapps.chyrswm9.northeurope.aroapp.io) less than a second ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        sqlapi
Weight:         100 (100%)
Endpoints:      10.131.0.16:8080
```
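At this point you could verify reachability from a VM inside the Vnet; a minimal sketch, assuming the internal load balancer IP from the service output above (the --resolve trick saves us from having to configure internal DNS for this test):

```
# From a VM in the Vnet: send the request to the internal LB IP (192.168.0.8),
# presenting the route's host name so the router picks the right backend.
# What the app answers on "/" depends on the app itself.
curl --resolve sqlapi-internal.apps.chyrswm9.northeurope.aroapp.io:80:192.168.0.8 \
    http://sqlapi-internal.apps.chyrswm9.northeurope.aroapp.io/
```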
But what about our initial project1, which was supposed to be exposed to the public Internet? If we create a new route there, we will see that it is not admitted by any router:
```
oc project project1
oc expose svc sqlapi
oc describe route/sqlapi
Name:                   sqlapi
Namespace:              project1
Created:                Less than a second ago
Labels:                 app=sqlapi
Annotations:            openshift.io/host.generated=true
Requested Host:         sqlapi-project1.apps.chyrswm9.northeurope.aroapp.io
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        sqlapi
Weight:         100 (100%)
Endpoints:      10.128.2.22:8080
```
Note that in the previous output there is no "exposed on router xyz" line. The reason is that we still need to label the namespace as external:
```
oc label ns/project1 type=external
oc delete route/sqlapi
oc expose svc sqlapi
oc describe route/sqlapi
Name:                   sqlapi
Namespace:              project1
Created:                Less than a second ago
Labels:                 app=sqlapi
Annotations:            openshift.io/host.generated=true
Requested Host:         sqlapi-project1.apps.chyrswm9.northeurope.aroapp.io
                          exposed on router default (host apps.chyrswm9.northeurope.aroapp.io) less than a second ago
Path:                   <none>
TLS Termination:        <none>
Insecure Policy:        <none>
Endpoint Port:          8080-tcp

Service:        sqlapi
Weight:         100 (100%)
Endpoints:      10.128.2.22:8080
```
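And since this route now goes through the public router, you should be able to reach it from anywhere; a quick sanity check (again, what the app answers on the root path depends on the app itself):

```
# From any machine on the Internet: the requested host resolves to the
# public IP of the default router (40.127.249.191 in our example)
curl http://sqlapi-project1.apps.chyrswm9.northeurope.aroapp.io/
```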
Bingo, there you have it! Now you can create namespaces whose applications are exposed either internally or publicly. You might want to read Part 3: Inter-Project and Vnet Communication again, since configuring network isolation between public and internal projects is especially critical.
Thanks for reading!