Efficiently Handling Multiple Traefik Entry Points in Kubernetes

A solution for splitting traffic on different cluster external IPs using dedicated Kubernetes Services

Raúl Garcia Sanchez
5 min read · Jan 21, 2024

Motivation

Hey there! I recently found myself in a scenario where company policy dictated that applications be exposed via different interfaces for internal and internet-facing traffic. As a result, the initial plan to route all externally exposed cluster applications through a single Kubernetes Service linked to a reverse proxy had to be abandoned. In this article, I’ll walk you through how I tackled this challenge.


Overview

It’s rare to find a Kubernetes cluster running just a single application. Instead, you typically have a multitude of applications, each serving a unique purpose. Some of these applications need to be exposed to clients beyond the cluster’s boundaries. This isn’t a big deal if you expose them using dedicated Kubernetes Services. A common solution nowadays, however, is to use a reverse proxy like Nginx or Traefik, in which case you normally expose applications via the reverse proxy’s Kubernetes Service. In my scenario, though, that approach wouldn’t have complied with the policy.

Basics

If you are already experienced in using Kubernetes Services and Traefik, feel free to skip this part.

Kubernetes Service

In Kubernetes, a service is used to expose an application running on a set of pods to the network. It provides a consistent way to access and discover these applications within the cluster.

When talking about clusters configured in island mode (where pod IPs aren’t routable from outside the cluster), you normally use a Service of type NodePort or LoadBalancer to expose applications to clients beyond the cluster boundaries.

The NodePort type Service exposes the application on each node’s IP at a static port, which is the same on every node. This allows external access to the Service by targeting any node’s IP address on the given node port. This type is commonly used when you need to make a Service accessible from outside the cluster and can’t use a LoadBalancer type Service.
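As a minimal sketch, a NodePort Service for a hypothetical application (all names are placeholders) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical application name
spec:
  type: NodePort
  selector:
    app: my-app         # must match the pod labels of the application
  ports:
    - port: 80          # port of the Service inside the cluster
      targetPort: 8080  # container port of the application
      nodePort: 30080   # static port opened on every node (30000-32767 by default)
```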

The LoadBalancer type Service isn’t very different from the NodePort Service. The key distinction is that it automatically provisions an external load balancer, either in a cloud environment or via an internal appliance (if supported). Depending on the specified parameters, it assigns an internal or internet-reachable IP address to the Service. This assigned IP address is then displayed as the external IP of the Service in Kubernetes and can be used to access the application.
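A minimal LoadBalancer sketch, again with placeholder names; the external IP is assigned by the cloud provider or load-balancer implementation, not specified in the manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer    # triggers provisioning of an external load balancer
  selector:
    app: my-app
  ports:
    - port: 443
      targetPort: 8443
```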

Traefik

Traefik is a very popular open-source solution for handling traffic in Kubernetes. It integrates seamlessly as a reverse proxy and, from there, is capable of discovering services, automatically configuring routes, and managing traffic across various protocols.

The architecture of Traefik involves multiple responsibilities. The following graphic provides a good overview.

Traefik architecture at a glance

Following the path of a request: an incoming request is received by a so-called entryPoint, which basically represents a port on which the Traefik container listens. The entryPoint is reachable from outside the cluster because it is mapped to a Kubernetes Service via selector labels. Once received on the entryPoint, the request is forwarded to a Router. The Router establishes rules in the form of matchers, defining what to listen for and determining the Kubernetes Service to which the request should be forwarded.

Below, you’ll find a lightweight example of a Traefik IngressRoute for a web service that exposes itself on the entryPoints web and websecure by matching incoming requests for kibana.company.net. It forwards them to the Kubernetes Service kibana-es-http on port 5601.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: kibana
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`kibana.company.net`)
      kind: Rule
      services:
        - name: kibana-es-http
          port: 5601

As we can see in the example above, we use two entryPoints: one listening on port 80 (web) and the other on port 443 (websecure). These are the default entryPoints you get if you don’t modify the default values provided by the Traefik Helm chart.
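For reference, the default entryPoint definitions in the chart’s values look roughly like this (slightly simplified; consult your chart version’s values.yaml for the authoritative defaults):

```yaml
ports:
  web:
    port: 8000        # container port Traefik listens on
    expose: true      # expose this entryPoint on the default Service
    exposedPort: 80   # port published on the Kubernetes Service
    protocol: TCP
  websecure:
    port: 8443
    expose: true
    exposedPort: 443
    protocol: TCP
```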

Installation

For the installation of Traefik, we are going to use the official Helm chart, which can be found here: https://github.com/traefik/traefik-helm-chart. As our intention is to create two different services, each with its own external IP address and entryPoints, we have to modify the default values. Below, you’ll find the modified version, which you can save as ‘values.yaml’ on your machine.

ports:
  web:
    expose: true
    exposedPort: 80
    port: 8000
    protocol: TCP
    redirectTo:
      port: websecure
  websecure:
    expose: true
    exposedPort: 443
    middlewares:
      - traefik-default@kubernetescrd
    port: 8443
    protocol: TCP
  web-ext:
    expose: false
    exposedPort: 7080
    port: 7080
    protocol: TCP
    redirectTo:
      port: websecure-ext
  websecure-ext:
    expose: false
    exposedPort: 7443
    middlewares:
      - traefik-default@kubernetescrd
    port: 7443
This config adds two additional entryPoints, which the Traefik containers expose on ports 7080 and 7443. Next, we need to create a Kubernetes Service that maps to these ports, since expose: false prevents them from being exposed on the default Service created by the Helm chart.

Below, you’ll find the Kubernetes Service YAML for the internet-facing traffic.

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: traefik
  name: traefik-ext
  namespace: traefik
spec:
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ports:
    - name: web-ext
      port: 80
      targetPort: web-ext
    - name: websecure-ext
      port: 443
      targetPort: websecure-ext
  selector:
    app.kubernetes.io/instance: traefik-traefik
    app.kubernetes.io/name: traefik
  type: LoadBalancer
Now that we have everything together, let’s deploy Traefik to the cluster using the following command:

helm repo add traefik https://traefik.github.io/charts
helm repo update

helm install traefik traefik/traefik \
  --create-namespace \
  --namespace traefik \
  -f values.yaml \
  --version 26.0.0

Next, apply the YAML for the traefik-ext Kubernetes Service. If everything has worked accordingly, you should see the Traefik pod running alongside two Kubernetes Services, each exposing its own external IP address and mapping to different entryPoints.
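To verify the result, you can inspect the traefik namespace with kubectl (the exact output depends on your environment and load-balancer implementation):

```shell
# The Traefik pod should be in the Running state
kubectl get pods -n traefik

# Both Services should each show their own EXTERNAL-IP
kubectl get svc -n traefik
```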

Outcome

First of all, you can easily distinguish between application traffic originating from internal sources and traffic arriving from the internet. This becomes particularly evident when going through the logs.

However, there are even more benefits. Consider having multiple IngressRoute definitions on the same entryPoint. Without a Layer 7 firewall, filtering who is allowed to connect to each of them becomes quite complicated, as they all reside on exactly the same external IP address, even if you haven’t propagated all exposed hostnames to the external DNS zone. Someone could use their hosts file to override the IP address behind the hostname of an application that has not been externally propagated and still access it.

By using multiple entryPoints spread across different services, you can precisely define which application is exposed on which external IP address. This makes it easier to filter requests on firewalls sitting outside the cluster. You could even take it a step further and create dedicated services with their own entryPoints for each application. It all depends on how you want to implement it.
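For example, an application meant exclusively for internet-facing traffic could bind only to the external entryPoints defined earlier (hostname and service name are placeholders):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: public-app
spec:
  entryPoints:
    - websecure-ext   # only reachable via the traefik-ext Service's external IP
  routes:
    - match: Host(`app.company.net`)
      kind: Rule
      services:
        - name: public-app
          port: 8080
```

An internal-only application would do the same with the web/websecure entryPoints, keeping the two traffic classes on separate external IPs.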

I hope this article has provided you with a comprehensive overview of how powerful Traefik entryPoints can be and, especially, how easy their configuration is.
