Ingress Termination

Since the Shivering-Isles Infrastructure is local-first, it faces the challenge of optimising traffic flow for local devices without breaking external access.

TCP Forwarding

An intentional design decision was to avoid split DNS. Since all DNS is hosted on Cloudflare with full DNSSEC integration, and devices with active DoT always connect to external DNS servers, split DNS would have been a poor implementation.

At the same time, a simple rerouting of all traffic to the external IP would also be problematic, as it would require either a dedicated IP address or complex source-based routing to only route traffic for client networks while allowing VPN traffic to continue to flow to the VPS.

The most elegant solution found was to reroute traffic at the TCP level: high-volume traffic on port 443 is redirected by a firewall rule, while the remote IP stays identical and VPN and SSH traffic remain untouched.

A request for the same website looks like this:

Image of the traffic flow for external and internal users. For internal users, the traffic is redirected directly on the Unifi Dream Machine to the Kubernetes cluster. External users reach the VPS first; their traffic is forwarded over VPN to the Unifi Dream Machine and from there to the Kubernetes cluster.

In both cases the connections are terminated on the Kubernetes cluster. The external user reaches the VPS and is then rerouted over VPN. The local user is rerouted before the connection reaches the internet, keeping all traffic local.

Since only TCP connections are forwarded at any point, all TLS termination takes place on the Kubernetes cluster regardless of the path taken.

Preserving source IP addresses

On the VPS, the TCP connection is handled by an HAProxy instance that speaks proxy-protocol with the Kubernetes ingress service.
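A minimal sketch of what that HAProxy configuration might look like; the frontend/backend names and the in-cluster ingress address reached over the VPN are assumptions, not the actual values:

```haproxy
frontend https_in
    bind :443
    mode tcp
    default_backend k8s_ingress

backend k8s_ingress
    mode tcp
    # send-proxy-v2 prepends a proxy-protocol v2 header carrying the
    # original client address, so the ingress sees the real source IP
    server ingress 10.0.0.10:443 send-proxy-v2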

On the Unifi Dream Machine it's a simple iptables rule that redirects the traffic. In order to also use proxy-protocol with the ingress service, the traffic is actually redirected to an HAProxy running in the Kubernetes cluster alongside ingress-nginx. This is mainly due to a limitation in ingress-nginx, which doesn't allow mixing proxy-protocol and non-proxy-protocol ports without custom configuration templates.
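A hypothetical sketch of such a redirect rule; the LAN interface, external IP, and in-cluster HAProxy address are placeholder assumptions:

```shell
# Redirect LAN traffic destined for the external IP on port 443
# straight to the in-cluster HAProxy, keeping it off the internet.
iptables -t nat -A PREROUTING -i br0 -p tcp -d 203.0.113.10 --dport 443 \
  -j DNAT --to-destination 192.168.1.20:443
```

Because DNAT rewrites only the destination address, the client's source IP is preserved on the forwarded connection, which is what allows the in-cluster HAProxy to put the real address into the proxy-protocol header.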

Image of the flow of traffic for internal and external users within the cluster. For internal users, the traffic without proxy-protocol hits the haproxy-proxy-protocol Service in the Kubernetes cluster, which forwards it to the haproxy Pod. That Pod then sends the traffic, now with proxy-protocol, to the ingress-nginx-controller Service, which forwards it to the ingress-nginx-controller Pod. For external users, the traffic is routed directly to the ingress-nginx-controller Service, since it already carries proxy-protocol, and is then likewise forwarded to the ingress-nginx-controller Pod.
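The underlying limitation is that ingress-nginx enables proxy-protocol globally via its ConfigMap, for example (namespace and name are assumptions matching a default installation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Applies to all listeners at once; there is no per-port toggle,
  # hence the extra HAProxy hop for non-proxy-protocol traffic.
  use-proxy-protocol: "true"
```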