Automatic A and AAAA DNS entries with NAT64 for kubernetes?

Posted on 2021-06-24 by ungleich

The DNS kubernetes quiz

Today our blog entry does not (yet) show a solution, but more a tricky quiz on creating DNS entries. The problem to solve is the following:

  • How to make every IPv6-only service in kubernetes also reachable over IPv4?

Let's see who can solve it first or the prettiest. Below are some thoughts on how to approach this problem.

The situation

Assume your kubernetes cluster is IPv6-only and all services have proper AAAA DNS entries. This allows you to directly receive traffic from the Internet to your kubernetes services.

Now, to make that service also reachable over IPv4, we can deploy a NAT64 service that maps an IPv4 address outside the cluster to an IPv6 service address inside the cluster:

A.B.C.D --> 2001:db8::1

So all traffic to that IPv4 address is converted to IPv6 by the external NAT64 translator.
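With a hypothetical service name and the documentation ranges as placeholders (192.0.2.10 standing in for A.B.C.D), the resulting pair of DNS entries would look like:

```
myservice.example.com. IN AAAA 2001:db8::1   ; the service itself
myservice.example.com. IN A    192.0.2.10    ; the external NAT64 translator
```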

The proxy service

Let's say the service running on 2001:db8::1 is named "ipv4-proxy" and is thus reachable under the corresponding service DNS name.

What we want to achieve is to expose every possible service inside the cluster also via IPv4. For this purpose we have created an haproxy container that accepts requests for any hostname (a wildcard) and forwards them via IPv6.

So the actual flow would look like:

IPv4 client --[ipv4]--> NAT64 -[ipv6]-> proxy service
IPv6 client ---------------------> kubernetes service
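A rough sketch of such a proxy, assuming haproxy 2.0+ and its do-resolve action; the resolver address is a placeholder, and this is an illustration of the idea for plain HTTP, not the actual ipv4-proxy configuration (a TLS passthrough variant would need SNI-based matching instead):

```
# resolve the Host header to an AAAA record inside the cluster
# and forward the connection over IPv6
resolvers clusterdns
    nameserver kubedns [2001:db8::53]:53

frontend ipv4_in
    bind :80
    default_backend dynamic_ipv6

backend dynamic_ipv6
    # resolve the requested hostname to an IPv6 address at request time
    http-request do-resolve(txn.dstip,clusterdns,ipv6) hdr(host)
    http-request set-dst var(txn.dstip)
    # dummy server entry; the real destination comes from set-dst above
    server clear 0.0.0.0:0
```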

The DNS dilemma

It would be very tempting to create a wildcard DNS entry, or to configure/patch CoreDNS to also include an A entry for every service, such as:

*.svc IN A A.B.C.D

So essentially all services resolve to the IPv4 address A.B.C.D. That however would also influence the kubernetes cluster, as pods potentially resolve A entries (not only AAAA) as well.

As the containers / pods do not have any IPv4 address (nor IPv4 routing), access to IPv4 is not possible. There are various outcomes of this situation:

  1. The software in the container does happy eyeballs, tries both A and AAAA, and uses the working IPv6 connection.

  2. The software in the container misbehaves, takes the first record and tries IPv4 (nodejs is known to have, or have had, a broken resolver that did exactly that).

So adding that wildcard might not be the smartest option. Additionally, it is unclear whether CoreDNS would support that.
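For what it's worth, CoreDNS's template plugin could in principle synthesize such an answer; a minimal sketch, with the zone and the IPv4 address as placeholders (and note this sketch would run into exactly the happy-eyeballs problem described above — in CoreDNS's fixed plugin ordering, template runs before the kubernetes plugin, so it would shadow A lookups while AAAA falls through):

```
cluster.local {
    # answer every A query under svc.cluster.local with the static IPv4
    template IN A svc.cluster.local {
        answer "{{ .Name }} 60 IN A 192.0.2.10"
    }
    kubernetes cluster.local
}
```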

Alternative automatic DNS entries

The .svc names in a kubernetes cluster are special in the sense that they are used for connecting internally. What if CoreDNS (or any other DNS server) would, instead of using .svc, use a second subdomain and generate the same AAAA record as for the service, plus a static A record as described above?

That could solve the problem. But again, does CoreDNS support that?
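One hedged, untested sketch of that idea combines CoreDNS's rewrite and template plugins under a hypothetical subdomain v4.example.com: AAAA queries get mapped back onto the internal .svc name, while A queries receive the static NAT64 address. Whether the plugin interaction actually behaves this way would need verification:

```
v4.example.com {
    # rewrite <name>.v4.example.com -> <name>.default.svc.cluster.local,
    # rewriting the answer names back ("answer auto")
    rewrite name suffix .v4.example.com. .default.svc.cluster.local. answer auto
    # A queries for the rewritten names get the static NAT64 IPv4
    template IN A default.svc.cluster.local {
        answer "{{ .Name }} 60 IN A 192.0.2.10"
    }
    kubernetes cluster.local
}
```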

Automated DNS entries in other zones

Instead of fully automatically creating the entries as above, another option would be to specify DNS entries via annotations in a totally different zone, if CoreDNS supported this. So let's say we also have control over a second zone; we could then instruct CoreDNS, via an annotation, to automatically create the following entries:

  AAAA <same as the service IP>
  A    <a static IPv4 address A.B.C.D>
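On the Service object, that annotation-driven variant could look roughly like this — the annotation keys and zone are entirely hypothetical, since no DNS server reads them today:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
  annotations:
    # hypothetical annotations a DNS controller could watch
    dns.example.org/name: myservice.example.org
    dns.example.org/a-record: "192.0.2.10"   # the static NAT64 IPv4
spec:
  selector:
    app: myservice
  ports:
    - port: 80
```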

In theory this might be solved via some scripting, maybe with a DNS server like PowerDNS?
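To make the scripting idea concrete, here is a minimal sketch of a PowerDNS "pipe" backend (protocol ABI version 1: tab-separated Q lines on stdin, DATA lines on stdout) that answers every A query under a hypothetical zone with the static NAT64 address. The zone name and address are placeholders, and a real deployment would also need SOA/NS handling:

```python
#!/usr/bin/env python3
# Sketch of a PowerDNS pipe backend: static A record for a whole zone.
import sys

ZONE = "v4.example.com"     # hypothetical zone
NAT64_V4 = "192.0.2.10"     # static IPv4 in front of the NAT64 translator

def answer(qname: str, qtype: str):
    """Return DATA lines for one query, or [] if we have nothing."""
    if not qname.lower().rstrip(".").endswith(ZONE):
        return []
    if qtype in ("A", "ANY"):
        return [f"DATA\t{qname}\tIN\tA\t60\t-1\t{NAT64_V4}"]
    return []

def main():
    # handshake: PowerDNS sends "HELO\t1", we must reply with "OK"
    if sys.stdin.readline().startswith("HELO"):
        print("OK\tnat64 pipe backend", flush=True)
    for line in sys.stdin:
        parts = line.rstrip("\n").split("\t")
        # Q line fields: Q, qname, qclass, qtype, id, remote-ip
        if parts[0] == "Q" and len(parts) >= 6:
            for rec in answer(parts[1], parts[3]):
                print(rec, flush=True)
        print("END", flush=True)

if __name__ == "__main__":
    main()
```

The AAAA side could be handled the same way by looking up the service address via the kubernetes API, which is exactly the part that makes this more than a five-line script.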

Alternative solution with BIND

The bind DNS server, which is not usually deployed in a kubernetes cluster, supports views. Views enable different replies to the same query depending on the source IP address. Thus, in theory, something like the following could be done, assuming a secondary zone:

  • If the request comes from the kubernetes cluster, return a CNAME pointing back to the internal service name
  • If the request comes from outside the kubernetes cluster, return an A entry with the static IP
  • Unsolved: how to handle the AAAA entries (a name that carries a CNAME cannot also carry the added A entry)
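The view part itself is straightforward in named.conf; a sketch with a hypothetical zone, placeholder prefix and placeholder zone files:

```
# two views over the same zone name, selected by client source address
acl cluster { 2001:db8::/64; };

view "internal" {
    match-clients { cluster; };
    zone "v4.example.com" {
        type master;
        file "v4.example.com.internal";   # CNAMEs back to the .svc names
    };
};

view "external" {
    match-clients { any; };
    zone "v4.example.com" {
        type master;
        file "v4.example.com.external";   # static A (and AAAA) records
    };
};
```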

Other solution?

As you can see, mixing dynamically generated IPv6 entries with static DNS entries for IPv4 resolution is not the easiest of tasks. If you have a smart idea on how to solve this without manually creating entries for each and every service, give us a shout!