Jonathan Kamens
2011-07-11 18:11:57 UTC
The number of DNS queries required for each address lookup requested by
a client has gone up considerably because of IPv6. The problem is being
exacerbated by the fact that many DNS servers on the net don't yet
handle IPv6 (AAAA) queries properly. The result is that address lookups
frequently take so long that the client gives up before getting an answer.
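To make that concrete: a dual-stack client typically calls getaddrinfo(),
which turns one hostname into two queries, an AAAA and an A. Here is a rough
Python sketch of that client-side behavior (illustrative only; this is not
rss2email's actual code):

    import socket

    # getaddrinfo() with AF_UNSPEC makes the stub resolver ask for both
    # AAAA and A records; if either query stalls on an unresponsive
    # nameserver, the whole call hangs.
    try:
        results = socket.getaddrinfo("en.wikipedia.org", 80,
                                     socket.AF_UNSPEC, socket.SOCK_STREAM)
        for family, _type, _proto, _canon, sockaddr in results:
            label = "IPv6" if family == socket.AF_INET6 else "IPv4"
            print(label, sockaddr[0])
    except socket.gaierror as exc:
        # This is the failure that surfaces as "Name or service not known".
        print("lookup failed:", exc)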
The case where I see this most frequently is my RSS feed reader,
rss2email, trying to read a feed from en.wikipedia.org in a cron job
that runs every 15 minutes. I regularly see this in the output of the
cron job:
W: Name or service not known [8]
http://en.wikipedia.org/w/index.php?title=/[elided]/&feed=atom&action=history
The wikipedia.org domain has three DNS servers. Let's assume that the
root and org. nameservers are cached already when rss2email does its
query. If so, then it has to do the following queries:
wikipedia.org NS
en.wikipedia.org AAAA
en.wikipedia.org A
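For concreteness, here is a rough sketch of those three lookups using
dnspython (my illustration; neither rss2email nor BIND uses this code):

    import dns.exception
    import dns.resolver   # dnspython 2.x

    resolver = dns.resolver.Resolver()
    resolver.lifetime = 30.0   # rough stand-in for the client's overall patience

    # The three lookups listed above, in order.
    for name, rdtype in [("wikipedia.org", "NS"),
                         ("en.wikipedia.org", "AAAA"),
                         ("en.wikipedia.org", "A")]:
        try:
            answer = resolver.resolve(name, rdtype)
            print(rdtype, [str(rr) for rr in answer])
        except dns.resolver.NoAnswer:
            print(rdtype, "no records of that type")
        except dns.exception.Timeout:
            print(rdtype, "timed out")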
This is fine when the wikipedia.org nameservers are working, but let's
postulate for the moment that two of them are down, unreachable, or
responding slowly, which apparently happens pretty often. Then we end up
doing:
wikipedia.org NS
en.wikipedia.org AAAA (times out)
en.wikipedia.org AAAA (times out)
en.wikipedia.org AAAA
en.wikipedia.org A (times out)
en.wikipedia.org A (times out)
en.wikipedia.org A
By the end of that sequence, the typical 30-second DNS request
timeout has been exceeded, and the client gives up.
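Back-of-the-envelope, with numbers that are purely illustrative (real retry
intervals depend on the resolver implementation and its configuration):

    # Assumed figures, not measurements: each unresponsive server costs
    # roughly one attempt-timeout, and we hit two dead servers for AAAA
    # and again for A before reaching the working one.
    per_attempt_timeout = 10.0   # seconds, assumed
    timed_out_attempts = 4       # 2 dead servers x 2 record types

    wasted = per_attempt_timeout * timed_out_attempts
    print(f"time spent waiting on unresponsive servers: {wasted:.0f}s")  # 40s
    # That alone blows past a 30-second client-side limit, before the two
    # successful queries even complete.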
I said above that the problem is exacerbated by the fact that many DNS
servers don't yet handle IPv6 queries. This is because the AAAA queries
don't get NXDOMAIN responses, which would be cached, but rather FORMERR
responses, which are not cached. As a result, the scenario described
above happens much more frequently, because the DNS server has to redo
the AAAA queries again and again.
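To make the caching difference concrete, here is a toy sketch (nothing like
BIND's real data structures, and keyed by name only for brevity) of why an
NXDOMAIN answer is cheap the second time around while FORMERR keeps hitting
the wire:

    import time

    negative_cache = {}   # name -> expiry time of a cached NXDOMAIN

    def lookup(name, query_upstream, neg_ttl=900):
        # Answer from the negative cache if we recently saw NXDOMAIN.
        if negative_cache.get(name, 0) > time.time():
            return "NXDOMAIN"
        rcode = query_upstream(name)        # returns "NXDOMAIN" or "FORMERR"
        if rcode == "NXDOMAIN":
            negative_cache[name] = time.time() + neg_ttl
        # FORMERR is just a per-query failure: nothing is remembered, so the
        # next lookup for the same name repeats the whole slow exchange.
        return rcode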
One suggestion that I've seen on the net for how to mitigate this
problem is to treat FORMERR responses as negative answers and cache them
just as NXDOMAIN responses are cached. I took a brief look at the BIND
code in resolver.c to see how easy this would be to do, and although it
doesn't look like it would be particularly difficult, I don't feel I
know the ins and outs of the DNS protocol and the BIND implementation
well enough to be confident that I'd get it right.
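Purely as an illustration of the idea (the real change would live in
resolver.c, would need a carefully chosen TTL, and would have to key on name
and type rather than name alone), here is the same toy cache with FORMERR
remembered briefly so that the next AAAA lookup fails fast:

    import time

    negative_cache = {}   # name -> (rcode, expiry); toy structure only

    def lookup(name, query_upstream, neg_ttl=900, formerr_ttl=60):
        cached = negative_cache.get(name)
        if cached and cached[1] > time.time():
            return cached[0]
        rcode = query_upstream(name)        # e.g. "NXDOMAIN" or "FORMERR"
        if rcode == "NXDOMAIN":
            negative_cache[name] = (rcode, time.time() + neg_ttl)
        elif rcode == "FORMERR":
            negative_cache[name] = (rcode, time.time() + formerr_ttl)
        return rcode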
I'm interested to hear whether other people are encountering this
problem and whether the developers who work on BIND have any thoughts
about how to mitigate it, short of getting everyone on the Internet to
upgrade to nameservers that support IPv6.
Thanks,
Jonathan Kamens