We are doing multiple Custom DNS updates using the
dyndns2 protocol against DynDNS services. We have
reached the stage where the grouping of the updates
means that the hostname field is over 400 characters
long, and it seems that at about that point the field
gets cut off.
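For reference, a grouped update is a single dyndns2 GET
whose hostname parameter is a comma-separated list; with
made-up hostnames and IP it looks roughly like:

    GET /nic/update?hostname=host1.example.com,host2.example.com,host3.example.com&myip=192.0.2.1 HTTP/1.1
    Host: members.dyndns.org
    Authorization: Basic <redacted>

With 19 or so real hostnames, that parameter alone easily
passes 400 characters.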
This shows up as the first hostnames working fine, then
at some point (the 19th hostname, in our case) we stop
getting success and get notfqdn returned. If I remove
some of the hosts that come first in the list, the error
is returned against a different host.
Putting ddclient into debug mode shows the full GET
command; the hostname that gets notfqdn returned always
falls around the 400th character mark of the whole GET
string.
In the short term I have modded our local version of
ddclient so that the my $sig = line of group_hosts_by
has $h appended at the end, effectively turning off
grouping (see the first sketch below). It would be
useful if ddclient either warned about long hostname
strings and had a command line option to turn off
grouping, or automatically ran multiple GETs when the
hostname parameter gets too long (see the second sketch
below).
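In case it is useful, here is a simplified sketch of the
grouping idea and the one-line change; this is not the
actual ddclient source, and the per-host attributes are
invented for illustration:

    #!/usr/bin/perl
    # Simplified sketch of the idea behind ddclient's group_hosts_by (not
    # the actual source): hosts whose update attributes join into the same
    # signature string are batched into a single GET.
    use strict;
    use warnings;

    # Hypothetical per-host config, for illustration only.
    my %config = (
        'host1.example.com' => { wildcard => 'NO', mx => '', wantip => '192.0.2.1' },
        'host2.example.com' => { wildcard => 'NO', mx => '', wantip => '192.0.2.1' },
    );

    sub group_hosts_by {
        my ($hosts, $attributes) = @_;
        my %groups;
        foreach my $h (@$hosts) {
            my $sig = join(',', map { "$_=$config{$h}{$_}" } @$attributes);
            # The local workaround: append $h so every signature is unique,
            # which puts each host in its own group (one GET per host).
            # $sig .= $h;
            push @{$groups{$sig}}, $h;
        }
        return %groups;
    }

    my %groups = group_hosts_by([sort keys %config], ['wildcard', 'mx', 'wantip']);
    foreach my $sig (sort keys %groups) {
        print "one GET for: ", join(',', @{$groups{$sig}}), "\n";
    }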
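And a rough sketch of the automatic multiple-GET idea:
split each group's hostname list into chunks whose
comma-joined length stays under a limit (400 here,
assumed from the behaviour above), then run one update
per chunk:

    #!/usr/bin/perl
    # Sketch only: chunk hostnames so the hostname parameter of each GET
    # stays under an assumed server-side limit.
    use strict;
    use warnings;

    my $MAX_LEN = 400;    # assumed limit on the hostname parameter

    sub chunk_hostnames {
        my ($max, @hosts) = @_;
        my (@chunks, @current);
        my $len = 0;
        foreach my $h (@hosts) {
            # +1 accounts for the comma separator inside a non-empty chunk.
            my $extra = @current ? length($h) + 1 : length($h);
            if (@current && $len + $extra > $max) {
                push @chunks, [@current];
                @current = ();
                $len     = 0;
                $extra   = length($h);
            }
            push @current, $h;
            $len += $extra;
        }
        push @chunks, [@current] if @current;
        return @chunks;
    }

    my @hosts = map { "host$_.example.com" } 1 .. 30;
    foreach my $chunk (chunk_hostnames($MAX_LEN, @hosts)) {
        my $hostname_param = join(',', @$chunk);
        print "GET /nic/update?hostname=$hostname_param&myip=192.0.2.1\n";
    }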
I can add logs but would like some guidance on properly
anonymising them.
Alex Masidllover
---
http://www.axiomtech.co.uk and http://www.zednax.com
Logged In: YES
user_id=1447465
I looked at the code but nothing jumps out. Could you post your ddclient.conf (minus any password fields) or provide a URL to it? If you want to post the logs, that would help also (just omit all HTTP header lines that say "Authorization:").
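Something like this one-liner should scrub those (the log filenames are just placeholders):

    perl -ne 'print unless /^Authorization:/i' ddclient-debug.log > ddclient-debug.clean.log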
Logged In: YES
user_id=1459777
I have attached the ddclient.conf. I also believe that it is
not a problem with the ddclient code, but with the DynDNS
server limiting the length of the parameters in the URI,
or of the whole URI.
ddclient.conf
Debug log with multiple requests mod (successful)
Logged In: YES
user_id=1447465
Hmm, I don't see the "notfqdn" error in either of those debug logs. Watch out because some log data doesn't get emailed, so you have to capture the session with script(1) or another method to get the whole thing.
It's entirely possible that they truncate the URL at 400 bytes, though I don't see any mention of that in their Update API paper or in their FAQ. If that's the case, maybe we can convince them to add a new error code like "urltrunc" or at least an FAQ entry or something.
Please let us know if you find out anything more...
Logged In: YES
user_id=1459777
Sorry, I posted the wrong file for the failing version last
time. I've sent the correct one now.
Failure log
Logged In: YES
user_id=722282
Originator: NO
Is there any news about this one?