From: John H. <j....@pl...> - 2003-09-17 10:01:23
Hello,

Is anyone using any of the available DNS modules with large-ish zones? Not a large number of zones, but zones with a lot of records in them.

I've just loaded the University's 18,000 or so records onto a test system. The standard bind8 module works okay, but loading the zone and selecting individual records takes time (around 30 seconds). This is on a Sun Ultra10 with 512MB of memory. The system is doing nothing else.

I really just want to know if this is something to look into, or something local here. Currently using Webmin 1.090.

Thanks,

John.

--
---------------------------------------------------------------
John Horne, University of Plymouth, UK   Tel: +44 (0)1752 233914
E-mail: Joh...@pl...                     Fax: +44 (0)1752 233839
From: Marcos R. <we...@al...> - 2003-09-17 19:22:45
I don't see it as "large-ish"... but one of my zones has 122 IN A or IN CNAME records, with no problem. Even though I now have a "larger" server, it also worked OK on a Duron 500 with 1/2 GB RAM and over 1,000 zones in BIND, and that server is really loaded: several httpd instances, xinetd, sendmail, etc., a full virtual server setup, with each virtual of course running its own Webmin/Usermin, and 5 running Mailman 2.

cheers!

Marcos

On 17 Sep 2003, John Horne wrote:
> Is anyone using any of the available DNS modules with large-ish zones?
> Not a large number of zones, but zones with a lot of records in them.
>
> I've just loaded the University's 18,000 or so records on to a test
> system. The standard bind8 module works okay but loading the zone, and
> selecting individual records takes a time (around 30 seconds).
From: Jamie C. <jca...@we...> - 2003-09-17 23:12:22
John Horne wrote:
> I've just loaded the University's 18,000 or so records on to a test
> system. The standard bind8 module works okay but loading the zone, and
> selecting individual records takes a time (around 30 seconds). This is
> on a Sun Ultra10 with 512MB of memory. The system is doing nothing else.

I can see why Webmin would be slow with an 18,000-record zone file. Currently, it parses that entire file every time you open a page that does anything within the zone, and reading such a large file with Perl is going to take a long time.

Unfortunately, there isn't really any solution, short of modifying Webmin to add some kind of indexing or caching code for BIND zone files.

 - Jamie
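The caching Jamie suggests could look something like the following sketch: store the parsed records alongside the zone file's modification time, and only re-parse when the file has actually changed. This is purely illustrative Python (Webmin itself is written in Perl), and `parse_zone`, the cache path, and the record format are assumptions for the sketch, not Webmin's actual API:

```python
import os
import pickle


def load_zone_cached(zone_path, cache_path, parse_zone):
    """Return the parsed records for a zone file, re-parsing only when
    the file's mtime has changed since the cache was written.

    parse_zone(path) -> parsed records (any picklable structure).
    All names here are illustrative, not Webmin's real code.
    """
    mtime = os.path.getmtime(zone_path)
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            cached_mtime, records = pickle.load(f)
        if cached_mtime == mtime:
            return records  # cache hit: skip the expensive parse
    records = parse_zone(zone_path)  # cache miss: parse and store
    with open(cache_path, "wb") as f:
        pickle.dump((mtime, records), f)
    return records
```

For an 18,000-record zone this turns every page view after the first into a single `pickle.load`, at the cost of one extra parse whenever the file (or a dynamic update flushed to disk) changes its mtime.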
From: John H. <j....@pl...> - 2003-09-18 10:13:59
On Thu, 2003-09-18 at 00:10, Jamie Cameron wrote:
> I can see why webmin would be slow with an 18000 record zone file.
> Currently, it parses that entire file every time you open a page that
> does anything within the zone, and reading such a large file with Perl
> is going to take a long time.
>
> Unfortunately, there isn't really any solution, except modifying
> webmin to add some kind of indexing or caching code for BIND zone files.

Agreed. However, caching is already in place in the module, so the zone file is only read once per CGI script. The problem, as you said, is that the zone has to be read whenever it is looked at, modified or whatever; I see no way around that, since the information may well change.

I have done some simple timing tests:

- Reading the zone (either dynamic or static zones): 10 seconds
- Parsing the zone into the required structures: 5 seconds

So to 'read' a zone takes about 15 seconds. Dynamic zones use dig, whilst static ones are read directly from the file; there is no real difference in time between the two.

- Removing the '$GENERATE' entries: 20 seconds

Ah! This one line is a 'grep' in the edit_master.cgi script. If I comment it out (we have no $GENERATEs in there, and dynamic zones certainly won't see them), the time drops to 15 seconds again, that is, just the time to read the zone.

- Formatting and displaying the zone at this point: 1 second

So displaying the number of different records in a zone is not a problem. The next part is displaying particular record types:

- Removing the $GENERATE entries: 20 seconds
- Selecting the particular records to display: 1 second
- Sorting and displaying the records: 14 seconds

Okay. We can get around the $GENERATE check for dynamic zones, and for static ones we can probably cache something as the zone file is parsed. The Perl 'grep' seems to be slow and could perhaps be replaced with in-line code or a faster function.
The actual reading of the zone and its parsing may possibly be sped up, and the same goes for displaying the records. Of course, it may ultimately just be that there is no faster way, and a large zone will take this sort of time :-)

John.
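One way to avoid both the slow $GENERATE grep and the repeated per-type scans is to group the parsed records by type in a single pass, so that each record-type page becomes a dictionary lookup rather than another sweep over all 18,000 records. This is an illustrative sketch in Python rather than Webmin's Perl, and the dict-based record format is an assumption:

```python
from collections import defaultdict


def index_by_type(records):
    """Group parsed zone records by type in one pass.

    'records' is assumed to be a list of dicts with a 'type' key, a
    simplification of the structures a zone parser might produce.
    After indexing, fetching all A (or CNAME, or $GENERATE) entries
    is a dictionary lookup instead of a fresh scan of the whole zone.
    """
    index = defaultdict(list)
    for rec in records:
        index[rec["type"]].append(rec)
    return index
```

A grep per record type costs a full pass each time it runs; the index costs one pass up front and then answers every per-type query (including "are there any $GENERATEs?") immediately, which is the difference that matters when the zone holds 18,000 records.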