From: Fernando G. <fer...@gm...> - 2007-03-06 09:24:15
Hello,

First of all I would like to congratulate you on your project; I really think it's great.

Second, I want to use the Java VTD-XML to do a certain task. I have succeeded, but I don't know whether I have done it in the right way or there is a better one. Can you give me some advice?

I want to evaluate some XPath expressions on a lot of files of this size (~100 MB) and larger, so memory efficiency is critical. The first idea that comes to mind is to have a VTDGen object for each XML file, but that solution leads to having all the XMLs loaded in memory in the "protected byte[] XMLDoc;" attribute of the VTDGen class. So each time I have to evaluate an XPath expression on an XML file, I have to read the XML file, parse it, evaluate the XPath, and set the VTDGen object to null so the garbage collector can free the memory.

I have obtained these results reading a big XML file (~100 MB):

  360 ms reading file
  1890 ms parsing file
  32 ms evaluating an XPath expression
  93 ms showing results
  total = 2375 ms

where the second step ("parsing file") means:

  VTDGen vg = new VTDGen();
  vg.setDoc(b);
  vg.parse(true);

To speed up the process I have stored the parsing information in a file. After that I can read the XML file and the parsing-information file, evaluate the XPath expression and close everything again in a shorter time:

  344 ms reading the file
  422 ms reading parsing information
  125 ms evaluating an XPath expression
  93 ms showing results
  total = 984 ms

I think the result is good enough, but maybe there is a better solution than mine. I have stored the parsing info by serializing the whole VTDGen object except the XMLDoc attribute. Then I retrieve the object from disk and set the XMLDoc attribute, this way:

  ObjectInputStream ois = new ObjectInputStream(new FileInputStream(PARSING_INFO));
  vg = (MyVTDGen) ois.readObject();
  ois.close();
  FileInputStream fis2 = new FileInputStream(TEST_XML);
  byte[] b2 = new byte[(int) f.length()];
  fis2.read(b2);
  vg.setXML(b2); // This method only sets the XMLDoc attribute

Is this solution good? Is there a better one? Can "buffer reuse" solve my problem?

best regards,
Fernando
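For reference, the write side that pairs with the read path above might look roughly like this; MyVTDGen and setXML are Fernando's own additions rather than stock VTD-XML API, so the serialization details are assumptions:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.ObjectOutputStream;

    // Sketch: persist the parse info once parsing has finished, assuming MyVTDGen
    // is a Serializable VTDGen subclass whose XMLDoc byte[] is transient, so only
    // the VTD records and location caches end up on disk (not the XML itself).
    public class ParseInfoWriter {
        public static void store(MyVTDGen vg, String parsingInfoFile) throws IOException {
            ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(parsingInfoFile));
            try {
                oos.writeObject(vg);
            } finally {
                oos.close();
            }
        }
    }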
From: Jimmy Z. <cra...@co...> - 2007-03-06 20:35:07
Hey Fernando,

Thanks for the email. I am glad VTD-XML is helpful. My question: which version are you using?

If you are currently using 2.0, it contains the indexing feature that might accomplish just what is described in your email.

Your solution is to separate the XML from the VTD and LCs, which I think you must have added code to do. VTD+XML (as in version 2.0) packages the XML, VTD and LCs into a single file, which should also work.

The only suspicious part is that the XPath performance dropped in your case, which shouldn't happen.

Buffer reuse is useful if your app instantiates one VTDGen to sequentially process many incoming XML documents. If you deal with only one XML doc, buffer reuse won't make a big difference.

I think you might be interested in first investigating the persistence feature in 2.0; there is a directory for it under the code examples.

Cheers,
Jimmy
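A minimal sketch of the indexing workflow described above; writeIndex is named later in this thread, while loadIndex, the exact signatures and the file names are assumptions about the 2.0 API:

    import java.io.File;
    import java.io.FileInputStream;
    import com.ximpleware.AutoPilot;
    import com.ximpleware.VTDGen;
    import com.ximpleware.VTDNav;

    // Sketch of the 2.0 persistence feature: parse once, write XML + VTD + LCs
    // into a single index file, then reload it later without re-parsing.
    // loadIndex and the file names are assumptions.
    public class IndexDemo {
        public static void main(String[] args) throws Exception {
            File f = new File("test.xml");
            byte[] b = new byte[(int) f.length()];
            FileInputStream fis = new FileInputStream(f);
            fis.read(b);   // simplified read, as in the snippets in this thread
            fis.close();

            VTDGen vg = new VTDGen();
            vg.setDoc(b);
            vg.parse(true);
            vg.writeIndex("test.vxl");            // one file holding XML, VTD and LCs

            VTDNav vn = new VTDGen().loadIndex("test.vxl"); // assumed API for reading it back
            AutoPilot ap = new AutoPilot(vn);
            ap.selectXPath("/*/*");               // placeholder expression
            System.out.println("first hit at index " + ap.evalXPath());
        }
    }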
From: Jimmy Z. <cra...@co...> - 2007-03-07 19:16:34
Fernando,

It is interesting that you have substituted the byte[] with an IByteBuffer... since there is a level of indirection, the slight slowdown should be expected. I would certainly be interested in your approach to the issue, and feel free to send me the code.

Cheers,
Jimmy

----- Original Message -----
From: Fernando Gonzalez
To: Jimmy Zhang
Sent: Wednesday, March 07, 2007 7:49 AM
Subject: Re: [Vtd-xml-users] Storing parsing info

Hi Jimmy,

Writing the following, I have found that it may be quite complicated to understand, since you don't know exactly the changes I have made. Even my tests are not thorough, so maybe the best option is to submit a technical description of the changes, pros and cons, the code, and that kind of thing.

I have been testing the XPath performance problem and it seems like it's a classloader issue. As you can see in the following log, the slowest XPath evaluation is the first one, no matter how the parsing information is obtained.

  391 ms -> Load XML
  2125 ms -> Parse XML
  31 ms -> Evaluate XPath
  0 ms -> Evaluate XPath
  0 ms -> Evaluate XPath
  453 ms -> Store parse info
  0 ms -> Clear parse info
  313 ms -> Read parse info
  0 ms -> Evaluate XPath
  0 ms -> Evaluate XPath

I have been working on something more. I have made some changes to VTD and I have succeeded in the following:

1) The byte[] of the XML file is accessed through an interface (IByteBuffer).

2) When I use the UniByteBuffer implementation I get slightly slower results at parsing:

  391 ms -> Load XML
  2109 ms -> Parse XML (vs. the 1890 ms I obtained accessing the byte[] buffer directly)
  0.172 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath
  0.094 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath

3) When I use an implementation that loads chunks as they are needed, I get much slower results parsing the file, but the same results evaluating an XPath expression. The advantage of this approach is that there is no need to load the whole XML file in memory, so I have obtained the following results:

  25406 ms -> Parse XML
  406 ms -> Store parse info
  0.156 ms -> Evaluate XPath
  0.093 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath
  0.093 ms -> Evaluate XPath
  0.094 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath

  500 ms -> Read parse info
  0.235 ms -> Evaluate XPath
  0.094 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath
  0.094 ms -> Evaluate XPath
  0.078 ms -> Evaluate XPath
  0.094 ms -> Evaluate XPath

The great thing about these results is that the XML file was 100 MB and I ran the program with the -Xmx64m JVM option (just enough to store the 30 MB of parsing info and the 16 MB buffer).

Well, as I said before, I can send you a technical description of the changes, pros and cons, and the code.

cheers,
Fernando

On 3/7/07, Fernando Gonzalez <fer...@gm...> wrote:

Hi Jimmy,

Thanks for your response.

I think I'm using version 2.0, since I have tested the "VTDGen.writeIndex" method. I looked for another solution because I cannot remove the original XML file, so I would have to store the XML twice: the original XML file and the file with the XML, VTD and LCs created by "VTDGen.writeIndex". As I'm dealing with really big XML files, that's a drawback.

Yes, you're right, I have added code. Just three or four lines. If you're interested I can explain my solution thoroughly. About the XPath performance, I think that's a classloader issue. I will check that and report the results.

greetings,
Fernando
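The abstraction described above could be sketched roughly as follows; IByteBuffer, UniByteBuffer and byteAt(int streamIndex) are the names given in this thread, while length() and the constructor are assumed details:

    // Sketch of the abstraction Fernando proposes: VTDGen reads the document
    // through an interface instead of a raw byte[].
    public interface IByteBuffer {
        byte byteAt(int streamIndex);   // random access into the XML byte stream
        int length();                   // total document length in bytes
    }

    // In-memory implementation: equivalent to the original byte[] XMLDoc field,
    // at the cost of one extra level of indirection per byte read.
    class UniByteBuffer implements IByteBuffer {
        private final byte[] doc;
        public UniByteBuffer(byte[] doc) { this.doc = doc; }
        public byte byteAt(int streamIndex) { return doc[streamIndex]; }
        public int length() { return doc.length; }
    }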
From: Jimmy Z. <cra...@co...> - 2007-03-14 03:07:33
Interesting that the main method actually is a GUI benchmark... however, I can't seem to open the ODT file. Also, I have yet to experience any class loader issues (maybe other folks on this list can comment).

Regarding saving the VTD into a separate file, there are pros and cons... disk seeks on two separate files might slow things down a bit. My recommendation is to stick with the VTD+XML format, as it works across multiple programming languages and byte endianness.

----- Original Message -----
From: Fernando Gonzalez
To: Jimmy Zhang
Sent: Friday, March 09, 2007 3:28 AM
Subject: Re: [Vtd-xml-users] Storing parsing info

I send you the two proposals mixed in one source folder. I don't think it's going to be a problem, since I also send you a description of the changes. I hope it's clear and can be easily understood.

In org.Prueba you can find a main method.

greetings,
Fernando
From: Fernando G. <fer...@gm...> - 2007-03-14 09:18:35
Sorry Jimmy, I will send it to you in .doc format. ODT is the OpenOffice Document format (from Sun's open source office suite).

I have noticed that I haven't sent you the XML file I use for the tests, because it's quite big. I generated it with http://www.xml-benchmark.org/. I will send you a link to download the XML.

About the class loader issues: I'm not absolutely sure, but the Java classes have to be loaded, they are loaded on demand, and that's why I think the first execution is slower. (It's also slower using the original VTD-XML.)

I understand your recommendation, but I think it would be a good thing to offer the other possibility as well. I do need it, as I explain in the technical description.

best regards,
Fernando
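One way to keep class-loading cost out of the XPath timings is to run the expression once before measuring. A minimal sketch using VTD-XML's AutoPilot, assuming vn is a VTDNav obtained from an already parsed document:

    import com.ximpleware.AutoPilot;
    import com.ximpleware.VTDNav;

    // Sketch: an untimed warm-up pass forces the XPath classes to be loaded,
    // so only the measured pass is reported; vn is assumed to come from
    // VTDGen.getNav() after parsing.
    public class XPathTiming {
        public static void time(VTDNav vn, String expr) throws Exception {
            runOnce(vn, expr);                              // warm-up, not timed
            long t0 = System.nanoTime();
            int hits = runOnce(vn, expr);                   // measured run
            System.out.println((System.nanoTime() - t0) / 1e6
                    + " ms -> Evaluate XPath (" + hits + " hits)");
        }

        private static int runOnce(VTDNav vn, String expr) throws Exception {
            vn.push();                                      // remember cursor position
            AutoPilot ap = new AutoPilot(vn);
            ap.selectXPath(expr);
            int count = 0;
            while (ap.evalXPath() != -1) count++;
            vn.pop();                                       // restore cursor for the next run
            return count;
        }
    }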
From: Jimmy Z. <cra...@co...> - 2007-03-21 17:50:20
Not a problem :-). One reason we are having this discussion is that the indexing feature (and VTD-XML itself) is so new that we have yet to understand the possibilities and design trade-offs... Yes, I can see why tuning the optimum buffer size can potentially improve performance. But in general, do you see any issue with the XPath evaluation?

----- Original Message -----
From: Fernando Gonzalez
To: vtd...@li...
Sent: Wednesday, March 21, 2007 3:14 AM
Subject: Re: [Vtd-xml-users] Storing parsing info

Sorry Jimmy, I misunderstood you. Please forget the first paragraph of my last mail.

Yes, you're right: when you ask for a byte it may not be in the loaded chunk, so you have to load another chunk. Of course that is quite a bit slower than the current solution. But I think there must be a buffer size that optimizes performance, so that the solution is only a bit slower while never loading more than 20-30 MB into memory. Don't you think so?

Anyway, the proposed changes don't force anyone to use that approach. It's still possible to load the whole XML in memory, so the only side effects of the proposal are that the library is a bit more complex (there are some exception-handling issues, and the user has to provide an implementation of an interface instead of a byte[]) and a bit slower (because of the added level of indirection).

Fernando

On 3/21/07, Fernando Gonzalez <fer...@gm...> wrote:

"if the chunks don't have what one is looking for, you will have to load in another chunk... then another chunk.."

Not exactly. If something asks for byte number 'x', I work out which chunk that byte is in and load only that chunk. Only if the information asked for by the user is spread over two or more contiguous chunks is it necessary to load more than one. The implementation can be seen in the "org.ChunkByteBuffer" class, in the "public byte byteAt(int streamIndex)" method.

About the alternative you propose: as I have said before, removing or archiving the original GML is not a good solution for me. Splitting wouldn't be as bad as removal or archiving, but it would add some complexity for the user, who would have to keep track of the split GML files that form the original GML file. The splitting could be implemented in a way the user doesn't notice... until he tries to access the file with another application.

Indeed, I think that, in the end, I'm doing something similar to splitting, since I logically split the GML file into several GML chunks.

Fernando

On 3/20/07, Jimmy Zhang <cra...@co...> wrote:

Ok, I see... it seems that you can be sure that the "chunks" of GML files contain what the user would need. But in general, if the chunks don't have what one is looking for, you will have to load another chunk... then another chunk... and that could mean a lot of disk activity.

As an alternative, would it be possible to split the GML into little chunks of well-formed GML files and then index them individually? So instead of dealing with 10 big GML files, split them into 100 smaller GML files, and the algorithm you describe may still work.

----- Original Message -----
From: Fernando Gonzalez
To: vtd...@li...
Sent: Tuesday, March 20, 2007 2:39 AM
Subject: Re: [Vtd-xml-users] Storing parsing info

On 3/20/07, Jimmy Zhang <cra...@co...> wrote:

So what you are trying to accomplish is to load all the GML docs into memory at once... I guess you can simply index all those files to avoid parsing... but I still don't seem to understand the benefit of reading the parse info and a chunk of the XML file.

Quite near. What I need is to access a random feature at any time with as low a cost as possible. That would be possible by loading all the GML docs in memory, but the GML files are very big, so I cannot do it.

As that solution wasn't suitable for my problem, I thought of opening one file at a time (using buffer reuse), and then it came to my mind that I could save parsing time by storing the parse info. As I said before, I cannot delete the GML, and storing the GML twice would waste disk space. I'm talking about an environment where the user can have a lot of digital cartography on his computer; disk space is quite a bottleneck. It could still be valid, but storing only the parse info was so easy that I did it and I obtained a better solution (for my environment).

There is a use case where the user doesn't work with the files directly, but with a spatial region. In this case, the GML files and other spatial data are "layers", so the user can work with a lot of files at the same time. These files can be in formats other than GML: satellite images and different raster or vector formats; and these can bring the system to an even more memory-constrained situation. That's what led me to load chunks of the GML file.

The workflow is the following:
  * I open a file with the chunk approach
  * I parse the file (loading it with the chunk approach takes a long time, but that's no problem)
  * I store the parse info
The user asks for information:
  * I load the parse info
  * I load the chunk
  * I return the requested information

I want to speed up the requests for information, because the user can ask for a map image built from 20 GML files, and the map code is something like this:

  for each gml file
      guess which "features" are inside the map bounds (the GML is spatially indexed beforehand)
      get those features from the GML (random access: load parse info + load chunk + return info)
      draw the features on an image
  next gml file

Maybe this will make things a bit clearer. This screenshot (http://www.gvsig.gva.es/fileadmin/conselleria/images/Documentacion/capturas/raster_shp_dgn_750.gif) shows a program that uses the library. You can see on the left all the loaded (from the user's point of view) files: four "dgn" files, one "shp" and seven "ecw" files. A lot of map operations are performed over *every* file listed on the left, so I don't care how much time it takes to put all those files on the left (generating parse info, etc.). I care how much time it takes to read the information after they are loaded (again, from the user's point of view).

Well, I hope it's clear enough. Notice that I'm not proposing to change the way VTD-XML works; I'm proposing to add new ways.

greetings,
Fernando

----- Original Message -----
From: Fernando Gonzalez
To: vtd...@li...
Sent: Monday, March 19, 2007 2:56 AM
Subject: Re: [Vtd-xml-users] Storing parsing info

Well, hehe, the computer is new but I don't think my disk is that fast. I think Java or the operating system must be caching something, because the first time I load the file it takes a bit more than 2 seconds, and after the first load it only takes 300 ms to read the file... I have no experience doing benchmarks and maybe I'm missing something. That's why I attached the program.

"So if you can't delete the original XML files, can you compress them and store them away (archiving)?"
I cannot delete or archive the GML file, because in this context it won't be rare for it to be read by two different programs at the same time... It's difficult to find an open source program that does everything you need. For example, in a development context, there may be a map server serving a map image based on a GML file while you are opening it to see some of the data in it.

"The other issue you raised is buffer reuse. To reuse internal buffers of VTDGen, you can call setDoc_BR(...). But there is more you can do... you can in fact reuse the byte array containing the XML document."

Buffer reuse absolutely solves my memory constraints. But the problem I see with buffer reuse is that it will force me to read and parse the whole XML file every time the user asks for information from another XML file, won't it? If I read the XML file by chunks and I store/read the parse information, then each time the user asks for information from another XML file I only have to read the parse info and a chunk of the XML file.

To show you my point of view: the "user asking for another XML file" may be a map server that reads some big GML files and draws their spatial information in a map image. If, each time the map server draws a GML file and "changes" to the next, it takes 2 seconds or so, then drawing the map (all the GML files) takes too much time.

best regards,
Fernando

On 3/19/07, Jimmy Zhang <cra...@co...> wrote:

What intrigues me about Fernando's test results is that it only takes 300 ms to read a 100 MB file? He's got a super fast disk...

----- Original Message -----
From: Rodrigo Cunha
To: Jimmy Zhang
Cc: Fernando Gonzalez; vtd...@li...
Sent: Sunday, March 18, 2007 8:40 PM
Subject: Re: [Vtd-xml-users] Storing parsing info

In fact the idea occurred to me in the past also... but VTD is so fast reading large files anyway! With a fast processor I think we might be disk-limited rather than processor-limited. Still, if the code is already written, the option seems cute enough to keep :-)

Since I mainly deal with large files requiring a lot of processing, this has not been an issue. Others, in different environments, might disagree.

Jimmy Zhang wrote:

Fernando, the option for storing the VTD in a separate file is open. I attached the technical document from your last email, and am also interested in suggestions/comments from the mailing list...
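The chunk lookup described above (ChunkByteBuffer.byteAt) could be sketched roughly as follows, reusing the IByteBuffer interface sketched earlier; only the class name and the byteAt(int streamIndex) signature come from the thread, so the fields, chunk-size handling and I/O details are assumptions:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch of the chunked buffer idea: keep only one fixed-size chunk of the
    // XML file in memory and reload it whenever a byte outside it is requested.
    public class ChunkByteBuffer implements IByteBuffer {
        private final RandomAccessFile file;
        private final byte[] chunk;
        private long chunkStart = -1;          // offset of the loaded chunk, -1 = none

        public ChunkByteBuffer(RandomAccessFile file, int chunkSize) {
            this.file = file;
            this.chunk = new byte[chunkSize];
        }

        public byte byteAt(int streamIndex) {
            long start = (long) (streamIndex / chunk.length) * chunk.length;
            if (start != chunkStart) {         // requested byte lies outside the loaded chunk
                try {
                    file.seek(start);
                    file.readFully(chunk, 0, (int) Math.min(chunk.length, file.length() - start));
                    chunkStart = start;
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
            return chunk[(int) (streamIndex - chunkStart)];
        }

        public int length() {
            try {
                return (int) file.length();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    }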
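For comparison, the buffer-reuse path mentioned above (setDoc_BR) looks roughly like this when a single VTDGen processes many documents in turn; the XPath expression is a placeholder:

    import java.io.File;
    import java.io.FileInputStream;
    import com.ximpleware.AutoPilot;
    import com.ximpleware.VTDGen;
    import com.ximpleware.VTDNav;

    // Sketch: reuse one VTDGen's internal buffers across many documents via
    // setDoc_BR, as suggested for sequential processing of many files.
    public class BufferReuseDemo {
        public static void main(String[] args) throws Exception {
            VTDGen vg = new VTDGen();
            for (String name : args) {                 // e.g. a list of GML/XML files
                File f = new File(name);
                byte[] b = new byte[(int) f.length()];
                FileInputStream fis = new FileInputStream(f);
                fis.read(b);                           // simplified read, as in the thread
                fis.close();

                vg.setDoc_BR(b);                       // reuse the internal VTD/LC buffers
                vg.parse(true);
                VTDNav vn = vg.getNav();
                AutoPilot ap = new AutoPilot(vn);
                ap.selectXPath("/*/*[1]");             // placeholder expression
                System.out.println(name + " -> first hit at " + ap.evalXPath());
            }
        }
    }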
From: Fernando G. <fer...@gm...> - 2007-03-22 09:27:25
Well, I don't need much of XPath. All the XPath expressions I think I'm going to use are the same: get the nth child of an element. I thought the XPath evaluation was going to perform worse with the chunk approach; however, I obtained better results than I expected, and those results are good enough for me. Also keep in mind that my benchmark is not thorough: I tested only one XPath expression against a single ~100 MB XML file.
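Getting the nth child of an element, as described above, can be done either with an XPath position predicate or with plain VTDNav navigation. A small sketch, where the expression and n are placeholders and vn is assumed to be an already parsed VTDNav:

    import com.ximpleware.AutoPilot;
    import com.ximpleware.VTDNav;

    // Sketch: two ways to reach an nth child with VTD-XML.
    public class NthChild {
        // XPath position predicate: nth child of the root element.
        public static int viaXPath(VTDNav vn, int n) throws Exception {
            AutoPilot ap = new AutoPilot(vn);
            ap.selectXPath("/*/*[" + n + "]");
            return ap.evalXPath();               // VTD index of the match, or -1 if none
        }

        // Manual navigation: nth child of whatever element the cursor is on.
        public static boolean viaNav(VTDNav vn, int n) throws Exception {
            if (!vn.toElement(VTDNav.FIRST_CHILD)) return false;
            for (int i = 1; i < n; i++) {
                if (!vn.toElement(VTDNav.NEXT_SIBLING)) return false;
            }
            return true;                          // cursor now sits on the nth child
        }
    }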