Babeldoc gives us a very flexible system, but the large amount of data we're trying to process is causing a problem.
We're processing large CSV files (7000+ lines), and since each line is converted to XML, transformed with XSLT, and finally written to a database, the process is very slow. Does anyone have tips on improving the performance of the Babeldoc pipeline stages? We have written custom code to run the pipelines in our J2EE container, so we are open to all solutions.
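For context, here is a rough sketch of the shape of our per-line loop with two generic JAXP/JDBC optimizations applied: compiling the stylesheet once into a reusable Templates object instead of reparsing it per line, and batching the database inserts instead of committing one row at a time. This is plain JAXP/JDBC code, not Babeldoc's API; the file names, SQL, and the toXml/bind helpers are hypothetical placeholders for the corresponding pipeline stages.

```java
import javax.xml.transform.*;
import javax.xml.transform.stream.*;
import java.io.*;
import java.sql.*;

public class CsvPipelineSketch {
    public static void main(String[] args) throws Exception {
        TransformerFactory tf = TransformerFactory.newInstance();
        // Compile the XSLT once; Templates is thread-safe and amortizes
        // the compile cost over all 7000+ lines.
        Templates templates = tf.newTemplates(new StreamSource(new File("line.xsl")));

        try (Connection conn = DriverManager.getConnection("jdbc:..."); // placeholder URL
             BufferedReader in = new BufferedReader(new FileReader("big.csv"));
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO target (col1, col2) VALUES (?, ?)")) { // placeholder SQL
            conn.setAutoCommit(false);                   // commit per batch, not per row
            Transformer t = templates.newTransformer();  // cheap, reusable per thread

            String line;
            int batched = 0;
            while ((line = in.readLine()) != null) {
                String xml = toXml(line);                // hypothetical CSV-to-XML stage
                StringWriter out = new StringWriter();
                t.transform(new StreamSource(new StringReader(xml)),
                            new StreamResult(out));
                bind(ps, out.toString());                // hypothetical binding stage
                ps.addBatch();
                if (++batched % 500 == 0) {              // flush every 500 rows
                    ps.executeBatch();
                    conn.commit();
                }
            }
            ps.executeBatch();                           // flush the final partial batch
            conn.commit();
        }
    }

    private static String toXml(String csvLine) {
        // Hypothetical stub; Babeldoc performs this conversion in the real pipeline.
        return "<row/>";
    }

    private static void bind(PreparedStatement ps, String transformedXml) throws SQLException {
        // Hypothetical stub; the real code would extract column values
        // from the transformed output.
        ps.setString(1, transformedXml);
        ps.setString(2, "");
    }
}
```

In our measurements so far, the per-line XSLT setup and the per-row commits look like the dominant costs, which is why the sketch pulls both out of the loop; suggestions beyond these two are very welcome.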