From: Stephan T. <and...@gm...> - 2001-11-10 10:08:26
Hello Bart,

> I was wondering some stuff about your HLib, mostly about the compressor engine...
> I'm guessing you search and replace and then eval the compressed strings?
> If so... is that not awfully slow for the client, offsetting the download time?
>
> I'm interested in developing a _fast_ compressor engine....

HLib's build script has different compression modes. The one you are talking about is the most extreme one (it is invoked with "python build.py -x hlib.js" in the top directory). JavaScript files compressed with this mode have to be dynamically decompressed by the client after they have been loaded.

The JavaScript decompression routine can be found at line #162 in build.py. As you noticed, it works by searching and replacing, but it does not eval the code; it writes it to the document instead.

I have not made any serious measurements, but the decompression routine is implemented quite efficiently. You can hardly measure any time lag on IE or Netscape 6. On the other hand, Netscape 4 has serious problems loading scripts compressed with this algorithm. These problems are caused by Netscape's poorly implemented document.write() method.

I'd like to hear from you whether our build script fits your needs. If not, feel invited to add a new compression mode ;-)

Best regards,
Stephan
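To illustrate the idea, here is a minimal sketch of a dictionary-based search-and-replace decompressor. This is hypothetical code, not HLib's actual routine from build.py: the dictionary, the token characters, and the function names are assumptions for illustration. The principle is the one described above: frequent substrings are replaced by rare single characters at build time, and the client reverses the mapping and emits the result (in HLib's case via document.write()) rather than eval'ing it.

```javascript
// Hypothetical dictionary built at compression time: rare token -> original
// substring. A real build script would pick tokens and substrings by frequency.
var dict = {
  "\u0001": "function ",
  "\u0002": "return ",
  "\u0003": "document."
};

// Reverse the substitution: expand every token back to its substring.
// split/join avoids regex escaping issues with arbitrary token characters.
function decompress(src) {
  for (var token in dict) {
    src = src.split(token).join(dict[token]);
  }
  return src;
}

// Example: a compressed snippet expands back to plain JavaScript source.
var compressed = "\u0001add(a,b){\u0002a+b;}";
var code = decompress(compressed);
// In the browser, the expanded source would then be emitted with
// document.write("<script>" + code + "<\/script>") instead of eval(code).
```

Since each replacement is a simple string operation, the per-load cost is linear in the script size, which matches the observation that the lag is hard to measure on IE or Netscape 6.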