On Sun, Feb 20, 2011 at 7:59 AM, Matthew Mondor <firstname.lastname@example.org> wrote:
> On Sat, 19 Feb 2011 23:43:33 +0000
> Juan Jose Garcia-Ripoll <email@example.com> wrote:
>
> > Would you find it useful to have an ECL that only supports character codes 0
> > - 65535? That would probably make it easier to embed the part of the Unicode
> > database associated with it (< 65535 bytes) and have a standalone executable.
> > Executables would also be a bit faster and use less memory (16 bits vs
> > 32 bits per character).
>
> Would this be an option, or would ECL internally use a 16-bit character
> representation all the time when Unicode support is enabled for the
> build?
It would be optional. It would make the C image smaller, now that the Unicode database is going to be shipped as an object file -- working on it -- and it would definitely save memory, which may help in some contexts.
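
Roughly, the option would look like this -- a minimal sketch, with hypothetical macro and type names rather than ECL's actual ones:

    /* Sketch of a build-time character-width switch; the names
     * ECL_UNICODE_BMP_ONLY, ecl_char_code and ECL_CHAR_CODE_LIMIT
     * are hypothetical, not ECL's actual configuration. */
    #include <stdint.h>

    #ifdef ECL_UNICODE_BMP_ONLY
    typedef uint16_t ecl_char_code;   /* codes 0 - 65535 (the BMP)  */
    #define ECL_CHAR_CODE_LIMIT 0x10000
    #else
    typedef uint32_t ecl_char_code;   /* full range, 0 - 0x10FFFF   */
    #define ECL_CHAR_CODE_LIMIT 0x110000
    #endif

    /* A vector of N characters then costs N * sizeof(ecl_char_code)
     * bytes, i.e. half as much in BMP-only mode. */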
> I know that some processors (including older x86) have faster access
> times for 32-bit words than for 16-bit ones.
That was not my experience when working on ECL's interpreter -- its bytecode is currently 16-bit because that was faster than either 8-bit or 32-bit bytecodes.
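
To illustrate why the wider bytecode can win -- this is a toy dispatch loop, not ECL's interpreter -- a 16-bit word can carry a small operand alongside the opcode, so most instructions cost a single aligned fetch instead of several byte loads:

    /* Toy 16-bit bytecode interpreter: low byte is the opcode,
     * high byte is an immediate operand packed into the same word. */
    #include <stdint.h>

    enum { OP_PUSH_SMALL, OP_ADD, OP_HALT };

    static long run(const uint16_t *code)
    {
        long stack[64], *sp = stack;
        for (;;) {
            uint16_t insn = *code++;         /* one aligned fetch    */
            switch (insn & 0xFF) {           /* low byte: opcode     */
            case OP_PUSH_SMALL:
                *sp++ = insn >> 8;           /* high byte: operand   */
                break;
            case OP_ADD:
                sp[-2] += sp[-1]; --sp;
                break;
            case OP_HALT:
                return sp[-1];
            }
        }
    }

    int main(void)
    {
        const uint16_t prog[] = {
            OP_PUSH_SMALL | (2 << 8),
            OP_PUSH_SMALL | (3 << 8),
            OP_ADD,
            OP_HALT,
        };
        return (int)run(prog);   /* exits with status 5 */
    }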
> As for the 65535 bytes output file limitation, is that more difficult
> to fix? Is it a toolchain-dependent issue over which ECL has no
> control?
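
If the limit in question is the C compiler's cap on string literals (MSVC, for one, rejects literals longer than 65,535 bytes), a common workaround is to emit the data as a numeric initializer list, which is not subject to that particular cap. A sketch of such a generator, with hypothetical names:

    /* Emits a binary blob as a C source file defining an array of
     * numeric initializers, sidestepping per-literal size limits.
     * The name ucd_data is hypothetical. */
    #include <stdio.h>

    int emit_c_array(const unsigned char *blob, size_t n, FILE *out)
    {
        fprintf(out, "const unsigned char ucd_data[%zu] = {\n", n);
        for (size_t i = 0; i < n; i++)
            fprintf(out, "%u,%s", (unsigned)blob[i],
                    (i % 16 == 15) ? "\n" : " ");
        fprintf(out, "};\n");
        return 0;
    }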