From: Josh V. <ho...@na...> - 2001-03-09 20:41:14
Gareth Hughes <ga...@va...> writes:

> My approach for this is perhaps slightly different to yours. I was
> thinking more along the lines of having the compiled functions stored as
> strings, which can be copied and edited by the context as needed. This
> allows the context to insert hard-coded memory references and so on.
> Similarly, I've been kicking around a design of a dynamic software
> renderer, which is built from chunks of compiled code that can be
> tweaked and chained together depending on the current GL state etc. I
> don't think actually "compiling" code is the answer -- it's more a
> customization of pre-compiled code to suit the current context.

In that situation, compiling the code would help greatly. Here is what
you give up by pre-compiling the code:

1. Global optimizations. In the pre-compiled version, there will be an
   artificial boundary at each chunk. Where the dynamically compiled
   version would be free to optimize across chunks, you would be stuck
   forcing the CPU into a known state at each boundary.

2. Easy processor-specific optimizations. If you're compiling at run
   time, you can get processor-specific optimizations just by changing
   the compiler flags (see the sketch below).

3. Portability. The dynamically compiled version would splice the chunks
   together automatically. I can't think of a portable way to concatenate
   pre-compiled code correctly. (Does gcc have an attribute for it?)

4. Flexibility. The pre-compiled code would have to follow a rigid
   template. If you compile at run time, you have the flexibility to
   change variable types and structure layouts at run time.

Of course, you do have to take a start-up penalty with run-time compiled
code. Considering that the average system is around 700 MHz (just a
guess) and getting faster every day, I think people may be overestimating
how expensive using a real compiler would be.

Josh
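
[Editor's note: for concreteness, here is a minimal sketch of the kind of
run-time pipeline argued for above, assuming a POSIX system with a C
compiler ("cc") on the PATH and dlopen() available. The generated xform()
function and the scale/bias state values are hypothetical, purely for
illustration: state-dependent constants are baked into the generated
source (points 1 and 4), the -march flag selects processor-specific code
(point 2), and dlopen() splices the result in portably (point 3).]

#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

typedef void (*xform_fn)(float *v, int n);

int main(void)
{
    /* Hypothetical GL-state-dependent constants, baked directly into
     * the generated source so the compiler can fold and optimize them
     * instead of loading them through a context pointer. */
    float scale = 2.0f, bias = 0.5f;

    FILE *f = fopen("/tmp/xform.c", "w");
    if (!f) { perror("fopen"); return 1; }
    fprintf(f,
        "void xform(float *v, int n)\n"
        "{\n"
        "    int i;\n"
        "    for (i = 0; i < n; i++)\n"
        "        v[i] = v[i] * %ff + %ff;\n"
        "}\n", scale, bias);
    fclose(f);

    /* Processor-specific optimization is just a compiler flag; swap
     * the -march value for whatever the host CPU actually is. */
    if (system("cc -O2 -march=native -shared -fPIC "
               "-o /tmp/xform.so /tmp/xform.c") != 0)
        return 1;

    /* Splice the freshly compiled chunk into the running process. */
    void *lib = dlopen("/tmp/xform.so", RTLD_NOW);
    if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    xform_fn xform = (xform_fn)dlsym(lib, "xform");
    if (!xform) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    float verts[4] = { 1, 2, 3, 4 };
    xform(verts, 4);
    printf("%f %f %f %f\n", verts[0], verts[1], verts[2], verts[3]);

    dlclose(lib);
    return 0;
}

[Built with something like "cc demo.c -ldl", this prints the transformed
vertices. A real renderer would presumably cache the resulting .so keyed
on the relevant GL state, so the start-up penalty of running the compiler
is paid only when that state actually changes.]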