From: Kevin S. <kt...@te...> - 2003-01-22 20:08:44
On Wed, 2003-01-22 at 09:22, nl...@nl... wrote:
> How do you test performance of a real compiled language vs a byte code
> interpreted language, and how can you determine if cache issues are
> indeed responsible for better performance of byte code interpreters?

When I did it, it was on the Playstation, which was an environment I had control over (no OS to speak of), so benchmarking was pretty easy. We ported some logic from C++ to aicomp and it ran faster.

> I'd really like a byte-code-interpreter that is faster than compiled code,
> but I have trouble believing that this can actually be the case (and
> would want to be able to verify why it is faster if it is the case).

A compiled script is a form of compression: one atom (whatever size you use for scripting codes; I have done a system with 8 bit codes, and another with 16 bit codes) is smaller than the code the interpreter runs as a result of that atom. So assuming opcodes are used more than once in the script program, overall there will be fewer data transfers from memory (the native code to implement the opcode gets loaded into cache once, then each subsequent use only loads a few atoms' worth of data).

A concrete example: I wrote a scripting language which implemented a 2D graphics library using 16 bit opcodes & operands. One of the instructions was "draw a line". It took 5 parameters: x1, y1, x2, y2, color. So each draw-line command consumed 12 bytes of memory.

Contrast this to a function call in C/C++, which would consist of stack frame creation, then 5 push opcodes (one for each parameter), then the call itself. It would look something like this (I think the x86 has a push immediate; if not, this would be even bigger):

    push color        ; 3 bytes each
    push y2
    push x2
    push y1
    push x1
    call DrawLine     ; 3 or 5 bytes (short or long call)
    add  sp,framesize ; 3 bytes

(My x86 is rusty; I don't remember exactly how to specify the size of the constant being pushed, something like WORD in there somewhere.)

So, assuming the best case that each opcode is 1 byte, the above is 21 or 23 bytes long. As you can see, the code version is almost twice as big (and this is a best case; the actual output from the compiler could be bigger). The reason for this is that for each parameter not only the data must be specified, but what to do with it as well. Note that what to do with it is repetitious (put it on the stack), which is wasteful (and compressible).

What does any of this have to do with the cache? The purpose of the cache is to reduce the number of main memory fetches that occur, because main memory is much slower than the processor. The above cooperates with the cache due to 2 things:

1. Reuse. Once the portion of the script interpreter for drawing a line is in the cache, it doesn't need to get fetched again.
2. The total number of memory fetches is reduced, since reading 12 bytes of script data is less than reading 21 bytes of native machine code.

The above is the smallest-case savings, where both the script interpreter and the native code call a function to do the actual work of drawing a line. It is better to have as much of the drawing function inlined as possible; it avoids the overhead of the function call and gives better cache coherency.

Of course this example is a bit contrived; one would hope that the programmer wouldn't hard code line coordinates like this (although I have seen much code which did exactly this).
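To make that concrete, here is a minimal sketch of the kind of dispatch loop being described (illustrative only, not WF's actual scripting core; the opcode values and the DrawLine stub are made up):

    #include <cstdint>

    enum : uint16_t { OP_END = 0, OP_DRAW_LINE = 1 }; // hypothetical opcodes

    static void DrawLine(int x1, int y1, int x2, int y2, int color)
    {
        // Stand-in for the real 2D primitive.
        (void)x1; (void)y1; (void)x2; (void)y2; (void)color;
    }

    // Each 16-bit atom is an opcode or an operand, so one draw-line
    // command is 6 atoms = 12 bytes, as counted above. The native code
    // implementing each case is fetched into cache once and then reused.
    void RunScript(const uint16_t* pc)
    {
        for (;;)
        {
            switch (*pc++)
            {
            case OP_DRAW_LINE:
                DrawLine(pc[0], pc[1], pc[2], pc[3], pc[4]);
                pc += 5;
                break;
            case OP_END:
                return;
            }
        }
    }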
If the goal is only to draw a bunch of lines, then a C loop through a coordinate data array would be even faster than the script interpreter approach. Where the script becomes the right choice is in the middle ground, between wanting to do only one thing (like draw lines) and needing to do absolutely anything (in which case the native language is best). An example middle ground would be 2D drawing primitives, or AI logic.

All of this is based on tests I did on the Sony Playstation, which has the most pathetic cache system I have ever seen (4K code cache, 4K of broken data cache which the program has to use explicitly). A modern PC is clearly a different beast, but I think some of the same truths will apply.

Another thing to consider: inlining is the enemy of the cache, because it puts the same code in many different places, so the cache can't notice the redundancy. Therefore inlining should only be done when the resulting code is smaller than the function call would be. In general smaller code will tend to run faster as it fits into the cache more often, so in some cases setting optimizations to favor size over speed might actually execute faster.

> How do you test or effectively monitor cache performance? Furthermore,
> isn't relying on a cache advantage a shaky proposition, since
> then your performance depends on the memory access pattern of your code,
> which is generally not something that you explicitly control?

I don't really know how to measure the cache specifically, but one can benchmark the whole program and get an idea where the time is going. BTW, there are multiple ways to measure a program's performance. What seems to be most popular is to run a profiler, which interrupts the program thousands of times a second and notes the PC, capturing a histogram of where the program spends its time. This sort of profiling is valuable, but it can't reveal anything about cache issues, since a cache hit is not something which is embodied in PC addresses (it also includes level loading in the profile, which skews the results).

The sort of profiling I prefer is to add code to your program which times sub-sections, which gives a complete picture of the performance of each. When doing console games I accomplished this by having each sub-section set the screen background to a different color, so down the screen there would be color bands indicating what was executing. (This only works if the game runs at 60Hz, or if you only enable color setting for a portion of the execution less than 1/60th of a second; otherwise the colors overlap and are difficult to read, although it can be done with a VCR by pausing.) This was pretty nice because one could run the game, move around, and see portions of gameplay affect performance.

I haven't done this sort of profiling on the PC; I imagine I would just read the high resolution clock and write values to a file.

> I've been looking at Small (http://www.compuphase.com/small.htm), which
> seems quite good. It has a virtual machine interpreting byte code at
> run-time, is a small language, has fixed/float support, coroutine support,
> interfaces to C, has just-in-time compilation to native code for Windows,
> and a GCC-optimized VM for Linux (but still slower than the JIT).

Be sure to add that link to the scripting language wiki page.

> I still need to do performance tests though.

WF needs a performance test library; if I ever have the time I might develop one.
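A rough sketch of that sub-section timing on a PC, assuming only a steady high-resolution clock (std::chrono here; the section names are illustrative):

    #include <chrono>
    #include <cstdio>

    // Times one named sub-section; prints on destruction. A real version
    // would accumulate results per frame and write them to a file.
    struct SectionTimer
    {
        const char* name;
        std::chrono::steady_clock::time_point start;
        explicit SectionTimer(const char* n)
            : name(n), start(std::chrono::steady_clock::now()) {}
        ~SectionTimer()
        {
            auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                std::chrono::steady_clock::now() - start).count();
            std::printf("%s: %lld us\n", name, static_cast<long long>(us));
        }
    };

    void RunOneFrame()
    {
        { SectionTimer t("physics");   /* ... physics update ... */ }
        { SectionTimer t("collision"); /* ... collision pass ... */ }
        { SectionTimer t("render");    /* ... draw the frame ... */ }
    }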
--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: <nl...@nl...> - 2003-01-22 15:58:24
On Sun, Dec 08, 2002 at 06:30:23PM -0800, Kevin Seghetti wrote:
> On Sun, 2002-12-08 at 19:26, nl...@nl... wrote:
> > My experience with scripting is that it is a major performance problem
> > because you take a large run-time hit for every helper function you
> > declare.
>
> I want to find something in between, which is compiled at level
> conversion time and just has some sort of byte code interpreter at run
> time (which in many cases can run faster than a real compiled language
> due to cache issues).

How do you test performance of a real compiled language vs a byte code interpreted language, and how can you determine if cache issues are indeed responsible for better performance of byte code interpreters? I'd really like a byte-code-interpreter that is faster than compiled code, but I have trouble believing that this can actually be the case (and would want to be able to verify why it is faster if it is the case).

How do you test or effectively monitor cache performance? Furthermore, isn't relying on a cache advantage a shaky proposition, since then your performance depends on the memory access pattern of your code, which is generally not something that you explicitly control? I do like scripting systems; it's just that I have trouble understanding the "cache" argument.

> I am going to look at lua as the next candidate.

I've been looking at Small (http://www.compuphase.com/small.htm), which seems quite good. It has a virtual machine interpreting byte code at run-time, is a small language, has fixed/float support, coroutine support, interfaces to C, has just-in-time compilation to native code for Windows, and a GCC-optimized VM for Linux (but still slower than the JIT). I still need to do performance tests though.

-Norman
From: <nl...@nl...> - 2003-01-13 11:52:44
There is a pair of interesting articles on Gamasutra about camera handling. It sounds similar to World Foundry's camera system, using a model to select and join together the best camera shots. It would be interesting to compare the systems and maybe pick up a few ideas.

http://www.gamasutra.com/features/20030108/hawkins_01.htm
http://www.gamasutra.com/features/20030110/hawkins_01.htm

-Norman
From: <nl...@nl...> - 2003-01-13 11:48:39
Well, maybe not "famous", but it was mentioned at:

http://groups.yahoo.com/group/opengl-gamedev-l/message/21877

This was a thread where someone asked about available 3D engines for use in a skeletal animation project. From what I recall, you said that skeletal support is no longer in World Foundry, although by leveraging Nebula for mesh skinning/deformation it should not be too much work to put such a system back in.

-Norman
From: <nl...@nl...> - 2003-01-12 22:33:43
Browsing CVS, I saw a "racing" directory in wflevels (unfortunately currently empty). Has a new level been built? Until we get the level editing working, I'm always interested in seeing new assets :-)

Speaking of level editing: until "blind data editing" (apparently this is Maya's term for attaching arbitrary binary data to an object - I think the term is a good one) works in Blender, we have to store the override data somewhere else. Python support in Blender will take a few months to settle down. My suggestion is to use VRML2 export to export position, geometry, and u,v texture coordinates (which appears to work well), and to use extra files for storing the attribedit override data - a file named object.iff.txt for the object named "object" in the Blender level file. This causes the creation of lots of little *.iff.txt files in the level directory, but it is an approach which is easy to implement, easy to understand, and easy to debug. Later on, we can worry about attaching the binary data to the Blender objects and saving this in the Blender file. The only problem with this approach is that if you rename an object, its override file needs to be (manually) renamed or copied.

An external script would then parse the VRML file and the *.iff.txt files and create the complete IFF level file. A bit of a hack, but it would work. Sound reasonable?

-Norman
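A small sketch of the proposed convention (the level path is hypothetical; the merge script itself doesn't exist yet), just enumerating the per-object override files such a script would combine with the exported VRML2 data:

    #include <filesystem>
    #include <iostream>
    #include <string>

    int main()
    {
        namespace fs = std::filesystem;
        const std::string suffix = ".iff.txt"; // one "object.iff.txt" per object
        for (const auto& entry : fs::directory_iterator("levels/mylevel"))
        {
            const std::string name = entry.path().filename().string();
            if (name.size() > suffix.size() &&
                name.compare(name.size() - suffix.size(),
                             suffix.size(), suffix) == 0)
                std::cout << "override for object '"
                          << name.substr(0, name.size() - suffix.size())
                          << "'\n";
        }
    }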
From: Kevin S. <kt...@te...> - 2003-01-12 20:28:53
On Sun, 2003-01-12 at 07:45, nl...@nl... wrote:
> With regard to adding float support to attribedit, should new classes
> TypeFloat32 and GUIElementFloat be added, or should the existing
> TypeFixed32/GUIElementFixed be renamed to TypeScalar32/GUIElementScalar
> with the actual type (fixed/float/double) being defined at compile time
> depending on the WF scalar type?

This is the first time I have thought all of this through, so I could be missing something. Hmmm, that is an interesting question; both answers make sense in different contexts. From a general attribute editor point of view (not specific to WF), AttribEdit ought to be able to edit both types simultaneously. (Or maybe not - if a system supports floats, who would want to use the fixed type?)

I don't want a 'Scalar' attribute in the oad .iff files, because I want the .iff file to be self-describing (one should be able to tell the actual type of a field by its name). So in the .iff file I think we should add a type 'FLOT' and a type 'DOBL'.

Given that, it would probably make sense for levelcon to be able to convert floats to fixed point when building a level file for the fixed point version of the engine (a sketch of that conversion follows below).

Given THAT, maybe AttribEdit should ALWAYS edit floats (or doubles), and leave it to the level converter to truncate to fixed if needed. To do that would require updating the oad generation process to output 'DOBL' for the oad files (but still mark them as fixed in the in-game structures, if building fixed point).

> The latter choice would mean that there is only ONE scalar type per
> level - no mixing of fixed, float, and double would be possible. The
> former choice would allow mixing of fixed, float, and double attributes
> in a level.

Since it is a compile time switch in the engine, I agree there should only be fixed or float in the level (and the whole game for that matter). However, there might be some value to supporting multiple precisions at once (float vs double, and 32 bit fixed vs 64 bit fixed, which we do have internally in WF).

I can't think of a reason to have AttribEdit support fixed any longer, just float & double (since the only purpose of the fixed point code was speed on systems which don't support floating point).

In summary, I think the tools should all work only in floating point, the engine should be buildable for fixed or float, and iff2lvl should convert as appropriate. So I am suggesting:

* add 'FLOT' & 'DOBL' support to iffcomp
* add the ability to edit 'FLOT' and 'DOBL' to AttribEdit
* add the ability to read 'FLOT' and 'DOBL' chunks to iff2lvl (and output fixed point)
* modify oad file generation to output 'FLOT' instead of 'F32' (and consider adding a double precision scalar type which would output 'DOBL')
* finally, add a switch to iff2lvl to output floating point data in the lvl file and change the .ht file generation to generate headers which specify floats (both of these would occur based on the SCALAR_TYPE switch)

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
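A minimal sketch of the float-to-fixed truncation iff2lvl would need, assuming a 16.16 fixed-point layout (WF's actual Scalar representation may differ):

    #include <cstdint>
    #include <cmath>

    // Convert a 'FLOT'/'DOBL' value from the level file to 16.16 fixed
    // point when building for the fixed-point engine.
    int32_t FloatToFixed(double value)
    {
        const double maxVal = 2147483647.0 / 65536.0; // largest 16.16 value
        // Clamp to the representable range to avoid overflow.
        if (value >  maxVal)  value =  maxVal;
        if (value < -32768.0) value = -32768.0;
        return static_cast<int32_t>(std::lround(value * 65536.0));
    }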
From: <nl...@nl...> - 2003-01-12 14:21:32
With regard to adding float support to attribedit, should new classes TypeFloat32 and GUIElementFloat be added, or should the existing TypeFixed32/GUIElementFixed be renamed to TypeScalar32/GUIElementScalar with the actual type (fixed/float/double) being defined at compile time depending on the WF scalar type?

The latter choice would mean that there is only ONE scalar type per level - no mixing of fixed, float, and double would be possible. The former choice would allow mixing of fixed, float, and double attributes in a level. Which is better? I think the latter (TypeScalar32/GUIElementScalar) is better; see the sketch below for what the compile-time selection could look like.

Is this a good plan for working on attribedit in this regard:

* use the fixed point scalar type, compile wftools
* generate an *.iff.txt file defining a scalar attribute with a default value
* use iffcomp to compile the *.iff.txt into the binary *.iff
* use iffdump to verify that the correct default values are in the iff file
* change the WF scalar type to float, recompile the tools (iffcomp, iffdump)
* re-generate *.iff (with floats) from *.iff.txt using the new iffcomp
* re-run iffdump to verify that the correct (float) default values are in the iff file
* bug-fix iffcomp/iffdump as needed until they successfully dump the float values in *.iff
* increase the default scalar value in *.iff.txt to a large number not expressible in fixed point, re-run iffcomp/iffdump, and verify that the large value is being dumped (to verify actual usage of floats)
* rename the attribedit types to TypeScalar32/GUIElementScalar instead of TypeFixed32/GUIElementFixed
* define a scalar attribute with huge min/max values, use the float scalar type, edit the attribute in attribedit, enter a huge value, save the override file, and verify that the huge float value is correctly saved in the override file

This is needed to enter large floating-point parameters in attribedit for ODE-type parameters (e.g. the maximum force available to a motor), which need large values outside the range of fixed point.

-Norman
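A sketch of what that compile-time selection might look like (illustrative; the macro names here are assumptions, not WF's actual switches):

    #include <cstdint>

    // Exactly one scalar representation per build, so a level can't mix
    // fixed, float, and double attributes.
    #if defined(WF_SCALAR_FIXED)
    typedef int32_t Scalar32;   // 16.16 fixed point (assumed layout)
    #elif defined(WF_SCALAR_DOUBLE)
    typedef double  Scalar32;
    #else
    typedef float   Scalar32;   // default: single precision float
    #endif

    // TypeScalar32/GUIElementScalar would then traffic in Scalar32
    // without caring which representation was chosen at compile time.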
From: <nl...@nl...> - 2002-12-29 17:00:44
On Sun, Dec 29, 2002 at 08:42:10AM -0800, Kevin Seghetti wrote:
> Rooms are used for several things, collision detection culling is just
> one of them.
>
> 1. They are also used to decide which objects to update (the whole level
> doesn't run, only the currently active rooms). If the whole level was
> under one octree the engine would have to run all objects.

My idea is to only run those objects in the activity bubble(s), where "run" means any kind of update whatsoever (collision, physics, scripts). Anything outside of the activity bubble(s) is not updated at all by the engine. In other words, replace the concept of "room" with "octree node", and replace the concept of "3 rooms active" with "all nodes active within the activity bubble(s)".

> 2. Similar to 1, they are used to determine which objects to render.
> This would probably be better handled by a portal system, except I
> like that only objects which are currently running are rendered, so I
> guess regardless of how (1) is accomplished it would also be used for
> (2) (with the possibility of portals and frustum culling added on top)

Similarly, you would traverse the octree, finding all nodes within a "visibility bubble", and only render those. The visibility bubble could be the same as the activity bubble, or it could be smaller. This could be combined with a portal or an occlusion culling ("reverse portal") scheme.

> 3. Asset management. This is an important one. WF doesn't load all
> models, animations, textures, sound effects, etc. for the whole level
> into memory at once, just those used in the current rooms (and what is
> persistent). Rooms are used to divide the assets into easily loadable
> chunks.

I see this as the main area where you lose control if you do away with the concept of "rooms".

-Norman
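A sketch of the per-frame activity-bubble query (hypothetical node layout; a real octree and its object lists would be more involved):

    #include <vector>

    struct Sphere { float x, y, z, r; };          // the activity bubble

    struct OctreeNode
    {
        float lo[3], hi[3];                       // node bounds
        OctreeNode* child[8];                     // null when not subdivided
        std::vector<int> objects;                 // ids of objects stored here
    };

    static bool Overlaps(const OctreeNode& n, const Sphere& s)
    {
        // Squared distance from the sphere center to the node's box.
        const float c[3] = { s.x, s.y, s.z };
        float d = 0.0f;
        for (int i = 0; i < 3; ++i)
        {
            if (c[i] < n.lo[i]) { float e = n.lo[i] - c[i]; d += e * e; }
            if (c[i] > n.hi[i]) { float e = c[i] - n.hi[i]; d += e * e; }
        }
        return d <= s.r * s.r;
    }

    // Collect every object in nodes overlapping the bubble; only these
    // get collision, physics, and script updates this frame.
    void GatherActive(const OctreeNode& n, const Sphere& bubble,
                      std::vector<int>& active)
    {
        if (!Overlaps(n, bubble))
            return;
        active.insert(active.end(), n.objects.begin(), n.objects.end());
        for (const OctreeNode* c : n.child)
            if (c) GatherActive(*c, bubble, active);
    }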
From: Kevin S. <kt...@te...> - 2002-12-29 16:44:58
(Adding this discussion to wfcode; I need to start making sure more WF related emails end up there.)

On Sun, 2002-12-29 at 07:21, nl...@nl... wrote:
> On Thu, Dec 26, 2002 at 03:20:32PM -0800, Kevin Seghetti wrote:
> > On Thu, 2002-12-26 at 07:59, nl...@nl... wrote:
> > So my current thinking is that each room is a separate space, with the
> > objects that are entirely inside of the room in that room's space. The
> > rooms are adjacent but don't overlap. Each frame each moving object is
> > checked against its containing room; if entirely contained then no action
> > is taken, but if it is completely or partially out of the room then the
> > object is removed from the room's space and added to the global space
> > (the same space that the rooms' spaces are contained in). Objects in the
> > global space get checked against every other object in the global space.
>
> Taking this idea to its logical extreme, you end up with the idea not
> of a "global space" and "room spaces", but an entire hierarchy of
> spaces - rooms, groups of rooms, groups of groups, etc. If an object
> cannot fit inside its current node in the hierarchy, it gets moved up
> to the higher levels until it fits, then gets pushed back down into lower
> levels (if possible) until it fits (thus an object straddling two rooms
> would be placed in the group above, while an object moving totally from
> one room to the next would be pushed up to the group level then back down
> into the second room).

Sounds good to me (but I have never worked with such a system, so I don't know its pitfalls yet).

> Then there's the question of how to generate this room hierarchy. An
> octree can do this automatically.
>
> I'm thinking about trying to use an octree in this manner. By default,
> everything is inactive. Each frame, you see which nodes in the octree
> overlap a certain sphere or bbox around the player - the "activity bubble".
> (There can be multiple bubbles around multiple players, etc.) For all
> nodes, run collision detection and physics for the objects. (An object
> must only be collided against objects in its current node and all objects
> in child nodes. In particular, since octree nodes don't overlap, this
> avoids collision checks between adjacent child nodes, i.e., adjacent
> rooms.) Then after this frame everything is again set to inactive, and
> in the next frame a new activity bubble is computed (to account for
> player and object movement), physics is only run for this region, and so
> forth. Note that moving objects (due to physics updates) require
> shuffling objects around in the octree - not terribly expensive, but
> the cost must be kept in mind.

I haven't looked at how octrees are implemented, but I suspect that you don't want to start over each frame. It seems to me the overhead would approach the time the existing collision system takes to determine where in the tree everything goes. On the other hand, if you just adjusted the tree each frame to help push nodes near the root toward the leaves, that might be fairly efficient.

> This would have the advantage of freeing the level designer from
> needing to think about "rooms" - you just put geometry where you want
> it, and the octree automatically hierarchically divides the entire
> level geometry into non-overlapping regions.
>
> Furthermore, Nebula has an octree implementation which looks very
> simple, clean, and extensible. In fact, it does exactly the above
> operations (push up until the node fits, push down until it fits in
> a child) whenever you move an object in the octree, leading me to
> believe that this is a workable approach.

Oh, maybe adjusting the tree is what you were suggesting. I don't have first-hand experience with this, but it sounds like it would work for collision.

> This could be implemented outside of ODE as a pre-filtering layer,
> or it could be implemented as a new space type in ODE, e.g. dxOctreeSpace.
>
> Do you see advantages to a manual room-based approach, or do you think an
> automatic octree approach is just as flexible with the added advantage
> of being automatic and multi-level hierarchical? Again, I'm trying to find
> a general-purpose, automatic solution for "the ultimate engine".

Rooms are used for several things, collision detection culling is just one of them.

1. They are also used to decide which objects to update (the whole level doesn't run, only the currently active rooms). If the whole level was under one octree the engine would have to run all objects.

2. Similar to 1, they are used to determine which objects to render. This would probably be better handled by a portal system, except I like that only objects which are currently running are rendered, so I guess regardless of how (1) is accomplished it would also be used for (2) (with the possibility of portals and frustum culling added on top).

3. Asset management. This is an important one. WF doesn't load all models, animations, textures, sound effects, etc. for the whole level into memory at once, just those used in the current rooms (and what is persistent). Rooms are used to divide the assets into easily loadable chunks.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
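A sketch of the push-up/push-down placement being discussed (modeled on the behavior attributed to Nebula's octree, not its actual code; the types are made up):

    struct Box { float lo[3], hi[3]; };

    struct Node
    {
        Box   bounds;
        Node* parent;    // null at the root
        Node* child[8];  // null when not subdivided
    };

    static bool Contains(const Box& outer, const Box& inner)
    {
        for (int i = 0; i < 3; ++i)
            if (inner.lo[i] < outer.lo[i] || inner.hi[i] > outer.hi[i])
                return false;
        return true;
    }

    // Find the smallest node that fully contains the object's bbox:
    // push up until it fits, then push down while some child still fits.
    Node* Place(Node* current, const Box& objectBox)
    {
        while (current->parent && !Contains(current->bounds, objectBox))
            current = current->parent;                    // push up

        for (;;)                                          // push down
        {
            Node* fitting = nullptr;
            for (Node* c : current->child)
                if (c && Contains(c->bounds, objectBox)) { fitting = c; break; }
            if (!fitting)
                return current;  // e.g. an object straddling two children
            current = fitting;
        }
    }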
From: Kevin S. <kt...@te...> - 2002-12-29 16:28:54
On Sun, 2002-12-29 at 07:08, nl...@nl... wrote:
> On Thu, Dec 26, 2002 at 06:56:15PM -0800, Kevin Seghetti wrote:
> > On Thu, 2002-12-26 at 04:19, nl...@nl... wrote:
> > > On my Debian GNU/Linux unstable distribution with gcc 2.95.4, I had to
> > > add the following to the compiler flags to get the new floating point
> > > scalar type to compile:
> > >
> > > -D_ISOC99_SOURCE=1 -D_BSD_SOURCE=1
> >
> > What do these do? What am I doing with floats that is unusual?
> >
> > > Furthermore, scalar.hp needs to #include <math.h>
>
> After some research, I think the issue is not gcc 2.95.4, but rather
> my libc version. Later versions of libc apparently do not make
> certain math functions (trunc, atan2, etc) available in math.h unless
> _ISOC99_SOURCE and _BSD_SOURCE are set. I believe this has to do with
> some standardization efforts and making gcc/libc standards-compliant.

I am using gcc 3.2 on RedHat 8.0; I must assume it uses a newer libc than gcc 2.95.4 does (how do I find out the version of libc?), and I don't need to set those switches. (I don't mind having them, I just want to understand why they are suddenly needed.)

> > Crap, I totally forgot about testing the tools with Scalar set to float.
>
> I managed to get attribedit to compile with floats, but it behaved
> incorrectly (sliders wouldn't move, etc), and I didn't have time to
> investigate further.

Did you have to change much? Does it still work built for fixed point? If so, feel free to check the changes in (it doesn't make things worse).

> > Another issue is whether I want floating point data in the binary level
> > files and cd.iff. Currently all data is stored in fixed point, and gets
> > converted to Scalar at load time (that is recent; before I did float
> > support it was copied directly). Clearly it would be more efficient and
> > accurate to store floats if Scalar is type float. But then cd.iff would
> > no longer be completely portable (currently the same cd.iff file works
> > with all possible permutations of compile and run time switches).
>
> Why would cd.iff no longer be portable?

cd.iff would not be portable between WF built for fixed point, WF built for float, and WF built for double. Otherwise I need to add some sort of flag which tells me which on-disk format is being used and write loaders which convert.

> I suppose you mean that dumping the 4
> floating point bytes to disk, raw, is non-portable (but aren't there IEEE
> standards defining how floating-point numbers should be represented?)

Yes, I am pretty sure I can just copy the binary float data straight to disk and have it come back correct elsewhere (as long as the float size is the same).

> One possibility would be to use a custom software floating-point
> implementation with a defined byte storage format. You could then
> manually read/write the 4 raw bytes, and interpret them in software
> to become a floating point value, and only then assign this value to
> a real floating point variable. Would that solve the problem?

I think we are pretty safe relying on the floating point spec, but I should do some web research to be sure.

> > Another issue is Scalar can actually be set to 2 different float types:
> > float and double.
>
> This could again be solved with a software floating point emulation
> also supporting a double format, right?

The issue is I would need some way to flag which representation is in cd.iff. Currently the binary data in cd.iff does contain type information (that is compiled in from the .ht files), but there is no support for the same variable having more than one potential type. I would probably implement this as a file-wide flag (all Scalars would have to be fixed, float, or double).

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
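A sketch of reading/writing an IEEE-754 float with a pinned byte order (assuming both ends use IEEE-754 single precision, as discussed above; only endianness needs handling):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Write a float as 4 bytes in little-endian order so the on-disk
    // bytes mean the same thing regardless of the host's endianness.
    void WriteFloatLE(std::FILE* f, float value)
    {
        static_assert(sizeof(float) == 4, "IEEE-754 single precision assumed");
        uint32_t bits;
        std::memcpy(&bits, &value, sizeof bits);
        const unsigned char b[4] = {
            static_cast<unsigned char>(bits & 0xff),
            static_cast<unsigned char>((bits >> 8) & 0xff),
            static_cast<unsigned char>((bits >> 16) & 0xff),
            static_cast<unsigned char>((bits >> 24) & 0xff),
        };
        std::fwrite(b, 1, 4, f);
    }

    float ReadFloatLE(std::FILE* f)
    {
        unsigned char b[4] = {};
        std::fread(b, 1, 4, f);
        const uint32_t bits = uint32_t(b[0]) | (uint32_t(b[1]) << 8)
                            | (uint32_t(b[2]) << 16) | (uint32_t(b[3]) << 24);
        float value;
        std::memcpy(&value, &bits, sizeof value);
        return value;
    }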
From: <nl...@nl...> - 2002-12-29 13:42:23
On Thu, Dec 26, 2002 at 06:56:15PM -0800, Kevin Seghetti wrote:
> On Thu, 2002-12-26 at 04:19, nl...@nl... wrote:
> > On my Debian GNU/Linux unstable distribution with gcc 2.95.4, I had to
> > add the following to the compiler flags to get the new floating point
> > scalar type to compile:
> >
> > -D_ISOC99_SOURCE=1 -D_BSD_SOURCE=1
>
> What do these do? What am I doing with floats that is unusual?
>
> > Furthermore, scalar.hp needs to #include <math.h>

After some research, I think the issue is not gcc 2.95.4, but rather my libc version. Later versions of libc apparently do not make certain math functions (trunc, atan2, etc) available in math.h unless _ISOC99_SOURCE and _BSD_SOURCE are set. I believe this has to do with some standardization efforts and making gcc/libc standards-compliant.

> Crap, I totally forgot about testing the tools with Scalar set to float.

I managed to get attribedit to compile with floats, but it behaved incorrectly (sliders wouldn't move, etc), and I didn't have time to investigate further.

> Another issue is whether I want floating point data in the binary level
> files and cd.iff. Currently all data is stored in fixed point, and gets
> converted to Scalar at load time (that is recent, before I did float
> support it was copied directly). Clearly it would be more efficient and
> accurate to store floats if Scalar is type float. But then cd.iff would
> no longer be completely portable (currently the same cd.iff file works
> with all possible permutations of compile and run time switches).

Why would cd.iff no longer be portable? I suppose you mean that dumping the 4 floating point bytes to disk, raw, is non-portable (but aren't there IEEE standards defining how floating-point numbers should be represented?)

One possibility would be to use a custom software floating-point implementation with a defined byte storage format. You could then manually read/write the 4 raw bytes, and interpret them in software to become a floating point value, and only then assign this value to a real floating point variable. Would that solve the problem?

> Another issue is Scalar can actually be set to 2 different float types:
> float and double.

This could again be solved with a software floating point emulation also supporting a double format, right?

-Norman
From: Kevin S. <kt...@te...> - 2002-12-27 02:59:41
On Thu, 2002-12-26 at 04:19, nl...@nl... wrote:
> On my Debian GNU/Linux unstable distribution with gcc 2.95.4, I had to
> add the following to the compiler flags to get the new floating point
> scalar type to compile:
>
> -D_ISOC99_SOURCE=1 -D_BSD_SOURCE=1

What do these do? What am I doing with floats that is unusual?

> Furthermore, scalar.hp needs to #include <math.h>

It wasn't? Huh, I wonder why it works on other distributions.

> Since I am not sure where the best place is to add these -D flags in
> the WF makefiles, I have not checked this in. I assume these symbols
> will have no negative effect on other distributions/compiler versions.

Probably not. The right place is in the non-existent configure scripts. In the meantime you should just add a section to http://worldfoundry.org/wfwiki/index.php/Installation for that distribution/compiler combination (the right file to suggest putting them in is probably GNUMakefile.linux).

> I am experimenting with the new scalar float type not because of ODE,
> but because I need scalar support in attribedit. It seems that the iffwrite
> library doesn't yet work with the new floating point scalars. Is this
> correct?

Crap, I totally forgot about testing the tools with Scalar set to float. I am sure that iffwrite needs to be updated (if for no other reason than to prevent loss of resolution). Another issue is iffwrite doesn't support a floating point output type (one should be able to write both fixed and floating point output regardless of the internal Scalar representation).

Another issue is whether I want floating point data in the binary level files and cd.iff. Currently all data is stored in fixed point, and gets converted to Scalar at load time (that is recent; before I did float support it was copied directly). Clearly it would be more efficient and accurate to store floats if Scalar is type float. But then cd.iff would no longer be completely portable (currently the same cd.iff file works with all possible permutations of compile and run time switches).

Another issue is Scalar can actually be set to 2 different float types: float and double.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: <nl...@nl...> - 2002-12-26 10:54:40
On my Debian GNU/Linux unstable distribution with gcc 2.95.4, I had to add the following to the compiler flags to get the new floating point scalar type to compile:

-D_ISOC99_SOURCE=1 -D_BSD_SOURCE=1

Furthermore, scalar.hp needs to #include <math.h>

Since I am not sure where the best place is to add these -D flags in the WF makefiles, I have not checked this in. I assume these symbols will have no negative effect on other distributions/compiler versions.

I am experimenting with the new scalar float type not because of ODE, but because I need scalar support in attribedit. It seems that the iffwrite library doesn't yet work with the new floating point scalars. Is this correct?

Thanks,
-Norman
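The same effect can be had in the source rather than the compiler flags (a sketch, assuming, per the discussion elsewhere in this thread, that the macros only matter when defined before the first libc header is included):

    // Feature-test macros must precede the first libc include; this is
    // why they are normally passed as -D flags from the makefile.
    #define _ISOC99_SOURCE 1
    #define _BSD_SOURCE 1
    #include <math.h>   // now declares trunc(), atan2(), etc. on older glibc

    int main()
    {
        // Use the functions the thread mentions, just to prove visibility.
        return static_cast<int>(trunc(atan2(0.0, 1.0)));
    }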
From: <nl...@nl...> - 2002-12-19 20:33:20
Hi,

A couple of questions about using attribedit standalone.

1. How can I make attribedit use the new floating point scalar values that WF uses? Currently fractional values appear to be saved both as a string (what the user entered) and as a fixed-point value. For ODE-type attributes (e.g. the maximum force available to a motor) I sometimes want huge values, much larger than can be expressed with fixed point (and attribedit complains if my values are too large).

2. How hard is it to isolate the attribedit code so it is compilable and distributable separately? I've already started down this path, but have not succeeded yet, and wonder how much of WF I need to pull in just to compile attribedit. I mention this because it may be interesting to submit attribedit to the Nebula project (which currently lacks tools), which could also bolster interest in WF.

-Norman
From: <nl...@nl...> - 2002-12-09 08:23:10
On Sun, Dec 08, 2002 at 06:30:23PM -0800, Kevin Seghetti wrote:
> On Sun, 2002-12-08 at 19:26, nl...@nl... wrote:
> > In other words with appropriate macros (or helper functions) you can
> > define a "type", where a "type" is a mapping of attribute name to
> > mailbox number. Given a game object, and given its type, you can then
> > access its mailbox values by attribute name.
>
> I am using TclLinkVar to define constants (see scripting/tcl.cc, line
> 180, and line 144 for use). Is this what you are referring to?

Not really - what I am talking about is if the level designer, who ONLY uses the scripting interface, wants to say:

"OK, I want to treat objects foo, bar, and baz as objects of type SmartWalkingEnemy. So I'll define a game-specific type SmartWalkingEnemy. When I request a mailbox number, I can do so by name, by specifying the type SmartWalkingEnemy and then an attribute name, which gets mapped to a local mailbox number."

or:

"OK, I want to treat objects fee, fie, and foe as objects of type VehicledEnemy. So I'll define a game-specific type VehicledEnemy. When I request a mailbox number, I can do so by name, by specifying the type VehicledEnemy and then an attribute name, which gets mapped to a local mailbox number."

This allows me to specify attribute names per designer-defined "type", emulating the ability to declare classes and member variables in C++. An attribute name gets mapped to a local mailbox number, which is a local location in the game object, just like a C++ member variable gets mapped to a relative memory offset in the object's locally allocated memory.

If you visualize the mailbox list as a numbered column of values, then think of these designer-defined "types" as a transparent piece of cellophane laid on top of that list, with descriptive names written next to each local mailbox number which is being used.

In other words, many similar objects (i.e. many objects of a class) may want to use mailboxes in a predefined way (e.g. the first mailbox is a flag telling the object to do something). Another group of similar objects may want to use the same mailboxes in a different but predefined way (e.g. the first mailbox is a flag telling it to do something else). Using this naming convention allows us, within the script language, to assign logical names to mailboxes.

Ideally you would store the actual type in the object itself as well, so you don't need to be constantly "typecasting" game objects to the proper type. You could use one of the local mailboxes for this (e.g. the convention that the first local mailbox is the object type, and all other mailboxes are freely available for local storage).

The whole point of this is that the level designer needs classes too, but he only has the limited scripting language as a tool. By using the mailboxes in this way, you can emulate classes or types. Make sense?

> I want to find something in between, which is compiled at level
> conversion time and just has some sort of byte code interpreter at run
> time (which in many cases can run faster than a real compiled language
> due to cache issues). I am going to look at lua as the next candidate.

Another one you should consider is Small: http://www.compuphase.com/small.htm

-Norman
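A sketch of what such macros could look like in a C-like script language (entirely illustrative: the set_mailbox() call follows the usage quoted later in this thread, the base numbers are made up, and an explicit slot argument is added to keep the macro trivial):

    // Map (type, attribute) pairs onto mailbox numbers at preprocessing
    // time. Each designer-defined "type" gets a base mailbox number;
    // its attributes occupy consecutive slots above that base.
    #define PlayerType 1000
    #define EnemyType  2000

    #define DefObjVar(type, name, slot) \
        enum { type##_##name = (type) + (slot) }

    DefObjVar(PlayerType, Score,  1);
    DefObjVar(PlayerType, Energy, 2);
    DefObjVar(EnemyType,  Weapon, 1);
    DefObjVar(EnemyType,  Health, 2);
    DefObjVar(EnemyType,  Speed,  3);

    // set_mailbox(object, mailboxNumber, value) is assumed to be
    // provided by the engine's scripting interface.
    #define SetObjVar(type, obj, name, value) \
        set_mailbox((obj), type##_##name, (value))

    // SetObjVar(PlayerType, self, Energy, 100)
    //   expands to set_mailbox(self, PlayerType_Energy, 100),
    //   i.e. set_mailbox(self, 1002, 100).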
From: Kevin S. <kt...@te...> - 2002-12-09 02:35:04
On Sun, 2002-12-08 at 19:26, nl...@nl... wrote:
> My experience with scripting is that it is a major performance problem
> because you take a large run-time hit for every helper function you
> declare. I completely agree that you can make complex behaviors by using
> engine code through a limited interface such as a scripting interface,
> but if the code is not compiled, you're going to run into performance
> problems if your ENTIRE game must be in scripts.

I should clarify that: I don't mean that someone could implement a racing game with the existing engine; something like that would require a bunch of work in the engine itself first. But if someone wanted to make a 3D platform game with running & jumping, bad guys shooting at you, etc., then the engine pretty much does all of that - collision detection, physics, animation, path motion and so on - so all the scripting would be used for is enemy logic, deciding when to let doors open, implementing gold pieces, and stuff like that.

> I think an extremely high performance compilable scripting language
> must be the way to go if you want to follow the philosophy of
> "program the entire game in scripts". Ideally, this language
> would be C-like, so you could use features like macros, #ifdefs,
> high-performance helper functions, etc. Macros are very helpful because
> they allow you to "typecast" raw mailbox lists into structured types, allowing
> you to simulate run-time typing in the scripts. For instance with appropriately
> declared macros you could say
>
> DefObjVar(PlayerType,Score)
> DefObjVar(PlayerType,Energy)
>
> DefObjVar(EnemyType,Weapon)
> DefObjVar(EnemyType,Health)
> DefObjVar(EnemyType,Speed)
>
> Then later you could use these "types" -
>
> SetObjVar(PlayerType,self,Energy,100)
> SetObjVar(EnemyType,someEnemyObject,Speed,100)
> SetObjVar(EnemyType,someOtherEnemyObject,Speed,30)
>
> Instead of doing the tedious and error-prone
>
> set_mailbox(self,1001,100)
> set_mailbox(someEnemyObject,1003,100)
> set_mailbox(someOtherEnemyObject,1003,30)
>
> In other words with appropriate macros (or helper functions) you can
> define a "type", where a "type" is a mapping of attribute name to
> mailbox number. Given a game object, and given its type, you can then
> access its mailbox values by attribute name.

I am using TclLinkVar to define constants (see scripting/tcl.cc, line 180, and line 144 for use). Is this what you are referring to? See the script in cubemenu.lev for what it looks like in use. I don't know how slow it is, only running a few scripts at once (but I don't really expect to run more than a few dozen each frame even in a fairly complex game - remember we don't execute the whole level, just the 3 active rooms at any time).

But I agree we need a better binding system, and I think the mailbox system needs more flexibility as well (it should be possible to write a vector instead of having to write 3 scalars). I also plan to find a better scripting language than TCL at some point, hopefully something which is pre-compiled and focuses on math instead of string manipulation.

> Doing this in TCL is hideously slow. Doing this with C macros is wonderfully
> fast. (Speaking from experience.)

I want to find something in between, which is compiled at level conversion time and just has some sort of byte code interpreter at run time (which in many cases can run faster than a real compiled language due to cache issues). I am going to look at lua as the next candidate.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: <nl...@nl...> - 2002-12-09 02:05:01
On Sun, Dec 08, 2002 at 05:50:08PM -0800, Kevin Seghetti wrote:
> On Sun, 2002-12-08 at 19:07, nl...@nl... wrote:
> > On Sun, Dec 08, 2002 at 04:49:00PM -0800, Kevin Seghetti wrote:
> > > I doubt any commercial game company would ever try to make a game with
> > > WF, there is way too much NIH in that industry, it is usually hard to
> > > get projects inside of the same company to share code.
> > > Once I integrate ODE then I can no longer offer custom licensing, since
> > > I don't own that code, we are stuck with GPL.
> >
> > You still can offer custom licensing, because ODE is offered
> > simultaneously under the LGPL (not GPL) *and* the BSD-style license. This
> > licensing arrangement was specifically chosen to encourage commercial
> > use of ODE. So if you custom-license WF, you can permit the licensee to
> > redistribute ODE (and their custom game parts) binary only.
>
> Oh, well then when I integrate Nebula Device (wait, I can't tell what
> their license is, the link is broken).

Their license is the TCL license, basically meaning use it for any purpose, sell it, distribute it binary only, whatever you want. So, sorry, you STILL can offer a custom license! ;-)

-Norman
From: <nl...@nl...> - 2002-12-09 01:59:52
On Sun, Dec 08, 2002 at 04:49:00PM -0800, Kevin Seghetti wrote:
> My original thinking was that each game would branch the whole tree
> (back when it was meant for commercial use). Then later I was thinking
> just branch the game and oas directories. Lately I have been thinking
> there is seldom a need to add a new object type at all, that most
> behaviors can probably be accomplished through scripts.

My experience with scripting is that it is a major performance problem because you take a large run-time hit for every helper function you declare. I completely agree that you can make complex behaviors by using engine code through a limited interface such as a scripting interface, but if the code is not compiled, you're going to run into performance problems if your ENTIRE game must be in scripts.

I think an extremely high performance compilable scripting language must be the way to go if you want to follow the philosophy of "program the entire game in scripts". Ideally, this language would be C-like, so you could use features like macros, #ifdefs, high-performance helper functions, etc. Macros are very helpful because they allow you to "typecast" raw mailbox lists into structured types, allowing you to simulate run-time typing in the scripts. For instance, with appropriately declared macros you could say

DefObjVar(PlayerType,Score)
DefObjVar(PlayerType,Energy)

DefObjVar(EnemyType,Weapon)
DefObjVar(EnemyType,Health)
DefObjVar(EnemyType,Speed)

Then later you could use these "types" -

SetObjVar(PlayerType,self,Energy,100)
SetObjVar(EnemyType,someEnemyObject,Speed,100)
SetObjVar(EnemyType,someOtherEnemyObject,Speed,30)

Instead of doing the tedious and error-prone

set_mailbox(self,1001,100)
set_mailbox(someEnemyObject,1003,100)
set_mailbox(someOtherEnemyObject,1003,30)

In other words, with appropriate macros (or helper functions) you can define a "type", where a "type" is a mapping of attribute name to mailbox number. Given a game object, and given its type, you can then access its mailbox values by attribute name.

Doing this in TCL is hideously slow. Doing this with C macros is wonderfully fast. (Speaking from experience.)

-Norman
From: Kevin S. <kt...@te...> - 2002-12-09 01:54:42
On Sun, 2002-12-08 at 19:07, nl...@nl... wrote:
> On Sun, Dec 08, 2002 at 04:49:00PM -0800, Kevin Seghetti wrote:
> > I doubt any commercial game company would ever try to make a game with
> > WF, there is way too much NIH in that industry, it is usually hard to
> > get projects inside of the same company to share code.
> > Once I integrate ODE then I can no longer offer custom licensing, since
> > I don't own that code, we are stuck with GPL.
>
> You still can offer custom licensing, because ODE is offered
> simultaneously under the LGPL (not GPL) *and* the BSD-style license. This
> licensing arrangement was specifically chosen to encourage commercial
> use of ODE. So if you custom-license WF, you can permit the licensee to
> redistribute ODE (and their custom game parts) binary only.

Oh, well then when I integrate Nebula Device (wait, I can't tell what their license is, the link is broken). Anyhow, I don't care if I can offer a custom license, and I doubt anyone will ask for it.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: <nl...@nl...> - 2002-12-09 01:40:27
On Sun, Dec 08, 2002 at 04:49:00PM -0800, Kevin Seghetti wrote:
> I doubt any commercial game company would ever try to make a game with
> WF, there is way too much NIH in that industry, it is usually hard to
> get projects inside of the same company to share code.
> Once I integrate ODE then I can no longer offer custom licensing, since
> I don't own that code, we are stuck with GPL.

You still can offer custom licensing, because ODE is offered simultaneously under the LGPL (not GPL) *and* the BSD-style license. This licensing arrangement was specifically chosen to encourage commercial use of ODE. So if you custom-license WF, you can permit the licensee to redistribute ODE (and their custom game parts) binary only.

-Norman
From: Kevin S. <kt...@te...> - 2002-12-09 00:53:28
On Sun, 2002-12-08 at 13:59, nl...@nl... wrote:
> I started to look at WF's object system and I am starting to get comfortable
> with it. I plan to write a tutorial on the wiki at some point. It basically
> looks like to add a new object type to WF, you need to

Alright, you prompted me to create a wiki page with an overview of the process: http://worldfoundry.org/wfwiki/index.php/AddingANewObjectType - feel free to augment it. A tutorial would be nice; can you think of a good example of a new object type? (I try to do everything from scripts these days; in fact I have deleted a few object types, and there are more that can go, like Enemy.)

> - make a new newobj.oas file with the desired designer-visible properties

Yup.

> - do some build processing so the newobj.oas file gets parsed into .ht and .hp

Edit objects.mac, adding a single line; that's it. Everything flows from there, including the game makefile expecting OBJECTNAME.cc, the .iff files for attribedit, and even an enumeration of all object types.

> - manually flesh out newobj.hp, the C++ include file (but what if the .oas
> file later changes?)

newobj.bat creates a framework which should compile (although I haven't tested it in years, so it might need to be updated, and I need to make a linux version), so nothing is required here except whatever overrides you are creating the new object type for in the first place.

It depends on what sort of change is occurring in the .oas file: if it is cosmetic, like the help string or display type, then nothing needs to be done in the C++ file; if it is adding or subtracting an attribute then the C++ file will need to be changed as well (adding or removing reads of the attribute).

> * override some standard Actor methods
> * add new methods for internal use or for use by other game objects
> * use methods of the Actor to get at the OAD override data set by the designer
> * use methods of the Actor and Level to access other game objects and global
> game state
>
> Game programmers (as opposed to designers) may for various reasons need to write
> custom game object classes. My proposal for a new directory structure is to
> allow such custom game object classes to be stored in a different directory.

My original thinking was that each game would branch the whole tree (back when it was meant for commercial use). Then later I was thinking just branch the game and oas directories. Lately I have been thinking there is seldom a need to add a new object type at all, that most behaviors can probably be accomplished through scripts. Any commercial game will want proprietary bits; the only way to do that under the current license would be for all of the proprietary bits to be precompiled in the scripting language. I think this makes a good division of what has to be given back to the project and what doesn't: fundamental features/capabilities have to be open source; game logic and game-specific behaviors don't.

> In other words, there is one system OAS directory, and possibly several
> game-specific OAS directories. The same goes for the C++ directory for the
> C++ code corresponding to the OAS file. In other words:
>
> source/oas - system OAS files
> source/game - system game object C++ files
> mygame1/oas - mygame1's OAS files
> mygame1/game - mygame1's game files
> mygame2/oas - mygame2's OAS files
> mygame2/game - mygame2's game files

Should all of the files in game be copied into mygame, or just the new object's files? What about support files which are game specific? The game directory currently has way too much stuff in it, and is highly cross-linked. Some design work needs to be done first to help separate what is in there into components/libraries. (Just look at http://worldfoundry.org/doxygen/html/classLevel.html - what a mess. Although it makes doxygen look pretty cool.)

> The build and attribedit system would need to be updated to look in multiple
> OAS directories. This could be faked (under intelligent operating
> systems, at least) by creating symlinks in the source/oas and source/game
> directories to the files in the other oas and game directories.
>
> The reasoning behind this is:
> * to allow game-specific C++ code to be cleanly separated from the engine
> * to allow multiple sets of game-specific C++ code to coexist
> * to allow a game to reuse one or more sets of game-specific C++ code
> * to allow some game-specific C++ code to be licensed under a proprietary
> license
>
> The last bit is the most troublesome legally, since the entire WF codebase is
> GPL at the moment, and the custom game-specific C++ code would also have to
> be GPL under the current license. Realistically a proprietary licensing option
> is needed I think, since I think complex game programming will always need some
> C++ code for new gameobjects and high-performance custom game behavior.

I doubt any commercial game company would ever try to make a game with WF; there is way too much NIH in that industry, and it is usually hard to get projects inside of the same company to share code. Once I integrate ODE then I can no longer offer custom licensing, since I don't own that code; we are stuck with GPL. Of course, anyone can add a dynamic code loader to WF and then distribute a game using WF and some binary modules; under the GPL I can't even stop them. So what I am saying about commercial developers is: if they want that ability, let THEM develop it. :-)

I think this is a good idea, but doing it doesn't sound like much fun (and with the current make system being split between linux and windows it would need to be done twice), so I am not likely to do it any time soon. (If someone actually started trying to make a game with it and this was causing problems, I would be much more likely to work on it at that point.)

In a similar vein I would like to extract the bottom layer of libraries into a separate project (wfcore, probably), which I could use in other non-game projects. I think that the better assertion macros, the nameable, redirectable stream system (including the all-important null stream), the platform abstraction, and the binary IO streams are all generally useful pieces (after a bit of clean up).

> If this could all be realized, then my long-winded concerns about the
> "composite design pattern" would be addressed, since you can create real
> composites in C++ (since, if you know C++, you can easily reuse other
> game objects to create a new, more complex game object, choose which
> attributes of the subparts to expose, declare these attributes in the new OAS
> file, and in your C++ class do the attribute-to-subpart mapping by reading in
> the override OAD data and deciding which sub-part to which you want to pass
> on the OAD data).

Yup, that would work, and is already possible. I eventually still want to explore doing something similar entirely at the attribute level, because I think it is an interesting challenge.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: Kevin S. <kt...@te...> - 2002-12-08 23:10:22
On Sun, 2002-12-08 at 12:49, nl...@nl... wrote:
> In addition to the theoretical question of the validity of using
> variable timesteps with a time-stepping integration approach as detailed
> in my last mail, there is additionally the question of the so-called
> "small-large" timestep problem and the ERP.
>
> See:
>
> http://q12.org/pipermail/ode/2002-April/001107.html
> http://q12.org/pipermail/ode/2002-April/001121.html

Speaking as someone who doesn't know anything about ODE and physics in general (other than what I made up/discovered on my own for WF's physics system), it sounds like the way to fix the problem mentioned above is to store the error correction as a position offset instead of a velocity (so it doesn't contain a time component), separate from the normal velocity, and then turn it into a velocity at the start of the next frame (when you will know the step size) and add it into the normal velocity then. That solution would take more storage, but I don't think it would add any execution time overhead; all of those steps were being done anyway, just at a different time and in a different order. I can't really tell from the 2nd email if that is what he plans to do.

> One solution would be to add a compile-time switch to WF (I think there
> are one or two of those already...)

I need some way to document and manage the compile time switches. I also need to decide if we want them all. Part of the problem is I am a pack rat and don't want to throw code away; for example the playstation code is still there, and I am sure it would take quite a bit of work to get it working on the playstation again, and it would never be able to use things like ODE (unless I were to port ODE to the WF fixed point Scalar - ugh).

Another more recent switch is the Scalar representation; until last month fixed point was the only option. Now it can be built to use floats or doubles. Normally I would keep the fixed point version for hardware which doesn't have a math co-processor, but if we make ODE mandatory then we have pretty much made floating point mandatory as well. So is there any point to maintaining the fixed point version?

A long time ago I had a dos batch file which built every combination of switches (at least the ones which make a new obj directory), which I would run nightly to catch any problems. Some day I want to learn how to use the SourceForge compile farms for this purpose (as well as for verifying builds on various distributions - I wonder if they have windows machines :-).

> which toggles between fixed-timestep
> and variable-timestep physics (by fixed-timestep physics I mean to check
> how much real time has elapsed since the last frame, divide by the time
> per physics frame, do that many fixed-step physics updates, and save the
> fractional remainder as left-over time which gets accounted for in the next
> iteration of the main game loop). For possible future networking using the
> joystick-recorder approach, it would make things a lot easier if physics
> were locked to a particular frame rate. For single-player, assuming the
> open questions about variable timesteps can be answered, then the
> timestep can be allowed to be variable.

I suspect if the two rates are close to each other it would look pretty bad (serious aliasing: if the main loop runs at 20 Hz, and the physics is told it is running at 30 Hz, then the physics would run once on even frames and twice on odd frames, which would jitter pretty badly). It will be pretty easy to try it, but I suspect the physics will have to run at a rate at least twice the game loop rate before it will start to look good. (I have a lot of experience with this sort of jitter from my Genesis days doing 2D scrolling games; jitter is typically less noticeable in 3D though.) Since ODE appears to be pretty math intensive it might be too slow to run at higher rates.

--
Kevin Seghetti: E-Mail: kt...@te..., HTTP: www.tenetti.org
GPG public key: http://tenetti.org/phpwiki/index.php/KevinSeghettiGPGKey
Check out www.worldfoundry.org for my GPL'ed 3D video game engine
From: <nl...@nl...> - 2002-12-08 20:55:44
Hola,

I started to look at WF's object system and I am starting to get comfortable with it. I plan to write a tutorial on the wiki at some point. It basically looks like to add a new object type to WF, you need to:

- make a new newobj.oas file with the desired designer-visible properties
- do some build processing so the newobj.oas file gets parsed into .ht and .hp
- manually flesh out newobj.hp, the C++ include file (but what if the .oas file later changes?)
  * override some standard Actor methods
  * add new methods for internal use or for use by other game objects
  * use methods of the Actor to get at the OAD override data set by the designer
  * use methods of the Actor and Level to access other game objects and global game state

Game programmers (as opposed to designers) may for various reasons need to write custom game object classes. My proposal for a new directory structure is to allow such custom game object classes to be stored in a different directory. In other words, there is one system OAS directory, and possibly several game-specific OAS directories. The same goes for the C++ directory for the C++ code corresponding to the OAS file. In other words:

source/oas - system OAS files
source/game - system game object C++ files
mygame1/oas - mygame1's OAS files
mygame1/game - mygame1's game files
mygame2/oas - mygame2's OAS files
mygame2/game - mygame2's game files

The build and attribedit system would need to be updated to look in multiple OAS directories. This could be faked (under intelligent operating systems, at least) by creating symlinks in the source/oas and source/game directories to the files in the other oas and game directories.

The reasoning behind this is:

* to allow game-specific C++ code to be cleanly separated from the engine
* to allow multiple sets of game-specific C++ code to coexist
* to allow a game to reuse one or more sets of game-specific C++ code
* to allow some game-specific C++ code to be licensed under a proprietary license

The last bit is the most troublesome legally, since the entire WF codebase is GPL at the moment, and the custom game-specific C++ code would also have to be GPL under the current license. Realistically a proprietary licensing option is needed, I think, since complex game programming will always need some C++ code for new game objects and high-performance custom game behavior.

If this could all be realized, then my long-winded concerns about the "composite design pattern" would be addressed, since you can create real composites in C++ (if you know C++, you can easily reuse other game objects to create a new, more complex game object, choose which attributes of the subparts to expose, declare these attributes in the new OAS file, and in your C++ class do the attribute-to-subpart mapping by reading in the override OAD data and deciding to which sub-part you want to pass on the OAD data).

-Norman
From: <nl...@nl...> - 2002-12-08 20:55:41
On Sun, Dec 08, 2002 at 09:59:43PM +0000, nl...@nl... wrote:
> The reasoning behind this is:
> * to allow game-specific C++ code to be cleanly separated from the engine
> * to allow multiple sets of game-specific C++ code to coexist
> * to allow a game to reuse one or more sets of game-specific C++ code
> * to allow some game-specific C++ code to be licensed under a proprietary
> license

Incidentally, if game-specific C++ code were compiled as dynamic libraries, perhaps that would be a viable option to allow the game-specific C++ stuff to remain closed.

Also, there is a very interesting thread on scripting languages and game programming at http://sourceforge.net/mailarchive/forum.php?forum=gamedevlists-general

-Norman
From: <nl...@nl...> - 2002-12-08 19:22:04
In addition to the theoretical question of the validity of using variable timesteps with a time-stepping integration approach, as detailed in my last mail, there is additionally the question of the so-called "small-large" timestep problem and the ERP.

See:

http://q12.org/pipermail/ode/2002-April/001107.html
http://q12.org/pipermail/ode/2002-April/001121.html

One solution would be to add a compile-time switch to WF (I think there are one or two of those already...) which toggles between fixed-timestep and variable-timestep physics. By fixed-timestep physics I mean: check how much real time has elapsed since the last frame, divide by the time per physics frame, do that many fixed-step physics updates, and save the fractional remainder as left-over time which gets accounted for in the next iteration of the main game loop (see the sketch below). For possible future networking using the joystick-recorder approach, it would make things a lot easier if physics were locked to a particular frame rate. For single-player, assuming the open questions about variable timesteps can be answered, the timestep can be allowed to be variable.

-Norman
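A sketch of that fixed-timestep loop (the physics-step function and the 60 Hz rate are placeholders; std::chrono stands in for whatever clock the engine uses):

    #include <chrono>

    static void StepWorld(double dt) { (void)dt; /* fixed-step physics update */ }

    void RunGameLoop()
    {
        using clock = std::chrono::steady_clock;
        const double stepSeconds = 1.0 / 60.0; // assumed physics rate
        double accumulator = 0.0;              // left-over time carried across frames
        auto last = clock::now();

        for (;;) // one iteration per rendered frame
        {
            const auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - last).count();
            last = now;

            // Do as many whole fixed steps as fit; keep the fractional remainder.
            while (accumulator >= stepSeconds)
            {
                StepWorld(stepSeconds);
                accumulator -= stepSeconds;
            }

            // ... render here (optionally interpolating by accumulator / stepSeconds)
        }
    }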