alephmodular-devel Mailing List for AlephModular (Page 16)
Status: Pre-Alpha
Brought to you by: brefin
From: Br'fin <br...@ma...> - 2003-01-17 05:28:02
These would be the various Replays or Films that Marathon 2 had. In this case I'm asking for transcripts of the 3 demo films that either Marathon 2 or AlephModular play when you let the game sit idle at the main menu. Detail-wise, I'm looking for recognizable events. For example, one of the levels starts with you and a bunch of Bobs on a platform overlooking a large bay as various S'pht counter-attack:

- Bobs fire on S'pht
- Player dodges, takes pot shots
- Bob on player's right takes a hit and dies
- Player fires on stray S'pht to right
- Player takes a hit from S'pht
- Player dodges behind Bob, who takes the hit
- Player and Bobs fire upon lone Hunter in balcony
- Bobs teleport out
- Player collects a second pistol
- Player empties single bullet from pistol, then pulls up both pistols
- Player dives into ooze
- Player uses map under ooze
- Player finishes with map, sees Fusion Pistol and batteries teleport in
- Player finds underwater life panel
- Player heals to 2x life meter (yellow)

Details people watching could pick out and nod at, to say "Yes yes, that's where it's supposed to be. Oops, player died there... ok, right, he was supposed to."

-Jeremy Parsons

On Friday, January 17, 2003, at 12:04 AM, Michael Adams wrote:

> What are these films everyone is referring to? How do I get them? If I can find them and run them, I would be glad to build a check list. How detailed and/or long would such a list need to be?
>
> Michael D. Adams
> mdm...@ya...
From: Michael A. <mdm...@ya...> - 2003-01-17 05:04:50
What are these films everyone is referring to? How do I get them? If I can find them and run them, I would be glad to build a check list. How detailed and/or long would such a list need to be?

Michael D. Adams
mdm...@ya...

--- Br'fin <br...@ma...> wrote:

> On Thursday, January 16, 2003, at 05:15 PM, Alexander Strange wrote:
>
>> On Thursday, January 16, 2003, at 09:18 AM, Woody Zenfell, III wrote:
>>> If we didn't care about 100% compatibility with M2, i.e. a film recorded in M2 and played back in AM would produce *exactly* the same results (or the same results but rendered more prettily etc.), (which I should hasten to say, I think we *do*)
>>
>> This is fairly hard; Aleph One has made only minor changes to physics that I know of, and even those broke films.
>
> Breaking films is a definite glitch for AlephModular.
>
> On that note, I have a couple of related requests.
>
> Is there anyone willing to write up a transcript of each of the existing three demo movies? A point-by-point checklist of activities to confirm that a given build hasn't broken one of these movies.
>
> In addition, is there anyone willing to submit their Marathon 2 films as additional test cases? The existing three demo films don't include any net games. Some other vidmaster films would be good too. I should stress that these should be your own movies. That way we can include them in CVS or the website with their own transcriptions and use them as test cases.
>
> They should be your own and they should use stock M2 maps, shapes, and physics.
>
> -Jeremy Parsons
From: Br'fin <br...@ma...> - 2003-01-16 23:03:24
On Thursday, January 16, 2003, at 05:15 PM, Alexander Strange wrote:

> On Thursday, January 16, 2003, at 09:18 AM, Woody Zenfell, III wrote:
>> If we didn't care about 100% compatibility with M2, i.e. a film recorded in M2 and played back in AM would produce *exactly* the same results (or the same results but rendered more prettily etc.), (which I should hasten to say, I think we *do*)
>
> This is fairly hard; Aleph One has made only minor changes to physics that I know of, and even those broke films.

Breaking films is a definite glitch for AlephModular.

On that note, I have a couple of related requests.

Is there anyone willing to write up a transcript of each of the existing three demo movies? A point-by-point checklist of activities to confirm that a given build hasn't broken one of these movies.

In addition, is there anyone willing to submit their Marathon 2 films as additional test cases? The existing three demo films don't include any net games. Some other vidmaster films would be good too. I should stress that these should be your own movies. That way we can include them in CVS or the website with their own transcriptions and use them as test cases.

They should be your own and they should use stock M2 maps, shapes, and physics.

-Jeremy Parsons
From: Alexander S. <ast...@it...> - 2003-01-16 22:15:16
On Thursday, January 16, 2003, at 09:18 AM, Woody Zenfell, III wrote:

> If we didn't care about 100% compatibility with M2, i.e. a film recorded in M2 and played back in AM would produce *exactly* the same results (or the same results but rendered more prettily etc.), (which I should hasten to say, I think we *do*)

This is fairly hard; Aleph One has made only minor changes to physics that I know of, and even those broke films.
From: Woody Z. I. <woo...@sb...> - 2003-01-16 16:51:33
On Thursday, January 16, 2003, at 08:18 AM, Woody Zenfell, III wrote:

> I see, yes, since I didn't really have any obvious problems with the current code in this regard, I didn't spend much time thinking about these issues, and so my suggestions were generally made with something rather like the current scheme in mind.

I should mention that I rarely manage (yet) to think of how the stuff would work in a non-game, which I know is very important to the project's goals. So the stuff I wrote does not necessarily work well in such an environment (though offhand I can't think of any specific problems).

> Though my COW discussion was heavily biased towards prediction for simplicity, the same mechanism would be used for interpolation. The basic idea is to be able to split off a copy of the game-state, mess around with it, render from it, and then throw it away (returning to the original state).

It occurs to me that a sufficiently generalized chain-of-game-states (which IMO is what we want anyway, despite needing at most 3 for my real-predictive-interpolative scheme) could also be used to support reverse film playback.

Naturally the bulk-copy approach would probably be ill-suited to this (except where implemented in an OS with general COW memory support and some intelligent placement of likely-to-change elements near each other in memory), because the larger amount of memory required per game-state would really start to add up when you're keeping around a hundred of them (or however long a chain you might want, determined by how far you want to be able to back up).

(You could just remember periodic "keyticks", and on backing up, compute forward varying amounts from those, to (greatly) extend how far you're able to go backwards in a reasonable amount of memory. Or actually, you could remember keyticks that are exponentially far back (e.g., last tick, 2 ticks ago, 4 ticks ago, 8 ticks ago, etc.) or that happen periodically (keep the last 8 ticks as well as every 64th tick, etc.), with the idea that backing up a little bit is going to be done more frequently than jumping back a long way, but without totally abandoning the ability to back up a long way... and without having to recompute all the way from the start of the film when you do. Of course, a game-state that's 64 ticks after its "base" is probably going to have more changes (and thus require more memory) than a game-state only 1 tick after its "base".)

Anyway, speaking of keeping around lots of game-states... I wonder if roughly doubling the size of a saved-game file (to account for unpacking int16s into int32s in many cases) would give a reasonable estimate. (?) If so, well, my saved-game on Waterloo Waterpark (from AM) is about 230k, which points toward roughly half a megabyte per resident game-state if no "store-only-the-changes" scheme is used. (Less-complicated net levels might require substantially less storage.) Of course, estimating how much stuff *changes* per tick is a bit trickier without any of these mechanisms already in place. :) But it's bound to be smaller (most of the map geometry, e.g., is constant). (Anyone want to take a guess?)

(Of course, an in-game game-state-chain-based replayer and a render-to-DV/MPEG/etc. tool (maybe also in-game, with an in-game movie player for conveniently viewing the results) are not mutually exclusive; each has advantages.)

</random thoughts>

Woody
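Woody's keytick idea can be sketched concretely. This is a minimal illustration only, not anything from the M2/A1 source; the function names and the "last 8 ticks plus every 64th tick" parameters are invented for the example.

```cpp
#include <cstdint>

// Hypothetical snapshot-retention policy for reverse film playback:
// keep every snapshot in a small recent window (backing up a little is
// frequent) plus sparse periodic keyticks (backing up a long way is rare).
bool should_retain(std::uint32_t tick, std::uint32_t current_tick,
                   std::uint32_t recent_window, std::uint32_t keytick_period) {
    if (current_tick - tick <= recent_window)
        return true;                       // recent: always kept
    return tick % keytick_period == 0;     // old: kept only on keyticks
}

// Rewinding to `target` restores the nearest retained snapshot at or
// before it, then recomputes forward; this returns that recompute cost.
std::uint32_t rewind_cost(std::uint32_t target, std::uint32_t current_tick,
                          std::uint32_t recent_window,
                          std::uint32_t keytick_period) {
    for (std::uint32_t t = target;; --t)   // terminates: tick 0 is a keytick
        if (should_retain(t, current_tick, recent_window, keytick_period))
            return target - t;
}

// Small self-check of the policy at current_tick = 100.
bool demo_keyticks() {
    return should_retain(99, 100, 8, 64)      // recent: 1 tick back
        && should_retain(64, 100, 8, 64)      // old, but on a keytick
        && !should_retain(63, 100, 8, 64)     // old and off-keytick: dropped
        && rewind_cost(95, 100, 8, 64) == 0   // recent rewinds are free
        && rewind_cost(70, 100, 8, 64) == 6;  // else recompute from tick 64
}
```

As Woody notes, the trade-off is that a state restored 64 ticks before the target costs 64 forward recomputes, while the memory held stays roughly logarithmic-plus-constant in how far back you can go.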
From: Br'fin <br...@ma...> - 2003-01-16 16:50:58
On Thursday, January 16, 2003, at 09:18 AM, Woody Zenfell, III wrote:

> I see, yes, since I didn't really have any obvious problems with the current code in this regard, I didn't spend much time thinking about these issues, and so my suggestions were generally made with something rather like the current scheme in mind.
>
> For my overview of the way things currently work (in M2/A1), see
>
> http://source.bungie.org/_enginedevelopment/reference/networking-input.html
>
> (Note that contrary to what the end of that says, I now believe that heartbeat_count is probably important for proper timing of film playback (but not single-player games), though I still haven't bothered to analyze it in detail.)

I should indeed make time to read that then.

>> (I don't currently know the feasibility of having the world itself be actively managed over 30 fps; would we have to make sure that M2-style monsters aren't allowed to readjust their AI except every 1/30th of a second?)
>
> If we didn't care about 100% compatibility with M2, i.e. a film recorded in M2 and played back in AM would produce *exactly* the same results (or the same results but rendered more prettily etc.), (which I should hasten to say, I think we *do*) then it would probably be reasonably straightforward to do what I'd call "real updates" more frequently than 30 ticks/sec. But since we do care, I think we need to preserve the "real update" code as-is (reorganization notwithstanding), including the 30 ticks/sec restriction. So I agree that some kind of lightweight "move some things a little bit between actual ticks" interpolative scheme is the way to go.

I am currently using, and will continue to use, the 3 M2 demo films as test cases for M2 compatibility. (Oh, interesting: I notice that close-object visibility issue in one of those films too, run from M2 proper :) ) It is an area I'd not mind seeing some experimentation in. But that won't help us get our first and primary goals done either :)

> "Which side of the code" is responsible? Well, neither, I guess; again I think this is an oversimplification. There would be some kind of game manager that would know what needs to happen (apply real updates to the real game state, copy that state (whether an actual copy or marking as COW) to make a predictive state, apply predictive updates to that state, copy the resulting state (maybe COW) to make an interpolative state, apply an interpolative update to that (pretty sure we should always only do one of these, representing the activity of some fraction of a game-tick), and call the renderer).

I don't see a great deal of difference between the predictive state and the interpolative state. If we find that simply 'continue doing as you are' is inappropriate, we could put in a toggle to the interpolative controllers: interpolate for X real time, then either give up the ghost or try to predict.

I see something like the following. Both Game Core and Interpolation have their own world state. Game Core drives Interpolation but doesn't actually know everything. (Game Core doesn't need to care about specific frames of animation or management of effects. It can generate new effects, but when they stop interacting with other things in the world, Game Core forgets about them.)

Game Core is lord over Interpolation. When an object moves in Game Core, Game Core tells Interpolation that the object is moving from an initial position with a specific velocity and facing (as well as whether it's doing anything). It also tells Interpolation of any new objects, or objects which should be removed. I don't know if this is copy-on-write; I suppose it could be... But Interpolation is picking up not just the objects' states, but their attitudes, in a different way than Game Core does.

> It sort of sounds like you're picturing a decoupling in the actual execution of the game-update stuff and rendering stuff (e.g. into separate threads or something). I suppose there could be advantages to that on a multiprocessor system (and yes, there are some of those out there) but on a uniprocessor it seems like it could only make performance worse (and complicate the code)... and might gratuitously break Mac OS 9 compatibility (in case that's a problem).

Yes, I'm picturing a decoupling. I do not know where good lines are with respect to something client/server. (How much information over a TCP/IP line is too much? And would we actually be saturating the line?) It might be nice to be able to spin these off into multiple threads if the current machine can handle them (a la multiple processors), but a single thread for this loop is probably reasonable on most machines:

1. Game Core
   - update world
   - no faster than 30 fps
   - play catch-up if we've been idle too long and have the input to do so
   - reset Interpolator's synced data to new states (critical section)
2. Interpolator
   - run as often as possible; allow locking to 60 fps
   - each run should process as many time steps as the last render step took, but no more than 30 fps
   - pause and allow renderer to display current Interpolator step
3. Renderer
   - run as often as allowed

In a networked setting, supposing client/server with the server doing all of the Game Core, you'd want something like a Network Receiver replacing Game Core up above. (Interpolation shouldn't stop just because our network is busy, after all :) )

Hmm. Interesting stray thought... Interpolation should have at least one network-sensitive predictor even in the simplest case. I don't think a user would want their high-speed machine and lagging network careening them through walls while waiting for the next Game Core update to apply their movement controls. While not ideal, it is acceptable to jump enemies around the screen when their real update comes in. It is not acceptable to jump or jerk the current player's viewport around. I'm not sure how, in this case, platforms and even wall collisions would be handled. (Player state is true copy-on-write, as opposed to other details of the system? This would let all of the functions in this space operate on the Interpolator's version of the player as if it were the real one Game Core knew about...)

Another amusing thought involved 'only objects reasonably in a person's view are updated for the Interpolator', which might make a nice filter if a client/server's bandwidth needs to be lessened.

> (I just got this picture of Conker asking the gargoyle "Isn't it a bit early in the morning to be talking about gothic architecture...?")

Oh, definitely.

-Jeremy Parsons
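Jeremy's three-stage loop (Game Core capped at 30 ticks/sec with catch-up, Interpolator filling the gaps, Renderer running as often as allowed) can be illustrated roughly as follows. This is a hypothetical sketch, not engine code; time is passed in explicitly so the catch-up scheduling is easy to see, and all names are invented.

```cpp
#include <cstdint>

constexpr std::uint32_t kTicksPerSecond = 30;  // Game Core's fixed rate

struct LoopState {
    std::uint64_t core_ticks = 0;  // real updates applied so far
    double        alpha      = 0.0; // Interpolator's fraction into next tick
};

// Advance the loop to absolute time `now_seconds`: run every Game Core
// tick that has fallen due (playing catch-up if we've been idle), then
// leave the leftover fraction of a tick for the Interpolator/Renderer.
void advance(LoopState& s, double now_seconds) {
    const double due = now_seconds * kTicksPerSecond;
    while (s.core_ticks + 1 <= static_cast<std::uint64_t>(due))
        ++s.core_ticks;  // Game Core: one real world update
    s.alpha = due - static_cast<double>(s.core_ticks);  // for interpolation
}

// Small self-check: at 0.75s we owe 22 full ticks plus half a tick of
// interpolation; catching up to 1.0s lands exactly on tick 30.
bool demo_advance() {
    LoopState s;
    advance(s, 0.75);
    bool mid = (s.core_ticks == 22 && s.alpha == 0.5);
    advance(s, 1.0);
    return mid && s.core_ticks == 30 && s.alpha == 0.0;
}
```

A renderer called after each `advance` would draw object positions blended by `alpha` between the last two Game Core states, which is the "fluff between decisions" role the thread assigns to the Interpolator.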
From: Woody Z. I. <woo...@sb...> - 2003-01-16 14:18:51
On Wednesday, January 15, 2003, at 07:42 PM, Br'fin wrote:

> Hsm. I admit that I'm having trouble following the discussion of copy-on-write. This might be due to lack of sleep or the minutiae that these discussions are going into.
>
> Also, we don't necessarily have to keep it similar to the current code. Prime importance is having a clear description of how networking ties in with input and game core.
>
> How does the GUI layer and the core layer work together? How does the network layer and the core layer work together? What scenarios do you see happening?

I see; yes, since I didn't really have any obvious problems with the current code in this regard, I didn't spend much time thinking about these issues, and so my suggestions were generally made with something rather like the current scheme in mind.

For my overview of the way things currently work (in M2/A1), see

http://source.bungie.org/_enginedevelopment/reference/networking-input.html

(Note that contrary to what the end of that says, I now believe that heartbeat_count is probably important for proper timing of film playback (but not single-player games), though I still haven't bothered to analyze it in detail.)

> If we assumed an 'animation layer' was appropriate, would the following be acceptable?
>
> User enters input
> Input is sent to other servers (send to network)
> Other players' inputs are accumulated (receive from network)
> Update game core with new info
> Game core overwrites the animation layer's dynamic data with new info
> Go back to process user's input
>
> The animation layer itself runs faster than the game core. Its information tends to keep track of a simple subset of Game Core info. (Game core decides that a player is at x,y and is turning/firing, monster x is charging in a particular vector.) Game Core does all the heavy lifting of AI, and the animation layer would just do fluff based on last known info to add frames between decisions.

Maybe we're confusing each other because we have different terminologies in mind. :) I am also suspecting that what we're picturing is more similar than it might seem at a glance.

Oh, first off, for clarity, going back to the above: I tend to use the term GUI exclusively to refer to things like the game's main menu, the dialogs, etc. I would tend to use the term "renderer" when talking about drawing game frames. (What terms you use is up to you, as long as we know what you mean :) ) (Does this change anything about the previous SDL discussion? Nah, let's not get back into that... :) )

It seems to me that what you're calling an "animation layer" is what would be responsible for the interpolative segment of my three-stage game-update scheme. (Which I _have_ posted here, haven't I? With real game states, predicted game states (optionally), and interpolated game states (optionally).)

In the existing scheme (and thus in mine) there's only one type of "game-state object" (which consists, conceptually, of essentially all the stuff that you'd see in a saved game) which the "game core" (the part that does what I'd call "real updates", I gather; currently update_world() does zero or more real updates when called), the interpolative updater (in my scheme), and the renderer all share to do their work. In my scheme, the interpolative updater only makes "small" modifications to the (interpolative) game state, i.e. moving players and monsters and projectiles and platforms etc. just the way they've been moving, and so on. What you call "heavy lifting" (monster AI etc.) is done only by the real and predictive updaters (which in my scheme are essentially the same code operating on different copies of the game-state object; it sounds like you're instead picturing that the predictive and interpolative updaters have a lot in common. I guess which way to go depends on how expensive the different kinds of updates are).

> I should emphasize that this is one possibility. I would like to see the discussion a little higher level in description right now. You're speaking of copy on write, but which side of the code is responsible for that? In retrospect, what I'm thinking of as an 'animation layer' might just be a different way of expressing the same issues. Then again, it's also trying to address decoupling framerate from the game's current ~30fps game running.

Though my COW discussion was heavily biased towards prediction for simplicity, the same mechanism would be used for interpolation. The basic idea is to be able to split off a copy of the game-state, mess around with it, render from it, and then throw it away (returning to the original state).

> (I don't currently know the feasibility of having the world itself be actively managed over 30 fps; would we have to make sure that M2-style monsters aren't allowed to readjust their AI except every 1/30th of a second?)

If we didn't care about 100% compatibility with M2, i.e. a film recorded in M2 and played back in AM would produce *exactly* the same results (or the same results but rendered more prettily etc.) (which, I should hasten to say, I think we *do*), then it would probably be reasonably straightforward to do what I'd call "real updates" more frequently than 30 ticks/sec. But since we do care, I think we need to preserve the "real update" code as-is (reorganization notwithstanding), including the 30 ticks/sec restriction. So I agree that some kind of lightweight "move some things a little bit between actual ticks" interpolative scheme is the way to go.

"Which side of the code" is responsible? Well, neither, I guess; again, I think this is an oversimplification. There would be some kind of game manager that would know what needs to happen (apply real updates to the real game state, copy that state (whether an actual copy or marking as COW) to make a predictive state, apply predictive updates to that state, copy the resulting state (maybe COW) to make an interpolative state, apply an interpolative update to that (pretty sure we should always only do one of these, representing the activity of some fraction of a game-tick), and call the renderer). Whether you want this to be a function like update_world with all the actual update_players(), update_monsters(), etc. calls sliced out to a sub-function, or whether it's an object in the C++ sense and thus can be of different classes depending on whether it's a netgame or whether interpolation is desired etc., or whether it's set up by virtue of correct registration of a group of routines in some sort of master list of routines to call (in some order) repeatedly, is in some sense irrelevant here (but could be a related topic to discuss).

The object-level COW scheme (which covers both OPTION 1 and OPTION 2 previously discussed) has the advantage that we can effectively copy the entire game-state very cheaply if only a small part of it is expected to be changed. Using a bulk-copy mechanism (like the OPTION 3 I outlined), you'd probably want to slice up the game-state into stuff that never changes during a game (since there's no need to copy it then), stuff that never changes during an interpolative update, and the stuff that changes most frequently, so that you could copy only what's really needed. But doing that in a static sort of way (which is not strictly necessary, I suppose, but would be much easier, I think) could limit the ability of scripts or the like to later come in and mess with things that M2 itself does not mess with (like, say, a polygon's floor transfer mode (right?), or a platform's speed, or even maybe general map geometry (though this could create some real headaches if there are any objects in the related polygons etc.), or something).

It sort of sounds like you're picturing a decoupling in the actual execution of the game-update stuff and rendering stuff (e.g. into separate threads or something). I suppose there could be advantages to that on a multiprocessor system (and yes, there are some of those out there), but on a uniprocessor it seems like it could only make performance worse (and complicate the code)... and might gratuitously break Mac OS 9 compatibility (in case that's a problem). I suppose if you wanted to move toward a client/server approach where one machine does the updating and the other machines effectively only do the rendering portion, it might be good to let the two segments run independently so that e.g. rendering on the server does not bog down everyone else's game (since they're dependent on receiving the new game-state data from the server). In some sense this is exactly why the existing scheme handles user input and protocol-processing stuff in separately scheduled tasks. (If I say "threads", will people automatically know that I don't really mean threads in the Mac OS 9 version?)

Hmm, note that an object-grained COW approach could help a potential server figure out which objects need to be sent out on the wire to the clients. (If you didn't want full-blown COW for some reason, you'd use the same sort of mechanisms to mark the object "dirty".) Note that object-grained COW (whether for efficient-copy-the-game-state reasons or client-server-scheme reasons) could potentially benefit from decomposing some existing objects into likely-to-change-often and likely-to-remain-static sub-objects, which in some sense is the static "slicing up" of the game-state I talked about with regard to bulk copying above. But with the COW scheme, the likely-to-remain-static stuff would still be automatically copied in those rare cases when it's needed, so scripts (or later update-logic changes compiled into the code proper) would still be able to do weird things without breaking (unlike in a statically sliced bulk-copying scheme, where one _must_ not modify stuff outside the current "slice").

(I just got this picture of Conker asking the gargoyle "Isn't it a bit early in the morning to be talking about gothic architecture...?")

Woody
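The object-grained COW with "dirty" marking that Woody describes might look something like this std::shared_ptr-based sketch. It illustrates the idea only; the M2/A1 code has no such class, and a real implementation would likely manage storage by hand, but the behavior (cheap copies, clone on first write, a dirty bit a server could inspect) is the one under discussion.

```cpp
#include <memory>
#include <utility>

// A copy-on-write handle to one game object. Copying the handle is cheap
// (it shares storage); the first write through a shared handle clones the
// object, which also marks it "dirty" for a would-be server deciding what
// to send clients.
template <typename T>
class CowHandle {
public:
    explicit CowHandle(T value) : ptr_(std::make_shared<T>(std::move(value))) {}

    const T& read() const { return *ptr_; }  // never copies

    T& write() {                             // clones only when shared
        if (ptr_.use_count() > 1)
            ptr_ = std::make_shared<T>(*ptr_);
        dirty_ = true;
        return *ptr_;
    }

    bool dirty() const { return dirty_; }    // cheap "changed?" check

private:
    std::shared_ptr<T> ptr_;
    bool dirty_ = false;
};

struct Monster { int x; };  // stand-in for a real game object

// Self-check: predictive writes clone, leaving the real state untouched.
bool demo_cow() {
    CowHandle<Monster> real_monster(Monster{5});
    CowHandle<Monster> predicted = real_monster;  // cheap: shares storage
    predicted.write().x = 9;                      // clone happens here
    return real_monster.read().x == 5 && !real_monster.dirty()
        && predicted.read().x == 9 && predicted.dirty();
}
```

"EnterPredictiveMode" in this scheme amounts to copying a table of such handles, which is proportional to the number of objects, not their size; only objects actually touched pay for a deep copy.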
From: Br'fin <br...@ma...> - 2003-01-16 01:41:25
On Monday, January 13, 2003, at 11:36 PM, Woody Zenfell, III wrote:

> I'd like to take one idea from my monstrous idea dump, copy-on-write (COW) game objects, and talk a little more about it.
>
> Remember, the goal is to be able to split off into a fake (predicted) game-state for one or more ticks, then later return to the original (real) game-state, with the game-update logic being essentially the same for predictive updates and real updates (and as similar to the current code as possible, for practical reasons).
>
> (Sounds could be sticky, but I'll put that problem off till later.)

Hsm. I admit that I'm having trouble following the discussion of copy-on-write. This might be due to lack of sleep or the minutiae that these discussions are going into.

Also, we don't necessarily have to keep it similar to the current code. Prime importance is having a clear description of how networking ties in with input and game core.

How does the GUI layer and the core layer work together? How does the network layer and the core layer work together? What scenarios do you see happening?

If we assumed an 'animation layer' was appropriate, would the following be acceptable?

1. User enters input
2. Input is sent to other servers (send to network)
3. Other players' inputs are accumulated (receive from network)
4. Update game core with new info
5. Game core overwrites the animation layer's dynamic data with new info
6. Go back to process user's input

The animation layer itself runs faster than the game core. Its information tends to keep track of a simple subset of Game Core info. (Game core decides that a player is at x,y and is turning/firing, monster x is charging in a particular vector.) Game Core does all the heavy lifting of AI, and the animation layer would just do fluff based on last known info to add frames between decisions.

I should emphasize that this is one possibility. I would like to see the discussion a little higher level in description right now. You're speaking of copy on write, but which side of the code is responsible for that? In retrospect, what I'm thinking of as an 'animation layer' might just be a different way of expressing the same issues. Then again, it's also trying to address decoupling framerate from the game's current ~30fps game running.

(I don't currently know the feasibility of having the world itself be actively managed over 30 fps; would we have to make sure that M2-style monsters aren't allowed to readjust their AI except every 1/30th of a second?)

-Jeremy Parsons
From: Alexander S. <ast...@it...> - 2003-01-16 00:31:30
On Wednesday, January 15, 2003, at 07:23 PM, Joe Auricchio wrote:

>> OPTION 3. Arrange game-state structures explicitly in memory and bulk-copy
> ...
>> On ExitPredictiveMode(), of course, the copy is simply deallocated and the base pointer set back to the "real_mode" chunk of memory. Cheap cheap, fun fun.
>
> Could we just let the predictive copy rot

And do an madvise() on it if we're on a POSIX system; that'll tell the VM we don't care about those pages.
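Alexander's suggestion might look something like the sketch below on a POSIX system. The arena names are invented for illustration; note that the zero-fill-on-next-read behavior of MADV_DONTNEED for anonymous private mappings is Linux-specific, so this is a sketch of the idea rather than portable engine code.

```cpp
#include <sys/mman.h>
#include <cstddef>

// Hypothetical scratch region for the predictive game-state copy. Rather
// than freeing it on ExitPredictiveMode(), we keep the mapping and tell
// the VM we no longer care about the page contents.
struct PredictiveArena {
    void*       base = nullptr;
    std::size_t size = 0;
};

PredictiveArena arena_create(std::size_t size) {
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    return { p == MAP_FAILED ? nullptr : p, size };
}

// Called from a hypothetical ExitPredictiveMode(): the pages stay mapped,
// but the kernel may reclaim them until we scribble on them again.
void arena_discard(PredictiveArena& a) {
    if (a.base) madvise(a.base, a.size, MADV_DONTNEED);
}

// Self-check: on Linux, discarded anonymous pages read back zero-filled.
bool demo_discard() {
    PredictiveArena a = arena_create(4096);
    if (!a.base) return false;
    auto* bytes = static_cast<unsigned char*>(a.base);
    bytes[0] = 0xAB;             // predictive scribbling
    arena_discard(a);
#ifdef __linux__
    return bytes[0] == 0;        // Linux: pages come back zero-filled
#else
    return true;                 // elsewhere, contents are unspecified
#endif
}
```

The point is purely a VM courtesy: the "rotting" copy costs address space but stops costing physical memory once the kernel takes the hint.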
From: Joe A. <av...@fm...> - 2003-01-16 00:23:34
> OPTION 3. Arrange game-state structures explicitly in memory and bulk-copy
...
> On ExitPredictiveMode(), of course, the copy is simply deallocated and the base pointer set back to the "real_mode" chunk of memory. Cheap cheap, fun fun.

Could we just let the predictive copy rot, and simply put the base pointer back to the real_mode area? Then on ExitPredictiveMode, all we have is one pointer twiddle and one InPredictiveMode flag cleared. We also avoid allocating the memory each time. All the pointers to those objects disappear, BUT we know where the chunk of memory is (it's wherever we change the base pointer to on EnterPredictiveMode), and how big it is too, so we can allocate/deallocate the entire chunk when we start/end the game.

This approach feels a little wrong on a gut level, but as long as we write it right it should be fine. And I'm sure the VM system will thank us for not hitting it up for sizeof(the world) every 30th of a second.

Joe Auricchio ~ av...@fm...
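Joe's refinement of OPTION 3 might look something like this. EnterPredictiveMode()/ExitPredictiveMode() are the names the thread already uses hypothetically; the game-state layout here is a stand-in, since the real state is of course far more involved.

```cpp
#include <cstring>

// All mutable game-state lives in one contiguous chunk reached through a
// base pointer. Entering predictive mode memcpy's the real chunk into a
// scratch chunk (allocated once per game) and repoints the base; exiting
// is one pointer twiddle plus one flag clear, and the scratch contents rot.
struct GameStateChunk {
    int data[1024];  // stand-in for players, monsters, projectiles, ...
};

static GameStateChunk  real_state;
static GameStateChunk  scratch_state;           // lives for the whole game
static GameStateChunk* current = &real_state;   // all updates go through this
static bool in_predictive_mode = false;

void EnterPredictiveMode() {
    std::memcpy(&scratch_state, &real_state, sizeof(GameStateChunk));
    current = &scratch_state;   // updates now hit the disposable copy
    in_predictive_mode = true;
}

void ExitPredictiveMode() {
    current = &real_state;      // one pointer twiddle...
    in_predictive_mode = false; // ...one flag clear; no free, no copy-back
}

// Self-check: predictive scribbles never reach the real state.
bool predictive_roundtrip() {
    real_state.data[0] = 7;
    EnterPredictiveMode();
    current->data[0] = 99;      // predictive update
    ExitPredictiveMode();
    return current->data[0] == 7 && !in_predictive_mode;
}
```

The gut-level "wrong" feeling Joe mentions comes from leaving stale data live in memory; the madvise() suggestion in the next message is one way to make that cheap as well as safe.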
From: Michael A. <mdm...@ya...> - 2003-01-15 15:51:38
|
FYI: At least in the open source world, the state of the art in cross-platform UI would be wxWindows (www.wxwindows.org). Closed source (to Windows developers, open to Linux) would probobly be QT. This is a topic I've spend a little time on. Note: wxWindows "hijacks" main though IIRC you still have a real function called "main". wxWindows just uses macros to intersept and have *its* main be called before yours. This allows the differences of main vs WinMain to be transparent. Most non-windows developers probobly aren't aware that the function called in a Windows app is not main but WinMain, so any cross platform solution will *have* to hijack main in some way or another. Michael D. Adams mdm...@ya... --- "Woody Zenfell, III" <woo...@sb...> wrote: > On Tuesday, January 14, 2003, at 09:38 PM, Br'fin > wrote: > > > A1 has far more things "in parallel" than it > should. I don't agree with > > you, Woody, on which things should be shared. > > I hope some of your next messages illuminate what > you have in mind, > then... I am beginning to suspect I just > misunderstand what you're > thinking and that's why things don't seem to be > making sense. In any > event, I think I've made my points many times over, > and if they're not > compelling for you, then you obviously have some > reasons. :) But what > are they? > > > But I agree that A1 proper doesn't share enough. I > applaud you and > > whoever else managed to get SDL working with A1. > The basic code only > > has some nods towards cross-platform issues. > > I can't really take much (if any) credit for that > stuff. I tend to > deflect it to Christian Bauer, whom I gather did > quite a bit, but even > then he might not have acted alone - I don't know. > > > SDL may or may not be appropriate within AM based > tools. Mostly it > > would depend on how nicely it can play with > others. 
> > I concede that picturing SDL in a traditional > application environment > (like an editor) kind of gives me the willies - > shades of like an MS-DOS > painting program or something. <<shudder>> :) SDL > might serve as a > "buffer provider" into which the software renderer > draws while playing a > game, but something else fills that role (through > the same higher-level > code interface from the renderer's viewpoint of > course) in a traditional > app. > > The traditional app strikes me as more likely to be > platform-specific, > anyway (probably Cocoa, and if anyone asks why, then > they probably > haven't worked in Cocoa ;) ). Though there are > cross-platform > general-purpose UI widget toolkits, I'm not sure > I've encountered any > that really get the "look and feel" right > (especially on a > detail-oriented platform like the Mac). But I am by > no means an > authority on cross-platform UI widget toolkits. :) > > Woody > > > > ------------------------------------------------------- > This SF.NET email is sponsored by: Take your first > step towards giving > your online business a competitive advantage. > Test-drive a Thawte SSL > certificate - our easy online guide will show you > how. Click here to get > started: > http://ads.sourceforge.net/cgi-bin/redirect.pl?thaw0027en > _______________________________________________ > Alephmodular-devel mailing list > Ale...@li... > https://lists.sourceforge.net/lists/listinfo/alephmodular-devel __________________________________________________ Do you Yahoo!? Yahoo! Mail Plus - Powerful. Affordable. Sign up now. http://mailplus.yahoo.com |
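[Editor's note] The macro trick Michael describes above can be sketched as follows. This is a simplification of what wxWindows and SDL actually do (their real macros also smooth over WinMain's different signature), and the names `user_main` and `run_toolkit` are invented for illustration:

```cpp
// The toolkit's header quietly renames the application's main():
#define main user_main

// The application author writes an ordinary-looking main(); after
// preprocessing it is really a function named user_main.
int main(int argc, char** argv) {
    (void)argc; (void)argv;
    return 42;  // stands in for the game's real work
}

#undef main

// The toolkit then supplies the *real* platform entry point (main on
// most platforms, WinMain on Windows), performs its platform setup,
// and calls the renamed user function -- making the difference
// between main and WinMain transparent to the application author.
int run_toolkit(int argc, char** argv) {
    // ... platform-specific initialization would go here ...
    return user_main(argc, argv);
}
```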
From: Woody Z. I. <woo...@sb...> - 2003-01-15 08:46:20
|
On Tuesday, January 14, 2003, at 09:38 PM, Br'fin wrote: > A1 has far more things "in parallel" than it should. I don't agree with > you, Woody, on which things should be shared. I hope some of your next messages illuminate what you have in mind, then... I am beginning to suspect I just misunderstand what you're thinking and that's why things don't seem to be making sense. In any event, I think I've made my points many times over, and if they're not compelling for you, then you obviously have some reasons. :) But what are they? > But I agree that A1 proper doesn't share enough. I applaud you and > whoever else managed to get SDL working with A1. The basic code only > has some nods towards cross-platform issues. I can't really take much (if any) credit for that stuff. I tend to deflect it to Christian Bauer, whom I gather did quite a bit, but even then he might not have acted alone - I don't know. > SDL may or may not be appropriate within AM based tools. Mostly it > would depend on how nicely it can play with others. I concede that picturing SDL in a traditional application environment (like an editor) kind of gives me the willies - shades of like an MS-DOS painting program or something. <<shudder>> :) SDL might serve as a "buffer provider" into which the software renderer draws while playing a game, but something else fills that role (through the same higher-level code interface from the renderer's viewpoint of course) in a traditional app. The traditional app strikes me as more likely to be platform-specific, anyway (probably Cocoa, and if anyone asks why, then they probably haven't worked in Cocoa ;) ). Though there are cross-platform general-purpose UI widget toolkits, I'm not sure I've encountered any that really get the "look and feel" right (especially on a detail-oriented platform like the Mac). But I am by no means an authority on cross-platform UI widget toolkits. :) Woody |
From: Woody Z. I. <woo...@sb...> - 2003-01-15 08:26:51
|
On Tuesday, January 14, 2003, at 10:21 PM, Timothy Collett wrote: > Just to step into this debate a minute... > > I don't know all that much about SDL. If we made AM be largely > SDL-dependent, would that mean that one would have to install SDL to > run AM? That is, could all the relevant libraries (reasonably) be part > of AM itself, or do they have to be installed separately? Others have talked about the situation on Mac OS X (can be bundled with the application). FWIW, Windows lets you put the .dll's in the file with the executable, which is how A1/SDL for Windows currently works. (No need to install SDL separately or place things in your system directories etc.) > Also, what are the different important parts of SDL, how exactly could > each be used within AM, and how much interdependence is there among > them? This is too big a question to really answer, but briefly, the purpose of SDL is to provide a common interface across a very wide variety of platforms for the following: * Video (i.e. gaining byte-level access to the framebuffer, blitting 2D surfaces, getting an OpenGL context, setting the screen mode and bit depth, etc.) * Window Manager (very little in here, mostly resize window and toggle fullscreen) * Event Handling (processing mouse clicks, keystrokes, etc.) * Joystick * Audio (very low-level access, you provide the bytes for the sound card to play back) * CDROM (using CD-ROM drive as CD audio player) * Threads * Timers (getting time values, waiting, and asking for periodic scheduling) * Endian-independence (byte-swapping) * Main (providing a single source-code entry point for you so you don't have to worry about things like WinMain() on Windows) Those are in the core "SDL". Each is essentially an independent subsystem which can be used with or without the rest. SDL_net contains, essentially, nonblocking TCP and UDP sockets, select() (i.e. wait for data arrival etc.), and DNS name-to-host resolution. 
(Many platforms have BSD sockets or something a lot like them, so this isn't too exciting in most cases, but it does serve to hide the little differences. Also, significantly, it works in Mac OS 9, which does *not* have a sockets-like programming API on its own.) SDL_net requires SDL, but only (as noted) for SDL_GetError() and SDL_SetError(). I suppose you could probably provide your own implementations for those symbols if you really wanted to dump SDL and only use SDL_net. SDL_image lets your code load the following image formats into an SDL surface, suitable for byte-level inspection/manipulation (e.g. setting up in an OpenGL texture) or blitting with the SDL Video subsystem: BMP, PNM, XPM, LBM, PCX, GIF, JPEG, PNG, TGA. I suppose it requires the SDL Video subsystem. (From the SDL_mixer site:) SDL_mixer is a sample multi-channel audio mixer library. It supports any number of simultaneously playing channels of 16 bit stereo audio, plus a single [stereo] channel of music, mixed by the popular MikMod MOD, Timidity MIDI, Ogg Vorbis, and SMPEG MP3 libraries. (end quote) It does require the free SMPEG and libvorbis libraries for MP3 and Ogg Vorbis support. (MikMod, which can play virtually any file in the (large) MOD family, is built in directly.) See www.libsdl.org for more information. > The reason I ask these things is that I am leaning, along with Br'fin, > strongly in the direction of having SDL be a separate platform. My > main reasons are two: dislike of dependence on separately-installed > libraries and dislike of SDL widgets. Well I think we've talked about the first one pretty well (though I don't know how it works in Linux etc., IIRC in Mac OS 9 you can also put the library in the folder with the application and have it work - oh right, Mac OS 9 supports bundles too, so the user wouldn't even have to see it). Anyway I assume the plan is probably to use SDL_net anyway...? which means some of the libraries will be there regardless. 
I'm not sure what you guys are talking about with this SDL widget business. Though there are several cross-platform, typically game-oriented GUI libraries out there that are built on top of SDL, SDL itself has no "widgets". The stuff used in A1 was written by Christian Bauer for A1 and tweaked up a bit by me. Woody |
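[Editor's note] As a concrete illustration of one of the smaller subsystems Woody lists above, the endian-independence piece is conceptually just byte-swapping. A hand-rolled equivalent of what SDL's SDL_Swap32() provides might look like:

```cpp
#include <cstdint>

// Reverse the byte order of a 32-bit value, e.g. for reading
// big-endian Marathon data files on a little-endian machine.
std::uint32_t swap32(std::uint32_t v) {
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}
```

The point of the SDL version is simply that the library picks the right (possibly no-op) implementation per platform, so game code never tests endianness itself.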
From: Br'fin <br...@ma...> - 2003-01-15 05:01:17
|
On Wednesday, January 15, 2003, at 12:46 AM, Mark Levin wrote: >> I don't know all that much about SDL. If we made AM be largely >> SDL-dependent, would that mean that one would have to install SDL to >> run AM? That is, could all the relevant libraries (reasonably) be >> part of AM itself, or do they have to be installed separately? > > Under OS X, SDL is contained within the application bundle (the Carbon > version has been dependent on SDL for some time now). On other > platforms, it would need to be installed as a system library. D'oh. I forgot about other platforms. :) -Jeremy |
From: Mark L. <hav...@ma...> - 2003-01-15 04:50:04
|
> I don't know all that much about SDL. If we made AM be largely > SDL-dependent, would that mean that one would have to install SDL to > run AM? That is, could all the relevant libraries (reasonably) be > part of AM itself, or do they have to be installed separately? Under OS X, SDL is contained within the application bundle (the Carbon version has been dependent on SDL for some time now). On other platforms, it would need to be installed as a system library. --Mark "Don't talk to me about murder. I invented murder." |
From: Br'fin <br...@ma...> - 2003-01-15 04:43:16
|
On Tuesday, January 14, 2003, at 11:21 PM, Timothy Collett wrote: > > Just to step into this debate a minute... > > I don't know all that much about SDL. If we made AM be largely > SDL-dependent, would that mean that one would have to install SDL to > run AM? That is, could all the relevant libraries (reasonably) be > part of AM itself, or do they have to be installed separately? Thanks to the magic of OS X bundles... It is possible to embed the necessary SDL Frameworks within a Carbon application that uses them. I made sure this was possible after shoe-horning SDL network code into AlephOne carbon. I too agree that we don't want extra downloadables/installs. I'm not sure if my use of the SDL libs in AlephOne is appropriately setup with respect to OSX's desire/interest to prebind its libraries though. :/ This is not a factor in my arguments against SDL as the base platform. -Jeremy Parsons |
From: Timothy C. <tco...@ha...> - 2003-01-15 04:22:17
|
> Ok and as a final parting shot about the SDL stuff, it is completely > possible and practical to use, say, SDL_Video to insulate you against > the differences between DisplaySprocket and DirectDraw et al., or > between wgl and agl et al. (for attaching OpenGL contexts to on-screen > windows etc.), etc. without committing to using SDL_main to insulate > you from differences in application initialization between platforms, > and without committing to using SDL_event as the basis for your event > loop. (Perhaps Br'fin's negative experiences have primarily been with > these latter components.) So you can mix-n-match. :) Just to step into this debate a minute... I don't know all that much about SDL. If we made AM be largely SDL-dependent, would that mean that one would have to install SDL to run AM? That is, could all the relevant libraries (reasonably) be part of AM itself, or do they have to be installed separately? Also, what are the different important parts of SDL, how exactly could each be used within AM, and how much interdependence is there among them? The reason I ask these things is that I am leaning, along with Br'fin, strongly in the direction of having SDL be a separate platform. My main reasons are two: dislike of dependence on separately-installed libraries and dislike of SDL widgets. Thank you. Timothy Collett "There is a theory that states that if ever anyone discovers what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory that states that this has already happened." - The Restaurant at the End of the Universe, by Douglas Adams |
From: Alexander S. <ast...@it...> - 2003-01-15 04:08:37
|
On Tuesday, January 14, 2003, at 11:02 PM, Alexander Strange wrote: > > On Tuesday, January 14, 2003, at 10:38 PM, Br'fin wrote: >> I presume that AlephOne carbon has had its main hijacked > > No, it hasn't. I just checked. And looking at SDL_net, SDLNet_Init is a no-op, and the only thing it calls from SDL is SDL_SetError... (this is on OSX, of course) |
From: Alexander S. <ast...@it...> - 2003-01-15 04:02:41
|
On Tuesday, January 14, 2003, at 10:38 PM, Br'fin wrote: > I presume that AlephOne carbon has had its main hijacked No, it hasn't. I just checked. And I expect the reason PB debugging doesn't work is all the funny optimization flags I'm using; but I usually use gdb from the Terminal anyway. > (SDL is a requirement for including SDL_net) but I wouldn't notice so > much there. Its menus aren't visible after all. > > SDL may or may not be appropriate within AM-based tools. Mostly it > would depend on how nicely it can play with others. > > -Jeremy |
From: Br'fin <br...@ma...> - 2003-01-15 03:37:53
|
On Tuesday, January 14, 2003, at 09:35 PM, Woody Zenfell, III wrote: > > Well I say the first effort, but I guess I don't really mean that - > the first effort _could_ be Mac OS only, as long as that code knows > it's doomed, and is, in fact, replaced to the greatest degree possible > with more portable code when the latter arrives (which I would hope > would be very shortly thereafter). My desire is to encapsulate the MacOS-specific code into areas that are specifically labeled for the MacOS platform. Here's the find-file API, and under MacOS this is implemented with MacOS-specific foo. The degree to which this happens does have several criteria, including actively or passively breaking classic compatibility. (OSX code itself can lean much more towards meshing with unix definitions, of course :) ) > From my point of view one of the major problems with A1 currently is > that there are often two versions of the same thing "in parallel", > when the cross-platform case *could* completely replace the Mac > OS-specific version. A1 has far more things "in parallel" than it should. I don't agree with you, Woody, on which things should be shared. But I agree that A1 proper doesn't share enough. I applaud you and whoever else managed to get SDL working with A1. The basic code only has some nods towards cross-platform issues. >> The problem is that A1 has pretty much painted itself into a corner >> and the effort of adding all these nice features would be more than >> that expended to bring AM up to A1's level and then adding the >> features :) > > I'm not convinced this is the case - or more accurately, I'm not > convinced that modularizing A1, rather than M2, and then adding the > new features would be more effort. But, as noted, it doesn't really > matter at this point. Remember all those "in parallel" elements you mentioned? A1 has both Mac vs SDL and software renderer vs OpenGL. You've a lot more stuff to worry about breaking. 
On top of that, certain areas continue to feel mis-implemented. OpenGL HUD flicker still seems to be a lingering complaint and problem. > Ok and as a final parting shot about the SDL stuff, it is completely > possible and practical to use, say, SDL_Video to insulate you against > the differences between DisplaySprocket and DirectDraw et al., or > between wgl and agl et al. (for attaching OpenGL contexts to on-screen > windows etc.), etc. without committing to using SDL_main to insulate > you from differences in application initialization between platforms, > and without committing to using SDL_event as the basis for your event > loop. (Perhaps Br'fin's negative experiences have primarily been with > these latter components.) So you can mix-n-match. :) > I believe Bryce over at Bochs would love to know how to defer/control SDL initialization so that it's only incorporated when the SDL UI plugin is used instead of its current behavior of stealing main(). I presume that AlephOne carbon has had its main hijacked (SDL is a requirement for including SDL_net) but I wouldn't notice so much there. Its menus aren't visible after all. SDL may or may not be appropriate within AM-based tools. Mostly it would depend on how nicely it can play with others. -Jeremy |
From: Woody Z. I. <woo...@sb...> - 2003-01-15 02:48:57
|
I should make explicit that my comments about the usefulness of plist preferences were made before I received this message from Br'fin. Those comments would have been different otherwise. Though I don't recall exactly where I came down in that A1 discussion, what Br'fin's saying here makes sense to me. Right, maintaining backwards-compatibility (even with earlier versions of itself) has been another particularly weak spot in A1. Now *there* might be a good argument for AM's "back to basics" approach. OK, I _swear_ I'm shutting up now ("and there was much rejoicing"), because I have to test and submit a few more patches for A1. And assign percussion parts for Shostakovich's 1st symphony. :) So umm, everybody please quit saying interesting things. ;) (Ha ha, I am only kidding of course.) Woody On Tuesday, January 14, 2003, at 08:30 PM, Br'fin wrote: > A correction to the issue of plists. > > Folks were looking to redo the preferences in XML. My suggestion of XML > specification was for plists. (I wasn't even suggesting using the OSX > APIs to access these files!) This would have been a cross-platform and > XML-based preferences file with the additional bonus that under OSX > there were tools to read/write plists *outside* of A1. > > As an additional bonus, plists aren't tied to the needs of any one > program. So if one platform needed an extra preferences item, it could > add it without requiring additional XML tags and code in A1. > > .... > > How fascinating. > > You know where the MML vs PLIST XML preferences came up? April 2002. > What did it arise from? The accursed preferences related assert bug > What was done? Loren redid the preferences in a custom XML format. > When were we still finding the accursed assert? October, 2002 > > Now I'm pretty sure the preferences were changed to XML and released > (With fun havok there since that also was a sudden change and I don't > remember any preferences converting code as being part of the deal...) 
> So this must have been old AlephOne versions floating around without us > thinking about it. |
From: Woody Z. I. <woo...@sb...> - 2003-01-15 02:36:03
|
All right talked to death yes; my hope is that this message does a little forward-looking as well as back- (trying to swing the previous discussion around somewhat into more immediately helpful stuff). On Tuesday, January 14, 2003, at 06:10 PM, Mark Levin wrote: > There should be no "preferred" branch. There is high-level game code > which should be 100% platform agnostic and low-level platform code > which is necessarily restricted to 1 platform. I disagree about the lack of a "preferred" branch, but you know that. I also disagree on the latter part though - a good, practical decomposition would not have only two levels like this. Maybe I haven't been clear on this whole SDL thing. Yes, there would be high-level game code that is totally platform-independent. Yes, it would interact with lower-level code through some well-defined interface. But that lower-level code is *not* necessarily separate for every platform. Indeed, an SDL-based software renderer, or sound system, could serve many different specific platforms, just as stdio- or fstreams- or whatever-based file I/O could serve many different specific platforms. So maybe (almost) all platforms use stdio for saving/loading data from disk, but they have different locations to place and search for those files. And if some of them support mmap()-type functionality and some don't, then those that do are free to use it instead of stdio. But the baseline, the first effort at writing this lower-level code in the new modular format, should use methods that are as portable as possible, and should resort to platform-specificity only when there are significant gains to doing so. Well I say the first effort, but I guess I don't really mean that - the first effort _could_ be Mac OS only, as long as that code knows it's doomed, and is, in fact, replaced to the greatest degree possible with more portable code when the latter arrives (which I would hope would be very shortly thereafter). 
From my point of view one of the major problems with A1 currently is that there are often two versions of the same thing "in parallel", when the cross-platform case *could* completely replace the Mac OS-specific version. >> (Of course this is essentially the same argument as for using standard >> library calls and SDL instead of Mac OS calls whenever possible, >> though they are different issues. Shared code, not separate >> branches. Shared code, not separate branches!!) > > Of course, the flip side of that is, what if we *do* want to take > advantage of features only available on 1 platform? Clearly, when it's advantageous to use a more platform-specific interface for doing something, the platform-specific code simply overrides the generic implementation. > There was an enormous argument on the A1 list over whether plain XML or > Apple's property list format should be used for the preferences file, > the primary arguments being the former was cross-platform and the > latter was much easier on a Mac. If the preferences code was properly > modular, it would have been possible to write a MacOS-specific plist > loader and an XML loader for everything else. This would have been a Bad Thing (serves no function except to complicate the code and encourage platform differences), but I understand you're using it merely as an example. > And what if we want to use something like Quicktime, which is probably > far more powerful than SDL's media handlers? You might be surprised at how much SDL_image and SDL_mixer can do. But if platforms with QuickTime want to use QuickTime (e.g. for video content, if no suitable widely-available alternative can be found), there's no problem with that. But the initial, basic implementation should be one that is as widely-applicable as possible. The Windows and Mac OS versions can *then* choose to override the baseline with QuickTime-based routines if they see fit. 
If there's functionality that really isn't already available in a cross-platform package somewhere (like sound input, to my knowledge), then we have to go through the whole 9 yards of writing a different implementation for every platform. But that should be the exception, not the norm. (Especially when C/C++ standard library authors, SDL authors, etc. have *already* done all the work of writing a different implementation for every platform and giving them all the same interface!) > As for the specific example, I think the cut would be in the > implementation of *get*_control_value. I think we agree (though I'm not sure of the significance of the *'s around get). The high-level game code says "Ok, user clicked Gather Network Game, so let's handle that." Probably most platforms have the same routine for this, which says "Ok, first let's put up the Setup Network Game dialog". Probably most platforms have the same routine for _this_, which first consists of like "Ok, find and display the dialog box with this id:". Now here, the platforms start to diverge more widely. Maybe the baseline SDL code constructs the dialog programmatically. Maybe the Cocoa version asks the Mac OS to construct it from a resource or a nib or something instead, so it conforms to the Aqua UI. But then they return to that shared "Setup Network Game dialog" routine, which next says "Ok, let's fill in the values from the Preferences." And again, most platforms use the same routine for *that*, which consists of calling (alternately) accessors to get Preferences values and "set dialog item value to:" routines. This latter routine has perhaps the same implementation on all platforms that are using the baseline GUI stuff, but the Mac OS version has its own version that instead calls SetDialogItemText() or whatever the Mac-specific API is. 
If someone later wants the Windows version to have a Windows-like UI, they can override some of the baseline routines ("set dialog item value to:", "find and display dialog box with id:", etc.) with Win32-specific routines. If someone later wants to add new items to the Setup Network Game box, they can put the code to initialize the values, update control states (enabled, etc.) as appropriate when the new items are used, and read the values back into the game_info (or whatever the structure's called) when the user hits "OK". And then, the people who maintain (look, there *I* go now ;) ) the Mac OS UI and the Windows UI merely have to update the dialog-creation so that the new elements show up; there's no need to re-code the initialization, control-state logic, reading back, etc. for each different UI platform. One obvious way to do this is to have base classes for these things that use the least-common-denominator implementations, which platform-specific classes inherit by default but may override (or completely replace) if they wish. And there only need to be platform-specific classes if the platform indeed wishes to override some methods; else the generic (base class) version can be used directly. Some sort of centralized resource (some might call it a "Factory") would be responsible for dishing out a WindowsSetupNetworkGameDialog object or a WindowsDialogInterface (this latter being the part that maps set_control_value() to whatever the Win32 routine is) or just a plain SetupNetworkGameDialog object (or DialogInterface) based on some criteria or other when code says "Hey, give me the Setup Network Game Dialog object". It would not have to dish out only all Windows components or all generic components, of course, either; it could mix n match according to the builder's/user's/whatever's whim. 
Its ability to mix-n-match would naturally be greatly influenced by the object granularity - if there's only one big platform object that is responsible for all the details of drawing and sound playback and dialogs and files etc. then it can only be used or not. OTOH, having a little object for every minute aspect of dialog management would be not only cumbersome, but perhaps even error-prone as some versions of routines may be incompatible with other versions of other routines, and it'd be difficult for the "Factory" to get all the details right. (The inherit-the-class way is not the only way of course, but given that we're using C++ it seems the most obvious.) > In fact, there would be no get_control_value call at all; there would > be an (e.g.) get_screen_size call exposed from the preferences module > which corresponds to some other call inside the module that obtains the > value from the GUI or the preferences database. There wouldn't be a > set_control_value call either, there would be a set_screen_size called > by (e.g.) pressing F1 that would update the platform-specific elements. I think you're off-track here. Yes there should be a routine to ask the preferences for the user's preferred screen size. But that doesn't mean that that routine is used in the Preferences dialog. Likewise, there should be a routine that says "Make the screen size one notch bigger!" or "Set the screen size to NxM!" but again, these would not be used by the Preferences dialog. Well, not in my formulation anyway. > The problem is that A1 has pretty much painted itself into a corner and > the effort of adding all these nice features would be more than that > expended to bring AM up to A1's level and then adding the features :) I'm not convinced this is the case - or more accurately, I'm not convinced that modularizing A1, rather than M2, and then adding the new features would be more effort. But, as noted, it doesn't really matter at this point. 
Ok and as a final parting shot about the SDL stuff, it is completely possible and practical to use, say, SDL_Video to insulate you against the differences between DisplaySprocket and DirectDraw et al., or between wgl and agl et al. (for attaching OpenGL contexts to on-screen windows etc.), etc. without committing to using SDL_main to insulate you from differences in application initialization between platforms, and without committing to using SDL_event as the basis for your event loop. (Perhaps Br'fin's negative experiences have primarily been with these latter components.) So you can mix-n-match. :) Woody |
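[Editor's note] The inherit-and-override arrangement with a centralized Factory that Woody describes might be sketched as follows. The class names and the string-returning stand-in bodies are invented purely for illustration:

```cpp
#include <memory>
#include <string>

// Baseline dialog plumbing, shared by every platform that uses the
// generic (e.g. SDL-drawn) GUI.
class DialogInterface {
public:
    virtual ~DialogInterface() {}
    virtual std::string set_control_value(int item, int value) {
        return "generic:" + std::to_string(item) + "=" + std::to_string(value);
    }
};

// A platform overrides only what it must; the rest is inherited.
class MacDialogInterface : public DialogInterface {
public:
    std::string set_control_value(int item, int value) override {
        // A real version would call SetDialogItemText() or a similar
        // platform-native routine here.
        return "mac:" + std::to_string(item) + "=" + std::to_string(value);
    }
};

// The centralized "Factory": one place decides which concrete class
// to dish out, so callers never name platform classes directly.
std::unique_ptr<DialogInterface> make_dialog_interface(bool want_mac_ui) {
    if (want_mac_ui)
        return std::unique_ptr<DialogInterface>(new MacDialogInterface);
    return std::unique_ptr<DialogInterface>(new DialogInterface);
}
```

Returning strings here merely makes the dispatch observable; the point is that adding, say, a Windows-native UI later means adding one subclass and one branch in the factory, not re-coding the dialog initialization, control-state logic, and read-back for each platform.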
From: Br'fin <br...@ma...> - 2003-01-15 02:29:45
|
> Of course, the flip side of that is, what if we *do* want to take > advantage of features only available on 1 platform? There was an > enormous argument on the A1 list over whether plain XML or Apple's > property list format should be used for the preferences file, the > primary arguments being the former was cross-platform and the latter > was much easier on a Mac. If the preferences code was properly > modular, it would have been possible to write a MacOS-specific plist > loader and an XML loader for everything else. And what if we want to > use something like Quicktime, which is probably far more powerful than > SDL's media handlers? Or Direct3D on Windows? A correction to the issue of plists. Folks were looking to redo the preferences in XML. My suggestion of XML specification was for plists. (I wasn't even suggesting using the OSX APIs to access these files!) This would have been a cross-platform and XML-based preferences file with the additional bonus that under OSX there were tools to read/write plists *outside* of A1. As an additional bonus, plists aren't tied to the needs of any one program. So if one platform needed an extra preferences item, it could add it without requiring additional XML tags and code in A1. .... How fascinating. You know where the MML vs PLIST XML preferences came up? April 2002. What did it arise from? The accursed preferences-related assert bug. What was done? Loren redid the preferences in a custom XML format. When were we still finding the accursed assert? October 2002. Now I'm pretty sure the preferences were changed to XML and released (With fun havoc there since that also was a sudden change and I don't remember any preferences-converting code as being part of the deal...) So this must have been old AlephOne versions floating around without us thinking about it. -Jeremy Parsons |
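[Editor's note] For readers unfamiliar with the format under discussion: a property list is ordinary XML with a fixed, program-independent vocabulary of keys and typed values. A hypothetical AM preferences fragment in plist form (the key names here are invented) might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ScreenWidth</key>
    <integer>640</integer>
    <key>UseOpenGL</key>
    <true/>
</dict>
</plist>
```

Because the element vocabulary is fixed (`dict`, `key`, `integer`, `string`, `true`/`false`, ...), any platform can add a new key without new XML tags or new parsing code, which is the bonus described above.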
From: Br'fin <br...@ma...> - 2003-01-15 01:59:46
|
On Tuesday, January 14, 2003, at 07:19 PM, Woody Zenfell, III wrote: > >> There were other issues too, relating to cruft. Since shifting to >> MacOS X 10.2.x and PB 2.0.x, I've been unable to figure out how to >> run AO within the debugger. Talk about annoying. > > Yeah, I was unable to link the thing for about 6 months, til I found > -force_flat_namespace... > > Maybe it's the older devtools, but I find the debugger almost useless > within PB; it always seems to get hung up or confused or something. I > always run it from the command line. FWIW. I found the debugger to rock, but I'm relatively unfamiliar with debuggers too :) Something with how A1 is linked or something, but under the newer tools GDB ends up reporting some problem linking to one library or another and then exits. I admit I don't know how to set up the project file to support multiple versions of PB. Guess it's a problem of it not being as refined as the unix make process :) >> My overall assumption is that if AM can be steered right, then it >> should be possible to adapt things like AO's OpenGL code. And it >> wouldn't be the first time I had shoe-horned things together :) > > Well, I hope you're right... and hearing you state it as if _you_ plan > to do that sort of thing (and not just leave it to unspecified others) > gives me cause for hope. :) It probably is a Good Thing to figure > out a framework and then fit the pieces together, rather than vice > versa as would need to be done with A1... but OTOH I think examining > A1 could inform the design of that framework. Learning from history > and all that. But I'm sure that's all part of the plan. A1 is both a source of learning what to do and, alas, what not to do. Certainly when I find something not working or serialized or what not I do tend to peek at what AO looks like in that area :) > BTW thanks for the bug summary. 
I find it a little curious that you > seem to feel that your looking for and fixing these bugs now, as > opposed to earlier, is somehow A1's fault... but I can see where > you're coming from nonetheless. To be fair, most of the bugs I wasn't aware of before. Almost anything software rendering wise would have skipped my notice. One of the bugs is probably not even noticeable in the fix. I fixed it when I found the compiler giving warnings about something being shifted by TRIG_MAGNITUDE (1024) instead of the appropriate TRIG_whatever shifting define. Two of them I would consider high-profile. One is the disappearing close creature one. And the other is that darn assertion. It felt like everyone on the list knew to trash the preferences if someone complained about the assertion, but it went on for how long? My knowledge of the code has certainly grown since then. > I think it's a Good Thing that AM has someone In Charge (and doubly so > that that someone happens to be as thoughtful as Br'fin). > > Well anyway I think we've talked this one to death ... what's next for > us to talk about wrt AM? Proposals for decomposition into systems and > subsystems etc.? Or are you not quite at that point yet? Yes actually. I really need to stop flexing my milestone designations. Though on that front I picked up a fun book. 'Game Architecture and Design' by Andrew Rollings and Dave Morris. Given that 1/3rd of it is team planning/management and another 1/3rd deals with appropriate architectures, it seemed like a good buy with x-mas gift certificates, especially since I found it half off the cover price. I would love to see some actual proposals. My considerations right now are for hardware abstraction components. For instance, what would updated elements of portable_files or screen (the display module) look like? -Jeremy Parsons |
From: Woody Z. I. <woo...@sb...> - 2003-01-15 00:19:34
|
On Tuesday, January 14, 2003, at 04:56 PM, Br'fin wrote:

> Unknown on the AM versus restructuring A1 issue. At least part of where
> I felt AO had failed was in its CVS management. Plus I could no longer
> be sure whether a behavior was AO amended or original. At which point,
> it just seemed easier to start fresh with Bungie's code. This would
> guarantee that for AM I had a complete history from Bungie's code to
> present.

I agree; perhaps A1's greatest single failure was not preserving the histories of those files. I feel I was quite outspoken at the time about the importance of preserving them, and I was (outspokenly) aghast when I found that all that information had been simply discarded. (Well, I guess it's still in the CVS "attic", but it's hard to reconnect current files to their 'attic' counterparts...)

> There were other issues too, relating to cruft. Since shifting to MacOS
> X 10.2.x and PB 2.0.x, I've been unable to figure out how to run AO
> within the debugger. Talk about annoying.

Yeah, I was unable to link the thing for about 6 months, til I found -force_flat_namespace...

Maybe it's the older devtools, but I find the debugger almost useless within PB; it always seems to get hung up or confused or something. I always run it from the command line. FWIW.

> My overall assumption is that if AM can be steered right, then it
> should be possible to adapt things like AO's OpenGL code. And it
> wouldn't be the first time I had shoe-horned things together :)

Well, I hope you're right... and hearing you state it as if _you_ plan to do that sort of thing (and not just leave it to unspecified others) gives me cause for hope. :) It probably is a Good Thing to figure out a framework and then fit the pieces together, rather than vice versa as would need to be done with A1... but OTOH I think examining A1 could inform the design of that framework. Learning from history and all that. But I'm sure that's all part of the plan.

BTW, thanks for the bug summary. I find it a little curious that you seem to feel that your looking for and fixing these bugs now, as opposed to earlier, is somehow A1's fault... but I can see where you're coming from nonetheless.

I guess the real problem with A1 is that it has no leader. I mean, I guess we have to consider Loren the primary developer (at least over the past year-and-a-third or so; can't really comment on earlier times - guess I'm a relative newcomer compared to you guys), given the amount of work he's put into it and continues to put into it... and as the primary developer I guess he is sort of looked to as a leader. But I don't get the impression he considers himself to be "in charge" of the project (nor does anyone else, AFAICT).

I think it's a Good Thing that AM has someone In Charge (and doubly so that that someone happens to be as thoughtful as Br'fin).

Well anyway, I think we've talked this one to death... what's next for us to talk about wrt AM? Proposals for decomposition into systems and subsystems etc.? Or are you not quite at that point yet?

Woody |