[Alephmodular-devel] save; crossplatform issues.
From: Chris B. <zzk...@uq...> - 2003-01-04 12:50:36
> Ultimately the desirability of a large or small save file should be up
> to anyone deriving a new game work from AlephModular. The basic save
> code of AlephModular will be a derivative of the existing save code
> for M2 and Minf. I would be willing to entertain features that would
> make alternate save code pieces and strategies easier to implement.

Given Jeremy's stated goals and direction for AM, I'd have to agree that 20MB is not very reasonable for a save file. We're not targeting top-of-the-line machines here, and few people have 500GB drives. I'd say the norm is around 20GB; a top-of-the-line "off-the-shelf" machine would have maybe 80GB, and the previous generation's top-of-the-line maybe 10-15GB. If we continue to support (or allow support for) OS 9, that brings us to machines with 0.5GB - 2GB.

With that in mind, I don't see any reason the "save module" couldn't be replaced with something a lot more efficient. Marathon maps currently don't have many dynamic elements; I honestly can't see much need for more than a few tens of KB per level (i.e. save the states and positions of platforms, lights, monsters, etc.). There's no need for a diff scheme here - it would probably work, but it's overkill considering that we know a large portion of the data can't change and therefore doesn't need to be saved anyway.

> We have classes we can use for encapsulation. We have the ability to
> define Class Foo and have platform specific subclass FooPlatform.

Indeed. The danger here is that virtual functions can be slower than you'd expect. With care and experience you can write a completely virtual API which performs quite well, and of course many APIs are not performance-critical. It's something that needs to be looked at on a case-by-case basis. My take on it is: where something can be decided at compile time without loss of flexibility, that path is preferable. This generally means one of two approaches:

1. A single .cpp file with #ifdefs surrounding certain key code elements. This is nice where a large portion of the code is reusable and there are only slight differences between the platforms; it saves rewriting/maintaining separate versions of the larger body of code. The downside here is tracking the #ifdefs, i.e. when porting to a new platform you have to find all the bits of code you need to rewrite. I usually solve this by putting in a compiler error/warning directive (as suitable) wherever one of these code blocks detects an unsupported platform.

2. Separate .cpp files, one for each platform. This can be a real pain to maintain if you're not planning carefully. Ideally they share a common public header but each implements it differently internally. The downside is that any change to the header may break the compilation of all the platforms. Generally people only take the care to fix their own platform, without paying much attention to what they've broken on another platform (with good reason - often they can't test their changes, as they don't have said platform available). YMMV.

All three methods (classes, single files, multiple files) have difficulties, especially if the code is evolving. For this reason, I'd like to put forward that (in effect) the entirety of the platform-specific code be moved away from the generic code into a separate API layer. I.e. abstraction of file classes is performed at the file class level, not inside the save/load code itself. The same goes for sound code, network code, etc. This may sound obvious, but it's easy to overlook and could make for an awful mess later on if not approached thoughtfully.

chris
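P.S. To make the "few tens of KB per level" argument concrete, here is a minimal sketch of saving only the dynamic state. The record types (`PlatformState`, `MonsterState`) and their fields are hypothetical, not AlephModular's actual structures; the point is simply that writing a handful of small records per dynamic object stays tiny compared to rewriting the whole map.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical dynamic-state records (illustrative names and fields only).
// Static map geometry is reloaded from the original map file, so it
// never needs to enter the save file at all.
struct PlatformState { int16_t index; int16_t height; uint8_t active; };
struct MonsterState  { int16_t index; int32_t x, y, z; int16_t hp; };

// Append the raw bytes of one trivially-copyable record to the save buffer.
template <typename T>
void append_record(std::vector<uint8_t>& out, const T& rec) {
    const uint8_t* p = reinterpret_cast<const uint8_t*>(&rec);
    out.insert(out.end(), p, p + sizeof rec);
}

// Serialize only the mutable state of the level.
std::vector<uint8_t> save_dynamic_state(const std::vector<PlatformState>& plats,
                                        const std::vector<MonsterState>& mons) {
    std::vector<uint8_t> out;
    for (const auto& p : plats) append_record(out, p);
    for (const auto& m : mons)  append_record(out, m);
    return out;
}
```

Even a level with a few hundred platforms, lights and monsters serializes to a few KB this way - no diff scheme required.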
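P.P.S. The Foo/FooPlatform pattern quoted above, applied at the file-class level as suggested: generic save/load code sees only an abstract interface, and each platform supplies a subclass behind a factory. `SaveFile` and `MemorySaveFile` are illustrative names; a real port would wrap POSIX fds, Win32 HANDLEs, Carbon file specs, etc.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Abstract file interface: this is all the generic save code ever sees.
class SaveFile {
public:
    virtual ~SaveFile() = default;
    virtual bool write(const void* data, std::size_t len) = 0;
    virtual std::size_t size() const = 0;
};

// A stand-in "platform" backend that keeps the bytes in memory.
class MemorySaveFile : public SaveFile {
    std::vector<unsigned char> buf_;
public:
    bool write(const void* data, std::size_t len) override {
        const auto* p = static_cast<const unsigned char*>(data);
        buf_.insert(buf_.end(), p, p + len);
        return true;
    }
    std::size_t size() const override { return buf_.size(); }
};

// Factory: the one place that knows which concrete subclass to build.
// Generic code calls this and never touches a platform #ifdef.
std::unique_ptr<SaveFile> open_save_file() {
    return std::make_unique<MemorySaveFile>();
}
```

The virtual-call overhead lands on whole-buffer writes, not per-byte operations, so this is one of the cases where a completely virtual API performs fine.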
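P.P.P.S. Approach 1 with the error-directive trick, as a sketch. The function and the paths are made up for illustration - the point is the `#error` in the final branch, which makes a new port fail loudly at compile time instead of silently misbehaving.

```cpp
#include <string>

// One shared .cpp with #ifdef islands for the platform-specific parts.
// The paths below are purely illustrative, not AlephModular's layout.
std::string default_save_directory() {
#if defined(_WIN32)
    return "C:\\AlephModular\\Saves";
#elif defined(__APPLE__)
    return "Saves";                   // e.g. relative to the app bundle
#elif defined(__unix__)
    return ".alephmodular/saves";
#else
// Unsupported platform detected: force the porter to look here.
#error "default_save_directory: unsupported platform - add a case here"
#endif
}
```

Grepping for `#error` then gives a porter a complete checklist of the code they must rewrite.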