alephmodular-devel Mailing List for AlephModular (Page 10)
Status: Pre-Alpha
Brought to you by: brefin
From: Br'fin <br...@ma...> - 2003-03-08 19:47:17
|
Ok, I think the entirety of your description (observers, readonly
bitmaps) is much like one of my overblown efforts. However, I think
your base idea is workable.
As I was reading your message I'd been thinking 'what if pixel doubling
was an attribute of a bitmap?' But, as I thought that through, that
would require checks throughout the buffer code for this attribute,
perhaps building concealed clipping buffers or something. And no, no,
that was too much in a different direction.
However, if there was a CDoublingBuffer, itself a descendant of
CClippingBuffer, then it could accept an input area that's divisible by
2, clip itself even more heavily, and when it unlocks its pixels, it
could inflate them.
This also allows the buffer to position the original clipping optimally
so that inflating doesn't overwrite the wrong pixels.
... Hmm, currently to get a clipping buffer, you request one from the
current buffer... guess you could add an attribute to that call.
CClippedBuffer::Ptr (auto_ptr) CBuffer::get_clipped_buffer(dimensions, attr = CBuffer::PixelStandard)
But attr could also be CBuffer::PixelQuadruple which would switch it to
be allocating a doubling buffer instead of a normal clipped buffer.
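For what it's worth, here is a rough sketch of how that attribute could dispatch. None of this is existing AM code; the Rect type, the constructors, the Ptr typedef, and a PixelScale enum nested in CBuffer are all assumptions for illustration.

// Hypothetical sketch only: dispatch on the requested pixel-scale attribute.
std::auto_ptr<CClippedBuffer> CBuffer::get_clipped_buffer(
    const Rect& dimensions, CBuffer::PixelScale attr /* = PixelStandard */)
{
    if (attr == PixelQuadruple)
    {
        // Doubling buffer: clips to a half-size region and inflates its
        // pixels back out to 'dimensions' when they are unlocked.
        return std::auto_ptr<CClippedBuffer>(
            new CDoublingBuffer(*this, dimensions));
    }
    return std::auto_ptr<CClippedBuffer>(
        new CClippedBuffer(*this, dimensions));
}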
-Jeremy Parsons
On Saturday, March 8, 2003, at 12:41 PM, Woody Zenfell, III wrote:
> [pixel-doubling (actually quadrupling I think but let's overlook that)]
>
> What about making a CDoublingBuffer that's sort of like your
> CClippingBuffer in that it "modifies" an existing buffer?
>
> i.e. a CDoublingBuffer would know about an IBuffer, call it mBuffer,
> that it's supposed to double; its GetHeight() would be { return
> mBuffer.GetHeight() * 2; } and so on.
>
> Of course as you correctly point out there needs to be some "hook"
> that lets the DoublingBuffer actually perform the doubling
> calculations - or does there?
>
> Maybe CDoublingBuffer would cache the doubled pixels in its
> mCacheBitmap member (pardon me if I'm getting the terms wrong here)
> and would maintain a 'dirty' flag that would tell it (when someone
> tries to read from it) whether it needs to re-render the doubled
> pixels to its cache or can just provide data straight from the cache.
> Rerendering would of course clear "dirty".
>
> Now the sticky bit is setting "dirty". For that you probably want to
> use some kind of "Observer" pattern so that a buffer can announce when
> its bitmap is modified, and interested parties can register with it to
> be given notification. So the CDoublingBuffer would be an observer of
> its mBuffer base-buffer; the latter would announce "got changed" when
> it gets Unlocked, say (right? IIRC you may only render into a
> buffer's bitmap if you lock it first? or do you actually lock (part
> of) the bitmap itself? in that case the buffer is probably an
> observer of its bitmap, and the bitmap notifies when it's Unlocked();
> the buffer can then notify that its bitmap changed, which lets the
> observing CDoublingBuffer set its dirty flag).
>
> Of course hmm another issue there is that the CDoublingBuffer's
> IBitmap probably doesn't want to have other people rendering in it?
> Because the changes will be discarded the next time the buffer is
> rerendered? Is there a notion of "read/write" vs. "read-only"
> IBuffers? (e.g. IReadOnlyBuffer is an ancestor class of
> IReadWriteBuffer? CDoublingBuffer is a descendant of IReadOnlyBuffer
> but not of IReadWriteBuffer?)
>
> Oh of course all the caching stuff can be skipped if CDoublingBuffer
> is only used like theDoublingBuffer->BltInto(theWorldPixelsBuffer,
> ...) and its contents typically change as often as that routine is
> called (both of which may well be the case). Which would be nice of
> course because then you wouldn't have to keep storage around for the
> doubled pixels... but then it's probably doubly important to
> distinguish in the interface between read-only and read/write IBuffers.
>
> Note that a temporary Bitmap could be generated if a caller wants to
> lock down the pixels (for read-only access of course, e.g. writing out
> a screenshot or something). But there's probably no need to create
> this intermediate if we're doing a blt to another existing Buffer.
>
> Observe that RLE Bitmaps probably want to be read-only as well, and in
> fact may have a lot of the same properties (can copy directly from one
> to somewhere else, but locking for direct pixel reading requires the
> creation of a temporary non-RLE'd Bitmap). Actually you could have
> read/write RLE Bitmaps if you really wanted to of course, when the
> temporary non-RLE'd Bitmap is Unlocked() you could perform the RLE on
> it to produce an updated RLE Bitmap. Or, disallow operating on RLE
> Bitmaps directly but offer routines to explicitly convert them to/from
> "full" Bitmaps. Or something. If you want. Heh heh.
>
> Does any of this make any sense? Maybe I have an incomplete
> understanding of your scheme and how it fits into AM...
>
> Woody
>
>
|
|
From: Br'fin <br...@ma...> - 2003-03-08 19:29:16
|
Personally, I'd rather not drop a feature of the existing code offhand. Though 256 colors does have the stronger case for being dropped at the moment, between not looking right on the OSX machines it runs on (I haven't strongly looked into addressing the color issues yet), and just plain not being able to kick into 256 color mode the way M2 does on some OSX boxes.

I honestly don't know all the machines that will try to run it, and right now I'm sometimes dipping below 30 fps at high-res & 100%. (I admit the code is also tied up with more error checking too, but I'm not using Marathon Infinity files either.) Certainly it makes certain things smoother if you only choose from available screen sizes and then most of the in-game toggling that you have comes from turning the HUD on and off :)

-Jeremy Parsons

On Saturday, March 8, 2003, at 01:26 PM, Kevin Walker wrote:

> On Saturday, March 8, 2003, at 02:11 PM, Mark Levin wrote:
>> IMHO it would be reasonable to drop low-resolution mode entirely by
>> now. Is anyone out there still using a computer slow enough to
>> benefit from it? The same could be said for 256-color mode.
>>
>> --Mark
>
> While I understand where you're coming from (if you're running Mac OS
> X, you should be able to use hi-res), and while I certainly don't NEED
> it, low-res can be nice if you want some easy framerate boosting. And
> if AM allows for more than 30 fps, then this could be nice for the
> people who want more fluidity than sharpness.
> Don't beat a low-res dead horse if it isn't working well - I wantz me
> AM! -^= (this is supposed to be a thumbs-up)
>
> I would agree with dropping 256-colors, though.
>
> -KWalker
|
|
From: Kevin W. <lu...@ea...> - 2003-03-08 18:47:23
|
On Saturday, March 8, 2003, at 02:11 PM, Mark Levin wrote:

> IMHO it would be reasonable to drop low-resolution mode entirely by
> now. Is anyone out there still using a computer slow enough to benefit
> from it? The same could be said for 256-color mode.
>
> --Mark

While I understand where you're coming from (if you're running Mac OS X, you should be able to use hi-res), and while I certainly don't NEED it, low-res can be nice if you want some easy framerate boosting. And if AM allows for more than 30 fps, then this could be nice for the people who want more fluidity than sharpness.

Don't beat a low-res dead horse if it isn't working well - I wantz me AM! -^= (this is supposed to be a thumbs-up)

I would agree with dropping 256-colors, though.

-KWalker
|
|
From: Mark L. <hav...@ma...> - 2003-03-08 18:15:35
|
On Saturday, March 8, 2003, at 10:33 AM, Br'fin wrote:

> Well, I'm slowly getting everything that needs some aspect of the
> display abstraction to use it. With the only really awkward stumbling
> blocks so far being handling of low-resolution mode and not centering
> the less than 640x480 displays.

IMHO it would be reasonable to drop low-resolution mode entirely by now. Is anyone out there still using a computer slow enough to benefit from it? The same could be said for 256-color mode.

--Mark

"Take your best shot, Flatlander Woman!"
Random acts of programming: http://homepage.mac.com/haveblue
|
|
From: Woody Z. I. <woo...@sb...> - 2003-03-08 17:47:16
|
On Saturday, March 8, 2003, at 11:41 AM, Woody Zenfell, III wrote:

> [pixel-doubling (actually quadrupling I think but let's overlook that)]

I should point out that I was not terribly careful in my distinctions between Bitmaps and Buffers. So interpret suggestions as applying to whichever you deem more appropriate - don't figure that I carefully analyzed all the interactions and decided that something really belonged in a Buffer rather than a Bitmap, or anything like that.

Woody
|
|
From: Woody Z. I. <woo...@sb...> - 2003-03-08 17:41:27
|
[pixel-doubling (actually quadrupling I think but let's overlook that)]
What about making a CDoublingBuffer that's sort of like your
CClippingBuffer in that it "modifies" an existing buffer?
i.e. a CDoublingBuffer would know about an IBuffer, call it mBuffer,
that it's supposed to double; its GetHeight() would be { return
mBuffer.GetHeight() * 2; } and so on.
Of course as you correctly point out there needs to be some "hook" that
lets the DoublingBuffer actually perform the doubling calculations - or
does there?
Maybe CDoublingBuffer would cache the doubled pixels in its mCacheBitmap
member (pardon me if I'm getting the terms wrong here) and would
maintain a 'dirty' flag that would tell it (when someone tries to read
from it) whether it needs to re-render the doubled pixels to its cache
or can just provide data straight from the cache. Rerendering would of
course clear "dirty".
Now the sticky bit is setting "dirty". For that you probably want to
use some kind of "Observer" pattern so that a buffer can announce when
its bitmap is modified, and interested parties can register with it to
be given notification. So the CDoublingBuffer would be an observer of
its mBuffer base-buffer; the latter would announce "got changed" when it
gets Unlocked, say (right? IIRC you may only render into a buffer's
bitmap if you lock it first? or do you actually lock (part of) the
bitmap itself? in that case the buffer is probably an observer of its
bitmap, and the bitmap notifies when it's Unlocked(); the buffer can
then notify that its bitmap changed, which lets the observing
CDoublingBuffer set its dirty flag).
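A bare-bones sketch of that dirty-flag observer arrangement; all of the names, the registration hook, and the cache helper below are assumptions for illustration, not existing AM code:

class BufferObserver
{
public:
    virtual ~BufferObserver() {}
    virtual void buffer_changed() = 0;  // called when the observed buffer's bitmap changes
};

class CDoublingBuffer : public BufferObserver
{
public:
    explicit CDoublingBuffer(IBuffer& base) : mBuffer(base), mDirty(true)
        { mBuffer.add_observer(this); }  // assumed registration hook

    // The base buffer announces "got changed" (e.g. on Unlock); just mark dirty.
    void buffer_changed() { mDirty = true; }

    // Readers get the cached doubled pixels, re-rendered only when stale.
    const CBitmap& doubled_pixels()
    {
        if (mDirty)
        {
            rerender_doubled_pixels_into(mCacheBitmap);
            mDirty = false;
        }
        return mCacheBitmap;
    }

private:
    IBuffer& mBuffer;
    CBitmap mCacheBitmap;
    bool mDirty;

    void rerender_doubled_pixels_into(CBitmap& cache);  // assumed helper
};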
Of course hmm another issue there is that the CDoublingBuffer's IBitmap
probably doesn't want to have other people rendering in it? Because the
changes will be discarded the next time the buffer is rerendered? Is
there a notion of "read/write" vs. "read-only" IBuffers? (e.g.
IReadOnlyBuffer is an ancestor class of IReadWriteBuffer?
CDoublingBuffer is a descendant of IReadOnlyBuffer but not of
IReadWriteBuffer?)
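And a similarly hypothetical shape for the read-only/read-write split; nothing here exists in AM, it just spells out the ancestry question above:

class IReadWriteBuffer;  // forward declaration

class IReadOnlyBuffer
{
public:
    virtual ~IReadOnlyBuffer() {}
    virtual int GetWidth() const = 0;
    virtual int GetHeight() const = 0;
    virtual void BltInto(IReadWriteBuffer& dest) const = 0;
};

class IReadWriteBuffer : public IReadOnlyBuffer
{
public:
    // Whatever write access ends up looking like; a lock pair, say.
    virtual void lock_pixels() = 0;
    virtual void unlock_pixels() = 0;
};

// CDoublingBuffer would derive from IReadOnlyBuffer but not from
// IReadWriteBuffer, so nobody can render into pixels that the next
// re-doubling would simply discard.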
Oh of course all the caching stuff can be skipped if CDoublingBuffer is
only used like theDoublingBuffer->BltInto(theWorldPixelsBuffer, ...) and
its contents typically change as often as that routine is called (both
of which may well be the case). Which would be nice of course because
then you wouldn't have to keep storage around for the doubled pixels...
but then it's probably doubly important to distinguish in the interface
between read-only and read/write IBuffers.
Note that a temporary Bitmap could be generated if a caller wants to
lock down the pixels (for read-only access of course, e.g. writing out a
screenshot or something). But there's probably no need to create this
intermediate if we're doing a blt to another existing Buffer.
Observe that RLE Bitmaps probably want to be read-only as well, and in
fact may have a lot of the same properties (can copy directly from one
to somewhere else, but locking for direct pixel reading requires the
creation of a temporary non-RLE'd Bitmap). Actually you could have
read/write RLE Bitmaps if you really wanted to of course, when the
temporary non-RLE'd Bitmap is Unlocked() you could perform the RLE on it
to produce an updated RLE Bitmap. Or, disallow operating on RLE Bitmaps
directly but offer routines to explicitly convert them to/from "full"
Bitmaps. Or something. If you want. Heh heh.
Does any of this make any sense? Maybe I have an incomplete
understanding of your scheme and how it fits into AM...
Woody
|
|
From: Br'fin <br...@ma...> - 2003-03-08 16:33:05
|
Well, I'm slowly getting everything that needs some aspect of the display abstraction to use it. With the only really awkward stumbling blocks so far being handling of low-resolution mode and not centering the less than 640x480 displays.

But I'm going to need to look into the stuff that sets up the current display and how to get info about the current buffer and stuff. (Especially when the buffer is just assumed from the current drawing context.) I still haven't dealt with the menus yet. But aside from the viewscreen in small modes, all of the in-game elements are showing up in their proper places.

-Jeremy Parsons
|
|
From: Br'fin <br...@ma...> - 2003-03-07 20:15:37
|
I've managed to hash out the buffer hierarchy and set up a
CBuffer_Carbon that does the glue between the hierarchy and its
embedded graphics port (in the current case,
GetWindowPort(screen_window)).
The CClippingBuffer appears to be working wonderfully so far. And
visible speed is much, much better than 0.3.
That said, there's still more work to be done. I haven't addressed any
portion of the drawing context, and other display elements are often in
the wrong location (menus, your health interface). Mousing is
completely off (because the concept of setting the origin for the
window is now different and only rendering is aware of CDisplay). And
LowRes mode shows very, very tiny displays.
Does anyone have any suggestions on which system should own pixel
doubling? Is it a method I call on a buffer or on a bitmap? And what
would be an appropriate way to set up to call it?
do_rendering(CBitmap)
if(view_options->low_res)
{
PixelDouble(CBitmap, initial_height, initial_width) ?
}
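For concreteness, here is one possible shape of such a PixelDouble pass, assuming the 8-bit bitmap_definition layout with precalculated row_addresses described in the design document. The half-size image sitting in the top-left of the full-size bitmap, and the bottom-up/right-to-left order (so source pixels aren't clobbered before they are read), are assumptions.

// Hypothetical sketch: inflate the top-left (initial_width x initial_height)
// pixels of an 8-bit bitmap in place out to double size.
void PixelDouble(struct bitmap_definition *bitmap,
    int16 initial_height, int16 initial_width)
{
    for (int16 row= initial_height - 1; row >= 0; --row)
    {
        pixel8 *src= bitmap->row_addresses[row];
        pixel8 *dst0= bitmap->row_addresses[2*row];
        pixel8 *dst1= bitmap->row_addresses[2*row + 1];

        for (int16 col= initial_width - 1; col >= 0; --col)
        {
            pixel8 p= src[col];
            dst0[2*col]= dst0[2*col + 1]= p;
            dst1[2*col]= dst1[2*col + 1]= p;
        }
    }
}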
-Jeremy Parsons
|
|
From: Br'fin <br...@ma...> - 2003-03-06 06:48:30
|
I've created a branch off of the development tree for the development of the display abstraction. Why? Well, it's a big chunk of code in its effects and I want to get individual parts right. As such I've created the branch devel-display-abstraction.

To see this branch, perform in the alephmodular directory:
cvs update -r devel-display-abstraction

To go back to the main development trunk:
cvs update -A

The main features included with this branch so far are the actual design document that I was sending to the list and the first working-in of the CBitmap class, its kin, and its decoders. I'm not as happy as I think I could be with CBitmap, but I managed to deal with the row_addresses area so that only CChunkedShapesBitmap needs care taken when allocating and disposing thereof. (CChunkedShapesBitmap is the CBitmap variant specifically for dealing with the way a single collection (high level shapes, low level shapes, and bitmaps) is allocated within a single chunk of memory for each collection.)

-Jeremy Parsons
|
|
From: Br'fin <br...@ma...> - 2003-03-03 21:52:36
|
Some of you might have noticed the stuff in my design document about
working with scoped resources. For instance:
> Locking pixels. (std::auto_ptr<CBuffer::PixelLock> get_pixel_lock)
> When dealing with buffers on Macintosh, one must lock down pixels
> before doing certain operations. Such as rendering onto the bitmap or
> using copybits. With an auto_ptr based mechanism, the lock is
> automatically freed once out of scope. A corollary of this is that
> directly requesting a buffer's bitmap should make sure the
> corresponding bitmap has locked the buffer and has control of the
> lock. Trying to double lock or doubly unlock a buffer should assert.
I thought I'd show off the sort of thing I'm setting up to actually do
this.
A common trait that all these locks share is a scope, mostly along the
lines of 'lock this resource until the scope is done and then call this
other private method to revert the lock.' Since my design just for the
CDisplay has this in 2-4 different spots so far, I worked up a template
class to handle this.
It is defined with a class and is instantiated with a specific instance
of the class and the address of a member function. The member function
can be any member function with no arguments and no return value
( void (T::*memberfunc)() ).
#define CALL_MEMBER_FN(object,ptrToMember) ((object).*(ptrToMember))

template<class T>
class ScopedLock
{
    typedef void (T::*TMemberFn)();

    T& origin;
    TMemberFn release_method;

public:
    ScopedLock(T& _origin, TMemberFn _method)
        : origin(_origin), release_method(_method) {}
    ~ScopedLock() { CALL_MEMBER_FN(origin, release_method)(); }
};
As an example of how to use it, here is a very incomplete version of
CBuffer. The typedefs break down the complexity of the definition.
Though it's not perfect, hence alloc_pixel_lock, which both sets up the
PixelLock and passes in the actual terminating function that really
makes it a PixelLock.
/* Abstract base class for interfacing with the platform specific way of
   handling a drawing surface. For instance a CGrafPort under MacOS. */
class CBuffer
{
public:
    typedef ScopedLock<CBuffer> _CBufferLock;
    typedef std::auto_ptr<_CBufferLock> PixelLock;

private:
    bool pixels_locked;

    virtual void _lock_pixels() = 0;
    virtual void _unlock_pixels() = 0;

    void unlock_pixels()
        { assert(pixels_locked); _unlock_pixels(); pixels_locked= false; }
    PixelLock alloc_pixel_lock()
        { return PixelLock(new _CBufferLock(*this, &CBuffer::unlock_pixels)); }

protected:
    CBuffer(): pixels_locked(false) {}

public:
    PixelLock get_pixel_lock();
    virtual ~CBuffer() {}
};

...

CBuffer::PixelLock CBuffer::get_pixel_lock()
{
    assert(!pixels_locked);
    _lock_pixels();
    pixels_locked= true;
    return alloc_pixel_lock();
}
If one class needed multiple locks of some kind, then it could use one
defined form of ScopedLock, and simply set up typedefs and appropriate
alloc_funcs to name versions instantiated with different
release_methods.
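For completeness, a caller would simply hold the returned PixelLock for the scope in which it touches pixels; something like the following, where buffer and render_into are just stand-in names:

{
    // Lock is held for the duration of this block only.
    CBuffer::PixelLock pixel_lock= buffer.get_pixel_lock();

    render_into(buffer);  // stand-in for whatever touches the pixels

    // Leaving the block destroys the auto_ptr'd ScopedLock, which calls
    // the private CBuffer::unlock_pixels() for us.
}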
-Jeremy Parsons
|
|
From: Jamie W. <jam...@bl...> - 2003-03-02 22:53:41
|
I'll second that! I have taken an active interest in what's going on, but most of what is said is way over my head. And until I do understand what the hell is being said, I will keep my mouth shut. I will download the source, compile it and have a quick look to see what's going on from time to time, but that's really as far as it goes atm.

Keep the random discussions coming as far as I'm concerned btw, I think that's very cool. Like you say, errors seem to be picked up by people who (look like they) know what they are talking about, and this is a good thing! :-)

So, indeed, keep up the good work!

Jamie

On Sunday, Mar 2, 2003, at 13:21 Europe/London, Timothy Collett wrote:

> Heh, I would love to help fill that void; unfortunately, at the moment
> most of what you're doing is *way* over my head, and I don't exactly
> feel qualified to comment on it. It'll be a while, I suspect, before
> you're in areas I know more about (eg, AI), or I actually learn more
> about the areas you're doing. But I'm always reading your stuff, so
> don't feel as if it totally falls on deaf ears--just ignorant ones ;-)
> Also, I think that reading your posts is pretty educational for me,
> and as I read them, I feel that I understand more about both AM and
> how to use C++. So I'm rooting for you, even if you don't hear from
> me! I think the whole concept is a fantastic one, making a program as
> modular as possible. It's the most interesting programming project
> I've heard of, and some of its pieces may even be useful to me in a
> project I'm working on, sometime in the distant future.
>
> So keep up the great work!
|
|
From: Br'fin <br...@ma...> - 2003-03-02 17:18:02
|
On Sunday, March 2, 2003, at 10:46 AM, Alexander Strange wrote:
>
> On Saturday, Mar 1, 2003, at 14:27 US/Eastern, Br'fin wrote:
>
>> As part of dealing with abstracting the display stuff I found myself
>> wanting to back off a little and implement a BitDepth enum. Or
>> similar. So I've been dabbling with it, and in wanting some class
>> related foo I ended up developing the following:
>>
>> class BitDepth
>> <snip>
>
> Eww. Why not:
> typedef enum Depth {depth_8bit = 8, depth_16bit = 16, depth_32bit =
> 32} Depth;
>
The code that was actually checked in had
enum BitDepth {
_8bit= 8,
_16bit= 16,
_32bit= 32
};
And a couple serialization operators>>/<<
It was also pointed out that doing typedef enum {...} EnumName; could be
cleaned up a bit as just enum EnumName {...};
Using the class for a period did prove useful during the time when I
was making sure that the rest of the code was properly using the enum
instead of the raw numbers.
-Jeremy Parsons
|
|
From: Alexander S. <ast...@it...> - 2003-03-02 15:48:26
|
On Sunday, Mar 2, 2003, at 03:58 US/Eastern, Dietrich Epp wrote:
> class _Exception { ... };
> #define Exception(parms...) _Exception (__FILE__, __LINE__,
> __PRETTY_FUNCTION__, parms)
>
> or some variant. The things that compilers other than GCC would choke
> on could be cut out (methinks __PRETTY_FUNCTION__ is gcc) with #ifdefs.
For that, you would probably use __FUNCTION__, although the mangling
would make it less readable.
|
|
From: Alexander S. <ast...@it...> - 2003-03-02 15:46:35
|
On Saturday, Mar 1, 2003, at 14:27 US/Eastern, Br'fin wrote:
> As part of dealing with abstracting the display stuff I found myself
> wanting to back off a little and implement a BitDepth enum. Or
> similar. So I've been dabbling with it, and in wanting some class
> related foo I ended up developing the following:
>
> class BitDepth
> <snip>
Eww. Why not:
typedef enum Depth {depth_8bit = 8, depth_16bit = 16, depth_32bit = 32}
Depth;
|
|
From: Timothy C. <tco...@ha...> - 2003-03-02 13:21:43
|
> I may not be happy with the speed that AlephModular is being
> developed, or the seeming void that I'm tossing points of discussion
> out into.

Heh, I would love to help fill that void; unfortunately, at the moment most of what you're doing is *way* over my head, and I don't exactly feel qualified to comment on it. It'll be a while, I suspect, before you're in areas I know more about (eg, AI), or I actually learn more about the areas you're doing. But I'm always reading your stuff, so don't feel as if it totally falls on deaf ears--just ignorant ones ;-)

Also, I think that reading your posts is pretty educational for me, and as I read them, I feel that I understand more about both AM and how to use C++. So I'm rooting for you, even if you don't hear from me! I think the whole concept is a fantastic one, making a program as modular as possible. It's the most interesting programming project I've heard of, and some of its pieces may even be useful to me in a project I'm working on, sometime in the distant future.

So keep up the great work!

Timothy Collett

What good are dreams if they stay in your head?
--Stephen Kennedy, creator of Project Majestic Mix
|
|
From: Dietrich E. <die...@zd...> - 2003-03-02 08:58:50
|
On Saturday, Mar 1, 2003, at 23:59 US/Pacific, Br'fin wrote:
[...]
> Exceptions I'm still mulling over in my head. I'm used to Java's
> exception mechanism, and still need to learn C++'s approach. The first
> thing you're apt to see on that front is a try/catch block in main and
> a growing use of exceptions first thrown from fatal csaslert dialogs
> and asserts and in place of halt().
[...]
Yah... the hardest part, adding error checking. You could break it
down into parts, and add exception handling to each part individually,
marking the bits that are error-proof (well, nothing is error-proof,
but things like the really low-level rasterizing code can essentially
be guaranteed not to cause an exception).
What I have found helpful in the past is to make a macro for creating
exceptions, like
class _Exception { ... };
#define Exception(parms...) _Exception (__FILE__, __LINE__,
__PRETTY_FUNCTION__, parms)
or some variant. The things that compilers other than GCC would choke
on could be cut out (methinks __PRETTY_FUNCTION__ is gcc) with #ifdefs.
|
|
From: Br'fin <br...@ma...> - 2003-03-02 07:59:00
|
Heh, I can't say as I blame you. Though, for better or for worse,
AlephModular is being used as a learning experience for me. STL was
something that was out there but not exactly trustable to be fully
implemented on the compilers I was using back then. Since then I've
been out of the loop with a little C and mostly Perl. :)
Just about the only thing tempering this is my urge to do things right
with a recognition that not all of what I'm going to play with is going
to be right from the starting line. This is also why I'm trying to be
really transparent with what I'm developing. We are catching mistakes
before I actually commit them :)
I may not be happy with the speed that AlephModular is being developed,
or the seeming void that I'm tossing points of discussion out into. But
I do generally like the things that I have been committing in. This
particular case, I agree, was overblown, but I really like how the file
abstraction shaped up.
Exceptions I'm still mulling over in my head. I'm used to Java's
exception mechanism, and still need to learn C++'s approach. The first
thing you're apt to see on that front is a try/catch block in main and
a growing use of exceptions first thrown from fatal csalert dialogs
and asserts and in place of halt().
-Jeremy Parsons
On Sunday, March 2, 2003, at 01:51 AM, Dietrich Epp wrote:
> <rant>
>
> Hate to break it to you, but that is the most abuse I have ever seen the
> C++ language take. In Ada we would write:
>
> type Bit_Depth is (Depth_8, Depth_16, Depth_32);
>
> or something along those lines. In C, it would be:
>
> typedef enum _bit_depth { depth_8, depth_16, depth_32 } bit_depth;
>
> If C++ is such a superior language, why does it take so damn much
> code? The above two examples provide the same functionality (although
> only the Ada version has the bounds checking). THE SAME
> FUNCTIONALITY. I could use less code programming it in assembly. Why
> the hell do you need bounds checking anyway? private vs. public? A
> class? Damn, I would have just used existing functionality. It is no
> wonder that the Marathon engine doesn't see any significant changes
> when people waste so much effort on things that really only need one
> line of code.
>
> I'm not really bashing C++, it's just that it has its strong points
> and its weak points like most languages (Intercal a notable
> exception). C++ has exceptions - these are good! Use them! That way
> you don't have to manually check for errors -- essentially, instead of
> doing an operation and asking "Was there an error?", if there is an
> error it takes care of itself. Classes and namespaces: another good
> thing. Instead of calling BKSparseArray_GetValues (...) we can just
> do something like myArray.GetValues (...), or even myArray[...].
> Making classes for everything you can think of? Well, do that in Ada
> because Ada was designed to do that, and you can imitate it with C
> using enum and struct, but class is a beast of a different nature and
> it should be treated that way. C++ has a dichotomy of classes and
> not-classes, you have to deal with this or use a different language
> like Smalltalk or Python.
>
> I bring this up because I hate to see projects make the mistakes that
> are so damn ubiquitous these days... some things should be learnt
> through experience, but large projects are not places for such learning
> of design methods.
>
> -----
>
> "The main problem for the C++ community today is to use Standard C++
> in the way it was intended rather than as a glorified C or a poor
> man's Smalltalk."
> -- Bjarne Stroustrup (inventor of C++), "Getting From The Past
> To The Future",
> p. 23 C++ Report, SIGS Nov/Dec 1999 Vol 1 No. 10
>
> C++: The power of assembly language with the ease of use of assembly
> language.
>
> "I invented the term Object-Oriented, and I can tell you I did not
> have C++ in mind."
> -- Alan Kay
>
> "C++ : an octopus made by nailing extra legs onto a dog"
>
> </rant>
|
|
From: Br'fin <br...@ma...> - 2003-03-02 07:12:55
|
In the end, the verdict on comp.lang.c++ was generally "overblown", and I settled back to simply using an enum. However, the temporary period with the class was useful, as it meant for a time I was in complete control of what could and couldn't be checked against the enum. The compiler helped point out the places in the code that really, really needed to be updated. And most things just worked after backing out to a simple enum.

-Jeremy Parsons

On Saturday, March 1, 2003, at 02:27 PM, Br'fin wrote:

> As part of dealing with abstracting the display stuff I found myself
> wanting to back off a little and implement a BitDepth enum. Or
> similar. So I've been dabbling with it, and in wanting some class
> related foo I ended up developing the following:
>
> class BitDepth
> [snip]
>
> And I'm trying to figure out if this is appropriate or overblown. If
> it's overblown then I should just fall back to using the enum by its
> lonesome.
>
> -Jeremy Parsons
|
|
From: Dietrich E. <die...@zd...> - 2003-03-02 06:51:44
|
On Saturday, Mar 1, 2003, at 11:27 US/Pacific, Br'fin wrote:
> From: "Br'fin" <br...@ma...>
> Date: Sat Mar 1, 2003 11:27:59 US/Pacific
> To: ale...@li...
> Subject: [Alephmodular-devel] Enum versus class?
>
> As part of dealing with abstracting the display stuff I found myself
> wanting to back off a little and implement a BitDepth enum. Or
> similar. So I've been dabbling with it, and in wanting some class
> related foo I ended up developing the following:
[...]
> And I'm trying to figure out if this is appropriate or overblown. If
> it's overblown then I should just fall back to using the enum by its
> lonesome.
<rant>
Hate to break it to you, but that is the most abuse I have ever seen the
C++ language take. In Ada we would write:
type Bit_Depth is (Depth_8, Depth_16, Depth_32);
or something along those lines. In C, it would be:
typedef enum _bit_depth { depth_8, depth_16, depth_32 } bit_depth;
If C++ is such a superior language, why does it take so damn much code?
The above two examples provide the same functionality (although only
the Ada version has the bounds checking). THE SAME FUNCTIONALITY. I
could use less code programming it in assembly. Why the hell do you
need bounds checking anyway? private vs. public? A class? Damn, I
would have just used existing functionality. It is no wonder that the
Marathon engine doesn't see any significant changes when people waste
so much effort on things that really only need one line of code.
I'm not really bashing C++, it's just that it has its strong points
and its weak points like most languages (Intercal a notable exception).
C++ has exceptions - these are good! Use them! That way you don't
have to manually check for errors -- essentially, instead of doing an
operation and asking "Was there an error?", if there is an error it
takes care of itself. Classes and namespaces: another good thing.
Instead of calling BKSparseArray_GetValues (...) we can just do
something like myArray.GetValues (...), or even myArray[...]. Making
classes for everything you can think of? Well, do that in Ada because
Ada was designed to do that, and you can imitate it with C using enum
and struct, but class is a beast of a different nature and it should be
treated that way. C++ has a dichotomy of classes and not-classes, you
have to deal with this or use a different language like Smalltalk or
Python.
I bring this up because I hate to see projects make the mistakes that
are so damn ubiquitous these days... some things should be learnt
through experience, but large projects are not places for such learning
of design methods.
-----
"The main problem for the C++ community today is to use Standard C++ in
the way it was intended rather than as a glorified C or a poor man's
Smalltalk."
-- Bjarne Stroustrup (inventor of C++), "Getting From The Past
To The Future",
p. 23 C++ Report, SIGS Nov/Dec 1999 Vol 1 No. 10
C++: The power of assembly language with the ease of use of assembly
language.
"I invented the term Object-Oriented, and I can tell you I did not have
C++ in mind."
-- Alan Kay
"C++ : an octopus made by nailing extra legs onto a dog"
</rant>
|
|
From: Br'fin <br...@ma...> - 2003-03-01 19:27:23
|
As part of dealing with abstracting the display stuff I found myself
wanting to back off a little and implement a BitDepth enum. Or similar.
So I've been dabbling with it, and in wanting some class related foo I
ended up developing the following:
class BitDepth
{
public:
    typedef enum {
        _8bit,
        _16bit,
        _32bit
    } Depth;

private:
    Depth depth;

public:
    BitDepth(BitDepth::Depth _depth = _8bit) : depth(_depth) {}
    BitDepth(const BitDepth& _depth) : depth(_depth.depth) {}
    BitDepth(uint16);

    operator Depth() { return depth; }

    bool operator==(const BitDepth::Depth _depth) const
        { return depth == _depth; }
    bool operator==(const BitDepth& _depth) const
        { return this->operator==(_depth.depth); }
    bool operator!=(const BitDepth::Depth _depth) const
        { return depth != _depth; }
    bool operator!=(const BitDepth& _depth) const
        { return this->operator!=(_depth.depth); }

    BitDepth& operator=(const BitDepth& _depth)
        { depth = _depth.depth; return *this; }
    BitDepth& operator=(const BitDepth::Depth _depth)
        { depth = _depth; return *this; }
};

inline BitDepth::BitDepth(uint16 val)
{
    switch(val)
    {
    case 8:
        depth= _8bit;
        break;
    case 16:
        depth= _16bit;
        break;
    case 32:
        depth= _32bit;
        break;
    default:
        assert(0);
    }
}
And I'm trying to figure out if this is appropriate or overblown. If
it's overblown then I should just fall back to using the enum by its
lonesome.
-Jeremy Parsons
|
|
From: Br'fin <br...@ma...> - 2003-02-28 04:53:54
|
This document is only coming together slowly. But here's the full
document as it currently exists, including the bits and pieces I've
been posting recently. I recently added some thoughts on file
arrangement and CDisplay to this version of the document.
-Jeremy Parsons
$Id: ... $
Note: This document is still evolving. But is being used to guide the
process
and is being updated to reflect final code.
Technical Specification: Display Hardware Abstraction
First there was screen.h. But it had some drawbacks and limitations: a
mild Macintosh bias in some of the interfaces (which is probably
stronger in the functions that provide buffers to be drawn upon). But
the primary problems relate to legacy.
Also, screen is responsible for viewport effects as well. We do not wish
to handle those here, only the lower-level display abstractions.
Older computers rarely had a screen bigger than 640x480, and Marathon
itself, while it was geared to also run on slower machines, didn't have
anything in mind for addressing way faster machines or way larger screen
estate. Nor did it ever expect windows to offer their own back-buffer
support.
We need to offer a CDisplay class that performs the following:
Manages either a window or fullscreen as a legitimate display target.
And allows toggling between the two.
Manages corresponding buffers that are passed to code to draw upon
Manages implementation of screen-based effects, including fades and
palettes
Owns the visibility of the mouse
A note on buffers:
Existing code assumes that it will always be dealing with an
appropriately sized buffer. So Low Res mode will be drawing in a teeny
display buffer, and if you're playing 640x480 with a HUD, then the
buffer will be 640x(480-HUD height) And things like the interface
tended to be drawn directly to the screen window.
We are going to need the ability to pass code a larger than expected
buffer with appropriate offsets and limits such that the rendering code
will only use a portion of it.
Current code uses a bitmap_definition structure to abstract
the data. This is used both for screen rendering AND for shape
information, both walls and RLE transparent frames.
There are potentially three distinct kinds of bitmaps. A solid bitmap
(screen, walls) and two forms of RLE bitmap encoding (M1 and M2 shapes)
GUI Support:
Windows or Fullscreen
Selected at initialization, but toggleable.
Windows events
Only occur if window mode is on
Display will need to support
is_display_event (akin to is_dialog_event)
handle_display_event
manages window updates, repositioning
click_callback
Clicks in the window need to be passed back to controlling app
callback can be changed many times. For instance, during gameplay,
you can have a noop_click_callback that does zilch
Sizes
Tries to fit requested screen sizes into available screen space.
mouse/menu hiding/revealing
Game Support:
get_current_buffer
Returns the current back buffer for drawing on
Returns a bitmap_definition that has been prepared for us
set_gamma
set_fade_effect
swap_buffers
duplicate_buffer
ensures that all available buffers match up
Handling screen sizes
Windows
You may not request a window equal to or bigger than the current
resolution
We do no resolution switching in windowed mode.
We do no screen color mode switching in windowed mode
256, Thousands, Millions should operate, but only affect the window,
not the screen
Full screen
Display manager should handle multiple screens
Minimum support, allow screen selection
Bonus support, use multiple screens at once
We most likely need support for multiple rendering passes/partial
rendering for this. (ie, render rectangle x,y,h,w of effective view
rectangle total_x, total_y, total_h, total_w)
Display provides a list of available screen sizes and can answer bit
depth questions for specific sizes
GUI requests a specific size (height/width) and depth
we find the largest available size that fits the request
requested rectangle is offset to be centered in display area
Requests to erase a buffer to a color apply to entire buffer
Displays
CDisplay is a singleton that controls the communication between the
game's shell and the host operating system. You want to know about
available resolutions, pick a screen, or do a fade? Here's the platform
abstraction to speak to.
The details about screen management and resolution switching will be
hashed out later.
Screen effects such as fades are a two part process. An in-game element
does the actual fade state management and animation while CDisplay does
the actual setting of palettes.
Methods for drawing to the screen:
get_working_buffer()
Returns the current back buffer. Most likely this will be some kind of
clipped representation of the true buffer.
flush_working_buffer()
When you're all done drawing and want your handiwork to show up on the
screen, call flush_working_buffer. For instance, under an implementation
that uses double buffering, this would perform an actual buffer swap.
Other implementations may already be using an element's back buffer and
just need to have the screen be refreshed with the updated contents.
This effectively invalidates the buffer from get_working_buffer, so you
should call get_working_buffer again instead of trusting the existing
buffer.
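As a sketch of the intended per-frame call pattern (the CDisplay reference, the buffer's return type, and the draw_frame stand-in are assumptions, not part of this spec):

// Hypothetical usage of the two methods described above.
void render_one_frame(CDisplay& display)
{
    CBuffer* buffer= display.get_working_buffer();

    draw_frame(*buffer);             // stand-in for world/HUD rendering

    display.flush_working_buffer();  // swap, or refresh from the back buffer

    // 'buffer' is now invalid; the next frame must call
    // get_working_buffer() again rather than reusing this pointer.
}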
Buffers
A buffer is a complete description of a surface that one can draw upon.
It is an abstract base class with platform specific implementations.
For instance, under Macintosh, a buffer would wrap around a
GWorldPtr/CGrafPtr/GrafPtr.
Typical usage would be:
Request current buffer from display system
(it is limited as in dimensions, display system knows this)
Lock down the buffer
Get a bitmap from the buffer
Render operations on the bitmap
Release the bitmap
Perform drawing operations on the buffer (access to platform native
elements available?)
Unlock the buffer
Ask display system to swap buffers
A buffer helps support higher level options such as copying portions
and displaying text using OS specific calls.
world_pixels would be a buffer.
Buffer attributes
Height/Width
Bit Depth
Clut (8 bit depth only?)
Here is a set of methods for buffers suggested by how existing Marathon
code uses world_pixels:
Locking pixels. (std::auto_ptr<CBuffer::PixelLock> get_pixel_lock) When
dealing with buffers on Macintosh, one must lock down pixels before
doing certain operations. Such as rendering onto the bitmap or using
copybits. With an auto_ptr based mechanism, the lock is automatically
freed once out of scope. A corollary of this is that directly
requesting a buffer's bitmap should make sure the corresponding bitmap
has locked the buffer and has control of the lock. Trying to double
lock or doubly unlock a buffer should assert.
Instantiating and updating the internal buffer. Both myUpdateGWorld and
myNewGWorld are called, depending on whether or not world_pixels
already exists. Updates include depth, bounds, clut and associated
screen device and flags. This may be protected functionality handled by
the display abstraction. Since we are adding functionality to the buffer
(like by being able to use pre-existing buffers as if we had allocated
them, such as the natural back buffer of a window) these may not have
direct correlations to the outside world anymore.
Accessing pixels. (std::auto_ptr<CBitmap> or facsimile) Gets a bitmap
preconfigured by the buffer. Trying to access the pixels with another
access on the pixels outstanding is a failure and should assert.
Clipping requests (std::auto_ptr<CBuffer::ClipLock>
lock_clipping_region(left/top/right/bottom)) Specifies boundaries for
drawing operations. For instance, during map drawing you may wish to
draw wildly all over the place. But only the details that actually fall
within the clipping area should be displayed. This is apt to be a
protected operation performed for you by a clipping buffer.
Boundary requests: aka GetPortBounds
Buffers will either be allocated for you. (Ask the Display manager for
the buffer to use!) or by knowingly using the platform specific buffer
calls.
There will be cause to have clipped buffers. That is, a buffer that has
its own height and width, but which is actually a window into a larger
buffer. For instance, the raw display buffer could be your screen at
1024x768 and the outermost buffer would need to point to this. But for
game purposes, it only cares about a 640x480 space for the entire
display area. And on top of that it typically only uses 640x320 for the
game world, and the remaining space for the HUD. A clipped buffer
performs two operations for you. It is automatically clipped. And its
origin is shifted to the top/left of the clipping region.
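To make that origin shift concrete, here is a minimal and partial sketch of a clipped buffer's coordinate handling; the Rect type, the accessor names, and the constructor are assumptions rather than anything specified above.

// Illustrative sketch only: coordinate handling of a clipped buffer.
class CClippedBuffer : public CBuffer
{
public:
    CClippedBuffer(CBuffer& root, const Rect& region)
        : root_buffer(root), clip_region(region) {}

    // The clipped buffer reports only its own dimensions...
    int16 get_width() const  { return clip_region.right - clip_region.left; }
    int16 get_height() const { return clip_region.bottom - clip_region.top; }

protected:
    // ...and translates local coordinates into the root buffer's space,
    // so (0,0) for callers is the top-left of the clipping region.
    void local_to_root(int16& x, int16& y) const
    {
        x += clip_region.left;
        y += clip_region.top;
    }

private:
    CBuffer& root_buffer;
    Rect clip_region;
};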
CBuffer hierarchy
CBuffer
Root Class
CPlatformBuffer : public CBuffer
CBuffer class that is the root for platform-specific buffer classes. For
instance, direct calls for clipping would be within this class's
interface.
CBuffer_Carbon would be a descendant of this.
CClippedBuffer : public CBuffer
CBuffer class that does origin translation and clipping effects to act
as a subset of its root buffer.
world_pixels usage:
game_window_macintosh.cpp
used within draw_panels
HUD is drawn in back buffer
copy bits called to send it to the screen
preprocess_map_mac.cpp
world_pixels is used as the offscreen buffer for saved game preview
pictures :/
screen.cpp
world_pixels is setup during initialize_screen
world_pixels provides the pixels for use in render_screen
world_pixels clut is updated to sync with screen in change_screen_clut
world_pixels is used in render_computer_interface
world_pixels is used in render_overhead_map
world_pixels is copied from in update_screen
(and used as source for quadruple_screen!)
screen_drawing.cpp
_set_port_to_gworld encapsulates swapping to world_pixels for gWorld
foo.
Bitmaps
A bitmap, in contrast to a buffer, is a low level object that
explicitly describes the pixels associated with a buffer. It knows very
little, but is a stream of bytes in memory with precalculated
row-addresses for jumping quickly to a scan line.
RLE encoded shapes are also implemented as a bitmap that is owned by a
collection.
world_pixels_structure would be a bitmap.
Here is the current layout of a bitmap.
enum /* bitmap flags */
{
_COLUMN_ORDER_BIT= 0x8000,
_TRANSPARENT_BIT= 0x4000
};
const unsigned int FILE_SIZEOF_bitmap_definition = 30;
struct bitmap_definition
{
int16 width, height; /* in pixels */
int16 bytes_per_row; /* if ==NONE this is a transparent RLE shape */
int16 flags; /* [column_order.1] [unused.15] */
int16 bit_depth; /* should always be ==8 */
int16 unused[8];
pixel8 *row_addresses[1];
//serialization
friend AIStream& operator>>(AIStream&, struct bitmap_definition&);
friend AOStream& operator<<(AOStream&, struct bitmap_definition&);
};
/* ---------- prototypes/TEXTURES.C */
/* assumes pixel data follows bitmap_definition structure immediately */
pixel8 *calculate_bitmap_origin(struct bitmap_definition *bitmap);
/* initialize bytes_per_row, height and row_address[0] before calling */
void precalculate_bitmap_row_addresses(struct bitmap_definition *texture);
void map_bytes(uint8 *buffer, uint8 *table, int32 size);
void remap_bitmap(struct bitmap_definition *bitmap, pixel8 *table);
We could work within this to provide working on a subset of the bitmap.
Such work would create the following needs, and would also only apply to
a non-RLE bitmap.
width is width of subset
height is height of subset
offset_column, offset_row is added for subsets
bytes_per_row is unchanged
flags is unchanged
bit_depth is unchanged
row_addresses
Is unchanged for shapes
for a subset
row_address[0] points to the resultant byte due to offsetting. This
position is base_address+(offset_row*bytes_per_row)+offset_column. Adding
bytes_per_row to this will hit each subsequent row with the handy
offsets already applied.
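Spelled out as code, that subset precalculation could look like the following; the function name and the way the offsets are passed in are assumptions, it simply restates the arithmetic above.

// Hypothetical sketch of precalculating row addresses for a subset view.
// Assumes bitmap->height and bitmap->bytes_per_row already describe the
// subset height and the unchanged full row stride, respectively.
void precalculate_subset_row_addresses(struct bitmap_definition *bitmap,
    pixel8 *base_address, int16 offset_row, int16 offset_column)
{
    // First row: base address shifted down and right by the offsets.
    bitmap->row_addresses[0]=
        base_address + (offset_row * bitmap->bytes_per_row) + offset_column;

    // Each later row is one full bytes_per_row further on, so the column
    // offset stays applied automatically.
    for (int16 row= 1; row < bitmap->height; ++row)
    {
        bitmap->row_addresses[row]=
            bitmap->row_addresses[row - 1] + bitmap->bytes_per_row;
    }
}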
What is interesting to note is that making a class of bitmap_definition
is fairly hard, given requirements that limit our ability to subclass
and derive.
...
Strike that, looks like subclassing can be handled 'nicely'
When loading collections, the first pass of shapes allocates size based
upon sizeof bitmap_definition, and then copies the existing stream into
place. We can intercede in this by doing an 'in place new' (new(void
*) T(val); ) that allocates and constructs the bitmap at the desired
point. Roughly this would look something like
void *source= (raw_collection + OffsetTable[k]);
void *destination= (NewCollection + NewCollLocation);
int32 length= OffsetTable[k+1] - OffsetTable[k];

shape_bitmap_definition *bitmap=
    new (destination) shape_bitmap_definition(source, length);

*(NewOffsetPtr++) = NewCollLocation;
NewCollLocation += AdjustToPointerBoundary(bitmap->sizeof());
Note that it is an in-place new.
The arguments are a spot in memory and a length to process.
The sizeof method performs a calculation to determine the actual length
of the bitmap + its offset pointers + its actual data.
Remember that something that frees up a collection should explicitly
destroy each bitmap (run its destructor; since it was constructed in
place, a plain delete would be wrong) just in case for the future :)
On a similar sort of note, a screen based bitmap can be pre-allocated,
then created in place with one of its elements holding an auto_ptr to
own its own allocation buffer.
So now, the base code can deal with bitmap_definition, and higher level
code can deal with the specifics of a shape_bitmap_definition and a
display(screen?)_bitmap_definition, which is mostly useful for
construction in either case. Mmm, might want to add the control of its
own management to shape_bitmap_definition as well for someone wanting to
create-edit bitmaps. If the memory doesn't need to be freed or is freed
elsewhere, then the auto_ptr can be left not holding anything.
DrawingContext
A DrawingContext would own all of the high-level operations for
operating on a buffer. For instance, using line primitives and
displaying text.
Some systems, such as Macintosh, use a method of drawing commands that
works as follows.
Store the current drawing context
Use the desired graphics port as the context
perform drawing operations
Swap the ports back to where they were before.
Would this be better encapsulated as:
std::auto_ptr<DrawingContext> context= CBuffer.get_drawing_context()
context.draw_line...
context.draw_text
(context automatically freed)
Or
std::auto_ptr<DrawingContextScope> scoped=
DrawingContext.get_drawing_scope(CBuffer)
DrawingContext.draw_line...
DrawingContext.draw_text
(scope automatically freed)
Or something else?
File organization details:
Initial work will proceed in portable_files.h, files_macintosh.cpp,
Support/CFiles.cpp, Support/CFileTypes.h, Support/CFileTypes.cpp, and
CFileDesc_Carbon.h. At a later point in time the files will be
reorganized as follows:
Support/CBitmap.cpp (was textures.cpp)
Support/CBitmap.h (was textures.h)
Graphics/CBuffer.cpp
Graphics/CBuffer.h
Graphics/CDisplay.cpp
Graphics/CDisplay.h
Graphics/Carbon/CBuffer_Carbon.cpp
Graphics/Carbon/CBuffer_Carbon.h
Graphics/Carbon/CDisplay_Carbon.cpp
Graphics/Carbon/CDisplay_Carbon.h
Why is CBitmap in Support? Well, because Shapes themselves aren't quite
Graphics themselves. Graphics itself covers everything needed to
display data to a screen and the interim steps to go from data to
rendering. Conceptually, a server should need *nothing* from the
graphics directory. *BUT*, the shapes file contains timing and
sound data related to animations and firing. The core of the game couldn't
care less about the sounds, but the animation system would give a darn
about that. And the game core would definitely care about 'spawn monster
projectile 10 ticks after it decides to fire.' I admit that right now
there is no separation between game core and animation, but the shapes
file is still required by the game core.
|
|
From: Br'fin <br...@ma...> - 2003-02-26 02:40:29
|
Trying to hash out Buffers. This is better. Though I feel the hierarchy may be off or the clipping buffers might be off. Hrm. -Jeremy Parsons Buffers A buffer is a complete description of a surface that one can draw upon. It is an abstract base class with platform specific implementations. For instance, under Macintosh, a buffer would wrap around a GWorldPtr/CGraftPtr/GrafPtr. Typical usage would be: Request current buffer from display system (it is limited as in dimensions, display system knows this) Lock down the buffer Get a bitmap from the buffer Render operations on the bitmap Release the bitmap Perform drawing operations on the buffer (access to platform native elements available?) Unlock the buffer Ask display system to swap buffers A buffer helps support higher level options such as copying portions and displaying text using OS specific calls. world_pixels would be a buffer. Buffer attributes Height/Width Bit Depth Clut (8 bit depth only?) Here is a set of methods for buffers suggested by how existing Marathon code uses world_pixels: Locking pixels. (std::auto_ptr<CBuffer::PixelLock> get_pixel_lock) When dealing with buffers on Macintosh, one must lock down pixels before doing certain operations. Such as rendering onto the bitmap or using copybits. With an auto_ptr based mechanism, the lock is automatically freed once out of scope. A corrolary of this is that directly requesting a buffer's bitmap should make sure the corresponding bitmap has locked the buffer and has control of the lock. Trying to double lock or doubly unlock a buffer should assert. Instantiating and updating the internal buffer. Both myUpdateGWorld and myNewGWorld are called, depending on whether or not world_pixels already exists. Updates included depth, bounds, clut and associated screen device and flags. This may be protected functionality handled by the display abstraction. Since we are adding funtionality to the buffer (like by being able to use pre-existing buffers as if we had allocated them, such as the natural back buffer of a window) these may not have direct correlations to the outside world anymore. Accessing pixels. (std::auto_ptr<CBitmap> or facsimile) Gets a bitmap preconfigured by the buffer. Trying to access the pixels with another access on the pixels outstanding is a failure and should assert. Clipping requests (std::auto_ptr<CBuffer::ClipLock> lock_clipping_region(left/top/right/bottom)) Specifies boundaries for drawing operations. For instance, during map drawing you may wish to draw wildly all over the place. But only the details that actually fall within the clipping area should be displayed. This is apt to be a protected operation performed for you by a clipping buffer. Boundary requests: aka GetPortBounds Buffers will either be allocated for you. (Ask the Display manager for the buffer to use!) or by knowingly using the platform specific buffer calls. There will be cause to have clipped buffers. That is, a buffer that has its own height and width, but which is actually a window into a larger buffer. For instance, the raw display buffer could be your screen at 1024x768 and the outermost buffer would need to point to this. But for game purposes, it only cares about a 640x480 space for the entire display area. And on top of that it typically only uses 640x320 for the game world, and the remaining space for the HUD. A clipped buffer performs two operations for you. It is automatically clipped. And its origin is shifted to the top/left of the clipping region. 
CBuffer hierarchy CBuffer Root Class CPlatformBuffer : public CBuffer CBuffer class the is root for platform specific buffer classes. For instance, direct calls for clipping would be within this class's interface. CBuffer_Carbon would be a descendant of this. CClippedBuffer : public CBuffer CBuffer class that does origin translation and clipping effects to act as a subset of its root buffer. world_pixels usage: game_window_macintosh.cpp used within draw_panels HUD is drawn in back buffer copy bits called to send it to the screen preprocess_map_mac.cpp world_pixels is used as the offscreen buffer for saved game preview pictures :/ screen.cpp world_pixels is setup during initialize_screen world_pixels provides the pixels for use in render_screen world_pixels clut is updated to sync with screen in change_screen_clut world_pixels is used in render_computer_interface world_pixels is used in render_overhead_map world_pixels is copied from in update_screen (and used as source for quadruple_screen!) screen_drawing.cpp _set_port_to_gworld encapsulates swapping to world_pixels for gWorld foo. |
|
From: Br'fin <br...@ma...> - 2003-02-25 04:45:43
|
On Monday, February 24, 2003, at 06:39 AM, Br'fin wrote:

> On Monday, February 24, 2003, at 02:16 AM, Mark Levin wrote:
>
>>> Ponder... Setting the drawing context should occur separately
>>
>> Maybe this should be changed to a stack-based mechanism (like Quartz)
>> instead of requiring the program itself to cache and restore the
>> previous state.
>
> I haven't seen Quartz's method, but I feel you're right. I just
> haven't figured out who should own the drawing context yet. It does
> seem that some other class to encapsulate drawing options is
> appropriate. (A singleton CDrawingContext with the auto-releasing
> method I'd mentioned?)

Very hazy notes on Drawing Context

-Jeremy Parsons

DrawingContext

A DrawingContext would own all of the high-level operations for operating on a buffer. For instance, using line primitives and displaying text.

Some systems, such as Macintosh, use a method of drawing commands that works as follows:
Store the current drawing context
Use the desired graphics port as the context
Perform drawing operations
Swap the ports back to where they were before.

Would this be better encapsulated as:

std::auto_ptr<DrawingContext> context= CBuffer.get_drawing_context()
context.draw_line...
context.draw_text
(context automatically freed)

Or

std::auto_ptr<DrawingContextScope> scoped= DrawingContext.get_drawing_scope(CBuffer)
DrawingContext.draw_line...
DrawingContext.draw_text
(scope automatically freed)

Or something else?
|
|
From: Woody Z. I. <woo...@sb...> - 2003-02-24 19:48:47
|
On Monday, February 24, 2003, at 01:16 AM, Mark Levin wrote:
> Are you talking about the original MacOS LockPixels() concept (which
> IIRC was meant to protect pixmaps from memory manager housekeeping) or
> some sort of mutex for multithreaded drawing? The former is probably
> obsolete on everything except OS9.
I gather more like the latter. For example, DirectDraw in Windows (and
indeed SDL) requires that a pixel-buffer ("surface") be locked before
it's read from or written to. This is because the video hardware may
have various blits outstanding, some of which may involve the surface...
and so to ensure consistency, access to the buffer must be synchronized
with the blitting hardware.
Note the IDirectDrawSurface->Lock() interface lets you (optionally)
specify a portion of the surface to lock. So in theory blits that
involve other parts of the surface could continue while you do your
operation, etc. (SDL_LockSurface() might let you specify a portion
also - don't remember offhand. Probably so though, since my impression
is that SDL_video most closely resembles DirectDraw.)
Indeed DirectX (DirectSound does this too) has a habit of providing
pointers to raw data (and other relevant descriptions of the raw data)
only in response to Lock() calls, to encourage users to always Lock
before doing direct writing or reading.
This way also I can Lock() my gBackBufferSurface in an odd-numbered
frame and get a pointer into one buffer in a double-buffer structure,
then call Lock() with exactly the same value for gBackBufferSurface in
an even-numbered frame and get a pointer into the other buffer. My code
doesn't have to track the flippings. (Indeed DirectDraw has built-in
support for generalized flipping chains, so moving from double- to
triple-buffering is a matter of merely changing the value "2" to "3" in
the call that sets up the "primary surface"... well assuming I redraw
the whole screen every frame of course.)
And, well, normally the primary surface "owns" all the surfaces in its
flipping chain, so they all get deallocated when the primary surface
does. But, since surfaces implement COM reference-counting, I can do
gBackBuffer->AddRef() so that I can gBackBuffer->Release() later without
worrying about whether the surface was created by the primary surface
(as in a flipping chain) or by me (as when I'm maintaining my own
offscreen buffer for blitting to a window, say).
Whoops, too much information. Sorry. Go read the DirectX docs and/or
SDL docs if you need more detail. :)
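[Editor's sketch] For concreteness, here is what the lock-before-access pattern looks like with SDL's surface API (SDL_LockSurface operates on the whole surface; unlike IDirectDrawSurface::Lock there is no per-rectangle form). The clear_row helper is purely illustrative.

#include <SDL.h>
#include <cstring>

// Clear one row of pixels, locking around the access as SDL requires.
// The pixels pointer is only guaranteed valid between Lock and Unlock.
void clear_row(SDL_Surface* surface, int row)
{
    if (SDL_MUSTLOCK(surface) && SDL_LockSurface(surface) < 0)
        return;  // could not lock; give up rather than touch pixels

    Uint8* row_start = static_cast<Uint8*>(surface->pixels)
                       + row * surface->pitch;
    std::memset(row_start, 0, surface->w * surface->format->BytesPerPixel);

    if (SDL_MUSTLOCK(surface))
        SDL_UnlockSurface(surface);
}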
> Rectangle representation: Do we use OS9-style absolute boundaries
> (left/top/right/bottom) or "relative" rectangles (x/y/width/height)? Is
> the origin at the physical top or bottom of a buffer? (Carbon and
> OpenGL disagree on this last bit, which has caused its share of
> confusion for me too. How do Windows and SDL work?)
Both assume the top-left corner is 0,0 (and that positive X goes
rightward and positive Y goes downward). (I think. :) ) Err, for
Windows there I was thinking DirectDraw. Windows GDI (sort of like the
Toolbox) OTOH likes to put origins in the lower-left (and have positive
Y go upward). I think. Sometimes. Or something. That way makes sense
I guess if you consider the conventions mathematicians etc. use... but
given that windows typically resize in the lower-right-hand corner,
anchoring the origin at the top-left makes more sense to me. Shrug.
(But this resizing convention probably comes from our language's
left-to-right, top-to-bottom reading order. Shrug.)
DirectDraw uses tlrb whereas SDL IIRC uses tlwh.
One could always create a new Rectangle class whose objects can use
either convention (probably by storing only one internally (maybe
determined by what the underlying target API prefers?) but
providing/taking values in either scheme for convenience/compatibility).
Woody
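[Editor's sketch] Following up on the rectangle question, a tiny sketch of what such a dual-convention rectangle class could look like; the names and internal layout are assumptions for illustration only.

struct Rect
{
    int top, left, bottom, right;   // stored as tlbr internally

    static Rect from_tlbr(int t, int l, int b, int r)
    {
        Rect rect;
        rect.top = t; rect.left = l; rect.bottom = b; rect.right = r;
        return rect;
    }

    // Accept the x/y/width/height convention too...
    static Rect from_xywh(int x, int y, int w, int h)
    {
        return from_tlbr(y, x, y + h, x + w);
    }

    // ...and provide it back out for APIs that prefer that scheme.
    int x() const      { return left; }
    int y() const      { return top; }
    int width() const  { return right - left; }
    int height() const { return bottom - top; }
};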
|
|
From: Br'fin <br...@ma...> - 2003-02-24 11:50:08
|
Oh, and a side note to all this. Can anyone think of a way that a CBuffer could also be used to host an OpenGL context? Perhaps changing render_screen from using a CBitmap to using a CBuffer directly (a CBuffer vector for multi-screen display across monitors?), and then render_screen chooses a rendering path based upon whether or not that display is hardware accelerated? I'm not including OpenGL in this pass, but I'd like to accommodate it in the design. (Meanwhile not knowing how to program in OpenGL myself :p)

-Jeremy Parsons
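[Editor's sketch] To make the side note concrete, a tiny sketch of one way render_screen might pick a path per display; is_hardware_accelerated, render_opengl and render_software are purely hypothetical names for illustration.

#include <cstddef>
#include <vector>

class CBuffer
{
public:
    virtual ~CBuffer() {}
    // An OpenGL-backed CBuffer subclass would override this to return true.
    virtual bool is_hardware_accelerated() const { return false; }
};

void render_opengl(CBuffer& display)   { /* hardware-accelerated path */ }
void render_software(CBuffer& display) { /* classic bitmap path */ }

// render_screen taking a vector of buffers, one per monitor, and
// choosing a rendering path per display.
void render_screen(std::vector<CBuffer*>& displays)
{
    for (std::size_t i = 0; i < displays.size(); ++i)
    {
        CBuffer& display = *displays[i];
        if (display.is_hardware_accelerated())
            render_opengl(display);
        else
            render_software(display);
    }
}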
On Monday, February 24, 2003, at 12:40 AM, Br'fin wrote:

> And more brainstorming:
>
> -Jeremy Parsons
>
> Buffers
>
> A buffer is a complete description of a surface that one can draw
> upon. It is an abstract base class with platform-specific
> implementations. For instance, under Macintosh, a buffer would wrap
> around a GWorldPtr/CGrafPtr/GrafPtr.
>
> Typical usage would be:
>   Request the current buffer from the display system
>     (it is limited in dimensions; the display system knows this)
>   Lock down the buffer
>   Get a bitmap from the buffer
>   Render operations on the bitmap
>   Release the bitmap
>   Perform drawing operations on the buffer (access to platform-native
>   elements available?)
>   Unlock the buffer
>   Ask the display system to swap buffers
>
> A buffer helps support higher-level options such as copying portions
> and displaying text using OS-specific calls.
>
> world_pixels would be a buffer.
>
> Here is a set of methods for buffers suggested by how existing
> Marathon code uses world_pixels:
>
> Locking pixels. (std::auto_ptr<CBuffer::PixelLock> get_pixel_lock)
> When dealing with buffers on Macintosh, one must lock down pixels
> before doing certain operations, such as rendering onto the bitmap or
> using CopyBits. With an auto_ptr based mechanism, the lock is
> automatically freed once out of scope. A corollary of this is that
> directly requesting a buffer's bitmap should make sure the
> corresponding bitmap has locked the buffer and has control of the
> lock. Trying to double-lock or double-unlock a buffer should assert.
>
> Instantiating and updating the internal buffer. Both myUpdateGWorld
> and myNewGWorld are called, depending on whether or not world_pixels
> already exists. Updates include depth, bounds, clut, and the
> associated screen device and flags. This may be protected
> functionality handled by the display abstraction. Since we are adding
> functionality to the buffer (like being able to use pre-existing
> buffers as if we had allocated them, such as the natural back buffer
> of a window) these may not have direct correlations to the outside
> world anymore.
>
> Accessing pixels. (std::auto_ptr<CBitmap> or facsimile) Gets a bitmap
> preconfigured by the buffer. Trying to access the pixels with another
> access on the pixels outstanding is a failure and should assert.
>
> Setting as target of drawing operations
> (std::auto_ptr<CBuffer::DrawableLock> get_drawing_lock). Encapsulation
> for MacOS concepts of setting/swapping the current drawing target with
> our buffer, then restoring the old drawing target afterwards. Attempts
> to perform drawing operations without this lock should assert.
> Attempts to get a second drawing lock should work though, on the
> assumption of nested calls drawing to different buffers.
> Ponder... Setting the drawing context should occur separately
>
> Boundary requests: aka GetPortBounds
>
> world_pixels usage:
>
> game_window_macintosh.cpp
>   used within draw_panels
>   HUD is drawn in the back buffer
>   CopyBits is called to send it to the screen
> preprocess_map_mac.cpp
>   world_pixels is used as the offscreen buffer for saved-game preview
>   pictures :/
> screen.cpp
>   world_pixels is set up during initialize_screen
>   world_pixels provides the pixels for use in render_screen
>   world_pixels' clut is updated to sync with the screen in
>   change_screen_clut
>   world_pixels is used in render_computer_interface
>   world_pixels is used in render_overhead_map
>   world_pixels is copied from in update_screen
>   (and used as source for quadruple_screen!)
> screen_drawing.cpp
>   _set_port_to_gworld encapsulates swapping to world_pixels for gWorld
>   foo.
|