Did you ever try to write a bitblt operator? Back in the day, I used to
write video driver code. There are about a zillion cases. As I
understand it, BitBlt has a code generator that writes the code for the
specific blt needed and then calls it.
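Just to illustrate the case explosion (a toy sketch, not GDI's actual
code): every combination of source format, destination format, and raster
op really wants its own tight inner loop, so a driver either hand-writes a
pile of them or generates the one it needs and jumps to it:

    #include <string.h>

    typedef void (*BlitFn)(void *dst, const void *src, int pixels);

    /* formats match: a unity blit is just a straight copy */
    static void blit_16_copy(void *dst, const void *src, int pixels)
    {
        memcpy(dst, src, (size_t)pixels * 2);
    }

    /* formats differ (555 source, 565 dest): touch every pixel */
    static void blit_16_555_to_565(void *dst, const void *src, int pixels)
    {
        const unsigned short *s = (const unsigned short *)src;
        unsigned short *d = (unsigned short *)dst;
        while (pixels--) {
            unsigned short p = *s++;
            /* shift red/green up one bit, widen green from 5 to 6 bits */
            *d++ = (unsigned short)(((p & 0x7FE0u) << 1)
                                  | ((p >> 4) & 0x20u)
                                  | (p & 0x001Fu));
        }
    }

    /* a real blitter's table also covers 565->555, ROPs, clipping, ... */
    static BlitFn pick_blit(int src_is_565, int dst_is_565)
    {
        return (src_is_565 == dst_is_565) ? blit_16_copy
                                          : blit_16_555_to_565;
    }

If your profile shows all the time in what looks like color conversion,
it's the second kind of loop you're paying for, not the first.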
I don't know for sure, but I'd guess that one of several things could be
going on here:
* GetColorTransform isn't really what's going on; that's just the name
your profiler is assigning to the generated code.
* GetColorTransform is the compiler that writes the blt, and the actual
blt operation takes almost no time.
* The fact that your read and write don't have the same alignment is
causing trouble.
* You're right, and you don't actually have the same 16-bit buffer
layout (e.g. 555 vs. 565 -- one way to check is sketched below).
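On that last point: here's a minimal sketch, assuming the usual two-call
GetDIBits idiom (the struct and function names are mine, not anything from
your code), for asking the display what its 16-bit layout actually is so
you can create the DIB section to match:

    #include <windows.h>

    typedef struct {
        BITMAPINFOHEADER header;
        DWORD            masks[3];  /* R, G, B masks when BI_BITFIELDS */
    } BitmapInfoWithMasks;

    static void QueryScreenFormat(BitmapInfoWithMasks *bmi)
    {
        HDC     screen = GetDC(NULL);
        HBITMAP bmp    = CreateCompatibleBitmap(screen, 1, 1);

        ZeroMemory(bmi, sizeof(*bmi));
        bmi->header.biSize = sizeof(BITMAPINFOHEADER);

        /* first call fills in biBitCount/biCompression for the screen... */
        GetDIBits(screen, bmp, 0, 1, NULL, (BITMAPINFO *)bmi, DIB_RGB_COLORS);
        /* ...second call fills the color masks when biCompression is
           BI_BITFIELDS; BI_RGB with biBitCount == 16 means 555 */
        GetDIBits(screen, bmp, 0, 1, NULL, (BITMAPINFO *)bmi, DIB_RGB_COLORS);

        DeleteObject(bmp);
        ReleaseDC(NULL, screen);
    }

If the masks come back 0xF800/0x07E0/0x001F you're on a 565 display; feed
that same header (and masks, with BI_BITFIELDS) to CreateDIBSection and
BitBlt should have no conversion left to do.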
How 'bout trying it in 24-bit or 32-bit mode and profiling that to
see where the time is spent? It'd at least be one way to confirm or deny
your supposition.
Kent
Brian Hook wrote:
>
> It looks like about 20% of my time is spent in BitBlt() via DIBsections.
> Of that time, this funky call to XLATEOBJ_hGetColorTransform is
> consuming like 98% of the time. I'm guessing from the name that it's
> doing a color transform from my DIB to the screen, however both are in
> 16-bit. So I'm figuring one of two things is going on:
>
> - BitBlt() doesn't detect the unity situation with DIB sections
>
> - even though the DIB is 16-bits and the framebuffer is 16-bits, they're
> both "different" 16-bits. If this is the case, is there a way to query
> for the native format to allow for a unity blit?
--
-----------------------------------------------------------------------
Kent Quirk | MindRover: "Astonishingly creative."
Game Architect | Check it out!
ken...@co... | http://www.mindrover.com/
_____________________________|_________________________________________
|