gamedevlists-windows Mailing List for gamedev (Page 59)
From: Brian H. <bri...@py...> - 2002-01-20 23:08:32
Off the top of my head I'm not sure of a way, but you might want to look at GetModuleHandleEx(), which accepts the GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS flag. From what I can read, that lets you get a module handle from a memory address you pass in lieu of a module name. From the module handle you can call GetProcAddress(), etc.

Brian

At 11:55 PM 1/20/2002 +0100, Gabor Simko wrote:
>Hi,
>
>Is there a way to load a dynamic library from memory instead of
>a file? I have the dynamic library in memory and I don't want to
>write it to the winchester and use LoadLibrary() if it isn't necessary...
>
>Thanks for any replies!
> Gabor Simko
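A minimal sketch of the lookup Brian describes, in plain Win32 C; the function name here is made up, and GetModuleHandleEx() is documented for Windows XP and later:

#include <windows.h>

/* Resolve the module containing 'addr', then look up an export by name.
   'addr' can be any address known to live inside the module. */
FARPROC ResolveExportFromAddress(const void *addr, const char *exportName)
{
    HMODULE module = NULL;

    /* The flag makes GetModuleHandleEx() treat its second argument as an
       address inside the target module rather than as a module name. By
       default this bumps the module's reference count, so either pair the
       call with FreeLibrary() or also pass
       GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT. */
    if (!GetModuleHandleEx(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS,
                           (LPCTSTR)addr, &module))
        return NULL;

    return GetProcAddress(module, exportName);
}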
From: Gabor S. <ts...@co...> - 2002-01-20 22:55:56
Hi,

Is there a way to load a dynamic library from memory instead of from a file? I have the dynamic library in memory and I don't want to write it to the winchester and use LoadLibrary() if it isn't necessary...

Thanks for any replies!
Gabor Simko
From: Tugkan C. <tu...@in...> - 2002-01-17 16:02:36
----- Original Message -----
From: "Eero Pajarre" <epa...@ko...>
To: "Tugkan Calapoglu" <tu...@in...>
Cc: <gam...@li...>
Sent: Thursday, January 17, 2002 3:21 PM
Subject: Re: [GD-Windows] profiler for AMD

> Considering how often VTune (I have only tested it up to
> version 4.5) decides that all my CPU is used by "other32",
> I have been rather happy with AMD CodeAnalyst.
>
> You can also use VTune with Athlon, except for the EBS-based sampling.
>
> Eero

That "other32" and "library" business is a real problem for me (some bars just say "library", with no clue about which library they mean). I am not sure whether it is a problem of VTune or of the compiler: either the compiler fails to insert the debug info, or VTune has problems interpreting it. I worked around it by selecting the "line numbers only" option. My question to Intel's support service remained unanswered (well, they gave an answer, but it was totally useless). If CodeAnalyst does a better job than VTune here, that is a real breakthrough :).
From: Eero P. <epa...@ko...> - 2002-01-17 14:22:12
Tugkan Calapoglu wrote:

> We are considering switching to Athlon XP from Intel chips. We know that
> the XP is compatible with the Intel instruction set, but is it compatible
> with the Intel architecture to the extent of the internal counters?
>
> What I am after is, indeed, whether I could use VTune with an AMD chip.
>
> If not, is AMD's CodeAnalyst good enough to meet development needs?
> (Currently it is at version 1.1, which makes it seem like they don't
> have a long history of making profilers :).)

Considering how often VTune (I have only tested it up to version 4.5) decides that all my CPU is used by "other32", I have been rather happy with AMD CodeAnalyst.

You can also use VTune with Athlon, except for the EBS-based sampling.

Eero
From: Corrinne Y. <cor...@sp...> - 2002-01-16 19:08:41
-----Original Message-----
From: gam...@li... [mailto:gam...@li...] On Behalf Of Brian Sharon
Sent: Wednesday, January 16, 2002 1:01 PM
To: cor...@sp...; gam...@li...
Subject: RE: [GD-Windows] Spin Control Right Align "Upside Down"

Why the default has a minimum of 100 and a maximum of 0 is beyond me though.

--brian

--

Thanks, Brian. It was an RTFM solution after all. :)
From: Brian S. <bs...@mi...> - 2002-01-16 19:00:49
From CSpinButtonCtrl: "Any time the minimum setting is greater than the maximum setting (for example, when the default settings are used), clicking the up arrow decreases the position value and clicking the down arrow increases it."

Why the default has a minimum of 100 and a maximum of 0 is beyond me though.

--brian

> -----Original Message-----
> From: Corrinne Yu [mailto:cor...@sp...]
> Sent: Wednesday, January 16, 2002 10:54 AM
> To: gam...@li...
> Subject: [GD-Windows] Spin Control Right Align "Upside Down"
>
> This is for the editor of my engine.
>
> I have an auto-buddied spin control that is aligned right to the edit
> box, so that artists can type in a number or use the spin to scroll it
> bigger or smaller.
>
> I notice that when you click the "up arrow" the numbers get bigger, and
> the "down arrow" the numbers get smaller. This seems upside down (or
> backwards) to me. :)
>
> I figure in order to get the direction I want, I have to not use the
> convenient auto-buddy, and manually link the edit box number and the
> spin control value.
>
> Before I waste time writing that code, I wonder if you know of a special
> flag that would make the spin work "upside down" from before (or a way
> to call the Windows API to do it for you).
>
> Thank you for any info from you Windows experts out there. :)
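Given that quote, a minimal sketch of the fix, assuming hSpin is the up-down control's window handle and 0..100 is the desired range; once minimum < maximum, the up arrow increments the position:

#include <windows.h>
#include <commctrl.h>

/* Re-set the range the "right way around" so the arrows behave as the
   artists expect. */
void FixSpinDirection(HWND hSpin)
{
    /* UDM_SETRANGE packs the maximum into the low word of lParam and the
       minimum into the high word. */
    SendMessage(hSpin, UDM_SETRANGE, 0, (LPARAM)MAKELONG(100, 0));
}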
From: Corrinne Y. <cor...@sp...> - 2002-01-16 18:54:19
This is for the editor of my engine.

I have an auto-buddied spin control that is aligned right to the edit box, so that artists can type in a number or use the spin to scroll it bigger or smaller.

I notice that when you click the "up arrow" the numbers get bigger, and the "down arrow" the numbers get smaller. This seems upside down (or backwards) to me. :)

I figure in order to get the direction I want, I have to not use the convenient auto-buddy, and manually link the edit box number and the spin control value.

Before I waste time writing that code, I wonder if you know of a special flag that would make the spin work "upside down" from before (or a way to call the Windows API to do it for you).

Thank you for any info from you Windows experts out there. :)
From: Jon W. <hp...@mi...> - 2002-01-16 18:54:08
VTune does not work very well with AMD processors; it degrades to timer-based sampling (yeah, the 8253 lives! :-). CodeAnalyst is where it's at for AMD -- it's certainly better than nothing, but I haven't used it enough to say more (i.e. I've only sniffed at it once).

Even though VTune is at a high version number, it has its annoying habits, such as blue-screening Win2k now and then, and being very slow at parsing symbol files. I believe most of it is written in Visual Basic (except for the device driver part). If you think VTune is doing fine, I wouldn't be scared of trying CodeAnalyst.

Cheers,

/ h+

-----Original Message-----
From: gam...@li... [mailto:gam...@li...] On Behalf Of Tugkan Calapoglu
Sent: Wednesday, January 16, 2002 8:33 AM
To: gam...@li...
Subject: [GD-Windows] profiler for AMD

We are considering switching to Athlon XP from Intel chips. We know that the XP is compatible with the Intel instruction set, but is it compatible with the Intel architecture to the extent of the internal counters?

What I am after is, indeed, whether I could use VTune with an AMD chip.

If not, is AMD's CodeAnalyst good enough to meet development needs? (Currently it is at version 1.1, which makes it seem like they don't have a long history of making profilers :).)
From: Neil S. <ne...@r0...> - 2002-01-16 17:42:32
On pre-XP Athlons, you couldn't use Event Based Sampling (EBS) in VTune, which uses Intel's internal counters, but you could still use Time Based Sampling (TBS), using various timers (e.g. the Virtual Timer Device), which should be good enough for most purposes. I don't think this has changed on the Athlon XP.

----- Original Message -----
From: "Tugkan Calapoglu" <tu...@in...>
To: <gam...@li...>
Sent: Wednesday, January 16, 2002 4:33 PM
Subject: [GD-Windows] profiler for AMD

We are considering switching to Athlon XP from Intel chips. We know that the XP is compatible with the Intel instruction set, but is it compatible with the Intel architecture to the extent of the internal counters?

What I am after is, indeed, whether I could use VTune with an AMD chip.

If not, is AMD's CodeAnalyst good enough to meet development needs? (Currently it is at version 1.1, which makes it seem like they don't have a long history of making profilers :).)
From: Andy G. <an...@mi...> - 2002-01-16 17:20:11
Short answer - no, this message cannot be disabled in VC 6.

This bugged me too - I had a little program that dumped memory on crashes; I would check each byte with IsBadReadPtr before reading it, and sometimes this would generate hundreds of first-chance exceptions. IsBadReadPtr just reads the byte surrounded by an exception handler - which is pretty silly, since you could just have done that yourself. So I wrote my own IsBadReadPtr routine:

1. Check the address is not obviously bad (< 64K etc.).
2. Check the page protections for the address permit reading.
3. Read the byte in an exception block.

This got rid of 99.9% of my first-chance exception messages and seems much more sensible. I don't know why you are getting a lot of exceptions - this is NOT the best way of coding something; exceptions are slow and should be used for, well... exceptional things...

Andy Glaister

-----Original Message-----
From: Jacob Turner (Core Design Ltd) [mailto:Ja...@Co...]
Sent: Wednesday, January 16, 2002 4:13 AM
To: 'Gam...@li...'
Subject: RE: [GD-Windows] annoying First-chance exception message

I don't know if this is what you want, but have you tried the "Exceptions" menu option in the "Debug" menu. Select all the options using Shift and the mouse, and select "Stop if not handled". Then your try/catch should get the exception before MSVC does. We used this to disable MSVC's "exception has occurred" dialog box.

Jake

> -----Original Message-----
> From: Ivan-Assen Ivanov [mailto:as...@ha...]
> Sent: 16 January 2002 11:39
> To: GDWindows
> Subject: [GD-Windows] annoying First-chance exception message
>
> Do you know a way to suppress the annoying "First chance exception"
> message which appears in the output window of MSVC 6.0 any time
> anything throws any kind of exception in your program? We are trying
> to move to an exception-based error handling system, and these
> messages make the output window nearly unusable. In "real"
> applications, of course, exceptions would not appear too often
> (they're not a replacement for return values), but still, these
> messages clutter my debug output too much.
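A plain-C sketch of Andy's three-step check; the function name is made up, and __try/__except is MSVC structured exception handling:

#include <windows.h>

/* Returns nonzero and stores the byte if it can be read safely. */
int SafeReadByte(const void *p, unsigned char *out)
{
    MEMORY_BASIC_INFORMATION mbi;

    /* 1. Reject obviously bad addresses; the first 64K of the Win32
          address space is never mapped. */
    if ((ULONG_PTR)p < 0x10000)
        return 0;

    /* 2. Check that the page is committed and its protection permits
          reading before touching it. (A stricter version would whitelist
          the readable protection values explicitly.) */
    if (VirtualQuery(p, &mbi, sizeof(mbi)) == 0)
        return 0;
    if (mbi.State != MEM_COMMIT ||
        mbi.Protect == PAGE_NOACCESS || (mbi.Protect & PAGE_GUARD))
        return 0;

    /* 3. Read the byte inside an exception block as a last resort; with
          the checks above this should almost never fire. */
    __try {
        *out = *(const unsigned char *)p;
    }
    __except (EXCEPTION_EXECUTE_HANDLER) {
        return 0;
    }
    return 1;
}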
From: Tugkan C. <tu...@in...> - 2002-01-16 16:26:15
We are considering switching to Athlon XP from Intel chips. We know that the XP is compatible with the Intel instruction set, but is it compatible with the Intel architecture to the extent of the internal counters?

What I am after is, indeed, whether I could use VTune with an AMD chip.

If not, is AMD's CodeAnalyst good enough to meet development needs? (Currently it is at version 1.1, which makes it seem like they don't have a long history of making profilers :).)
From: Jacob T. (C. D. Ltd) <Ja...@Co...> - 2002-01-16 12:30:48
I don't know if this is what you want, but have you tried the "Exceptions" menu option in the "Debug" menu. Select all the options using Shift and the mouse, and select "Stop if not handled". Then your try/catch should get the exception before MSVC does. We used this to disable MSVC's "exception has occurred" dialog box.

Jake

> -----Original Message-----
> From: Ivan-Assen Ivanov [mailto:as...@ha...]
> Sent: 16 January 2002 11:39
> To: GDWindows
> Subject: [GD-Windows] annoying First-chance exception message
>
> Do you know a way to suppress the annoying "First chance exception"
> message which appears in the output window of MSVC 6.0 any time
> anything throws any kind of exception in your program? We are trying
> to move to an exception-based error handling system, and these
> messages make the output window nearly unusable. In "real"
> applications, of course, exceptions would not appear too often
> (they're not a replacement for return values), but still, these
> messages clutter my debug output too much.
From: Ivan-Assen I. <as...@ha...> - 2002-01-16 11:57:15
Do you know a way to suppress the annoying "First chance exception" message which appears in the output window of MSVC 6.0 any time anything throws any kind of exception in your program? We are trying to move to an exception-based error handling system, and these messages make the output window nearly unusable. In "real" applications, of course, exceptions would not appear too often (they're not a replacement for return values), but still, these messages clutter my debug output too much.
From: Rich <leg...@xm...> - 2002-01-16 06:17:49
Hey, I just happened to be reading "ATL Internals", and they were talking about how you can get a debug listing of all the QueryInterface traffic between your object and the outside world. They gave a sample listing of the interfaces queried on a control hosted by IE4. One of the interfaces listed was IActiveScript.

So I looked at IActiveScript in MSDN, and it has this method:

HRESULT AddNamedItem(
    LPCOLESTR pstrName,  // address of item name
    DWORD dwFlags        // item flags
);

"Adds the name of a root-level item to the scripting engine's name space. A root-level item is an object with properties and methods, an event source, or all three."

That sounds exactly like what you want. When a control is hosted in IE4, at some point it hands you a pointer to itself. You can use that to QI for IActiveScript, I believe. (I'm a little new to AX controls yet.)
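A short sketch of the call Rich points at, assuming the host has already obtained the engine's IActiveScript pointer. The item name "game" is hypothetical, and a real host must also return the object's IUnknown from IActiveScriptSite::GetItemInfo when the engine asks for that name:

#include <windows.h>
#include <activscp.h>

// Publish a named root-level object into the script namespace.
HRESULT PublishHostObject(IActiveScript *script)
{
    // SCRIPTITEM_ISVISIBLE lets scripts reference the name directly;
    // SCRIPTITEM_GLOBALMEMBERS also promotes its members to global scope.
    return script->AddNamedItem(L"game",
                                SCRIPTITEM_ISVISIBLE |
                                SCRIPTITEM_GLOBALMEMBERS);
}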
From: Rich <leg...@xm...> - 2002-01-16 04:18:27
In article <HEE...@mi...>, "Jon Watte" <hp...@mi...> writes:

> Thanks; this is the second recommendation I get for this method.

By "this method", I assume you mean making a simple COM object you can call from HTML script? For an example of that, see the source for izfree <http://izfree.sourceforge.net/>. It contains a simple COM object written in ATL that exposes a dispinterface that can be accessed from script in a web page. The object wraps the API function ::CoCreateGuid, so that scripts can generate GUIDs anytime they want in a web page.

Here's a step-by-step I posted to usenet:

Date: Thu Dec 20 11:35:47 MST 2001
Groups: microsoft.public.scripting.vbscript
From: leg...@ma... (Rich)
Subject: Re: API Calls from VBScript

"Christian METZ" <cm...@dy...> spake the secret code thusly:

> As Michael Harris said, not directly from VBS, but you can use an OCX
> in a VBS. So if you need to make an API call, you need an OCX to do it.

It's easier than that. You only need to make a COM object that is callable from scripting languages. Scripting languages use late binding, also known as dispinterfaces. The only requirement on your COM object is that it provide a default dispinterface for the scripting environment. This is very easy to do with Visual C++ and the ATL AppWizard. Here's the recipe:

1. Select File / New...
2. Select the Projects tab.
3. Select ATL COM AppWizard, enter a name and location for the project, and click OK.
4. Select Server Type DLL and click Finish.
5. In the Workspace window, select the Class View tab.
6. Select New ATL Object... from the context menu for your project's classes.
7. Select Simple Object from the Objects category in the ATL Object Wizard and click Next.
8. Select the Attributes tab, check "Support ISupportErrorInfo", and leave the rest at the default values. ISupportErrorInfo allows your object to communicate more useful information about errors back to the calling script. (It's what fills in the Err object in VBScript.)
9. Select the Names tab and enter a short name for your object; the Object Wizard automatically fills in the rest of the fields, which you can edit if you don't like what it created.
10. Click OK.
11. Expand your project's classes in the Class View to display the new interface you just created (it will have a sideways lollipop displayed next to its name, which will begin with an 'I').
12. Select Add Method... or Add Property... from the context menu for the newly created interface. I found it best to add all the properties and methods you need before doing anything else. As you add methods and properties to the interface definition, the Object Wizard adds stub code to your C++ class that implements the corresponding methods and/or properties.
13. Expand the C++ class that implements your interface in the Class View. You should see your new interface listed underneath your implementation class. Expand the interface to show its methods and properties. You can double-click on any of the methods and property functions, and the corresponding code is loaded in the editor window.
14. Edit the boilerplate code to provide the necessary implementation.
15. Debug your implementation.

The project generated by the ATL COM AppWizard automatically registers the COM object after a successful compile.

After step 12 is completed above, you should be able to compile your object that does nothing and have it registered. You can then test it by writing a VBScript to create your object. The object's ProgId was entered in the Names tab of the Object Wizard, but you can also find it by opening the object's .rgs file in the editor. The .rgs file is in the Resources folder in the File View of the Workspace window.

I've done this sort of thing several times, and it works great. For instance, I've written an HTA using VBScript and I needed to generate GUIDs. At first I was just running guidgen from the Platform SDK to generate new GUIDs. But lots of people might not have the Platform SDK installed, or they might not have guidgen from Visual C++. So I wrote a small object that called ::CoCreateGuid and then ::StringFromGUID2 to return the GUID as a BSTR. Here's the entire implementation:

// Generator.cpp : Implementation of CGenerator
#include "stdafx.h"
#include "Guidgen.h"
#include "Generator.h"

/////////////////////////////////////////////////////////////////////////////
// CGenerator

#define NUM_OF(ary_) (sizeof(ary_)/sizeof((ary_)[0]))

STDMETHODIMP CGenerator::Generate(BSTR *guid)
{
    if (!guid) {
        return E_POINTER;
    }

    GUID g = { 0 };
    const HRESULT hr = ::CoCreateGuid(&g);
    if (FAILED(hr)) {
        return hr;
    }

    OLECHAR buffer[80];
    if (!::StringFromGUID2(g, buffer, NUM_OF(buffer))) {
        return E_INVALIDARG;
    }

    *guid = CComBSTR(buffer).Detach();
    return S_OK;
}

It's slightly ugly due to the COM semantics of HRESULTs and BSTRs, but the idea is pretty straightforward. Using that object, I was able to eliminate the dependency my HTA had on guidgen, and as a side effect it works faster too, since the generator code is attached to my process and I don't have to run guidgen into a file and parse the resulting file.
From: Jon W. <hp...@mi...> - 2002-01-16 04:03:38
Thanks; this is the second recommendation I've gotten for this method. Although I had hoped for some way to implement IXMLDOMNode and just make it available to my hosted IE, I think this way can be made to work, too (although with slightly heavier lifting required on my part).

Cheers,

/ h+

> -----Original Message-----
> From: gam...@li...
> [mailto:gam...@li...] On Behalf Of Rich
> Sent: Tuesday, January 15, 2002 6:49 PM
> To: gam...@li...
> Subject: Re: [GD-Windows] Extending Internet Explorer DOM when hosting
>
> In article <HEE...@mi...>,
> "Jon Watte" <hp...@mi...> writes:
>
> > I'm hosting an Internet Explorer control in my application. I would
> > like to expose some DOM properties with functions on them to JScript
> > running in web pages inside that control.
>
> You can make a simple COM object that can be instantiated in the HTML
> and use that to provide a callback into your larger application.
>
> I think the way WSH provides 'global' objects like WScript is that
> they host the scripting environment itself and expose that through the
> scripting environment. This is what IE is doing for you, although I
> don't know if IE explicitly exposes the scripting environment that way.
>
> Take a look at the docs for Windows Script Host and see if that helps.
From: Rich <leg...@xm...> - 2002-01-16 02:49:08
In article <HEE...@mi...>, "Jon Watte" <hp...@mi...> writes:

> I'm hosting an Internet Explorer control in my application. I would
> like to expose some DOM properties with functions on them to JScript
> running in web pages inside that control.

You can make a simple COM object that can be instantiated in the HTML and use that to provide a callback into your larger application.

I think the way WSH provides 'global' objects like WScript is that they host the scripting environment itself and expose that through the scripting environment. This is what IE is doing for you, although I don't know if IE explicitly exposes the scripting environment that way.

Take a look at the docs for Windows Script Host and see if that helps.
From: Jon W. <hp...@mi...> - 2002-01-16 01:54:43
I'm hosting an Internet Explorer control in my application. I would like to expose some DOM properties with functions on them to JScript running in web pages inside that control. I've searched MSDN for a good 45 minutes, but couldn't find anything quite relevant (though lots of near misses).

Has anyone done this, and/or does anyone know what interface name I should start my investigation at? Either "this is the interface you implement to publish one of these guys" or "this is the interface you call to register your such guy" would be fine by me.

Cheers,

/ h+
From: Brian H. <bri...@py...> - 2002-01-15 20:33:04
Holy cow, you're right! It doesn't mention this in the CreateDIBSection() docs; I had to do an MSDN search for "555 CreateDIBSection", and even then it was only found in one sample doc. It's also in the docs for BITMAPINFOHEADER, but that didn't show up in the search.

Thanks for the heads up, that should be worth at least 10-20%.

Brian
From: Jon W. <hp...@mi...> - 2002-01-15 20:24:18
> > For what it's worth, when I was doing Mac programming, the Apple line
> > was always:
> >
> >   "You should assume that CopyBits() is written by super intelligent
> >   space aliens and will always perform optimally."
>
> Sadly, CopyBits() is _much_ slower than a memcpy() buried in a for
> loop. So much for super intelligent space aliens :P

Well, it usually was the case that you had to "prime" CopyBits by making sure the moon phases were aligned for the source and destination GWorlds, but you could usually get it rolling pretty well. All the cost in CopyBits comes from pre-copy set-up, so the bigger the copy, and the more moon phase alignment you can manage, the faster it gets.

Of course, there were special cases, like the "pixel doubling blit" which stuffed one 32-bit pixel twice into a double and used the 64-bit data path to the frame buffer, where CopyBits was lagging behind for a while. But I'd be really surprised if they haven't gotten around to fixing that (and others) by now, seeing as that was five years ago...

Cheers,

/ h+
From: Andy G. <an...@mi...> - 2002-01-15 20:15:46
CreateDIBSection supports 555 and 565 16-bit modes - check out the docs. I would either work in 565 all the time (as most people will have this) or do your own custom color convert/blt code - you should be able to easily match or beat the speed of GDI, and it's kinda fun MMX code.

Andy.

-----Original Message-----
From: Brian Hook [mailto:bri...@py...]
Sent: Tuesday, January 15, 2002 11:57 AM
To: gam...@li...
Subject: RE: [GD-Windows] BitBlt() syncing to VBL

> The comment at the end about using 555 buffers is
> interesting. Maybe that's why you are using GDI, not DDRAW?

Actually, I'm using 555 BECAUSE I'm using GDI, not the other way around. GDI's 16-bit DIB sections are 555.

> I have used GDI in the past to do 555 to primary blits with
> very few issues, it's very fast, esp. when you involve
> stretching. However, I know of very few displays that
> actually have a desktop running in 555 - almost everything
> these days has a 565 desktop (in 16 bit mode).

Correct.

> difference between the mac and PC is that the mac has a 555
> desktop? Maybe the PC is being forced to do an expensive
> conversion from 555 to 565.

It definitely is (as per the XLATEOBJ_hGetColorTransform thread of a while ago), but I don't think that's accounting for all the difference. Both machines have NVidia graphics accelerators, and they're both running in 16-bit, and I don't think the GF series supports a 555 mode.

Brian
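A sketch of the 565 path Andy mentions; with plain BI_RGB, a 16-bpp DIB section defaults to 555, so getting 565 goes through BI_BITFIELDS (the function name is made up):

#include <windows.h>

/* Create a 565 DIB section by supplying explicit channel masks. */
HBITMAP Create565DibSection(HDC hdc, int width, int height, void **bits)
{
    struct {
        BITMAPINFOHEADER bmiHeader;
        DWORD masks[3];            /* red, green, blue for BI_BITFIELDS */
    } bmi = { 0 };

    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   /* negative = top-down rows */
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 16;
    bmi.bmiHeader.biCompression = BI_BITFIELDS;
    bmi.masks[0] = 0xF800;       /* 5 bits red   */
    bmi.masks[1] = 0x07E0;       /* 6 bits green */
    bmi.masks[2] = 0x001F;       /* 5 bits blue  */

    return CreateDIBSection(hdc, (BITMAPINFO *)&bmi, DIB_RGB_COLORS,
                            bits, NULL, 0);
}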
From: Brian H. <bri...@py...> - 2002-01-15 19:57:03
> The comment at the end about using 555 buffers is
> interesting. Maybe that's why you are using GDI, not DDRAW?

Actually, I'm using 555 BECAUSE I'm using GDI, not the other way around. GDI's 16-bit DIB sections are 555.

> I have used GDI in the past to do 555 to primary blits with
> very few issues, it's very fast, esp. when you involve
> stretching. However, I know of very few displays that
> actually have a desktop running in 555 - almost everything
> these days has a 565 desktop (in 16 bit mode).

Correct.

> difference between the mac and PC is that the mac has a 555
> desktop? Maybe the PC is being forced to do an expensive
> conversion from 555 to 565.

It definitely is (as per the XLATEOBJ_hGetColorTransform thread of a while ago), but I don't think that's accounting for all the difference. Both machines have NVidia graphics accelerators, and they're both running in 16-bit, and I don't think the GF series supports a 555 mode.

Brian
From: Andy G. <an...@mi...> - 2002-01-15 19:39:33
The comment at the end about using 555 buffers is interesting. Maybe that's why you are using GDI, not DDRAW? DDRAW blits do -not- do any pixel format conversion - specifically because such conversions are generally slow, and it is more optimal to convert your source data.

I have used GDI in the past to do 555 to primary blits with very few issues, it's very fast, esp. when you involve stretching. However, I know of very few displays that actually have a desktop running in 555 - almost everything these days has a 565 desktop (in 16 bit mode). Maybe the difference between the Mac and PC is that the Mac has a 555 desktop? Maybe the PC is being forced to do an expensive conversion from 555 to 565.

I bet you would get better performance if you wrote your own blitter using MMX code to go from 555->565 and 555->8888, blitted to an off-screen buffer, and then flipped/blitted this. If the primary is not 555, 565 or 8888, then just let GDI do it. There are some advantages to 555, like true gray scales and an easier single code path - but these are very, very slight.

Andy Glaister.

-----Original Message-----
From: Brian Hook [mailto:bri...@py...]
Sent: Tuesday, January 15, 2002 11:27 AM
To: gam...@li...
Subject: RE: [GD-Windows] BitBlt() syncing to VBL

> Yeah, but assuming that the offscreen backbuffer is on the
> video card, the vidmem-to-vidmem blit is so fast that it's
> essentially free. Certainly doesn't cost you any CPU time to
> queue it up.

If only it were so simple =) You can specify where you want your offscreen GWorld allocated, and all my allocations are hardcoded into system memory. DX programming has taught me to stay away from anything twitchy -- like VRAM/AGP buffers =)

> Yeah, the fact that your frame rate is at the refresh rate -
> it would be a stretch to suspect anything else. Can you
> disable bits of your pipeline and log your framerate to see
> where the bottleneck is? i.e. if you build your frame but don't
> blit it to the card, how many fps do you get? What if you
> just blit an empty frame to the card every time? Etc. etc.

I'm going to check all that again. On the DX list this erupted into the "how to measure time" thread, but for now I'm using QPC, and I'll see what my timings are like for the screen build and the blit.

> I think the recipe for speed is to minimize blits across
> the bus - composite the new frame in system memory, do one
> blit to the back buffer of the video card, then flip or blit
> back to front. You don't want to send something across the
> bus that will later be overdrawn.

That's what I'm doing.

> I would bet that CopyBits is heavily optimized, but BitBlt
> should be too.

Not necessarily -- I would imagine that the Windows engineers decided long ago that GDI acceleration isn't going to be a major priority and have concentrated their efforts elsewhere. The Mac engineers, however, recognize that they need to look for Altivec optimizations anywhere they can, since that's part of their marketing strategy (the "MHz Myth").

> I think the key for both is to make sure
> you're on the fast path - no transparency, pixel formats
> match, palettes (if any) match - so that the function can
> just blast bits.

Well, this goes back to the XLATEOBJ_hGetColorTransform thread. There is some conversion happening, but it's nearly unavoidable unless I write some huge explosion of blitters.

Right now I'm taking advice from a friend to basically do everything in some canonical format (x555 in my case) and let the back blitter handle conversion. This is theoretically slower than making a DDB and then writing every permutation of blitter necessary to support building my buffers in DDB land. But that just seems like a failure case waiting to happen, given the sheer number of pixel formats that are available.

Brian
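A plain-C sketch of the 555->565 conversion Andy suggests; an MMX version would process several pixels per iteration, but the bit shuffling is the same:

#include <stddef.h>

/* x555 layout: 0RRRRRGGGGGBBBBB; 565 layout: RRRRRGGGGGGBBBBB. */
void Convert555To565(const unsigned short *src, unsigned short *dst,
                     size_t count)
{
    size_t i;
    for (i = 0; i < count; ++i) {
        unsigned short p = src[i];
        unsigned short rg   = (p & 0x7FE0) << 1;  /* red + green, shifted up */
        unsigned short gLow = (p & 0x0200) >> 4;  /* replicate green MSB into
                                                     the new low green bit   */
        unsigned short b    = p & 0x001F;         /* blue stays put */
        dst[i] = (unsigned short)(rg | gLow | b);
    }
}

Replicating the green MSB into the extra low bit maps 555's full-intensity green onto 565's full-intensity green, so whites stay white.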
From: Brian H. <bri...@py...> - 2002-01-15 19:27:08
> Yeah, but assuming that the offscreen backbuffer is on the
> video card, the vidmem-to-vidmem blit is so fast that it's
> essentially free. Certainly doesn't cost you any CPU time to
> queue it up.

If only it were so simple =) You can specify where you want your offscreen GWorld allocated, and all my allocations are hardcoded into system memory. DX programming has taught me to stay away from anything twitchy -- like VRAM/AGP buffers =)

> Yeah, the fact that your frame rate is at the refresh rate -
> it would be a stretch to suspect anything else. Can you
> disable bits of your pipeline and log your framerate to see
> where the bottleneck is? i.e. if you build your frame but don't
> blit it to the card, how many fps do you get? What if you
> just blit an empty frame to the card every time? Etc. etc.

I'm going to check all that again. On the DX list this erupted into the "how to measure time" thread, but for now I'm using QPC, and I'll see what my timings are like for the screen build and the blit.

> I think the recipe for speed is to minimize blits across
> the bus - composite the new frame in system memory, do one
> blit to the back buffer of the video card, then flip or blit
> back to front. You don't want to send something across the
> bus that will later be overdrawn.

That's what I'm doing.

> I would bet that CopyBits is heavily optimized, but BitBlt
> should be too.

Not necessarily -- I would imagine that the Windows engineers decided long ago that GDI acceleration isn't going to be a major priority and have concentrated their efforts elsewhere. The Mac engineers, however, recognize that they need to look for Altivec optimizations anywhere they can, since that's part of their marketing strategy (the "MHz Myth").

> I think the key for both is to make sure
> you're on the fast path - no transparency, pixel formats
> match, palettes (if any) match - so that the function can
> just blast bits.

Well, this goes back to the XLATEOBJ_hGetColorTransform thread. There is some conversion happening, but it's nearly unavoidable unless I write some huge explosion of blitters.

Right now I'm taking advice from a friend to basically do everything in some canonical format (x555 in my case) and let the back blitter handle conversion. This is theoretically slower than making a DDB and then writing every permutation of blitter necessary to support building my buffers in DDB land. But that just seems like a failure case waiting to happen, given the sheer number of pixel formats that are available.

Brian
From: Brian S. <bs...@mi...> - 2002-01-15 19:19:07
> Nope. This is on OS X using off-screen GWorlds and CopyBits(). This is
> pretty much the MacOS equivalent to DIBSections and BitBlt(). In fact,
> I should have even WORSE performance under OS X because I'm actually
> triple buffering -- my blit goes into the window's off-screen
> backbuffer, which is then blitted by the OS later. Under Windows I'm
> just going straight from DIB section to the window's front buffer (in
> theory).

Yeah, but assuming that the offscreen backbuffer is on the video card, the vidmem-to-vidmem blit is so fast that it's essentially free. Certainly doesn't cost you any CPU time to queue it up.

> My guess is that there's still some VBL action going on somewhere
> (note: the Mac also VBLs, even though I'm getting 125fps, but the
> triple buffering is probably accelerating things by allowing multiple
> blits in a single refresh?), since I'm locked very close to my
> monitor's ostensible frame rate.

Yeah, the fact that your frame rate is at the refresh rate - it would be a stretch to suspect anything else. Can you disable bits of your pipeline and log your framerate to see where the bottleneck is? I.e., if you build your frame but don't blit it to the card, how many fps do you get? What if you just blit an empty frame to the card every time? Etc. etc.

I think the recipe for speed is to minimize blits across the bus - composite the new frame in system memory, do one blit to the back buffer of the video card, then flip or blit back to front. You don't want to send something across the bus that will later be overdrawn.

> I've tried disabling all the various "sync" parameters in the driver
> properties, but to no avail.
>
> I do find this quite a bit odd, simply because I was expecting to do a
> lot of optimization work on the Mac, since the Mac has a slower clock
> speed and significantly less memory bandwidth. My nearest guess is
> that I'm either doing something terribly wrong on the Windows side, or
> the Mac has some kind of mad, stupid Altivec-optimized
> memcpy()/CopyBits().

I would bet that CopyBits is heavily optimized, but BitBlt should be too. I think the key for both is to make sure you're on the fast path - no transparency, pixel formats match, palettes (if any) match - so that the function can just blast bits. I prefer DirectDraw over GDI because if you're not on the fast path you can tell immediately - either nothing will draw, or in the case of 1555 vs. 565, everything looks very, very odd.

--brian
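A minimal sketch of the presentation step Brian describes, with hypothetical names; the frame is assumed to have already been composited into a system-memory DIB section selected into memDC:

#include <windows.h>

/* Cross the bus exactly once per frame: everything composited in system
   memory, then a single BitBlt to the screen. */
void PresentFrame(HWND hwnd, HDC memDC, int width, int height)
{
    HDC screenDC = GetDC(hwnd);

    /* The one sysmem -> vidmem copy; nothing blitted here gets
       overdrawn later, so no bus bandwidth is wasted. */
    BitBlt(screenDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

    ReleaseDC(hwnd, screenDC);
}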