gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1399)
Brought to you by:
vexxed72
From: gl <gl...@nt...> - 2000-08-27 00:30:07
Yes, I did mean 1555. Neurons misfiring...
-- gl

----- Original Message -----
From: "Scott Justin Shumaker" <sjs...@um...>
Sent: Sunday, August 27, 2000 12:50 AM
Subject: Re: [Algorithms] Alpha Channel
[...]
From: Scott J. S. <sjs...@um...> - 2000-08-26 23:50:52
A bit OT, but oh well:

The 5551 format is not available in D3D. Only the 1555 format (alpha in the high bit) is. This is equivalent to the OpenGL 1_5_5_5_REV packed pixel format exposed in OpenGL 1.2.

Direct3D only exposes the REV packed pixel formats (they don't call them reversed, of course), while OpenGL prefers the regular formats but does expose the REV formats in 1.2.

You have to be very careful that you understand the formats correctly. The packed formats look the same on all machine architectures (i.e., 1_5_5_5_REV always has the alpha in the high bit), but OpenGL formats like GL_UNSIGNED_BYTE / GL_RGBA have components in different places depending on the endianness of the host machine.

Even more OT:

Color keying can actually be a win if you have a lot of 2D code interspersed with 3D code. Because of texture limitations, if you have a 2D windowing system that needs to run in a 3D window, you'll probably end up compositing it in system memory, then slicing it into small tiles, each of which is uploaded as a texture. On the other hand, you can probably do a 2D blt to the render target on most D3D hardware for 2D stuff, and this is far faster. (By way of comparison, the software driver at my employer runs the 2D stuff 3x faster than a GeForce2 without doing 2D blits.)

--
Scott Shumaker
sjs...@um...

On Sat, 26 Aug 2000, gl wrote:
[...]
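To make the layout concrete, here is a sketch (mine, not from the thread) of packing 8-bit channels into the 1555 format Scott describes, with alpha in the high bit. The 8-to-5-bit truncation by shifting is an assumption about how you want to quantize:

```cpp
#include <cstdint>

// Pack 8-bit ARGB into a 16-bit 1555 pixel: 1 bit of alpha (high bit),
// then 5 bits each of red, green, blue. Color channels are truncated
// from 8 to 5 bits by dropping the low 3 bits.
uint16_t Pack1555(uint8_t a, uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint16_t>(((a ? 1u : 0u) << 15) |
                                 (uint32_t(r >> 3) << 10) |
                                 (uint32_t(g >> 3) << 5) |
                                 uint32_t(b >> 3));
}
```

For billboard cutouts like the trees in this thread, the single alpha bit is all you need: 1 for opaque texels, 0 for transparent ones.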
From: Joe B. <jb...@av...> - 2000-08-26 22:27:56
There isn't anything our artists can't do with Adobe Photoshop.

> -----Original Message-----
> From: Pai-Hung Chen
> Sent: Saturday, August 26, 2000 3:51 PM
> Subject: Re: [Algorithms] Alpha Channel
[...]
From: Pai-Hung C. <pa...@ac...> - 2000-08-26 21:57:36
Thanks for the help. Actually I am looking for a graphics package that allows me to pre-process (e.g. color-key) the alpha info into the bitmap, so that I can use it directly as an alpha-translucent billboard in my program.

Pai-Hung Chen

----- Original Message -----
From: gl <gl...@nt...>
Sent: Saturday, August 26, 2000 2:48 PM
Subject: Re: [Algorithms] Alpha Channel
[...]
From: gl <gl...@nt...> - 2000-08-26 21:48:58
Take a look at the 'D3DFrame' texture loader code (part of the SDK) - it does exactly that from alpha files.

If you can use 16-bit, choose the 5551 format if available, as 1 bit of alpha is all you need. BTW, you are really describing colour keying, but don't use it, as it's legacy and can cause all kinds of problems when combined with alpha blending (search the archives for details).

-- gl

----- Original Message -----
From: "Pai-Hung Chen" <pa...@ac...>
Sent: Saturday, August 26, 2000 9:08 PM
Subject: [Algorithms] Alpha Channel
[...]
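The preprocessing Pai-Hung asked about can also be done in code rather than in a paint package. A minimal sketch (mine, not the D3DFrame loader), assuming 0xAARRGGBB pixel layout:

```cpp
#include <cstdint>
#include <vector>

// Bake a color key into an alpha channel: pixels whose RGB matches the
// key become fully transparent (alpha 0); all others fully opaque
// (alpha 255). Assumes 0xAARRGGBB packing; adjust masks for other layouts.
void KeyToAlpha(std::vector<uint32_t>& pixels, uint32_t keyRGB)
{
    keyRGB &= 0x00FFFFFFu;
    for (uint32_t& p : pixels)
    {
        const uint32_t rgb = p & 0x00FFFFFFu;   // ignore any existing alpha
        p = rgb | (rgb == keyRGB ? 0x00000000u : 0xFF000000u);
    }
}
```

Done once at load time, this avoids runtime color keying and the alpha-blending problems gl warns about.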
From: Pai-Hung C. <pa...@ac...> - 2000-08-26 20:14:52
Hi,

I want to use billboards in my program but cannot find a good way to create a 32-bit bitmap with an alpha channel of appropriate values. Basically I want to create a bitmap of a tree with leaves in various green colors (with alpha = 255) and all non-green colors transparent (alpha = 0) so that I can use it as the texture for my billboarded trees. It would be nice if I could specify a non-green color C and set the alpha of all the pixels of color C in the bitmap to 0. Is there any program capable of doing that with a 32-bit bitmap with an alpha channel? This may be off-topic and you could send your advice to me off-line.

Thanks in advance,

Pai-Hung Chen
From: Charles B. <cb...@cb...> - 2000-08-26 18:27:55
The Outcast home page has a description of their water technique, as well as a lot of other cool tricks. Unfortunately, their algorithms only work with their software renderer for their voxel engine.

The recent "Game Programming Gems" book has a little article by some ATI guys (Jason Mitchell + some guy) on 3D water effects. Basically, they just wiggle some verts with trig functions, and then do some nice UV mapping for reflection and refraction simulation. The water in Halo has been talked about a lot. It would be nice if someone wrote up a summary of all the tricks for water (I've been meaning to). (For example, it's well known that "particles can be used to simulate the specular flicker", but I've never seen a good write-up of that technique.)

My basic summary of how to do water is to start with reflection, using EMBM or environment mapping or something. Refraction is pretty irrelevant for large bodies of water; you basically can't see into them. Once you've got the water base-textured with the reflection map (with UVs wiggled to simulate waves), you need to lay some fake speculars on. I would do this with a black and white texture. You simply tile it, wiggle the UVs to make it irregular, and lay it down as an additive pass. Then rotate the UVs, wiggle them differently, and lay down another additive pass. The combination of the two passes, both wiggled a bit, is that you break up the regular tiling pattern that afflicts most water. Also, if you shift the UVs of both passes you get a nice irregular wave effect.

Of course, to simulate the height of the water, you just do the old 2D wave effect and make a mesh from it.

A bit rushed and incoherent, but hopefully you get the idea...

At 10:34 AM 8/26/00 -0700, you wrote:
>Does anyone know how the fantastic water effect is done in Outcast?
[...]

-------------------------------------------------------
Charles Bloom cb...@cb... http://www.cbloom.com
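The "old 2D wave effect" Charles refers to is usually implemented with two height buffers ping-ponged each frame. A minimal sketch under that assumption (the damping constant is my choice, not from the post):

```cpp
#include <vector>

// One step of the classic two-buffer water heightfield: each interior
// cell moves toward the average of its four neighbours, the previous
// frame supplies the velocity term, and damping bleeds energy out so
// ripples die down. Border cells are left fixed at zero for simplicity.
void WaveStep(const std::vector<float>& prev, const std::vector<float>& curr,
              std::vector<float>& next, int w, int h, float damping)
{
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x)
        {
            const int i = y * w + x;
            const float avg = (curr[i - 1] + curr[i + 1] +
                               curr[i - w] + curr[i + w]) * 0.5f;
            next[i] = (avg - prev[i]) * damping;
        }
}
```

Each frame, call WaveStep, build the water mesh from `next`, then rotate the three buffers (prev <- curr, curr <- next).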
From: Pai-Hung C. <pa...@ac...> - 2000-08-26 17:40:58
Hi,

Does anyone know how the fantastic water effect is done in Outcast? Is there any good 3D water-effect tutorial on the web? (I know GameDev has one, but only 2D.)

Thank you for the help,

Pai-Hung Chen
From: Kevin L. <lac...@in...> - 2000-08-26 16:33:01
On Sat, 26 Aug 2000, dga wrote:

I'll tackle the first two because they are very simple.

> 1) How to obtain the position of an object, based on its transformation
> matrix? I'm using DirectX, so a vector is transformed like this:

I like to add a point p0 that contains the center coordinates of any given mesh object. Any time the object is translated, just add the translation into p0.

> 2) How do I rotate an object around an arbitrary axis? More exactly, given
> a matrix W that represents the object's current position, how do I build
> the matrix A such that WA gives me the transformation matrix for the object
> that rotates the object at its current position around axis a=(ax,ay,az)

First translate the object to the origin. Then perform the rotation. Then transform it back to where it was.

And for a stab at number four: this will depend on how smart you want the missile to behave. You could, at every refresh, have the missile rotate itself toward the current position of its target. Find the position of the two and use a cross product to find the angle. Then move the missile by some value in that direction. This assumes a very smart missile.

Kevin
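Kevin's translate-rotate-translate recipe, sketched in 2D (my own illustration, not from the thread; the same composition generalizes to an axis-angle rotation in 3D):

```cpp
#include <cmath>

// Rotate point (x, y) around pivot (px, py) by `angle` radians:
// translate the pivot to the origin, rotate, then translate back.
void RotateAboutPivot(float& x, float& y, float px, float py, float angle)
{
    const float c = std::cos(angle), s = std::sin(angle);
    const float tx = x - px, ty = y - py;   // pivot to origin
    x = px + tx * c - ty * s;               // rotate, translate back
    y = py + tx * s + ty * c;
}
```

In matrix form, with D3D's row-vector convention applied left to right, this is the product T(-p) * R * T(p), which is why the order of the three steps matters.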
From: dga <Dio...@mi...> - 2000-08-26 15:33:14
Hi all!

I have some problems with a work I have to make for school... Please, please, please, please (ad nauseam) take the time to read this mail, since I'm pulling hairs already (I spent the last 17 hours trying to get this right, I need to deliver this at the end of next week, and lots of work still needs to be done).

I'm trying to do a space game (a simple one), but I have problems with several of its components that I've been unable to understand, let alone correct.

1) How do I obtain the position of an object, based on its transformation matrix? I'm using DirectX, so a vector is transformed like this: Wv = v'. I think I must use the (4,1), (4,2) and (4,3) positions of the matrix (line, column) to get the x, y and z of the object... Is this correct?

2) How do I rotate an object around an arbitrary axis? More exactly, given a matrix W that represents the object's current position, how do I build the matrix A such that WA gives me the transformation matrix for the object that rotates the object at its current position around axis a = (ax, ay, az)? Currently, I'm using the following code:

    D3DMATRIX mat;
    float c = float(cos(Deg2Rad(ang)));
    float s = float(sin(Deg2Rad(ang)));
    float c1 = (1.0f - c);
    mat._11 = (ax*ax)*c1 + c;
    mat._12 = (ax*ay)*c1 - (az*s);
    mat._13 = (ax*az)*c1 + (ay*s);
    mat._21 = (ay*ax)*c1 + (az*s);
    mat._22 = (ay*ay)*c1 + c;
    mat._23 = (ay*az)*c1 - (ax*s);
    mat._31 = (az*ax)*c1 - (ay*s);
    mat._32 = (az*ay)*c1 + (ax*s);
    mat._33 = (az*az)*c1 + c;
    mat._14 = mat._24 = mat._34 = 0.0f;
    mat._41 = mat._42 = mat._43 = 0.0f;
    mat._44 = 1.0f;

but I think it fails in certain cases... :(

3) How do I project a 3D point into 2D? I need the projection to get my lens flare to work properly... It sometimes works, in simple cases, but when I attach a lens flare to an object and use its coordinates, it sometimes goes off by a long way. I'm using the following code:

    x = GetXPos(); y = GetYPos(); z = GetZPos();
    // v' = Cv, where v = (x,y,z) and C is the camera transformation matrix
    cameraTrans->PreMult(x, y, z, &vx, &vy, &vz);
    if (vz <= 0) return;
    // v = Pv', where v' = (vx,vy,vz) and P is the perspective transformation matrix
    perspTrans->PreMult(vx, vy, vz, &x, &y, &z);
    // resX = resolution in X divided by 2, resY = resolution in Y divided by 2
    x = resX * x; y = resY * y;
    x = resX + x; y = resY - y;

I think this maybe has something to do with the first two questions, since if I set the flare position manually, it gives me the correct position, but if I let the other object "drag" the flare around, it doesn't, although the coordinates for the two objects are reported correctly (but again, it can be connected to the first).

4) How do I make an object follow another (like a missile)? What I'm doing is:

    SCPoint3d rotAxis;
    float x, y, z, norm;
    float dx, dy, dz;
    float sAng, cAng, ang;
    float ox = GetOrientX(), oy = GetOrientY(), oz = GetOrientZ();
    // 1. Calc orientation to axis
    x = target->GetXPos() - GetXPos();
    y = target->GetYPos() - GetYPos();
    z = target->GetZPos() - GetZPos();
    // 2. Get intended orientation projected on the orientation vector
    dx = x * ox; dy = y * oy; dz = z * oz;
    // 3. Get the sine of the angle
    sAng = sqrtf(Sqr(x - dx) + Sqr(y - dy) + Sqr(z - dz));
    norm = sqrtf(Sqr(x) + Sqr(y) + Sqr(z));
    sAng = sAng / norm;
    // 4. Normalize vector
    x = x / norm; y = y / norm; z = z / norm;
    // 5. Get rotation axis
    rotAxis.CrossProduct(ox, oy, oz, x, y, z);
    rotAxis.Normalize();
    // 6. Get the cosine of the angle
    cAng = DotProduct(x, y, z, ox, oy, oz);
    // 7. Get the angle
    ang = GetAngleWithSinAndCos(sAng, cAng);
    if (ang > MISSILE_TURN_SPEED) ang = MISSILE_TURN_SPEED;
    else if (ang < -MISSILE_TURN_SPEED) ang = -MISSILE_TURN_SPEED;
    // 8. Rotate missile
    RotateAroundAxis(ang, rotAxis.x, rotAxis.y, rotAxis.z);

Again, this can be connected to the first two problems, since it relies heavily on the position of the objects and the rotation around an arbitrary axis.

Basically, what I'm doing is: given the current orientation (normalized) (ox, oy, oz) and the intended orientation ((x, y, z) in the above example), I calculate the sine (using standard trigonometry; that's the reason for step 2 above) and the cosine (using the dot product of the normalized intended orientation and the current orientation). Then I take the cross product of those two vectors and normalize it to obtain the rotation axis. Then I calculate the angle from its sine and cosine, and finally I limit the angle (to keep the missile from turning too fast). Then I rotate it...

On paper this works fine (as far as I can visualize it), but when I use the above code, the missile only hits the target when our ship and the target are on the Y=0 plane.

The tracking code is the most important one, since I will use it to drive the AI of the enemies (they'll try to track our ship, all guns blazing, nothing too fancy; the teacher is more interested in the visual aspect of the game).

Thanks in advance

Diogo de Andrade
Dio...@mi...
From: Fredo D. <fr...@gr...> - 2000-08-26 12:59:31
> My biggest problem is finding a good way to extract intensity information
> from the scene. The simplest method is to read the pixels back from the
> frame, calculate the statistics I've mentioned above and scale the
> lightmaps accordingly (which means everything gets a light map... even the
> sky). Of course, reading back the frame buffer into system memory to
> perform the calculation is slow at best.

Another idea I had for game-like applications was to precompute the adaptation level (i.e. the average viewed luminance) by sampling the scene. It can be stored adaptively (basically determining where a huge step will occur). I haven't had time to try, so I relied on reading the frame buffer (but using the log of colors in a first pass to be able to treat the whole dynamic range).

And yes, the SIGGRAPH paper was far from real time, but my paper at the workshop and Scheel's paper at EG are at least interactive (but with additional cost or restrictions).

> The human eye can see 1.5 log units of light at one moment (log(cd/m^2)),
> but can adapt (mainly neurologically) to over 7 log units of light. Of this,
> only one log unit is achieved through the widening of the iris; the rest is
> neurological. To simulate this wide range, you have to simulate the eye in a
> lot more detail.

Well, unfortunately it is a bit more complex. A single neuron of the human eye has a dynamic range of 1 to 40, but since they all have a different adaptation state (different gain, different low-pass filter, etc.), the human eye is able to see a static scene with a high dynamic range. The fact that we really see well only in the tiny visual field of the fovea, and that our eyes are always in motion, does not simplify anything! This is the big limitation of both the SIGGRAPH paper's approach and mine: they assume that the adaptation state is global in the retina.

--
Fredo Durand, MIT-LCS Graphics Group
NE43-255, Cambridge, MA 02139
phone: (617) 253 7223  fax: (617) 253 4640
http://graphics.lcs.mit.edu/~fredo/
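The first-pass log statistic Fredo mentions is usually computed as a log-average (geometric mean) luminance. A sketch of that statistic (my illustration, not code from the thread; the epsilon value is an assumption):

```cpp
#include <cmath>
#include <vector>

// Log-average (geometric mean) luminance of a frame, a common estimate
// of the eye's adaptation level: averaging log-luminance keeps a handful
// of very bright pixels from dominating the result the way an arithmetic
// mean would. delta guards against log(0) on black pixels.
float LogAverageLuminance(const std::vector<float>& lum, float delta = 1e-4f)
{
    double sum = 0.0;
    for (float v : lum)
        sum += std::log(double(delta) + v);
    return static_cast<float>(std::exp(sum / double(lum.size())));
}
```

Note how a frame of {0.01, 100} averages to about 1.0 rather than 50, which is much closer to the perceived adaptation level.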
From: Johan H. <jh...@mw...> - 2000-08-26 09:22:18
Hi

I want to bring the subject back to shadow calculation. I have two algorithms in mind, both of which can be accelerated by 3D hardware, that may be of interest. I need some feedback on this, and maybe it helps some of you. (To Ben Discoe: I will write this up for the VTP website once the discussion is finished if you want. I will probably post this on my website, and you can refer there, or just copy it.)

1. RADIOSITY

The first is a radiosity approach. I am going to assume all of you know what radiosity is and basically how it works. The interesting part is how this very slow algorithm can be adapted to run in 3D hardware. By rendering a 180 degree FOV scene (you should use a non-standard projection system; ask me if you want the details) from every point in the landscape where you want to calculate the shadow, you get the picture of what is happening there. By locking the back buffer and reading the values back, the total amount of light that falls on this point can be calculated. This is obviously true for any computer scene, not just landscapes. By rendering just 32x32 or 64x64 pixels, this can be done at a hundred calculations per second easily.

Another approximation for landscapes would be to render a very small FOV (a fraction of a degree) 1x1 pixel scene in the direction of the sun, and determine whether the sun is visible or not.

A bonus of both of these approaches is that shadows of clouds, and of all static objects, appear automatically and correctly.

2. Z-BUFFER SHADOWS

This assumes that you know how to generate stencil buffer shadows in real time (please ask if you don't), and is probably the faster method of the two (maybe real-time capable with a bit of caching).

Start off by rendering the area that you want to calculate in a top-down view into a back buffer with the Z buffer enabled. The color should be white, and every pixel in the back buffer should represent a height grid point. Now you have a back buffer that is completely white, with a Z buffer holding the height of the corresponding terrain.

Next, take the grid of heights that may affect this area. (Note: this may be the same grid as above, or a bigger area if you are only doing a partial update.) From each of those points, render a black quadrilateral into the back buffer with the following properties:

- The quadrilateral is 1 pixel wide.
- In plan view, its direction is the sun direction.
- Its slope is set at the sun angle.
- It starts from the exact position and height of the point that generates it.

As long as the Z values of this quadrilateral are bigger than the Z values in your Z buffer, it will get drawn, and that area is in shadow. Thus your shadow pixels will be black. Now lock the buffer and read out your shadow map.

The drawback to the radiosity approach is that you will have to add cloud and object shadows afterwards in some manner. Also note that you still have to do the dot product lighting for those pixels in the light. One solution may be to render the original height grid with the dot product lighting, rather than white. I can discuss later why that won't work.

POSSIBLE OPTIMIZATIONS FOR REAL-TIME OPERATION

If you are able to cache these shadow pictures, there is a possible optimization that can get this to run in real time. Since you don't need the full shadow resolution over the whole area that you are drawing (large areas are too far away to see it), you could start off on a coarse grid and calculate the shadows for a tile by taking into account the heights on that tile and the three possible neighbours in the direction of the sun (note that this limits the distance that shadows can be thrown). Save this low-res shadow. When you get close to the area, blt this shadow into a bigger surface, expanding the pixels and blurring the image. This blurring is correct since, through diffraction and other processes, mountains do not cast sharp shadows at a distance. Now take the higher resolution height grid (but only for a smaller area), and add any height detail shadows that may arise from it.

You can probably also optimize by first doing a dot product of the sun vector and the normal at each point in the landscape. Only those which are very close to zero are at edges and will throw shadows. You don't have to render a shadow quad for any of the other points.

I hope this helps a lot of you, and I hope to get some feedback on this, as I have to choose a method of shadow calculation for my own code soon.

Johan Hammes
Nineteenseventyfive
http://mzone.mweb.co.za/residents/jhammes/main.htm
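Johan's slanted-quad trick is essentially a per-cell ray march toward the sun, done on the GPU. A CPU sketch of the same test (my own formulation, not his exact method; sunStepZ is the height the ray gains per grid step, i.e. the tangent of the sun elevation angle):

```cpp
#include <vector>

// Heightfield shadow map: from each grid cell, walk along the sun
// direction one cell at a time; if the ray leaving the cell (rising by
// sunStepZ per step) dips below the terrain, the cell is shadowed.
// (sunDX, sunDY) is the per-step grid offset toward the sun.
std::vector<bool> TerrainShadow(const std::vector<float>& height,
                                int w, int h,
                                int sunDX, int sunDY, float sunStepZ)
{
    std::vector<bool> shadow(height.size(), false);
    if (sunDX == 0 && sunDY == 0)
        return shadow;  // sun at zenith: no cell occludes another
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            float rayZ = height[y * w + x];
            for (int sx = x + sunDX, sy = y + sunDY;
                 sx >= 0 && sx < w && sy >= 0 && sy < h;
                 sx += sunDX, sy += sunDY)
            {
                rayZ += sunStepZ;
                if (height[sy * w + sx] > rayZ)
                {
                    shadow[y * w + x] = true;
                    break;
                }
            }
        }
    return shadow;
}
```

This is O(cells x march length) on the CPU; Johan's version amortizes the march by letting the Z buffer do the comparison for every covered pixel at once.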
From: Akbar A. <sye...@ea...> - 2000-08-26 08:36:10
Something in that book might bite you later on. Before you use something from it, make sure you test it against this:
http://www.acm.org/pubs/tog/GraphicsGems/Errata.GraphicsGems

Here's the RayBox code if anyone is currently working on it ;) it has the fix:
http://www.acm.org/tog/GraphicsGems/gems/RayBox.c

Entire listing from the series, in a listed fashion:
http://www.acm.org/tog/GraphicsGems/gems.html

peace,
akbar A.

"We want technology for the sake of the story, not for its own sake. When you look back, say 10 years from now, current technology will seem quaint" - Pixar's Edwin Catmull.

-----Original Message-----
From: Klaus Hartmann
Sent: Saturday, August 26, 2000 1:26 AM
Subject: [Algorithms] Problem with Woo's "Fast Ray-Box Intersection"

Hi all,

I have a problem that I don't understand. I have the following line of source in my implementation of Woo's "Fast Ray-Box Intersection":

    hitpoint[i] = rayOrg[i] + maxT[whichPlane] * rayDir[i];

hitpoint, rayOrg, maxT, and rayDir are all arrays with three floating-point values (indices 0 to 2). Now here are some values:

    i = 2 (which is a valid index)
    whichPlane = 0 (also a valid index)
    rayOrg[i] = -2
    maxT[whichPlane] = 2.82843
    rayDir[i] = 0.707107

These are all very valid values, and the result should now be:

    hitpoint[i] = -2 + 2.82843 * 0.707107 = 2.65201e-06

However, the debugger says that hitpoint[i] = -6.8457097e-008.

Okay, so I thought that maybe this is a precision problem. I tried doubles, but I got the same wrong value. Then I thought that there might be something wrong with hitpoint[i], so I replaced it with a local double variable, but the result is still wrong.

Why? I cannot believe that 2.65201e-06 is too small to be represented as a double-precision floating-point value. Can someone explain this to me? Or do you see a bug in the above C line? If you don't believe me, then I'm very willing to upload the code (very small), and you can step through it and see for yourselves.

Niki
From: <ae...@my...> - 2000-08-26 08:28:52
On Sat, 26 Aug 2000 09:00:34, Klaus Hartmann wrote:
> Never mind... The VC++ 6 debugger just displayed a completely wrong
> floating-point value. A simple printf() displayed the correct one. That
> helped me to make the code work... finally. Just had to subtract/add a
> small epsilon from/to the AABB's min/max points.

That's quite odd... I was having a similar precision problem with my terrain system a few days ago. I had just converted the whole thing over to double precision to allow me to model planets with micrometer resolution, and yet I was still getting single-precision accuracy. I stepped through it in the debugger, and saw the extra bits just being chopped off. It turns out that the internal precision on my CPU was somehow being set to 23 bits, and a call to _control87 to reset the precision to 64 bits fixed the problem.

-Jake Cannell
From: Klaus H. <k_h...@os...> - 2000-08-26 07:05:30
|
Never mind... The VC++ 6 Debugger just displayed a completely wrong floating-point value. A simple printf() displayed the correct one. That helped me to make the code work... finally. Just had to subtract/add a small epsilon from/to the AABB's min/max points. Niki ----- Original Message ----- From: Klaus Hartmann <k_h...@os...> To: <gda...@li...> Sent: Saturday, August 26, 2000 8:25 AM Subject: [Algorithms] Problem with Woo's "Fast Ray-Box Intersection" > Hi all, > > I have a problem that I don't understand. I have the following line of > source in my implementation of Woo's "Fast Ray-Box Intersection": > > hitpoint[i] = rayOrg[i] + maxT[whichPlane] * rayDir[i]; > > hitpoint, rayOrg, maxT, and rayDir are all arrays with three floating-point > values (indices 0 to 2). Now here are some values: > > i = 2 (which is a valid index) > whichPlane = 0 (also a valid index) > rayOrg[i] = -2 > maxT[whichPlane] = 2.82843 > rayDir[i] = 0.707107 > > These are all very valid values, and the result should now be: > hitpoint[i] = -2 + 2.82843 * 0.707107 = 2.65201e-06 > > However, the debugger says that > hitpoint[i] = -6.8457097e-008 > > Okay, so I thought that maybe this is a precision problem. I tried doubles, > but I got the same wrong value. Then I thought that there might be > something wrong with hitpoint[i], so I replaced that with a local double > variable, but the result is still wrong. > > Why? I cannot believe that 2.65201e-06 is too small to be represented as a > double-precision floating-point value. Can someone explain this to me? Or do > you see a bug in the above C line? If you don't believe me, then I'm more > than willing to upload the code (very small), and you can step through it, > and see for yourselves. > > Niki |
From: Klaus H. <k_h...@os...> - 2000-08-26 06:30:05
|
Hi all, I have a problem that I don't understand. I have the following line of source in my implementation of Woo's "Fast Ray-Box Intersection": hitpoint[i] = rayOrg[i] + maxT[whichPlane] * rayDir[i]; hitpoint, rayOrg, maxT, and rayDir are all arrays with three floating-point values (indices 0 to 2). Now here are some values: i = 2 (which is a valid index) whichPlane = 0 (also a valid index) rayOrg[i] = -2 maxT[whichPlane] = 2.82843 rayDir[i] = 0.707107 These are all very valid values, and the result should now be: hitpoint[i] = -2 + 2.82843 * 0.707107 = 2.65201e-06 However, the debugger says that hitpoint[i] = -6.8457097e-008 Okay, so I thought that maybe this is a precision problem. I tried doubles, but I got the same wrong value. Then I thought that there might be something wrong with hitpoint[i], so I replaced that with a local double variable, but the result is still wrong. Why? I cannot believe that 2.65201e-06 is too small to be represented as a double-precision floating-point value. Can someone explain this to me? Or do you see a bug in the above C line? If you don't believe me, then I'm more than willing to upload the code (very small), and you can step through it, and see for yourselves. Niki |
From: Johan H. <jh...@mw...> - 2000-08-26 06:17:17
|
Hi Everyone My name is Johan Hammes, and I basically joined the list because someone told me that there is a shadow discussion going on here. More on that later. I started my own company a year ago that develops real-time algorithms, and one of my products does exactly what you are discussing currently, except that by just adjusting gamma ramps, you can never simulate the range of light that the human eye is capable of seeing. On the SIGGRAPH article: my stuff is based on that, but they did a non-real-time implementation, unless I missed an article. The human eye can see 1.5 log units of light at one moment (log (cd/m^2)), but can adapt (mainly neurologically) to over 7 log units of light. Of this only one log unit is achieved through the widening of the iris; the rest is neurological. To simulate this wide range, you have to simulate the eye in a lot more detail. One of the nice things of an algorithm like this is that it can also be used to simulate other adapting sensors like video cameras. Have a look at my website for screenshots of how this works. I know you probably dislike advertisements, but all of this is for sale to incorporate in commercial engines. For those of you who are just enthusiasts, if you pressure me enough, I will release it as open source for non-commercial use. Johan Hammes Nineteenseventyfive http://mzone.mweb.co.za/residents/jhammes/main.htm |
From: Kamil B. <no...@te...> - 2000-08-26 05:53:23
|
Hello Akbar, Friday, August 11, 2000, 7:22:05 PM, you wrote: >>I have Kajiya's one but only as a paper version. >>Want some ugly scanned pics? :) > i suppose, is it worth it? I'm also interested in them :) -- Best regards, Kamil Burzynski |
From: Maik G. <mai...@gm...> - 2000-08-26 01:46:38
|
didn't you say it's off topic to speak about that opengl stuff : ) but anyway .... i was just asking if there's a "common way ..." to get the full performance boost out of the LockArrayEXT optimization .... even the latest nvidia drivers (at least for my tnt2) don't optimize anything if your vertex-array size is over a certain value ... --> you've got to split your vertex arrays into little pieces (4096 vectors in my case) or you WON'T get the performance boost of ... 25% with a vertex-array size over that it's kinda useless to use this function on a TNT2 ... that's the problem ... but as some people said ... there is no other way to get the full performance boost than splitting the one huge vertex array into a lot of little ones ... so i guess this topic is finished =) -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Brian Sharp Sent: Friday, 25 August 2000 05:58 To: gda...@li... Subject: Re: [Algorithms] OpenGL LockArrayEXT ... workaround you wrote: > > are you sure 4096 is the maximum size for optimized compiled vertex arrays? > > I just thought it was 1024, since q3 uses 1K vertex buffers (according to >Brian Sharp's conference). My conference? Any information I have on Q3A is pretty much hearsay from Brian Hook. Or maybe you're referring to Brian Hook, not Brian Sharp? We're more different than our last names: I never worked for id. ;-) At 11:30 AM 8/25/00 +0800, you wrote: >It depends on the accelerator generally. Remember Brian worked on the 3dfx >ogl drivers, which may mean his 1k number was based on those, rather than >other drivers. I think most of the T&L cards are 4/8k. So first of all, I don't see any actual defined limit (in the EXT_compiled_vertex_array spec) to the number of vertices you can lock. The original poster should have no problem locking 100,000 vertices. 
If it's an issue of what implementations optimize for, that's kind of lame to write to that, especially because that's totally volatile and will probably change in most drivers before you ship. In the existing 3dfx driver, at least, it'll optimize whatever you throw at it. Sure, that's transformation happening on the CPU, but it guarantees that you don't end up retransforming verts. The downside to that is the hypothetical case where someone locks a 30 million vert array, and the driver allocates a ton of memory. It can't be any more than the app already has allocated, but it's nonetheless potential for a lot of memory allocation. So for future work it's being done a different way, but it won't "not optimize" the arrays if they're over some size. I'm not really sure what the motivation would be for a driver to not optimize arrays over a certain size. I mean, they could only pretransform a set number and treat any more as regular verts to be transformed. Hmm. Especially if it's a hardware T&L implementation, where you'd obviously want to have a tiered caching system in the driver already, capable of locked arrays on the host bigger than your hardware cache. I'm confused. So, what was the original poster asking about? Why couldn't he lock all his verts? A driver crash? A slowdown? A spec misreading? -Brian ========== GLSetup, Inc. |
From: Tom H. <to...@3d...> - 2000-08-25 23:03:02
|
At 02:52 PM 8/25/2000, you wrote: >Bottomline: the problem is exactly the same for cinema and video, so if >you're happy with the "sun impression" that you get on films, there is >no physical limitation with your CRT display that will prevent you from >getting the same with 3D games. Very true. I for one don't really want a CRT capable of emitting energy ranges equal to real life. I mean, staring into the sun hurts :) I did a lot of work on dynamic range management for digital cameras a little while back. The techniques I used were based on pixel statistics (histograms) of the luminance values of the image in a feedback system (calc statistics for the current frame, adjust the next frame's exposure time and digital gains to compensate). I came up with a pretty sweet system that would converge on an answer in very few frames, and was very resistant to oscillations (even with the optimizations required to be workable in cheap hardware). Adjusting gamma values will not give you the desired result however. A couple of reasons: 1. You'll get banding as you stretch or squish the range by any significant amount. Applying any gamma value other than 1.0 (identity) throws away bits of color resolution. 2. Going from inside to outside is a HUGE range shift ... on the order of ~200 lux for a normally lit office inside to ~5000 lux for outside at noon in the summer on a clear day. Not something that will scale well with just a gamma change. My biggest problem is finding a good way to extract intensity information from the scene. The simplest method is to read the pixels back from the frame, calculate the statistics I've mentioned above and scale the lightmaps accordingly (which means everything gets a light map ... even the sky). Of course, reading back the frame buffer into system memory to perform the calculation is slow at best. Another possibility would be to attach intensity information to surfaces and accumulate the statistics on those as they're added to the scene. 
Unfortunately this doesn't give you very good information about occlusion. Another idea I just had would be to create low-res imposters for the items in the scene (people and cars would be cubes, etc) and render these with software into an intensity image with proper occlusion tests (zbuffer ... span buffer .. whatever). The result could then be used to get a coarse approximation of the scene intensity. With something like a span buffer you would never need to generate the image itself as you could calculate pixel statistics from the spans and still have proper occlusion. Tom |
From: Amit B. <am...@de...> - 2000-08-25 22:52:34
|
----- Original Message ----- From: "Bass, Garrett T." <gt...@ut...> To: <gda...@li...> Sent: Friday, August 25, 2000 2:44 PM Subject: [Algorithms] Indoor/Outdoor lighting > Along these lines, I've been thinking that it may look very good to > change the lighting depending on the viewer's location. What I'm suggesting > is darkening the lighting of indoor areas the viewer can see into from > outdoors, and brightening the lighting of outdoor areas the viewer can see > from indoors. I implemented a similar idea. When you looked straight at the sun I lowered the gamma, which made everything darker while the sun still stayed quite bright. The effect was great! I actually squinted my eyes when looking at the sun =). It worked for most cards I tried this on (using the win32 SetGammaRamp function). |
From: Mark D. <duc...@ll...> - 2000-08-25 22:24:29
|
Kevin, It depends on how much effort you put in on each. ROAM is easier to get going for height maps, and is theoretically faster when you completely optimize the mesh each frame. Hoppe describes an idea of partially optimizing in sweeps over the output mesh, so that a complete optimization happens over the course of say ten frames. ROAM automagically takes advantage of frame-to-frame coherence to get the best quality in your time budget, and takes time proportional to the number of LOD changes. ROAM thus gives you the best possible quality in minimum theoretical time, whereas VDPM can cause uncontrolled errors (example: you suddenly put your nose right on top of some interesting geometry and it takes a full second to pop to full detail). I'm happy to give lots of implementation details/suggestions to you or anyone else who wants to give ROAM a try. There are a number of other sources of information on various approaches to ROAM-like systems out there (Alex Pfaffe http://ddg.sourceforge.net/, Seumas McNally http://www.longbowdigitalarts.com/seumas/ for example; Ben Discoe has a good page http://vterrain.org/LOD/published.html). Kevin Lackey wrote: > Which algorithm has the smaller O: VDPM or ROAM? I am looking at fractally > creating a height map for an entire planet and would like to know which of > the two common terrain techniques is less computationally intense. > > Kevin > > -- |
From: Charles B. <cb...@cb...> - 2000-08-25 22:09:19
|
Some guys did this in Siggraph 2000. I didn't read the paper, but I saw it in the video. It looked pretty good, definitely better than nothing. At 04:44 PM 8/25/2000 -0500, you wrote: >>I also think that physiology simulation might be just as >>essential for the realism you strive for: if the player >>moves from bright outdoors to dimly lit indoors, do you >>gamma correct to account for the adaptation period? Do >>you change gamma to create pale outdoors when the Sun >>comes out behind a cloud, or blend to change colors? >>Simulating the eye might be more effective than >>simulating the world. > > Along these lines, I've been thinking that it may look very good to >change the lighting depending on the viewer's location. What I'm suggesting >is darkening the lighting of indoor areas the viewer can see into from >outdoors, and brightening the lighting of outdoor areas the viewer can see >from indoors. > > As the viewer crosses a boundary from outdoor to indoor, the indoor >and outdoor lighting can be brightened until the indoor lighting is at the >"normal" light level and vice-versa. I'm not sure whether this can be >achieved through gamma correction. Are configurable gamma ramps supported >on most major 3D accelerators? I know I can change the gamma on my nVidia >cards, but my older 2D card (an SiS 3D decelerator) didn't respond. > >Regards, >Garett Bass >gt...@ut... -------------------------------------- Charles Bloom www.cbloom.com |
From: Fredo D. <fr...@gr...> - 2000-08-25 21:52:24
|
Right, displaying the high dynamic range of real conditions on a CRT display (at best limited to a 1-100 contrast) is a hard problem. See the thesis by Tumblin http://www.cc.gatech.edu/gvu/people/jack.tumblin/ for very interesting discussions about the problem. For dynamic conditions, one way is to try to simulate human visual adaptation to attempt to provide a visual impression as close as possible. There is a paper at siggraph this year on the subject http://www.graphics.cornell.edu/pubs/2000/PTY+00.html and I have done something on the subject too: http://graphics.lcs.mit.edu/~fredo/PUBLI/EGWR2000/index.htm (a much more comprehensive version should be available soon at the same address) But right, flares are crucial and very efficient at increasing the subjective dynamic range of images. Bottomline: the problem is exactly the same for cinema and video, so if you're happy with the "sun impression" that you get on films, there is no physical limitation with your CRT display that will prevent you from getting the same with 3D games. Fredo -- Fredo Durand, MIT-LCS Graphics Group NE43-255, Cambridge, MA 02139 phone : (617) 253 7223 fax : (617) 253 4640 http://graphics.lcs.mit.edu/~fredo/ |
From: Bass, G. T. <gt...@ut...> - 2000-08-25 21:45:15
|
>I also think that physiology simulation might be just as >essential for the realism you strive for: if the player >moves from bright outdoors to dimly lit indoors, do you >gamma correct to account for the adaptation period? Do >you change gamma to create pale outdoors when the Sun >comes out behind a cloud, or blend to change colors? >Simulating the eye might be more effective than >simulating the world. Along these lines, I've been thinking that it may look very good to change the lighting depending on the viewer's location. What I'm suggesting is darkening the lighting of indoor areas the viewer can see into from outdoors, and brightening the lighting of outdoor areas the viewer can see from indoors. As the viewer crosses a boundary from outdoor to indoor, the indoor and outdoor lighting can be brightened until the indoor lighting is at the "normal" light level and vice-versa. I'm not sure whether this can be achieved through gamma correction. Are configurable gamma ramps supported on most major 3D accelerators? I know I can change the gamma on my nVidia cards, but my older 2D card (an SiS 3D decelerator) didn't respond. Regards, Garett Bass gt...@ut... |