gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1433)
Brought to you by:
vexxed72
From: Angel P. <ju...@bi...> - 2000-07-22 18:38:23

> I've been working on my viewport culling and I just wanted to test
> something out. I was wondering if anyone could tell me how to render a
> portion of a plane around a particular point using the
> Normal + distance version of a plane...

This code will create a huge quad lying on a plane. In your case, you can
create such a quad for each frustum plane and clip it to the other frustum
planes.

    // Note: this assumes the plane is stored as dot(Normal, X) = D.
    // Pick the axis the plane faces most, so the division below is safe.
    int maxaxis = 0, a2, a3;
    for( int i = 1; i < 3; i++ )
        if( fabs(P.Normal[i]) > fabs(P.Normal[maxaxis]) )
            maxaxis = i;
    switch( maxaxis )
    {
        case 0: a2 = 1; a3 = 2; break;
        case 1: a2 = 0; a3 = 2; break;
        case 2: a2 = 1; a3 = 0; break;
    }
    // Spread the four corners +/-LEVEL_BOX_SIZE along the two minor axes...
    Verts[0][a2] = Verts[0][a3] = Verts[1][a2] = Verts[3][a3] =  LEVEL_BOX_SIZE;
    Verts[1][a3] = Verts[2][a2] = Verts[2][a3] = Verts[3][a2] = -LEVEL_BOX_SIZE;
    // ...and solve the plane equation for the major-axis coordinate.
    Verts[0][maxaxis] = ( P.D - P.Normal[a2]*LEVEL_BOX_SIZE - P.Normal[a3]*LEVEL_BOX_SIZE ) / P.Normal[maxaxis];
    Verts[1][maxaxis] = ( P.D - P.Normal[a2]*LEVEL_BOX_SIZE + P.Normal[a3]*LEVEL_BOX_SIZE ) / P.Normal[maxaxis];
    Verts[2][maxaxis] = ( P.D + P.Normal[a2]*LEVEL_BOX_SIZE + P.Normal[a3]*LEVEL_BOX_SIZE ) / P.Normal[maxaxis];
    Verts[3][maxaxis] = ( P.D + P.Normal[a2]*LEVEL_BOX_SIZE - P.Normal[a3]*LEVEL_BOX_SIZE ) / P.Normal[maxaxis];
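Angel's note leaves the second step ("clip it to the other frustum planes")
to the reader. A minimal sketch of that step, using a standard
Sutherland-Hodgman pass; the `Vec3`/`Plane` types and the
`dot(n, p) >= d` "inside" convention are assumptions for illustration, not
from the original post:

```cpp
#include <vector>
#include <cstddef>

struct Vec3  { double x, y, z; };
struct Plane { Vec3 n; double d; };   // points with dot(n, p) >= d are kept

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sutherland-Hodgman: clip a convex polygon against a single plane.
std::vector<Vec3> clipToPlane(const std::vector<Vec3>& poly, const Plane& pl) {
    std::vector<Vec3> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Vec3& a = poly[i];
        const Vec3& b = poly[(i + 1) % poly.size()];
        double da = dot(pl.n, a) - pl.d;   // signed distance of each endpoint
        double db = dot(pl.n, b) - pl.d;
        if (da >= 0) out.push_back(a);     // a is inside: keep it
        if ((da >= 0) != (db >= 0)) {      // edge crosses the plane
            double t = da / (da - db);     // intersection parameter along a->b
            out.push_back({ a.x + t * (b.x - a.x),
                            a.y + t * (b.y - a.y),
                            a.z + t * (b.z - a.z) });
        }
    }
    return out;
}
```

Running the big quad through this once per remaining frustum plane leaves
exactly the visible portion of the plane.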
From: Mark W. <mwa...@to...> - 2000-07-22 09:06:20

Yep, hence the cap on the maximum number. You could consider these overlay
textures "caches" of the bullet holes within an area, so instead of
rendering dozens or more bullet holes each frame, you render the one
overlay texture. Can't be any more expensive than dark mapping, can it? :)

Mark

> Using overlay textures would require you to use a ton of unique
> textures... can become very unreasonable quickly.
>
> Gary
>
> -----Original Message-----
> From: Mark Wayland [mailto:mwa...@to...]
> Sent: Thursday, July 20, 2000 9:55 PM
> To: gda...@li...
> Subject: Re: [Algorithms] Bullets on walls
>
> An extension to Jaimi's method may be to render the bullet holes to an
> overlay texture (of which you would have a maximum number) which is only
> rendered *once* on top of the wall. With complex texture mapping this is
> obviously not possible, but perhaps for simple quads it would be
> acceptable?
>
> Perhaps some form of scalable solution - like use Jaimi's method for a
> small number and when a threshold is reached, combine them into the
> single texture as I've mentioned and repeat ... ??
>
> Just my 2c worth ...
>
> Mark
>
> > In d3d, this is the way I do it:
> >
> > Create decals with a one pixel transparent border.
> > Then, set the texture address mode to "Clamp".
> > Calculate new UV coordinates for your decal based on the wall, instead
> > of the Decal - then redraw the entire wall with your decal texture and
> > new UV coordinates.
> > This way you do not have to clip at all. I would assume you could
> > do the same in OpenGL.
> >
> > Jaimi

_______________________________________________
GDAlgorithms-list mailing list
GDA...@li...
http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
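Jaimi's D3D trick quoted above (a one-pixel transparent border, clamp
addressing, then re-deriving the *wall's* UVs from the hit point) can be
sketched as below. The function and parameter names are made up here, and
the orthonormal tangent/bitangent inputs are an assumption; the original
post gives no code:

```cpp
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

// For each wall vertex, compute the UV that places a decal of half-size
// `radius` centred on `hit`. `tangent` and `bitangent` form an orthonormal
// basis in the wall's plane. With clamp addressing and the transparent
// border, everything outside [0,1] samples transparency, so the decal
// never needs geometric clipping.
Vec2 decalUV(const Vec3& vertex, const Vec3& hit,
             const Vec3& tangent, const Vec3& bitangent, float radius) {
    Vec3 rel = sub(vertex, hit);
    return { dot(rel, tangent)   / (2.0f * radius) + 0.5f,
             dot(rel, bitangent) / (2.0f * radius) + 0.5f };
}
```

The wall is then redrawn once with the decal texture and these UVs, which
is what makes the overlay-texture "cache" idea above cheap to refresh.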
From: Steve W. <Ste...@im...> - 2000-07-22 00:26:00

Ignore my last post...cross-list bleed.

> -----Original Message-----
> From: Steve Wood
> Sent: Friday, July 21, 2000 2:58 PM
> To: 'gda...@li...'
> Subject: RE: [Algorithms] Rendering (a portion of) a Plane
>
> Oh...then set up a culling rectangle that is smaller than your viewport.
> Where did you see the extraction of clipping planes so you can see
> around them? What you saw may not be used in the way you want it to be
> implemented...but there is always a first.
>
> R&R
>
> > From: Jeremy Bake [mailto:Jer...@in...]
> >
> > Actually what I'm more after is all these frustum culling routines
> > I've seen lately that extract the clipping planes from the modelview
> > and projection matrices. I'd like to be able to extract these, draw
> > them (at least a portion, since a plane goes to infinity) and move my
> > eye position elsewhere to test if things really are getting culled
> > outside. (I learn better when I can observe what's happening
> > visually; dunno, I just have a hard time trusting my code... I'd like
> > to know I'm going in the right direction before getting in too deep.)
> > Even an example of how to draw a few points on the plane so I can see
> > it visually would be great... is that a better description of what
> > I'm going for?
From: Steve W. <Ste...@im...> - 2000-07-22 00:24:35

I'm sorry if this doesn't make sense today...being a programmer from MSDOS
days, but when I needed a timer I looked at the Real Time Clock (RTC)
memory addresses. I don't remember the exact memory locations, but it's in
the same place on all PCs, and is the most stable and detailed (.1
millisecond ticks). Is there some security reason that we can't access it
now? (Forgive me for not being able to test it, but since Microsoft
removed support for MASM I haven't been able to code anything with
assembler.) It just doesn't make any sense that "reading" the RTC would be
sooooo buggy.

R&R

> -----Original Message-----
> From: Steve Wood
> Sent: Friday, July 21, 2000 2:58 PM
> To: 'gda...@li...'
> Subject: RE: [Algorithms] Rendering (a portion of) a Plane
>
> Oh...then set up a culling rectangle that is smaller than your viewport.
> Where did you see the extraction of clipping planes so you can see
> around them? What you saw may not be used in the way you want it to be
> implemented...but there is always a first.
>
> R&R
>
> > From: Jeremy Bake [mailto:Jer...@in...]
> >
> > Actually what I'm more after is all these frustum culling routines
> > I've seen lately that extract the clipping planes from the modelview
> > and projection matrices. I'd like to be able to extract these, draw
> > them (at least a portion, since a plane goes to infinity) and move my
> > eye position elsewhere to test if things really are getting culled
> > outside. (I learn better when I can observe what's happening
> > visually; dunno, I just have a hard time trusting my code... I'd like
> > to know I'm going in the right direction before getting in too deep.)
> > Even an example of how to draw a few points on the plane so I can see
> > it visually would be great... is that a better description of what
> > I'm going for?
From: Bernd K. <bk...@lo...> - 2000-07-21 22:18:33

Eric Haines writes:
> I look forward to others' suggestions,

http://graphics.lcs.mit.edu/~seth/pubs/pubs.html

The 1996 and before references on Visibility and such, for portal stabbing
algorithms etc.

b.
From: Steve W. <Ste...@im...> - 2000-07-21 22:03:32

Oh...then set up a culling rectangle that is smaller than your viewport.
Where did you see the extraction of clipping planes so you can see around
them? What you saw may not be used in the way you want it to be
implemented...but there is always a first.

R&R

> From: Jeremy Bake [mailto:Jer...@in...]
>
> Actually what I'm more after is all these frustum culling routines I've
> seen lately that extract the clipping planes from the modelview and
> projection matrices. I'd like to be able to extract these, draw them
> (at least a portion, since a plane goes to infinity) and move my eye
> position elsewhere to test if things really are getting culled outside.
> (I learn better when I can observe what's happening visually; dunno, I
> just have a hard time trusting my code... I'd like to know I'm going in
> the right direction before getting in too deep.) Even an example of how
> to draw a few points on the plane so I can see it visually would be
> great... is that a better description of what I'm going for?
From: Steve W. <Ste...@im...> - 2000-07-21 20:04:12

I'm hoping you caught that the equation I gave:

    x^2 + y^2 = distance

should read:

    x^2 + y^2 = distance^2

:-<>

R&R

> -----Original Message-----
> From: Anceschi Mauro [mailto:anc...@li...]
> Sent: Friday, July 21, 2000 12:42 PM
> To: gda...@li...
> Subject: Re: [Algorithms] Rendering (a portion of) a Plane
>
> ----- Original Message -----
> From: "Steve Wood" <Ste...@im...>
> To: <gda...@li...>
> Sent: Friday, July 21, 2000 9:25 PM
> Subject: RE: [Algorithms] Rendering (a portion of) a Plane
>
> > Hmm,
> >
> > Your equation:
> >
> >     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
> >
> > is defining 3D space as an infinite number of parallel planes with
> > the same Normal as in the equation:
> >
> >     Normal[0] * x + Normal[1] * y + Normal[2] * z = 0
> >
> > using an index of distance to determine which parallel plane you are
> > referencing:
> >
> >     Normal[0] * x + Normal[1] * y + Normal[2] * z = -distance
> >
> > Correct me if I'm wrong, but it sounds like you want to find points
> > ON the plane...if you are trying to describe a circle on the plane
> > then you want all points (x,y,z) which lie on the plane and which are
> > the same distance from your point of origin (x',y',z'). If you like
> > matrices and transformations then perhaps use the 2D (x,y) coordinate
> > plane to find your points x^2 + y^2 = distance, then translate and
> > rotate them onto your plane.
> >
> > R&R
> >
> > > -----Original Message-----
> > > From: Jeremy Bake [mailto:Jer...@in...]
> > > Sent: Friday, July 21, 2000 6:57 AM
> > > To: 'gda...@li...'
> > > Subject: [Algorithms] Rendering (a portion of) a Plane
> > >
> > > I've been working on my viewport culling and I just wanted to test
> > > something out. I was wondering if anyone could tell me how to
> > > render a portion of a plane around a particular point using the
> > > Normal + distance version of a plane. I think it relates to
> > >
> > >     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
> > >
> > > but I can't seem to get my head around how I could draw this
> > > function over a specific range... any help would be greatly
> > > appreciated.
> > >
> > > Jeremy Bake
> > > RtroActiv
>
> thanks!!!!!!!
>
> mauro
From: <lin...@cc...> - 2000-07-21 20:00:31

[snip]
> > Lindstrom-Koller does not exhibit this behavior. That algorithm will
> > never instantiate a vertex unless the actual projection of the error
> > value, from the exact position of the vertex, exceeds the error
> > threshold.

This is true, with the exception that a vertex may be included because it
depends on some other vertex, even though its error is smaller than the
threshold.

> > They do pay a CPU cost for this -- it's what their whole region of
> > uncertainty thing is all about. If you think of a Lindstrom block as
> > being analogous to a ROAM wedge, then you will see that where
> > Lindstrom-Koller has a mechanism for the uncertainty region (and
> > delta-max and delta-min), ROAM just assumes that any wedge exceeding
> > delta-min must be subdivided.
>
> ROAM is optimal and correct. It is possible to spend extra work to get
> less over-conservative, but this isn't what Lindstrom et al do for
> their final bottom-up pass as far as I recall...but my recollections on
> that paper are fuzzy now (Peter?).

As are mine... :-)

Seriously, our algorithm is conservative only when choosing what
resolution block to use for a region, which is then coarsened.
Conceptually, however, it works exactly as though each vertex were visited
bottom-up (from the very highest resolution height field) and tested for
inclusion. Thus it always produces the "optimal" triangulation with
respect to the error metric, and is never over-conservative.

________________________________________________________________________
Peter Lindstrom        Graphics, Visualization, & Usability Center
PhD Student            Georgia Institute of Technology
lin...@cc...           http://www.cc.gatech.edu/~lindstro
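The per-vertex inclusion test Peter describes can be sketched as a
screen-space error threshold. The isotropic projection used here is a
simplification for illustration, not the exact bound from the
Lindstrom-Koller paper, and all names are made up:

```cpp
#include <cmath>

// A vertex with object-space (vertical) error `delta` at distance `dist`
// from the eye is activated when its projected error exceeds `tau` pixels.
// `screenHeight` (pixels) and vertical field of view `fovY` (radians)
// convert object-space lengths to pixels at that distance.
bool vertexActive(double delta, double dist,
                  double fovY, double screenHeight, double tau) {
    // pixels per world unit at distance `dist`
    double pixelsPerUnit = screenHeight / (2.0 * dist * std::tan(fovY * 0.5));
    return delta * pixelsPerUnit > tau;
}
```

Conceptually this test is applied bottom-up to every vertex of the full
height field, with the dependency rule Peter mentions (a vertex is also
included if an included vertex depends on it) keeping the mesh consistent.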
From: Jeremy B. <Jer...@in...> - 2000-07-21 20:00:00

Actually what I'm more after is all these frustum culling routines I've
seen lately that extract the clipping planes from the modelview and
projection matrices. I'd like to be able to extract these, draw them (at
least a portion, since a plane goes to infinity) and move my eye position
elsewhere to test if things really are getting culled outside. (I learn
better when I can observe what's happening visually; dunno, I just have a
hard time trusting my code... I'd like to know I'm going in the right
direction before getting in too deep.) Even an example of how to draw a
few points on the plane so I can see it visually would be great... is that
a better description of what I'm going for?

Jeremy Bake
RtroActiv

> Hmm,
>
> Your equation:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
>
> is defining 3D space as an infinite number of parallel planes with the
> same Normal as in the equation:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z = 0
>
> using an index of distance to determine which parallel plane you are
> referencing:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z = -distance
>
> Correct me if I'm wrong, but it sounds like you want to find points ON
> the plane...if you are trying to describe a circle on the plane then
> you want all points (x,y,z) which lie on the plane and which are the
> same distance from your point of origin (x',y',z'). If you like
> matrices and transformations then perhaps use the 2D (x,y) coordinate
> plane to find your points x^2 + y^2 = distance, then translate and
> rotate them onto your plane.
>
> R&R
>
> > -----Original Message-----
> > From: Jeremy Bake [mailto:Jer...@in...]
> > Sent: Friday, July 21, 2000 6:57 AM
> > To: 'gda...@li...'
> > Subject: [Algorithms] Rendering (a portion of) a Plane
> >
> > I've been working on my viewport culling and I just wanted to test
> > something out. I was wondering if anyone could tell me how to render
> > a portion of a plane around a particular point using the
> > Normal + distance version of a plane. I think it relates to
> >
> >     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
> >
> > but I can't seem to get my head around how I could draw this function
> > over a specific range... any help would be greatly appreciated.
> >
> > Jeremy Bake
> > RtroActiv
>
> thanks!!!!!!!
>
> mauro
From: Igor S. <igo...@mt...> - 2000-07-21 19:53:54

unsubscribe algorithms
From: Anceschi M. <anc...@li...> - 2000-07-21 19:44:38

----- Original Message -----
From: "Steve Wood" <Ste...@im...>
To: <gda...@li...>
Sent: Friday, July 21, 2000 9:25 PM
Subject: RE: [Algorithms] Rendering (a portion of) a Plane

> Hmm,
>
> Your equation:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
>
> is defining 3D space as an infinite number of parallel planes with the
> same Normal as in the equation:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z = 0
>
> using an index of distance to determine which parallel plane you are
> referencing:
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z = -distance
>
> Correct me if I'm wrong, but it sounds like you want to find points ON
> the plane...if you are trying to describe a circle on the plane then
> you want all points (x,y,z) which lie on the plane and which are the
> same distance from your point of origin (x',y',z'). If you like
> matrices and transformations then perhaps use the 2D (x,y) coordinate
> plane to find your points x^2 + y^2 = distance, then translate and
> rotate them onto your plane.
>
> R&R
>
> > -----Original Message-----
> > From: Jeremy Bake [mailto:Jer...@in...]
> > Sent: Friday, July 21, 2000 6:57 AM
> > To: 'gda...@li...'
> > Subject: [Algorithms] Rendering (a portion of) a Plane
> >
> > I've been working on my viewport culling and I just wanted to test
> > something out. I was wondering if anyone could tell me how to render
> > a portion of a plane around a particular point using the
> > Normal + distance version of a plane. I think it relates to
> >
> >     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
> >
> > but I can't seem to get my head around how I could draw this function
> > over a specific range... any help would be greatly appreciated.
> >
> > Jeremy Bake
> > RtroActiv

thanks!!!!!!!

mauro
From: Steve W. <Ste...@im...> - 2000-07-21 19:31:28

Hmm,

Your equation:

    Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0

is defining 3D space as an infinite number of parallel planes with the
same Normal as in the equation:

    Normal[0] * x + Normal[1] * y + Normal[2] * z = 0

using an index of distance to determine which parallel plane you are
referencing:

    Normal[0] * x + Normal[1] * y + Normal[2] * z = -distance

Correct me if I'm wrong, but it sounds like you want to find points ON the
plane...if you are trying to describe a circle on the plane then you want
all points (x,y,z) which lie on the plane and which are the same distance
from your point of origin (x',y',z'). If you like matrices and
transformations then perhaps use the 2D (x,y) coordinate plane to find
your points x^2 + y^2 = distance, then translate and rotate them onto your
plane.

R&R

> -----Original Message-----
> From: Jeremy Bake [mailto:Jer...@in...]
> Sent: Friday, July 21, 2000 6:57 AM
> To: 'gda...@li...'
> Subject: [Algorithms] Rendering (a portion of) a Plane
>
> I've been working on my viewport culling and I just wanted to test
> something out. I was wondering if anyone could tell me how to render a
> portion of a plane around a particular point using the
> Normal + distance version of a plane. I think it relates to
>
>     Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0
>
> but I can't seem to get my head around how I could draw this function
> over a specific range... any help would be greatly appreciated.
>
> Jeremy Bake
> RtroActiv
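Steve's "find the circle in 2D, then translate and rotate it onto your
plane" suggestion can be sketched directly by building an orthonormal
basis in the plane, which amounts to the same rotation. A minimal sketch,
assuming a unit-length normal; the helper names are made up here:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
static Vec3 scale(const Vec3& a, double s)    { return { a.x*s, a.y*s, a.z*s }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static Vec3 normalize(const Vec3& a) {
    double len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return scale(a, 1.0 / len);
}

// Emit `n` points on the circle of radius `r` around `center`, lying in
// the plane with unit normal `normal`.
std::vector<Vec3> circleOnPlane(const Vec3& center, const Vec3& normal,
                                double r, int n) {
    const double kTwoPi = 6.283185307179586;
    // seed with whichever world axis is least parallel to the normal
    Vec3 seed = std::fabs(normal.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 u = normalize(cross(normal, seed));
    Vec3 v = cross(normal, u);   // unit: normal and u are unit, perpendicular
    std::vector<Vec3> pts;
    for (int i = 0; i < n; ++i) {
        double a = kTwoPi * i / n;
        pts.push_back(add(center,
                          add(scale(u, r * std::cos(a)),
                              scale(v, r * std::sin(a)))));
    }
    return pts;
}
```

Drawing a few such circles at increasing radii gives exactly the "few
points on the plane so I can see it" visualization Jeremy asked for.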
From: Steve W. <Ste...@im...> - 2000-07-21 19:09:55

Oops...forgot to mention...the wind resistance in the Y direction changes
sign after the fodder has started its way down...so it would be the
opposite sign of the sign of the Y velocity:

    Wy = Wy * -sgn(Vy)

where

    Vy = Vy0 - g * t

R&R

> -----Original Message-----
> From: Steve Wood
> Sent: Friday, July 21, 2000 11:48 AM
> To: 'gda...@li...'
> Subject: RE: [Algorithms] interpolate between points in 3d space
>
> When a projectile is fired you should use its trajectory equation. It
> sounds like you are entering the realm of physics. I'll give you a
> clue, but please pick up a book on Newtonian physics, browse the web
> for physics stuff...there have been many references to physics web
> sites on this list...unfortunately we don't have an archive that goes
> back far enough right now that I know of. Maybe someone will post a
> link or two for you.
>
> Anyway for your particular example...let's add a mild constant wind
> resistance Wz and Wx along the Z and X axes:
>
>     X0  = initial X position of fodder after it leaves the cannon.
>     Y0  = initial Y position of fodder after it leaves the cannon.
>     Z0  = initial Z position of fodder after it leaves the cannon.
>     Vx0 = initial velocity in X direction after it leaves the cannon.
>     Vy0 = initial velocity in Y direction after it leaves the cannon.
>     Vz0 = initial velocity in Z direction after it leaves the cannon.
>     Wx  = constant wind resistance
>     Wy  = constant wind resistance
>     Wz  = constant wind resistance
>     g   = acceleration of gravity
>     t   = time, with zero when the fodder leaves the cannon.
>
>     X = X0 + Vx0 * t - Wx * t              // X position
>     Y = Y0 + Vy0 * t - Wy * t - .5g * t^2  // Y position
>     Z = Z0 + Vz0 * t - Wz * t              // Z position
>
> That's pretty much it, except you may want to use vector math instead
> of breaking it up into parts like I did. Did this answer your question?
>
> R&R
>
> > From: Anceschi Mauro [mailto:anc...@li...]
> > Hi.
> > I have a little problem in my 3D space game.
> > When my ship "fires" from a cannon I use a new local axis for the new
> > object I'm firing, to calculate the next point of the laser.
> > But this works badly, and I don't have much precision.
> > So my question is:
> > How can I interpolate between 2 points in 3D?
> > (x,y,z) and (x',y',z') (suppose the second is an enemy ship).
> >
> > Thanks,
> >
> > mauro
From: Steve W. <Ste...@im...> - 2000-07-21 18:53:56

When a projectile is fired you should use its trajectory equation. It
sounds like you are entering the realm of physics. I'll give you a clue,
but please pick up a book on Newtonian physics, browse the web for physics
stuff...there have been many references to physics web sites on this
list...unfortunately we don't have an archive that goes back far enough
right now that I know of. Maybe someone will post a link or two for you.

Anyway for your particular example...let's add a mild constant wind
resistance Wz and Wx along the Z and X axes:

    X0  = initial X position of fodder after it leaves the cannon.
    Y0  = initial Y position of fodder after it leaves the cannon.
    Z0  = initial Z position of fodder after it leaves the cannon.
    Vx0 = initial velocity in X direction after it leaves the cannon.
    Vy0 = initial velocity in Y direction after it leaves the cannon.
    Vz0 = initial velocity in Z direction after it leaves the cannon.
    Wx  = constant wind resistance
    Wy  = constant wind resistance
    Wz  = constant wind resistance
    g   = acceleration of gravity
    t   = time, with zero when the fodder leaves the cannon.

    X = X0 + Vx0 * t - Wx * t              // X position
    Y = Y0 + Vy0 * t - Wy * t - .5g * t^2  // Y position
    Z = Z0 + Vz0 * t - Wz * t              // Z position

That's pretty much it, except you may want to use vector math instead of
breaking it up into parts like I did. Did this answer your question?

R&R

> From: Anceschi Mauro [mailto:anc...@li...]
> Hi.
> I have a little problem in my 3D space game.
> When my ship "fires" from a cannon I use a new local axis for the new
> object I'm firing, to calculate the next point of the laser.
> But this works badly, and I don't have much precision.
> So my question is:
> How can I interpolate between 2 points in 3D?
> (x,y,z) and (x',y',z') (suppose the second is an enemy ship).
>
> Thanks,
>
> mauro
From: Mark D. <duc...@ll...> - 2000-07-21 18:40:34

Jonathan,

I hope this message reaches you before you head to SIGGRAPH, as I think
there are critical points you should be aware of.

Jonathan Blow wrote:
> Mark Duchaineau wrote:
>
> > In a full-blown ROAM implementation, only a few percent of the
> > triangles change each frame.
>
> I find that, actually, this is not so true. (Not just for ROAM, but for
> any CLOD implementation). This statement is highly dependent on the
> density of your terrain samples, and on the speed of viewpoint motion
> relative to that density. Now, in games, we are aiming for higher
> detail, and at the same time we want to be able to have fast motion. I
> have found that under these conditions quite a number of polygons can
> change each frame. Now this of course also depends on the frame rate:
> the lower the frame rate, the more polygons you need to change each
> frame to maintain reasonable quality. Suppose you're cruising along at
> some acceptable frame rate and then you hit a frame that is a bit more
> expensive than usual. Your viewpoint moves a bit further during that
> frame, so you need to change more polygons than you were previously, to
> keep up. Because you're changing more polygons, you're eating more CPU,
> so the frame rate dives lower... it's a vicious cycle. This is a
> fundamental problem of simulation (physics guys are familiar with this
> phenomenon). There is a catastrophe point in CLOD rendering: on one
> side of it, you're cruising along fine at decent frame rates; on the
> other side, you crash and burn without hope of recovery.

I think you misunderstand our main points in the ROAM paper. You can stop
the ROAM optimization at any point in time. For example, if you want to
maintain 85 frames/sec *consistently*, you can just watch the clock and
stop when time's up. ROAM will do the best possible incremental work
during the time available, i.e. it will give the best quality possible
given the amount of time you had to spend.

It is easy to know ahead of time whether it is best to skip the
incremental work and start doing split-only from the base mesh. Also, the
amount of work done by the ROAM optimizer is proportionate to the number
of splits and merges you end up doing, which is never worse than being
proportionate to the number of triangles you output. No algorithms besides
ROAM are able to optimize the full output mesh every frame in time less
than some multiple of the output size. Hoppe's idea of optimizing say
1/10th of the output mesh each frame does give a 10x speedup, but fails to
optimize the full mesh each frame, which can lead to arbitrarily bad
results.

Here's another way of looking at this: suppose you think that in some
cases it is better to just do recursive splits, or just a static
view-independent mesh. How much time does it take, and what quality do you
get? If you are in an unusually incoherent situation where split-only
would win, split-only takes time proportionate to the output size. ROAM
would do exactly what split-only would do, taking the same *theoretical*
amount of time, because we easily detected these cases. ROAM has more
overhead in current implementations than simple split-only methods, but
there is no fundamental reason why this has to be the case.

If you think just having a static mesh is better, again the theory says no
but the practice has to catch up. Here's why. If ROAM were allowed to
operate until convergence, it would give the optimal mesh, meaning it
could never be worse than the static mesh if you wait half a second.
Suppose you managed a static mesh that matched ROAM after a half-second
startup. Now start moving. If you do no work at all in ROAM, the mesh will
remain static and you will be no worse than that static mesh. If you do
any work at all in ROAM each frame, you will beat the static mesh. So
every microsecond you get to spend ROAMing helps, and *in theory* is
better than a pre-optimized static mesh.

Again, there is no fundamental reason that ROAM implementations can't
match this theory, but it may take a lot of hard work to do so.

> Static LOD is not so susceptible to this problem, which is another
> reason why it's a lot more comfortable to use.
>
> > This gives you a great opportunity to lock geometry on the graphics
> > card, with the big IF: you need to be able to make a few scattered
> > changes to the vertex and triangle arrays. If this is not possible in
> > current hardware or APIs, I don't see why it couldn't be. Any OpenGL
> > gurus have an opinion on this?
>
> I do not know much about the hardware so I can't venture a guess on the
> feasibility of updating partial vertex buffers. I suppose that it could
> be done (the same way there are API hooks for updating partial
> textures) but I am not sure that it's a high priority to implement on
> hardware. (Does anyone's implementation of glTexSubImage actually
> refrain from copying the whole texture back into hardware?)

I've just done some tests to help answer this. Under Linux with the Nvidia
0.9-4 drivers on a 450MHz PIII AGP2x GeForce DDR, I was able to get
14.9meg tris/sec with static geometry and around 5meg tris/sec with 10% of
the geometry changing each frame. This is without the special Nvidia
OpenGL extensions for memory allocation/priorities. When I get access to
those functions I'll report what I get again. But even without, it is
still worthwhile to change some geometry each frame. The cost is higher
than we would like, but we can just take this into account and still do
the optimal thing.

> > > * Top-down terrain rendering algorithm that performs LOD
> > > computations faster than the published algorithms
> > > (Lindstrom-Koller, ROAM, Rottger etc)
> >
> > That isn't hard ;-). We were all publishing research prototypes. I'm
> > finding it possible to tune my new ROAM code to get about 20x faster
> > just through careful data structure design and re-ordering operations
> > to be cache friendly.
>
> Yep, although I am speaking of algorithmically faster, not
> implementation-detail faster.

Look at my explanation above: ROAM is the theoretical best. You can't beat
it *theoretically*. In practice you might for now. If you managed to tie
ROAM theoretically I will be very pleased -- I haven't yet seen another
approach that gets time O(output change), and that is rather boring ;-).

> > Woa! First let's see if we agree on quality: the most common but
> > imperfect definition is the max of the projected error bounds in
> > screen space... This is imperfect because it does not take into
> > account temporal effects ("popping"), occlusion, color gradients,
> > other perceptual issues. But it is fast and practical to measure and
> > gives good results.
>
> Yes, we agree there. I do not think projected error is the "right"
> answer but I do not have anything better, so that is what I use.

Good...so far...

> > Given this definition, a correct bottom-up and top-down give exactly
> > the same results!
>
> I do not believe this is true. The basic nature of ROAM and other
> top-down algorithms is that they place a bounding volume around an area
> of terrain and use that volume to determine subdivision. In order for
> ROAM to be conservative, it must take the maximal length of the
> projection of this bounding volume onto the viewport (again, with the
> usual conventions of ignoring certain things like perspective
> distortion to make the computations simpler).

Not even close! ROAM projects the *error vector from the linear
approximation* onto the screen. If a huge (in screen space) coarse-level
triangle has a tiny error relative to each finest-resolution grid point,
then the "thickness segment" will be tiny and project to a tiny length on
the screen. We describe a conservative bound in the paper based on this,
but the level of over-conservativeness is small on average in real
applications. Measure it.

Furthermore, it is not strictly bad to be over-conservative -- the
critical thing is to be *consistent* in how over-conservative you are. If
each projected error is exactly twice what it should have been, then you
get *exactly* the same optimal mesh! It is only the *inconsistencies* in
how conservative the bounds are that cause sub-optimal meshing per frame.

> A ROAM wedgie is as thick as it is because some interior terrain points
> pushed the ceiling upward or the floor downward. (Actually in the ROAM
> paper you have only one thickness value, but significant gains can be
> had if you allow the floor and ceiling to move independently, and this
> is really a minor change.)

We worked out that a two-sided bound gives consistently half the thickness
values of a one-sided bound. So from the logic above, you are doing twice
as much work and using twice the storage for nothing...

> Now when you conservatively project this wedgie onto the viewport, you
> are making the assumption that the terrain points that forced the
> wedgie to be as thick as it is are located at that pessimal point. This
> is almost never true; they usually lie somewhere inside the wedgie.

Yes, but where? We were going for a conservative bound. You can always
relax this if your app isn't so picky.

> Because they lie somewhere inside the wedgie, they are in general
> further from the viewpoint than the part of the wedgie that is being
> used for projection. Because they are further from the viewpoint, they
> project smaller (because their z value is larger).
>
> Thus in general the actual projection of the error displacements within
> a wedgie will be less than the actual projection of the wedgie. The
> wedgie is overconservative. This means that in many cases ROAM will
> subdivide a wedgie, creating extra polygons, when in fact there was not
> enough error inside the wedgie to justify that.
Once I coded up a test in my ROAM renderer > that iterated over the currently instantiated terrain and tried to gauge > which of the polygons *really* needed to be there (it did a brute force > iteration that actually projected each vertex to the viewport using the > same error metric) and depending on the scene, 25% to 50% of the polygons > were extraneous. Well, try this again when you've gotten the logic right. I think you may get a slightly sub-optimal mesh because the conservativeness is inconsistent, but not more than a few %. > > > Lindstrom-Koller does not exhibit this behavior. That algorithm will > never instantiate a vertex unless the actual projection of the error value, > from the exact position of the vertex, exceeds the error threshold. They > do pay a CPU cost for this -- it's what their whole region of uncertainty > thing is all about. If you think of a Lindstrom block as being analogous > to a ROAM wedge, then you will see that where Lindstrom-Koller has a > mechanism for the uncertainty region (and delta-max and delta-min), > ROAM just assumes that any wedge exceeding delta-min must be subdivided. ROAM is optimal and correct. It is possible to spend extra work to get less over-conservative, but this isn't what Lindstrom et al do for their final bottom-up pass as far as I recall...but my recollections on that paper are fuzzy now (Peter?). > > > I am viewing this as being separate from the issue of "do we need to > consider the total accumulated error or just the error of one pop". > I agree with you that considering the total accumulation is a good > idea. It is not too hard to do this even with the Lindstrom-Koller > algorithm; it's just a preprocessing issue. ROAM certainly does it, > but the way it does it is overconservative. I'd be happy to hear any ideas on making the ROAM bound less over-conservative *and* more consistently over-conservative *and* that is not too slow. I haven't yet ;-). Hope to see you at SIGGRAPH. --Mark D. |
From: Stephen J B. <sj...@li...> - 2000-07-21 17:49:49
|
On Fri, 21 Jul 2000, Mark Atkinson wrote:

> We do it this way:
>
> If the decal does not cross a polygon boundary with a high enough crease angle, draw it as a quad. Use ZBIAS if supported and/or move it along the normal to lift it off the surface.
>
> Otherwise draw it as decal, as per Jaimi's method. Remember to enable ALPHATEST so there's no overdraw or fill-rate hit even if the base triangle is large and has several decals.

Eh? Alpha Test (presuming it's the same mechanism as glAlphaFunc in OpenGL) won't eliminate (all of) the overdraw/fillrate issues. The renderer still has to scan-convert all those pixels, do the texture calculation on each one and compare the alpha value against the threshold.

* On some hardware, that'll be no cheaper than drawing the entire polygon.
* On other hardware, you'll only pay *some* per pixel cost (maybe half).
* It'll never be zero though - that's impossible - it has to consume some texture bandwidth per-pixel of the entire polygon.

Steve Baker (817)619-2657 (Vox/Vox-Mail) L3Com/Link Simulation & Training (817)619-2466 (Fax) Work: sj...@li... http://www.link.com Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
From: <ma...@ci...> - 2000-07-21 16:39:24
|
We do it this way: If the decal does not cross a polygon boundary with a high enough crease angle, draw it as a quad. Use ZBIAS if supported and/or move it along the normal to lift it off the surface. Otherwise draw it as decal, as per Jaimi's method. Remember to enable ALPHATEST so there's no overdraw or fill-rate hit even if the base triangle is large and has several decals. Mipmapping is still a problem - clamp mip levels to a minimum size (16x16 or 32x32) and make sure your algorithm maintains the alpha==0 border all the way, otherwise decals at oblique angles will bleed over the whole base triangle. For small decals like bulletholes just don't mipmap them (and use a suitably low-res texture). -=Mark=- Mark Atkinson, Technical Director, Computer Artworks Ltd. http://www.artworks.co.uk > Using overlay textures would require you to use a ton of unique textures. > . > . can become very unreasonable quickly. > > Gary > > -----Original Message----- > From: Mark Wayland [mailto:mwa...@to...] > Sent: Thursday, July 20, 2000 9:55 PM > To: gda...@li... > Subject: Re: [Algorithms] Bullets on walls > > > An extension to Jaimi's method may be to render the bullet holes to an > overlay texture (of which you would have a maximum number) which is only > rendered *once* on-top of the wall. With complex texture mapping this is > obviously not possible, but perhaps for simple quads it would be > acceptable > ? > > Perhaps some form of scalable solution - like use Jaimi's method for a > small > number and when a threshold is reached, combine them into the single > texture > as I've mentioned and repeat ... ?? > > Just my 2c worth ... > > Mark > > > In d3d, this is the way I do it: > > > > Create decals with a one pixel transparent border. > > Then, set the texture address mode to "Clamp". > > Calculate new UV coordinates for your decal based on the wall, instead > > of the Decal - then redraw the entire wall with your decal texture and > > new UV coordinates. 
> > This way you do not have to clip at all. I would assume you could > > do the same in OpenGL. > > > > Jaimi > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > |
From: Mark D. <duc...@ll...> - 2000-07-21 16:30:01
|
Jonathan, I hope this message reaches you before you head to SIGGRAPH, as I think there are critical points you should be aware of.

Jonathan Blow wrote:

> Mark Duchaineau wrote:
>
> > In a full-blown ROAM implementation, only a few percent of the triangles change each frame.
>
> I find that, actually, this is not so true. (Not just for ROAM, but for any CLOD implementation). This statement is highly dependent on the density of your terrain samples, and on the speed of viewpoint motion relative to that density. Now, in games, we are aiming for higher detail, and at the same time we want to be able to have fast motion. I have found that under these conditions quite a number of polygons can change each frame. Now this of course also depends on the frame rate: the lower the frame rate, the more polygons you need to change each frame to maintain reasonable quality. Suppose you're cruising along at some acceptable frame rate and then you hit a frame that is a bit more expensive than usual. Your viewpoint moves a bit further during that frame, so you need to change more polygons than you were previously, to keep up. Because you're changing more polygons, you're eating more CPU, so the frame rate dives lower... it's a vicious cycle. This is a fundamental problem of simulation (physics guys are familiar with this phenomenon). There is a catastrophe point in CLOD rendering: on one side of it, you're cruising along fine at decent frame rates; on the other side, you crash and burn without hope of recovery.

You misunderstand our main points in the ROAM paper, it appears. You can stop the ROAM optimization at any point in time. For example, if you want to maintain 85 frames/sec *consistently*, you can just watch the clock and stop when time's up. ROAM will do the best possible incremental work during the time available, i.e. it will give the best quality possible given the amount of time you had to spend.
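The "watch the clock and stop when time's up" strategy can be sketched as a priority-queue loop with a hard frame deadline. This is an invented illustration of the idea, not code from the ROAM paper; the `SplitOp` structure and all names are made up, and a real ROAM optimizer would also run a merge queue and update priorities incrementally.

```cpp
#include <chrono>
#include <queue>

// Illustrative split operation: ROAM would order these by projected
// screen-space error, highest first.
struct SplitOp {
    float priority;
    bool operator<(const SplitOp& o) const { return priority < o.priority; }
};

// Performs split operations in priority order until the queue empties or
// `budget` elapses; returns how many splits were done this frame.
int refine_with_budget(std::priority_queue<SplitOp>& splits,
                       std::chrono::microseconds budget)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now() + budget;
    int performed = 0;
    while (!splits.empty() && clock::now() < deadline) {
        splits.pop();  // a real implementation would split a triangle here
        ++performed;
    }
    return performed;
}
```

Because the loop checks the deadline before every operation, the frame cost is bounded regardless of how much refinement is pending, which is exactly what breaks the "vicious cycle" described above.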
It is easy to know ahead of time whether it is best to skip the incremental work and start doing split-only from the base mesh. Also, the amount of work done by the ROAM optimizer is proportionate to the number of splits and merges you end up doing, which is never worse than being proportionate to the number of triangles you output. No algorithms besides ROAM are able to optimize the full output mesh every frame in time less than some multiple of the output size. Hoppe's idea of optimizing say 1/10th of the output mesh each frame does give a 10x speedup, but fails to optimize the full mesh each frame, which can lead to arbitrarily bad results.

Here's another way of looking at this: suppose you think that in some cases it is better to just do recursive splits or just a static view-independent mesh. How much time does it take, and what quality do you get? If you are in an unusually incoherent situation where split-only would win, split-only takes time proportionate to the output size. ROAM would do exactly what split-only would do, taking the same *theoretical* amount of time, because we easily detected these cases. ROAM has more overhead in current implementations than simple split-only methods, but there is no fundamental reason why this has to be the case.

If you think just having a static mesh is better, again the theory says no but the practice has to catch up. Here's why. If ROAM were allowed to operate until convergence, it would give the optimal mesh, meaning it could never be worse than the static mesh if you wait half a second. Suppose you managed a static mesh that matched ROAM after a half-second startup. Now start moving. If you do no work at all in ROAM, the mesh will remain static and you will be no worse than that static mesh. If you do any work at all in ROAM each frame, you will beat the static mesh. So every microsecond you get to spend ROAMing helps, and *in theory* is better than a pre-optimized static mesh.
Again, there is no fundamental reason that ROAM implementations can't match this theory, but it may take a lot of hard work to do so.

> Static LOD is not so susceptible to this problem, which is another reason why it's a lot more comfortable to use.
>
> > This gives you a great opportunity to lock geometry on the graphics card, with the big IF: you need to be able to make a few scattered changes to the vertex and triangle arrays. If this is not possible in current hardware or APIs, I don't see why it couldn't be. Any OpenGL gurus have an opinion on this?
>
> I do not know much about the hardware so I can't venture a guess on the feasibility of updating partial vertex buffers. I suppose that it could be done (the same way there are API hooks for updating partial textures) but I am not sure that it's a high priority to implement on hardware. (Does anyone's implementation of glTexSubImage actually refrain from copying the whole texture back into hardware?)

I've just done some tests to help answer this. Under Linux with the Nvidia 0.9-4 drivers on a 450MHz PIII AGP2x GeForce DDR, I was able to get 14.9 meg tris/sec with static geometry and around 5 meg tris/sec with 10% of the geometry changing each frame. This is without the special Nvidia OpenGL extensions for memory allocation/priorities. When I get access to those functions I'll report what I get again. But even without, it is still worthwhile to change some geometry each frame. The cost is higher than we would like, but we can just take this into account and still do the optimal thing.

> > > * Top-down terrain rendering algorithm that performs LOD computations faster than the published algorithms (Lindstrom-Koller, ROAM, Rottger etc)
>
> > That isn't hard ;-). We were all publishing research prototypes. I'm finding it possible to tune my new ROAM code to get about 20x faster just through careful data structure design and re-ordering operations to be cache friendly.
> Yep, although I am speaking of algorithmically faster, not implementation-detail faster.

Look at my explanation above: ROAM is the theoretical best. You can't beat it *theoretically*. In practice you might for now. If you managed to tie ROAM theoretically I will be very pleased--I haven't yet seen another approach that gets time O(output change), and that is rather boring ;-).

> > Whoa! First let's see if we agree on quality: the most common but imperfect definition is the max of the projected error bounds in screen space... This is imperfect because it does not take into account temporal effects ("popping"), occlusion, color gradients, other perceptual issues. But it is fast and practical to measure and gives good results.
>
> Yes, we agree there. I do not think projected error is the "right" answer but I do not have anything better, so that is what I use.

Good...so far...

> > Given this definition, a correct bottom-up and top-down give exactly the same results!
>
> I do not believe this is true. The basic nature of ROAM and other top-down algorithms is that they place a bounding volume around an area of terrain and use that volume to determine subdivision. In order for ROAM to be conservative, it must take the maximal length of the projection of this bounding volume onto the viewport (again, with the usual conventions of ignoring certain things like perspective distortion to make the computations simpler).

Not even close! ROAM projects the *error vector from the linear approximation* onto the screen. If a huge (in screen space) coarse-level triangle has a tiny error relative to each finest-resolution grid point, then the "thickness segment" will be tiny and project to a tiny length on the screen. We describe a conservative bound in the paper based on this, but the level of over-conservativeness is tiny on average in real applications. Measure it!
Furthermore, it is not strictly bad to be over-conservative--the critical thing is to be *consistent* in how over-conservative you are. If each projected error is exactly twice what it should have been, then you get *exactly* the same optimal mesh! It is only the *inconsistencies* in how conservative the bounds are that cause sub-optimal meshing per frame.

> A ROAM wedgie is as thick as it is because some interior terrain points pushed the ceiling upward or the floor downward. (Actually in the ROAM paper you have only one thickness value, but significant gains can be had if you allow the floor and ceiling to move independently, and this is really a minor change).

We worked out that a two-sided bound gives consistently half the thickness values of a one-sided bound. So from the logic above, you are doing twice as much work and using twice the storage for nothing...

> Now when you conservatively project this wedgie onto the viewport, you are making the assumption that the terrain points that forced the wedgie to be as thick as it is are located at that pessimal point. This is almost never true; they usually lie somewhere inside the wedgie.

Yes, but where? We were going for a conservative bound. You can always relax this if your app isn't so picky.

> Because they lie somewhere inside the wedgie, they are in general further from the viewpoint than the part of the wedgie that is being used for projection. Because they are further from the viewpoint, they project smaller (because their z value is larger).
>
> Thus in general the actual projection of the error displacements within a wedgie will be less than the actual projection of the wedgie. The wedgie is overconservative. This means that in many cases ROAM will subdivide a wedgie, creating extra polygons, when in fact there was not enough error inside the wedgie to justify that.
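The consistency argument is easy to check numerically. ROAM takes splits in priority order, so only the *ordering* of the projected error bounds matters for which elements get refined under a triangle budget, and scaling every bound by one common factor preserves that ordering. A toy sketch with made-up error values:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns the indices of the k largest error bounds, in decreasing order.
// Uniformly inflating every bound (consistent over-conservativeness)
// leaves this selection unchanged; only *inconsistent* inflation can
// change which elements get refined.
std::vector<std::size_t> top_k_by_error(std::vector<double> errors, std::size_t k)
{
    std::vector<std::size_t> idx(errors.size());
    for (std::size_t i = 0; i < idx.size(); ++i) idx[i] = i;
    std::stable_sort(idx.begin(), idx.end(),
                     [&](std::size_t a, std::size_t b) { return errors[a] > errors[b]; });
    idx.resize(k);
    return idx;
}
```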
> Once I coded up a test in my ROAM renderer that iterated over the currently instantiated terrain and tried to gauge which of the polygons *really* needed to be there (it did a brute force iteration that actually projected each vertex to the viewport using the same error metric) and depending on the scene, 25% to 50% of the polygons were extraneous.

Well, try this again when you've gotten the logic right. I think you may get a slightly sub-optimal mesh because the conservativeness is inconsistent, but not more than a few %.

> Lindstrom-Koller does not exhibit this behavior. That algorithm will never instantiate a vertex unless the actual projection of the error value, from the exact position of the vertex, exceeds the error threshold. They do pay a CPU cost for this -- it's what their whole region of uncertainty thing is all about. If you think of a Lindstrom block as being analogous to a ROAM wedge, then you will see that where Lindstrom-Koller has a mechanism for the uncertainty region (and delta-max and delta-min), ROAM just assumes that any wedge exceeding delta-min must be subdivided.

ROAM is optimal and correct. It is possible to spend extra work to get less over-conservative, but this isn't what Lindstrom et al do for their final bottom-up pass as far as I recall...but my recollections on that paper are fuzzy now (Peter?).

> I am viewing this as being separate from the issue of "do we need to consider the total accumulated error or just the error of one pop". I agree with you that considering the total accumulation is a good idea. It is not too hard to do this even with the Lindstrom-Koller algorithm; it's just a preprocessing issue. ROAM certainly does it, but the way it does it is overconservative.

I'd be happy to hear any ideas on making the ROAM bound less over-conservative *and* more consistently over-conservative *and* that is not too slow. I haven't yet ;-). Hope to see you at SIGGRAPH.

--Mark D. |
From: Gary M. <ga...@va...> - 2000-07-21 15:02:09
|
Using overlay textures would require you to use a ton of unique textures. . . can become very unreasonable quickly. Gary -----Original Message----- From: Mark Wayland [mailto:mwa...@to...] Sent: Thursday, July 20, 2000 9:55 PM To: gda...@li... Subject: Re: [Algorithms] Bullets on walls An extension to Jaimi's method may be to render the bullet holes to an overlay texture (of which you would have a maximum number) which is only rendered *once* on-top of the wall. With complex texture mapping this is obviously not possible, but perhaps for simple quads it would be acceptable ? Perhaps some form of scalable solution - like use Jaimi's method for a small number and when a threshold is reached, combine them into the single texture as I've mentioned and repeat ... ?? Just my 2c worth ... Mark > In d3d, this is the way I do it: > > Create decals with a one pixel transparent border. > Then, set the texture address mode to "Clamp". > Calculate new UV coordinates for your decal based on the wall, instead > of the Decal - then redraw the entire wall with your decal texture and > new UV coordinates. > This way you do not have to clip at all. I would assume you could > do the same in OpenGL. > > Jaimi _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
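Jaimi's re-texturing trick quoted above boils down to a small UV computation: each wall vertex gets the UV it would have in the decal's texture space, so that with CLAMP addressing and the transparent one-pixel border only the decal rectangle shows. A minimal 2D sketch; all names and the axis-aligned parameterization are invented for illustration:

```cpp
// 2D point in a parameterization of the wall's plane.
struct Vec2 { double u, v; };

// Maps a wall point into the decal's [0,1]x[0,1] texture space.  Points
// outside the decal rectangle fall below 0 or above 1 and clamp to the
// decal's transparent border, so the rest of the wall stays untouched.
Vec2 decal_uv(Vec2 wall_point, Vec2 decal_center, double decal_half_size)
{
    double size = 2.0 * decal_half_size;
    return { (wall_point.u - (decal_center.u - decal_half_size)) / size,
             (wall_point.v - (decal_center.v - decal_half_size)) / size };
}
```

A real implementation would first project the wall polygon's 3D vertices into a 2D basis on the wall's plane before applying this mapping.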
From: Stephen J B. <sj...@li...> - 2000-07-21 14:06:02
|
On Fri, 21 Jul 2000, Neal Tringham wrote:

> > Giovanni Bajo <ba...@pr...> writes:
> > In DirectX, there is a ZBIAS stuff that lets you set a priority order (integer) when Z matches. It is a renderstate. But looking at the caps, it is not supported in every board I have tried. I heard it is an old thingie of first Voodoo boards.
>
> Yes, the support for zbias seems to be rather inconsistent.
>
> > I had not found something similar in OpenGL.
>
> You may want to look at the 1.1 glPolygonOffset function.

In OpenGL (and presumably in D3D also), you can fake a polygon offset by moving the near clip plane a tiny bit between drawing the 'base' and 'decal' polygons. This (in effect) changes the "meaning" of all the numbers currently stored in the Z buffer - so they represent a distance that is a little further away than they did when they were originally rendered. In effect, this moves the entire scene backwards a little bit. You need to experiment a bit with how much to move it - but it's a trick that does work on ALL hardware.

Steve Baker (817)619-2657 (Vox/Vox-Mail) L3Com/Link Simulation & Training (817)619-2466 (Fax) Work: sj...@li... http://www.link.com Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
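Steve's near-plane trick can be seen in the depth math of a standard perspective projection. The helper below (illustrative numbers, not from any particular API) computes the window-space depth for a point at eye distance z; nudging the near plane out slightly between passes lowers the depth the decal writes for the same geometry, so it wins a less-than-or-equal depth test against the wall already in the Z buffer.

```cpp
// Window-space depth produced by a standard perspective projection for a
// point at positive eye-space distance z, with near/far planes n and f.
double window_depth(double z, double n, double f)
{
    double ndc = (f + n - 2.0 * f * n / z) / (f - n);  // -1 at z == n, +1 at z == f
    return 0.5 * (ndc + 1.0);                          // map NDC to the [0,1] depth range
}
```

For example, with a 1..100 frustum and a point 10 units away, raising the near plane from 1.0 to 1.01 strictly lowers the computed depth, which is exactly the bias the decal pass needs.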
From: Jeremy B. <Jer...@in...> - 2000-07-21 13:57:01
|
I've been working on my viewport culling and I just wanted to test something out. I was wondering if anyone could tell me how to render a portion of a plane around a particular point using the normal + distance version of a plane. I think it relates to

Normal[0] * x + Normal[1] * y + Normal[2] * z + distance = 0

but I can't seem to get my head around how I could draw this function over a specific range... any help would be greatly appreciated.

Jeremy Bake RtroActiv |
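One way to attack this: project the point onto the plane Normal . p + distance = 0, build two tangent directions perpendicular to the normal, and emit a quad of half-size s around the projected point. A sketch (assuming a unit-length normal; all helper names are invented):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a)       { return scale(a, 1.0 / std::sqrt(dot(a, a))); }

// Fills out[0..3] with the corners of a quad of half-size s lying on the
// plane n.p + d = 0, centered on the projection of p onto that plane.
void plane_quad(Vec3 n, double d, Vec3 p, double s, Vec3 out[4])
{
    // Move p along n by its signed distance to land exactly on the plane.
    Vec3 c = add(p, scale(n, -(dot(n, p) + d)));
    // Any vector not parallel to n works as a reference for the tangents.
    Vec3 ref = std::fabs(n.x) < 0.9 ? Vec3{1, 0, 0} : Vec3{0, 1, 0};
    Vec3 u = normalize(cross(n, ref));
    Vec3 v = cross(n, u);  // already unit length: n and u are orthonormal
    out[0] = add(add(c, scale(u,  s)), scale(v,  s));
    out[1] = add(add(c, scale(u, -s)), scale(v,  s));
    out[2] = add(add(c, scale(u, -s)), scale(v, -s));
    out[3] = add(add(c, scale(u,  s)), scale(v, -s));
}
```

Every corner satisfies the plane equation by construction, so the quad can be handed straight to the renderer to visualize that patch of the plane.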
From: Tom F. <to...@mu...> - 2000-07-21 13:38:39
|
We know. Nothing on the planet supports something like ALMOST_EQUAL as far as I know (and even if it did, that still has problems when things walk beside the walls). You will have to either clip the decals to the wall geometrically; draw the wall poly with clamp texture mode & the right UVs; or composite the decals onto a wall-shaped poly, as suggested. If drawing individual decals, which seems a popular method, then you will need some sort of Z-bias functionality, which is what glPolygonOffset and moving the clip planes gives you.

Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day.

> -----Original Message----- > From: Angel Popov [mailto:ju...@bi...] > Sent: 21 July 2000 15:50 > To: gda...@li... > Subject: Re: [Algorithms] Bullets on walls > > > No, neither glPolygonOffset or moving the near clipplane will > have the same > functionality as the non-existent glDepthFunc( GL_ALMOST_EQUAL ) > because these two approaches will not fix the problem with > the wallmark > not getting clipped at wall corners. > > > > For D3D (and probably OpenGL), a good alternative is to > move the near & > far > > clip planes a bit further away for those tris. You only > need to move the > > near clip plane if using Z-buffering. If using W-buffering, > you'll need to > > move both. > > > > Tom Forsyth - Muckyfoot bloke. > > Whizzing and pasting and pooting through the day. > > > > > -----Original Message----- > > > From: Neal Tringham [mailto:ne...@ps...] > > > Sent: 21 July 2000 10:56 > > > To: gda...@li... > > > Subject: Re: [Algorithms] Bullets on walls > > > > > > > > > > > > Giovanni Bajo <ba...@pr...> writes: > > > > In DirectX, there is a ZBIAS stuff that let you set a > priority order > > > > (integer) when Z matches. It is a renderstate. But looking > > > at the caps, it > > > > is not supported in every boards I have tried. I heard > it is an old > > > thingie > > > > of first Voodoo boards.
> > > > > > Yes, the support for zbias seems to be rather inconsistent. > > > > > > > I had not found something similiar in OpenGL. > > > > > > You may want to look at the 1.1 glPolygonOffset function. > > > > > > > > > Neal Tringham (Sick Puppies) > > > > > > ne...@ps... > > > ne...@em... > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Angel P. <ju...@bi...> - 2000-07-21 13:22:05
|
No, neither glPolygonOffset or moving the near clipplane will have the same functionality as the non-existent glDepthFunc( GL_ALMOST_EQUAL ) because these two approaches will not fix the problem with the wallmark not getting clipped at wall corners. > For D3D (and probably OpenGL), a good alternative is to move the near & far > clip planes a bit further away for those tris. You only need to move the > near clip plane if using Z-buffering. If using W-buffering, you'll need to > move both. > > Tom Forsyth - Muckyfoot bloke. > Whizzing and pasting and pooting through the day. > > > -----Original Message----- > > From: Neal Tringham [mailto:ne...@ps...] > > Sent: 21 July 2000 10:56 > > To: gda...@li... > > Subject: Re: [Algorithms] Bullets on walls > > > > > > > > Giovanni Bajo <ba...@pr...> writes: > > > In DirectX, there is a ZBIAS stuff that let you set a priority order > > > (integer) when Z matches. It is a renderstate. But looking > > at the caps, it > > > is not supported in every boards I have tried. I heard it is an old > > thingie > > > of first Voodoo boards. > > > > Yes, the support for zbias seems to be rather inconsistent. > > > > > I had not found something similiar in OpenGL. > > > > You may want to look at the 1.1 glPolygonOffset function. > > > > > > Neal Tringham (Sick Puppies) > > > > ne...@ps... > > ne...@em... |
From: Kent Q. <ken...@co...> - 2000-07-21 12:35:00
|
Nik...@ao... wrote: > I recently finished writing a collision detection system for my engine (using > OBBs); however I have been having trouble with the system not always > detecting collisions. I tested all of the functions separately to confirm > their accuracy; yet this brought no avail. I decided to work backwards to > find the bug, and I used a function from the RAPID library which tests for > disjointedness between two boxes. Here is the description of the function: > > This is a test between two boxes, box A and box B. It is assumed that > the coordinate system is aligned and centered on box A. The 3x3 > matrix B specifies box B's orientation with respect to box A. > Specifically, the columns of B are the basis vectors (axis vectors) of > box B. The center of box B is located at the vector T. The > dimensions of box B are given in the array b. The orientation and > placement of box A, in this coordinate system, are the identity matrix > and zero vector, respectively, so they need not be specified. The > dimensions of box A are given in array a. > obb_disjoint(double B[3][3], double T[3], double a[3], double b[3]); > > I set up a situation in Max using two boxes, in which the collision detection > failed, and I then entered the box data in by hand. The result was that > after putting box 2 in box 1's coordinate system, the matrix B was a 128 > degree rotation around the Z axis (or Max's Y axis). The location of box 2 > in relation to box 1 was: > T[0]= -5.853f; > T[1]= -2.173f; > T[2]= 3.842f; > And the dimensions (half-lengths) were: > Bd[0]= 2; Bd[1]= 4; Bd[2]= 4.5; > Ad[0]= 4; Ad[1]= 1; Ad[2]= 7.5; > If you set this situation up in Max, it is obvious that the two boxes > intersect (make sure to switch the Y & Z coordinates because of the > difference in D3D and Max's CS, and multiply the half-lengths by 2 to get the > actual dimensions). Yet, RAPID's function reports that the boxes are > disjoint. Any help would be appreciated. 
I too struggled with this for a long time. After I had given up, and gone to another collision system, someone else reported that RAPID's OBB intersection algorithm (as documented) is too optimistic in its use of absolute value. There is a matrix in the code that gets built as the absolute values of another matrix, thereby saving some steps...but that's an invalid savings. Sadly, I can't find the reference. But I hope this points you in the right direction, or perhaps someone else has a better memory. Kent -- ----------------------------------------------------------------------- Kent Quirk | CogniToy: Intelligent toys... Game Designer | for intelligent minds. ken...@co... | http://www.cognitoy.com/ _____________________________|_________________________________________ |
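For reference, the widely used form of the 15-axis OBB separating-axis test adds a small epsilon to the absolute-value matrix entries; without it, the cross-product axes degenerate when two box edges are nearly parallel, and rounding in the near-zero terms can report a separation that does not exist. The sketch below follows the same calling conventions as the RAPID routine quoted above (B's columns are box B's axes expressed in A's frame, T is B's center, a and b are half-extents), but it is a generic reconstruction with the epsilon guard, not RAPID's actual code, and this guard may or may not be the specific fix Kent recalls.

```cpp
#include <cmath>

// Returns true if the two OBBs overlap.  Box A is axis-aligned at the
// origin; B[i][j] is component i of box B's j-th axis.
bool obb_overlap(const double B[3][3], const double T[3],
                 const double a[3], const double b[3])
{
    const double EPS = 1e-6;  // robustness guard for near-parallel edge axes
    double R[3][3], AbsR[3][3];
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            R[i][j]    = B[i][j];
            AbsR[i][j] = std::fabs(B[i][j]) + EPS;
        }
    double ra, rb, t;
    // Test the three face axes of box A.
    for (int i = 0; i < 3; ++i) {
        ra = a[i];
        rb = b[0] * AbsR[i][0] + b[1] * AbsR[i][1] + b[2] * AbsR[i][2];
        if (std::fabs(T[i]) > ra + rb) return false;
    }
    // Test the three face axes of box B.
    for (int j = 0; j < 3; ++j) {
        ra = a[0] * AbsR[0][j] + a[1] * AbsR[1][j] + a[2] * AbsR[2][j];
        rb = b[j];
        t  = std::fabs(T[0] * R[0][j] + T[1] * R[1][j] + T[2] * R[2][j]);
        if (t > ra + rb) return false;
    }
    // Test the nine edge cross-product axes A_i x B_j.
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            int i1 = (i + 1) % 3, i2 = (i + 2) % 3;
            int j1 = (j + 1) % 3, j2 = (j + 2) % 3;
            ra = a[i1] * AbsR[i2][j] + a[i2] * AbsR[i1][j];
            rb = b[j1] * AbsR[i][j2] + b[j2] * AbsR[i][j1];
            t  = std::fabs(T[i2] * R[i1][j] - T[i1] * R[i2][j]);
            if (t > ra + rb) return false;
        }
    return true;  // no separating axis found
}
```

With EPS set to zero, ra, rb, and t for a degenerate cross axis can all collapse toward zero, and a rounding error of either sign can then tip t marginally past ra + rb.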
From: Tom F. <to...@mu...> - 2000-07-21 10:43:01
|
For D3D (and probably OpenGL), a good alternative is to move the near & far clip planes a bit further away for those tris. You only need to move the near clip plane if using Z-buffering. If using W-buffering, you'll need to move both. Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day. > -----Original Message----- > From: Neal Tringham [mailto:ne...@ps...] > Sent: 21 July 2000 10:56 > To: gda...@li... > Subject: Re: [Algorithms] Bullets on walls > > > > Giovanni Bajo <ba...@pr...> writes: > > In DirectX, there is a ZBIAS stuff that let you set a priority order > > (integer) when Z matches. It is a renderstate. But looking > at the caps, it > > is not supported in every boards I have tried. I heard it is an old > thingie > > of first Voodoo boards. > > Yes, the support for zbias seems to be rather inconsistent. > > > I had not found something similiar in OpenGL. > > You may want to look at the 1.1 glPolygonOffset function. > > > Neal Tringham (Sick Puppies) > > ne...@ps... > ne...@em... > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |