Thread: Re: [Algorithms] Cubemap support
From: Paul F. <pf...@at...> - 2001-09-14 14:06:40
Odin Jensen wrote:
> Well. If you specify 6 textures, it's rather fast.
> For true environment reflection, you have to render the scene 6 times
> into the texture. The speed of that really depends on your card.

No, that's not what I meant. To actually draw the cube-map reflection, the object with the reflective texture on it must be rendered (internally) 6 times - once with each cube-map texture bound to it, surely?

Cheers,
Paul.
From: Jon W. <hp...@mi...> - 2001-09-14 16:40:13
> > Is GeForce the only card to support cube maps? (In both GL and DX)
>
> What I'd like to know is how much slower a cube-mapped surface is
> to render
> than a single textured surface?
>
> Surely it would be six times slower since it has to render the
> object six times?

No, there's only one texture fetch, which chooses between one of six simultaneously bound textures. The difference between cube map texture coordinate generation and general 2D texture coordinate generation is that the cube map math is higher order than just the single texture matrix transform (and uses normal and possibly eye ray as inputs instead of u,v).

Note that cube maps are used for many other things than just environment maps. If you do a true live environment map, then the entire SCENE has to be rendered six times PER OBJECT, plus a last time for actually displaying it; this can get a tad slow. But there are other, more interesting uses of cube maps.

I think most modern cards support cube mapping (if they say "EMBM" chances are good, and that started back in the Matrox G400 days :-). Whether they expose the ARB_texture_cube_map extension to OpenGL is up to the driver writers, but as it's an ARB, support is fairly widespread. (I suggest reading the spec for more details about how it's rendered).

Cheers,
/ h+
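For concreteness, the single-fetch face selection Jon describes works roughly like this - a sketch of the major-axis rule from the ARB_texture_cube_map spec (the function name and signature are illustrative only, not any real API):

// Per-fragment cube-map lookup: the component of the direction vector
// (rx, ry, rz) with the largest magnitude picks one of six faces, and
// the other two components are divided by it to get 2D coordinates on
// that face. Faces 0..5 are +X, -X, +Y, -Y, +Z, -Z, per the spec.
#include <cmath>

void CubeMapLookup(float rx, float ry, float rz,
                   int* face, float* s, float* t)
{
    float ax = std::fabs(rx), ay = std::fabs(ry), az = std::fabs(rz);
    float sc, tc, ma;

    if (ax >= ay && ax >= az) {        // +X or -X face
        *face = rx > 0 ? 0 : 1;
        ma = ax; sc = rx > 0 ? -rz : rz; tc = -ry;
    } else if (ay >= az) {             // +Y or -Y face
        *face = ry > 0 ? 2 : 3;
        ma = ay; sc = rx; tc = ry > 0 ? rz : -rz;
    } else {                           // +Z or -Z face
        *face = rz > 0 ? 4 : 5;
        ma = az; sc = rz > 0 ? rx : -rx; tc = -ry;
    }
    // Map from [-1, 1] to [0, 1] texture space on the chosen face.
    *s = 0.5f * (sc / ma + 1.0f);
    *t = 0.5f * (tc / ma + 1.0f);
}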
From: Paul F. <pf...@at...> - 2001-09-17 09:42:00
Jon Watte wrote:
> > > Is GeForce the only card to support cube maps? (In both GL and DX)
> >
> > What I'd like to know is how much slower a cube-mapped surface is
> > to render
> > than a single textured surface?
> >
> > Surely it would be six times slower since it has to render the
> > object six times?
>
> No, there's only one texture fetch, which chooses between one of six
> simultaneously bound textures.

But surely you can see more than one of the cube reflection's faces on the surface of a reflective sphere, for example?

> The difference between cube map texture
> coordinate generation and general 2D texture coordinate generation is that
> the cube map math is higher order than just the single texture matrix
> transform (and uses normal and possibly eye ray as inputs instead of u,v).
>
> Note that cube maps are used for many other things than just environment
> maps. If you do a true live environment map, then the entire SCENE has to be
> rendered six times PER OBJECT, plus a last time for actually displaying it;
> this can get a tad slow. But there are other, more interesting uses of cube
> maps.

Sure, like per-pixel lighting. I'd have thought that dual-paraboloid mapping would be faster and more correct (less distortion) than cube-mapping for that kind of thing?

> I think most modern cards support cube mapping (if they say "EMBM" chances
> are good, and that started back in the Matrox G400 days :-). Whether they
> expose the ARB_texture_cube_map extension to OpenGL is up to the driver
> writers, but as it's an ARB, support is fairly widespread. (I suggest
> reading the spec for more details about how it's rendered).

Do you have any references for the definitive rendering algorithm for cube-mapping? I can only find very vague stuff. Actually, whilst I'm at it, do you know of any references for EMBM - I thought only cards which could do dependent texture reads were capable of EMBM?

Cheers,
Paul.
From: Jon W. <hp...@mi...> - 2001-09-17 17:52:01
> But surely you can see more than one of the cube reflection's faces on the
> surface of a reflective sphere, for example?

Actually, no, each fragment only fetches texels from one of the six cube sides. That's why cube maps need extra-special careful attention to detail in interpolation/borders/wrapping/clamping (like use GL_CLAMP_TO_EDGE). Of course, each individual fragment can get a texel from a different side of the cube. (In Direct3D, this means each output pixel comes from exactly one side of the cube).

> Sure, like per-pixel lighting. I'd have thought that
> dual-paraboloid would be
> faster and more correct (less distortion) than cube-mapping for
> that kind of thing?

Why? If you pre-calculate your cube map to match the light distribution you want, it's a fairly distortion-free method. The drawback is that it uses more texture memory than the other mapping functions supported in hardware.

> Do you have any references for the definitive rendering algorithm
> for cube-mapping?

http://oss.sgi.com/projects/ogl-sample/registry/ARB/texture_cube_map.txt

As the hardware is more or less the same, Direct3D cube mapping uses the same underlying mechanism, although some details (handedness etc) are different (as is the way you use the API to make it happen).

Cheers,
/ h+
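To make the clamping advice concrete, here is a minimal setup sketch against the extension linked above. It assumes the ARB tokens are available (glext.h on older toolchains), and that the caller supplies six size x size RGB face images in the spec's +X, -X, +Y, -Y, +Z, -Z order:

// Cube-map setup with GL_CLAMP_TO_EDGE on all axes to avoid seams at
// face borders, plus hardware reflection-vector texgen per vertex.
#include <GL/gl.h>
#include <GL/glext.h>

GLuint CreateCubeMap(int size, const unsigned char* faces[6])
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, tex);

    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP_ARB, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    for (int i = 0; i < 6; ++i)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + i, 0, GL_RGB8,
                     size, size, 0, GL_RGB, GL_UNSIGNED_BYTE, faces[i]);

    // Generate reflection-vector texture coordinates in hardware.
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_CUBE_MAP_ARB);
    return tex;
}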
From: <sa...@dn...> - 2002-02-11 00:08:21
Very cool, this certainly looks promising. I browsed quickly through a cached copy of the SWIFT manual on Google, and decomposer is indeed a preprocessor for non-convex models that will output something useable by SWIFT. Unfortunately the UNC site still seems to be down so I can't actually download anything. The SWIFT manual refers to the decomposer manual, which I am also unable to find elsewhere. Can't seem to find any mirrors, either. Can you tell I'm anxious to check this stuff out? (-:

-Sam

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Pierre Terdiman
Sent: Sunday, February 10, 2002 3:13 PM
To: gda...@li...
Subject: Re: [Algorithms] Breaking down a mesh into convex polyhedra

> The UNC site which hosts the SWIFT++ page seems
> to be down at the moment. Does the SWIFT++ package
> actually contain an algorithm to split a mesh into convex surfaces

Yes. Called "decomposer", it's actually an independent lib relying on RAPID, with its own manual and interface. The underlying SWIFT, i.e. a revisited Voronoi walking in a good old Lin-Canny way, only works with convex polytopes. So it sounds exactly like what you need.

[disclaimer: I've never used it]

P.
From: Alain S. <sa...@pt...> - 2002-02-22 17:23:45
Joris Mans: I don't see the use of Z information with spheres that won't ever intersect. Simple textures should do the trick.

Jon Watte: I agree for the environment mapping and the shadows. But 160000 polys for the balls alone is too much if I also want to render some environment (a pool hall with people in the background, etc...). The CLOD doesn't reduce quality but enhances performance, so it's never a bad thing.

Tom Forsyth: Okay, okay, I should use the correct terms. With CLOD I didn't mean the specific technique, but simply what it literally means: a method to reduce polygon count in a continuous way. I already had your method of some -hedron which is subdivided in mind. A real general-purpose CLOD system would of course be overkill to render spheres.

The thing with the stencil buffer you mention sounds jaggy. If I understand correctly, you take a texture of a disc and put it into the stencil buffer, which is 1 bit. At high resolutions this will give worse edges than my blurry textures. This technique is okay when rendering at a rather fixed resolution, but what if someone plays my game at 1600x1200? I can't put that large textures in there. The same goes for the pixel shader method you describe. The problem is that textures have to be a fixed size. The memory would be sufficient, but does most of the hardware handle textures at really big sizes (1024x1024 & higher)?

Thanks guys,
Alain

P.S. Dear Santa Claus, please let me have a DrawAntialiasedDiscPrimitive for next Christmas, will you?
From: Garett B. <gt...@st...> - 2002-02-22 18:48:57
> P.S. Dear Santa Claus, please let me have a
> DrawAntialiasedDiscPrimitive for next Christmas, will you?

Perhaps you could draw the table and pub (or whatever it is you're spending the rest of your 3 gajillion polygon budget on) first, then draw each ball as a triangle fan with GL_POLYGON_SMOOTH to AA the edges of the ball?

However, I really don't understand how you can do a pool game without using real mesh spheres for each ball, since the numbers and stripes need to spin around as the ball rolls. Sixteen balls times 5,000 polys per ball sounds about like Quake 3 Arena minus the death animations.

Word,
Garett Bass
gt...@st...
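A sketch of that suggestion, with the usual GL_POLYGON_SMOOTH caveat: it only takes effect with blending enabled, and the classic GL_SRC_ALPHA_SATURATE / GL_ONE recipe assumes a destination alpha channel and roughly front-to-back drawing order. Function name and segment count here are illustrative, not from the thread:

// Draw a ball silhouette as a triangle fan with edge antialiasing.
#include <cmath>
#include <GL/gl.h>

void DrawSmoothedDisc(float cx, float cy, float radius)
{
    const int SEGMENTS = 64;
    glEnable(GL_POLYGON_SMOOTH);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE); // avoids seams on shared edges

    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                         // fan center
    for (int i = 0; i <= SEGMENTS; ++i) {       // <= so the fan closes
        float a = 2.0f * 3.14159265f * i / SEGMENTS;
        glVertex2f(cx + radius * std::cos(a), cy + radius * std::sin(a));
    }
    glEnd();

    glDisable(GL_BLEND);
    glDisable(GL_POLYGON_SMOOTH);
}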
From: Jaimi M. <ja...@az...> - 2002-10-17 02:49:03
Given a convex polygon (with x,y,z & uv at each vertex), what is the fastest way to calculate the UV coordinates of a point that is inside the polygon?

The brute-force method would seem to be: shoot a line from one of the vertices through the point to the opposite edge, calculate the UV coord of that new point on the edge, and interpolate to the point based on distance... But this seems a little intensive.

Jaimi
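For reference, the standard alternative to the line-shooting method (not proposed in the thread, but the common answer): fan-triangulate the convex polygon, find the triangle containing the point, and interpolate the UVs with barycentric coordinates. A minimal sketch, with placeholder vector types:

// Barycentric UV of point p inside triangle (p0,p1,p2) carrying UVs
// (t0,t1,t2). For a polygon, test each fan triangle (p0,pi,pi+1) and
// use the one whose barycentric weights are all in [0,1].
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

static Vec3 Sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec2 BaryUV(Vec3 p, Vec3 p0, Vec3 p1, Vec3 p2, Vec2 t0, Vec2 t1, Vec2 t2)
{
    Vec3 e0 = Sub(p1, p0), e1 = Sub(p2, p0), e2 = Sub(p, p0);
    float d00 = Dot(e0, e0), d01 = Dot(e0, e1), d11 = Dot(e1, e1);
    float d20 = Dot(e2, e0), d21 = Dot(e2, e1);
    float denom = d00 * d11 - d01 * d01;        // zero only if degenerate
    float v = (d11 * d20 - d01 * d21) / denom;  // weight of p1
    float w = (d00 * d21 - d01 * d20) / denom;  // weight of p2
    float u = 1.0f - v - w;                     // weight of p0
    return { u*t0.u + v*t1.u + w*t2.u, u*t0.v + v*t1.v + w*t2.v };
}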
From: Sylvain V. <vi...@ii...> - 2003-02-11 09:20:22
> I'm using ID-based shadowbuffers at the moment. They're not soft, but
> then neither is any other current solution.

You can do soft shadows using volumes in the stencil buffer and the alpha buffer, with a very small speed penalty. Still, it can be hard in some cases to live without the alpha buffer for other effects. And tweaking the softening algo to get rid of flickering and of the "pill effect" can be quite annoying sometimes.
From: Tiayr C. <ti...@ti...> - 2003-05-12 14:38:03
From: Ian E. <Ia...@ia...> - 2003-05-12 15:32:24
Your clipping volume is too small. #:o)

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...] On Behalf Of Tiayr Cannon
> Sent: Monday, May 12, 2003 7:38 AM
> To: gda...@li...
> Subject: [Algorithms] (no subject)
From: Joris M. <jor...@pa...> - 2003-05-15 17:19:44
You know you have been coding graphics for too long when ..... :)

> Your clipping volume is too small.
>
> #:o)
>
> > -----Original Message-----
> > From: gda...@li...
> > [mailto:gda...@li...] On Behalf Of
> > Tiayr Cannon
> > Sent: Monday, May 12, 2003 7:38 AM
> > To: gda...@li...
> > Subject: [Algorithms] (no subject)
From: Andrew M. <an...@ty...> - 2003-05-30 06:39:44
Does anyone have a link to the paper:

Chin, N., and Feiner, S., Near Real-Time Shadow Generation Using BSP Trees

I've searched Google, but have only found references to the paper.

---------------------
Andrew McGregor
Engine programmer
Tycom Studios
an...@ty...
www.tycomstudios.com
---------------------
From: Shea L.F. <sl...@ra...> - 2003-05-30 08:28:40
I am trying to implement AABB trees for a 3D game. I am unsure of how to actually implement my own AABB trees in C++. I believe I understand the concept of AABB trees. I have little experience with collision detection. Any help would be greatly appreciated. Thanks.
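A minimal sketch of the usual top-down construction, assuming a binary tree split at the median centroid of the longest box axis; the Aabb type, leaf size and split policy are placeholders to adapt, not the only way to do it:

#include <vector>
#include <algorithm>
#include <memory>

struct Aabb {
    float min[3], max[3];
    void Grow(const Aabb& b) {                 // enclose another box
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], b.min[i]);
            max[i] = std::max(max[i], b.max[i]);
        }
    }
    float Center(int axis) const { return 0.5f * (min[axis] + max[axis]); }
};

struct Node {
    Aabb box;                                  // bounds of everything below
    std::unique_ptr<Node> left, right;         // null for leaves
    std::vector<int> prims;                    // primitive indices (leaf only)
};

std::unique_ptr<Node> Build(const std::vector<Aabb>& primBoxes,
                            std::vector<int> prims)
{
    auto node = std::make_unique<Node>();
    node->box = primBoxes[prims[0]];
    for (int p : prims) node->box.Grow(primBoxes[p]);

    if (prims.size() <= 4) {                   // small enough: make a leaf
        node->prims = std::move(prims);
        return node;
    }
    // Split on the longest axis of the node's box, at the median centroid.
    int axis = 0;
    for (int i = 1; i < 3; ++i)
        if (node->box.max[i] - node->box.min[i] >
            node->box.max[axis] - node->box.min[axis]) axis = i;
    auto mid = prims.begin() + prims.size() / 2;
    std::nth_element(prims.begin(), mid, prims.end(),
        [&](int a, int b) {
            return primBoxes[a].Center(axis) < primBoxes[b].Center(axis);
        });
    node->left  = Build(primBoxes, {prims.begin(), mid});
    node->right = Build(primBoxes, {mid, prims.end()});
    return node;
}

Queries then walk the tree, descending only into children whose boxes overlap the query volume, and test primitives exactly at the leaves.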
From: Sander v. R. <mv_...@ho...> - 2003-07-14 09:11:45
> I think a pretty good solution was posted by Per
> http://www.flipcode.com/cgi-bin/msg.cgi?showThread=00008625&forum=3dtheory&id=-1
>
> You started that thread, in fact. Is there a reason you don't want to use
> his linear programming method, or his make-a-polygon method? Maybe you
> could elaborate on what you're looking for...

Sorry for the crosspost. I'm just experimenting with some math, for fun, to see if I can make building brushes and CSG more precise (fewer fp errors) and more efficient. Though precision is more important to me than efficiency, since this is for preprocessing.

Right now I'm working on an algorithm that builds a brush by calculating the intersection points of 3 planes together with edge connectivity (well, actually plane connectivity), and uses that to build my polygons. Basically I store my vertices as a matrix (which contains the three planes), and I never have to clip my polygons. Since all vertices are always calculated with the exact same process and with the exact same number of multiplications (no divisions), numerical errors are at the very least more predictable.

I haven't put much thought into the CSG part yet, but I have a couple of ideas. I need a plane/brush intersection determination algo to determine if, during a CSG operation, a plane actually intersects a brush.
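For reference, the closed-form three-plane intersection that such a stored vertex evaluates to, using the n.p = d plane convention; note the single division by the determinant, which is exactly what deferring evaluation (storing the planes themselves) avoids. Names and the epsilon are illustrative:

// p = (d1 (n2 x n3) + d2 (n3 x n1) + d3 (n1 x n2)) / (n1 . (n2 x n3))
struct V3 { float x, y, z; };

static V3 Cross(V3 a, V3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float Dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 Scale(V3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
static V3 Add(V3 a, V3 b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// Returns false when the planes do not meet in a single point
// (determinant near zero: two planes parallel, or all sharing a line).
bool ThreePlanePoint(V3 n1, float d1, V3 n2, float d2, V3 n3, float d3,
                     V3* out)
{
    V3 c23 = Cross(n2, n3);
    float det = Dot(n1, c23);
    if (det > -1e-6f && det < 1e-6f) return false;
    V3 p = Add(Add(Scale(c23, d1),
                   Scale(Cross(n3, n1), d2)),
               Scale(Cross(n1, n2), d3));
    *out = Scale(p, 1.0f / det);
    return true;
}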
From: Loic B. <lo...@el...> - 2004-07-22 11:01:10
Yeah, I know, this is a classic... Anyway, please share your opinion/experience on this.

I've read many papers and presentations; I was initially going for the shadow buffer (I'm really not interested in the SSVs). So Perspective Shadow Mapping seemed the best, but I remember Jonathan Blow saying "it's a nightmare, don't do it", and the presentation by Tom Forsyth made it clear that it's not easy either.

But my whole point is: is your opinion still the same since Simon Kozlov's article in GPU Gems? It looks like he solved some crucial issues (not easy to implement, I agree). My primary concern is: will I be able to do PSM with point lights (at a reasonable speed/quality)? The second one would be: will I be able to implement the whole technique! :) Because after reading the article from Kozlov and the one from Stamminger and Drettakis, my head nearly exploded...

I also traced the sample from nVidia, following Kozlov's article, and things look good. Well, no point light, and we can't control the directional (or I didn't find how to).

Btw, I forgot to mention the hardware I'm targeting is VS/PS 2.0 at least.

Loic
From: Steven W. <St...@eu...> - 2005-12-05 15:57:27
OK, this is my 2penneth on LCPs.

An LCP is a known mathematical problem, it has a known mathematical solution (the big-matrix solution), and handily it is a logically equivalent problem to that of solving collision/constraint forces for a group of bodies. As the problems are equivalent, there will be an equivalent to the big-matrix solution working within our original problem domain, BUT we don't know what it is and it may be very complex to execute. So we can translate our original problem domain into that of an LCP, use the known mathematical solution and translate it back. Job's a goodun'.

However, if we start to say we are using the iterative solution on the LCP rather than the big matrix, we are suddenly not doing such complex maths on it at all, and I think we COULD work out what this means in our original problem domain. If so, would it not be better to do the work in the original problem domain (remember the two domains are equivalent) rather than wasting time translating into an equivalent (LCP) domain and back again?

It MIGHT be that impulse constraints are in fact already equivalent to the iterative solution to the LCP, and the results of Dirk's solution will be identical to an iteratively solved LCP except with (logically) less overhead.

- Steven - covers head and waits to be shot down ;)

> 3) When you say LCP solver, you mean big matrix solver right? I've mostly
> been using iterative solver so far and the results look similar to those in
> your demo. Do you see any benefits of this system over iterative LCP solver,
> for NextGen specifically?
>
> Thanks,
> Alen
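For readers following along, the "iterative solution" being contrasted with the big-matrix one is typically projected Gauss-Seidel. A minimal sketch, assuming a dense matrix with nonzero diagonal (true for the usual contact formulation where A = J M^-1 J^T is positive semi-definite):

// Projected Gauss-Seidel for the LCP: w = A z + q, w >= 0, z >= 0, z.w = 0.
#include <vector>

void SolveLcpPGS(const std::vector<std::vector<double>>& A,
                 const std::vector<double>& q,
                 std::vector<double>& z,       // in: initial guess, out: solution
                 int iterations)
{
    const size_t n = q.size();
    for (int it = 0; it < iterations; ++it) {
        for (size_t i = 0; i < n; ++i) {
            double r = q[i];
            for (size_t j = 0; j < n; ++j)
                if (j != i) r += A[i][j] * z[j];
            // Unconstrained Gauss-Seidel update, then project onto z >= 0.
            double zi = -r / A[i][i];
            z[i] = zi > 0.0 ? zi : 0.0;
        }
    }
}

Each pass relaxes one contact at a time against the current state of the others, which is why it resembles sequential impulse application - the observation Steven is making in terms of problem domains.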
From: <Pau...@sc...> - 2005-12-05 16:08:04
gda...@li... wrote on 05/12/2005 15:56:11:

> OK, this is my 2penneth on LCPs.
>
> It MIGHT be that impulse constraints are in fact already equivalent
> to the iterative solution to the LCP and the results of Dirk's
> solution will be identical to an iteratively solved LCP except with
> (logically) less overhead.

I thought an LCP wouldn't correctly propagate momentum, making them different...?

Ta,
Paul.
From: Manolache A. <pro...@ya...> - 2009-03-18 09:50:43
That's very interesting Jason, glad to know I'm not the only one to have trouble with fixed time step loops. Do you have any further material on this topic, that may speak about the other refinements you mention in your post?

P.S: I'm new here, how can I post in the same topic, not make a new post for the initial topic?

Jason Hughes wrote:
> Varying framerates is always a problem with a fixed update loop. If you
> have no way to catch up, you eventually will get into a death spiral
> where one slow frame can cascade to the next, and to the next, etc. You
> have to guarantee that nothing inside (or outside) your update ever
> approaches the length of time it is simulating. When you run at 400Hz,
> you have to have pretty small amounts of work to do, and your fixed
> overhead becomes a larger percentage of the time spent (loop overhead,
> etc). If your main thread is waiting on vsync and performing simulation
> steps, it's possible you just missed a vsync and just began waiting for
> the next one, 1/60th of a second wasted. That's a lot of simulation
> steps you have to make up for.
>
> Perhaps you can try implicit integrators so your timesteps can be
> larger? Or limit the number of simulation steps you allow per update,
> which breaks the death spiral. Or divide the actual time between
> frames by a fixed number of steps so you get several timesteps in a row
> at a fixed dt, but not necessarily the same dt every frame.
>
> There are many other refinements. But rest assured you're not the first
> person to run into this problem. :-)
>
> JH

Manolache Adrian wrote:
> Sorry for the typo, line 9 should be:
>
> 9. accumulator -= fixedDt;
>
> In the simulation it's written like above, otherwise I would not get
> numUpdatesThisFrame greater than 1. Sorry for the silly mistake :D
>
> --- On Wed, 3/18/09, Andrew Ellem <andrew@ar...> wrote:
>
> > From: Andrew Ellem <andrew@ar...>
> > Subject: Re: [Algorithms] Fixed time step loop with varying framerate
> > To: prog_ady@ya...
> > Date: Wednesday, March 18, 2009, 6:43 AM
> >
> > Is this the actual code you're running?
> >
> > This loop here looks pretty broken to me:
> >
> > 7. while (accumulator >= fixedDt)
> >    {
> > 8.     UpdateSimulation(fixedDt);
> > 9.     accumulator -= accumulator;
> > 10.    numUpdatesThisFrame++;
> >    }
> >
> > Shouldn't it be
> >
> >     accumulator -= fixedDt;
> >
> > I don't understand how numUpdatesThisFrame isn't always 1.
> >
> > > I've stumbled on a problem these days while programming a small
> > > simulation. I've read a few articles with a main title of "Fix
> > > your time step" and concluded on the following loop:
> > >
> > > 0.  numTotalUpdates = 0;
> > > 1.  fixedDt = 1.0f / 400.0f; // how small is enough?
> > > 2.  while (IsRunning())
> > >     {
> > > 3.      DispatchEvents();
> > > 4.      static float accumulator = 0.0f;
> > > 5.      numUpdatesThisFrame = 0;
> > > 6.      accumulator += GetFrameTime(); // time since last update
> > > 7.      while (accumulator >= fixedDt)
> > >         {
> > > 8.          UpdateSimulation(fixedDt);
> > > 9.          accumulator -= accumulator;
> > > 10.         numUpdatesThisFrame++;
> > >         }
> > > 11.     numTotalUpdates++;
> > > 12.     renderInterpolator = accumulator / fixedDt; // safe to say it's in 0..1
> > > 13.     Render(renderInterpolator);
> > >     }
> > >
> > > As far as I know this is a physics-engine-friendly main loop, because
> > > physics engines deal great with small fixed time steps when
> > > approximating integration.
> > >
> > > As I was saying, I was running a simple simulation of a circle that
> > > was hitting obstacles and being reflected as expected. Actually there
> > > was more going on on the screen, but the main idea of the simulation
> > > is a circle in a 2D environment colliding in a breakout-game fashion.
> > >
> > > I repeatedly ran the simulation with VSync on and observed an annoying
> > > stuttering/choppy movement of the circle. The stuttering is almost
> > > random-like. After some profiling I found out that numUpdatesThisFrame
> > > was varying kind of weird: it was growing from 6, 7 to 15 or even 24.
> > > I changed line 6 to
> > >
> > > 6'. accumulator = some_constant;
> > >
> > > and ran the simulation again with VSync on. The stuttering was gone,
> > > and since the value of some_constant was tweaked for my processor
> > > speed, the motion was neither too slow nor too fast.
> > >
> > > So something was happening that caused big times between frames, which
> > > implied an unnaturally big numUpdatesThisFrame value. The framerate
> > > was very high, though; without VSync on I was getting a few thousand
> > > frames per second, so I thought to myself that bothering with why the
> > > time between frames was sometimes unnaturally big was not the way to
> > > solve the problem.
> > >
> > > I thought the way to tackle this issue is to compensate for the random
> > > big times between frames by writing additional code in the main loop.
> > > I never dealt with the problem of fixed time steps and varying
> > > framerate before, and I don't really know how to start.
> > >
> > > One way I thought of was to first detect when a big shift occurs and
> > > then try to correct it.
> > >
> > > The numbers of updates per frame form a sequence of integer numbers.
> > > The sequence is unlikely to be convergent in the pure mathematical
> > > sense, but I think it's intuitive to observe that its values will get
> > > close to some fixed value. I can calculate an approximation of this
> > > value by an arithmetic mean:
> > >
> > > approximateConvergenceValue = Sum(sequence[i]) / numTotalUpdates, 1 <= i <= numTotalUpdates;
> > >
> > > A better approximation would use a quadratic mean:
> > >
> > > approximateConvergenceValue = Square_Root(Sum(sequence[i]*sequence[i]) / numTotalUpdates), 1 <= i <= numTotalUpdates;
> > >
> > > This is pretty close to the most frequent values of
> > > numUpdatesThisFrame, so I guess it's a good approximation. I'm puzzled
> > > how to apply this detection to correct the framerate while also
> > > keeping approximateConvergenceValue a good approximation. The
> > > simulation would ideally self-adapt to work the same using this
> > > calculation (minimizing the effect seen on screen as a result of the
> > > adaptation).
> > >
> > > So how does one usually deal with fixed time step loops and varying
> > > frame rate?
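Pulling the thread's fixes together, a sketch of the corrected loop: the accumulator is decremented by fixedDt (not zeroed, per Andrew's catch), and the number of steps per frame is capped to break the death spiral Jason describes. IsRunning, DispatchEvents, GetFrameTime, UpdateSimulation and Render are the original post's hypothetical functions, and the cap value is a tunable assumption:

const float fixedDt = 1.0f / 400.0f;
const int   maxStepsPerFrame = 32;   // cap: tune to your slowest frame

float accumulator = 0.0f;
while (IsRunning()) {
    DispatchEvents();
    accumulator += GetFrameTime();   // wall-clock time since last frame

    int steps = 0;
    while (accumulator >= fixedDt && steps < maxStepsPerFrame) {
        UpdateSimulation(fixedDt);
        accumulator -= fixedDt;      // NOT "accumulator -= accumulator"
        ++steps;
    }
    if (steps == maxStepsPerFrame)   // fell too far behind: drop the debt
        accumulator = 0.0f;

    Render(accumulator / fixedDt);   // interpolation factor in [0, 1)
}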
From: Lorenzo P. <pas...@ul...> - 2012-06-27 23:23:15
This thing's a plague... received a couple today from unrelated sources... :/