gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1414)
From: John S. <jse...@ho...> - 2000-08-12 20:05:11
Actually, I hadn't thought of simply rejecting whole patches that are backfacing, and leaving the rest up to the driver... That may be the wisest solution. Thanks.

I am "cheating" for different tessellation amounts right now, and the results are very good. With the algorithm I've written, the edges match up almost exactly, so the slim triangles are very slim indeed. They simply cover tiny holes that would show up due to floating-point inaccuracies.

And hey, if your ranting gives me more insight into what I'm doing, go right ahead and rant! ;-)

John Sensebe
jse...@ho...
Quantum mechanics is God's way of ensuring that we never really know what's going on.

Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium!

----- Original Message -----
From: "Tom Forsyth" <to...@mu...>
To: <gda...@li...>
Sent: Saturday, August 12, 2000 2:42 PM
Subject: RE: [Algorithms] Bicubic normals for a bicubic world

> Hmmmm... OK. Tricky.
>
> Incidentally, I would question whether you want to bother with BFC of
> subpatches. I would go with the philosophy that since most patches will be
> wholly rejected or wholly accepted, the borderline cases are few in number.
> However, if you're doing a BFC calculation + test per sub-patch, that seems
> like a fair amount of work for _all_ visible patches, just to save a bit on
> borderline patches.
>
> My instincts would be to BFC a top-level patch, then set up your Difference
> Engine to the required tessellation level and just draw the whole thing,
> letting the hardware do tri-level BFC.
>
> The edges where different levels of tessellation meet require a bit of
> thought. "Cheating" by simply stitching them together with slim tris after
> drawing each patch at different levels is a possible option, and very quick
> (it's a second forward-difference engine, but only in one direction, along
> the crease). But you tend to get lighting differences between the two
> patches, which can be visible.
>
> You can also special-case the edges of the patch drawer so that it fans the
> edge tris to match adjacent patches, which looks quite good. And it's pretty
> good for speed, since you can do this by shrinking the full-speed
> tessellation by one on each edge that needs higher tessellation (fewer than
> half the edges of the scene will need extra fanning(*)), then doing the
> fanned edges with special-case (but exceedingly similar-looking) code.
>
> Anyway, there's probably some very good reason why you're doing it
> recursively, so I've probably just been ranting. Sorry!
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
>
> (*) Ah - almost true. True if you don't BFC the patches :-) Close enough for
> performance considerations though.
From: Tom F. <to...@mu...> - 2000-08-12 19:44:47
Hmmmm... OK. Tricky.

Incidentally, I would question whether you want to bother with BFC of subpatches. I would go with the philosophy that since most patches will be wholly rejected or wholly accepted, the borderline cases are few in number. However, if you're doing a BFC calculation + test per sub-patch, that seems like a fair amount of work for _all_ visible patches, just to save a bit on borderline patches.

My instincts would be to BFC a top-level patch, then set up your Difference Engine to the required tessellation level and just draw the whole thing, letting the hardware do tri-level BFC.

The edges where different levels of tessellation meet require a bit of thought. "Cheating" by simply stitching them together with slim tris after drawing each patch at different levels is a possible option, and very quick (it's a second forward-difference engine, but only in one direction, along the crease). But you tend to get lighting differences between the two patches, which can be visible.

You can also special-case the edges of the patch drawer so that it fans the edge tris to match adjacent patches, which looks quite good. And it's pretty good for speed, since you can do this by shrinking the full-speed tessellation by one on each edge that needs higher tessellation (fewer than half the edges of the scene will need extra fanning(*)), then doing the fanned edges with special-case (but exceedingly similar-looking) code.

Anyway, there's probably some very good reason why you're doing it recursively, so I've probably just been ranting. Sorry!

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

(*) Ah - almost true. True if you don't BFC the patches :-) Close enough for performance considerations though.

> -----Original Message-----
> From: John Sensebe [mailto:jse...@ho...]
> Sent: 12 August 2000 19:34
> To: gda...@li...
> Subject: Re: [Algorithms] Bicubic normals for a bicubic world
>
> Well, as I've said in a previous post, I need to fool more than the eye,
> since I want to cull backfacing subpatches (at least, as well as possible).
>
> Therefore, since my patches are bicubic, I wanted to use a bicubic
> approximation for the normals. It should be better than biquadratic, and it
> fits well into my patch subdivision algorithm.
>
> Having decided on the method, I'm having trouble figuring out what the
> parameters should be. Should I just take the two tangent vectors at each
> control point of a Bezier patch, cross them, and use the resulting vectors
> as the control points of a Bezier patch describing the normals of the first?
>
> BTW, I plan on letting D3D handle the normalization for me, so at least I
> can let the hardware do it if it's doing T&L.
>
> John Sensebe
> jse...@ho...
> Quantum mechanics is God's way of ensuring that we never really know what's
> going on.
>
> Check out http://members.home.com/jsensebe to see prophecies for the coming
> Millennium!
>
> ----- Original Message -----
> From: "Tom Forsyth" <to...@mu...>
> To: <gda...@li...>
> Sent: Saturday, August 12, 2000 4:42 AM
> Subject: RE: [Algorithms] Bicubic normals for a bicubic world
>
> > Sorry - I wasn't suggesting that the normals _are_ biquadratic. But I was
> > suggesting that a biquadratic approximation was going to be pretty close in
> > most cases. Good enough to fool the eye, which really only _needs_ G0 for
> > lighting, and even a very rough-and-ready C/G1 reduces Mach banding to very
> > tolerable levels.
> >
> > So all you need is a curve that can be reliably G0, and get pretty rough G1
> > at vertices, and not too far out on edges in non-extreme cases. Biquadratic
> > seems to fit the bill well.
> >
> > Renormalisation is a pain, but you'd have to do that whatever function you
> > used (unless you used some astonishingly high-power function and use enough
> > control points to match the ideal curve, which is probably more expensive
> > than renormalisation).
> >
> > Oh hang on - since _rational_ biquadratic surfaces can describe conic
> > sections (including spheres and sections of spheres), could you maybe use
> > them for your normals, and thus not have to renormalise - or at least get
> > close enough that the eye won't notice? Or maybe my brain can't visualise
> > the spatial equivalent of a unit-length normal well enough to see that a
> > rational biquadratic isn't good enough. Well, just a thought.
> >
> > Tom Forsyth - Muckyfoot bloke.
> > Whizzing and pasting and pooting through the day.
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
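[Editor's note: Tom's "BFC a top-level patch" suggestion can be sketched as a whole-patch test against the control net. Everything below (the types, the `PatchCertainlyBackfacing` name, the facet heuristic) is illustration only, not code from the thread; a strictly conservative test would use a proper normal cone, but the idea is the same: reject a patch only when every facet of its control net faces away from the eye.]

```cpp
// Heuristic whole-patch backface rejection for a 4x4 Bezier control net.
// Assumes counter-clockwise winding seen from the front side.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True only if every facet of the control net faces away from the eye,
// i.e. the patch is (heuristically) backfacing and can be rejected whole,
// leaving per-triangle BFC on borderline patches to the hardware.
bool PatchCertainlyBackfacing(const Vec3 cp[4][4], Vec3 eye) {
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) {
            Vec3 n = cross(sub(cp[i][j + 1], cp[i][j]),   // facet normal from
                           sub(cp[i + 1][j], cp[i][j]));  // two net edges
            if (dot(n, sub(eye, cp[i][j])) > 0.0f)        // facet faces the eye
                return false;
        }
    }
    return true;
}
```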
From: John S. <jse...@ho...> - 2000-08-12 18:34:02
Well, as I've said in a previous post, I need to fool more than the eye, since I want to cull backfacing subpatches (at least, as well as possible).

Therefore, since my patches are bicubic, I wanted to use a bicubic approximation for the normals. It should be better than biquadratic, and it fits well into my patch subdivision algorithm.

Having decided on the method, I'm having trouble figuring out what the parameters should be. Should I just take the two tangent vectors at each control point of a Bezier patch, cross them, and use the resulting vectors as the control points of a Bezier patch describing the normals of the first?

BTW, I plan on letting D3D handle the normalization for me, so at least I can let the hardware do it if it's doing T&L.

John Sensebe
jse...@ho...
Quantum mechanics is God's way of ensuring that we never really know what's going on.

Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium!

----- Original Message -----
From: "Tom Forsyth" <to...@mu...>
To: <gda...@li...>
Sent: Saturday, August 12, 2000 4:42 AM
Subject: RE: [Algorithms] Bicubic normals for a bicubic world

> Sorry - I wasn't suggesting that the normals _are_ biquadratic. But I was
> suggesting that a biquadratic approximation was going to be pretty close in
> most cases. Good enough to fool the eye, which really only _needs_ G0 for
> lighting, and even a very rough-and-ready C/G1 reduces Mach banding to very
> tolerable levels.
>
> So all you need is a curve that can be reliably G0, and get pretty rough G1
> at vertices, and not too far out on edges in non-extreme cases. Biquadratic
> seems to fit the bill well.
>
> Renormalisation is a pain, but you'd have to do that whatever function you
> used (unless you used some astonishingly high-power function and use enough
> control points to match the ideal curve, which is probably more expensive
> than renormalisation).
>
> Oh hang on - since _rational_ biquadratic surfaces can describe conic
> sections (including spheres and sections of spheres), could you maybe use
> them for your normals, and thus not have to renormalise - or at least get
> close enough that the eye won't notice? Or maybe my brain can't visualise
> the spatial equivalent of a unit-length normal well enough to see that a
> rational biquadratic isn't good enough. Well, just a thought.
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
From: gl <gl...@nt...> - 2000-08-12 17:33:04
Metacreations have an implementation (VIPM style), called 'Metastream'. I've seen it in use on a site selling minidisc players once - works very well.

http://www.metastream.com/
--
gl

----- Original Message -----
From: "Tom Forsyth" <to...@mu...>
To: <gda...@li...>
Sent: Saturday, August 12, 2000 11:16 AM
Subject: RE: [Algorithms] 3D mesh transmittion

> This seems like a job for - Progressive Meshes!(*)
>
> Work has already been done on this by several people - you PM the mesh and
> send it progressively down to the client. So they get an immediate rough
> view of the object, and as time passes, the resolution improves as more
> detail arrives.
>
> You could, as you suggest, use VDPM to bias the transmission towards sending
> the visible tris first. However, as one of the points of 3D (over 2D) is
> that you can rotate the object, many applications will want to be able to
> manipulate the object fairly soon after starting the download. I have a
> feeling that under these circumstances, the extra effort and bandwidth to
> cope with a changing partial VDPM is not going to be worth it. Note - this
> is not the case of simply displaying a VDPM on a monitor, because the
> download is so much slower than the rotation. Imagine getting a tenth of the
> way through drawing your VDPM frame, and then rotating the object and trying
> to work out how to optimise for the new rotation, but using the information
> that has already been downloaded. So your data structure has to be able to
> cope with patches of high-rez detail in different places. Sounds like a
> nightmare to code, but maybe someone already has.
>
> I'd go for simple VIPM - the overhead is probably much lower - I'm sure you
> can compress the edge-collapse information quite easily - even the
> slimmed-down version that we use for runtime VIPM is hugely bloated to
> reduce decompression effort. In fact, I think the only data you need to
> reconstruct an edge expand is (up to) three indices and a vertex. If you
> think of an edge collapse:
>
> (G)  A               (G)  A
>   \ / \ /              \ | /
>    \ / \ /              \|/
>  --B-----C--   -->     --B--
>    / \ / \              /|\
>   / \ / \              / | \
>        D                  D
>
> Then the only info you need to reconstruct it is the vertices A, B, D and C
> (some collapses may be missing A or D if they are boundary edges). You know
> the index of C - it's the "next" one, since vertices are stored in collapse
> order - so that's implicit, and doesn't need sending (though obviously the
> vertex data for C does need sending). And you know that A and D must be
> connected to B, so you don't need to use a full index for them, you simply
> need to enumerate them somehow from the vertices connected to B.
>
> For example, in the diagram above, B has eight neighbours before the edge
> expansion. So call the neighbour with the lowest index number 0 (in this
> case, let's say it's vertex G), and number clockwise from there. So A would
> be 1, and D would be 5. A bit of Huffman compression on those sorts of
> numbers, and you're talking extremely small overheads in total file size
> compared to a full mesh description - but it's also progressive. Splendid!
>
> As I say, several companies have investigated this, including Intel and
> Microsoft IIRC - they probably have similar systems. Sadly, I don't have any
> links, but I remember this was a hot topic a few years ago. And then nobody
> did anything interesting (or at least particularly visible) with it, which
> was a shame.
>
> And it would be superb for huge-world multiplayer games. Everyone and
> everything could have their own highly-detailed meshes (including faces,
> clothing, etc) that is VIPM-downloaded to clients as they come into view.
> The more you look at someone, the more detailed their character gets. Which
> is pretty cool. Ah.... all these different game types I'd love to be doing.
> Sod the gameplay - feel the technology :-)
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
>
> (*) You watch - I'll sneak them into the GJK thread somehow :-)
>
> > -----Original Message-----
> > From: Yuan-chung Lee [mailto:yz...@CC...]
> > Sent: 12 August 2000 09:25
> > To: gda...@li...
> > Subject: [Algorithms] 3D mesh transmittion
> >
> > Hello:
> >
> > I have a question about 3D mesh transmission across the network.
> > A 3D mesh will be transmitted from a server to a client for display. If the
> > position of the camera is fixed for that mesh, we can first transmit the
> > visible polygons to shorten the latency of network transmission.
> >
> > The method to determine those visible polygons is to pre-render the 3D mesh
> > and assign each polygon a different color with flat shading. Then we have
> > the indices of the visible polygons in the frame buffer. This can be done
> > off line.
> >
> > After transmitting the visible polygons, we can go on to transmit the rest
> > of the polygons to prevent cracks when moving the camera.
> >
> > Does the method have any problems? Has any paper discussed this before?
> >
> > _______________________________________________
> > GDAlgorithms-list mailing list
> > GDA...@li...
> > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
From: Charles B. <cb...@cb...> - 2000-08-12 17:05:38
A lot of web-3d companies are doing exactly this (sending meshes over the internet with VIPM). Metastream is the major player. I think Intel has some involvement in their technology (?). Basically, they have a progressive mesh file format, with textures and other cues embedded in the file in the right places, so that you can just send this one file sequentially, and always be getting the most important byte at a given moment.

The other major web-3d companies will soon have progressive mesh downloaders if they don't already. I wrote one for Wild Tangent, so we'll see when it shows up there. I believe Pulse 3d will do PM in the future as well.

Unfortunately, people are lazy, and the emergence of widespread broadband is taking the pressure off developers to really send their data efficiently. When we worked on web3d at Eclipse for Genesis3d we were aiming at 56k modems for delivery, so we put data packing as one of our highest priorities...

At 11:16 AM 8/12/00 +0100, you wrote:
>This seems like a job for - Progressive Meshes!(*)
>
>Work has already been done on this by several people - you PM the mesh and
>send it progressively down to the client. So they get an immediate rough
>view of the object, and as time passes, the resolution improves as more
>detail arrives.
>
>You could, as you suggest, use VDPM to bias the transmission towards sending
>the visible tris first. However, as one of the points of 3D (over 2D) is
>that you can rotate the object, many applications will want to be able to
>manipulate the object fairly soon after starting the download. I have a
>feeling that under these circumstances, the extra effort and bandwidth to
>cope with a changing partial VDPM is not going to be worth it. Note - this
>is not the case of simply displaying a VDPM on a monitor, because the
>download is so much slower than the rotation. Imagine getting a tenth of the
>way through drawing your VDPM frame, and then rotating the object and trying
>to work out how to optimise for the new rotation, but using the information
>that has already been downloaded. So your data structure has to be able to
>cope with patches of high-rez detail in different places. Sounds like a
>nightmare to code, but maybe someone already has.
>
>I'd go for simple VIPM - the overhead is probably much lower - I'm sure you
>can compress the edge-collapse information quite easily - even the
>slimmed-down version that we use for runtime VIPM is hugely bloated to
>reduce decompression effort. In fact, I think the only data you need to
>reconstruct an edge expand is (up to) three indices and a vertex. If you
>think of an edge collapse:
>
>(G)  A               (G)  A
>  \ / \ /              \ | /
>   \ / \ /              \|/
> --B-----C--   -->     --B--
>   / \ / \              /|\
>  / \ / \              / | \
>       D                  D
>
>Then the only info you need to reconstruct it is the vertices A, B, D and C
>(some collapses may be missing A or D if they are boundary edges). You know
>the index of C - it's the "next" one, since vertices are stored in collapse
>order - so that's implicit, and doesn't need sending (though obviously the
>vertex data for C does need sending). And you know that A and D must be
>connected to B, so you don't need to use a full index for them, you simply
>need to enumerate them somehow from the vertices connected to B.
>
>For example, in the diagram above, B has eight neighbours before the edge
>expansion. So call the neighbour with the lowest index number 0 (in this
>case, let's say it's vertex G), and number clockwise from there. So A would
>be 1, and D would be 5. A bit of Huffman compression on those sorts of
>numbers, and you're talking extremely small overheads in total file size
>compared to a full mesh description - but it's also progressive. Splendid!
>
>As I say, several companies have investigated this, including Intel and
>Microsoft IIRC - they probably have similar systems. Sadly, I don't have any
>links, but I remember this was a hot topic a few years ago. And then nobody
>did anything interesting (or at least particularly visible) with it, which
>was a shame.
>
>And it would be superb for huge-world multiplayer games. Everyone and
>everything could have their own highly-detailed meshes (including faces,
>clothing, etc) that is VIPM-downloaded to clients as they come into view.
>The more you look at someone, the more detailed their character gets. Which
>is pretty cool. Ah.... all these different game types I'd love to be doing.
>Sod the gameplay - feel the technology :-)
>
>Tom Forsyth - Muckyfoot bloke.
>Whizzing and pasting and pooting through the day.
>
>(*) You watch - I'll sneak them into the GJK thread somehow :-)
>
>> -----Original Message-----
>> From: Yuan-chung Lee [mailto:yz...@CC...]
>> Sent: 12 August 2000 09:25
>> To: gda...@li...
>> Subject: [Algorithms] 3D mesh transmittion
>>
>> Hello:
>>
>> I have a question about 3D mesh transmission across the network.
>> A 3D mesh will be transmitted from a server to a client for display. If the
>> position of the camera is fixed for that mesh, we can first transmit the
>> visible polygons to shorten the latency of network transmission.
>>
>> The method to determine those visible polygons is to pre-render the 3D mesh
>> and assign each polygon a different color with flat shading. Then we have
>> the indices of the visible polygons in the frame buffer. This can be done
>> off line.
>>
>> After transmitting the visible polygons, we can go on to transmit the rest
>> of the polygons to prevent cracks when moving the camera.
>>
>> Does the method have any problems? Has any paper discussed this before?
>>
>> _______________________________________________
>> GDAlgorithms-list mailing list
>> GDA...@li...
>> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
>
>_______________________________________________
>GDAlgorithms-list mailing list
>GDA...@li...
>http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list

-------------------------------------------------------
Charles Bloom     cb...@cb...     http://www.cbloom.com
From: Conor S. <cs...@tp...> - 2000-08-12 12:49:58
As I said - in both directions. Not that hard.

> How are the normals biquadratic? And the two tangent surfaces can't be
> biquadratic, because you're only differentiating in one parameter, leaving
> the other as-is. You differentiate in u, it's still cubic in v, and
> vice-versa.
>
> I really want to have a good solution to this, so please bear with me.
>
> Thanks.
>
> John Sensebe
> jse...@ho...
> Quantum mechanics is God's way of ensuring that we never really know what's
> going on.
>
> Check out http://members.home.com/jsensebe to see prophecies for the coming
> Millennium!
>
> ----- Original Message -----
> From: "Conor Stokes" <cs...@tp...>
> To: <gda...@li...>
> Sent: Friday, August 11, 2000 9:58 PM
> Subject: Re: [Algorithms] Bicubic normals for a bicubic world
>
> > Actually, if you think about it - the normals are totally quadratic. And
> > if you take a derivative in 2 directions (across S, and across T) you do
> > get 2 quadratics. Not only that, the cross product is resilient to
> > transforms - so it remains the same. However, normalisation still needs
> > to occur.
> >
> > This is why I precalc my normals and reference them from a map in most
> > cases.
> >
> > Conor Stokes
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
From: Tom F. <to...@mu...> - 2000-08-12 10:26:34
This seems like a job for - Progressive Meshes!(*)

Work has already been done on this by several people - you PM the mesh and send it progressively down to the client. So they get an immediate rough view of the object, and as time passes, the resolution improves as more detail arrives.

You could, as you suggest, use VDPM to bias the transmission towards sending the visible tris first. However, as one of the points of 3D (over 2D) is that you can rotate the object, many applications will want to be able to manipulate the object fairly soon after starting the download. I have a feeling that under these circumstances, the extra effort and bandwidth to cope with a changing partial VDPM is not going to be worth it. Note - this is not the case of simply displaying a VDPM on a monitor, because the download is so much slower than the rotation. Imagine getting a tenth of the way through drawing your VDPM frame, and then rotating the object and trying to work out how to optimise for the new rotation, but using the information that has already been downloaded. So your data structure has to be able to cope with patches of high-rez detail in different places. Sounds like a nightmare to code, but maybe someone already has.

I'd go for simple VIPM - the overhead is probably much lower - I'm sure you can compress the edge-collapse information quite easily - even the slimmed-down version that we use for runtime VIPM is hugely bloated to reduce decompression effort. In fact, I think the only data you need to reconstruct an edge expand is (up to) three indices and a vertex. If you think of an edge collapse:

(G)  A               (G)  A
  \ / \ /              \ | /
   \ / \ /              \|/
 --B-----C--   -->     --B--
   / \ / \              /|\
  / \ / \              / | \
       D                  D

Then the only info you need to reconstruct it is the vertices A, B, D and C (some collapses may be missing A or D if they are boundary edges). You know the index of C - it's the "next" one, since vertices are stored in collapse order - so that's implicit, and doesn't need sending (though obviously the vertex data for C does need sending). And you know that A and D must be connected to B, so you don't need to use a full index for them, you simply need to enumerate them somehow from the vertices connected to B.

For example, in the diagram above, B has eight neighbours before the edge expansion. So call the neighbour with the lowest index number 0 (in this case, let's say it's vertex G), and number clockwise from there. So A would be 1, and D would be 5. A bit of Huffman compression on those sorts of numbers, and you're talking extremely small overheads in total file size compared to a full mesh description - but it's also progressive. Splendid!

As I say, several companies have investigated this, including Intel and Microsoft IIRC - they probably have similar systems. Sadly, I don't have any links, but I remember this was a hot topic a few years ago. And then nobody did anything interesting (or at least particularly visible) with it, which was a shame.

And it would be superb for huge-world multiplayer games. Everyone and everything could have their own highly-detailed meshes (including faces, clothing, etc) that is VIPM-downloaded to clients as they come into view. The more you look at someone, the more detailed their character gets. Which is pretty cool. Ah.... all these different game types I'd love to be doing. Sod the gameplay - feel the technology :-)

Tom Forsyth - Muckyfoot bloke.
Whizzing and pasting and pooting through the day.

(*) You watch - I'll sneak them into the GJK thread somehow :-)

> -----Original Message-----
> From: Yuan-chung Lee [mailto:yz...@CC...]
> Sent: 12 August 2000 09:25
> To: gda...@li...
> Subject: [Algorithms] 3D mesh transmittion
>
> Hello:
>
> I have a question about 3D mesh transmission across the network.
> A 3D mesh will be transmitted from a server to a client for display. If the
> position of the camera is fixed for that mesh, we can first transmit the
> visible polygons to shorten the latency of network transmission.
>
> The method to determine those visible polygons is to pre-render the 3D mesh
> and assign each polygon a different color with flat shading. Then we have
> the indices of the visible polygons in the frame buffer. This can be done
> off line.
>
> After transmitting the visible polygons, we can go on to transmit the rest
> of the polygons to prevent cracks when moving the camera.
>
> Does the method have any problems? Has any paper discussed this before?
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
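[Editor's note: the compact edge-expand record described above - C implicit as the "next" vertex in collapse order, A and D stored as small ordinals into B's neighbour ring, numbered clockwise from the lowest-indexed neighbour - can be sketched as below. The struct layout and names are hypothetical illustration, not any real VIPM file format:]

```cpp
// Decoding the compact edge-expand record: both encoder and decoder agree
// on a canonical numbering of B's neighbour ring, so A and D can be sent
// as tiny ordinals (well suited to Huffman coding) instead of full indices.
#include <algorithm>
#include <vector>

struct EdgeExpand {
    int b;         // full index of the surviving vertex B
    int aOrdinal;  // A's position in B's canonical neighbour ring (-1 if boundary)
    int dOrdinal;  // D's position in B's canonical neighbour ring (-1 if boundary)
    // ...plus the vertex data for C itself, streamed alongside; C's index
    // is implicit (next vertex in collapse order).
};

// ring: B's neighbours in clockwise order, arbitrary start point.
// Returns the ring rotated so it starts at the lowest vertex index -
// the shared numbering ("call the lowest-indexed neighbour 0").
std::vector<int> CanonicalRing(std::vector<int> ring) {
    std::rotate(ring.begin(),
                std::min_element(ring.begin(), ring.end()),
                ring.end());
    return ring;
}

// Recover a neighbour's full vertex index from its transmitted ordinal.
int DecodeNeighbour(const std::vector<int>& ring, int ordinal) {
    return ordinal < 0 ? -1 : CanonicalRing(ring)[ordinal];
}
```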
From: Tom F. <to...@mu...> - 2000-08-12 09:45:44
|
Sorry - I wasn't suggesting that the normals _are_ biquadratic. But I was suggesting that a biquadratic approximation was going to be pretty close in most cases. Good enough to fool the eye, which really only _needs_ G0 for lighting, and even a very rough-and-ready C/G1 reduces Mach banding to very tolerable levels. So all you need is a curve that can be reliably G0, and get pretty rough G1 at vertices, and not too far out on edges in non-extreme cases. Biquadratic seems to fit the bill well. Renormalisation is a pain, but you'd have to do that whatever function you used (unless you used some astonishingly high-power function and use enough control points to match the ideal curve, which is probably more expensive than renormalisation). Oh hang on - since _rational_ biquadratic surfaces can describe conic sections (including spheres and sections of spheres), could you maybe use them for your normals, and thus not have to renormalise - or at least get close enough that the eye won't notice? Or maybe my brain can't visualise the spacial equivalent of a unit-length normal well enough to see that a rational biquadratic isn't good enough. Well, just a thought. Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day. > -----Original Message----- > From: John Sensebe [mailto:jse...@ho...] > Sent: 12 August 2000 04:39 > To: gda...@li... > Subject: Re: [Algorithms] Bicubic normals for a bicubic world > > > How are the normals biquadratic? And the two tangent surfaces can't be > biquadratic, because you're only derivating in one parameter, > leaving the > other as-is. You derive in u, it's still cubic in v, and vice-versa. > > I really want to have a good solution to this, so please bear with me. > > Thanks. > > John Sensebe > jse...@ho... > Quantum mechanics is God's way of ensuring that we never > really know what's > going on. > > Check out http://members.home.com/jsensebe to see prophecies > for the coming > Millennium! 
> > > ----- Original Message ----- > From: "Conor Stokes" <cs...@tp...> > To: <gda...@li...> > Sent: Friday, August 11, 2000 9:58 PM > Subject: Re: [Algorithms] Bicubic normals for a bicubic world > > > > Actually, if you think about it - The normals are > totally quadratic. > And > > if you do a derivitive in 2 > > directions (across S, and across T) you do get 2 > quadratics. Not only > that, > > the cross product is > > resiliant to transforms - So it remains the same. However, > normalisation > > still needs to occur. > > > > This is why I precalc my normals and reference them > from a map in most > > cases. > > > > Conor Stokes > > > > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
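For readers following the thread: a small sketch of the exact computation being approximated (Python, with names I've made up; it is not from any of the posts). It evaluates a bicubic Bezier patch's normal as the cross product of the two partial-derivative tangents, then renormalises. Each tangent is degree 2 in the differentiated parameter but still degree 3 in the other, which is why the true normal is not biquadratic and a biquadratic (or rational biquadratic) surface can only approximate it.

```python
import math

def bern(n, i, t):
    # Bernstein basis B_i^n(t); zero outside 0 <= i <= n.
    if i < 0 or i > n:
        return 0.0
    return math.comb(n, i) * t**i * (1.0 - t)**(n - i)

def patch_normal(P, u, v):
    # P is a 4x4 grid of 3D control points.  The u-tangent uses
    # d/dt B_i^3(t) = 3 * (B_{i-1}^2(t) - B_i^2(t)), so it is
    # quadratic in u but still cubic in v (and vice versa for v).
    tu = [0.0, 0.0, 0.0]
    tv = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            du = 3.0 * (bern(2, i - 1, u) - bern(2, i, u)) * bern(3, j, v)
            dv = bern(3, i, u) * 3.0 * (bern(2, j - 1, v) - bern(2, j, v))
            for k in range(3):
                tu[k] += du * P[i][j][k]
                tv[k] += dv * P[i][j][k]
    # Cross product of the tangents, then the renormalisation step
    # the thread is trying to avoid.
    n = [tu[1] * tv[2] - tu[2] * tv[1],
         tu[2] * tv[0] - tu[0] * tv[2],
         tu[0] * tv[1] - tu[1] * tv[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```

Sampling this at a grid of (u, v) values is one way to fit a candidate biquadratic approximation and measure how far off it is.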
From: Akbar A. <sye...@ea...> - 2000-08-12 09:35:03
|
>I'll let you know if I can locate it. many thanks. it's so nice having friends at ms :-) hehe. peace. akbar A. "We want technology for the sake of the story, not for its own sake. When you look back, say 10 years from now, current technology will seem quaint" Pixars' Edwin Catmull. -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Tony Cox Sent: Saturday, August 12, 2000 3:48 AM To: 'gda...@li...' Subject: RE: [Algorithms] a fur- paper >I though Jed Lengyel from Microsoft was planning to present some >paper on realtime fur at Siggraph2000 >because I saw it presented here in march or so, >but I dont see it on his site: >http://www.research.microsoft.com/~jedl/ >check the Siggraph site. >It is basically the same techinque as the realtime grass, >although I think it uses different techniques at different >distances, eg. real lines for really close up. >It looks good and worked pretty nice in the demo he showed. You might have also seen a version of his fur demo at the DirectX developer day at GDC. I've been trying to track down an online version of the paper (as you note, it's not on his MSR webpage), I'll let you know if I can locate it. Tony Cox - DirectX Luminary Windows Gaming Developer Relations Group http://msdn.microsoft.com/directx _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Tony C. <to...@mi...> - 2000-08-12 08:48:52
|
>I though Jed Lengyel from Microsoft was planning to present some >paper on realtime fur at Siggraph2000 >because I saw it presented here in march or so, >but I dont see it on his site: >http://www.research.microsoft.com/~jedl/ >check the Siggraph site. >It is basically the same techinque as the realtime grass, >although I think it uses different techniques at different >distances, eg. real lines for really close up. >It looks good and worked pretty nice in the demo he showed. You might have also seen a version of his fur demo at the DirectX developer day at GDC. I've been trying to track down an online version of the paper (as you note, it's not on his MSR webpage), I'll let you know if I can locate it. Tony Cox - DirectX Luminary Windows Gaming Developer Relations Group http://msdn.microsoft.com/directx |
From: Yuan-chung L. <yz...@CC...> - 2000-08-12 08:25:51
|
Hello: I have a question about transmitting a 3D mesh across the network. A 3D mesh will be transmitted from a server to a client for display. If the camera position is fixed for that mesh, we can transmit the visible polygons first to shorten the perceived network latency. To determine which polygons are visible, we pre-render the 3D mesh, assigning each polygon a different flat-shaded color; the frame buffer then holds the indices of the visible polygons. This can be done off-line. After transmitting the visible polygons, we go on to transmit the remaining polygons, to prevent cracks when the camera moves. Does this method have any problems? Has any paper discussed this before? |
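The colour-ID pass described above can be sketched in a few lines (Python; all names are hypothetical, not from the post): pack each polygon index into a unique 24-bit flat-shading colour, read the pre-rendered frame buffer back to recover the visible set, then order the transmission visible-first.

```python
def index_to_color(i):
    # Pack a polygon index into a 24-bit RGB colour (supports up to
    # 2^24 polygons); the mesh is flat-shaded with these colours.
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def color_to_index(c):
    r, g, b = c
    return (r << 16) | (g << 8) | b

def visible_set(framebuffer):
    # framebuffer: iterable of (r, g, b) pixels from the off-line render.
    return {color_to_index(pixel) for pixel in framebuffer}

def transmission_order(num_polys, visible):
    # Visible polygons first (low perceived latency), the rest after,
    # so cracks close once the camera starts moving.
    first = [i for i in range(num_polys) if i in visible]
    rest = [i for i in range(num_polys) if i not in visible]
    return first + rest
```

One caveat worth noting: anti-aliasing and texture filtering must be off for the ID render, or blended border pixels will decode to bogus indices.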
From: John S. <jse...@ho...> - 2000-08-12 03:39:37
|
How are the normals biquadratic? And the two tangent surfaces can't be biquadratic, because you're only differentiating in one parameter, leaving the other as-is. Differentiate in u, and it's still cubic in v, and vice-versa. I really want to have a good solution to this, so please bear with me. Thanks. John Sensebe jse...@ho... Quantum mechanics is God's way of ensuring that we never really know what's going on. Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium! ----- Original Message ----- From: "Conor Stokes" <cs...@tp...> To: <gda...@li...> Sent: Friday, August 11, 2000 9:58 PM Subject: Re: [Algorithms] Bicubic normals for a bicubic world > Actually, if you think about it - The normals are totally quadratic. And > if you do a derivative in 2 > directions (across S, and across T) you do get 2 quadratics. Not only that, > the cross product is > resilient to transforms - So it remains the same. However, normalisation > still needs to occur. > > This is why I precalc my normals and reference them from a map in most > cases. > > Conor Stokes > > |
From: Pierre T. <p.t...@wa...> - 2000-08-12 03:13:46
|
> provided have been simplified already from cramer's rule. I'm not seeing > why the new "closest point in simplex to origin" is perpendicular to the > affine combination of the points in simplex, so I've stopped at that point Maybe this can help: http://www.codercorner.com/gjk00.jpg http://www.codercorner.com/gjk01.jpg Feel free to correct possible little errors (Ron?). Pierre |
From: Conor S. <cs...@tp...> - 2000-08-12 02:50:23
|
Actually, if you think about it - The normals are totally quadratic. And if you do a derivative in 2 directions (across S, and across T) you do get 2 quadratics. Not only that, the cross product is resilient to transforms - So it remains the same. However, normalisation still needs to occur. This is why I precalc my normals and reference them from a map in most cases. Conor Stokes > Ok, maybe reducing it to 2D wasn't such a good idea. > > What we really want is a curve that describes the surface normal of a bicubic > surface. This normal is the cross product of two tangents, one in u and one > in v. Each of these tangent surfaces is a partial derivative, so while it's > a quadratic in one parameter, it's still cubic in the other. > > If you can fit that into a biquadratic surface, I'd like to see it, 'cause I > can sure use it! ;-) > > John Sensebe > jse...@ho... > Quantum mechanics is God's way of ensuring that we never really know what's > going on. > > Check out http://members.home.com/jsensebe to see prophecies for the coming > Millennium! > > > ----- Original Message ----- > From: "Tom Forsyth" <to...@mu...> > To: <gda...@li...> > Sent: Thursday, August 10, 2000 9:55 AM > Subject: RE: [Algorithms] Bicubic normals for a bicubic world > > > > No - in the S shape, each component of the normal goes from value A to > value > > B and back to value A, which can be described by a quadratic. > > > > Tom Forsyth - Muckyfoot bloke. > > Whizzing and pasting and pooting through the day. > > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Matthew M. <ma...@me...> - 2000-08-12 01:12:45
|
Hey Pierre; You may want to check out the swept sphere volume stuff -- the guys at UNC seem to think it beats OBBs in many cases. http://www.cs.unc.edu/~geom/SSV/ Not that you don't seem to be having fun... -- Matt > -----Original Message----- > From: gda...@li... > [mailto:gda...@li...]On Behalf Of > Pierre Terdiman > Sent: Friday, August 11, 2000 5:21 PM > To: gda...@li... > Subject: Re: [Algorithms] GJK > > > > I think it's just hard to understand because the recursive solutions > > provided have been simplified already from cramer's rule. I'm > not seeing > > why the new "closest point in simplex to origin" is perpendicular to the > > affine combination of the points in simplex, so I've stopped at > that point > > to figure it out. Once I understand that, I think the rest of the math > > should fall out nicely. > > Oh, I just re-derived that yesterday. I'm afraid this is the result of a > brute-force calculus. Start from the Appendix II in the original Gilbert > paper. Take the expression of f (lamda^2, lamda^3, ..., lamda^r). Then, if > you compute df / d^i you just find the expected orthogonality between the > expression of the closest point and any (Pi - P0), assuming Pi are the > original points. Well, I admit this is not a difficult one, but it sure is > somewhat delicate. It also gives you the matrix (41). Anyway it's a lot > clearer after having rederived it. There are a awful lot of > shortcuts taken > in all the GJK-related papers I read, which makes the global understanding > very painful. For example, Van Der Bergen in his GJK paper takes this > orthogonality as granted, and at first it's very shocking. I > think the most > complete derivation (the original one) could be found in Daniel Johnson's > PhD thesis.... But of course there's absolutely no way to find > that one. [if > someone knows about it.... tell me]. Even the derivations in Gilbert's > original article seem quite superficial. > > While I'm at it : Gilbert wrote "Since f is convex...". 
How do we > know that > function is convex ? > > > > > At any rate, it's hacky because gino van den bergen (note my > ignorance on > > how to properly capitalize his name) saves the relevant dot products for > > points that are in the simplex. I wonder if this level of > optimization is > > necessary given the cost of everything else that is typically > contained in > > a rigid body physics simulator, but I guess I'll see. :) > > Well, I suppose you could probably invert the (41) matrix with > the standard > inversion code of your matrix class without even using those nasty Delta > notations ! But since the algorithm takes a combinatoric > approach, it would > need to invert up to 15 4x4 matrices (correct me if I'm wrong), > most of the > involved terms beeing recomputed many times. I suppose those optimisations > are worth the pain - but once everything else works, of course. > > BTW I think the intuitive reason for limiting the number of vertices to 4, > is that the closest point figure we're looking for can be something like: > - point vs point (2 vertices) > - edge vs point (3 vertices) > - edge vs edge (4 vertices) > - face vs point (4 vertices) > > Nothing else - unless I'm missing something. That is, 4 points is > enough to > catch all the possible figures. And since we're running the algorithm as > many times as needed to examine all possible combinations anyway, > there's no > need to bother including a fifth point. Moreover it would imply > inverting a > 5x5 matrix (or computing more Deltas), which can be tedious. > > > All in all this is my understanding of that little part of the algorithm. > Don't take it as granted, I just figured out all of that > yesterday - and the > day before I knew almost nothing to GJK, so it could very well be totally > wrong. Comments, corrections, ideas, are welcome. > > Pierre > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... 
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pierre T. <p.t...@wa...> - 2000-08-12 00:30:48
|
> I think it's just hard to understand because the recursive solutions > provided have been simplified already from Cramer's rule. I'm not seeing > why the new "closest point in simplex to origin" is perpendicular to the > affine combination of the points in simplex, so I've stopped at that point > to figure it out. Once I understand that, I think the rest of the math > should fall out nicely. Oh, I just re-derived that yesterday. I'm afraid this is the result of a brute-force calculus. Start from Appendix II in the original Gilbert paper. Take the expression of f (lambda^2, lambda^3, ..., lambda^r). Then, if you compute df / dlambda^i you just find the expected orthogonality between the expression of the closest point and any (Pi - P0), assuming Pi are the original points. Well, I admit this is not a difficult one, but it sure is somewhat delicate. It also gives you the matrix (41). Anyway it's a lot clearer after having rederived it. There are an awful lot of shortcuts taken in all the GJK-related papers I read, which makes the global understanding very painful. For example, van den Bergen in his GJK paper takes this orthogonality as granted, and at first it's very shocking. I think the most complete derivation (the original one) could be found in Daniel Johnson's PhD thesis.... But of course there's absolutely no way to find that one. [if someone knows about it.... tell me]. Even the derivations in Gilbert's original article seem quite superficial. While I'm at it: Gilbert wrote "Since f is convex...". How do we know that function is convex? > At any rate, it's hacky because gino van den bergen (note my ignorance on > how to properly capitalize his name) saves the relevant dot products for > points that are in the simplex. I wonder if this level of optimization is > necessary given the cost of everything else that is typically contained in > a rigid body physics simulator, but I guess I'll see. :) Well, I suppose you could probably invert the (41) matrix with the standard inversion code of your matrix class without even using those nasty Delta notations! But since the algorithm takes a combinatoric approach, it would need to invert up to 15 4x4 matrices (correct me if I'm wrong), most of the involved terms being recomputed many times. I suppose those optimisations are worth the pain - but once everything else works, of course. BTW I think the intuitive reason for limiting the number of vertices to 4 is that the closest point figure we're looking for can be something like: - point vs point (2 vertices) - edge vs point (3 vertices) - edge vs edge (4 vertices) - face vs point (4 vertices) Nothing else - unless I'm missing something. That is, 4 points is enough to catch all the possible figures. And since we're running the algorithm as many times as needed to examine all possible combinations anyway, there's no need to bother including a fifth point. Moreover it would imply inverting a 5x5 matrix (or computing more Deltas), which can be tedious. All in all this is my understanding of that little part of the algorithm. Don't take it as granted, I just figured out all of that yesterday - and the day before I knew almost nothing about GJK, so it could very well be totally wrong. Comments, corrections, ideas are welcome. Pierre |
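Pierre's combinatoric search over sub-simplices can be sketched directly (Python; an illustrative brute-force version of the distance subproblem, not Gilbert's Delta formulation or anyone's production code): for every non-empty subset of up to four points, solve the orthogonality conditions x . (Pi - P0) = 0 together with sum(lambda) = 1, keep solutions whose barycentric weights are all non-negative, and return the nearest.

```python
import itertools
import math

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting; returns None
    # for a singular (degenerate sub-simplex) system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[r][n] / M[r][r] for r in range(n)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def closest_to_origin(points):
    # Brute force over all non-empty sub-simplices (15 for 4 points).
    # For each subset, x = sum(lambda_k * Pk) must satisfy
    # x . (Pi - P0) = 0 for all i, with sum(lambda) = 1.
    best, best_dist = None, float("inf")
    for r in range(1, len(points) + 1):
        for sub in itertools.combinations(points, r):
            A = [[1.0] * r]
            b = [1.0]
            for i in range(1, r):
                diff = [sub[i][k] - sub[0][k] for k in range(3)]
                A.append([dot(sub[k], diff) for k in range(r)])
                b.append(0.0)
            lam = solve(A, b)
            if lam is None or any(w < -1e-9 for w in lam):
                continue  # closest point lies outside this sub-simplex
            x = [sum(lam[k] * sub[k][c] for k in range(r)) for c in range(3)]
            dist = math.sqrt(dot(x, x))
            if dist < best_dist:
                best, best_dist = x, dist
    return best
```

The 2^4 - 1 = 15 subsets here are exactly the 15 systems Pierre counts; a real implementation shares the dot products between subsets instead of re-solving each one from scratch.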
From: John S. <jse...@ho...> - 2000-08-12 00:08:06
|
Ok, maybe reducing it to 2D wasn't such a good idea. What we really want is a curve that describes the surface normal of a bicubic surface. This normal is the cross product of two tangents, one in u and one in v. Each of these tangent surfaces is a partial derivative, so while it's a quadratic in one parameter, it's still cubic in the other. If you can fit that into a biquadratic surface, I'd like to see it, 'cause I can sure use it! ;-) John Sensebe jse...@ho... Quantum mechanics is God's way of ensuring that we never really know what's going on. Check out http://members.home.com/jsensebe to see prophecies for the coming Millennium! ----- Original Message ----- From: "Tom Forsyth" <to...@mu...> To: <gda...@li...> Sent: Thursday, August 10, 2000 9:55 AM Subject: RE: [Algorithms] Bicubic normals for a bicubic world > No - in the S shape, each component of the normal goes from value A to value > B and back to value A, which can be described by a quadratic. > > Tom Forsyth - Muckyfoot bloke. > Whizzing and pasting and pooting through the day. > |
From: Tom F. <to...@mu...> - 2000-08-11 20:34:33
|
There are pros and cons. Skinned models generally use fewer vertices; segmented models, however, can be fully accelerated by the D3D pipeline in a fairly trivial way. Then again, if you do that you need to break up your stream of vertices with matrix changes fairly often, which is often (usually?) a lose. Skinned characters tend to look smoother and less like they're insects wearing human skin in a grisly Hannibal-Lecter way. Depending on how you do them, they can be accelerated by the D3D pipeline to varying degrees, which is nice. But the no.1 cool thing about them in my book is that they VIPM really really well. You can also get away with fewer tris and vertices than on a segmented model for equivalent perceived detail. You may have noticed that I'm a big fan of scalability. So it's swings and roundabouts - on the one hand segmented is simple. But it's a bit ugly, not as flexible and uses more tris and vertices. Tom Forsyth - Muckyfoot bloke. Whizzing and pasting and pooting through the day. > -----Original Message----- > From: Leigh McRae [mailto:lei...@ro...] > Sent: 11 August 2000 21:04 > To: gda...@li... > Subject: [Algorithms] Single mesh vs segments > > > I have an engine that uses segmented models. I am thinking > of switching it > to a single skinned mesh but I am ignorant in this area. If > the same human > character was modeled using segments and then modeled with a > single skin > would one be faster than the other? Do they take about the > same memory > space? Which looks better? > > Leigh McRae > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
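For what it's worth, the core of the single-skin approach Tom describes is a per-vertex weighted blend of bone transforms, commonly called linear blend skinning. A minimal sketch (Python; names and conventions are mine, not from the post): blending positions across bones is what removes the seams a segmented model shows at joints.

```python
def skin_vertex(position, influences):
    # Linear blend skinning: transform the vertex by each influencing
    # bone's 3x4 matrix (rotation + translation column), then blend
    # the results by the bone weights (assumed to sum to 1).
    x, y, z = position
    out = [0.0, 0.0, 0.0]
    for m, w in influences:
        for r in range(3):
            out[r] += w * (m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3])
    return tuple(out)
```

In practice the weights come from the modelling package and are normalised per vertex; hardware or D3D vertex-blending variants of this loop are roughly what "accelerated to varying degrees" refers to.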
From: Alex P. <al...@Ex...> - 2000-08-11 20:30:44
|
I thought Jed Lengyel from Microsoft was planning to present some paper on realtime fur at Siggraph 2000 because I saw it presented here in March or so, but I don't see it on his site: http://www.research.microsoft.com/~jedl/ check the Siggraph site. It is basically the same technique as the realtime grass, although I think it uses different techniques at different distances, e.g. real lines for really close up. It looks good and worked pretty nicely in the demo he showed. Alex -----Original Message----- From: Sam Kuhn [mailto:sa...@ip...] Sent: Friday, August 11, 2000 11:04 AM To: gda...@li... Subject: Re: [Algorithms] a fur- paper On a sideline, has anyone seen that grass demo from nvidia, I wonder if it's possible to use this technique to render cheap realtime "fur" on 3d models. sam >hey, >i am looking for this paper, preferably somewhere that doesn't charge. > >J. Kajiya, "Rendering Fur With Three Dimensional Textures," >i scoured the net, but ended up in failure. it would be really cool if >somebody could point me out to this one if they have seen it lying around. > >if any of you know of any other fur papers, please let me know. > >http://www.research.microsoft.com/profiles/kajiya.htm >that was the closest i found, and apparently kajiya does not have any papers >up on his site; >all i could find was his "profile" :-| > >this was rather odd cause _most_ of the other ms research guys have "web >sites". >i even tried www.research.microsoft.com/~kajiya in conformance with the >_other_ "websites", it popped up the "profile"; >maybe it's cause he is the Assistant Director. >oh well... > >it was rather humorous skimming through the profile though. >"Now, Kajiya is working on an equally dramatic project, the convergence of >the television and the computer. " > >does this mean ms is going to sell tv's? > > > >peace. >akbar A. > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... 
>http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
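The shell idea behind the grass/fur demos discussed here can be sketched as follows (Python; my guess at the general shell technique, not Lengyel's actual implementation): extrude the base mesh along its vertex normals into a stack of layers, and draw each layer with an alpha-tested noise texture so the stack reads as strands.

```python
def make_shells(vertices, normals, layers, spacing):
    # Each shell is the base mesh pushed outward along the (unit)
    # vertex normals; shell k sits at distance k * spacing.  Rendered
    # back-to-front with an alpha-tested noise texture, the stack of
    # shells approximates a volume of fur or grass.
    shells = []
    for layer in range(1, layers + 1):
        d = layer * spacing
        shells.append([tuple(v[i] + d * n[i] for i in range(3))
                       for v, n in zip(vertices, normals)])
    return shells
```

Per the post above, a real system would also switch representations with distance, e.g. drawing actual line primitives when the camera is very close.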
From: Leigh M. <lei...@ro...> - 2000-08-11 20:03:57
|
I have an engine that uses segmented models. I am thinking of switching it to a single skinned mesh, but I am ignorant in this area. If the same human character were modeled using segments and then modeled with a single skin, would one be faster than the other? Do they take about the same memory space? Which looks better? Leigh McRae |
From: Sam K. <sa...@ip...> - 2000-08-11 18:04:38
|
On a sideline, has anyone seen that grass demo from nvidia, I wonder if it's possible to use this technique to render cheap realtime "fur" on 3d models. sam >hey, >i am looking for this paper, preferably somewhere that doesn't charge. > >J. Kajiya, "Rendering Fur With Three Dimensional Textures," >i scoured the net, but ended up in failure. it would be really cool if >somebody could point me out to this one if they have seen it lying around. > >if any of you know of any other fur papers, please let me know. > >http://www.research.microsoft.com/profiles/kajiya.htm >that was the closest i found, and apparently kajiya does not have any papers >up on his site; >all i could find was his "profile" :-| > >this was rather odd cause _most_ of the other ms research guys have "web >sites". >i even tried www.research.microsoft.com/~kajiya in conformance with the >_other_ "websites", it popped up the "profile"; >maybe it's cause he is the Assistant Director. >oh well... > >it was rather humorous skimming through the profile though. >"Now, Kajiya is working on an equally dramatic project, the convergence of >the television and the computer. " > >does this mean ms is going to sell tv's? > > > >peace. >akbar A. > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Will P. <wi...@cs...> - 2000-08-11 17:32:37
|
> > I think all descriptions I have read of it have > > been just hacky ways to determine how to convex-ly combine (by finding the > > best lamda values to use) the points in the previous simplex and the new > > support point to find the minimal convex simplex that contains that new > > support point. Does that seem correct? > > I think so - as far as I can tell, and including the word "hacky" :) I think it's just hard to understand because the recursive solutions provided have been simplified already from cramer's rule. I'm not seeing why the new "closest point in simplex to origin" is perpendicular to the affine combination of the points in simplex, so I've stopped at that point to figure it out. Once I understand that, I think the rest of the math should fall out nicely. At any rate, it's hacky because gino van den bergen (note my ignorance on how to properly capitalize his name) saves the relevant dot products for points that are in the simplex. I wonder if this level of optimization is necessary given the cost of everything else that is typically contained in a rigid body physics simulator, but I guess I'll see. :) Will ---- Will Portnoy http://www.cs.washington.edu/homes/will |
From: <lor...@bb...> - 2000-08-11 17:05:42
|
From: lordlee (Lord Lee) Date: Sat Aug 12 01:04:45 2000 Subject: Question about 3D mesh transmission Hello: I have a question about transmitting a 3D mesh across the network. A 3D mesh will be transmitted from a server to a client for display. If the camera position is fixed for that mesh, we can transmit the visible polygons first to shorten the perceived network latency. To determine which polygons are visible, we pre-render the 3D mesh, assigning each polygon a different flat-shaded color; the frame buffer then holds the indices of the visible polygons. This can be done off-line. After transmitting the visible polygons, we go on to transmit the remaining polygons, to prevent cracks when the camera moves. Does this method have any problems? Has any paper discussed this before? |
From: Akbar A. <sye...@ea...> - 2000-08-11 15:09:21
|
>I have Kajiya's one but only as a paper version. >Want some ugly scanned pics? :) i suppose, is it worth it? peace, akbar A. -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Pierre Terdiman Sent: Friday, August 11, 2000 4:09 AM To: gda...@li... Subject: Re: [Algorithms] a fur- paper > if any of you know of any other fur papers, please let me know. >From the ACM Digital Library: Fake fur rendering; Dan B. Goldman; Proceedings of the 24th annual conference on Computer graphics & interactive techniques, 1997, Pages 127 - 134 Art-based rendering of fur, grass, and trees; Michael A. Kowalski, Lee Markosian, J. D. Northrup, Lubomir Bourdev, Ronen Barzel, Loring S. Holden and John F. Hughes; Proceedings of the SIGGRAPH 1999 annual conference on Computer graphics, 1999, Pages 433 - 438 Wet and messy fur; Armin Bruderlin; Proceedings of the conference on SIGGRAPH 99: conference abstracts and applications, 1999, Page 284 Rendering fur with three dimensional textures; J. T. Kahiya and T. L. Kay; Conference proceedings on Computer graphics, 1989, Pages 271 - 280 I have Kajiya's one but only as a paper version. Want some ugly scanned pics? :) Pierre _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Mike A. <MI...@cl...> - 2000-08-11 12:25:51
|
Hi I've recently messed around with this, although only briefly as the triangle count increased way too quickly. It was fun though. I personally used the winged-edge structure; references to that can certainly be found in Foley, van Dam. However, in the material I found there was a lot of talk of using the half-edge structure; this was recently documented at www.flipcode.com How much neighbour information you need will also depend on the subdivision method. I hope that helps. cheers mike -----Original Message----- From: Danny Laarmans [mailto:bad...@ya...] Sent: 11 August 2000 13:14 To: gda...@li... Subject: [Algorithms] Subdivision surfaces Hello, I'm writing a demo that shows what the effect is of using subdivision surfaces instead of other modeling techniques, but I have a little problem: I'm searching for an efficient mesh structure. Could somebody tell me where I can find some information on the net about efficient mesh structures? Thanks! Danny Laarmans, bad...@ya... __________________________________________________ Do You Yahoo!? Kick off your party with Yahoo! Invites. http://invites.yahoo.com/ _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
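A minimal half-edge builder in the spirit of the flipcode write-up mentioned above (Python sketch; the field names are my own): each directed edge stores the vertex it points to, its face, the next edge around that face, and its oppositely-directed twin, which is exactly the neighbour information subdivision schemes keep querying.

```python
class HalfEdge:
    __slots__ = ("vertex", "twin", "next", "face")
    def __init__(self):
        self.vertex = None  # index of the vertex this half-edge points TO
        self.twin = None    # opposite half-edge, or None on a boundary
        self.next = None    # next half-edge around the same face
        self.face = None    # index of the face this half-edge borders

def build_half_edges(faces):
    # faces: list of vertex-index tuples, counter-clockwise winding.
    edges = {}       # (from_vertex, to_vertex) -> HalfEdge, for twin lookup
    halfedges = []
    for f, face in enumerate(faces):
        n = len(face)
        first = len(halfedges)
        for i in range(n):
            he = HalfEdge()
            he.vertex = face[(i + 1) % n]
            he.face = f
            halfedges.append(he)
        for i in range(n):
            halfedges[first + i].next = halfedges[first + (i + 1) % n]
        for i in range(n):
            a, b = face[i], face[(i + 1) % n]
            he = halfedges[first + i]
            if (b, a) in edges:          # the opposite edge already exists
                he.twin = edges[(b, a)]
                edges[(b, a)].twin = he
            edges[(a, b)] = he
    return halfedges
```

Boundary half-edges simply keep twin = None; for Loop or Catmull-Clark subdivision you walk next/twin cycles to enumerate a vertex's one-ring neighbourhood.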