Re: [Algorithms] 3D mesh transmission
From: gl <gl...@nt...> - 2000-08-12 17:33:04
Metacreations have an implementation (VIPM style), called 'Metastream'.
I've seen it in use on a site selling minidisc players once - works very
well. http://www.metastream.com/
--
gl

----- Original Message -----
From: "Tom Forsyth" <to...@mu...>
To: <gda...@li...>
Sent: Saturday, August 12, 2000 11:16 AM
Subject: RE: [Algorithms] 3D mesh transmission

> This seems like a job for - Progressive Meshes!(*)
>
> Work has already been done on this by several people - you PM the mesh
> and send it progressively down to the client. So they get an immediate
> rough view of the object, and as time passes, the resolution improves
> as more detail arrives.
>
> You could, as you suggest, use VDPM to bias the transmission towards
> sending the visible tris first. However, as one of the points of 3D
> (over 2D) is that you can rotate the object, many applications will
> want to be able to manipulate the object fairly soon after starting
> the download. I have a feeling that under these circumstances, the
> extra effort and bandwidth to cope with a changing partial VDPM is not
> going to be worth it. Note - this is not the case of simply displaying
> a VDPM on a monitor, because the download is so much slower than the
> rotation. Imagine getting a tenth of the way through drawing your VDPM
> frame, and then rotating the object and trying to work out how to
> optimise for the new rotation, but using the information that has
> already been downloaded. So your data structure has to be able to cope
> with patches of high-rez detail in different places. Sounds like a
> nightmare to code, but maybe someone already has.
>
> I'd go for simple VIPM - the overhead is probably much lower - I'm
> sure you can compress the edge-collapse information quite easily -
> even the slimmed-down version that we use for runtime VIPM is hugely
> bloated to reduce decompression effort. In fact, I think the only data
> you need to reconstruct an edge expand is (up to) three indices and a
> vertex.
> If you think of an edge collapse:
>
>    (G)  A              (G)  A
>      \ / \ /             \ | /
>       \ / \ /             \|/
>     --B-----C--   -->    --B--
>       / \ / \             /|\
>      / \ / \             / | \
>         D                   D
>
> Then the only info you need to reconstruct it is the vertices A, B, D
> and C (some collapses may be missing A or D if they are boundary
> edges). You know the index of C - it's the "next" one, since vertices
> are stored in collapse order - so that's implicit, and doesn't need
> sending (though obviously the vertex data for C does need sending).
> And you know that A and D must be connected to B, so you don't need to
> use a full index for them, you simply need to enumerate them somehow
> from the vertices connected to B.
>
> For example, in the diagram above, B has eight neighbours before the
> edge expansion. So call the neighbour with the lowest index number 0
> (in this case, let's say it's vertex G), and number clockwise from
> there. So A would be 1, and D would be 5. A bit of Huffman compression
> on those sorts of numbers, and you're talking extremely small
> overheads in total file size compared to a full mesh description - but
> it's also progressive. Splendid!
>
> As I say, several companies have investigated this, including Intel
> and Microsoft IIRC - they probably have similar systems. Sadly, I
> don't have any links, but I remember this was a hot topic a few years
> ago. And then nobody did anything interesting (or at least
> particularly visible) with it, which was a shame.
>
> And it would be superb for huge-world multiplayer games. Everyone and
> everything could have their own highly-detailed meshes (including
> faces, clothing, etc) that are VIPM-downloaded to clients as they come
> into view. The more you look at someone, the more detailed their
> character gets. Which is pretty cool. Ah.... all these different game
> types I'd love to be doing. Sod the gameplay - feel the technology :-)
>
> Tom Forsyth - Muckyfoot bloke.
> Whizzing and pasting and pooting through the day.
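The neighbour-enumeration trick described above can be sketched in a few
lines of code. This is a minimal illustration, not Tom's actual format:
the helper names are mine, and it orders B's neighbours by index rather
than clockwise (a real clockwise ordering would need the geometry).

```python
def neighbours(triangles, b):
    """All vertices sharing an edge with b, in a canonical order.

    The post numbers B's neighbours clockwise from the lowest index;
    sorting by index is a simplification that still gives a canonical
    ordering both encoder and decoder can agree on.
    """
    ns = set()
    for tri in triangles:
        if b in tri:
            ns.update(v for v in tri if v != b)
    return sorted(ns)

def encode_split(triangles, b, a, d):
    """Replace the full indices of A and D with small ordinals
    among B's neighbours - these compress far better (e.g. with
    Huffman coding) than raw vertex indices."""
    ns = neighbours(triangles, b)
    return ns.index(a), ns.index(d)

def decode_split(triangles, b, a_ord, d_ord):
    """Recover A and D from their ordinals; the decoder rebuilds the
    same neighbour ordering from the mesh it already has."""
    ns = neighbours(triangles, b)
    return ns[a_ord], ns[d_ord]
```

With a closed fan of eight triangles around B, a neighbour's ordinal
fits in 3 bits, versus a full 16- or 32-bit index, which is the size
win the post is pointing at.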
> (*) You watch - I'll sneak them into the GJK thread somehow :-)
>
> > -----Original Message-----
> > From: Yuan-chung Lee [mailto:yz...@CC...]
> > Sent: 12 August 2000 09:25
> > To: gda...@li...
> > Subject: [Algorithms] 3D mesh transmission
> >
> > Hello:
> >
> > I have a question about 3D mesh transmission across the network.
> > A 3D mesh is to be transmitted from a server to a client for
> > display. If the position of the camera is fixed for that mesh, we
> > can transmit the visible polygons first to shorten the latency of
> > network transmission.
> >
> > The method to determine those visible polygons is to pre-render the
> > 3D mesh, assigning each polygon a different colour with flat
> > shading. The frame buffer then holds the indices of the visible
> > polygons. This can be done offline.
> >
> > After transmitting the visible polygons, we can go on to transmit
> > the remaining polygons to prevent cracks when the camera moves.
> >
> > Does this method have any problems? Has any paper discussed this
> > before?
> >
> > _______________________________________________
> > GDAlgorithms-list mailing list
> > GDA...@li...
> > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
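The colour-ID visibility pass in the original question can be sketched
as follows. This shows only the index-to-colour mapping and the
framebuffer readback; the actual flat-shaded render (OpenGL or
otherwise) is omitted, and the function names are illustrative. One
real caveat: lighting, blending, dithering, and antialiasing must all
be disabled for the pass, or the readback colours will not match the
face IDs.

```python
def face_to_rgb(i):
    """Pack a face index (0 .. 2^24 - 1) into an (r, g, b) byte
    triple, one unique flat colour per polygon."""
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def rgb_to_face(r, g, b):
    """Inverse mapping: recover the face index from a pixel colour."""
    return (r << 16) | (g << 8) | b

def visible_faces(framebuffer):
    """Given the read-back framebuffer (an iterable of (r, g, b)
    pixels), return the set of face indices that survived the depth
    test, i.e. the visible polygons to transmit first."""
    return {rgb_to_face(*px) for px in framebuffer}
```

Since this runs offline on the server, the cost of the extra render and
readback does not matter; the client just receives the visible-face
list first, then the remainder.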