gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1391)
From: Scott, J. <JS...@ra...> - 2000-09-06 21:14:11
|
zlib is easy once you grok it (isn't anything tho?) Its usage is completely obfuscated in libpng too. Basically, to decompress...

    z_stream zdata;
    memset(&zdata, 0, sizeof(z_stream));
    inflateInit(&zdata);
    zdata.next_in = (byte *)compressed_data;
    zdata.avail_in = compressed_data_size;
    while (zdata.avail_in)
    {
        zdata.next_out = (byte *)uncompressed_dest;
        zdata.avail_out = number_of_bytes_you_want;
        inflate(&zdata, Z_SYNC_FLUSH);
    }
    inflateEnd(&zdata);

So, for PNG, you'd decompress a byte for the filter type, then row bytes for the line of data, and repeat for the number of lines. Everything returns Z_OK normally, or Z_STREAM_END when you have decompressed all the data (normally when avail_in is 0). To compress, do the same, but deflate rather than inflate. I've no need to remind you gents to sprinkle liberal amounts of error checking in there =)

Cheers
John

-----Original Message-----
From: Bass, Garrett T. [mailto:gt...@ut...]

I'm also interested in this information. Zlib breaks my noggin.

-----Original Message-----

I've read a lot here about loading PNGs, with and without using the libraries (libpng & zlib), both pros and cons. I was wondering if anyone had any sample code for just loading PNGs. I've read through the source for both libraries and there's a LOT of stuff I can't get my head around (most of which I think is for saving and such, which I don't want). I can parse through the chunks and all, but the zlib stuff for decoding each line is stumping me. Anyone out there that can give me a hand?

Jeremy Bake
RtroActiv
RAMetal
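To put John's recipe in one place, here is a fuller sketch along the same lines. It assumes all the IDAT chunk payloads have already been concatenated into one in-memory buffer, and it inflates one filter byte plus one row of data per scanline. The function name, buffer split, and parameters are illustrative, not from libpng or any other codebase, and error handling is deliberately minimal.

    #include <zlib.h>
    #include <string.h>

    /* Decompress PNG image data one scanline at a time.
       idat_data/idat_size : all IDAT chunk payloads, concatenated in order.
       rowbytes            : bytes per scanline after the filter byte
                             (e.g. width * bytes_per_pixel for 8-bit RGB/RGBA).
       Returns 0 on success, -1 on a zlib error. */
    int decode_png_scanlines(const unsigned char *idat_data, unsigned long idat_size,
                             unsigned long rowbytes, unsigned long height,
                             unsigned char *filters,   /* height bytes            */
                             unsigned char *pixels)    /* height * rowbytes bytes */
    {
        z_stream z;
        memset(&z, 0, sizeof(z));
        if (inflateInit(&z) != Z_OK)
            return -1;

        z.next_in  = (Bytef *)idat_data;
        z.avail_in = (uInt)idat_size;

        for (unsigned long row = 0; row < height; ++row)
        {
            /* one filter-type byte... */
            z.next_out  = filters + row;
            z.avail_out = 1;
            if (inflate(&z, Z_SYNC_FLUSH) < Z_OK) { inflateEnd(&z); return -1; }

            /* ...then one row of (still filtered) samples */
            z.next_out  = pixels + row * rowbytes;
            z.avail_out = (uInt)rowbytes;
            if (inflate(&z, Z_SYNC_FLUSH) < Z_OK) { inflateEnd(&z); return -1; }
        }

        inflateEnd(&z);
        return 0;   /* each row still needs de-filtering per the PNG spec */
    }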
From: mandor <ma...@sd...> - 2000-09-06 20:56:41
|
"Bass, Garrett T." wrote:
> I'm also interested in this information. Zlib breaks my noggin.

Hi, I can send you my code to load PNG with libpng, but I just copied/pasted the sample given with the libpng source code. E-mail me if you want it.

Mandor
From: Akbar A. <sye...@ea...> - 2000-09-06 20:37:54
|
>The same situation exists in
>the CAD world. Every solid modeler has its own native file format. CAD
>vendors don't want portability because their established customers

i know quite a few people that get paid decent money (better than most game modelers) to do just that. take old model's and convert them to a new file format/program. imho i somewhat encourage this cause it opens up a lot of jobs. it's not like we are company owners. and i am sure nasa and raytheon can spare a few million ;)

peace,
akbar A.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Graham S. Rhodes
Sent: Wednesday, September 06, 2000 2:24 PM
To: gda...@li...
Subject: RE: [Algorithms] FW: [CsMain] Scene Graphs (long tangent rant on "standards")

[full quoted message snipped; it appears as Graham S. Rhodes's own post further down this page]
From: Paul B. <pbl...@di...> - 2000-09-06 20:21:03
|
I think that is more of an abstract data type, isn't it? Most scene graphs utilize a number of design patterns and would probably be best described as "composite" or "compound" patterns (not to be confused with the Composite [GOF] pattern). For example, Composite, Observer, Strategy, Dual Dispatch, Abstract Factory, Prototype, Builder, and Visitor have all been used in scene graph work.

Paul

> -----Original Message-----
> From: Graham S. Rhodes [mailto:gr...@se...]
> Sent: Wednesday, September 06, 2000 1:54 PM
> To: gda...@li...
> Subject: RE: [Algorithms] FW: [CsMain] Scene Graphs
>
> Jon wrote,
>
> > Which reminds me, are there any design patterns relating to scene graphs
> > anywhere? Had a quick look but didn't find anything.
>
> "Directed Acyclic Graph"? Could that be considered a pattern?
>
> Graham
From: Sam K. <sa...@ip...> - 2000-09-06 20:00:51
|
Ok cool, that's what I hoped. This brings me to another thought... Say you had a ROAM implementation that subdivided down (ignoring camera distance, just using landscape variance) to a fixed level, say 500,000 vertices or so, and stored them on the card for T&L. *Then*, on subsequent passes of the ROAM bintree, generate indexed lists into that hardware vertex pool based on the camera distance. This means you get true distance-based ROAM LOD and cached T&L of the used vertices, right?

sam

-----Original Message-----
From: Cem Cebenoyan <CCe...@nv...>
To: 'sa...@ip...' <sa...@ip...>
Date: 06 September 2000 7:47 PM
Subject: Re: [Algorithms] VIPM With T&L

>Hi Sam,
>
>a) No, the card doesn't have to TnL the highest res model. TnL cards do TnL
>"on demand" so that you only pay for the vertices that are indexed.
>
>b) All 1000 spaceships can share the same set of vertices, but they must
>have their own separate index lists, unless you want them to all be at the
>same LOD at all times. Again, only the indexed vertices will be
>transformed, but they will be transformed separately for every instance of
>the model (unless, of course, there is some vertex cache reuse).
>
>Let me know if you have any further questions about VIPM.
>
>-Cem
>NVIDIA Developer Relations
>
>-----Original Message-----
>From: Sam Kuhn [mailto:sa...@ip...]
>Sent: Wednesday, September 06, 2000 10:49 AM
>To: gda...@li...
>Subject: [Algorithms] VIPM With T&L
>
>Hi all,
>
>This is bound to have been covered a gazillion times before, but I'm having
>trouble searching the archives. I'm fairly new to VIPM, and have a couple of questions:
>
>a) I've read that it's possible to throw the vertices of a model at the 3D
>card (so T&L is possible) and just maintain the indexed lists into that
>vertex array using collapses and merges. Doesn't that mean you have to make
>the card (hardware) transform all of the vertices in the highest resolution
>representation of the model each frame, even though the current LOD might
>only need, say, 100? Or am I missing something?
>
>b) Say you wanted 1000 spaceships on screen sharing a single VIPM model,
>how does this work with (a)? Does the card have to transform all the
>original (high LOD) vertices 1000 times to screen space?
>
>Is it that the hardware T&L only transforms vertices that are pointed to by
>the supplied indexed lists? Or what?
>
>Thanks,
>
>sam
>sa...@ip...
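To make Cem's answer concrete, here is a minimal sketch of the shared-vertex-pool idea using plain OpenGL 1.1 client-side arrays: the full-resolution vertex array is specified once, and each instance draws with whatever per-LOD index list it needs, so only the indexed vertices are fed through T&L. The VipmModel struct, the LOD-selection rule, and the fixed array of eight LODs are illustrative assumptions, not part of any real VIPM implementation.

    #include <GL/gl.h>

    // Illustrative data layout: one shared vertex pool, one index list per LOD.
    struct VipmModel
    {
        const float          *vertices;       // x,y,z per vertex, finest resolution
        int                   num_vertices;
        const unsigned short *lod_indices[8]; // index list for each precomputed LOD
        int                   lod_index_count[8];
        int                   num_lods;
    };

    // Pick a coarser LOD as the instance gets farther away (purely illustrative).
    static int lod_for_distance(const VipmModel *m, float dist)
    {
        int lod = (int)(dist / 100.0f);
        return lod < m->num_lods ? lod : m->num_lods - 1;
    }

    void draw_instances(const VipmModel *m,
                        const float (*positions)[3], const float *distances, int count)
    {
        // The full-resolution vertex pool is specified once...
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, m->vertices);

        for (int i = 0; i < count; ++i)
        {
            int lod = lod_for_distance(m, distances[i]);

            glPushMatrix();
            glTranslatef(positions[i][0], positions[i][1], positions[i][2]);

            // ...but each draw only touches the vertices referenced by this
            // instance's index list, so the T&L cost follows the chosen LOD,
            // not the size of the pool.
            glDrawElements(GL_TRIANGLES, m->lod_index_count[lod],
                           GL_UNSIGNED_SHORT, m->lod_indices[lod]);

            glPopMatrix();
        }

        glDisableClientState(GL_VERTEX_ARRAY);
    }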
From: Bass, G. T. <gt...@ut...> - 2000-09-06 20:00:41
|
I'm also interested in this information. Zlib breaks my noggin.

-----Original Message-----

I've read a lot here about loading PNGs, with and without using the libraries (libpng & zlib), both pros and cons. I was wondering if anyone had any sample code for just loading PNGs. I've read through the source for both libraries and there's a LOT of stuff I can't get my head around (most of which I think is for saving and such, which I don't want). I can parse through the chunks and all, but the zlib stuff for decoding each line is stumping me. Anyone out there that can give me a hand?

Jeremy Bake
RtroActiv
RAMetal
From: Brian M. <bma...@ra...> - 2000-09-06 19:56:25
|
This might not help a lot, but here are a couple of references. Try:

Pages 101-106 in Advanced Animation and Rendering Techniques, by Alan Watt / Mark Watt.

and

Piecewise Smooth Surface Reconstruction - Hugues Hoppe, Tony DeRose, Tom Duchamp, Mark Halstead (a SIGGRAPH paper - it's available on Hugues Hoppe's site at Microsoft Research).

The first reference goes a little into building a b-spline patch by fitting individual curves - hard to get the error control it sounds like you need, but relatively easy to implement. The paper details arbitrary surface fitting for b-splines. Thankfully a regular height field has a nice regular structure, so you don't have to worry about the tricky continuity fitting that arises in general patch networks. This paper includes subdivision techniques to guarantee the error of the patch approximation.

One other thing to think about is fitting Bezier patches to small blocks of the heightfield (something like 4x4 blocks). Then do the standard trick of using multiple knots to represent the Bezier patches inside one b-spline patch for the terrain. Then you remove the surplus multiple knots. Messy, but something that's easier to get going with the kind of patch toolkit people normally have lying around.

Unfortunately patch fitting is tricky. :-)

-Brian.
From: Graham S. R. <gr...@se...> - 2000-09-06 19:27:14
|
Steve wrote,

> There are dozens of OpenSourced scene graph API's out there.
> Inventor becomes yet another. There are dozens of file formats
> out there - .iv becomes yet another (and it's essentially just
> VRML anyway - IIRC)

Yes, quite true. I hear you and feel your pain. The same situation exists in the CAD world. Every solid modeler has its own native file format. CAD vendors don't want portability because their established customers would then have a choice, an option to choose a different tool without losing their past models, their past investment. Competition would become more price based---prices would have to come down and all hell would break loose. There is currently no good and robust way to pass parametric models back and forth among different CAD/modeling systems. The best you can do is to pass an instantaneous boundary rep model, and even that usually requires a human expert to resolve inconsistencies.

I haven't had too much good experience with true standards, beyond, say, HTML. And I'll expand here at length on one of my recent BAD experiences with standards.

We've just built a STEP translator for our Inventor-based model assembler. STEP is an international (ISO) standard data schema and file format for the storage and exchange of product data. And by product data I mean geometry describing the product, material properties and thicknesses, dependencies, analysis data (such as finite element models and results), electrical wiring, etc. It is sort of a replacement for IGES, among many other things. STEP is huge and after 15 years of development it is still experimental and incomplete. Pre-alpha.

Our purpose was to support STEP AP 209 (an "application protocol" of STEP for finite element models and data). We use NASTRAN and ANSYS here, two finite element packages that have together the largest market share of all finite element packages. Their file formats are straightforward, even human readable. The STEP AP 209 file is a nightmare. In theory, it is human readable---it is text and the entity names appear to be meaningful at first glance. But there are so many many many different levels of abstraction and generalized terminology that it is just not possible to read it without going insane. And that would be OK----since really we would rather that computers read it----except that someone has to implement the translators and that means someone has to understand all these levels of abstraction.

We are working with a STEP expert on this. Its their full time job to understand STEP and write translators and mappings. The STEP expert is having to consult with other STEP experts to interpret the standard. That would be OK, except that on numerous occasions *NONE* of the experts are able to come up with a concrete decision on what the standard actually intends. Even with our in-house expertise in finite element modeling, there are unanswered questions about how to model certain things. The documentation for this thing sucks. There are only a very very few examples out there. And the examples are *all* trivial ones, while our models are not trivial.

We wanted to do some testing on our translator, to make sure we could read STEP files produced by another application and vice versa. That's the purpose, anyway, isn't it? So we contacted MSC Software to license their STEP translator for their MSC.Patran product. OK, we send them a purchase order, all seems well. They're very happy that we will send them money. We're happy too, until we get the code and try it.

We export an AP 209 file from Patran and none of the AP 209 data is there. Its just NOT THERE. The documentation states it works, but clearly it does not. For 2 weeks, we fight with them. Turns out, they have multiple versions of the thing and no one is certain which one works. We are in contact with two other customers of MSC who have had similar problems getting working code. Eventually, somehow, thanks to calling MSC multiple times every day, we're able to get the "working" version of their translator. The version that supposedly had been conformance tested by MSC.

The files that it exports are actually wrong, although they do contain the relevant information. They do not conform to the AP 209 schema. We have to manually modify the file to get it to conform. And we have to strip some information out, namely curved geometry data, since that is just completely bad---curves with knots at infinity, lost associativity, etc. Even MSC's own product cannot read it back in.

MSC is one of the more active players in the committee that created the AP 209 part of the standard. And yet they cannot seem to produce a compliant AP 209 translator. If they can't get it right, how can we expect AP 209 to really work?

So, in short, I just don't really believe that truly good standard file formats can emerge when they are designed by committee. I see more success in standard API's. The OpenGL approach is something I like. A top-notch company designs something that is simple, works damn well, and they run with it, making it available to the world, teaching the world how to use it, proving that it is good, superior, extending it over time----and building in an extensions mechanism. De facto standards, I believe, are much better than true standards, with very few exceptions.

I'll give you another example. Anyone ever heard of PHIGS/PHIGS PLUS? Probably a good number of you, depending on how long you've been around doing 3D graphics. How many of you use it? What about you Playstation 2 developers? No? Windows? No? Macintosh? No? Linux? No? And yet...PHIGS and PHIGS PLUS are ISO standard APIs for 3D graphics.

Graham Rhodes
From: Jeremy B. <Jer...@in...> - 2000-09-06 19:24:19
|
I've read a lot here about loading PNGs, with and without using the libraries (libpng & zlib), both pros and cons. I was wondering if anyone had any sample code for just loading PNGs. I've read through the source for both libraries and there's a LOT of stuff I can't get my head around (most of which I think is for saving and such, which I don't want). I can parse through the chunks and all, but the zlib stuff for decoding each line is stumping me. Anyone out there that can give me a hand?

Jeremy Bake
RtroActiv
RAMetal
From: Charles B. <cb...@cb...> - 2000-09-06 19:19:59
|
Is there a fast way to transform a plane? Right now I'm doing it by rotating the normal and transforming a point on the plane, then re-generating the 4d-vector form of the plane. It seems there should be a way to do it with a single 4x4 matrix multiply in some funny coordinate space.

On a related note, is there a fast way to transform an axis-aligned bounding box? With an AABB defined by a 'min' and a 'max', I'm doing this:

    // the min transformed :
    VECTOR3D vMinT;
    frXForm.XFormVector(vMinT,min);

    // the three edges transformed :
    VECTOR3D vx,vy,vz;
    vx = (max.x - min.x) * frXForm.Axis(X_AXIS);
    vy = (max.y - min.y) * frXForm.Axis(Y_AXIS);
    vz = (max.z - min.z) * frXForm.Axis(Z_AXIS);

    min.x = vMinT.x + min(vx.x,0) + min(vy.x,0) + min(vz.x,0);
    min.y = vMinT.y + min(vx.y,0) + min(vy.y,0) + min(vz.y,0);
    min.z = vMinT.z + min(vx.z,0) + min(vy.z,0) + min(vz.z,0);

    max.x = vMinT.x + max(vx.x,0) + max(vy.x,0) + max(vz.x,0);
    max.y = vMinT.y + max(vx.y,0) + max(vy.y,0) + max(vz.y,0);
    max.z = vMinT.z + max(vx.z,0) + max(vy.z,0) + max(vz.z,0);

Basically, transform the low corner (the 'min'), and then transform the edges along x, y, and z. I can't think of anything better.

--------------------------------------
Charles Bloom www.cbloom.com
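On the plane question: the single-multiply trick Charles suspects does exist. Writing the plane as the 4-vector (a, b, c, d) with a*x + b*y + c*z + d = 0, and transforming points as x' = M*x, planes transform by the inverse-transpose: p' = (M^-1)^T * p. Below is a hedged sketch assuming a column-vector convention and that the caller already has M's inverse on hand; the Matrix4/Plane types are illustrative, not Charles's classes. For a pure rotation + translation, the inverse-transpose of the upper 3x3 is the rotation itself, which is exactly the rotate-the-normal shortcut he is already using.

    // Illustrative types, not from any particular math library.
    struct Vector4 { float x, y, z, w; };
    struct Matrix4 { float m[4][4]; };      // m[row][col], column-vector convention
    struct Plane   { float a, b, c, d; };   // a*x + b*y + c*z + d = 0

    // Transform a plane by a 4x4 matrix M, given inv = M^-1.
    // Points map as x' = M * x, so planes map as p' = (M^-1)^T * p,
    // which preserves dot(p', x') == dot(p, x) for every point on the plane.
    Plane TransformPlane(const Matrix4 &inv, const Plane &p)
    {
        const Vector4 v = { p.a, p.b, p.c, p.d };
        Plane out;
        // Multiplying by the transpose of inv == dotting v with inv's columns.
        out.a = inv.m[0][0]*v.x + inv.m[1][0]*v.y + inv.m[2][0]*v.z + inv.m[3][0]*v.w;
        out.b = inv.m[0][1]*v.x + inv.m[1][1]*v.y + inv.m[2][1]*v.z + inv.m[3][1]*v.w;
        out.c = inv.m[0][2]*v.x + inv.m[1][2]*v.y + inv.m[2][2]*v.z + inv.m[3][2]*v.w;
        out.d = inv.m[0][3]*v.x + inv.m[1][3]*v.y + inv.m[2][3]*v.z + inv.m[3][3]*v.w;
        return out;
    }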
From: Scott M. <sc...@3d...> - 2000-09-06 19:09:41
|
> -----Original Message----- > From: Jonathan Wight [mailto:JW...@bi...] > Sent: Wednesday, September 06, 2000 1:06 PM > To: gda...@li... > Subject: Re: [Algorithms] FW: [CsMain] Scene Graphs > > > on 9/6/00 12:37 PM, Stephen J Baker at sj...@li... wrote: > > >> So where does Inventor and the *.iv format fit into this > now that (I > >> believe) it has become an open-source environment? > > > > There are dozens of OpenSourced scene graph API's out there. > > Inventor becomes yet another. There are dozens of file formats > > out there - .iv becomes yet another (and it's essentially just > > VRML anyway - IIRC) > > > > Adding another to the existing pile doesn't make for > standardization. > > Would one size fit all? _Maybe_, but it is a tough problem to solve. Put too much functionality in a scene graph and it starts to look bloated because most apps will only use a (small?) subset of that functionality. OTOH, if you make it too simple and people end up having to write a bunch of code to use it then they will perceive it to be not very useful. This, combined with having to work within an externally imposed structure, could lead a person to consider "just doing it himself". > I have my own set of requirements for > a scene graph, > didn't find anything out there that suited these requirements > - which is why > I'm forced to write my own. I don't think it is possible to > create scene > graph library and make it as generically useful as for > example the C++ STL. > Agreed, because a scene graph is a much higher-level construct than the things in STL. > Which reminds me, are there any design patterns relating to > scene graphs > anywhere? Had a quick look but didn't find anything. > Composite is what a scene graph is IIRC. The Visitor pattern is also used for traversing the graph in some implementations. Scott McCaskill sc...@3d... |
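Since Composite and Visitor keep coming up in this thread, here is a toy sketch of how the two patterns usually combine in a scene graph: group nodes own children (Composite), and traversals are objects dispatched per node type (Visitor). The class names are illustrative only, not taken from Inventor, Performer, or any other API mentioned here.

    #include <memory>
    #include <vector>

    class GroupNode;
    class GeometryNode;

    // Visitor: one callback per concrete node type the traversal cares about.
    class NodeVisitor
    {
    public:
        virtual ~NodeVisitor() = default;
        virtual void Visit(GroupNode &group) = 0;
        virtual void Visit(GeometryNode &geom) = 0;
    };

    // Base node: the only thing it knows is how to accept a visitor.
    class Node
    {
    public:
        virtual ~Node() = default;
        virtual void Accept(NodeVisitor &v) = 0;
    };

    // Composite: an interior node that owns an ordered list of children.
    class GroupNode : public Node
    {
    public:
        void AddChild(std::unique_ptr<Node> child) { children_.push_back(std::move(child)); }
        void Accept(NodeVisitor &v) override { v.Visit(*this); }
        void VisitChildren(NodeVisitor &v)
        {
            for (auto &child : children_)
                child->Accept(v);
        }
    private:
        std::vector<std::unique_ptr<Node>> children_;
    };

    // Leaf: something drawable (mesh handle, material id, ... omitted here).
    class GeometryNode : public Node
    {
    public:
        void Accept(NodeVisitor &v) override { v.Visit(*this); }
    };

    // Example traversal: walk the graph and count drawable leaves.
    class CountGeometry : public NodeVisitor
    {
    public:
        int count = 0;
        void Visit(GroupNode &group) override { group.VisitChildren(*this); }
        void Visit(GeometryNode &) override { ++count; }
    };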
From: Dave S. <Dav...@sd...> - 2000-09-06 19:05:41
|
Jonathan Wight wrote: > Which reminds me, are there any design patterns relating to scene graphs > anywhere? Had a quick look but didn't find anything. > > Jon. > Good question. I have just been browsing the popular API's (Performer, Java3D, PHIGS, etc..) for ideas but that's a major pain. -DaveS |
From: Graham S. R. <gr...@se...> - 2000-09-06 18:57:00
|
Jon wrote, > Which reminds me, are there any design patterns relating to scene graphs > anywhere? Had a quick look but didn't find anything. "Directed Acyclic Graph"? Could that be considered a pattern? Graham |
From: Ko, M. <MAN...@ca...> - 2000-09-06 18:37:28
|
Now there isn't. But you should be familiar with Performer and Fahrenheit as a start. Otherwise you are in a huge hole. I have a lot of experience designing scene-graphs. Will be happy to discuss it with u. I have a class structure that works very well. -----Original Message----- From: Jonathan Wight [mailto:JW...@bi...] Sent: Wednesday, September 06, 2000 11:06 AM To: gda...@li... Subject: Re: [Algorithms] FW: [CsMain] Scene Graphs on 9/6/00 12:37 PM, Stephen J Baker at sj...@li... wrote: >> So where does Inventor and the *.iv format fit into this now that (I >> believe) it has become an open-source environment? > > There are dozens of OpenSourced scene graph API's out there. > Inventor becomes yet another. There are dozens of file formats > out there - .iv becomes yet another (and it's essentially just > VRML anyway - IIRC) > > Adding another to the existing pile doesn't make for standardization. Would one size fit all? I have my own set of requirements for a scene graph, didn't find anything out there that suited these requirements - which is why I'm forced to write my own. I don't think it is possible to create scene graph library and make it as generically useful as for example the C++ STL. Which reminds me, are there any design patterns relating to scene graphs anywhere? Had a quick look but didn't find anything. Jon. _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Stephen J B. <sj...@li...> - 2000-09-06 18:34:04
|
On Wed, 6 Sep 2000, Jonathan Wight wrote: > >> So where does Inventor and the *.iv format fit into this now that (I > >> believe) it has become an open-source environment? > > > > There are dozens of OpenSourced scene graph API's out there. > > Inventor becomes yet another. There are dozens of file formats > > out there - .iv becomes yet another (and it's essentially just > > VRML anyway - IIRC) > > > > Adding another to the existing pile doesn't make for standardization. > > Would one size fit all? I have my own set of requirements for a scene graph, > didn't find anything out there that suited these requirements - which is why > I'm forced to write my own. I don't think it is possible to create scene > graph library and make it as generically useful as for example the C++ STL. I know what you mean (I've written my own scene graphs too)...but the point of my earlier (L-O-N-G) post was that when things like this initially appear, everyone feels like they could do better writing to a lower level and doing it themselves - but as time progresses, we'll get faster machines, better scene graph API's and there will come a point where (just as with C++ compilers) it's just better to use a standard API than roll your own for each new project. If there was a standard SG API, then there would eventually be hardware to accellerate it, lots of loaders for standard file formats, we could have standardized collision detection libraries, physics libraries...those things are not reasonably possible currently because there is no standard SG API for them to work with. Once all those things exist, I think you'd be much more inclined to accomodate the (hopefully few) limitations of the SG in order to reap the benefits of working at a higher level abstraction. This is precisely the state we were at when everyone was writing pixels into frame buffers and *thinking* about going to a standard rendering API rather than writing their own rasterizers for each new project. There were the exact same concerns about (say) OpenGL being not *quite* as flexible as a renderer you could write yourself - and it was lower in performance. Then hardware that could make use of OpenGL came about and over about a year, everyone forgot about writing their own rasterizers anymore. Perhaps we just aren't ready for that next step up yet. I happen to think that the time is about right. ---- Science being insufficient - neither ancient protein species deficient. Steve Baker (817)619-2657 (Vox/Vox-Mail) L3Com/Link Simulation & Training (817)619-2466 (Fax) Work: sj...@li... http://www.link.com Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
From: Christian R. M. K. <chr...@pu...> - 2000-09-06 18:33:56
|
Dave Forsey wrote:

> When you say it must be NURBS, do you mean it must have non-uniform
> knot spacing and non-zero weights for the control points, or
> would you be just as happy with a rational form of a standard
> B-spline? (i.e. uniform knots + all weights at one).

It would do, at first.

> Do you want quadratic, cubic or quartic surfaces?

Cubic or quartic.

> Another way of putting this is to ask if you are happy with a
> smooth approximation or do you want a surface that has sharp
> edges in it?

I didn't get that right, I'm afraid. If the heightfield implies sharp edges, I'd need them.

> Also do you want just a single surface to do the approximation
> or are multiple surfaces ok? (If the latter then NURBS with triple
> end knots are equivalent to Bezier patches).

It needs to be a single one.

> Is the heightfield a grid or scattered data?

It's distributed on a regular grid.

> What modelling system are you using? Most have some sort of "shrink-wrap"
> facility built-in (or with a plug-in) that you can wrangle
> into doing this for you (though probably quite slowly).

We work together with geologists who model their data via single NURBS, with weights unequal to zero and all that stuff. I'm working on a system for fast visualization of them. The renderer works already. What I want to do is to test if it works well for 'nurbsified' heightfields too. And I want to take advantage of the fact that NURBS should be capable of modelling the same terrain as the heightfield with fewer control points, since for flat areas few points are sufficient. That's why I need approximation and not interpolation. I read over the approximation chapter in the NURBS Book and it's pretty complicated. I'm afraid it would take some time to implement this. So I thought maybe anyone here did this already.

> Finally, how many times do you have to do this? If it is just once,
> then contract a company like Paraform to do the fitting for you.
> (or you can buy their software).

That's impossible, unfortunately.

Christian
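For anyone else staring at the same chapter: with the knots and the grid parameterization fixed (and weights left at one), the fit Christian describes reduces to a linear least-squares problem in the control points. Roughly, with grid samples z_{k,l} assigned parameters (u_k, v_l):

    S(u,v) = \sum_{i=0}^{n} \sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, P_{i,j},
    \qquad
    \min_{\{P_{i,j}\}} \; \sum_{k} \sum_{l} \bigl( S(u_k, v_l) - z_{k,l} \bigr)^2

Because the basis is a tensor product and the samples lie on a regular grid, the usual practical route is to least-squares fit a curve to each row of heights, then fit curves through the resulting intermediate control points column by column; each step is a small normal-equations solve (N^T N P = N^T z). This is only a sketch of the standard approach, not a drop-in algorithm, and it says nothing about choosing knots or enforcing an error bound.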
From: Jim O. <j.o...@in...> - 2000-09-06 18:33:33
|
> Is it that the hardware T&L only transforms vertices that are pointed to by > the supplied indexed lists? or what That's it :-). For software T&L you can sort your verts (this is not necessarily specific to VIPM btw.), so that verts [0...n] represent the first (lowest) lod level, verts [0...n+m] represents the next lod level and so forth. Using the dwStartVertex and dwNumVertices parameters in the DIPVB call you can instruct DX to transform *only* those verts that fall in the specified range. Jim Offerman Innovade The Atlantis Project http://www.theatlantisproject.com |
From: Graham S. R. <gr...@se...> - 2000-09-06 18:30:45
|
Steve wrote, > Yes - but Inventor isn't really appropriate for high performance > stuff - that's why SGI have the 'Performer' scene graph for that kind > of thing. I agree! You'll note that I stated as much in my original post, at the bottom, item #5 and the following paragraph. (I sometimes have the same habit as you----replying to a message before I read or fully comprehend the entire message. Or before I check to see if someone else has replied with the same response. But I'm getting better!) Its probably good that you stated your comment on Performer vs. Inventor at the top of your message, since possibly someone else didn't read to the bottom of my long message. Even so, you'll be interested to know that our customer has been able to display models up to around 250,000 polygons (tris and quads) in Inventor, on an HP Kayak PC with fx2 graphics. Frame rates were not good, though, for this model. With a 100,000 polygon model frame rates were much better, of course. Modern consumer T&L boards, which are much better than fx2 graphics, would result in better performance for the larger model. It is absolutely true that Inventor does not actively manage performance the way Performer does. It really is meant for inspecting objects and manipulating objects rather than navigating through large worlds. I do think Performer is going to be a better engine for games. But, alas, Performer doesn't work on Windows or Macintosh. Only Linux and IRIX. And it is unlikely that it will ever be ported to the more popular platforms. > Inventor is more for makeing cute presentations, rapid prototyping, > that kind of thing. We use it for a bit more than that. We have an Inventor-based model assembler (developed from 1996 through present) that we use to build models of NASA's Next Generation Space Telescope (NGST), pressure vessels for studying Mars habitats. And others have used our tool to assemble various large aircraft models (e.g., high speed civil transport). We assemble the model and apply loads and boundary conditions/constraints in our Inventor app, then invoke a multidisciplinary analysis of the assemblies, and finally import and animate the results in Inventor. For example, we've done an analysis of the NGST where nonlinear structural vibration was induced by radiation loading from the sun in space. The animation data is over 300MB of motion and scalar contour data (12000 time steps) for the vibrating space telescope. Inventor handles it just fine. The model itself is not as big as the large models mentioned above, though. We do the postprocessing as well in a Performer app, which maintains a relatively constant frame rate compared with the Inventor app. > > It is an extensible/programmable scene graph engine that is > quite powerful. > > It has a "standard" file format as well... > > It's essentially VRML...or more accurately - VRML is essentially > Inventor. Folks should keep one thing straight. VRML is a file format. Inventor is both a file format and an extensible C++ scene graph toolkit. When comparing Inventor and VRML, you can compare their file formats. And you can compare the Inventor toolkit with VRML file browsers. In general, it is *not* accurate to say "VRML is essentially Inventor" or vice versa. It is true that the VRML version #1 file format was based on the Inventor file format. Inventor's file format is both a superset and subset of VRML 1. Inventor has features, such as calculator engines, that the VRML 1 file format does not have. 
And VRML 1 has features that were not in Inventor (billboard node, sky/background node?). VRML 1 is quite obsolete, and Inventor is not related to the more current VRML97 (version #2 of VRML). Inventor the *engine* as opposed to inventor the *file format* is a HELL of a lot more than simple VRML #1 viewers in terms of programmability and extensibility. It is *not* merely a viewer for simple files. It does allow you to build interactive scenes and customized 3D interaction, even though it is not the best SG engine for this kind of application. You can write OpenGL callbacks to implement cinematics, etc. VRML97 viewers that support ECMA scripting or EAI allow significant customization and interactivity, but performance is not as good as Inventor (since customization is often done with a slower language than C++, such as Java). There, I think I'm through talking about Inventor and Performer now. Intrinsic Alchemy (www.intrinsic.com) is a scene graph engine developed I think by folks who worked on or with Performer. Alchemy does specifically target the game development market. There are versions for Windows and PS2. NetImmerse from NDL (www.ndl.com) is another example of a gaming scene graph engine that works on Windows and PS2. Both of these engines are going to be far more expensive than Inventor or Performer, but probably offer better performance for games and better portability options. Graham Rhodes > > ---- > Science being insufficient - neither ancient protein species deficient. > > Steve Baker (817)619-2657 (Vox/Vox-Mail) > L3Com/Link Simulation & Training (817)619-2466 (Fax) > Work: sj...@li... http://www.link.com > Home: sjb...@ai... http://web2.airmail.net/sjbaker1 > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Salah E. N. <se...@ub...> - 2000-09-06 18:22:44
|
Well, you can extract a scale, but it will not always be a diagonal scale; it will look like R*S*R', where R is a rotation, R' its transpose, and S a diagonal matrix. Here's how:

The singular value decomposition theorem tells us that for every matrix A there exist two orthogonal matrices R and T, and a diagonal matrix S, such that A = R*S*T. What we want is to find a decomposition looking like A = R1*(R2*S*R2'). R1 is the rotation part of A, and (R2*S*R2') is the scale part. So R = R1*R2 and T = R2', which gives us R1 = R*T and R2 = T'.

I don't have any algorithm to find the singular value decomposition in mind, but you can try a search on www.google.com, I'm sure you'll find something out there.

Hope this helps.

-----Original Message-----
From: Jon Anderson [mailto:jan...@on...]
Date: Wednesday, September 06, 2000 4:33 PM
To: gda...@li...
Subject: [Algorithms] Extracting scale from matrix

Are there any good tricks for extracting the scale (x, y, z) from a 4x4 matrix? It seems pretty trivial to do it for matrices that consist of rotations about a single axis, but I'm having problems doing it for more complex matrices.

Jon
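A complementary note for the common case: if the 4x4 is only rotation * scale * translation (no shear), the per-axis scale is just the length of each basis vector in the upper 3x3, and normalizing them recovers the rotation. A hedged sketch below; the row-major Matrix4 layout is an assumption for illustration. If the three axes come out non-orthogonal, the matrix has shear and you are back to the SVD / polar-decomposition route Salah describes.

    #include <math.h>

    // Illustrative layout: m[row][col], rows 0..2 hold the rotated/scaled basis
    // vectors, row 3 holds the translation (a typical row-vector convention).
    struct Matrix4 { float m[4][4]; };

    static float Length3(const float v[4])
    {
        return sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    }

    // Extract (sx, sy, sz), assuming the matrix is rotation * scale (no shear).
    void ExtractScale(const Matrix4 &mat, float &sx, float &sy, float &sz)
    {
        sx = Length3(mat.m[0]);   // length of the transformed x axis
        sy = Length3(mat.m[1]);   // length of the transformed y axis
        sz = Length3(mat.m[2]);   // length of the transformed z axis
        // Dividing each basis row by its length would recover the pure rotation.
        // Note: a negative (mirroring) scale can't be detected this way alone;
        // check the sign of the 3x3 determinant if you need it.
    }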
From: Jonathan W. <JW...@bi...> - 2000-09-06 18:04:09
|
on 9/6/00 12:37 PM, Stephen J Baker at sj...@li... wrote: >> So where does Inventor and the *.iv format fit into this now that (I >> believe) it has become an open-source environment? > > There are dozens of OpenSourced scene graph API's out there. > Inventor becomes yet another. There are dozens of file formats > out there - .iv becomes yet another (and it's essentially just > VRML anyway - IIRC) > > Adding another to the existing pile doesn't make for standardization. Would one size fit all? I have my own set of requirements for a scene graph, didn't find anything out there that suited these requirements - which is why I'm forced to write my own. I don't think it is possible to create scene graph library and make it as generically useful as for example the C++ STL. Which reminds me, are there any design patterns relating to scene graphs anywhere? Had a quick look but didn't find anything. Jon. |
From: Sam K. <sa...@ip...> - 2000-09-06 17:49:28
|
Hi all, This is bound to have been covered a gazillion times before, but I'm having trouble searching the archives: I'm fairly new to VIPM, and have a couple of questions: a) I've read that its possible to throw the vertices of a model at the 3d card (So T&L is possible) And just maintain the indexed lists into that vertex array using collapses and merges. Doesn't that mean you have to make the card (hardware) transform all of the vertices in the highest resolution representation of the model each frame? even though the current lod might only need say 100.. or am I missing something? b) Say you wanted 1000 spaceships on screen sharing a single VIPM model, how does this work with (a)? Does the card have to transform all the original (high lod) vertices 1000 times to screen space? Is it that the hardware T&L only transforms vertices that are pointed to by the supplied indexed lists? or what Thanks, sam sa...@ip... |
From: tSG <ts...@ma...> - 2000-09-06 17:40:18
|
hi! thx for the links Akbar A. anyway 16 bit colordepth is faster than 32 :) But it's too slow. It runs 30 fps in 640x480x16 colordepth on a cel500, tnt2 32mb. Q3 runs on this machine with tnt1 16mb ( and in 640x480) also 30 fps on the space-track. ( sorry Akbar that i have written that again to this list :) Frag Daddy : also thx you( but this result is also very slow) Bye tSG |
From: Stephen J B. <sj...@li...> - 2000-09-06 17:38:25
|
On Wed, 6 Sep 2000, Patrick E. Hughes wrote: > >> but there`s no standart, and now after Fahrenheit blowup, seems there`s no > >> hope for standart SG in nearest future, unfortuntaly > > > >The worst part of it is that without a standard SG, there is no > >way to promote the writing of generic 3D file loaders - which is > > So where does Inventor and the *.iv format fit into this now that (I > believe) it has become an open-source environment? There are dozens of OpenSourced scene graph API's out there. Inventor becomes yet another. There are dozens of file formats out there - .iv becomes yet another (and it's essentially just VRML anyway - IIRC) Adding another to the existing pile doesn't make for standardization. ---- Science being insufficient - neither ancient protein species deficient. Steve Baker (817)619-2657 (Vox/Vox-Mail) L3Com/Link Simulation & Training (817)619-2466 (Fax) Work: sj...@li... http://www.link.com Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |
From: Stephen J B. <sj...@li...> - 2000-09-06 17:33:49
|
On Wed, 6 Sep 2000, Graham S. Rhodes wrote: > I don't know if you all are aware of this (but some probably are), but SGI > has released their Open Inventor scene graph engine (version 2.1 or maybe > 2.2 I think) as open source, see this link: > > http://oss.sgi.com/projects/inventor/ Yes - but Inventor isn't really appropriate for high performance stuff - that's why SGI have the 'Performer' scene graph for that kind of thing. Inventor is more for makeing cute presentations, rapid prototyping, that kind of thing. > It is an extensible/programmable scene graph engine that is quite powerful. > It has a "standard" file format as well... It's essentially VRML...or more accurately - VRML is essentially Inventor. ---- Science being insufficient - neither ancient protein species deficient. Steve Baker (817)619-2657 (Vox/Vox-Mail) L3Com/Link Simulation & Training (817)619-2466 (Fax) Work: sj...@li... http://www.link.com Home: sjb...@ai... http://web2.airmail.net/sjbaker1 |