From: Martin R. <ma...@MP...> - 2004-11-25 09:52:06

Hi all,

sorry for not responding to the last mails on the list. I'm currently too busy to look into the changes, and the situation will probably not improve until the middle of December. Just to let you know.

Cheers, Martin
From: Tim M. <ti...@se...> - 2004-11-18 22:51:16

On Monday 15 Nov 2004 23:24, Martin Reinecke wrote:
> > Isn't that the job of make to worry about? I mean, its algorithms are impenetrable but once you get it set up there isn't a problem.
>
> make doesn't know automatically about the inter-file dependencies of the C++ files.

Obviously make can't determine the dependencies of the files from scratch, but once you have your makefile configured it ought to work. The dependencies shouldn't change that frequently relative to the rate of development work.

A simpler solution occurred to me though - just set the makefile to rebuild all targets every time and use ccache (http://ccache.samba.org/) to figure out whether the file actually needs to be recompiled. This leaves the makefiles completely transparent and eliminates all unnecessary compiling.

> You can generate dependency information for use with make by using "g++ -M file.cxx" or similar, but this is not portable to other compilers, and it also takes some time. So if you develop Ray++ on Windows without g++, you won't have a way to keep your dependency information up to date.

But if you're developing on Windows without using the GNU toolchain then a GNU makefile is no use to you anyway. And since (as far as I'm aware) there are no UNIX tools that can read NT makefiles, the two systems will always be orthogonal. This in itself is no reason to cripple the GNU makefile. An alternative would be to use a proper cross-platform build tool like Jam (www.perforce.com), which has other advantages as well, but has the downside of making it a lot more difficult for someone who has only a casual interest in Ray++.

> I'm against hardwiring of dependencies in the Makefile, because these dependencies would have to be updated by hand and will get out of date almost immediately.

I agree with you up to a point. The way I see it, the job of the build tool is to make development as efficient as possible. The various solutions we've discussed here all make a trade-off between efficiency of day-to-day development and the complexity of maintaining the build environment. Your solution is a bit too far on one extreme for my liking, but as I say I'm happy to stick with it (or at least use my own tools) if everybody else is.

Tim
From: Andreas K. <and...@we...> - 2004-11-18 12:45:58

Hi!

I've committed some stuff to the CVS repository yesterday:

* The sampling class (already sent as a patch). I moved it into the utils directory as suggested.
* The GLUT output class. I put it into a new "extras" folder and didn't include it in ray++.cxx.
* A new template class IMAGE (image.h in the utils dir). Currently only used by GL_OUTPUT. Right now it can handle 24bit and 16bit RGB images (origin and row alignment can be specified), but it is easy to add new pixel formats. It also has a routine to save to TGA.
* I added triangle_mesh.cxx to ray++.cxx.

Next things I want to do:

* Enhance IMAGE: add a floating point pixel format, add an RGBE pixel format (Greg Ward's 'Real Pixels'), add a save routine for the HDR format (the high dynamic range format used in Radiance).
* Write an IMAGE_OUTPUT class that can render to a given IMAGE instance (and probably makes MEM_OUTPUT obsolete).
* Add more sampling strategies (Quasi Monte Carlo and adaptive methods) and testing programs for them.
* Add a kd-tree template class (at least 2D and 3D), probably two versions optimized for dynamic use (you can at least add new items) and static use (build once, then provide fast queries).

BTW: I posted a request for a statistics/profiling helper class. See http://sourceforge.net/tracker/index.php?func=detail&aid=1047598&group_id=119265&atid=683483
Does someone already have something like this?

Hope everything works as expected... :-)

Andreas
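[Editor's note: the RGBE "Real Pixels" format mentioned above packs a floating-point RGB triple into four bytes by giving the three mantissas a shared exponent. The sketch below is a generic illustration of that encoding, not code from the Ray++ IMAGE class; the struct and function names are made up.]

```cpp
#include <cmath>
#include <algorithm>

// Minimal sketch of RGBE encoding (Greg Ward's "Real Pixels"):
// three 8-bit mantissas plus one shared, biased exponent byte.
// Illustrative only -- not taken from Ray++.
struct RGBE { unsigned char r, g, b, e; };

inline RGBE float_to_rgbe(float red, float green, float blue)
{
  RGBE out;
  float v = std::max(red, std::max(green, blue));
  if (v < 1e-32f)                       // too dark: store pure black
    { out.r = out.g = out.b = out.e = 0; return out; }

  int e;
  float m = std::frexp(v, &e);          // v = m * 2^e with 0.5 <= m < 1
  float scale = m * 256.0f / v;         // maps the largest channel to < 256
  out.r = static_cast<unsigned char>(red   * scale);
  out.g = static_cast<unsigned char>(green * scale);
  out.b = static_cast<unsigned char>(blue  * scale);
  out.e = static_cast<unsigned char>(e + 128);   // biased exponent byte
  return out;
}
```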
From: Tim M. <ti...@se...> - 2004-11-16 13:17:52

Andreas Kahler said:
> > I think a reasonable compromise would be to build multiple translation units in the CVS, but possibly release tarballs with the fast compile switched on. Any thoughts?
>
> I personally prefer a single simple solution, since it is much easier to maintain. Adding a file to ray++.cxx is easier than modifying the Makefile, especially if you work on Windows where you cannot test whether the Makefile works.

But this doesn't get round the problem that if you modify one of the existing files you're stuck with another long compile. If everyone else is happy with things the way they are, then I'll just use my own build scripts myself and not submit anything.

Tim
From: Andreas K. <and...@we...> - 2004-11-16 10:00:21

Hi all!

> I think a reasonable compromise would be to build multiple translation units in the CVS, but possibly release tarballs with the fast compile switched on. Any thoughts?

I personally prefer a single simple solution, since it is much easier to maintain. Adding a file to ray++.cxx is easier than modifying the Makefile, especially if you work on Windows where you cannot test whether the Makefile works.

Side note: the MSVC++ 6.0 workspace already has two projects for the ray++ library, one using ray++.cxx and one compiling the files individually. It's out of date; new files were not added there...

What I do to avoid long compile times for new stuff: I do not add new files to ray++.cxx directly but add the new file to the project/Makefile. Before I submit it to CVS I add it to ray++.cxx.

> > > Could you please upload it as a sourceforge patch first?
> >
> > I didn't know this was possible. I'll have to tidy up the makefile a bit and investigate how to achieve building a sourceforge patch.
>
> I'm not sure how it works, but Andreas has done it already for his flexible sampling class. But sending the modified files to the mailing list is also fine, of course.

Not very hard. Create a patch the usual way (using diff against the CVS version) and save it to a file. On our SourceForge page, go to "Patches" and "Submit new", then attach the patch file you created.

Andreas
From: Martin R. <ma...@MP...> - 2004-11-15 23:24:31

Hi Tim!

> A brief and rather unscientific investigation into the relative times convinced me that a one-off full compile is indeed much quicker when done as a single translation unit. However, it seems to me that the average user of raypp is going to compile quite a few times, as people aren't going to be using it out of the box as-is.
>
> Furthermore, it seems to me that we want to avoid wasting time during the development cycle (when you usually have nothing better to do than sit and wait for a build) even at the expense of it taking longer for people to do a one-off build.
>
> I think a reasonable compromise would be to build multiple translation units in the CVS, but possibly release tarballs with the fast compile switched on. Any thoughts?

If we can get the dependencies right without spending too much CPU time on them, compilation of individual files is certainly the best solution. We won't need the big translation unit any more then.

> > But in this case all the dependency rules must be regenerated, which also takes some time.
>
> Isn't that the job of make to worry about? I mean, its algorithms are impenetrable but once you get it set up there isn't a problem.

make doesn't know automatically about the inter-file dependencies of the C++ files. You can generate dependency information for use with make by using "g++ -M file.cxx" or similar, but this is not portable to other compilers, and it also takes some time. So if you develop Ray++ on Windows without g++, you won't have a way to keep your dependency information up to date. I'm against hardwiring of dependencies in the Makefile, because these dependencies would have to be updated by hand and will get out of date almost immediately.

> > > Does anyone object to me submitting this?
> >
> > Could you please upload it as a sourceforge patch first?
>
> I didn't know this was possible. I'll have to tidy up the makefile a bit and investigate how to achieve building a sourceforge patch.

I'm not sure how it works, but Andreas has done it already for his flexible sampling class. But sending the modified files to the mailing list is also fine, of course.

Cheers, Martin
From: Tim M. <ti...@se...> - 2004-11-15 21:23:36

On Sunday 07 Nov 2004 15:29, Martin Reinecke wrote:
> Well, the main reason was that the compilation of the big translation unit is very fast compared to separate compilation of all source files. This is the case because each source file includes quite a lot of large standard library headers, which dominate the compiled lines of code. With separate compilation they need to be parsed many times. Of course this argument is not valid if you edit only one source file and recompile.

A brief and rather unscientific investigation into the relative times convinced me that a one-off full compile is indeed much quicker when done as a single translation unit. However, it seems to me that the average user of raypp is going to compile quite a few times, as people aren't going to be using it out of the box as-is.

Furthermore, it seems to me that we want to avoid wasting time during the development cycle (when you usually have nothing better to do than sit and wait for a build) even at the expense of it taking longer for people to do a one-off build.

I think a reasonable compromise would be to build multiple translation units in the CVS, but possibly release tarballs with the fast compile switched on. Any thoughts?

> But in this case all the dependency rules must be regenerated, which also takes some time.

Isn't that the job of make to worry about? I mean, its algorithms are impenetrable but once you get it set up there isn't a problem.

> > Does anyone object to me submitting this?
>
> Could you please upload it as a sourceforge patch first?

I didn't know this was possible. I'll have to tidy up the makefile a bit and investigate how to achieve building a sourceforge patch.

Tim
From: Andreas K. <and...@we...> - 2004-11-09 20:06:19

Hi!

> Would it be possible to offer some form of hook, callback function or callback class? That way the user of Ray++ can make a separate compilation to quickly see the rendering. Objects would have to offer the option of rendering themselves quickly I presume, not just polygons, but blobs, star fields etcetera too.

Sorry, what I wrote was maybe a bit misleading. I do not use any of the OpenGL/GLUT 3D features. All I do is normal raytracing, using OpenGL's 2D raster image drawing capabilities to display the result. What makes this more useful than the existing output classes is that I do not simply loop through all pixels to render them, but do hierarchical refinement, so that you see a coarse image pretty quickly. Overall rendering time is of course not better than "normal" rendering.

> How do you plan to support it? Will it be rendering in the background as a separate thread, displayed through OpenGL? Or through an idle function that incidentally updates the screen while rendering in time slices?

I render in the idle function. Not ideal, but it works reasonably well. I don't want to use threads, since this cannot be done in a platform independent way.

> This could also be an excellent tool for tweaking things like positions of objects and camera position, if you keep the render at coarse level. A poor man's 3D scene authoring tool that would suffice for a programmer.

This would be nice of course and I already thought about doing that. But I would basically only show bounding boxes and some simple primitives. Everything else is quite some work. It would be nice though. If someone is interested in doing it, that would be great!

Andreas
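[Editor's note: for readers unfamiliar with the idle-function approach Andreas describes, the skeleton of such a progressive viewer in GLUT usually looks roughly like the sketch below. This is not the Ray++ GL_OUTPUT class; refine_one_step() and the pixel buffer are hypothetical placeholders.]

```cpp
#include <GL/glut.h>
#include <vector>

// Hypothetical sketch of an idle-function driven progressive display,
// in the spirit of the class discussed above. Names are illustrative.
static int width = 320, height = 240;
static std::vector<unsigned char> framebuffer(320 * 240 * 3, 0);

// Placeholder: trace a small batch of rays and write the results into
// 'framebuffer', refining the image hierarchically (coarse to fine).
static void refine_one_step() { /* ... raytrace a few more pixels ... */ }

static void display()
{
  glClear(GL_COLOR_BUFFER_BIT);
  glRasterPos2i(-1, -1);                 // raster origin at the lower-left corner
  glDrawPixels(width, height, GL_RGB, GL_UNSIGNED_BYTE, &framebuffer[0]);
  glutSwapBuffers();
}

static void idle()
{
  refine_one_step();     // do a slice of rendering work...
  glutPostRedisplay();   // ...then ask GLUT to redraw the partially refined image
}

int main(int argc, char **argv)
{
  glutInit(&argc, argv);
  glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
  glutInitWindowSize(width, height);
  glutCreateWindow("progressive preview");
  glutDisplayFunc(display);
  glutIdleFunc(idle);
  glutMainLoop();
  return 0;
}
```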
From: Ayal P. <ap...@xs...> - 2004-11-09 18:43:13

Hi,

First off, sorry I am still lurking on the mailing list and not actively participating.

I think allowing for fast previews is a really great feature. GL/GLUT should be supported on all platforms. However, indeed, you might want to keep dependencies down. So I would make it optional. It could be a separate library?

Would it be possible to offer some form of hook, callback function or callback class? That way the user of Ray++ can make a separate compilation to quickly see the rendering. Objects would have to offer the option of rendering themselves quickly, I presume, not just polygons, but blobs, star fields etcetera too.

How do you plan to support it? Will it be rendering in the background as a separate thread, displayed through OpenGL? Or through an idle function that incidentally updates the screen while rendering in time slices?

This could also be an excellent tool for tweaking things like positions of objects and camera position, if you keep the render at coarse level. A poor man's 3D scene authoring tool that would suffice for a programmer. At any rate, a very important addition I think!

On Tuesday, 9 Nov 2004 at 16:45 (Europe/Amsterdam), Andreas Kahler wrote:

> Hi!
>
> I'm currently working on a GLUT based output class for ray++ which does progressive refinement of the rendered image. Saves a lot of time when setting up new scenes, as you can see the coarse scene appearance very quickly.
>
> Question about this: As this has a dependency on the GL/GLUT headers and libs, I don't want it to be part of the standard ray++ static library, as it is quite possible that you do not have the GLUT stuff but still want to use ray++ because you do not need the GLUT output. How should we handle this? My suggestion is to put it into the same source folder structure (and therefore into the main raypp CVS project, distributed with standard builds), but in an extra folder (maybe called "extras"). The files in there are NOT compiled into the raypp lib; you have to compile them separately.
>
> Another thing: Should we include test programs in the CVS project? I'm thinking about e.g. a small program showing the distribution of various sampling strategies, or maybe later some visualization for the photon map. What about putting them into a programs/tests folder?
>
> Comments?
>
> Andreas
From: Andreas K. <and...@we...> - 2004-11-09 15:45:09

Hi!

I'm currently working on a GLUT based output class for ray++ which does progressive refinement of the rendered image. It saves a lot of time when setting up new scenes, as you can see the coarse scene appearance very quickly.

Question about this: As this has a dependency on the GL/GLUT headers and libs, I don't want it to be part of the standard ray++ static library, as it is quite possible that you do not have the GLUT stuff but still want to use ray++ because you do not need the GLUT output. How should we handle this? My suggestion is to put it into the same source folder structure (and therefore into the main raypp CVS project, distributed with standard builds), but in an extra folder (maybe called "extras"). The files in there are NOT compiled into the raypp lib; you have to compile them separately.

Another thing: Should we include test programs in the CVS project? I'm thinking about e.g. a small program showing the distribution of various sampling strategies, or maybe later some visualization for the photon map. What about putting them into a programs/tests folder?

Comments?

Andreas
From: Martin R. <ma...@MP...> - 2004-11-07 15:29:19

Hi Tim!

> Sorry I haven't contributed much recently, I've been having fun trying to get my wireless network card to work in linux. :-)

Sounds like a daunting task :)

> One thing that's been bothering me with the raypp build system as it stands is the way the Makefile works. Since we're building as one big translation unit, every single change to the code means a long compile. I was wondering why we chose to do it this way?

Well, the main reason was that the compilation of the big translation unit is very fast compared to separate compilation of all source files. This is the case because each source file includes quite a lot of large standard library headers, which dominate the compiled lines of code. With separate compilation they need to be parsed many times. Of course this argument is not valid if you edit only one source file and recompile. But in this case all the dependency rules must be regenerated, which also takes some time. When I changed to the one translation unit a few years ago, this was the quickest overall solution, but maybe compilers have changed.

> I've frobbed the Makefile a bit and got a working system which compiles each unit separately then archives them all into a single libraypp.a, which makes subsequent linkage just as easy, but gets round the issue of rebuilding everything each time a change is made.
>
> Does anyone object to me submitting this?

Could you please upload it as a sourceforge patch first?

> Is there some problem with this approach that I'm missing?

Automatic dependency generation is a problem. If you have this functionality, your Makefile should work as well as the current one.

Cheers, Martin
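[Editor's note: the "one big translation unit" scheme Martin describes is usually implemented by a wrapper file that simply #includes all the individual source files, so the shared standard library headers are parsed only once. A schematic is shown below; the file names are illustrative, not the actual contents of the Ray++ ray++.cxx.]

```cpp
// ray++.cxx -- schematic of a single-translation-unit ("unity") build.
// Each listed .cxx file is compiled here exactly once, so the large
// standard library headers they all pull in are parsed only once too.
// File names below are made up for illustration.
#include "kernel/renderer.cxx"
#include "kernel/raytracer.cxx"
#include "utils/sampling.cxx"
#include "shapes/triangle_mesh.cxx"
// ... one #include line per source file in the library ...
```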
From: Tim M. <ti...@se...> - 2004-11-07 13:50:57

Hi all,

Sorry I haven't contributed much recently, I've been having fun trying to get my wireless network card to work in linux. :-)

One thing that's been bothering me with the raypp build system as it stands is the way the Makefile works. Since we're building as one big translation unit, every single change to the code means a long compile. I was wondering why we chose to do it this way?

I've frobbed the Makefile a bit and got a working system which compiles each unit separately and then archives them all into a single libraypp.a, which makes subsequent linkage just as easy, but gets round the issue of rebuilding everything each time a change is made.

Does anyone object to me submitting this? Is there some problem with this approach that I'm missing?

Tim
From: Martin R. <ma...@MP...> - 2004-11-03 12:53:02

Hi all,

sorry, I completely forgot about this email...

> > There also seems to be some complicated interaction between the renderer and the sampling object in RAYTRACER::Get_Pixel(). It might be easier to have a method SAMPLING2d::Calc_Pixel(u,v,du,dv), which in turn calls Renderer->Trace_Camera_Ray() as often as it needs to, and returns the final colour. This is possible since the renderer is a public global variable, and it would make the interfaces and interactions much simpler.
>
> This would work for the current use case. But the sampling class is intended to be used in other sampling cases as well. It can be used to sample any 2-dimensional function; using the warp functions it can be used e.g. to sample a hemisphere of incoming directions. If you have a suggestion for an easier interface without having the SAMPLING2D class know about the current user (the ray tracer in this case), let me know.

I see, and I don't have a better idea at the moment. So please commit your patch.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-28 13:29:01

Hi!

> I can't look very deeply at the moment. In any case I think you should move the SAMPLING2D class into utils, since it is not needed by anything in kernel/.

Yes, that makes sense.

> There also seems to be some complicated interaction between the renderer and the sampling object in RAYTRACER::Get_Pixel(). It might be easier to have a method SAMPLING2d::Calc_Pixel(u,v,du,dv), which in turn calls Renderer->Trace_Camera_Ray() as often as it needs to, and returns the final colour. This is possible since the renderer is a public global variable, and it would make the interfaces and interactions much simpler.

This would work for the current use case. But the sampling class is intended to be used in other sampling cases as well. It can be used to sample any 2-dimensional function; using the warp functions it can be used e.g. to sample a hemisphere of incoming directions. If you have a suggestion for an easier interface without having the SAMPLING2D class know about the current user (the ray tracer in this case), let me know.

Andreas
From: Martin R. <ma...@MP...> - 2004-10-28 13:09:13

Hi Andreas!

> I committed some improvements to the triangle mesh code. It now uses a fast rejection mechanism, which can speed up rendering of the teapot scene by a factor of about 4.

It seems to work nicely for me, thanks!

> The next thing would be hierarchical ordering of the triangles to handle huge meshes.
>
> I also added code to smooth the triangle mesh using normal interpolation. The teapot looks much nicer now.
>
> Apart from the triangle mesh stuff I submitted a patch in the SourceForge system. It introduces a flexible sampling class. This comes from my path tracing code, but I already added it to the ray tracer, which uses it for subsampling pixels. See phongtest.cxx for an example of how to use it.
>
> Please have a look at the patch. I'd like to commit it to CVS if nobody shouts... ;-)

I can't look very deeply at the moment. In any case I think you should move the SAMPLING2D class into utils, since it is not needed by anything in kernel/.

There also seems to be some complicated interaction between the renderer and the sampling object in RAYTRACER::Get_Pixel(). It might be easier to have a method SAMPLING2d::Calc_Pixel(u,v,du,dv), which in turn calls Renderer->Trace_Camera_Ray() as often as it needs to, and returns the final colour. This is possible since the renderer is a public global variable, and it would make the interfaces and interactions much simpler.

Thanks, Martin
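[Editor's note: to make Martin's proposal concrete, it amounts to moving the subsampling loop out of RAYTRACER::Get_Pixel() and into the sampling class itself. Below is a rough sketch of that shape of interface; the types COLOUR and RENDERER, the sample-generation helpers, and all signatures are hypothetical stand-ins, not the real Ray++ declarations.]

```cpp
#include <cstdlib>

// Hypothetical sketch of the interface proposed above; names are placeholders.
struct COLOUR { float r, g, b; };

struct RENDERER
{
  // Trace a single camera ray through image-plane position (u, v).
  virtual COLOUR Trace_Camera_Ray(double u, double v) = 0;
  virtual ~RENDERER() {}
};

extern RENDERER *Renderer;   // the public global renderer mentioned in the mail

class SAMPLING2D
{
public:
  // The sampling object decides how many rays to shoot inside the pixel
  // footprint (u, v) +/- (du, dv)/2 and how to place them; the caller
  // only sees the final averaged colour.
  COLOUR Calc_Pixel(double u, double v, double du, double dv)
  {
    COLOUR sum = {0, 0, 0};
    const int n = num_samples();
    for (int i = 0; i < n; ++i)
    {
      double su, sv;
      next_sample(su, sv);               // su, sv in [0, 1)
      COLOUR c = Renderer->Trace_Camera_Ray(u + (su - 0.5) * du,
                                            v + (sv - 0.5) * dv);
      sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= n; sum.g /= n; sum.b /= n;
    return sum;
  }

private:
  int num_samples() const { return 16; }          // placeholder strategy
  void next_sample(double &su, double &sv)        // placeholder: uniform random;
  {                                               // a real class would use a
    su = std::rand() / (RAND_MAX + 1.0);          // jittered or QMC pattern here
    sv = std::rand() / (RAND_MAX + 1.0);
  }
};
```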
From: Andreas K. <and...@we...> - 2004-10-27 14:30:13

Hi all!

I committed some improvements to the triangle mesh code. It now uses a fast rejection mechanism, which can speed up rendering of the teapot scene by a factor of about 4. The next thing would be hierarchical ordering of the triangles to handle huge meshes.

I also added code to smooth the triangle mesh using normal interpolation. The teapot looks much nicer now.

Apart from the triangle mesh stuff I submitted a patch in the SourceForge system. It introduces a flexible sampling class. This comes from my path tracing code, but I already added it to the ray tracer, which uses it for subsampling pixels. See phongtest.cxx for an example of how to use it.

Please have a look at the patch. I'd like to commit it to CVS if nobody shouts... ;-)

Have fun, Andreas
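[Editor's note: the "smoothing using normal interpolation" mentioned above is the classic trick of blending the per-vertex normals across each triangle with the barycentric coordinates of the hit point, so the shading hides the facets. A generic sketch follows; it is not the Ray++ triangle_mesh code, and VECTOR plus the normalisation helper are assumed.]

```cpp
#include <cmath>

// Generic sketch of smooth shading by vertex-normal interpolation.
// VECTOR and normalize() are assumed helpers, not Ray++ types.
struct VECTOR { double x, y, z; };

inline VECTOR normalize(const VECTOR &v)
{
  double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
  return VECTOR{ v.x/len, v.y/len, v.z/len };
}

// n0, n1, n2: unit normals stored at the triangle's three vertices.
// (b1, b2): barycentric coordinates of the hit point with respect to
// vertices 1 and 2; vertex 0 gets weight 1 - b1 - b2.
inline VECTOR shading_normal(const VECTOR &n0, const VECTOR &n1, const VECTOR &n2,
                             double b1, double b2)
{
  double b0 = 1.0 - b1 - b2;
  VECTOR n{ b0*n0.x + b1*n1.x + b2*n2.x,
            b0*n0.y + b1*n1.y + b2*n2.y,
            b0*n0.z + b1*n1.z + b2*n2.z };
  return normalize(n);   // renormalise: the blend is generally not unit length
}
```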
From: Andreas K. <and...@we...> - 2004-10-13 14:49:02

Hi!

> I have a demo executable (generic_test.cxx), which should illustrate how everything works. It's now in CVS.

Nice textures. Nice concept. Compact code. Good performance. :-)

Well done!

Andreas
From: Martin R. <ma...@MP...> - 2004-10-13 14:23:44

Hi!

> Could you wait with that a bit? I still have to implement quite some stuff and I don't want to break compilation of the whole library too often...

Sure :)

> > Then I'd like to commit the RPN calculator which can be used to create generic textures. This was already in my own CVS, but after 0.4 was released.
>
> Do you already have some textures ready? I'm curious what you can do with that.

I have a demo executable (generic_test.cxx), which should illustrate how everything works. It's now in CVS.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-13 14:10:31

Hi!

> I committed the new bifstream/bofstream classes and the new 3DS reading function which uses them.

Already tested them. They work nicely.

> Next in my tree would be the activation of the triangle mesh (i.e. inclusion into ray++.c and creation of the teapot executable).

Could you wait with that a bit? I still have to implement quite some stuff and I don't want to break compilation of the whole library too often...

> Then I'd like to commit the RPN calculator which can be used to create generic textures. This was already in my own CVS, but after 0.4 was released.

Do you already have some textures ready? I'm curious what you can do with that.

Regards, Andreas
From: Martin R. <ma...@MP...> - 2004-10-13 13:07:30

Hi Andreas!

> > It's probably better to keep the whole mesh in a single object and build a sort of octree (or similar) inside it, so that only a small part of the triangles need to be tested.
>
> I'll check if I can re-use the hmaker for this purpose nevertheless. I could make the triangle mesh one object for the renderer but build my own object/triangle hierarchy inside the triangle mesh (like you suggest). This would be quite similar to what hmaker does, so the code could be re-used (although the interface might have to change a bit for this, maybe make it a template class).

I think you should definitely try that. The interface changes shouldn't be a problem.

> BTW: In the paper I mentioned some time ago (XU Zhi-Yuan, et al: An Efficient Rejection Test for Ray/Triangle Mesh Intersection) they use a different approach. They have a fast rejection test for ray/triangle intersection which is based on the following fact: given any plane the ray is part of, a triangle cannot intersect the ray if all its 3 vertices lie on the same side of the plane. (I plan to use this rejection test as well.)

I'm sure this could help a lot.

> But they do not build a 3D object hierarchy like we discuss here. Instead, they build a quad-tree for the vertices (as seen from the observer), so that the position of vertices relative to the plane can be determined quickly. However, this approach only works for primary rays, and only if the primary rays really come from the same point in space, as far as I understood it (which is not the case if e.g. you do focal blurring). Therefore I do not want to use this approach (it would be nasty to implement anyway, since the object would need information about the observer) but go the 3D hierarchy way.

I agree completely, even if this solution might not be as fast as the one proposed in the paper. The object shouldn't know about the camera, and should even work properly for focal blurring and other things we are not even thinking about yet.

Cheers, Martin
From: Martin R. <ma...@MP...> - 2004-10-13 13:01:00

Hi all,

I committed the new bifstream/bofstream classes and the new 3DS reading function which uses them.

Next in my tree would be the activation of the triangle mesh (i.e. inclusion into ray++.c and creation of the teapot executable). Then I'd like to commit the RPN calculator which can be used to create generic textures. This was already in my own CVS, but after 0.4 was released.

After that I'll think a bit about 2D and 3D Bresenham DDAs, because we will certainly need them for height field traversals and probably also for octree-like bounding hierarchies (which may also be interesting for the triangle meshes).

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-13 07:44:24

Hi!

> I think it would be very hard (and confusing) to subdivide the mesh in a way that it appears as several objects to the renderer. This approach would let the global hmaker do all the work automatically, but I expect it would be really nasty to write.

Yes, you're right. The implementation would be a bit nasty...

> It's probably better to keep the whole mesh in a single object and build a sort of octree (or similar) inside it, so that only a small part of the triangles need to be tested.

I'll check if I can re-use the hmaker for this purpose nevertheless. I could make the triangle mesh one object for the renderer but build my own object/triangle hierarchy inside the triangle mesh (like you suggest). This would be quite similar to what hmaker does, so the code could be re-used (although the interface might have to change a bit for this, maybe make it a template class).

BTW: In the paper I mentioned some time ago (XU Zhi-Yuan, et al: An Efficient Rejection Test for Ray/Triangle Mesh Intersection) they use a different approach. They have a fast rejection test for ray/triangle intersection which is based on the following fact: given any plane the ray is part of, a triangle cannot intersect the ray if all its 3 vertices lie on the same side of the plane. (I plan to use this rejection test as well.) But they do not build a 3D object hierarchy like we discuss here. Instead, they build a quad-tree for the vertices (as seen from the observer), so that the position of vertices relative to the plane can be determined quickly. However, this approach only works for primary rays, and only if the primary rays really come from the same point in space, as far as I understood it (which is not the case if e.g. you do focal blurring). Therefore I do not want to use this approach (it would be nasty to implement anyway, since the object would need information about the observer) but go the 3D hierarchy way.

Regards, Andreas
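[Editor's note: the plane-based rejection test described above is easy to sketch: pick any plane that contains the ray; if all three triangle vertices have signed distances of the same sign with respect to that plane, the triangle cannot cross the plane and therefore cannot hit the ray. The code below is a generic illustration, not the Ray++ or paper implementation; VECTOR and the vector helpers are assumed.]

```cpp
// Sketch of the ray/triangle rejection test discussed above.
// VECTOR and its helpers are assumed, not taken from Ray++.
struct VECTOR { double x, y, z; };

inline VECTOR cross(const VECTOR &a, const VECTOR &b)
{ return VECTOR{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }

inline double dot(const VECTOR &a, const VECTOR &b)
{ return a.x*b.x + a.y*b.y + a.z*b.z; }

inline VECTOR sub(const VECTOR &a, const VECTOR &b)
{ return VECTOR{ a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns true if the triangle (v0, v1, v2) can be rejected outright.
// 'plane_normal' must be the normal of some plane containing the ray,
// e.g. cross(ray_dir, any vector not parallel to ray_dir); the plane
// passes through ray_origin.
inline bool reject_triangle(const VECTOR &ray_origin, const VECTOR &plane_normal,
                            const VECTOR &v0, const VECTOR &v1, const VECTOR &v2)
{
  double d0 = dot(plane_normal, sub(v0, ray_origin));
  double d1 = dot(plane_normal, sub(v1, ray_origin));
  double d2 = dot(plane_normal, sub(v2, ray_origin));
  // All strictly positive or all strictly negative: the triangle lies
  // entirely on one side of a plane containing the ray, so no hit is possible.
  return (d0 > 0 && d1 > 0 && d2 > 0) || (d0 < 0 && d1 < 0 && d2 < 0);
}
```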
From: Martin R. <ma...@MP...> - 2004-10-12 19:19:39

Hi!

> Another thing about the triangle mesh: I thought about splitting up huge triangle meshes into several smaller ones to speed up the ray intersection test (I'll implement a fast rejection test in the mesh as well, but an object hierarchy approach would additionally be good for huge meshes).

Both methods will be very necessary for large meshes.

> On the other hand it would be nice to share one data structure for the mesh between the sub meshes. Ideally, a shape would have some interface to the hmaker so that they can work together on building an appropriate object hierarchy. Did you already spend some thought on something like this?

Yes, and so far it has always given me a headache :)

I think it would be very hard (and confusing) to subdivide the mesh in a way that it appears as several objects to the renderer. This approach would let the global hmaker do all the work automatically, but I expect it would be really nasty to write. It's probably better to keep the whole mesh in a single object and build a sort of octree (or similar) inside it, so that only a small part of the triangles need to be tested. But this certainly needs a lot of thought, and exactly for that reason I was so keen to see Mikael's implementation. I'll let you know if I have an idea.

Cheers, Martin
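[Editor's note: as a rough picture of the "octree inside the mesh" idea, the mesh stays a single renderer-level object but internally keeps a spatial tree whose leaves hold indices into the shared triangle array, so a ray only tests triangles in the cells it actually passes through. The layout below is a hypothetical sketch, not the eventual Ray++ design.]

```cpp
#include <vector>

// Minimal sketch of an in-mesh spatial subdivision node, along the lines
// discussed above. All names are illustrative only.
struct AABB
{
  double min[3], max[3];          // axis-aligned bounding box of the node
};

struct OCTREE_NODE
{
  AABB bounds;
  OCTREE_NODE *children[8];       // null for leaves or empty octants
  std::vector<int> triangles;     // indices into the mesh's shared triangle
                                  // array (filled only in leaf nodes)
  OCTREE_NODE() : bounds(), children() {}
};

// During intersection, a ray descends only into children whose bounding
// boxes it hits, and the triangle indices found in the visited leaves are
// tested against the single shared vertex/triangle storage of the mesh.
```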
From: Andreas K. <and...@we...> - 2004-10-12 16:20:23

Hi!

> > Looks nice! Thanks!
> > (I assume that it also works.. :-))
>
> For me, it did; I cannot say much more yet...

Well, my code was only tested with teapot.3ds too. So if this still works I'm a happy man... ;-)

> The only reason I looked through the code was because I was curious and tried to get it working by myself.

And I thought nobody would take notice...

Another thing about the triangle mesh: I thought about splitting up huge triangle meshes into several smaller ones to speed up the ray intersection test (I'll implement a fast rejection test in the mesh as well, but an object hierarchy approach would additionally be good for huge meshes). On the other hand it would be nice to share one data structure for the mesh between the sub meshes. Ideally, a shape would have some interface to the hmaker so that they can work together on building an appropriate object hierarchy. Did you already spend some thought on something like this?

Andreas
From: Martin R. <ma...@MP...> - 2004-10-12 15:49:54

Hi Andreas!

> Looks nice!

Thanks!

> (I assume that it also works.. :-))

For me, it did; I cannot say much more yet...

> Can you commit this when your new bifstream class is ready?

Sure, will do.

> In the future I'll try to submit new code only when I'm really done with it, so that it's not too much work for you fixing my stuff... ;-)

Please don't say that. Your changes didn't break anything in Ray++ because the big_endian constant was not yet used in the code, and because you didn't enable compilation of the trianglemesh by default. The only reason I looked through the code was because I was curious and tried to get it working by myself.

Cheers, Martin