From: Andreas K. <and...@we...> - 2004-10-12 15:24:29

Hi Martin,

> I have put together a version of Load3DS() which uses the new bifstream class.
> Does this look OK?

Looks nice! Thanks! (I assume that it also works... :-)) Can you commit this when your new bifstream class is ready? In the future I'll try to submit new code only when I'm really done with it, so that it's not too much work for you fixing my stuff... ;-)

Andreas
From: Martin R. <ma...@MP...> - 2004-10-12 12:21:24

Hi!

>> In any case, we could replace the
>> #ifdef _MSC_VER
>> lines in config.h with something like
>> #ifdef WIN32
>> and the problem should go away.
>
> I wouldn't do that. MSVC++ 7.1 can cope with the new templates to select
> the right types.

That's good news!

> With the endianness tester we could use the standard mechanism
> for Windows as well and just have some other specialities (min, max etc.).
> Moreover, I'm not sure what will happen on 64-bit Windows; probably the
> defines are wrong there. Compile-time checks are better...

Certainly.

>> Completely apart from that: I have written binary stream classes which take
>> the endianness as a constructor argument, not as a template parameter.
>> I'll check them in later today.
>
> Nice. So I can leave my endianness test class?

Yes.

> Something different: how do you create patches? I did a test yesterday and
> I was not able to apply the patch. What I did was
>
>> cvs diff -u config\config.h programs\teapot.cxx utils\twister.h
>
> in the raypp directory. Then I wanted to apply the patch (on my laptop) using
>
> patch -u -i mypatch.txt
>
> but that didn't work. First it did not find the files (although when typing them
> manually they were found) and then all changes were rejected.
> I did all that on the Windows platform, so it could be wrong/non-working/
> incompatible versions of diff/patch, but I want to make sure I do not have
> a general problem there. Any ideas?

What you did sounds correct. I don't use diff and patch very often, because I can access CVS from all my machines, so I don't have much experience. If you can send me the patch, I can try to find out what's wrong.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-12 11:56:03

Hi!

> In any case, we could replace the
> #ifdef _MSC_VER
> lines in config.h with something like
> #ifdef WIN32
> and the problem should go away.

I wouldn't do that. MSVC++ 7.1 can cope with the new templates to select the right types. With the endianness tester we could use the standard mechanism for Windows as well and just have some other specialities (min, max etc.). Moreover, I'm not sure what will happen on 64-bit Windows; probably the defines are wrong there. Compile-time checks are better...

> Completely apart from that: I have written binary stream classes which take
> the endianness as a constructor argument, not as a template parameter.
> I'll check them in later today.

Nice. So I can leave my endianness test class?

Something different: how do you create patches? I did a test yesterday and I was not able to apply the patch. What I did was

> cvs diff -u config\config.h programs\teapot.cxx utils\twister.h

in the raypp directory. Then I wanted to apply the patch (on my laptop) using

patch -u -i mypatch.txt

but that didn't work. First it did not find the files (although when typing them manually they were found) and then all changes were rejected. I did all that on the Windows platform, so it could be wrong/non-working/incompatible versions of diff/patch, but I want to make sure I do not have a general problem there. Any ideas?

Andreas
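The diff/patch round trip described here can be reproduced with GNU diff and patch; the usual pitfalls are the `-p` path-strip level and backslashes in DOS-style paths (patch expects forward slashes in the patch header). A minimal sketch with hypothetical files, not the actual raypp tree:

```shell
# Create two versions of a tree, diff them, and apply the patch.
set -e
mkdir -p old/config new/config
printf '#define HAVE_FOO 0\n' > old/config/config.h
printf '#define HAVE_FOO 1\n' > new/config/config.h

# diff exits with status 1 when the files differ, hence the "|| true"
diff -ru old new > mypatch.txt || true

# -p1 strips the leading "old/" / "new/" path component written into
# the patch header, so the patch applies inside the old tree
cd old
patch -p1 < ../mypatch.txt
grep 'HAVE_FOO 1' config/config.h && echo "patch applied"
```

With `-p0` (or no `-p`), patch looks for the literal header paths (`old/config/config.h`), which is one common way to get "can't find file" followed by rejects.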
From: Martin R. <ma...@MP...> - 2004-10-12 11:47:02

Hi Andreas,

I have put together a version of Load3DS() which uses the new bifstream class. Does this look OK?

Cheers, Martin

/* Function to load a triangle mesh from a 3DS file
   based on code of Damiano Vitulli <in...@sp...> */
bool TRIANGLE_MESH::Load3DS (const char *filename)
  {
  bifstream file (filename, file_is_lsb);
  if (!file) return false;

  while (!file.eof())
    {
    uint2 chunk_id, qty;
    uint4 chunk_length;
    file >> chunk_id;
    file >> chunk_length;
    cout << "Chunk ID: " << chunk_id << " Length: " << chunk_length << endl;

    switch (chunk_id)
      {
      // TRI_VERTEXL: vertices list
      // Chunk length: 1 x uint2 (number of vertices)
      //             + 3 x float4 (vertex coordinates) x (number of vertices)
      //             + subchunks
      case 0x4110:
        file >> qty;
        cout << "Vertices list: " << qty << " vertices" << endl;
        for (int i=0; i<qty; ++i)
          {
          float4 x, y, z;
          file >> x >> y >> z;
          AddVertex (VECTOR (x, y, z));
          }
        break;

      // TRI_FACEL1: polygons (faces) list
      // Chunk ID: 4120 (hex)
      // Chunk length: 1 x uint2 (number of polygons)
      //             + 3 x uint2 (polygon points) x (number of polygons)
      //             + subchunks
      case 0x4120:
        file >> qty;
        cout << "Faces list: " << qty << " faces" << endl;
        for (int i=0; i<qty; ++i)
          {
          uint2 a, b, c, face_flags;
          file >> a >> b >> c >> face_flags;
          AddTriangle (a, b, c);
          }
        break;

      // EDIT_OBJECT: object block, info for each object
      case 0x4000:
        cout << "Object block: skipping name" << endl;
        {
        byte c;
        do { file >> c; } while (c!=0);
        }
        break;

      case 0x4d4d: // main chunk
      case 0x3d3d: // editor chunk
      case 0x4100: // triangle mesh chunk
        cout << " ...superchunk" << endl;
        break;

      // skip all other chunks
      default:
        cout << " ...skipping" << endl;
        file.seekg (chunk_length-6, ios::cur);
      }
    }
  return true;
  }
From: Martin R. <ma...@MP...> - 2004-10-12 11:19:48

Hi Andreas!

> Well, yes. I'm currently developing under Win32 using Dev-C++, an IDE
> using gcc. My problem is that I do not have the environment to run the
> configure scripts... :-(

Hmm, I thought that you need Cygwin or MinGW in order to run gcc on Windows, and both of these packages should have enough functionality to run the configure script.

In any case, we could replace the

#ifdef _MSC_VER

lines in config.h with something like

#ifdef WIN32

and the problem should go away.

Completely apart from that: I have written binary stream classes which take the endianness as a constructor argument, not as a template parameter. I'll check them in later today.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-12 10:59:45

> Do you still encounter configure problems with the new configure scripts?

Well, yes. I'm currently developing under Win32 using Dev-C++, an IDE using gcc. My problem is that I do not have the environment to run the configure scripts... :-(

Andreas
From: Martin R. <ma...@MP...> - 2004-10-11 16:43:50

Hi!

> Ah, didn't think of that :-(
> I'll change it back. I have to do some changes for MSVC++ anyway.

OK, let's change it back for now. But probably we can switch back soon.

>> If we could do it via expression templates (like we do with the datatypes) that
>> would work, but I don't know how to do that.
>
> The expression template code you introduced for compile-time selection of
> types is really cool. But I have no idea how to use something like this for
> an endianness test either.

I think it doesn't work for endianness, since the test always requires taking the address of a (constant) variable, and this destroys the compiler's ability to evaluate it at compile time.

> First I wanted to do something like
>
> const byte swap_test[2] = { 0, 1 };
> const bool big_endian = (*(uint2*)swap_test) != 1;
>
> But this cannot be done either because the order in which the constants are
> initialized is not defined.

It is actually defined: they are initialized in the order they appear in the translation unit. There is no defined order for constants in different translation units.

> But I guess it wouldn't work for the bistream class anyway...

That's true, I tried...

> Another option would be to change the bistream class to not use templates.
> It would be a bit less elegant and performance would be a bit worse, but the
> class is not really a central point of ray++. So I think it would be worth
> changing it if we get rid of the configure problems.

I agree, and I already have some ideas how to do it. Do you still encounter configure problems with the new configure scripts?

> Other opinions?
>
>> - In trianglemesh.cxx a function filelength() is called, which does not exist
>> on Linux. We will have to find something more portable.
>
> Yeah, I thought that might be a problem, but didn't change it yet.

In principle one could just execute the while loop until EOF is encountered.

> The class is far from being finished so there might be a few more problems
> (apart from the fact that it is still veeeery sloooow). But I was able to
> render my first teapot already!

Me too :)

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-11 16:19:45

Hi!

> - I would love to remove the configure test for endianness and replace it with
> a class like you did, but unfortunately this breaks the binary stream classes
> (kernel/bstream.h) in the respect that you can no longer write
> bistream<file_is_lsb> inputfile("foo.bin");
> This only works if "file_is_lsb" is a compile-time constant, and this can only
> be done as long as we use a configure mechanism.

Ah, didn't think of that :-( I'll change it back. I have to do some changes for MSVC++ anyway.

> If we could do it via expression templates (like we do with the datatypes) that
> would work, but I don't know how to do that.

The expression template code you introduced for compile-time selection of types is really cool. But I have no idea how to use something like this for an endianness test either. First I wanted to do something like

const byte swap_test[2] = { 0, 1 };
const bool big_endian = (*(uint2*)swap_test) != 1;

But this cannot be done either because the order in which the constants are initialized is not defined. But I guess it wouldn't work for the bistream class anyway...

Another option would be to change the bistream class to not use templates. It would be a bit less elegant and performance would be a bit worse, but the class is not really a central point of ray++. So I think it would be worth changing it if we get rid of the configure problems. Other opinions?

> - In trianglemesh.cxx a function filelength() is called, which does not exist
> on Linux. We will have to find something more portable.

Yeah, I thought that might be a problem, but didn't change it yet.

> - TRIANGLE_MESH::All_Intersections() appears to return only the first
> intersection with the mesh.

I'll have a look. The class is far from being finished so there might be a few more problems (apart from the fact that it is still veeeery sloooow). But I was able to render my first teapot already!

> TRIANGLE_MESH::Load3DS() might be a good candidate for using the bistream class.
> (I expect that the 3DS files are always little-endian, so they have to be
> swapped on a big-endian machine.)

Yes, I guess so as well. Thanks for the code review,

Andreas
From: Martin R. <ma...@MP...> - 2004-10-11 11:45:27

Hi Andreas,

I just noticed your checkins from last night. It seems that the triangle mesh is progressing; that's good to know! I tried to compile the code on Linux, and have a few comments:

- I would love to remove the configure test for endianness and replace it with a class like you did, but unfortunately this breaks the binary stream classes (kernel/bstream.h) in the respect that you can no longer write

    bistream<file_is_lsb> inputfile("foo.bin");

  This only works if "file_is_lsb" is a compile-time constant, and this can only be done as long as we use a configure mechanism. If we could do it via expression templates (like we do with the datatypes) that would work, but I don't know how to do that.

- In trianglemesh.cxx a function filelength() is called, which does not exist on Linux. We will have to find something more portable.

- TRIANGLE_MESH::All_Intersections() appears to return only the first intersection with the mesh.

TRIANGLE_MESH::Load3DS() might be a good candidate for using the bistream class. (I expect that the 3DS files are always little-endian, so they have to be swapped on a big-endian machine.)

Cheers, Martin
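A binary input stream that takes the byte order at run time, as Martin's new classes do, can be sketched roughly like this. The names `bifstream` and `file_is_lsb` follow the mails, but the body is a guess, not the actual raypp implementation; only a 16-bit reader is shown.

```cpp
#include <cstdint>
#include <fstream>

enum byte_order { file_is_lsb, file_is_msb };

// Binary input stream whose byte order is a constructor argument, so
// "file_is_lsb" no longer needs to be a compile-time constant.
class bifstream
  {
  std::ifstream s;
  bool lsb;    // byte order of the *file*, not of the host

  public:
    bifstream (const char *name, byte_order order)
      : s (name, std::ios::binary), lsb (order == file_is_lsb) {}

    explicit operator bool () const { return bool(s); }
    bool eof () const { return s.eof(); }

    bifstream &operator>> (std::uint16_t &v)
      {
      unsigned char b[2];
      s.read (reinterpret_cast<char *>(b), 2);
      // assemble the value from the file's byte order; this is
      // endianness-independent, at the price of a run-time branch
      v = lsb ? std::uint16_t(b[0] | (b[1] << 8))
              : std::uint16_t((b[0] << 8) | b[1]);
      return *this;
      }
  };
```

The run-time flag costs a branch (or virtual call) per read, which is exactly why the template design existed; whether that matters for a class that is "not really a central point of ray++" is the trade-off the thread discusses.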
From: Martin R. <ma...@MP...> - 2004-10-08 10:18:25

Hi!

> Sounds like a good way to do it...
> I guess we should add comments like this to the source code. Maybe with
> citations of related papers/books. Would help others to understand it more easily.

I'll try to do that from now on.

> Don't know that algorithm. But I have no objections if you go ahead and see if
> it is faster than the current one ;-)

If you are curious, a description is available at http://www.acm.org/tog/resources/RTNews/html/rtnews3a.html#art6 — not sure when I'll have the time to implement it, though.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-08 09:46:50

Hi!

>> Could you give a short intro how it works?
>
> I'll try (and hope I remember correctly).
>
> 1) First all infinite objects are discarded (they don't belong in the
>    bounding box hierarchy).
> 2) If the number of remaining objects in the list is smaller than some
>    constant, the algorithm stops.
> 3) The list of objects is sorted along the x-axis and split in two parts
>    in such a way that a cost function is minimized. The cost function
>    measures the cost of an intersection test if both parts were enclosed
>    in a bounding box each.
> 4) The same is done for the y- and z-directions.
> 5) The split direction with the smallest cost is chosen.
> 6) For both newly created parts, step 2) is executed recursively.

Sounds like a good way to do it... I guess we should add comments like this to the source code. Maybe with citations of related papers/books. Would help others to understand it more easily.

> This is meant to be an approximation of the Goldsmith & Salmon algorithm, which
> I didn't fully understand at the time I read it. It might be worthwhile to try
> this one now.

Don't know that algorithm. But I have no objections if you go ahead and see if it is faster than the current one ;-)

Andreas
From: Andreas K. <and...@we...> - 2004-10-08 09:42:09

Hi!

>> And I do not have a Get_MC_Diffuse_Dir equivalent; I assume these
>> rays to be distributed equally over the whole hemisphere.
>
> That is a model assumption which I wouldn't like to make. If we want
> to do Monte Carlo tracing for diffuse radiation, we should add this method.

Hm, I'm not sure. It is not a real limitation to assume that you have "perfectly diffuse" radiation. The light reflected from the surface is divided into two parts: diffuse + specular (or better, non-diffuse). As the non-diffuse reflection can be an arbitrary distribution, you can just define the diffuse part to be zero and add your "custom" diffuse reflection to the non-diffuse reflection (actually, to be correct, the diffuse part should be the "perfectly diffuse" fraction of your custom diffuse distribution).

OK, there might be algorithms that sample diffuse and non-diffuse reflection differently, which could be a drawback of this approach. But on the other hand, an algorithm could sample only the non-diffuse reflection and calculate the diffuse reflection using another method (e.g. a radiosity-like approach). In this case you have to have some knowledge about the surface's diffuse reflection distribution, and the easiest way is to assume "perfectly diffuse" reflection. Or am I missing a point here?

>> For photon tracing I additionally needed the Get_BRDF method.
>
> I need some time to understand how this works, but it should be OK.

There might be a way to merge the Monte Carlo methods and the BRDF method, but I have to get deeper into photon tracing again first...

Andreas
From: Martin R. <ma...@MP...> - 2004-10-08 08:01:26

Hi!

> I'm not really an expert on the subject as I just discovered this technique
> recently. I thought that you could get some performance optimizations
> whenever several vector or matrix operations are combined into a long
> expression. But you are probably right, I just skimmed through some of the
> code and there's actually nothing like that...

Actually, there might be quite a lot of such expressions (for example, a ray is transformed into another coordinate system and afterwards evaluated at a certain distance). But we usually have to keep the intermediate results (in this case the transformed ray), since we need them again later. So the expression templates (which eliminate the intermediate result) don't help here.

>> The most promising area of speedup is IMO the bounding box hierarchy
>> generation. The current algorithm is very ad hoc and could certainly be
>> improved.
>
> You're talking about POV_HMAKER, right?

Yes.

> From that name I was assuming it is the same technique POVRay uses (which
> can't be that bad, although I don't know it).

It was at first based on the POVRay one, but I added some additional criteria which seemed to speed things up, and by now it is probably quite different.

> Could you give a short intro how it works?

I'll try (and hope I remember correctly).

1) First all infinite objects are discarded (they don't belong in the bounding box hierarchy).
2) If the number of remaining objects in the list is smaller than some constant, the algorithm stops.
3) The list of objects is sorted along the x-axis and split in two parts in such a way that a cost function is minimized. The cost function measures the cost of an intersection test if both parts were enclosed in a bounding box each.
4) The same is done for the y- and z-directions.
5) The split direction with the smallest cost is chosen.
6) For both newly created parts, step 2) is executed recursively.

This is meant to be an approximation of the Goldsmith & Salmon algorithm, which I didn't fully understand at the time I read it. It might be worthwhile to try this one now.

Cheers, Martin
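Step 3) of this procedure can be sketched as follows. This is a hypothetical simplification (it re-merges the boxes for every candidate split, which is O(n²)), not the actual POV_HMAKER code, and the cost shown is a plain surface-area-times-object-count estimate of the intersection work:

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct BBox { double lo[3], hi[3]; };

// Surface area of a box: proportional to the chance a random ray hits it.
static double surface (const BBox &b)
  {
  double d0 = b.hi[0]-b.lo[0], d1 = b.hi[1]-b.lo[1], d2 = b.hi[2]-b.lo[2];
  return 2.0 * (d0*d1 + d1*d2 + d2*d0);
  }

static BBox merge (const BBox &a, const BBox &b)
  {
  BBox r;
  for (int i = 0; i < 3; ++i)
    {
    r.lo[i] = std::min (a.lo[i], b.lo[i]);
    r.hi[i] = std::max (a.hi[i], b.hi[i]);
    }
  return r;
  }

// Sort along one axis and return {split index, cost}: objects [0,split)
// go left, [split,n) go right. Needs at least two boxes.
std::pair<std::size_t, double> best_split (std::vector<BBox> boxes, int axis)
  {
  std::sort (boxes.begin(), boxes.end(),
             [axis] (const BBox &a, const BBox &b)
             { return a.lo[axis] < b.lo[axis]; });

  std::size_t best = 1;
  double best_cost = 1e300;
  for (std::size_t s = 1; s < boxes.size(); ++s)
    {
    BBox left = boxes[0], right = boxes[s];
    for (std::size_t i = 1; i < s; ++i)               left  = merge (left,  boxes[i]);
    for (std::size_t i = s + 1; i < boxes.size(); ++i) right = merge (right, boxes[i]);
    // expected intersection work: enclosing area times object count
    double cost = surface (left) * s + surface (right) * (boxes.size() - s);
    if (cost < best_cost) { best_cost = cost; best = s; }
    }
  return {best, best_cost};
  }
```

Steps 4) and 5) would then call this for all three axes and keep the cheapest, recursing into the two halves as in step 6).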
From: Martin R. <ma...@MP...> - 2004-10-08 08:00:55

Hi!

> Here are the important parts inline (and I attached it again):

Thanks, now that I'm subscribed I get the attachments too.

> I used an approach slightly different from your Get_MC_xxx_Dir methods.
> My functions ("general ray" is probably not a very good name...) get a
> parameter (Uni), which is "warped" into the right probability distribution.
> This approach has the advantage that different sampling methods (random,
> regular grid, jittered) can be used by the caller.

I see.

> And I do not have a Get_MC_Diffuse_Dir equivalent; I assume these
> rays to be distributed equally over the whole hemisphere.

That is a model assumption which I wouldn't like to make. If we want to do Monte Carlo tracing for diffuse radiation, we should add this method.

> For my path tracer implementation I also needed the Get_xxx_Percent
> methods (they could be merged into one method). These numbers are needed
> to decide which next-level ray to shoot (using the "Russian roulette"
> method).

Alternatively we could let the surface itself decide whether to trace a diffuse, reflected or refracted ray. That would be the most general solution, but it interferes badly with your option to control the sampling.

> For photon tracing I additionally needed the Get_BRDF method.

I need some time to understand how this works, but it should be OK.

Cheers, Martin
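The "warping" idea — a caller-supplied uniform point in [0,1]x[0,1] mapped onto a direction distribution — looks roughly like this for the uniform-hemisphere case assumed for the diffuse rays. This is illustrative only; the real Get_General_*_Ray methods warp into the surface's own reflection distribution:

```cpp
#include <cmath>

struct VECTOR { double x, y, z; };

// Map a uniform point (u1,u2) in [0,1]x[0,1] onto the unit hemisphere
// around the z axis with uniform density in solid angle. Setting
// z = cos(theta) = u1 works because the area element on the sphere is
// dz * dphi, so uniform z means uniform area.
VECTOR uniform_hemisphere (double u1, double u2)
  {
  const double pi  = 3.14159265358979323846;
  const double z   = u1;
  const double r   = std::sqrt (1.0 - z*z);
  const double phi = 2.0 * pi * u2;
  return VECTOR{ r*std::cos(phi), r*std::sin(phi), z };
  }
```

Because the mapping is deterministic, the caller is free to feed it random, stratified, or jittered points — the advantage Andreas's interface aims for.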
From: Martin R. <ma...@MP...> - 2004-10-07 17:15:30

Hi Ayal!

> I joined too. I didn't see the announcement about the mailing list
> before, in between all the spam. Sorry about that.
>
> Is the cvs repository up already?

Yes. Just try the CVS link from the project page for access instructions. The home page has also moved to raypp.sf.net, but has not been updated yet.

Cheers, Martin
From: Ayal P. <ap...@xs...> - 2004-10-07 16:51:05

Hi,

I joined too. I didn't see the announcement about the mailing list before, in between all the spam. Sorry about that. Is the cvs repository up already?
From: Andreas K. <and...@we...> - 2004-10-07 12:37:10

Hi!

> I agree that the technique is really exciting, but I don't see many places
> where it can be used in Ray++. For example, all vector and matrix operations
> are already unrolled by hand and inlined where I thought it necessary.
> But if you have a counter-example, where expression templates make a
> noticeable difference, I would certainly like to see it!

I'm not really an expert on the subject as I just discovered this technique recently. I thought that you could get some performance optimizations whenever several vector or matrix operations are combined into a long expression. But you are probably right, I just skimmed through some of the code and there's actually nothing like that...

> [...] Ray++ uses virtual functions only where this is absolutely
> necessary. If we tried to remove them, all of the flexibility would be lost.

That's true. There's probably no better way to do it without sacrificing the flexibility.

> The most promising area of speedup is IMO the bounding box hierarchy
> generation. The current algorithm is very ad hoc and could certainly be
> improved.

You're talking about POV_HMAKER, right? From that name I was assuming it is the same technique POVRay uses (which can't be that bad, although I don't know it). Could you give a short intro how it works?

Andreas.
From: Andreas K. <and...@we...> - 2004-10-07 11:54:27

Hi!

> [...] But I have Ray++-0.3a still lying around (from March 1999).
> If this helps you, it is available from
> http://raypp.sourceforge.net/download/ray++-0.3a.tar.gz

Thanks a lot.

>> I attached my version of surface.h.
>
> I can't find the attachment in the mailing list archive. Do you know how to
> access it?

No, that doesn't seem to be supported :-( Here are the important parts inline (and I attached it again):

virtual float4 Get_Reflected_Percent () const = 0;
virtual float4 Get_Transmitted_Percent () const = 0;
virtual float4 Get_Diffuse_Percent () const = 0;

/** Indicates if Get_General_Reflected_Ray and Get_General_Refracted_Ray
    are implemented for the surface. These functions are needed for
    calculating non-perfectly reflected/refracted rays. */
virtual bool Supports_General_Rays () const = 0;

/** This method takes a point (Uni) out of [0,1]x[0,1] and maps it to a
    reflected ray (Reflected). If Uni is a uniformly distributed
    probability variable, the distribution of the outgoing direction
    matches the reflection distribution function of the surface under
    the conditions described in Info. Thus, uniform sampling for Uni
    will result in importance sampling for the outgoing direction (more
    likely regions of the hemisphere of outgoing directions will be
    sampled better). This method takes only specular reflection into
    account; (perfectly) diffuse reflection is NOT included.
    Modifier describes which parts of the incoming light will be
    reflected (OUT = Modifier * IN). */
virtual void Get_General_Reflected_Ray (const SHADING_INFO &Info, XY Uni,
  RAY &Reflected, COLOUR &Modifier) const = 0;

/** Same as Get_General_Reflected_Ray, but for refraction */
virtual void Get_General_Refracted_Ray (const SHADING_INFO &Info, XY Uni,
  RAY &Reflected, COLOUR &Modifier) const = 0;

/** This method evaluates the bidirectional reflectance distribution
    function of the surface with incoming direction Incident_Dir under
    the conditions specified by Info */
virtual COLOUR Get_BRDF (const SHADING_INFO &Info,
  const VECTOR &Incident_Dir) const = 0;

>> Martin, what are the Get_MC_xxx_Dir methods for in the current
>> interface? Does MC stand for Monte Carlo?
>
> Yes. They are intended to return a "random" direction, into which a reflected
> or refracted ray could be traced. The randomness would be such that, for many
> reflected and refracted rays, their distribution would be the real one.
> This should be all that is necessary for path tracing, but I don't know
> about photon tracing.

I used an approach slightly different from your Get_MC_xxx_Dir methods. My functions ("general ray" is probably not a very good name...) get a parameter (Uni), which is "warped" into the right probability distribution. This approach has the advantage that different sampling methods (random, regular grid, jittered) can be used by the caller.

And I do not have a Get_MC_Diffuse_Dir equivalent; I assume these rays to be distributed equally over the whole hemisphere.

For my path tracer implementation I also needed the Get_xxx_Percent methods (they could be merged into one method). These numbers are needed to decide which next-level ray to shoot (using the "Russian roulette" method).

For photon tracing I additionally needed the Get_BRDF method.

I'll have a look into Jensen's book about photon tracing (I ordered it a few days ago; unfortunately it was not yet published back when I implemented the stuff) and see if there's a suggestion for an interface for surfaces.

Andreas
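The Russian roulette decision driven by the Get_xxx_Percent values can be sketched like this. This is a hypothetical helper, not raypp code; the three fractions are assumed to sum to at most 1, with the remainder meaning absorption:

```cpp
enum RAY_KIND { REFLECTED, TRANSMITTED, DIFFUSE, ABSORBED };

// u is one uniform random number in [0,1); exactly one next-level ray
// (or none) is chosen, with probability equal to its energy fraction.
RAY_KIND choose_next_ray (double p_refl, double p_trans, double p_diff,
                          double u)
  {
  if (u < p_refl)                    return REFLECTED;
  if (u < p_refl + p_trans)          return TRANSMITTED;
  if (u < p_refl + p_trans + p_diff) return DIFFUSE;
  return ABSORBED;
  }
```

Dividing the traced ray's contribution by its selection probability keeps the estimator unbiased while bounding the size of the ray tree, which is the point of the roulette.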
From: Martin R. <ma...@MP...> - 2004-10-07 11:00:38

> I just came across an interesting technique for speeding up vector and
> matrix operations: expression templates.
>
> This really cool technique uses C++ templates to parse expressions at
> compile time and (ideally) creates optimized code that can be close to
> what you can reach by writing hand-optimized code for the given specific
> expression. See http://tvmet.sourceforge.net/introduction.html for details
> and a ready-to-use library for vector/matrix operations.

I have been following expression templates and template metaprograms since the early days of Blitz++ and have also visited an expression template workshop. I agree that the technique is really exciting, but I don't see many places where it can be used in Ray++. For example, all vector and matrix operations are already unrolled by hand and inlined where I thought it necessary. But if you have a counter-example, where expression templates make a noticeable difference, I would certainly like to see it!

> Also we should think about the performance penalties ray++ might have
> because it uses polymorphism/virtual functions. I haven't really had a
> close look but I could imagine that there's some potential for performance
> optimization.

I don't think so. Ray++ uses virtual functions only where this is absolutely necessary. If we tried to remove them, all of the flexibility would be lost. BTW, POVRay also uses virtual functions (emulated in C by function pointers) and has no performance advantage in that respect.

The most promising area of speedup is IMO the bounding box hierarchy generation. The current algorithm is very ad hoc and could certainly be improved.

Cheers, Martin
From: Martin R. <ma...@MP...> - 2004-10-07 10:50:44

Hi!

> I started re-integrating my Monte Carlo code into raypp.
> I noticed that the interface of SURFACE differs quite drastically
> between the current raypp distribution and my code.
> Of course, I had to do some changes to support Monte Carlo
> sampling, but in raypp the interface was changing as well since
> I branched off back in '99, I think. Unfortunately I do not have
> such an old raypp version lying around.
>
> Can anybody help me out here?

Not sure. My own CVS repository of Ray++ died in a head crash, as far as I remember. But I have Ray++-0.3a still lying around (from March 1999). If this helps you, it is available from http://raypp.sourceforge.net/download/ray++-0.3a.tar.gz

> Martin, what are the Get_MC_xxx_Dir methods for in the current
> interface? Does MC stand for Monte Carlo?

Yes. They are intended to return a "random" direction, into which a reflected or refracted ray could be traced. The randomness would be such that, for many reflected and refracted rays, their distribution would be the real one. This should be all that is necessary for path tracing, but I don't know about photon tracing.

> I attached my version of surface.h.

I can't find the attachment in the mailing list archive. Do you know how to access it?

> I guess we have to find an easier interface, this one is quite complex.
> However, photon tracing needs quite a few operations on the surface so
> it won't be easy.

As long as no temporary data has to be stored in the surface object, it should be OK. But of course, the simpler we can make it, the better.

Cheers, Martin
From: Andreas K. <and...@we...> - 2004-10-01 14:04:58

Hi!

I just came across an interesting technique for speeding up vector and matrix operations: expression templates.

This really cool technique uses C++ templates to parse expressions at compile time and (ideally) creates optimized code that can be close to what you can reach by writing hand-optimized code for the given specific expression. See http://tvmet.sourceforge.net/introduction.html for details and a ready-to-use library for vector/matrix operations.

Also we should think about the performance penalties ray++ might have because it uses polymorphism/virtual functions. I haven't really had a close look, but I could imagine that there's some potential for performance optimization.

There's a good paper that describes techniques for speeding up scientific C++ code: http://osl.iu.edu/~tveldhui/papers/techniques/

Andreas
From: Andreas K. <and...@we...> - 2004-09-29 22:04:58

Hi!

I started re-integrating my Monte Carlo code into raypp. I noticed that the interface of SURFACE differs quite drastically between the current raypp distribution and my code. Of course, I had to do some changes to support Monte Carlo sampling, but in raypp the interface was changing as well since I branched off back in '99, I think. Unfortunately I do not have such an old raypp version lying around.

Can anybody help me out here?

Martin, what are the Get_MC_xxx_Dir methods for in the current interface? Does MC stand for Monte Carlo?

I attached my version of surface.h. I guess we have to find an easier interface; this one is quite complex. However, photon tracing needs quite a few operations on the surface so it won't be easy.

Andreas