
From: Adam Moravanszky <amoravanszky@dp...> - 2000-11-12 19:41:26

Hi,

The good news is that the raytracer works. I can render objects with Phong shading perfectly. I also wrote a shader that shades using arbitrary BRDFs, as approximated by the generalized cosine lobe model described in the SIGGRAPH 97 paper "Non-Linear Approximation of Reflectance Functions" by Lafortune, Foo, Torrance, and Greenberg. It is a long paper that describes something very simple: a BRDF is approximated by a finite sum of exponential basis functions of the form

BRDF(u, v) = Sigma [i = 1..m] (Cxi u.x v.x + Cyi u.y v.y + Czi u.z v.z)^ni

Sigma denotes the summation sign; we are summing m basis functions, each of which has four scalar parameters: Cx, Cy, Cz, and n. u and v are the usual vector-valued parameters to the BRDF, denoting the incident and exitant rays (for simple rendering, the eye and light vectors). Note that in practice one has three sets of these parameters, one for each color channel. The run-of-the-mill BRDF can usually be approximated by m = two or three basis functions, called "lobes".

Now for the questions. First, an easy one. I found sets of parameters for this model in several places, one being part of Marschner et al.'s paper "Image-Based BRDF Measurement Including Human Skin". He gives parameters for a white and a black guy (an interesting aside is that he approximates the black person's skin with only two lobes, while the white person gets three. Does this imply that in the near future it will be faster to render black characters in hardware-accelerated computer games? :) ) Along with the data, he gives the following matrix to convert the resulting colors from the RGB color space of his digital camera to monitor RGB:

1.063302  0.382044  0.445346
0.298125  1.667665  0.369540
1.322302  0.446321  2.768624

What confuses me here is that it transforms a LOT of colors to have negative component values. How am I to deal with that? Do I add the largest negative value in the image to all values, and then normalize the image to [0, 1]??
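As a sanity check for implementations, the lobe sum above can be written out in C++ roughly as follows. This is a minimal sketch, not anyone's production code; the names (`Lobe`, `EvalLafortune`) are invented for the example, and negative lobe dot products are clamped to zero since a negative base under a fractional exponent is undefined:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One cosine lobe of the Lafortune et al. model.
// Cx, Cy, Cz weight the component-wise product of the incident
// and exitant directions; n is the specular-like exponent.
struct Lobe { double Cx, Cy, Cz, n; };

// Evaluate the generalized cosine lobe sum for incident direction u
// and exitant direction v, both unit vectors in the local frame
// where z is the surface normal.
double EvalLafortune(const std::vector<Lobe>& lobes,
                     double ux, double uy, double uz,
                     double vx, double vy, double vz)
{
    double sum = 0.0;
    for (const Lobe& l : lobes) {
        double d = l.Cx * ux * vx + l.Cy * uy * vy + l.Cz * uz * vz;
        if (d > 0.0)                 // clamp: lobe contributes nothing
            sum += std::pow(d, l.n); // on the back side of its axis
    }
    return sum;
}
```

With a single lobe Cx = Cy = -1, Cz = 1, the inner expression reduces to v dotted with the mirror reflection of u about the normal, i.e. a Phong-style specular lobe.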
The second question has to do with the data itself. The exponents (the n_i's) are really huge; they behave something like specular exponents. In the paper itself, data is given for matte blue paint. The exponents for the three lobes are 18.6, 2.58, and 63.8. Unfortunately, in my implementation, an exponent greater than about 8 completely forces the lobe's contribution to zero over the entire surface of a sphere. Note that the formula reduces to the dot product of u and v if you set Cxyz to 1. Thus a lobe with such a high exponent only contributes a tiny dot at the mirror angle??

The other problem is that I am not really sure if I'm doing things right. My results look plausible, but I have no idea. I did get some tiny off-specular reflection in the case of the blue paint, like the paper says I should, so I'm not completely off. Did anyone implement this stuff and have some images I can compare against? I take it that I will be working on this for some time, and I wouldn't mind if I could discuss things with someone at length.

I did find a RenderMan shader written by a Stephen H. Westin that does the same thing I want. What I am confused about is how he puts the BRDF data into the shading pipeline. I am no RenderMan expert, but I think he does:

pixel_color = (material.ambient + material.diffuse) * color_texture_map
for each lobe:
    pixel_color += BRDF(u, v) * light_color * (u . N)

where . is the dot product, N the surface normal, and u the to_light vector. What bothers me about this is that he does not multiply the diffuse component by (u . N), but he does multiply the BRDF by it, even though the BRDF already has this info in it. Also, does the BRDF not already account for any and all diffuse? So in short, I'm wondering how a BRDF is supposed to be evaluated in a shader. Also, every paper I saw evaluates BRDFs via some type of radiosity -- most certainly not point lights like me. Is this even supposed to look right when I use a point source?
There is one example I found in Debevec's "Acquiring the Reflectance Field of a Human Face" where he renders a face lit by three monochromatic, effectively point sources. There the result is about as unspectacular as what I'm getting now. To summarize, I am a little too confused now to ask good questions, but considering I wrote all this stuff from scratch since this morning, I can't say I'm disappointed. :)

--
Adam Moravanszky
http://www.n.ethz.ch/student/adammo
From: Adam Moravanszky <amoravanszky@dp...> - 2000-11-12 14:38:01

> Exactly analogous to the singularity that is already bothering him
> when his chosen world axis becomes perpendicular to his surface.

Right, but this is still a more elegant way to do it; I'm not sure why. It's like dying with boots on vs. off, I guess. Not that I have any experience there, or anything. Oh well, whatever. Probably it's that this algorithm is symmetric, as opposed to choosing an arbitrary axis. I appreciate everyone's input.

--
Adam Moravanszky
http://www.n.ethz.ch/student/adammo
From: Adam Moravanszky <amoravanszky@dp...> - 2000-11-12 14:27:26

> Already you are inconsistent. A polyhedral surface (made of polygons)
> has no well-defined surface normal along an edge between non-coplanar
> faces.

No way! I never said 1) that all surfaces I sample are continuous -- I said that IF the surface is continuous, I want to do such and such -- or 2) that the normals I use are a natural choice. I could come up with arbitrary normals at edges (the same way we come up with arbitrary normals at vertices) if I wanted to. I just said some normals are given, from whatever source.

> If, as you say above, you know nothing about the surface except the
> information contained in some discrete sample points, then you have no
> way to know that it is "continuous", do you? And also, no way to
> determine if the basis set is continuous if it is defined only at
> discrete points. The concept "continuous" is vacuous in this context,
> no?

Again, I truly don't know if the surface is continuous. I didn't say I did. But if it IS continuous, I don't want to introduce many artificial discontinuities. If you are really wondering, all of the surfaces are piecewise continuous, like a polygonal surface made up of triangles. As I mentioned, I am raytracing, and thus I don't hit an infinitesimally thin edge with a ray; rather, I always hit either adjoining polygon, so the normal at the edge is really of no interest. The normal toward the edge of a polygon is the limit of the normal as we approach the edge, which is the same normal, as it is constant. I don't see why we have to lead discussions about this instead of focusing on my question, already answered more than well by Charles and Mark.

> Sure, if you have a parametrization, i.e. a coordinate patch, then you
> have a lot more information than just some discrete sample points,
> enough info to be able to decide whether the surface, the
> approximating planes, or a basis set for the approximating planes are
> continuous. Now it makes sense to use the word "continuous".
My message contained the word continuous exactly twice. Please read that part more carefully.

> A surface is said to be "smooth" at every point where it has a well-
> defined tangent plane in this sense. The partial derivatives of the
> parametrization with respect to the two parameters give you a basis
> for the tangent plane at that point, provided they are linearly
> independent. If the parametrization is continuously differentiable,
> with linearly independent partial derivatives, then you have a
> continuously varying basis vector set for the tangent planes. It
> might not be orthonormal, but so long as the two partial derivative
> vectors are linearly independent you can naturally construct a
> continuous orthonormal set from them by Gram-Schmidt.

Right. I could make use of the parameter space of a sphere, for example, if I rendered it by traversing the parameter space via a regular sampling and then transforming the resulting samples into camera space, etc. -- the opposite of raytracing. Unfortunately this leaves holes in the frame buffer. What I am doing is shooting rays for each pixel on the screen, poking the object with them, and essentially using the implicit form (x^2 + y^2 + z^2 = 1 for the sphere). So I don't have coordinates in the parameter space. I know from scientific visualization that transforming a point from world space to the parameter space of an object can be very expensive -- is this ever done in raytracers anyway?

> Note that for many interesting simple surfaces, e.g., the sphere, you
> cannot have a globally continuous or differentiable parametrization,
> even though the surface is everywhere smooth. This is the fact that
> forces us to invent the concepts of "manifold" and "differentiable
> manifold".
> Further, for the sphere, it is a fact that you cannot find EVEN ONE
> continuous unit tangent vector field on the entire sphere, let alone
> the continuous orthonormal pair of tangent vector fields that you
> seek -- a celebrated result of elementary differential geometry.

I wouldn't celebrate this fact too much -- it may be one of the most annoying problems of texture mapping and CG in general.

> This does not mean that the tangent plane itself has singularities --
> it is well defined at every point and varies smoothly from point to
> point. But there exists NO globally nonsingular continuously varying
> basis set for the tangent planes -- that's a fact. If there were,
> then we would not have to suffer the polar singularities of the
> (latitude, longitude) parameter patch (as well as every other possible
> parametrization).

Yes, I know. This is often visualized with 'hairy' spheres, where one can try to brush the hair in some way so that there are no 'sources' or 'sinks'.

> > So is there some better way?
>
> Generally, having local parametrizations (coordinate patches) for a
> continuous surface is the best way to deal with it for most problems,
> including this one.

> > I also tried things like projecting a world coordinate system axis
> > on the normal and so, but then I end up having problems when the dot
> > product happens to be zero.
>
> Your use of the projection onto the approximating plane of a fixed
> world coordinate basis vector works locally, but has singularities
> where the world axis happens to be perpendicular to the surface. But
> no matter, because, in general, any other means would also be likely
> to have at least isolated singularities; you just have to work around
> them somehow. In general, YOU CANNOT EXPECT TO HAVE A GLOBALLY
> CONTINUOUS TANGENT PLANE BASIS SET. But you can in special cases --
> for example, on the torus it is no problem, even trivial.

Unfortunately, the torus is more difficult to collision detect against.
:) I really am wondering if the parametrizations you mention are of any use in a raytracer. At the moment I just don't render areas around singularities -- so spheres have holes at the poles -- instead of switching to some other basis vector pair in a discontinuous way. I want to make sure bugs in the shader are not caused by this. At the moment there must be some bugs: I'm just trying to implement a simple Lambertian shader, but I get the typical bow-tie-shaped highlight (on a sphere) one usually sees in anisotropic rendering demos. I want anisotropic shading later, but not yet!!!

--
Adam Moravanszky
http://www.n.ethz.ch/student/adammo
From: <ron@do...> - 2000-11-12 03:08:54

Charles Bloom wrote:

> The "standard/traditional" way to get a frame from a vector is like this:
>
> MakeFrame(N)
> {
>     assert( N.IsNormalized() );
>     Vector U,V;
>
>     i = Smallest Absolute Component of N.
>
>     U = 0;
>     U[i] = 1.0;
>
>     V = U x N;
>     V.Normalize();
>     U = N x V;
>     U.Normalize();
>
>     return (U,V,N);
> }
>
> This works great if you don't really care about the "twist" around N.
> It also has the property that if you make two frames for slightly
> different N's, the resulting frames are *usually* epsilon apart;
> unfortunately, there's a singularity in the mapping which occurs as
> the smallest component of N changes identity.

Exactly analogous to the singularity that is already bothering him when his chosen world axis becomes perpendicular to his surface.
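Charles's pseudocode translates into runnable C++ along these lines. This is a sketch with a minimal hypothetical `Vec3` type; the axis choice mirrors his "smallest absolute component" step, and the singularity he mentions is still present when that smallest component changes identity:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 Normalize(const Vec3& v) {
    double len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Build an orthonormal frame (U, V, N) from a unit normal N by crossing
// N with the world axis it is least aligned with, then completing the
// basis with a second cross product.
void MakeFrame(const Vec3& N, Vec3& U, Vec3& V)
{
    // Pick the axis corresponding to N's smallest absolute component,
    // so the cross product below is as far from degenerate as possible.
    double ax = std::fabs(N.x), ay = std::fabs(N.y), az = std::fabs(N.z);
    Vec3 axis = { 0.0, 0.0, 0.0 };
    if (ax <= ay && ax <= az)  axis.x = 1.0;
    else if (ay <= az)         axis.y = 1.0;
    else                       axis.z = 1.0;

    V = Normalize(Cross(axis, N));
    U = Normalize(Cross(N, V));
}
```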
From: <ron@do...> - 2000-11-12 03:03:23

Adam Moravanszky wrote:

> Hi,
> I ran into a problem I had a few times before, and I can never seem to
> solve it in a way that feels 'right'.
>
> In this particular case I am in the middle of writing a custom software
> raytracer-like renderer to test some ideas for the upcoming
> hardware-shader age, without the limitations of real hardware or APIs.
>
> Anyway, I have a point P on the surface of an object being sampled, and
> I have a surface normal N at the point. We don't know anything else
> about the object (it may or may not be made of polygons) -- but we
> assume that the surface can locally be approximated by a plane L at P,
> to which N is a normal.

Already you are inconsistent. A polyhedral surface (made of polygons) has no well-defined surface normal along an edge between non-coplanar faces.

> To do various rendering-related activities, it would be convenient to
> have a local coordinate system centered at P, with N being one of the
> basis vectors, and two other vectors lying in the plane L. I would like
> to find two such basis vectors.
>
> Now, obviously, an infinite number of such orthogonal vector pairs
> exist in the plane -- the trouble is, I don't know how to decide on
> which two to pick -- the renderer doesn't care, it just wants two
> vectors.
>
> The way I solve this now is I literally pick a random point in the
> plane, and choose the vectors based on that.
>
> However, it would be nice if these basis vectors, when we move the
> point P along a continuous surface, also changed their directions
> continuously. This is not the case for a random pick, and it is also
> not very elegant.

If, as you say above, you know nothing about the surface except the information contained in some discrete sample points, then you have no way to know that it is "continuous", do you? And also, no way to determine if the basis set is continuous if it is defined only at discrete points. The concept "continuous" is vacuous in this context, no?
> I looked at how this is done in the RenderMan shading language, and
> there all objects somehow magically have a parameter space, to which
> this local frame is fitted.

Sure, if you have a parametrization, i.e. a coordinate patch, then you have a lot more information than just some discrete sample points -- enough info to be able to decide whether the surface, the approximating planes, or a basis set for the approximating planes are continuous. Now it makes sense to use the word "continuous".

And, of course, as is well known, the surface has a tangent plane -- the BEST approximating plane -- at each point at which the parametrization is differentiable. This is the analogue of the fact that the derivative of a function of a single variable gives you the slope of the tangent line to its graph. A surface is said to be "smooth" at every point where it has a well-defined tangent plane in this sense.

The partial derivatives of the parametrization with respect to the two parameters give you a basis for the tangent plane at that point, provided they are linearly independent. If the parametrization is continuously differentiable, with linearly independent partial derivatives, then you have a continuously varying basis vector set for the tangent planes. It might not be orthonormal, but so long as the two partial derivative vectors are linearly independent you can naturally construct a continuous orthonormal set from them by Gram-Schmidt.

Note that for many interesting simple surfaces, e.g., the sphere, you cannot have a globally continuous or differentiable parametrization, even though the surface is everywhere smooth. This is the fact that forces us to invent the concepts of "manifold" and "differentiable manifold".

Further, for the sphere, it is a fact that you cannot find EVEN ONE continuous unit tangent vector field on the entire sphere, let alone the continuous orthonormal pair of tangent vector fields that you seek -- a celebrated result of elementary differential geometry.
This does not mean that the tangent plane itself has singularities -- it is well defined at every point and varies smoothly from point to point. But there exists NO globally nonsingular continuously varying basis set for the tangent planes -- that's a fact. If there were, then we would not have to suffer the polar singularities of the (latitude, longitude) parameter patch (as well as every other possible parametrization).

> The trouble is, I don't know how I could make a polygonal object have
> a parameter space,

You can certainly locally parametrize a polyhedron, but not smoothly; i.e., there will necessarily be discontinuities of the tangent space on the edges between non-coplanar faces.

> So is there some better way?

Generally, having local parametrizations (coordinate patches) for a continuous surface is the best way to deal with it for most problems, including this one.

> I also tried things like projecting a world coordinate system axis on
> the normal and so, but then I end up having problems when the dot
> product happens to be zero.

Your use of the projection onto the approximating plane of a fixed world coordinate basis vector works locally, but has singularities where the world axis happens to be perpendicular to the surface. But no matter, because, in general, any other means would also be likely to have at least isolated singularities; you just have to work around them somehow. In general, YOU CANNOT EXPECT TO HAVE A GLOBALLY CONTINUOUS TANGENT PLANE BASIS SET. But you can in special cases -- for example, on the torus it is no problem, even trivial.

The determination of the exact conditions under which you can construct such basis set fields is part of the study of the differential topology of surfaces in 3-space, and the differential topology of n-manifolds in arbitrary dimensions -- great subjects to study both for their intrinsic beauty and their help in understanding problems such as yours.
From: Jim Offerman <j.offerman@in...> - 2000-11-12 00:21:13

Thanks Charles! Looks fast enough. Actually, my bounding boxes are also of the {min, max} variety internally, so this will work just fine. :)

Jim Offerman
Innovade

----- Original Message -----
From: "Charles Bloom" <cbloom@...>
To: <gdalgorithmslist@...>
Sent: Saturday, November 11, 2000 11:56 PM
Subject: Re: [Algorithms] Transforming AABBs...

> Hi Jim,
>
> this is the fastest way I know to do it. My bbox is of the {min, max}
> variety (faster to do CSG and building), instead of {center, extents},
> which is slightly more efficient to transform and collide against.
>
> // transform the bbox {min,max}
> // by the xform frXForm :
>
> // the min transformed :
> VECTOR3D vMinT;
> vMinT = frXForm * min;
>
> // the three edges transformed :
> // you can efficiently transform an X-only vector
> // by just getting the "X" column of the matrix
> VECTOR3D vx,vy,vz;
>
> frXForm.GetAxis(vx,X_AXIS); vx *= (max.x - min.x);
> frXForm.GetAxis(vy,Y_AXIS); vy *= (max.y - min.y);
> frXForm.GetAxis(vz,Z_AXIS); vz *= (max.z - min.z);
>
> // take the transformed min & axes and
> // find new extents
> min = max = vMinT;
> if ( vx.x < 0 ) min.x += vx.x; else max.x += vx.x;
> if ( vx.y < 0 ) min.y += vx.y; else max.y += vx.y;
> if ( vx.z < 0 ) min.z += vx.z; else max.z += vx.z;
> if ( vy.x < 0 ) min.x += vy.x; else max.x += vy.x;
> if ( vy.y < 0 ) min.y += vy.y; else max.y += vy.y;
> if ( vy.z < 0 ) min.z += vy.z; else max.z += vy.z;
> if ( vz.x < 0 ) min.x += vz.x; else max.x += vz.x;
> if ( vz.y < 0 ) min.y += vz.y; else max.y += vz.y;
> if ( vz.z < 0 ) min.z += vz.z; else max.z += vz.z;
>
> tada!
>
> --
> Charles Bloom http://www.cbloom.com
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
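For reference, the quoted {min, max} routine can be written as self-contained C++ like this. It is the same idea in an equivalent per-component form (start from the transformed translation, then add each matrix-column contribution to the new min or max depending on sign); the `Vec3`, `XForm`, and `TransformAABB` names are invented for the sketch:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Affine transform: p' = m * p + t, with m stored row-major.
struct XForm {
    double m[3][3];
    Vec3   t;
};

// Transform a {min, max} AABB by x and replace it with the AABB of the
// transformed box. Equivalent to the quoted method: for each output
// axis, each input axis contributes either its min- or max-scaled term,
// whichever is smaller/larger.
void TransformAABB(const XForm& x, Vec3& mn, Vec3& mx)
{
    const double mnIn[3] = { mn.x, mn.y, mn.z };
    const double mxIn[3] = { mx.x, mx.y, mx.z };
    const double tIn[3]  = { x.t.x, x.t.y, x.t.z };
    double outMin[3], outMax[3];

    for (int i = 0; i < 3; ++i) {
        outMin[i] = outMax[i] = tIn[i];
        for (int j = 0; j < 3; ++j) {
            double a = x.m[i][j] * mnIn[j];
            double b = x.m[i][j] * mxIn[j];
            if (a < b) { outMin[i] += a; outMax[i] += b; }
            else       { outMin[i] += b; outMax[i] += a; }
        }
    }
    mn = { outMin[0], outMin[1], outMin[2] };
    mx = { outMax[0], outMax[1], outMax[2] };
}
```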
From: Jason Zisk <ziskj@n...> - 2000-11-12 00:14:45

A related question to this one: if you transform the AABB, won't that give a new AABB that's not as close a fit? Wouldn't transforming the AABB be horrible for things like octrees, where the box doesn't fit around a single piece of geometry but rather defines a piece of space?

I am having a weird problem. We have octrees that are not in world space, and I want to test them against a frustum for culling. I was thinking about transforming the nodes to world space but realized this would cause the above-mentioned box bloat. Would it be better to transform the frustum planes to local space and then do the test? Or maybe the speed win of transforming the AABB outweighs the extra bit of size. Or maybe just use the OBB that results from transforming the AABB for culling. So many options. Anyway, I'd think that transforming the frustum planes would be the best thing to do in this case (please prove me wrong). Is there a fast way to transform the planes into local space?

Thanks,

Jason Zisk
nFusion Interactive LLC

----- Original Message -----
From: Jim Offerman <j.offerman@...>
To: Algorithms List <gdalgorithmslist@...>
Sent: Saturday, November 11, 2000 4:53 PM
Subject: [Algorithms] Transforming AABBs...

> Hey,
>
> I'm having a hard time figuring out the fastest way to transform an
> AABB from local to world space, using the center + extents method. I
> get the center part, that one is easy ;)
>
> But how do I construct an AABB, using only the (transformed) extents,
> which is guaranteed to enclose the entire original (local space) AABB?
>
> thanks,
>
> Jim Offerman
> Innovade
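On the last question: for an affine local-to-world transform p_world = R p + t, substituting into a world-space plane equation n . p_world + d >= 0 gives (R^T n) . p + (n . t + d) >= 0, so each frustum plane can be brought into the octree's local space once and tested there directly. A hedged sketch with hypothetical types (note that if R contains non-uniform scale, the local normal is no longer unit length; renormalize it, scaling d by the same factor, if signed distances matter):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Half-space test convention: a point p is "inside" when n.p + d >= 0.
struct Plane { Vec3 n; double d; };

// Local-to-world transform: p_world = R * p_local + t, R row-major.
struct XForm { double R[3][3]; Vec3 t; };

// Bring a world-space plane into local space: local normal is R^T n
// (transpose indexing below), local offset is n.t + d.
Plane WorldPlaneToLocal(const Plane& w, const XForm& x)
{
    Plane l;
    l.n.x = x.R[0][0] * w.n.x + x.R[1][0] * w.n.y + x.R[2][0] * w.n.z;
    l.n.y = x.R[0][1] * w.n.x + x.R[1][1] * w.n.y + x.R[2][1] * w.n.z;
    l.n.z = x.R[0][2] * w.n.x + x.R[1][2] * w.n.y + x.R[2][2] * w.n.z;
    l.d   = Dot(w.n, x.t) + w.d;
    return l;
}
```

A local point then tests against the transformed plane exactly as its world-space image would against the original plane, so the octree boxes never need to leave local space.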