
From: Ron Levine <ron@do...> - 2000-11-13 23:33:39

Blake Senftner wrote:

> I think you might be looking for frenet frames,
> which is a technique that maintains a reference
> frame for building geometry around 3D curves that
> avoids the twisting.

Well, Adam's query did not seem to indicate he was looking to construct the frame along any particular smooth curve - that would have been considerably more information than he gave about the points at which he wanted to construct the frames, namely, just points sampled from a smooth surface with a defined approximating plane at each such point.

Certainly, you need a smooth curve, actually a C2 curve, to have the Frenet frames well-defined. And the Frenet frame along the curve does not "avoid the twisting"; rather, the twisting of the Frenet frame exactly fits the twisting (torsion) of the space curve. If the curve has zero torsion, then the Frenet frame does not twist. But, I believe, a curve whose torsion is well-defined and zero everywhere must be planar.

> At least that's my weak understanding. It's one
> of those techniques that I'm aware of but have not
> invested the time to learn yet. A former coworker
> swears by them for his plant growing software.

The place to learn about Frenet frames and the Frenet-Serret theory of space curves is in Chapter One of essentially any book on elementary differential geometry. The Frenet-Serret theorem gives an elegant expression of the derivatives along the curve of the vectors of the Frenet frame in terms of their components with respect to the Frenet frame. The powerful result is that for any smooth (C2) curve there exist two functions of a single variable, called the curvature and the torsion, which completely characterize the intrinsic geometry of the curve. In other words, if two space curves have the same curvature and torsion functions, then there is an isometry of 3-space that maps one curve onto the other one. Frenet frames are useful for constructing tubes or ribbons around arbitrarily twisting smooth space curves.

See my website http://www.dorianresearch.com for an amusing example, including sample code for computing the Frenet frame.
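[Archive note: Ron's description is concrete enough to sketch. Below is a minimal finite-difference Frenet frame for a C2 parametric curve - an illustration of the construction he describes, not the sample code from his site. The function name and the finite-difference step are invented for this sketch.]

```python
import math

def frenet_frame(r, t, h=1e-4):
    """Finite-difference Frenet frame of a C2 space curve r(t) -> (x, y, z).

    T = r'/|r'|,  B = (r' x r'')/|r' x r''|,  N = B x T.
    The frame is undefined where the curvature vanishes (r' x r'' ~ 0)."""
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    norm = lambda a: math.sqrt(sum(x * x for x in a))
    unit = lambda a: tuple(x / norm(a) for x in a)

    # central differences for r'(t) and r''(t)
    d1 = tuple((p - m) / (2 * h) for p, m in zip(r(t + h), r(t - h)))
    d2 = tuple((p - 2 * c + m) / (h * h)
               for p, c, m in zip(r(t + h), r(t), r(t - h)))

    tangent = unit(d1)
    binormal = unit(cross(d1, d2))
    normal = cross(binormal, tangent)   # already unit: B and T are orthonormal
    return tangent, normal, binormal
```

For the helix t -> (cos t, sin t, t/2) this gives N = (-1, 0, 0) at t = 0 (the normal points at the axis), and as t advances the frame twists at exactly the rate of the helix's torsion, as Ron says.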
From: Mark Wayland <mwayland@to...> - 2000-11-13 23:06:03

Hmm, all very good questions!

> Two questions:
>
> (a) would deaf people "get" it? Those who used to hear and are now deaf
> probably will. But it's a bit like trying to explain to the blind what
> colour is. I guess you can say it's like texture, but different. You'd have to
> ask a deaf person I guess.

I guess it's just another system to get used to - it would also be useful for people who for some reason or another don't or can't play with sound on.

> (b) is it good enough to allow them to play on an equal footing with hearing
> people with headphones? It should certainly even the field out a bit, but I
> can't help feeling it's like allowing people to play FPS games on the
> keyboard instead of a mouse - it sounds good in theory, but in practice it's
> fairly pointless - in multiplayer, you'd get obliterated. Of course, it may
> work very well indeed - you'd have to test it.

True.

Cheers,
Mark
From: Pallister, Kim <kim.pallister@in...> - 2000-11-13 21:00:22

Some guys in the Intel Architecture Labs have done a library for toon rendering that includes both 'toon' and 'pencil sketch' styles. You can download a demo (well worth the quick download) here:

http://developer.intel.com/ial/3dsoftware/demo.htm

KP

> -----Original Message-----
> From: Jose Marin [mailto:jose_marin2@...]
> Sent: Wednesday, November 08, 2000 9:01 AM
> To: gdalgorithmslist@...
> Subject: [Algorithms] Cartoon rendering
>
> Hi!
>
> I'm looking for some articles and/or samples about "Cartoon
> Rendering", or non-realistic rendering.
>
> Any help?
>
> Thanks in advance!
>
> Jose

_______________________________________________
GDAlgorithms-list mailing list
GDAlgorithmslist@...
http://lists.sourceforge.net/mailman/listinfo/gdalgorithmslist
From: Corrinne Yu <corrinney@ho...> - 2000-11-13 19:55:36

I am working on AI right now, and came across something that, if there are solutions, would make programming them quite fun and interesting.

To keep AI cycles predictable, I have a steady-pace "raytrace" loop, where I "fill" a regularly-temporally-spaced history of all the characters, NPC's, objects (i.e., dynamic) that are visible within the 90 degree AI view cone of each AI at that time. Currently, how this is done is I "walk my nodes" from each AI eye center point, and store away NPC's and objects into each AI's queue. I do a "new walk" per AI, per "filling loop call."

Given:

1. while still inside the same global filling loop call, the "world", including NPC's, characters, and dynamic objects, stays "the same" (until the call is finished)
2. _but_, the starting position of walking my nodes is different at each call

Is there a way to further optimize the "filling object loop" call for my AI's, to make it even faster to fill each AI's visible object list? It sounds like there should be some ways, since while inside the loop even the dynamic objects are stationary.

Is there a way to "fill my group of AI's visible object lists" "globally" - kind of like I do one more time-consuming walk, but instead of starting a new walk per AI (inside each loop), I dump off objects into these AI bins globally, and therefore overall spend less time filling all my AI's visibility requirements?

The "walks" are different depending on whether the AI is indoors or outdoors.

Thanks for any information on how to share data across walks of the same tree / how to optimize walks of the same tree but with a different "center" position of the cone.

Corrinne Yu
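[Archive note: since the world is frozen for the duration of the filling call, one global pass can replace the per-AI walks: traverse the object set once and drop each object into every AI view cone it falls inside. A toy sketch of that inversion - the flat object list and dict layout are illustrative, not Corrinne's engine:]

```python
import math

def fill_visible_bins(ais, objects, half_angle_deg=45.0):
    """Single pass over the frozen set of dynamic objects.

    Instead of one world walk per AI, every object is tested once against
    each AI's 90-degree view cone (half-angle 45 degrees) and appended to
    that AI's bin.  'ais' is a list of dicts with an 'eye' position and a
    unit 'forward' vector; 'objects' is a flat list of (name, position)."""
    cos_half = math.cos(math.radians(half_angle_deg))
    bins = [[] for _ in ais]                      # bins[i] belongs to ais[i]
    for name, pos in objects:
        for i, ai in enumerate(ais):
            to_obj = tuple(p - e for p, e in zip(pos, ai['eye']))
            dist = math.sqrt(sum(c * c for c in to_obj))
            if dist == 0.0:
                continue                          # object at the eye point
            cos_angle = sum(t * f for t, f in zip(to_obj, ai['forward'])) / dist
            if cos_angle >= cos_half:
                bins[i].append(name)
    return bins
```

In a real engine the inner loop would of course be pruned by the spatial tree (bin whole subtrees of AIs or objects at once), but the cone test itself is just this normalized dot product against cos(45 degrees).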
From: Peter-Pike Sloan <ppsloan@mi...> - 2000-11-13 18:52:56

Here's the reply Steve sent...

Peter-Pike Sloan

(The general problem of mapping from some color space to the gamut of a particular device is called tone mapping, and there is a ton of graphics literature on the subject - I would search for tone mapping in the SIGGRAPH bibliography if you are interested. There have been a couple of recent papers on visual adaptation: one at last year's Rendering Workshop by Fredo Durand, http://graphics.lcs.mit.edu/~fredo/PUBLI/EGWR2000/index.htm, and one at Eurographics: A. Scheel, M. Stamminger, H.-P. Seidel, "Tone Reproduction for Interactive Walkthroughs", in: Computer Graphics Forum (Proc. EUROGRAPHICS 2000), to appear (couldn't find a preprint for that though...))

-----Original Message-----
From: Steve Marschner [mailto:srm@...]
Sent: Monday, November 13, 2000 10:12 AM
To: amoravanszky@...
Cc: Peter-Pike Sloan; westin@...
Subject: Re: [Algorithms] BRDFs and Lafortune's cosine lobes

I think the most reliable way to check your BRDF implementation is by making a plot of the values and comparing with the plots in the paper (the blue paint would be a good candidate).

It's true that the color matrix can result in colors with negative components. This means that those colors are outside the range of colors that can be displayed by the monitor that matrix was computed for. The simplest way to deal with this is to clip negative values to zero. (This shouldn't come up too often with the data measured using this camera, since the reason the matrix is so extreme is that the raw colors from this camera are very unsaturated.)

If a Phong exponent of 8 produces an extremely sharp specular peak, it sounds like something might be wrong... usually it should take a much higher value to make a surface look very shiny.

Steve

Peter-Pike Sloan wrote:
>
> Hi,
>
> I'm cc'ing Steve Marschner, he probably can point you at some images if you
> want.
> It's fairly common to factor the BRDF into a diffuse and a specular
> part - that's probably what you were seeing with what Westin did. You
> should still be able to see a difference when using a point light source -
> look at the separable BRDF work that Jan Kautz did for example (demos on
> NVidia's web page.)
>
> Peter-Pike Sloan
>
> (You also want to look at the new cloth modeling book by AK Peters - there
> is a very good rendering chapter by the late Alain Fournier that also uses a
> multi-lobe approximation to BRDFs; there is a tech report on the web at UBC,
> but it is from a paper that was in the 1992 GI symposium on local
> illumination and I've never seen one of those before:
>
> http://www.cs.ubc.ca/labs/imager/tr/fournier.1992c.html
>
> )
>
> -----Original Message-----
> From: Adam Moravanszky [mailto:amoravanszky@...]
> Sent: Sunday, November 12, 2000 11:41 AM
> To: gdalgorithmslist@...
> Subject: [Algorithms] BRDFs and Lafortune's cosine lobes
>
> Hi,
>
> the good news is that the raytracer works. I can render objects with Phong
> shading perfectly.
>
> I also wrote a shader that shades using arbitrary BRDFs, as approximated by
> the generalized cosine lobe model, described in the SIGGRAPH 97 paper:
>
> "Non-Linear Approximation of Reflectance Functions", by Lafortune, Foo,
> Torrance, and Greenberg.
>
> This is a long paper that describes something very simple. A BRDF is
> approximated by a finite sum of exponential basis functions, of the form:
>
> BRDF(u,v) = Sigma [i=1..m] (Cxi u.x v.x + Cyi u.y v.y + Czi u.z v.z)^ni
>
> Sigma denotes the sum sign; here we are summing m basis functions, each of
> which has four scalar parameters: Cx, Cy, Cz and n.
> u and v are the usual vector-valued parameters to the BRDF, denoting the
> incident and exitant ray (for simple rendering, eye and light vector). Note
> that in practice, one has three sets of these parameters, one for each color
> channel.
> The run-of-the-mill BRDF can usually be approximated by m = two or three
> basis functions, called lobes.
>
> Now for the questions. First, an easy one. I found sets of parameters for
> this model in several places, one being part of Marschner et al.'s paper
> "Image-Based BRDF Measurement Including Human Skin". He gives parameters
> for a white and a black guy (an interesting aside is that he approximates
> the black person's skin with only two lobes, while the white person gets
> three. Does this imply that in the near future it will be faster to render
> black characters in hardware accelerated computer games? :) )
> Along with the data, he gives the following matrix to convert the resulting
> colors from the RGB color space of his digital camera, to monitor RGB:
>
> 1.063302, 0.382044, 0.445346
> 0.298125, 1.667665, 0.369540
> 1.322302, 0.446321, 2.768624
>
> What confuses me here is that it transforms a LOT of colors to have negative
> component values. How am I to deal with that? Do I add the largest
> negative value in the image to all values, and then normalize the image to
> [0,1]??
>
> The second question has to do with the data itself. The exponents (the
> ni's) are really huge. These behave something like the specular exponents.
> In the paper itself, data is given for matte blue paint. The exponents for
> the three lobes are:
>
> 18.6 2.58 63.8
>
> Unfortunately, in my implementation, an exponent greater than about 8
> completely forces the lobe's contribution to zero, over the entire surface
> of a sphere. Note that the formula is the dot product of u and v, if you
> set Cxyz to 1. Thus a lobe with such a high exponent only contributes a
> tiny dot at the mirror angle??
>
> The other problem is that I am not really sure if I'm doing things right.
> My results look plausible, but I have no idea. I did get some tiny
> off-specular reflection in the case of the blue paint like the paper says I
> should, so I'm not completely off.
> Did anyone implement this stuff and have
> some images I can compare against? I take it that I will be working on this
> for some time, and I wouldn't mind if I could discuss things with someone at
> length.
>
> I did find a RenderMan shader written by a Stephen H. Westin that does the
> same thing I want. What I am confused about is how he puts the BRDF data
> into the shading pipeline. I am no RenderMan expert, but I think he does:
>
> pixel_color = (material.ambient + material.diffuse) * color_texture_map
>
> for each lobe:
>
> pixel_color += BRDF(u,v) * light_color * (u . N)
>
> where . is the dot product, N the surface normal, and u the to_light vector.
> What bothers me about this is that he does not multiply the diffuse
> component by (u . N), but he does the BRDF, which already has this info in
> it. Also, does the BRDF not already account for any and all diffuse? So in
> short, I'm wondering how a BRDF is supposed to be evaluated in a shader.
>
> Also, every paper I saw evaluates BRDFs via some type of radiosity - most
> certainly not point lights like me. Is this even supposed to look right
> when I use a point source? There is one example I found in Debevec's
> "Acquiring the reflectance field of a human face" where he renders a face lit
> by three monochromatic, effectively point sources. There the result is
> about as unspectacular as what I'm getting now.
>
> To summarize, I am a little too confused now to ask good questions, but
> considering I wrote all this stuff from scratch since this morning, I can't
> say I'm disappointed. :)
>
> --
> Adam Moravanszky
> http://www.n.ethz.ch/student/adammo
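[Archive note: Adam's formula is small enough to sanity-check in isolation. A single-channel evaluator is sketched below, under one common convention that the quoted paper excerpt does not state: a non-positive lobe base is clamped to zero. The function name and tuple layout are invented for illustration.]

```python
def lafortune_brdf(u, v, lobes):
    """Generalized cosine-lobe model from the thread:

        BRDF(u, v) = sum_i (Cx_i*u.x*v.x + Cy_i*u.y*v.y + Cz_i*u.z*v.z)^n_i

    u, v are unit vectors in local surface space (z = normal); 'lobes' is a
    list of (Cx, Cy, Cz, n) tuples for one color channel.  A non-positive
    base is skipped so large exponents cannot produce complex or sign-flipped
    contributions."""
    total = 0.0
    for cx, cy, cz, n in lobes:
        base = cx * u[0] * v[0] + cy * u[1] * v[1] + cz * u[2] * v[2]
        if base > 0.0:
            total += base ** n
    return total
```

With Cx = Cy = -1, Cz = 1 the base reduces to the dot product of v with u mirrored about the normal, i.e. a Phong-style lobe: for u = (0.6, 0, 0.8) the peak sits exactly at the mirror direction v = (-0.6, 0, 0.8). This also illustrates Adam's observation: a large n does not zero the lobe everywhere, it just shrinks it to a tiny dot around that mirror direction.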
From: Blake Senftner <bsenftner@ro...> - 2000-11-13 17:58:36

I think you might be looking for frenet frames, which is a technique that maintains a reference frame for building geometry around 3D curves that avoids the twisting. At least that's my weak understanding. It's one of those techniques that I'm aware of but have not invested the time to learn yet. A former coworker swears by them for his plant growing software.

Here are some links from Google:

http://www.cs.indiana.edu/pub/techreports/TR407.html
http://www.ugrad.cs.ubc.ca/spider/cs414/frenetframes.html
http://www.math.umd.edu/~jmr/241/curves2.htm
http://www.cs.waikato.ac.nz/~stevef/Museum/390/cge/frames/frenet.html

Blake
From: Davide Pirola <dado@pr...> - 2000-11-13 16:20:19

----- Original Message -----
From: Lionel Fumery <lifml@...>
To: <gdalgorithmslist@...>
Sent: Monday, November 13, 2000 2:58 PM
Subject: Re: [Algorithms] Bounding volumes for physic engine

> > Well, the stage beyond OBB Tree vs OBB Tree in our physics / collisions
> > code
> > is OBB vs triangles, where the OBB is used for the larger of the colliding
> > entities. I'm not sure this is really what you're looking for, though.
>
> OBBs are fine, but we lose some data, like the normal to the surface needed
> for the reaction. On the other hand, testing at the triangle level is too
> expensive, even limited to a leaf of the OBB-Tree. Maybe using simpler
> models for the physics could be fine?

I'm using a simple mesh as the "physics volume" (~12 vertices), and this works fine. It helps to find better contact forces too...

Davide Pirola
http://www.prograph.it
http://www.protonic.net
From: Giovanni Bajo <bagio@pr...> - 2000-11-13 16:05:06

And... ehm, we did cartoon rendering too, and even before Jet Set Radio <g> :)

http://www.protonic.net/English/tsunami/tsunami.htm

About the original post: I like very much the cartoon rendering via vertex shaders that NVIDIA has developed. I think that it will be released on their site in a few days. It is very effective and fast.

--
Giovanni Bajo
Lead Programmer
Protonic Interactive http://www.protonic.net
a brand of Prograph Research S.r.l. http://www.prograph.it

> -----Original Message-----
> From: gdalgorithmslistadmin@...
> [mailto:gdalgorithmslistadmin@...] On Behalf Of Tom Forsyth
> Sent: Monday, November 13, 2000 12:26 PM
> To: gdalgorithmslist@...
> Subject: RE: [Algorithms] Cartoon rendering
>
> Same game, just called different things in different territories.
> Don't know why, but it's common enough - Megadrive/Genesis, etc.
>
> Tom Forsyth - purely hypothetical Muckyfoot bloke.
>
> This email is the product of your deranged imagination,
> and does not in any way imply existence of the author.
>
> > -----Original Message-----
> > From: Daniel Renkel [mailto:DanielRenkel@...]
> > Sent: 11 November 2000 15:35
> > To: gdalgorithmslist@...
> > Subject: Re: [Algorithms] Cartoon rendering
> >
> > sam,
> >
> > do you mean Jet Grind Radio ?
> >
> > i couldn't find JSetR,
> > Daniel "SirLeto" Renkel [D.Renkel@...]
> > technical design director - creactivity and technowhow
> > Future Interactive [http://www.FutureInt.de]
> >
> > > ----- Original Message -----
> > > From: "Sam McGrath" <sammy@...>
> > > Have you seen the new game Jet Set Radio for the Dreamcast?
> > > Your mind will be changed!
> > >
> > > Sam McGrath
From: Lionel Fumery <lifml@mi...> - 2000-11-13 15:21:38

> Things get a lot more simple if you use convex hulls instead of BSPs.

Could you give me more details? Do you test the convex hulls triangle by triangle, or do you push the triangles of CH 1 into the second BSP, or do you work with BSPs at the convex hull level???

Lionel.
From: Neal Tringham <neal@ps...> - 2000-11-13 14:21:05

From: Lionel Fumery <lifml@...>
> > Well, the stage beyond OBB Tree vs OBB Tree in our physics / collisions
> > code
> > is OBB vs triangles, where the OBB is used for the larger of the colliding
> > entities. I'm not sure this is really what you're looking for, though.
>
> OBBs are fine, but we lose some data, like the normal to the surface needed
> for the reaction. On the other hand, testing at the triangle level is too
> expensive, even limited to a leaf of the OBB-Tree. Maybe using simpler
> models for the physics could be fine?

I believe it's possible to extract the normal to the OBB face if that's good enough for what you're doing. If, on the other hand, you need the normal to the actual surface, then I suspect you're going to have to collide against some representation using triangles, whether it's a BSP or whatever...

Neal Tringham (Sick Puppies / Empire Interactive)
neal@...
neal@...
From: Stephen J Baker <sjbaker@li...> - 2000-11-13 14:09:24

On Fri, 10 Nov 2000, Jamie Fowlston wrote:

> Also rather unfair on deaf folk.

But *more* fair on the blind... :)

--
Steve Baker                      (817)619-2657 (Vox/VoxMail)
L3Com/Link Simulation & Training (817)619-2466 (Fax)
Work: sjbaker@...   http://www.link.com
Home: sjbaker1@...  http://web2.airmail.net/sjbaker1
From: Lionel Fumery <lifml@mi...> - 2000-11-13 13:57:11

> Well, the stage beyond OBB Tree vs OBB Tree in our physics / collisions code
> is OBB vs triangles, where the OBB is used for the larger of the colliding
> entities. I'm not sure this is really what you're looking for, though.

OBBs are fine, but we lose some data, like the normal to the surface needed for the reaction. On the other hand, testing at the triangle level is too expensive, even limited to a leaf of the OBB-Tree. Maybe using simpler models for the physics could be fine?

Lionel.
From: Angel Popov <jumpo@bi...> - 2000-11-13 13:42:17

> Hello,
>
> At this time, my 3D engine uses the following bounding volumes:
> - AABB and AA-BSP-Tree for global clipping
> - OBB Tree for collision.
>
> Now, I would like to get more precise collision information between 2
> 3D objects, in order to make strong and precise physics algorithms.
>
> Which kind of bounding volumes can I use? Could anybody give me some
> starting point?

You can push the polygons from the first object into the second object's BSP; if any of these polys end up inside an "IN" BSP leaf, we've got a collision.

Getting precise collision information (the exact shape where the two meshes intersect and the normals of the 2 objects at each vert of the shape) is a bit more tricky. When you split a poly, get the new edge introduced by the splitting and store the node and polygon normals in it. Then push this edge first to BSPNode->Back and collect the edge pieces as they reach a leaf, setting a flag to indicate whether the piece has reached an "IN" or an "OUT" leaf. Then push the collected pieces to BSPNode->Front. Now if an edge with the "IN" flag ends in an "OUT" leaf, or an "OUT" edge ends in an "IN" leaf, store that edge in the intersection shape; otherwise, delete it.

Things get a lot more simple if you use convex hulls instead of BSPs.
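[Archive note: the first step Angel describes - classifying one object's geometry against the other object's solid-leaf BSP - can be sketched in a few lines. The dict-based node layout and the 'IN'/'OUT' leaf strings are invented for illustration, not his engine's types.]

```python
def point_in_solid(node, p):
    """Walk a solid-leaf BSP.  An interior node holds a plane as (normal n,
    offset d), meaning the plane n.p + d = 0, plus 'front'/'back' children;
    a leaf is just the string 'IN' (solid) or 'OUT' (empty)."""
    while not isinstance(node, str):
        n, d = node['plane']
        side = n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d
        node = node['front'] if side >= 0.0 else node['back']
    return node == 'IN'

def meshes_collide(tris, bsp):
    """Coarse version of the test from the post: push the first object's
    polygons through the second object's BSP and report a collision if any
    vertex lands in an 'IN' leaf.  (The full method also clips the polys at
    each plane, so a triangle straddling the solid with every vertex outside
    is missed by this vertex-only shortcut.)"""
    return any(point_in_solid(bsp, v) for tri in tris for v in tri)
```

The edge-collection scheme in the post builds on exactly this walk: instead of stopping at the first leaf, the clipped edge pieces are tagged with the leaf they reached and pushed down the other side of each splitting node.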
From: gl <gl@nt...> - 2000-11-13 13:20:06

I'd try something similar, but more like the Elite radar display. To make it fair, subtle sounds would have to be barely visible, so that you couldn't spot an enemy behind you unless he was close. You'd have to tweak it until it gave a similar warning to the equivalent sound.

I'm pretty sure that it would be pretty intuitive to deaf people. You are in effect directly representing vibration of different frequencies/intensities - a universal concept. Whether it would put them on an equal footing; I'm pretty sure it _could_, but it would have to be executed flawlessly (and few things are).
--
gl

----- Original Message -----
From: "Tom Forsyth" <tomf@...>
To: <gdalgorithmslist@...>
Sent: Monday, November 13, 2000 11:11 AM
Subject: RE: [Algorithms] Sound support for the deaf

> Quick thought - a bar across the bottom of the screen - sort of like a fancy
> stereo's equaliser display. Sound position (left-right pan) is indicated by
> horizontal position (middle = straight ahead/behind). Sound intensity is
> indicated by bigger "blobs" appearing on the line. Sound frequency is
> indicated by colour frequency - so footsteps are a low red, people talking
> is a multicoloured higher-pitched changing green-blue, gunfire is literally
> white noise.
>
> Two questions:
>
> (a) would deaf people "get" it? Those who used to hear and are now deaf
> probably will. But it's a bit like trying to explain to the blind what
> colour is. I guess you can say it's like texture, but different. You'd have to
> ask a deaf person I guess.
>
> (b) is it good enough to allow them to play on an equal footing with hearing
> people with headphones? It should certainly even the field out a bit, but I
> can't help feeling it's like allowing people to play FPS games on the
> keyboard instead of a mouse - it sounds good in theory, but in practice it's
> fairly pointless - in multiplayer, you'd get obliterated. Of course, it may
> work very well indeed - you'd have to test it.
>
> Neither of the above are showstoppers of course, and if it doesn't take long
> to do, and actually enhances the gameplay for some people - go for it.
>
> Tom Forsyth - purely hypothetical Muckyfoot bloke.
>
> This email is the product of your deranged imagination,
> and does not in any way imply existence of the author.
>
> > -----Original Message-----
> > From: Mark Wayland [mailto:mwayland@...]
> > Sent: 11 November 2000 00:28
> > To: gdalgorithmslist@...
> > Subject: [Algorithms] Sound support for the deaf
> >
> > On a similar topic mentioned in the isshadow thread, has anyone
> > thought of a sound interface for the deaf that provides visual
> > clues instead of actual sound? Just a thought anyway...
> >
> > Mark
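[Archive note: the mapping quoted above is concrete enough to prototype - pan to horizontal position, intensity to blob size, frequency to colour. A toy version follows; the screen size, radius scale, and the log-frequency colour ramp are all invented for illustration.]

```python
import math

def sound_to_blob(pan, intensity, freq_hz, screen_w=640, bar_y=460):
    """Map one sound event onto a bottom-of-screen 'equaliser bar':
    pan in [-1, 1] -> x position along the bar, intensity in [0, 1] ->
    blob radius, frequency -> colour ramp from red (20 Hz) to blue
    (~20 kHz), on a log scale since pitch perception is logarithmic."""
    x = int((pan + 1.0) * 0.5 * (screen_w - 1))
    radius = max(1, int(intensity * 20.0))
    t = min(1.0, max(0.0, math.log2(max(freq_hz, 20.0) / 20.0) / 10.0))
    color = (int(255 * (1.0 - t)), 0, int(255 * t))   # low = red, high = blue
    return x, bar_y, radius, color
```

So a loud low footstep behind-left draws a big red blob on the left of the bar; quiet sounds, as gl suggests, stay near the 1-pixel minimum and are barely visible.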
From: Neal Tringham <neal@ps...> - 2000-11-13 12:07:24

From: Lionel Fumery <lifml@...>
> At this time, my 3D engine uses the following bounding volumes:
> - AABB and AA-BSP-Tree for global clipping
> - OBB Tree for collision.
>
> Now, I would like to get more precise collision information between 2
> 3D objects, in order to make strong and precise physics algorithms.
>
> Which kind of bounding volumes can I use? Could anybody give me some
> starting point?

Well, the stage beyond OBB Tree vs OBB Tree in our physics / collisions code is OBB vs triangles, where the OBB is used for the larger of the colliding entities. I'm not sure this is really what you're looking for, though.

Neal Tringham (Sick Puppies / Empire Interactive)
neal@...
neal@...
From: Tom Forsyth <tomf@mu...> - 2000-11-13 11:45:25

> From: Adam Moravanszky [mailto:amoravanszky@...]
>
> Hi,

[snip]

> Along with the data, he gives the following matrix to convert the resulting
> colors from the RGB color space of his digital camera, to monitor RGB:
>
> 1.063302, 0.382044, 0.445346
> 0.298125, 1.667665, 0.369540
> 1.322302, 0.446321, 2.768624
>
> What confuses me here is that it transforms a LOT of colors to have negative
> component values. How am I to deal with that? Do I add the largest
> negative value in the image to all values, and then normalize the image to
> [0,1]??

I know little about BRDFs, but this question is a simple one of changing colour spaces. The answer is yes - you allow numbers to go -ve, sum as many as you can, and only at the end do you clamp to [0,1].

The problem is that a monitor can't go below or above a certain amount of colour in various colourspaces, so it may not be able to represent some components of the scene correctly. This you just have to deal with. However, it may be able to represent the total scene (or bits of the total scene) correctly, since it may fall within its gamut. But if you have clamped too early, you will get incorrect results. The trick is to clamp as late as you can while retaining performance.

I think the easiest example is to take an object of brightness 2.0, and one of brightness 4.0 (where the monitor can only display up to 1.0). Normally, both would get maxed to 1.0 and look the same. However, now the viewer puts on a welding mask that quarters the brightness. If you clamp too early, both lights are now displayed at 0.25, and so (a) don't look very bright and (b) still look the same. Whereas doing the full calculation and clamping at the end allows you to see that they are indeed different brightnesses, and brighter than most of the rest of the scene.

For the same reason, ideally we would model with a full spectrum, not just three samples (RGB or any other three-component colourspace).
It's the old sodium-lamp-shining-on-a-red-car thing (the car should be black; with an RGB colourspace it's red). But again, it's a question of performance vs quality.

Tom Forsyth - purely hypothetical Muckyfoot bloke.

This email is the product of your deranged imagination,
and does not in any way imply existence of the author.

> The second question has to do with the data itself. The exponents (the
> ni's) are really huge. These behave something like the specular exponents.
> In the paper itself, data is given for matte blue paint. The exponents for
> the three lobes are:
>
> 18.6 2.58 63.8
>
> Unfortunately, in my implementation, an exponent greater than about 8
> completely forces the lobe's contribution to zero, over the entire surface
> of a sphere. Note that the formula is the dot product of u and v, if you
> set Cxyz to 1. Thus a lobe with such a high exponent only contributes a
> tiny dot at the mirror angle??
>
> The other problem is that I am not really sure if I'm doing things right.
> My results look plausible, but I have no idea. I did get some tiny
> off-specular reflection in the case of the blue paint like the paper says I
> should, so I'm not completely off. Did anyone implement this stuff and have
> some images I can compare against? I take it that I will be working on this
> for some time, and I wouldn't mind if I could discuss things with someone at
> length.
>
> I did find a RenderMan shader written by a Stephen H. Westin that does the
> same thing I want. What I am confused about is how he puts the BRDF data
> into the shading pipeline. I am no RenderMan expert, but I think he does:
>
> pixel_color = (material.ambient + material.diffuse) * color_texture_map
>
> for each lobe:
>
> pixel_color += BRDF(u,v) * light_color * (u . N)
>
> where . is the dot product, N the surface normal, and u the to_light vector.
> What bothers me about this is that he does not multiply the diffuse
> component by (u . N), but he does the BRDF, which already has this info in
> it. Also, does the BRDF not already account for any and all diffuse? So in
> short, I'm wondering how a BRDF is supposed to be evaluated in a shader.
>
> Also, every paper I saw evaluates BRDFs via some type of radiosity - most
> certainly not point lights like me. Is this even supposed to look right
> when I use a point source? There is one example I found in Debevec's
> "Acquiring the reflectance field of a human face" where he renders a face lit
> by three monochromatic, effectively point sources. There the result is
> about as unspectacular as what I'm getting now.
>
> To summarize, I am a little too confused now to ask good questions, but
> considering I wrote all this stuff from scratch since this morning, I can't
> say I'm disappointed. :)
>
> --
> Adam Moravanszky
> http://www.n.ethz.ch/student/adammo
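[Archive note: Tom's welding-mask example is easy to verify numerically. The two functions below differ only in where the clamp to [0,1] happens; the 0.25 mask factor is his, the function names are invented for illustration.]

```python
def shown_clamp_late(brightness, mask=0.25):
    """Carry the out-of-range value through the whole calculation and
    clamp only at display time - the recommended order."""
    return min(1.0, max(0.0, brightness * mask))

def shown_clamp_early(brightness, mask=0.25):
    """Clamp to the displayable range first, then apply the mask - the
    too-early clamp that loses the difference between the two lights."""
    return min(1.0, max(0.0, brightness)) * mask
```

With the late clamp the 2.0 and 4.0 lights display as 0.5 and 1.0 through the mask, so they remain distinguishable; with the early clamp both collapse to 0.25.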
From: Tom Forsyth <tomf@mu...> - 2000-11-13 11:28:06

Same game, just called different things in different territories. Don't know why, but it's common enough - Megadrive/Genesis, etc.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Daniel Renkel [mailto:DanielRenkel@...]
> Sent: 11 November 2000 15:35
> To: gdalgorithmslist@...
> Subject: Re: [Algorithms] Cartoon rendering
>
> sam,
>
> do you mean Jet Grind Radio?
> i couldn't find JSetR,
>
> Daniel "SirLeto" Renkel [D.Renkel@...]
> technical design director - creactivity and technowhow
> Future Interactive [http://www.FutureInt.de]
>
> > From: "Sam McGrath" <sammy@...>
> > Have you seen the new game Jet Set Radio for the Dreamcast?
> > Your mind will be changed!
> >
> > Sam McGrath
From: Tom Forsyth <tomf@mu...> - 2000-11-13 11:14:01

Quick thought - a bar across the bottom of the screen, sort of like a fancy stereo's equaliser display. Sound position (left-right pan) is indicated by horizontal position (middle = straight ahead/behind). Sound intensity is indicated by bigger "blobs" appearing on the line. Sound frequency is indicated by colour frequency - so footsteps are a low red, people talking is a multicoloured, higher-pitched, changing green-blue, and gunfire is literally white noise.

Two questions:

(a) Would deaf people "get" it? Those who used to hear and are now deaf probably will. But it's a bit like trying to explain to the blind what colour is - I guess you can say it's like texture, but different. You'd have to ask a deaf person.

(b) Is it good enough to allow them to play on an equal footing with hearing people using headphones? It should certainly even the field out a bit, but I can't help feeling it's like allowing people to play FPS games on the keyboard instead of a mouse - it sounds good in theory, but in practice it's fairly pointless, and in multiplayer you'd get obliterated. Of course, it may work very well indeed - you'd have to test it.

Neither of the above are showstoppers, of course, and if it doesn't take long to do and actually enhances the gameplay for some people - go for it.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Mark Wayland [mailto:mwayland@...]
> Sent: 11 November 2000 00:28
> To: gdalgorithmslist@...
> Subject: [Algorithms] Sound support for the deaf
>
> On a similar topic mentioned in the is-shadow thread, has anyone
> thought of a sound interface for the deaf that provides visual
> clues instead of actual sound? Just a thought anyway...
>
> Mark
>
> _______________________________________________
> GDAlgorithmslist mailing list
> GDAlgorithmslist@...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithmslist
From: Lionel Fumery <lifml@mi...> - 2000-11-13 10:56:09

Hello,

At this time, my 3D engine uses the following bounding volumes:

- AABBs and an AA-BSP tree for global clipping
- OBB trees for collision.

Now I would like to get more precise collision information between two 3D objects, in order to build robust and precise physics algorithms. Which kind of bounding volumes can I use? Could anybody give me a starting point?

Thank you for any advice,
Lionel.
Hydravision Entertainment
From: Jamie Fowlston <j.fowlston@re...> - 2000-11-13 09:58:55

> I know from scientific visualization that transforming a point from
> world space to the parameter space of an object can be very
> expensive - is this ever done in raytracers anyway?

I suspect it may be done for parametric procedural textures, but I could well be wrong.

Jamie
From: Chris Jurney <cjurney1@ho...> - 2000-11-13 09:17:33

This algorithm also has the problem that it isn't necessarily tighter than an axis-aligned bounding box. For example, if your point set is the 8 corners of an axis-aligned bounding box centered on the origin with a width of 2, this algorithm produces a box with a width of around 3.46 (the cube's diagonal, 2*sqrt(3)).

You could use this algorithm to find the tightest bounding box by trying all combinations of point pairs to define the first 4 faces of the box (the last 2 can be calculated), but that's O(n^4) or so. And if you just tried all pairs blindly, you'd have to check to make sure each box generated actually included all points in the set.

In the realm of more useful information, if no one has a good algorithm: this and the axis-aligned box are two good guesses at a reasonably tight box, so you could choose the better of the two (by volume, or shorter longest axis, or some other metric).

Chris

> -----Original Message-----
> From: gdalgorithmslistadmin@... [mailto:gdalgorithmslistadmin@...]
> On Behalf Of int18@...
> Sent: Monday, November 13, 2000 2:31 AM
> To: gdalgorithmslist@...
> Subject: Re: [Algorithms] Bounding Box (James Zhang)
>
> [int18's farthest-pair construction is quoted in full in his original
> post, which appears elsewhere in this archive; the quote is trimmed here.]
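[Editorial note: the 3.46 figure above is easy to check. The farthest pair among the cube's corners is a pair of opposite corners, so the heuristic's first box axis is the cube diagonal; projecting the corners onto it gives the box width. A minimal sketch (variable names are mine):]

```python
import math

# Corners of an axis-aligned cube of width 2 centred on the origin.
corners = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]

# The farthest-apart pair is a pair of opposite corners, so the heuristic's
# first box axis is the unit cube diagonal (1,1,1)/sqrt(3).
inv = 1.0 / math.sqrt(3.0)
axis = (inv, inv, inv)

# Width of the heuristic's box along that axis = spread of the projections.
proj = [sx * axis[0] + sy * axis[1] + sz * axis[2] for (sx, sy, sz) in corners]
width = max(proj) - min(proj)   # 2*sqrt(3) ~ 3.46, vs. 2 for the AABB
```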
From: Peter-Pike Sloan <ppsloan@mi...> - 2000-11-13 08:52:22

Hi,

Here are some things to look at:

http://www.cs.unc.edu/~geom/V_COLLIDE/index.html
http://www.cs.unc.edu/~dm/collision.html

(Look at the OBB tree paper, from SIGGRAPH 96.)

If you just have points, the easiest thing to do is build the covariance matrix and solve for its eigenvectors (these will be the axes of the bounding box, centered at the mean of the points). The covariance matrix is a symmetric 3x3 matrix:

A B C
B D E
C E F

where:

A = sum(dx*dx)/N
D = sum(dy*dy)/N
F = sum(dz*dz)/N
B = sum(dx*dy)/N
C = sum(dx*dz)/N
E = sum(dy*dz)/N

dx = x - ux  // x coordinate of a point minus the mean
dy = y - uy
dz = z - uz

ux = sum(Xi)/N  // average x coordinate
uy = sum(Yi)/N  // average y coordinate
uz = sum(Zi)/N  // average z coordinate

The OBB paper discusses how to compute a covariance matrix analytically for polygonal objects, and discusses why interior points (relative to the convex hull) can bias the results (i.e., you should use the convex hull of the point set).

Compute the eigenvectors of this matrix (look in Numerical Recipes for ways to do this - remember it's a symmetric matrix, and I think it is positive definite in the non-degenerate case...). Now find the extents of the point set in the coordinate frame centered at the mean, with the eigenvectors as the axes (you might be able to compute this directly from the eigenvalues, but I'm not sure...).

This requires one pass through the point set to compute the mean, one pass to compute the 3x3 matrix, the computation of the eigenvectors of a 3x3 matrix, and one pass to build the extents in the coordinate frame based on the eigenvectors and mean.

Hope that helps,

Peter-Pike Sloan

-----Original Message-----
From: Zhang Yanci [mailto:zhang_yan_c@...]
Sent: Sunday, November 12, 2000 7:11 PM
To: gdalgorithmslist@...
Subject: [Algorithms] Bounding Box (James Zhang)

Hi:

I want to use bounding boxes to bound a 3D point set tightly. I know some bounding box techniques, but they can't meet my "tight bounding" requirement; on the other hand, surface reconstruction from scattered points produces a surface that is too complex and accurate. In a word, I need a comparatively simple bounding box that bounds the 3D point set as tightly as possible. Does anybody know such an algorithm or internet resources?
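[Editorial note: the recipe above can be sketched end to end in pure Python. The covariance build follows the formulas in the post; the eigen-solve here is a classical Jacobi iteration (one of the symmetric-matrix methods Numerical Recipes covers), and the rotated-box demo at the end is a made-up test case, not from the post:]

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(X):
    return [[X[j][i] for j in range(3)] for i in range(3)]

def covariance(points):
    """Mean and 3x3 covariance matrix of a point set, per the formulas above."""
    n = len(points)
    u = [sum(p[i] for p in points) / n for i in range(3)]
    C = [[0.0] * 3 for _ in range(3)]
    for p in points:
        d = [p[i] - u[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                C[i][j] += d[i] * d[j] / n
    return u, C

def jacobi_eigenvectors(A, eps=1e-12):
    """Eigenvectors (as columns of the result) of a symmetric 3x3 matrix,
    found by repeatedly rotating away the largest off-diagonal element."""
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(3)] for i in range(3)]
    for _ in range(100):
        p, q = max([(0, 1), (0, 2), (1, 2)], key=lambda ij: abs(A[ij[0]][ij[1]]))
        if abs(A[p][q]) < eps:
            break
        theta = 0.5 * math.atan2(2.0 * A[p][q], A[p][p] - A[q][q])
        c, s = math.cos(theta), math.sin(theta)
        R = [[float(i == j) for j in range(3)] for i in range(3)]
        R[p][p] = c; R[p][q] = -s
        R[q][p] = s; R[q][q] = c
        A = matmul(matmul(transpose(R), A), R)   # zeroes A[p][q]
        V = matmul(V, R)
    return V

def obb(points):
    """Oriented box: mean, axes (columns of V), and width along each axis."""
    u, C = covariance(points)
    V = jacobi_eigenvectors(C)
    widths = []
    for k in range(3):
        axis = [V[0][k], V[1][k], V[2][k]]
        proj = [sum((p[i] - u[i]) * axis[i] for i in range(3)) for p in points]
        widths.append(max(proj) - min(proj))
    return u, V, widths

# Demo: corners of a 4 x 2 x 1 box rotated 45 degrees about z; the
# recovered extents should be 4, 2 and 1 (in some order).
a = math.pi / 4
pts = [[x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a), z]
       for x in (-2, 2) for y in (-1, 1) for z in (-0.5, 0.5)]
u, V, widths = obb(pts)
```

[Note that the demo's box has three distinct side lengths on purpose: with a repeated eigenvalue, the two eigenvectors in that plane are arbitrary, and the extents along them can overestimate the true box.]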
From: oles <oles@gs...> - 2000-11-13 08:49:40

The OBB is the only one that is simple to handle and does not have a very large volume relative to the ideal volume. The other way to go is convex hulls, but the math is quite complex.
From: int18@MIT.EDU - 2000-11-13 07:30:56

If you calculate the two points that are farthest away from each other, you can use this info to calculate two of the planes of the bounding box, (1) and (2): the normal of these two planes can be calculated from the equation of the line between p1 and p2.

Now look for the point that is farthest away from the line between p1 and p2 - that gives you a point on yet another of the planes of the bounding box, (3). Plane (4) can be calculated by finding the point that is farthest away from (3). Planes (5) and (6) can be found by computing the equation of the plane that is perpendicular to both (1) and (3) - or (2) and (4) - and finding the two points that are farthest away from this plane.

This algorithm runs in O(6*numpoints), which might or might not be good enough for you.

int
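[Editorial note: the construction above is straightforward to prototype. A sketch in Python, with two caveats of mine: finding the farthest pair naively is O(n^2), so the O(6*numpoints) figure assumes that pair is already in hand, and the code assumes the points are not all collinear:]

```python
import math

def sub(a, b):  return [a[i] - b[i] for i in range(3)]
def dot(a, b):  return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def normalize(v):
    l = math.sqrt(dot(v, v))
    return [x / l for x in v]

def heuristic_box(points):
    """Box axes and extents from the farthest-pair heuristic above."""
    # Planes (1)/(2): normal along the farthest-apart pair p1, p2
    # (naive O(n^2) search).
    p1, p2 = max(((a, b) for a in points for b in points),
                 key=lambda ab: dot(sub(ab[0], ab[1]), sub(ab[0], ab[1])))
    n1 = normalize(sub(p2, p1))

    # Plane (3): the perpendicular offset of the point farthest from the
    # line p1-p2 gives the second axis.
    def off_line(p):
        d = sub(p, p1)
        t = dot(d, n1)
        return [d[i] - t * n1[i] for i in range(3)]
    p3 = max(points, key=lambda p: dot(off_line(p), off_line(p)))
    n2 = normalize(off_line(p3))

    # Planes (5)/(6): perpendicular to both of the above.
    n3 = cross(n1, n2)

    # Min/max projections give the extents, so the box always contains
    # the whole set.
    widths = []
    for axis in (n1, n2, n3):
        proj = [dot(p, axis) for p in points]
        widths.append(max(proj) - min(proj))
    return (n1, n2, n3), widths

# The width-2 cube counterexample from elsewhere in this thread:
cube = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
axes, widths = heuristic_box(cube)   # widths[0] is 2*sqrt(3) ~ 3.46
```

[On this point set the heuristic's box has volume 32 versus the AABB's 8, which illustrates Chris Jurney's objection that the result isn't necessarily tighter than an axis-aligned box.]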
From: Peter-Pike Sloan <ppsloan@mi...> - 2000-11-13 06:07:54

Hi,

I'm cc'ing Steve Marschner; he probably can point you at some images if you want.

It's fairly common to factor the BRDF into a diffuse and a specular part - that's probably what you were seeing with what Westin did. You should still be able to see a difference when using a point light source - look at the separable BRDF work that Jan Kautz did, for example (demos on NVidia's web page).

Peter-Pike Sloan

(You also want to look at the new cloth modeling book by AK Peters - there is a very good rendering chapter by the late Alain Fournier that also uses a multi-lobe approximation to BRDFs. There is a tech report on the web at UBC, but it is from a paper that was in the 1992 GI symposium on local illumination, and I've never seen one of those before: http://www.cs.ubc.ca/labs/imager/tr/fournier.1992c.html )

-----Original Message-----
From: Adam Moravanszky [mailto:amoravanszky@...]
Sent: Sunday, November 12, 2000 11:41 AM
To: gdalgorithmslist@...
Subject: [Algorithms] BRDFs and Lafortune's cosine lobes

Hi,

The good news is that the raytracer works. I can render objects with Phong shading perfectly. I also wrote a shader that shades using arbitrary BRDFs, as approximated by the generalized cosine lobe model described in the SIGGRAPH 97 paper "Non-Linear Approximation of Reflectance Functions" by Lafortune, Foo, Torrance, and Greenberg. This is a long paper that describes something very simple. A BRDF is approximated by a finite sum of exponential basis functions, of the form:

BRDF(u,v) = Sigma [i=1..m] (Cxi*u.x*v.x + Cyi*u.y*v.y + Czi*u.z*v.z)^ni

Sigma denotes the sum sign; here we are summing m basis functions, each of which has four scalar parameters: Cx, Cy, Cz and n. u and v are the usual vector-valued parameters to the BRDF, denoting the incident and exitant rays (for simple rendering, the eye and light vectors). Note that in practice one has three sets of these parameters, one for each color channel. The run-of-the-mill BRDF can usually be approximated by m = two or three basis functions, called lobes.

Now for the questions. First, an easy one. I found sets of parameters for this model in several places, one being part of Marschner et al.'s paper "Image-Based BRDF Measurement Including Human Skin". He gives parameters for a white and a black guy (an interesting aside is that he approximates the black person's skin with only two lobes, while the white person gets three. Does this imply that in the near future it will be faster to render black characters in hardware-accelerated computer games? :) ) Along with the data, he gives the following matrix to convert the resulting colors from the RGB color space of his digital camera to monitor RGB:

1.063302  0.382044  0.445346
0.298125  1.667665  0.369540
1.322302  0.446321  2.768624

What confuses me here is that it transforms a LOT of colors to have negative component values. How am I to deal with that? Do I add the largest negative value in the image to all values, and then normalize the image to [0,1]?

The second question has to do with the data itself. The exponents (the ni's) are really huge. These behave something like specular exponents. In the paper itself, data is given for matte blue paint. The exponents for the three lobes are:

18.6  2.58  63.8

Unfortunately, in my implementation, an exponent greater than about 8 completely forces the lobe's contribution to zero over the entire surface of a sphere. Note that the formula reduces to the dot product of u and v if you set Cxyz to 1. Thus a lobe with such a high exponent only contributes a tiny dot at the mirror angle?

The other problem is that I am not really sure if I'm doing things right. My results look plausible, but I have no idea. I did get some tiny off-specular reflection in the case of the blue paint like the paper says I should, so I'm not completely off. Did anyone implement this stuff and have some images I can compare against? I take it that I will be working on this for some time, and I wouldn't mind if I could discuss things with someone at length.

I did find a RenderMan shader written by a Stephen H. Westin that does the same thing I want. What I am confused about is how he puts the BRDF data into the shading pipeline. I am no RenderMan expert, but I think he does:

pixel_color = (material.ambient + material.diffuse) * color_texture_map

for each lobe:
    pixel_color += BRDF(u,v) * light_color * (u . N)

where . is the dot product, N the surface normal, and u the to_light vector. What bothers me about this is that he does not multiply the diffuse component by (u . N), but he does the BRDF, which already has this info in it. Also, does the BRDF not already account for any and all diffuse? So in short, I'm wondering how a BRDF is supposed to be evaluated in a shader.

Also, every paper I saw evaluates BRDFs via some type of radiosity - most certainly not point lights like me. Is this even supposed to look right when I use a point source? There is one example I found in Debevec's "Acquiring the Reflectance Field of a Human Face" where he renders a face lit by three monochromatic, effectively point sources. There the result is about as unspectacular as what I'm getting now.

To summarize, I am a little too confused now to ask good questions, but considering I wrote all this stuff from scratch since this morning, I can't say I'm disappointed. :)

--
Adam Moravanszky
http://www.n.ethz.ch/student/adammo
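[Editorial note: a minimal sketch of evaluating the cosine-lobe sum. The clamp of a negative base and the local-frame convention (z = surface normal) are assumptions of this sketch, so check them against the paper. Two things fall out of it: with Cx = Cy = -1, Cz = 1 the lobe peaks at the mirror direction (the measured data has Cx, Cy negative for the same reason), and an exponent near 64 really does confine the lobe to a few degrees, so the tiny highlight is expected rather than a bug:]

```python
import math

def lafortune_brdf(u, v, lobes):
    """Sum of generalized cosine lobes (Lafortune et al., SIGGRAPH 97).
    u, v: unit incident/exitant directions in the local surface frame
    (z is the normal); lobes: list of (Cx, Cy, Cz, n) tuples."""
    total = 0.0
    for cx, cy, cz, n in lobes:
        d = cx * u[0] * v[0] + cy * u[1] * v[1] + cz * u[2] * v[2]
        if d > 0.0:          # clamp: a negative base must not contribute
            total += d ** n
    return total

# A mirror-direction lobe: with Cx = Cy = -1, Cz = 1 the base equals the
# cosine of the angle between v and the mirror reflection of u, so the
# lobe peaks (value 1) exactly at the mirror direction.
u = [math.sin(0.3), 0.0, math.cos(0.3)]        # incident, ~17 deg off normal
mirror = [-u[0], -u[1], u[2]]
peak = lafortune_brdf(u, mirror, [(-1.0, -1.0, 1.0, 63.8)])   # 1.0

# How fast does n = 63.8 fall off? With Cxyz = 1 the base is just u.v,
# so evaluate the lobe 30 degrees away from its peak:
w = [math.sin(math.pi / 6), 0.0, math.cos(math.pi / 6)]
falloff = lafortune_brdf([0.0, 0.0, 1.0], w, [(1.0, 1.0, 1.0, 63.8)])
# falloff = cos(30 deg)^63.8, on the order of 1e-4: a tiny, sharp dot.
```

[On the shader question: as I read it, the extra (u . N) factor in Westin's loop is the foreshortening cosine of the rendering equation, not part of the BRDF, so even a lobe sum that already "contains" the diffuse behaviour still gets multiplied by it for each light.]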