From: Mathias Fröhlich <Mathias.Froehlich@gm...> - 2012-10-16 16:38:34

Hi,

On Tuesday, October 16, 2012 15:17:04 Tim Moore wrote:
> I don't have access to a local copy of the tree at the mo', but I
> remember that this was introduced by Mathias when he added BVH.

Yes. That is to align the bounding volume boxes as well as the drawables' bounding boxes to the earth's surface, which makes most of them smaller on average. But never rely on something like this in a renderer.

Mathias
From: Frederic Bouvier <fredfgfs01@fr...> - 2012-10-16 13:17:25

> From: "James Turner"
>
> On 16 Oct 2012, at 13:38, Tim Moore wrote:
>
>> The tile data on disk is actually stored in a coordinate system that
>> is aligned with the earth-centric system, so Z points to the north
>> pole. We rotate the coordinates back to a local coordinate system
>> because that provides a much more useful bounding box for
>> intersection testing and culling... and also lets you program snow
>> lines in shaders :)
>
> Uh, are you sure about that? My understanding is that the BTG coords
> on the disk are in 'tile local' coords, i.e. 'Z is up'

BTG files are in Cartesian coordinates and are rotated at load time here:
http://gitorious.org/fg/simgear/blobs/next/simgear/scene/tgdb/obj.cxx#line923

Regards,
Fred
From: Tim Moore <timoore33@gm...> - 2012-10-16 13:17:14

On Tue, Oct 16, 2012 at 2:54 PM, James Turner <zakalawe@...> wrote:
>
> On 16 Oct 2012, at 13:38, Tim Moore wrote:
>
>> The tile data on disk is actually stored in a coordinate system that
>> is aligned with the earth-centric system, so Z points to the north
>> pole. We rotate the coordinates back to a local coordinate system
>> because that provides a much more useful bounding box for
>> intersection testing and culling... and also lets you program snow
>> lines in shaders :)
>
> Uh, are you sure about that? My understanding is that the BTG coords
> on the disk are in 'tile local' coords, i.e. 'Z is up'
>
> James

https://gitorious.org/fg/simgear/blobs/next/simgear/scene/tgdb/obj.cxx#line925

I don't have access to a local copy of the tree at the mo', but I remember that this was introduced by Mathias when he added BVH.

Tim
From: James Turner <zakalawe@ma...> - 2012-10-16 12:54:58

On 16 Oct 2012, at 13:38, Tim Moore wrote:

> The tile data on disk is actually stored in a coordinate system that
> is aligned with the earth-centric system, so Z points to the north
> pole. We rotate the coordinates back to a local coordinate system
> because that provides a much more useful bounding box for
> intersection testing and culling... and also lets you program snow
> lines in shaders :)

Uh, are you sure about that? My understanding is that the BTG coords on the disk are in 'tile local' coords, i.e. 'Z is up'

James
From: Tim Moore <timoore33@gm...> - 2012-10-16 12:38:46

On Tue, Oct 16, 2012 at 12:05 PM, Renk Thorsten <thorsten.i.renk@...> wrote:
>> One can assume that a vec4 varying is no more expensive than a vec3.
> (...)
>> I'm not sure it's useful to think of each component of a varying
>> vector as a "varying" i.e., three vec3 varying values use up three
>> varying "slots," and so do 3 float varying values
>
> I dunno...
>
> Just counting the number of operations, mathematically the best-case
> scenario for interpolating a general vector across a triangle is in
> Cartesian coordinates, where each coordinate interpolates as an
> independent number, so the cost of a vec4 would be the same as the
> cost for 4 floats. In any other case, like curved coordinates or
> Minkowski space, a Jacobian comes to bite and the vector is more
> expensive than just 4 scalar numbers.

Yes, I acknowledge that interpolating a vec4 requires more operations than interpolating a float :)

> Now, what I don't know is if there's some fancy hardware trick which
> makes a Cartesian vec4 as cheap as a float. In this case, we could
> use this by combining every four varying floats into one varying vec4
> and get the same job done for 25% of the cost. But...

That's the crux of it. I thought the answer was obvious, but it very much depends on the hardware. For a very long time graphics hardware has had to rasterize, i.e., interpolate, multiple values across screen space: depth, color, alpha, texture coordinates.... I just assumed that it would be no more expensive to interpolate vector values. However, this very good web page, http://fgiesen.wordpress.com/2011/07/10/a-trip-through-the-graphics-pipeline-2011-part-8/, contains this quote:

"Update: Marco Salvi points out (in the comments below) that while there used to be dedicated interpolators, by now the trend is towards just having them return the barycentric coordinates to plug into the plane equations. The actual evaluation (two multiply-adds per attribute) can be done in the shader unit."

So the cost of interpolating values is indeed incurred as operations in the (prolog of the) fragment shader. Even the oldest hardware that supports OpenGL programmable shaders implements vector operations, and a vector multiply-add has, as far as I know, the same cost as a scalar operation. On the other hand, the shader compiler might be able to combine multiple scalar interpolations into vector ops. You can examine the assembly language for shaders if you want to see what's actually going on. I do recommend that web page and the others in the series; they are quite interesting.

> ... the thing I did try is that in adapting the urban shader to
> atmospheric scattering I ran out of varying slots; I needed two more
> varying floats. I solved this by deleting one varying vec3 (the
> binormal) and computing it as the cross product, and that gave me the
> two slots I needed (and presumably one left, but I didn't try that).
> So this would suggest that indeed each vector component counts the
> same as a varying float.

They do at the OpenGL API level, which doesn't necessarily correspond to the hardware implementation.

>> One reason to pass this as a varying is that on old hardware, GeForce
>> 7 and earlier, it is very expensive to change a uniform that is used
>> by a fragment shader. It forces the shader to be recompiled. So, this
>> is actually a well-known optimization for old machines.
>
> Okay, I didn't know that... But pretty much all weather- and
> environment-dependent stuff (ground haze functions, the wave amplitude
> for the water shader, overcast haze for the skydome, ...) makes use of
> slowly but continuously changing uniforms (I think gl_LightSource is
> technically also a uniform), so it doesn't really make sense to have
> this old-machine-friendly code in one place in the shader but not in
> other places in the same shader.

True.

>> Also, I want to point out that, in your example, lightdir is in the
>> local coordinate system of the terrain, if in fact you are shading
>> terrain. I would call "world space" the earth-centric coordinate
>> system in which the camera orientation is defined.
>
> gl_Vertex is in some coordinate system which I've usually encountered
> as 'world space' in shader texts, as opposed to gl_Position, which is
> supposed to contain the vertex coordinates in 'eye space'. I realize
> that gl_Vertex is *not* in the global (xyz) coordinates of FlightGear
> Earth, although I don't know how the two relate. Somehow once in the
> shader world, z is always up... Just a matter of semantics?

I think the more usual name for the local coordinate system is "model coordinates." The model matrix transforms those coordinates into world coordinates; the view matrix transforms world coordinates into eye coordinates. In OpenGL, even in pre-shader days, we tend not to talk about "world" space much because there is (was) only one matrix stack, which contains the concatenation of the model and view matrices.

"z is always up" is a matter of convenience. "Z is up" only at the center of a tile. The tile data on disk is actually stored in a coordinate system that is aligned with the earth-centric system, so Z points to the north pole. We rotate the coordinates back to a local coordinate system because that provides a much more useful bounding box for intersection testing and culling... and also lets you program snow lines in shaders :)

Tim
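The binormal trick discussed here (drop the varying vec3 and rebuild it from the normal and tangent) could look roughly like this in a fragment shader. This is only a minimal sketch: the varying names `normal` and `tangent` and the placeholder output are invented for illustration, not taken from the actual FlightGear shaders.

```glsl
// Fragment shader sketch: instead of a third varying vec3 for the
// binormal, reconstruct it from the two varyings we already need.
// Frees three varying components for the price of one cross product
// per fragment.
varying vec3 normal;   // interpolated surface normal (placeholder name)
varying vec3 tangent;  // interpolated tangent vector (placeholder name)

void main()
{
    vec3 n = normalize(normal);
    vec3 t = normalize(tangent);

    // The binormal completes the tangent-space basis; no varying needed.
    vec3 binormal = cross(n, t);

    // Placeholder: visualize the reconstructed binormal.
    gl_FragColor = vec4(binormal * 0.5 + 0.5, 1.0);
}
```

Note that the interpolated vectors are renormalized before the cross product, since linear interpolation across a triangle does not preserve unit length.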
From: Renk Thorsten <thorsten.i.renk@jy...> - 2012-10-16 10:05:26

> One can assume that a vec4 varying is no more expensive than a vec3.
(...)
> I'm not sure it's useful to think of each component of a varying
> vector as a "varying" i.e., three vec3 varying values use up three
> varying "slots," and so do 3 float varying values

I dunno...

Just counting the number of operations, mathematically the best-case scenario for interpolating a general vector across a triangle is in Cartesian coordinates, where each coordinate interpolates as an independent number, so the cost of a vec4 would be the same as the cost for 4 floats. In any other case, like curved coordinates or Minkowski space, a Jacobian comes to bite and the vector is more expensive than just 4 scalar numbers.

Now, what I don't know is if there's some fancy hardware trick which makes a Cartesian vec4 as cheap as a float. In that case, we could use this by combining every four varying floats into one varying vec4 and get the same job done for 25% of the cost. But...

... the thing I did try is that in adapting the urban shader to atmospheric scattering I ran out of varying slots; I needed two more varying floats. I solved this by deleting one varying vec3 (the binormal) and computing it as the cross product, and that gave me the two slots I needed (and presumably one left, but I didn't try that). So this would suggest that indeed each vector component counts the same as a varying float.

> One reason to pass this as a varying is that on old hardware, GeForce
> 7 and earlier, it is very expensive to change a uniform that is used
> by a fragment shader. It forces the shader to be recompiled. So, this
> is actually a well-known optimization for old machines.

Okay, I didn't know that... But pretty much all weather- and environment-dependent stuff (ground haze functions, the wave amplitude for the water shader, overcast haze for the skydome, ...) makes use of slowly but continuously changing uniforms (I think gl_LightSource is technically also a uniform), so it doesn't really make sense to have this old-machine-friendly code in one place in the shader but not in other places in the same shader.

> Also, I want to point out that, in your example, lightdir is in the
> local coordinate system of the terrain, if in fact you are shading
> terrain. I would call "world space" the earth-centric coordinate
> system in which the camera orientation is defined.

gl_Vertex is in some coordinate system which I've usually encountered as 'world space' in shader texts, as opposed to gl_Position, which is supposed to contain the vertex coordinates in 'eye space'. I realize that gl_Vertex is *not* in the global (xyz) coordinates of FlightGear Earth, although I don't know how the two relate. Somehow once in the shader world, z is always up... Just a matter of semantics?

> I don't think that any varyings - except for the fragment coordinates -
> are mandatory, except perhaps on very old hardware.

I remember at least one text claiming that once you use ftransform(), gl_FrontColor and gl_BackColor get values as well, but that source or I may be mistaken.

Cheers,

* Thorsten
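The packing Thorsten speculates about, combining four varying floats into one varying vec4, would be declared roughly like this. A hedged sketch only: the packed quantities and their names are invented for illustration, and whether this is actually cheaper than four floats depends on the hardware, as discussed in this thread.

```glsl
// Vertex shader sketch: four unrelated per-vertex scalars share one
// varying vec4 instead of occupying four separate varying floats.
// The scalar expressions below are placeholders, not FlightGear code.
varying vec4 packedData;  // x = eye distance, y = altitude,
                          // z, w = spare slots for two more scalars

void main()
{
    gl_Position = ftransform();

    float eyeDist  = length(vec3(gl_ModelViewMatrix * gl_Vertex));
    float altitude = gl_Vertex.z;   // placeholder quantity

    packedData = vec4(eyeDist, altitude, 0.0, 0.0);
}
```

The fragment shader then unpacks with plain swizzles, e.g. `float eyeDist = packedData.x;`, so the packing costs nothing at the point of use.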
From: Tim Moore <timoore33@gm...> - 2012-10-16 08:34:35

Thanks for writing this up. I have a couple of comments, nitpicks really.

On Mon, Oct 15, 2012 at 9:43 AM, Renk Thorsten <thorsten.i.renk@...> wrote:
>
> I thought it might be a good idea to write up a few things I've tried
> recently and not seen in widespread use, so that either others know
> about them as well or I can find out what the pitfalls are.
>
> Basically this is about reducing the number of varyings, which is
> desirable for at least two reasons. First, their total amount is
> quite limited (I think 32?). Second, they cause work per vertex and
> per pixel, so their load always scales with the current bottleneck.
> Their actual workload is just a linear interpolation across a
> triangle, though, so the optimization I'm talking about here is maybe
> altogether 10-20% gains, not something dramatic, and it's not
> unconditionally superior to save a varying if the additional workload
> in the fragment shader is substantial.
>
> Also, the techniques are somewhat 'dirty' in the sense that they make
> it a bit harder to understand what is happening inside the shader.
>
> * making use of gl_FrontColor and gl_BackColor -> gl_Color
>
> As far as I know, these are built-in varyings which are already there
> regardless of whether we use them or not. So if we don't use them at
> all because all color

I don't think that any varyings - except for the fragment coordinates - are mandatory, except perhaps on very old hardware. Generally the total shader program is optimized to remove any unnecessary computations. However...

> computations are in the fragment shader, they can carry four
> components of geometry; if we use a color but know the alpha, there
> is one varying which can be saved by using gl_Color.a to encode it.

I agree that using a coordinate in a varying that you already need is a good trick, better than assigning a new varying. One can assume that a vec4 varying is no more expensive than a vec3.

By the way, we only encode the front/back facing info in alpha in order to get around shader language bugs.

> The prime example is terrain rendering, where we know that the alpha
> channel is always 1.0 since the terrain mesh is never transparent. In
> default.vert/frag gl_Color.a is used to transport the information of
> whether a surface is front- or back-facing, but in terrain rendering
> we know we're always above the mesh, so all surfaces we see are
> front-facing, and we do backface culling in any case.
...
> * light in classic rendering
>
> Leaving Rembrandt aside, the direction of the light source (the sun)
> is not a varying but actually a uniform. In case we need this in
> world space in the fragment shader, doing a
>
> lightdir = normalize(vec3(gl_ModelViewMatrixInverse * gl_LightSource[0].position));
>
> in the vertex shader and passing this as a varying vec3 is quite
> overkill.

One reason to pass this as a varying is that on old hardware, GeForce 7 and earlier, it is very expensive to change a uniform that is used by a fragment shader. It forces the shader to be recompiled. So, this is actually a well-known optimization for old machines.

Also, I want to point out that, in your example, lightdir is in the local coordinate system of the terrain, if in fact you are shading terrain. I would call "world space" the earth-centric coordinate system in which the camera orientation is defined.

> Due to the complexity of the coordinate system of the terrain, it's
> not clear to me how to get the world-space light direction really
> into a uniform, but we do have its z-component (the sun angle above
> the horizon) as a property and can use this as a uniform. Since the
> light direction is a unit vector, it means that only the polar angle
> of the light needs to be passed as a varying then, saving two
> components.

We could include per-tile uniforms as state attributes in the scene graph if we decide that we really want them.

> In particular for water reflections computed in world space, passing
> normal, view direction and light direction in world coordinates from
> the vertex shader (9 varyings) is really not efficient: the normal of
> water surfaces in world space is (0,0,1) and not varying at all (we
> do formally have water on steep surfaces in the terrain, but we never
> render this correctly in any case, since in reality rivers don't run
> up and down mountain slopes and foam when they run really fast on
> slopes, and worrying about getting light reflection wrong when the
> whole setup is wrong is a bit academic), the light direction is
> really just the polar angle, and since we later dot everything with
> the normal we really only need the z-component of the half-vector,
> which means just two components of the view direction. So it can in
> principle be done with 3 varyings rather than 9.

I'm not sure it's useful to think of each component of a varying vector as a "varying" i.e., three vec3 varying values use up three varying "slots," and so do 3 float varying values. On the other hand, if you can pack all the lighting info into one varying, great.

> I've tested and used some of those; for instance, making use of the
> cross product or the alpha channel in terrain rendering performs just
> fine. A 10%-ish performance gain for the terrain shader isn't
> something to be massively excited about, but in the end combining a
> few 10% gains from various tricks does make an impact. Well, anyway -
> a few people just might be interested.

Absolutely!

Tim
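The water-reflection argument above, that dotting everything with a constant (0,0,1) normal leaves only z-components to compute, can be sketched as follows. This is an illustrative fragment, not the FlightGear water shader: the uniform name `sun_angle` and the placeholder output are assumptions.

```glsl
// Fragment shader sketch for water lighting in a local Z-up frame.
// The water normal is constant, so it is not passed as a varying, and
// the light direction reduces to its elevation angle as a uniform.
uniform float sun_angle;  // sun elevation above the horizon (assumed
                          // name; exported from the property tree as
                          // discussed in the thread)
varying vec3 viewdir;     // view direction in the local Z-up frame

void main()
{
    // Constant surface normal: no varying required.
    const vec3 normal = vec3(0.0, 0.0, 1.0);

    // dot(normal, lightdir) collapses to lightdir.z = sin(sun_angle),
    // so the diffuse term needs no light-direction varying at all.
    float diffuse = max(sin(sun_angle), 0.0);

    // Reflecting the view direction off a z-up plane just negates its
    // z-component, again without any additional varyings.
    vec3 reflected = vec3(viewdir.x, viewdir.y, -viewdir.z);

    gl_FragColor = vec4(vec3(diffuse), 1.0);  // placeholder shading
}
```

The specular half-vector term needs a little more care (its normalization involves the full view and light directions), which is presumably why the thread estimates three varyings rather than one.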