From: Jason M. <ko...@gm...> - 2008-08-29 20:40:30
I like this. It puts into contrast what separates tutorials from examples from demos.

A *demo* is just a program with documentation that shows how to do something. The code itself may not be well formatted, but you can learn some effect by reading the demo's docs. An *example* is a well-documented program: the documentation actually walks you, step by step, through the relevant parts of the code. Examples can often skip initialization and so forth, but they need to explain in detail the code relevant to the purpose of the example. A *tutorial* is a teaching aid. It is important for tutorials to have a sequence (one tutorial teaches things that are used in the next), and it is important for them to engage the reader. The code listing for a tutorial isn't nearly as important as how the tutorial itself is written. And as you point out, a tutorial doesn't even need to have a functional code listing; it exists to help the reader learn the concepts involved.

This tells me that tutorials need to be something we do "in-house," rather than commissioning people to provide them. Or, at the very least, we should provide the actual subject for each tutorial. A certain standard level of quality needs to exist for them, and it will take concerted effort to maintain that level of quality.

As for the "stops on the road," the main problem is the dramatic jump between "I've got a GL window" and "render something." It requires introducing VBOs and shaders. The VBO stuff is fairly simple; it's much like a form of malloc. The problem comes with shaders, which are incredibly complicated. I suppose in a tutorial you can just handwave it as "magical setup work that we'll detail later," but I'd be at least a little concerned about how that would read to a user.

To expand on your list a bit:

- How do I have a camera independent of the transform for the object?
  - Requires multiple objects
  - Introduces multiple sequential transforms
  - Concatenates them into one matrix for ease of use
- How do I map textures?
  - Introduces passing of texture coordinates
  - Introduces texture-mapping fragment operations
- How do I introduce lighting?
  - Needs to be after the camera one.
  - Start with directional lights and a semi-complicated object (a sphere).
  - Introduce normals and the diffuse lighting equation.
  - Do per-vertex lighting, passing a color value.
  - Deal with normal transforms as separate from position transforms.
  - Interactions with textures.
- What about better lighting?
  - Introduce a specular term.
  - Use it per-vertex. Show why this is bad.
  - Move to per-fragment lighting.
  - Different interactions with textures.

On Fri, Aug 29, 2008 at 9:41 AM, Rob Barris <rb...@bl...> wrote:

> Hello Henri, yep I'm still here..
>
> I think the #1 goal has to be "walk developers step by step through
> use of the API". I think to keep focus we may need to set some
> explicit *non*-goals for a period of time, and here are a few I would
> suggest, since they don't help teach the API.
>
> a) SIMD optimization for the math lib. <- fun, but off course.
> b) anything that would tip the balance towards "web site authoring
>    and maint" and away from making code.
> c) 3D geometry/math tutorials - we can link to any number of books/
>    sites for this stuff
> d) anything else that consumes calendar time at the expense of demo
>    code.
>
> One class of example that might be interesting would be a Rosetta
> stone, where several ways to accomplish the same goal are shown. For
> example, the simplest thing I can think of: a tumbling triangle. You can
> do it with all the math on the CPU and push finished verts to the
> GPU. You can then say "maybe I should do the rotation in the shader"
> and demonstrate how to hoist code from CPU to GPU and have it run in
> the shader.
> You could show one version where we compute the rotation matrix on
> the CPU and pass that as a uniform to be used trivially in the shader,
> then you could do one where (just for kicks) we pass Euler angles as
> uniforms and let the shader generate the matrix itself. We could do
> one where we pass non-Cartesian verts (azimuth/elevation and radius)
> and let the shader do the polar-to-Cartesian conversion. By the end of
> it you've learned a few different ways to push data from CPU to GPU
> and have it acted on.
>
> Stops on the API-learning road.. please add some stops here:
>
> - how do I establish a context and put it on the screen
>   - make a window, init context, clear, swap
>
> - how do I put data in a buffer so the GPU can see it
>   - no drawing - just demonstrate the buffer APIs
>   - perhaps BufferData some floats into a buffer, then map it and
>     print the values seen
>
> - how do I draw the simplest possible triangle
>   - pass-through shader for vert position
>   - "write red" for pixel
>   - constant verts for geometry in a VBO
>
> - how can I change the color of the triangle
>   - introduce a per-vertex color attribute
>   - vertex shader passes it through
>   - alter pixel shader to read it and emit it
>
> - introduction to uniforms - communicate with shaders
>   - show a color change using a uniform (several ways to do this)
>
> - how can I move the triangle
>   - see above
>
> That's a lot of good tutorial steps before we even start talking
> about more than one triangle. Ideally you get to the end of this
> phase and you have an idea of how to put data in a buffer, how to lay
> out some vertex data, how to draw a triangle, and how to communicate
> on a very basic level with the shaders, and no usage of deprecated APIs.
>
> Rob
>
> _______________________________________________
> Glsdk-devel mailing list
> Gls...@li...
> https://lists.sourceforge.net/lists/listinfo/glsdk-devel