Travis Heppe <heppe@...> writes:
> tom fogal wrote:
> > I'm not really sure how this fits into GLEW. `Shader Model' is a
> > DirectX concept; OpenGL 3.1 at least doesn't define any kind of
> > `model'.
> That's not strictly true. Perhaps not explicitly, but each
> successive generation of hardware has added capabilities that
> inform what may be put into a GLSL shader.
That's exactly my point: this is not an issue with loading an OpenGL
extension, but rather an issue that occurs because OpenGL vendors
routinely break the OpenGL abstraction.
> Three examples:
>   - gl_FrontFacing as a fragment shader input
>   - dFdx and the other derivative functions
>   - instruction limits
> ...and there are many more.
I don't see anything in GLSL 1.20 which allows an implementation to
*not* implement dFdx. Nor can I find anything about shader length
limits in the 3.1 OGL spec.
I don't have the 1.10 document to compare against, but assuming those
features aren't in 1.10, you can use `#version 120', or #if tests on
__VERSION__, to verify that your shader runs in a 1.20-supporting
environment.
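A minimal sketch of that approach; the shader body is a made-up
example, but __VERSION__ is a standard GLSL macro (an integer, 120
for GLSL 1.20) and #if/#error are part of GLSL's preprocessor:

    /* a fragment shader that declares its requirements up front */
    static const char *frag_src =
        "#version 120\n"
        "#if __VERSION__ < 120\n"
        "#  error this shader requires GLSL 1.20\n"
        "#endif\n"
        "void main() {\n"
        "    gl_FragColor = vec4(dFdx(gl_FragCoord.x));\n"
        "}\n";

An environment that only supports 1.10 should reject the #version
line on its own; the #if is a belt-and-braces check for drivers that
ignore it.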
Anyway, if you've got a 1.20-compliant shader which declares (via
#version) that it requires 1.20, and a driver fails to compile it ...
that's a bug in the driver (or in AMD's ShaderAnalyzer, as the case
may be). Ditto for a shader being too long for the hardware.
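All an application can do is detect the failure and fall back to
something simpler. A minimal sketch, reusing the frag_src string from
above (assumes a current GL context, <GL/glew.h>, and <stdio.h>):

    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL);
    glCompileShader(sh);

    GLint status;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        /* spec-compliant shader, non-compliant driver: log it and
           switch to a simpler shader or a fixed-function path */
        char log[4096];
        glGetShaderInfoLog(sh, sizeof(log), NULL, log);
        fprintf(stderr, "compile rejected: %s\n", log);
    }

    /* instruction limits on older parts often surface at link time
       instead, so that stage needs the same treatment: */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, sh);
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &status);
    if (status != GL_TRUE) {
        char log[4096];
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        fprintf(stderr, "link rejected: %s\n", log);
    }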
While these are very real problems that occur all too often (and as
application developers we *need* to provide workarounds), I still do
not feel that GLEW is the appropriate place for such workarounds.
> AMD's ShaderAnalyzer asks the user to specify a graphics board
> before it will show what shader assembly a piece of GLSL resolves
> to, and it reports errors if you specify a target that is too old.
Again, this is only resolvable correctly with a per-card database of
some kind. I'd bet AMD's program is essentially doing that under the
hood.
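(If you wanted to build such a database yourself, the only portable
keys are the GL_VENDOR and GL_RENDERER strings. A sketch of the shape
it takes; the entries below are placeholders, not real data:

    #include <string.h>
    #include <GL/glew.h>

    /* hypothetical per-card quirk table; figures are made up */
    struct quirk { const char *renderer_substr; int max_instructions; };
    static const struct quirk quirks[] = {
        { "RADEON 9",   64 },
        { "GeForce FX", 256 },
    };

    static int guess_instruction_limit(void)
    {
        const char *r = (const char *)glGetString(GL_RENDERER);
        for (size_t i = 0; i < sizeof(quirks)/sizeof(quirks[0]); ++i)
            if (r && strstr(r, quirks[i].renderer_substr))
                return quirks[i].max_instructions;
        return -1; /* unknown card: assume no quirk */
    }

Maintaining that table for every card ever shipped is exactly the
sort of job I don't think GLEW should take on.)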
> > The natural complaint is `but I need to know which operations
> > will be done in HW'. I sympathize; this hits us as well. But the
> > solution should come from OpenGL, because anything downstream can
> > only give doomed-to-be-poor approximations.
> I agree that a solution inside of OpenGL would be preferable, but
> if OpenGL made it easy to manage its extensions, GLEW wouldn't have
> any need to exist.
I think we might just disagree on what GLEW is for. I happen to think
OpenGL's extension mechanism is done very well; it's a necessarily
hard problem, and GLEW abstracts it fairly well. That's where GLEW
fits: I view it as a simple way to (basically) dlsym() all the OpenGL
symbols I care about, and nothing more.
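To make that concrete, here is the entirety of what I want from it,
as a sketch (the function name and error handling are mine):

    #include <stdio.h>
    #include <GL/glew.h>

    /* call once, after creating a GL context */
    static int init_gl_entry_points(void)
    {
        if (glewInit() != GLEW_OK) {
            fprintf(stderr, "glewInit failed.\n");
            return 0;
        }
        /* every entry point is now resolved; availability checks
           are plain booleans exported by GLEW: */
        if (!GLEW_VERSION_2_0 && !GLEW_ARB_shader_objects) {
            fprintf(stderr, "no GLSL support available.\n");
            return 0;
        }
        return 1;
    }

Nothing card-specific, nothing clever: just symbol loading.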
> In any case, waiting for OpenGL to resolve these things internally is
> probably going to take longer than I care to wait.
It sounds like you want a library that fixes/abstracts some of the
more difficult problems in developing OpenGL applications. I think
it would be very useful, but I don't think this library should be
GLEW, and I certainly don't think GLEW should ever contain any sort of
card-specific logic (other than loading vendor-specific extensions).
> It's not always a matter of whether the GLSL code will be accepted at
> all by the compiler. Our experience is that older parts will fail
> at either shader compile or shader link time [. . .]
Yeah, we hit this as well, and it sucks. If you develop a library of
workarounds, perhaps on top of GLEW, I'd like to see it licensed the
same way GLEW is (BSD, I think?). I imagine we'd make use of it around