
#24 PLSciFiDemo_Windows_0_2 shader compile errors with Intel GPU

Status: open
Owner: nobody
Labels: None
Priority: 5
Updated: 2012-12-28
Created: 2012-08-24
Creator: logzero
Private: No

See the attached log file for details.

Discussion

  • logzero

    logzero - 2012-08-24

    log

     
  • Christian Ofenberg

    Thank you for this information and the log. Currently I only have access to systems with AMD & NVIDIA GPUs; I've never had an integrated Intel GPU.

    When looking at the log, "Mobile Intel(R) 4 Series Express Chipset Family" only supports OpenGL version 2.1.0, meaning GLSL version 120. In PixelLight, I stuck to GLSL versions 110 & 120 wherever possible to make it work on as many systems as possible. Only for the volume renderer plugins did I choose GLSL 330, because I wanted to finally be able to use somewhat more sophisticated shader features, and volume rendering on slow or outdated GPUs would be no fun anyway.

    The Sci-Fi demo uses the two render passes "PLCompositing::SRPDeferredHBAO" & "PLCompositing::SRPEndFXAA", which currently use GLSL version 130. Both are optional post-processing effects and can be deactivated by modifying the script (automatic fallbacks would of course be reasonable; see the defensive sketch after the snippets below).

    Within the "Interaction.lua"-script, change
    "
    local function ConfigureSceneRenderer()
    -- Get scene renderer tool
    local sceneRendererTool = cppApplication:GetSceneRendererTool()

    \-- By default the faster SPAAO is enabled, use higher quality but slower HBAO instead
    sceneRendererTool:SetPassAttribute\("DeferredSPAAO", "Flags", "Inactive"\)
    local sceneRendererPass = sceneRendererTool:GetPassByName\("DeferredHBAO"\)
    if sceneRendererPass ~= nil then
        sceneRendererPass:SetValues\("Flags=\"\" NumberOfSteps=\"16\" AORadius=\"0.05\""\)
    end
    // ...
    

    "
    into
    "
    local function ConfigureSceneRenderer()
    -- Get scene renderer tool
    local sceneRendererTool = cppApplication:GetSceneRendererTool()

    \-- Disable post processing steps using GLSL version 130
    sceneRendererTool:SetPassAttribute\("EndFXAA", "Flags", "Inactive"\)    -- Disable
    sceneRendererTool:SetPassAttribute\("End", "Flags", ""\)                -- Enable
    // ...
    

    "
    and there's a chance that the demo will run correctly.
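    As a defensive variant of the same change (a sketch that reuses only the script calls shown above; "DeferredHBAO", "EndFXAA" and "End" are the pass names from the demo's scene renderer):

    local function DisableGLSL130Passes()
        -- Get scene renderer tool (cppApplication is provided by the demo's script binding)
        local sceneRendererTool = cppApplication:GetSceneRendererTool()

        -- Deactivate every pass known to require GLSL version 130,
        -- skipping passes that do not exist in this scene renderer
        for _, passName in ipairs({"DeferredHBAO", "EndFXAA"}) do
            if sceneRendererTool:GetPassByName(passName) ~= nil then
                sceneRendererTool:SetPassAttribute(passName, "Flags", "Inactive")
            end
        end

        -- Make sure the plain end pass is active again so the result is still presented
        sceneRendererTool:SetPassAttribute("End", "Flags", "")
    end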

    We definitely need more active developers with different kinds of GPUs.

     
  • Stephan Wezel

    Stephan Wezel - 2012-08-25

    I have also tested it on my secondary laptop with a Mobility HD3650 under Linux, where the open-source driver with the latest Mesa Git supports GLSL 1.30. But it cannot compile the FXAA shader; the GLSL compiler fails with the following error:
    error: syntax error, unexpected EXTENSION, expecting $end

    The corresponding line in the GLSL source code is:
    #extension GL_ARB_texture_rectangle : enable

    But the GLSL compiler supports this extension. It seems that the "#extension" line must not come after the definition of a variable: after I changed the source code of the FXAA shader so that the #extension line came before the line

    "const vec2 InvTextureSize = vec2(0.000976562, 0.00130208);    // 1/(texture size) - set when creating the shader"

    the compiler error was gone.
    But then the app crashed in LLVM (which Mesa uses as the GLSL compiler backend). After I disabled that feature in Mesa, the shader could be compiled (with the above-mentioned change), but the rendering was dead slow *g* (~1 FPS right from the beginning).

    After a quick look at the GLSL specification (4.30, http://www.opengl.org/registry/doc/GLSLangSpec.4.30.6.pdf) I found the following sentence (page 20, section 3.3 Preprocessor):

    Any extended behavior must first be enabled. Directives to
    control the behavior of the compiler with respect to extensions are declared with the #extension directive

    To me this reads as: a "#extension" directive must come before any shader code.
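
    A minimal sketch of the ordering the specification demands (GLSL 1.30 here; the rectangle texture usage is only illustrative, not the demo's actual FXAA source):

    #version 130

    // All #extension directives must follow #version and precede
    // any declaration or statement in the shader
    #extension GL_ARB_texture_rectangle : enable

    // Only from here on may regular shader code appear
    const vec2 InvTextureSize = vec2(0.000976562, 0.00130208);    // 1/(texture size)

    uniform sampler2DRect ColorTexture;

    void main()
    {
        // texture2DRect is provided by the enabled extension
        gl_FragColor = texture2DRect(ColorTexture, gl_FragCoord.xy);
    }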

     
  • Christian Ofenberg

    Yes, in GLSL you first can (and really should, if you don't want to shoot yourself in the foot!) define the GLSL version the shader is written in; then you have to enable all the used extensions which are not part of the chosen GLSL version.

    Sadly, in practice this is heavily error-prone because there are multiple GLSL versions and multiple GLSL extensions. You have to stay highly focused to use only GLSL features which are part of the used GLSL version, and to enable the used GLSL extensions in the right place... after you have ensured from the C++ side that they exist in the first place. If they do not, you have to deal with that situation. A lot of room for errors. The fact that every shader compiler behaves slightly differently does not make it easier, because you don't always notice errors at once. Sadly, some shader compilers are far too tolerant and do not flag violations of the official GLSL specification. So, thank you for locating the issue! In case you find more issues, please let me know.
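    One illustration of the defensive style this forces on you (a sketch, not PixelLight's actual shader source): a GLSL compiler defines a preprocessor macro with the same name as every extension it supports, so a shader can guard extension usage and fall back when it is missing:

    #version 120

    // "enable" only warns instead of failing when the extension is unknown
    #extension GL_ARB_texture_rectangle : enable

    #ifdef GL_ARB_texture_rectangle
    uniform sampler2DRect ColorTexture;    // unnormalized texel coordinates
    #else
    uniform sampler2D ColorTexture;        // fallback: normalized coordinates
    uniform vec2 TextureSize;
    #endif

    void main()
    {
    #ifdef GL_ARB_texture_rectangle
        gl_FragColor = texture2DRect(ColorTexture, gl_FragCoord.xy);
    #else
        gl_FragColor = texture2D(ColorTexture, gl_FragCoord.xy / TextureSize);
    #endif
    }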

    I removed "#extension GL_ARB_texture_rectangle : enable" because it's not used. Must have been a holdover, an evil one.

    ... slow, yes, the fully featured generic deferred renderer eats up a lot of fill rate. For concrete products using PixelLight it might be a good idea to optimize it, e.g. by removing features not required for the individual project.

     
  • Stephan Wezel

    Stephan Wezel - 2012-08-25

    The main reason the open-source Radeon driver is so slow is that it currently isn't optimized; the main work right now is making the driver feature-complete with respect to the latest OpenGL spec.

     
