HDR image weirdness

Help
2016-09-24
2016-10-28
  • Stephen Parry

    Stephen Parry - 2016-09-24

    OK. So I want to do some HDR-lit outdoor photo-realistic rendering. I downloaded a good free sample 360 x 180 degree panoramic landscape HDRI from chocofur. Both the high and low res versions work pretty well in 'another open source modelling package not as good as AOI'.
    When I load them into AOI as environment textures however, I get a number of problems.
    The high res version (12000 x 6000) just blows AOI's mind - e.g. it renders with half the sky appearing in front of the scene instead of behind it.
    The lower res version (8000 x 4000) kind of works, but scaling the image either manually or to fit gives ghosting of the background image (like a double exposure) on preview and render. In some views AOI seems to think the image's aspect ratio is 1:1, not 1:2, but stretching to fit overstretches the image height. And I can't map the image with anything other than projection: cylinder always maps to at most half a cylinder, and sphere at best gives you a tiny fraction of a sphere, whatever settings you choose. I believe that if cylinder and sphere worked, they should give a less distorted result for an image like this. Any thoughts please?
    Am I just pushing AOI too hard with an image of that res? Do the calcs need an overhaul to handle resolutions that high? Or am I missing something?
    I have attached the preview version of the HDRI below and a screenshot mid-render, showing the ghosting on the clouds. In the camera view the ghosting is much worse - like seeing double.

     
    Last edit: Stephen Parry 2016-09-24
  • Luke S

    Luke S - 2016-09-24

    Need a bit more info. Do you experience these issues with either a smaller image or a high-res standard dynamic range image?

    It'd also be handy to look at your scene. If you can't/don't want to upload it with the HDRI, just replace that with a reasonable-sized SDR image.

    Please check and make a note of all the settings in the image texture that you are using for the environment.

     
  • Peter Eastman

    Peter Eastman - 2016-09-24

    8000 by 4000 is 32 million pixels. It stores 4 bytes per pixel, so that's 128 MB. Mipmaps increase that by a third, but it's still under 200 MB. That's a lot of memory for one image, but not that much. (Although I think the installers may set a fairly low default heap limit, like 1 GB. We should look into updating that.)
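
    In case anyone wants to sanity-check those figures, here's the arithmetic as a tiny Java snippet (just the numbers above, not AoI's actual allocation code):

        // Rough footprint of an 8000x4000 image at 4 bytes/pixel, plus a
        // third for mipmaps. Illustrative arithmetic only, not AoI's code.
        public class HdrMemoryCheck {
            public static void main(String[] args) {
                long pixels = 8000L * 4000L;          // 32,000,000 pixels
                long baseBytes = pixels * 4L;         // 128,000,000 bytes (~128 MB)
                long withMipmaps = baseBytes * 4 / 3; // ~171 MB, still under 200 MB
                System.out.println(withMipmaps / 1_000_000 + " MB");
            }
        }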

    Anyway, your render doesn't really look like memory problems. Mostly it just looks like there's a lot of noise from stochastic sampling of the environment map. Two settings in the render window that will affect that a lot are "rays to sample environment" and "extra smoothing for global illumination". What are those set to?

     
  • Stephen Parry

    Stephen Parry - 2016-09-24

    What is up with sf? It's taken four attempts to get this posted.
    P.S. Thanks for the help guys. You responded so quickly!

     
  • Luke S

    Luke S - 2016-09-25

    @Stephen, was the HDR you posted exactly what you had loaded into the scene? I was able to load a full-resolution SDR PNG created from it, and successfully map it as environment.

    I was also able to load the HDR itself, and map that.

    I was able to see the "ghosting" issue with the texture already in your file, but was not able to duplicate it when I mapped in my copies of the image. I was also able to verify that the texture you posted cannot be mapped properly... One Seriously Messed up Texture!

    At rest, my AOI was running at about 550MB used, with all three images loaded into memory.

    @Peter I did notice that there is some funkiness when you change environment mapping type from Cylindrical to Spherical and back - the fields don't update properly. The workaround is to just save the mapping and go back into it.

     
  • Pete

    Pete - 2016-09-25

    @Stephen: Check the Tile boxes in the texture editor (see the attached .jpg). That takes care of only part (or none) of the image showing up, and now you can use spherical mapping with default settings. Unfortunately this seems to be the only way to get it right with ImageMapped textures. On a procedural 2D texture you could perform a coordinate transformation, which (as I recall) would be somehow proportional to the w and h of the picture (I mean somewhere in the range of [-1.0 to 1.0]) - something like the sketch below.
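
    Written from memory, so treat this as a sketch of the idea rather than AoI's actual maths:

        // From-memory sketch: recentre image coordinates from [0,1] to a
        // [-1,1]-ish range around the image centre, scaled by the picture's
        // aspect ratio (w/h). Not AoI's actual code.
        static double[] recentre(double u, double v, double w, double h) {
            double u2 = (u - 0.5) * 2.0 * (w / h); // e.g. [-2,2] for a 2:1 image
            double v2 = (v - 0.5) * 2.0;
            return new double[] { u2, v2 };
        }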

    Sorry to say but I have never felt comfortable with the placement of the origin of images in textures. The expectation would be that for any mapping the image should be placed center-to-center (image center to object center) and in the cases of cylindrical or spherical mappings also be wrapped seamlessly by default, without tiling or any transformations.

    And you and I are not the only people facing this matter.

    Changing this would of course affect old scenes -- but still I'd say the behaviour would be worth changing at some point. -- Maybe in such a way that old files would be converted correctly into the new system, without the user having to worry about it?

     
    Last edit: Pete 2016-09-25
  • Luke S

    Luke S - 2016-09-25

    @Pete, good detective work. I missed that last night. The tile settings are on by default, and it never occurred to me to check them.

    We may have to take a look at how image mapping is handled. I believe that coordinates in image space go from [0,0] to [1,1], starting at one of the corners.
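
    For reference, the usual convention for addressing an equirectangular panorama is something like this (my illustration of the convention, not AoI's actual mapping code - the axis orientation and corner origin may differ):

        // Convert a unit direction vector to equirectangular UV in [0,1]x[0,1].
        public class EquirectUV {
            static double[] directionToUV(double x, double y, double z) {
                double u = 0.5 + Math.atan2(z, x) / (2 * Math.PI); // longitude
                double v = 0.5 - Math.asin(y) / Math.PI;           // latitude
                return new double[] { u, v };
            }
            public static void main(String[] args) {
                double[] uv = directionToUV(1, 0, 0); // looking along +x
                System.out.printf("u=%.3f v=%.3f%n", uv[0], uv[1]); // 0.500 0.500
            }
        }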

    I also just double-checked, and this little tidbit is not documented in the manual. At the very least, we should document this better.

    @Stephen, for a cylindrical mapping to match your expectation, you'll also need to set the height parameter to 2, to balance out the aspect ratio of the image. For this particular image, though, spherical is the correct mapping to use. You can rotate it around the y-axis if the default does not have the image pointing in the correct direction.

     
  • Stephen Parry

    Stephen Parry - 2016-09-25

    @Pete: I think what you described regarding changing the mapping behaviour makes sense. Even if it means manually remapping older scenes, it would be worth it to make the overall behaviour more sensible.
    @Luke S: thanks for digging into this. If I follow you correctly - if I load a fresh copy of the HDR file into a new texture, possibly in a new scene file, that worked OK for you? I'll try that. I wonder if it's a hangover from the earlier experiments with the 12000 x 6000 version - maybe I have some garbage left in the scene. I'll try reloading.
    Just a note too - I mostly run by loading the jar with -Xmx5128m - it avoids hitting memory limits too often.

     
  • Luke S

    Luke S - 2016-09-25

    @Stephen: Not even that. Go into your original texture to edit it (the texture itself, not just the mapping) and check the 'Tile X' and 'Tile Y' boxes. You will then be able to map it properly.

     
  • Stephen Parry

    Stephen Parry - 2016-10-08

    OK, more HDRI illumination headaches, and another issue.
    I have moved on from test scenes to my 'live' one. The lighting is proving a nightmare. My HDRI has a nice bright sun (4.5 > V > 3.5) and I've created a procedural texture to scale the background and/or the sun independently. But no matter what I try, I either get a twilight scene (Monte Carlo) or my materials glow in the dark if they have V > 0.5 (photon mapping). Any thoughts here? I have linked my .aoi. I realise the manual suggests using an exposure filter, but from my limited experience of photography, I was always taught that if you need to exposure-correct your images, you did something wrong when shooting.

    My other issue is that when raster rendering the same model, some of the surfaces that should be flat are showing edges between tris. If you look carefully in the corner wells of the chassis, you will see a lighter triangular area. This should be flat.

    Just to explain: the .aoi has a light in it so that I can still raster render; I turn it off (intensity 0) for my raytraces.

    EDIT: Still confused on the flat/non-flat issue as to where the error is creeping in, but somewhere between my Blender model, the OBJ import/export file and the copy-paste into my master .aoi file, just enough of an otherwise invisible discrepancy creeps in that only some raster renders show it. I have eliminated it by writing a 'make planar' script, derived from the mesh mirror script, which I will have to share somehow. It seems to do the trick, although amongst the blizzard of vertices it's painful to apply. The core of it is roughly the sketch below.
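
    A simplified sketch of the idea, not the actual script (which also has to deal with the mesh API, selection and undo):

        // Flatten vertices onto the plane through their centroid with a given
        // unit normal. Sketch of the 'make planar' idea only.
        public class MakePlanarSketch {
            static void makePlanar(double[][] verts, double[] n) {
                double[] c = new double[3]; // centroid of the selection
                for (double[] v : verts)
                    for (int i = 0; i < 3; i++) c[i] += v[i] / verts.length;
                for (double[] v : verts) {
                    double d = 0; // signed distance from the plane
                    for (int i = 0; i < 3; i++) d += (v[i] - c[i]) * n[i];
                    for (int i = 0; i < 3; i++) v[i] -= d * n[i]; // project onto plane
                }
            }
        }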

    Worryingly, I can confirm that increasing accuracy to <0.02 on an interpolating mesh can lead to very bizarre results.

     
    Last edit: Stephen Parry 2016-10-09
  • Luke S

    Luke S - 2016-10-09

    Spent a bit of time playing with this one.

    First: while exposure correction of photographs probably means you made a mistake somewhere, that is really only significant because you are running into the physical limits of your sensing medium. A rendered image is just numbers. As long as the image is rendered into a range that lets you differentiate well between the different tones, the rest is just range mapping. That should be fairly straightforward.

    • The Exposure filter just adjusts gamma on the LDR output - the HDR internal representation that AOI generates is clipped to [0...1] before this filter is applied. There is a plugin, HDR Filters, that has a better exposure setting for such images. (The sketch after this list shows the basic idea of range mapping.)
    • My personal preference would probably be to save the result as HDRI and do the mapping to displayable/printable images in dedicated software. I've been playing with Luminance, but This Page for one has a couple of alternatives listed.
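
    To make "just range mapping" concrete, here's a minimal sketch of the kind of mapping I mean - a linear exposure scale followed by display gamma. My own illustration, not the internals of either filter:

        // Map an HDR value to a displayable [0,1] value: linear exposure
        // scale, clip, then display gamma. Illustrative only.
        public class RangeMapSketch {
            static double toDisplay(double hdr, double exposure, double gamma) {
                double scaled = Math.min(1.0, hdr * exposure); // linear part
                return Math.pow(scaled, 1.0 / gamma);          // e.g. gamma = 2.2
            }
            public static void main(String[] args) {
                System.out.println(toDisplay(4.0, 0.2, 2.2)); // a bright sky texel
            }
        }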

    On to the model and rendering settings:

    That's quite the model you have there. Looks almost as if it's a technical drawing.

    Lots of things to consider, but the main thing is that your extra smoothing for environment textures is far too high. It has muddled all of the color from the map into a single shade, and you might as well simply run an ambient light.

    Your discontinuous procedural scaling seems to be adding a lot of noise to the image.
    Samples below: the first rendered with the procedural environment and smoothing at 2000, the second with the plain environment and smoothing at 1000. I also turned down AA and MC samples for shorter rendering times (for the purposes of the experiment). This does cause some extra noise, but you can see that the light has a more directional, higher-contrast quality.

    More later, after work.

     
  • Stephen Parry

    Stephen Parry - 2016-10-10

    Thanks hugely for that, Luke. Yours look vastly better than mine already, even with the noise. So many parameters to fight, but that gives me some direction. Do you think a more continuous scaling on the HDRI might produce good results? I might give that a try.
    I wish we could get rid of the pesky interpolation distortions. I must ask Peter how AOI compares with Catmull-Clark, OpenSubdiv et al., and whether there is any maths/code out there we could 'borrow'. One of AOI's serious selling points is its way of representing the geometry in a 'precisely inaccurate'(!) or 'as-precisely-as-you-want-it' way.

    The model is a Dagu Rover 5 robot chassis, and in a way the model is a technical drawing. Firstly because it is derived, very indirectly, from an early version of the actual 3D model used to manufacture it. The designer, Russell Cameron (sound guy and robotic genius), kindly gave me an export of the model via Sketchup. I added the missing parts, materials etc. and corrected the funnies Sketchup kindly introduced (it randomly loses parts of the mesh at details of less than one mm). Sketchup is truly awful. Avoid.

    The second reason the model is like a technical drawing is one of its intended purposes. I am hoping to use it to produce a 2D scalable vector image for use in Fritzing, the electronics package, hence the top-down ortho view. Once the raytrace render is usable, I am hoping to superimpose the projected 2D mesh on the bitmap render and, by sampling key points, set the gradient on the polygons, including from the shadows, to give it a photo-realistic-ish look. If I can achieve that using code, that would be a seriously useful extension to the vector renderer, I think. Photo-realistic meets scalable, if you will.

    I also intend to use the model in a simulator for my students, but mostly it allows me to scratch the serious PBR photo-realistic itch I seem to have acquired!

    Speaking of my students, I am running extra-curricular 3D modelling classes for them. AOI is the star of the show currently.

     
  • Peter Eastman

    Peter Eastman - 2016-10-10

    I must ask Peter how AOI compares with Catmull-Clark

    Catmull-Clark surfaces are also approximating surfaces, but defined by quads instead of triangles. AoI uses Loop subdivision for approximating meshes, and Butterfly subdivision for interpolating ones. The rule for placing new edge points in the regular Butterfly case is sketched below.
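
    For the curious, this is the textbook stencil with the standard weights (w = 1/16), not a paste from AoI's source:

        // New point on edge (a,b): c,d are the apexes of the two adjacent
        // triangles, e,f,g,h the four "wing" vertices of the stencil.
        public class ButterflySketch {
            static double[] edgePoint(double[] a, double[] b, double[] c, double[] d,
                                      double[] e, double[] f, double[] g, double[] h) {
                double[] p = new double[3];
                for (int i = 0; i < 3; i++)
                    p[i] = 0.5 * (a[i] + b[i]) + 0.125 * (c[i] + d[i])
                         - 0.0625 * (e[i] + f[i] + g[i] + h[i]);
                return p;
            }
        }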

     
  • Stephen Parry

    Stephen Parry - 2016-10-10

    Thanks @Peter. I have been doing some reading and had just found that out. I am looking at 'enhanced butterflies' just now to try and see what kinds of surface behave well or badly. Is the point of the AOI optimize mesh function to reorganise a mesh towards a valence of 6?

     
  • Peter Eastman

    Peter Eastman - 2016-10-10

    Partly, yes. Often that's impossible, of course, but it tries to make the valence of each vertex as close as possible to 6 (or 4 on the boundary). It also tries to eliminate very narrow triangles by making all angles as equal as possible.

     
  • Luke S

    Luke S - 2016-10-12

    Sorry for the delay. Life got crazy.

    If you choose to scale the environment at all, I'd suggest staying with a strictly linear scaling. The proportionate strengths between the brightest spots, such as the sun, and the darker areas are an important part of the data, and the whole reason for using such an environment texture in the first place. (See the sketch below for the difference.)
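
    A trivial numeric illustration (made-up texel values):

        // Linear vs nonlinear scaling of two HDR texels (sun vs sky).
        // Linear scaling keeps the sun/sky ratio; a power curve changes it.
        public class ScaleSketch {
            public static void main(String[] args) {
                double sun = 4.0, sky = 0.5, k = 2.0;
                System.out.println((sun * k) / (sky * k));                   // still 8.0
                System.out.println(Math.pow(sun, 1.5) / Math.pow(sky, 1.5)); // ~22.6
            }
        }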

    If you enhance the hot spots, you'll get a harsher lighting contrast. Taken far enough, the sky will become insignificant in its contribution to the overall lighting. Think of the sort of lighting that you see in photos taken on the lunar surface, or a rendered scene with no ambient light, and only one strong directional light.

    I think that there might be some improvements possible in textures, etc. as well. That will take more time to go over. Let me know if you are interested.

    Glad to hear that you're finding AOI useful for your class, as well. What do the kids think of it?

    If you're interested in comparing a Catmull-Clark mesh to an interpolated or approximated trimesh, you can check out the Polymesh plugin. It allows for arbitrary n-gon control meshes, and its approximated smoothing mode runs Catmull-Clark.

     
  • Stephen Parry

    Stephen Parry - 2016-10-12

    Thanks Luke. The textures were very quickly thrown together - I will probably come back to you for some help on that.

    The kids seem to like it - they find some areas a little frustrating, but that's mostly down to the hourglass tutorial being slightly inconsistent between platforms and versus the current version. The glass part needs attention to point position and getting the snapping right - they all seem to have struggled with that. They are 16-year-old comp sci students, so they have the intelligence but the patience is sometimes lacking! I haven't had a chance yet, but I think I am going to review all the standard tutorials and vids and see if any need corrections.

    I'll have a look at the Polymesh plugin, but I think getting trimesh working solidly is really important - Catmull-Clark is not perfect even for quads and sucks for trimesh. I suspect P.E.'s choice of algorithm for trimesh is sound; it just needs some tweaking for edge cases and high valences. My model is full of high-valence nodes and thin triangles that are a **** to eliminate, and they seem to cause the biggest issues.

     
  • Stephen Parry

    Stephen Parry - 2016-10-13

    @Luke - I realized we'd both made the same mistake - we forgot to turn the extra raster-rendering light off. I've turned that off now and, as you suggested, I've also turned the smoothing down to 1000. If you try the model now (the link should still work) you can see it comes out way darker. Could you give it another quick look please? Ta.

     
  • Luke S

    Luke S - 2016-10-13

    Yes, it does come out darker. If you want a straight-from-AOI render, you might need to put some scaling on the texture. Again, if you want it to be lit like a realistic scene, keep it a purely linear scaling factor. There's still enough dynamic range that you should be able to post-process an HDRI, though.

    A couple of things are going to skew your results:

    • You've got a rather sterile scene overall. No surfaces, etc. for secondary/tertiary light bounce. Even with an environment map, this will give you a fairly harsh lighting condition - much like you would see in a very open parking lot, or in the middle of a flat desert or largish body of water.
    • Your textures may not be interacting with light the way you expect. In particular:
      • You are getting hot spots in the render from all of your screws. They are reflecting around 3-5 times as much light as the brighter parts of the plastic materials, and are way overexposed.
      • AOI uses terminology in its texture model that is a bit different from what some other graphics systems use. Specularity refers to the strength of mirror-type reflections, such as off glass or polished metal, rather than specular highlights. Shininess refers to a white-light reflection off plasticky surfaces. Some of the plastic materials seem to have this backwards.
     
  • Stephen Parry

    Stephen Parry - 2016-10-17

    @Luke,
    I have been comparing Monte Carlo and photon mapping to see which gives me better results, but PM seems to be completely broken for anything other than direct lighting - I cannot get objects lit either by environmental lighting or emissive textures to cast any shadows, except on themselves - see the attached .aoi. Is this a known issue?

     
  • Luke S

    Luke S - 2016-10-17

    I've been able to see the effect you are referring to.

    Your "Photons to sample environment" is too high in relation to the density of photons in the map. When sampling the map for GI, the algorithm selects the closest "NumToSample" photons, without trying to analyze the geometry. So, when the map is too sparse, you pick up a lot of samples from the lighted side of your shading object.
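
    To make that concrete, the density estimate behind photon-map sampling looks roughly like this (an illustrative sketch with made-up names, not AoI's actual photon map code):

        import java.util.Arrays;
        import java.util.Comparator;

        // Gather the k nearest photons around x and divide their summed flux
        // by the disc area that just encloses them: E ~ (sum of power)/(pi*r^2).
        // With a sparse map, that disc grows wide enough to reach around thin
        // occluders - which is exactly the missing-shadow effect above.
        public class PhotonGatherSketch {
            static class Photon {
                double[] pos, power; // world position, RGB flux
                Photon(double[] pos, double[] power) { this.pos = pos; this.power = power; }
            }

            static double[] estimateIrradiance(Photon[] map, double[] x, int k) {
                Photon[] sorted = map.clone();
                Arrays.sort(sorted, Comparator.comparingDouble(p -> dist2(p.pos, x)));
                int n = Math.min(k, sorted.length);
                double r2 = dist2(sorted[n - 1].pos, x); // squared radius of the disc
                double[] e = new double[3];
                for (int i = 0; i < n; i++)
                    for (int c = 0; c < 3; c++) e[c] += sorted[i].power[c];
                for (int c = 0; c < 3; c++) e[c] /= Math.PI * r2;
                return e;
            }

            static double dist2(double[] a, double[] b) {
                double s = 0;
                for (int i = 0; i < 3; i++) { double d = a[i] - b[i]; s += d * d; }
                return s;
            }
        }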

    Reducing samples does increase noise in the image, so you need to increase the total photons. I'm attaching renders of your scene, all done with 50,000,000 (5e7) total photons. The numbers of sample photons are 250, 500, and 1,250 respectively.

    This shows me that the photon map is rather noisy, which is something that might be improvable... I'll have to do some research.

     
  • Stephen Parry

    Stephen Parry - 2016-10-22

    Thanks Luke, I was able to reproduce your results here.
    Now this is really starting to drive me nuts. I have tried to apply the photon experience from above to the cylinders environmental-lighting scene. I have ramped the total photons up to 100M, which is the max I can reasonably do in the 6GB my two rigs have. I have squashed the sample photons down to 20(!). I have tried various smoothing levels and some higher sample levels. No matter what I do, I cannot get more than a hint of shadow in the scene. The sun is from the left in the scene. The block and the cylinder sometimes show a ghost of a hint of shadow. The narrow pole shows nothing - nada, nicht, nein, nyet, bleep all.
    https://owncloud.mainscreen.com/index.php/s/am1lYtCTjfvltZz
    Any ideas, or have I just hit the limit with photon mapping?
    left to right:
    sample:20, env smoothing:1
    sample:20, env smoothing:2000
    sample:2000, env smoothing:1

     
    Last edit: Stephen Parry 2016-10-22
  • Luke S

    Luke S - 2016-10-23

    You've got a couple of things going on here, one of which is stretching the limits of PM as it's currently implemented...

    I duplicated your 100M photon test, and found that this does get to be a choke point. That's added another thing to my "Things to do" backlog, but out of scope for this discussion.

    Back to the scene:

    The shadow effect is there; it's simply getting buried under the noise. It's subtle to begin with, because your environment texture is lighting brightly from pretty much any angle. This is normal for the sort of day portrayed in that texture.

    20 samples is way too low, and you do want some environment smoothing. 2000 samples may even be too low for a 100M photon map of such a scene. In general, you will want the samples to change roughly in proportion to the total photons, with some fine tuning to minimize noise and light-bleed.

    This particular scene seems to benefit from "Final Gather". This means that it uses Monte Carlo sampling for the first bounce. The manual says this is the slowest method (given identical settings), but that is partly compensated by the fact that you can get fairly good results with a much smaller map, which is faster to build, and a smaller number of sample rays, so you don't have to spend as much time tracing them.

    I did some renders with final gather for you, with 20M photons and 8 MC sample rays. They had Med AA, 4/8, turned on.

    The first has 200 sample photons, which is a bit noisy. The second has 2000, which is starting to cause some color bleed. (See the bottom edge of the cube! This means that 2000 is a bit too many for that map, but may still not be enough for a larger one.)

    I realized that the cylinder, with its reflective surface, is exactly the thing to cause caustics, so I did a third run. I started with the same settings as the second, but added a 2M Caustics map, sampled at 1000. See the difference?

     