Thread: [Algorithms] Texture mip streaming prioritization
From: Josh G. <in...@gm...> - 2013-07-22 07:16:22
|
Hi There,

I'm currently working on a texture streaming system and looking into ways to prioritize which mips of which textures should be loaded. I was hoping that people on this list may have experience with these algorithms that they could share?

Currently I know which mips need to be loaded in order to render a scene without degradation in image quality. I would now like to apply a memory budget to textures, and make decisions about which mips of which textures should be loaded over others.

My current line of thinking involves comparing the cost / benefit of loading each individual mip, i.e. how many pixels would be affected by a particular mip level being loaded versus how much memory that mip level would use. This becomes:

    Priority = Benefit / Cost

I'd then sort for highest priority and assign memory budget to those mips at the top of the list.

Any thoughts? Comments? Alternatives that work better? Alternatives that have worked well enough?

The problems that arise with the above algorithm are quite broad; in particular, tuning the tipping point between cost and benefit would be necessary. Also, calculating the number of pixels affected would involve crude calculations that aren't really going to reflect real-world values...

Thanks all for your thoughts!

Josh
|
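Josh's Benefit/Cost ranking can be sketched as follows. This is only a sketch: the names, sizes, and the `pixels_affected` values are made up, and in practice the benefit term would come from the renderer's own estimate.

```python
# Rank candidate mip loads by Priority = Benefit / Cost, then greedily
# assign the memory budget from the top of the list.

def prioritize_mips(candidates, budget_bytes):
    # candidates: list of dicts with 'texture', 'mip',
    # 'pixels_affected' (benefit) and 'size_bytes' (cost).
    ranked = sorted(candidates,
                    key=lambda c: c['pixels_affected'] / c['size_bytes'],
                    reverse=True)
    chosen, used = [], 0
    for c in ranked:
        if used + c['size_bytes'] <= budget_bytes:
            chosen.append((c['texture'], c['mip']))
            used += c['size_bytes']
    return chosen

candidates = [
    {'texture': 'wall', 'mip': 1, 'pixels_affected': 90000, 'size_bytes': 4 << 20},
    {'texture': 'eye',  'mip': 0, 'pixels_affected': 500,   'size_bytes': 16 << 20},
    {'texture': 'road', 'mip': 2, 'pixels_affected': 40000, 'size_bytes': 1 << 20},
]
print(prioritize_mips(candidates, budget_bytes=8 << 20))
# → [('road', 2), ('wall', 1)]
```

Note the greedy fill is where the "tipping point" problem Josh mentions shows up: a single huge mip with a middling ratio can be skipped entirely while many small mips soak up the budget.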
From: Manuel M. <m.m...@wa...> - 2013-07-22 08:12:18
|
Hi Josh,

<snip>
> My current line of thinking involves comparing cost / benefit of loading
> each individual mip. i.e. How many pixels would be affected by a particular
> mip level being loaded versus how much memory would that mip level use?
<snip>

I think it might boil down to the question of whether it is possible to implement efficient histograms on the GPU (e.g. via http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.70.40), keyed on mip-map level and texture ID. Doing a low-res histogram pass should give you a very good indication of the number of affected pixels.

cheers,
Manuel
|
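Manuel's histogram idea can be prototyped on the CPU first. The sketch below assumes a low-resolution pass has written a (texture_id, desired_mip) pair per pixel; the real thing would be a GPU histogram, but the counting logic is the same, and all the values here are made up.

```python
from collections import Counter

# Per-pixel (texture_id, desired_mip) pairs from a hypothetical low-res
# visualization pass.
id_buffer = [
    ('wall', 2), ('wall', 2), ('wall', 3),
    ('road', 0), ('road', 0), ('road', 0),
    ('eye',  4),
]

histogram = Counter(id_buffer)

# Pixel counts per (texture, mip) give the "benefit" term directly.
for (tex, mip), pixels in histogram.most_common():
    print(tex, mip, pixels)
```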
From: Sebastian S. <seb...@gm...> - 2013-07-22 20:55:30
|
On Mon, Jul 22, 2013 at 12:16 AM, Josh Green <in...@gm...> wrote:
<snip>
> This becomes Priority = Benefit / Cost
>
> I'd then sort for highest priority and assign memory budget to those mips
> at the top of the list.
<snip>

I would bake in some kind of factor that relates to "quality improvement" as well. If you have a 4K eyeball texture and it's currently at MIP 2 (1Kx1K), it probably won't gain much from loading another MIP even if it's covering a lot of pixels. So you need to compute how far off the pixel currently is from the *ideal* MIP level, and improve pixels that are far off first (weighted by pixel count).

Assuming you only load one MIP level at a time (i.e. if MIP N is currently loaded, you'll only consider N-1), maybe something like:

    (desired_mip - current_mip) * num_pixels / memory_usage

...add tuning knobs as needed. That first factor relates to how far "off" a pixel currently is from its ideal MIP level, i.e. how much in need of improvement those pixels are.

--
Sebastian Sylvan
|
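Sebastian's weighting could be sketched like this. The function name is made up, and the mip-numbering convention is an assumption (mip 0 = full resolution, so the shortfall is `current - desired`; the sign flips relative to the formula as written if mips are numbered the other way):

```python
def mip_load_priority(desired_mip, current_mip, num_pixels, memory_usage):
    # shortfall: how far the pixels are from their ideal MIP level.
    # With mip 0 as full resolution, under-resolved means desired < current.
    shortfall = current_mip - desired_mip
    if shortfall <= 0:
        return 0.0  # already at (or beyond) the ideal level
    return shortfall * num_pixels / memory_usage
```

The eyeball example falls out directly: a texture already near its ideal level gets a near-zero priority no matter how many pixels it covers, while badly under-resolved textures bubble up.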
From: Krzysztof N. <k.n...@gm...> - 2013-07-22 21:26:34
|
Hi,

A good solution would be just to precompute some data. For example: the camera is at point X - what should I load? Alternatively: object X is Y meters from the camera - what should I load? Without precomputation it's hard to know what will be needed in the near future. The current frame usually doesn't contain enough information.

BTW, isn't loading single mip-maps too fine-grained, at least from a seek time perspective? Most popular solutions are based on two states: a ~64x64 thumb / the full texture.

--
Krzysztof Narkowicz

On Mon, Jul 22, 2013 at 9:16 AM, Josh Green <in...@gm...> wrote:
<snip>
> Josh
|
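Krzysztof's "object X is Y meters from camera" precomputation needs a mapping from distance to desired mip. Under the simplifying assumptions of a uniform texel density and no anisotropy, the desired mip tracks the projected texel size, which roughly doubles each time the distance doubles. The function and constants below are hypothetical:

```python
import math

def desired_mip(distance, texel_density, focal_px):
    # texel_density: texels per world unit on the surface (assumed uniform).
    # focal_px: projection scale in pixels. One more mip level can be
    # dropped each time the projected texel size doubles, i.e. roughly
    # each time the distance doubles.
    texels_per_pixel = distance * texel_density / focal_px
    return max(0, int(math.floor(math.log2(max(texels_per_pixel, 1.0)))))

# With these made-up constants, mip 0 is needed up close and coarser
# levels kick in as the distance doubles.
print(desired_mip(4, 256, 1024),
      desired_mip(16, 256, 1024),
      desired_mip(32, 256, 1024))
```

From this one can precompute, per object, the distance thresholds at which each mip becomes necessary and turn the per-frame decision into a table lookup, which is essentially the precomputation Krzysztof describes.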
From: Sebastian S. <seb...@gm...> - 2013-07-22 23:34:55
|
On Mon, Jul 22, 2013 at 2:26 PM, Krzysztof Narkowicz <k.n...@gm...> wrote:
> A good solution would be just to precompute some data. For example camera
> is at point X - what should I load.
<snip>

Another idea for utilizing precomputation: for each 'cell' in your precomputation structure, render a bunch of sample viewpoints at full resolution (all the shaders, fog, DOF, post-processing etc. turned on, with *all* MIPs loaded), then remove one MIP level at a time for each texture and measure the amount of additional error doing so causes in the sample frames. Now you have a per-cell list of (texture, mip) pairs ordered in terms of importance for overall scene error. Ideally, use some perceptual metric for your error function.
|
From: Josh G. <in...@gm...> - 2013-07-23 00:21:21
|
Thanks for your thoughts.

> Alternatively - object X is Y meters from camera - what should I load.

This is essentially what my current calculation gives me. It describes what I need to load in order to render the scene at its highest image quality. The problem, though, is then applying a memory constraint to that, which says: I can't have everything I need... so what will I choose to load?

I must admit, I am currently thinking down the line that Sebastian described above:

> Assuming you only load one MIP level at a time (i.e. if MIP N is currently
> loaded, you'll only consider N-1), maybe something like:
> (desired_mip - current_mip) * num_pixels / memory_usage, add tuning knobs
> as needed...

I have the concept of the "ideal" mip I would like loaded. And I also think dealing with one mip at a time (from a prioritisation point of view at least) makes sense. Though that doesn't necessarily mean only one mip is loaded at a time, just that one mip is considered for loading at a time.

I'm thinking that if I start with my "target set" equal to the "ideal set", then I execute the following:

    while (memoryUsage of target set > budget)
    {
        sort the target texture set according to the "Priority" of having its
        current target mip loaded, where
            priority = (idealMip - currentTargetMip) * numPixelsUsingTexture / memoryUsage
        select the texture with the lowest priority
        subtract 1 from its currentTargetMip
    }

This would remove all the mips that will have the least benefit to the scene. Then I would task the resource system with making sure the target set gets loaded.

Unfortunately the above algorithm looks like it'd be very serial, and involve a bunch of sorts... but I think it might achieve good results?

Have any of you seen these implementations executed in a product or game before? Was it successful? Tricky? Worth the time?

Thanks for the comments and suggestions so far,

Josh
|
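Josh's loop can be sketched without the per-iteration re-sort by keeping one live heap entry per texture. All names are illustrative, the quarter-per-level size rule is an assumption, and the damage metric is a slight variant of Josh's priority (normalised by the bytes each drop actually saves):

```python
import heapq

def mip_size_bytes(base_size, mip):
    # Assumption: mip 0 is full resolution and each level is a quarter
    # the size of the previous one.
    return base_size >> (2 * mip)

def fit_to_budget(textures, budget):
    # textures: name -> {'ideal_mip', 'num_pixels', 'base_size', 'max_mip'}.
    # Start from the ideal set, then repeatedly drop one level from the
    # texture whose loss hurts least, until the set fits the budget.
    target = {name: t['ideal_mip'] for name, t in textures.items()}

    def damage(name):
        # Cost of coarsening this texture by one more level: distance from
        # ideal after the drop, weighted by pixel count, per byte saved.
        t = textures[name]
        next_mip = target[name] + 1
        saved = (mip_size_bytes(t['base_size'], target[name])
                 - mip_size_bytes(t['base_size'], next_mip))
        return (next_mip - t['ideal_mip']) * t['num_pixels'] / saved

    def total():
        return sum(mip_size_bytes(textures[n]['base_size'], m)
                   for n, m in target.items())

    # Exactly one live heap entry per texture, so no decrease-key needed.
    heap = [(damage(n), n) for n in target
            if target[n] < textures[n]['max_mip']]
    heapq.heapify(heap)
    while total() > budget and heap:
        _, name = heapq.heappop(heap)
        target[name] += 1  # drop one level (coarser)
        if target[name] < textures[name]['max_mip']:
            heapq.heappush(heap, (damage(name), name))
    return target

textures = {
    'wall': {'ideal_mip': 0, 'num_pixels': 100000,
             'base_size': 16 << 20, 'max_mip': 5},
    'eye':  {'ideal_mip': 0, 'num_pixels': 400,
             'base_size': 16 << 20, 'max_mip': 5},
}
print(fit_to_budget(textures, budget=20 << 20))  # → {'wall': 0, 'eye': 1}
```

With a 20 MB budget the low-pixel-count eye texture loses a level first, which is the behaviour Josh is after.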
From: Jamie F. <ja...@qu...> - 2013-07-23 01:18:05
|
On 23/07/2013 01:21, Josh Green wrote:
<snip>
> Have any of you seen these implementations executed in a product or game
> before? Was it successful? Tricky? worth the time?

We've seen sorting costs for texture prioritization be surprisingly high, without doing anything that complex, particularly as you're often moving around, priorities change, and you haven't even finished loading the mip / texture you wanted to load in the first place (depending on hardware, etc.) by the time it's no longer the most important.

Our experience has been that it's not worth over-thinking it. Prioritize higher mips close to your points of interest (usually cameras), but don't keep too big a queue for low-priority things, unless you like doing lots of sorting. Once you've started loading something, finish loading it; yes, you might bin it soon, but you need some hysteresis in the system. Large mips are generally worth seeking and loading on their own; small mips may be worth generating from the smallest mip you think is worth loading.

For our tech, prioritization is configurable via plug-ins, so our users can get in and prioritize whatever they think is most important, but it's usually just whatever's near the camera (or is expected to be near the camera soon). You can do a bit better job if you know something about texture density from a pre-process, but it's not necessary, particularly if things are built with a reasonably consistent texture density.

So I'd build something simple, and see how well it works. Almost never a bad way to go about it :)

Jamie
|
From: Sebastian S. <seb...@gm...> - 2013-07-23 02:25:07
|
On Mon, Jul 22, 2013 at 5:21 PM, Josh Green <in...@gm...> wrote:
> Unfortunately the above algorithm looks like it'd be very serial, and
> involve a bunch of sorts... But I think it might achieve good results?

Seems reasonable. You should of course try to keep your texture requests in a priority queue instead of re-sorting in each iteration, and use "adjust key" / "decrease key" whenever you decide to drop a MIP level for a key. There are heap implementations where this operation is O(1) (e.g. Fibonacci heap), so it's at least possible to be efficient (in practice, depending on the number of textures, a straight binary heap a la STL may be faster due to being less complicated).

Seb

--
Sebastian Sylvan
|
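When the heap at hand has no decrease-key (Python's heapq, or std::priority_queue in C++), the standard workaround is lazy invalidation: push a fresh entry and mark the old one stale, as in the heapq documentation's priority-queue recipe. A sketch with made-up names:

```python
import heapq
import itertools

class MipRequestQueue:
    """Priority queue with an "adjust key" operation via lazy invalidation.

    heapq is a min-heap, so push negated priorities if higher-priority
    requests should pop first.
    """

    def __init__(self):
        self._heap = []
        self._entries = {}                  # texture -> live heap entry
        self._counter = itertools.count()   # tie-breaker for equal keys

    def push(self, texture, priority):
        # New request, or a key adjustment: mark any old entry stale and
        # push a fresh one instead of re-sorting the whole queue.
        if texture in self._entries:
            self._entries[texture][-1] = None
        entry = [priority, next(self._counter), texture]
        self._entries[texture] = entry
        heapq.heappush(self._heap, entry)

    def pop(self):
        # Discard stale entries on the way out.
        while self._heap:
            priority, _, texture = heapq.heappop(self._heap)
            if texture is not None:
                del self._entries[texture]
                return texture, priority
        raise KeyError('queue is empty')

q = MipRequestQueue()
q.push('wall', -10)
q.push('eye', -1)
q.push('wall', -20)  # adjust key: wall just got closer to the camera
print(q.pop())       # → ('wall', -20)
```

Each adjustment is O(log n) and pops skip the dead weight, so the total cost stays proportional to the number of adjustments rather than a full sort per frame.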
From: Sebastian S. <seb...@gm...> - 2013-07-23 02:35:06
|
On Mon, Jul 22, 2013 at 7:24 PM, Sebastian Sylvan <seb...@gm...> wrote:
<snip>
> Seems reasonable. You should of course try to keep your texture requests
> in a priority queue instead of re-sorting in each iteration...
<snip>

Also, it's worth considering your expectations here. If you intend to keep enough memory for textures that you'll "almost never" have to turn down a MIP-loading request, you don't really have to over-think this too much. Always loading the lowest-resolution MIP (taking object visibility into account) will do a decent job. It's sort of like having a global resolution clamp and just lowering it until stuff fits in memory, which won't be optimal, but tends to behave consistently and predictably.

--
Sebastian Sylvan
|
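The global resolution clamp Sebastian describes can be sketched as raising a single floor on the mip index until the visible set fits. The names, the quarter-per-level size rule, and the example sizes are all assumptions:

```python
def mip_size(base_size, mip):
    # Assumption: each mip level is a quarter the size of the previous one.
    return base_size >> (2 * mip)

def global_clamp(textures, budget, max_levels=12):
    # textures: list of (base_size_bytes, ideal_mip) for the visible set.
    # Raise a single floor on the mip index (i.e. lower a global resolution
    # cap) until everything fits. Not optimal, but consistent and predictable.
    for clamp in range(max_levels):
        total = sum(mip_size(size, max(ideal, clamp))
                    for size, ideal in textures)
        if total <= budget:
            return clamp
    return max_levels

# Three visible textures against an 8 MB budget: the cap settles at mip 2.
print(global_clamp([(16 << 20, 0), (16 << 20, 1), (4 << 20, 0)], 8 << 20))
```

One knob, one scan; no per-texture sorting at all, which is exactly the predictability trade-off being described.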