From: Gareth H. <ga...@va...> - 2001-02-04 20:14:08
After looking into the current single-copy texture mechanism, I'd like
to propose a couple of changes. This will involve completely removing
the old interface, as well as providing a much more driver-friendly
approach to texture image storage. Many of these ideas are similar to
features found in the SI, at least from the top level. The
implementation of these ideas would be quite different in our case,
however.

First, we add a struct gl_texture_format as follows:

   struct gl_texture_format {
      GLint internalFormat;   /* Mesa format */

      GLint redSize;
      GLint greenSize;
      GLint blueSize;
      GLint alphaSize;
      GLint luminanceSize;
      GLint intensitySize;
      GLint indexSize;

      GLint bytesPerTexel;

      /* Functions to return individual texels.
       */
      void (*extractTexel)(...);
      ...
   };

We define a set of these structures:

   const struct gl_texture_format gl_tex_format_argb8888 = {
      MESA_FORMAT_ARGB8888,
      8, 8, 8, 8, 0, 0, 0,
      4,
      ...
   };

   const struct gl_texture_format gl_tex_format_rgb565 = {
      MESA_FORMAT_RGB565,
      5, 6, 5, 0, 0, 0, 0,
      2,
      ...
   };

and so on.

We add a pointer to such a structure to struct gl_texture_image, and add
a driver callback to allow the driver to select the best format. Thus,
_mesa_TexImage*() has something like the following:

   struct gl_texture_format *texFormat;

   texFormat = (*ctx->Driver.ChooseTextureFormat)( ... );

   texImage->texFormat = texFormat;

   texImage->RedBits       = texFormat->redSize;
   texImage->GreenBits     = texFormat->greenSize;
   texImage->BlueBits      = texFormat->blueSize;
   texImage->AlphaBits     = texFormat->alphaSize;
   texImage->IntensityBits = texFormat->intensitySize;
   texImage->LuminanceBits = texFormat->luminanceSize;
   texImage->IndexBits     = texFormat->indexSize;

The Driver.TexImage* callbacks are now no longer allowed to fail.
_mesa_TexImage* hands control over to the driver, which does any
driver-specific work (such as the tdfx driver making sure the aspect
ratio is okay), before calling the _mesa_convert_teximage helper
function. I still have to work out the nicest way to handle this, as
_mesa_convert_teximage or its equivalent has changed quite a bit. It
may be that the core Mesa code actually does the image conversion with
the information returned from the driver callback.

Basically, _mesa_convert_teximage has a couple of layers of conversion
function tables, depending on the input and output formats etc. This
allows MMX-optimized conversion routines to be used in a variety of
situations. Things like the current pixelstore attributes, whether the
image needs rescaling, and whether the input and output types are
supported are used to determine if a fast conversion can be done. If a
fast conversion can be done, the function tables are used to select the
correct function and we're done. If not, _mesa_convert_teximage will
fall back to doing a generic image conversion.

The critical difference here is that the driver selects the internal
format to be used (like MESA_FORMAT_ARGB4444, MESA_FORMAT_AL88 etc), and
the conversion function is guaranteed to return a tightly-packed image
with that type. The conversion function is smart enough to pack RGB
input data into RGBA textures, or alpha/luminance-only data into an
AL88 texture and so on. We do not need extra internal types to handle
this.

The software rasterizer uses the texFormat->extractTexel()-like
functions to extract individual texels or blocks of texels. The SI has
a number of texel extraction functions per format, to support textures
with and without borders and so on. The correct one would be plugged
into texImage->extract(). This means that the image can be converted
once only, stored in a hardware-friendly format, and still accessed by
the software rasterizer. I think this is nicer than having to unconvert
the entire image when there's a software fallback.

Brian, I'm not sure if you're going to get to this, and I know you said
you were going to look into a texel extraction interface as well. I
wanted to let you know where I'm going with this to avoid duplicate
work and let you comment on my thoughts.

-- Gareth
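A minimal sketch of what one of these per-format extraction routines
might look like, for the RGB565 format described above. The exact
signature, the row-major layout, and the 8-bit output channels are all
assumptions for illustration, not the actual Mesa interface:

    /* Illustrative sketch: extract texel (i,j) from a tightly-packed
     * RGB565 image, expanding each component to an 8-bit channel.
     */
    static void extract_texel_rgb565( const struct gl_texture_image *texImage,
                                      GLint i, GLint j, GLubyte *texel )
    {
       const GLushort *src = (const GLushort *) texImage->Data;
       const GLushort s = src[j * texImage->Width + i];

       texel[0] = ((s >> 11) & 0x1f) * 255 / 31;   /* red   */
       texel[1] = ((s >>  5) & 0x3f) * 255 / 63;   /* green */
       texel[2] = ( s        & 0x1f) * 255 / 31;   /* blue  */
       texel[3] = 255;                             /* alpha */
    }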
From: Brian P. <br...@va...> - 2001-02-05 21:49:04
Gareth Hughes wrote:
>
> After looking into the current single-copy texture mechanism, I'd like
> to propose a couple of changes. This will involve completely removing
> the old interface, as well as providing a much more driver-friendly
> approach to texture image storage. Many of these ideas are similar to
> features found in the SI, at least from the top level. The
> implementation of these ideas would be quite different in our case,
> however.
>
> First, we add a struct gl_texture_format as follows:
>
>    struct gl_texture_format {
>       GLint internalFormat;   /* Mesa format */
>
>       GLint redSize;
>       GLint greenSize;
>       GLint blueSize;
>       GLint alphaSize;
>       GLint luminanceSize;
>       GLint intensitySize;
>       GLint indexSize;
>
>       GLint bytesPerTexel;
>
>       /* Functions to return individual texels.
>        */
>       void (*extractTexel)(...);
>       ...
>    };
>
> We define a set of these structures:
>
>    const struct gl_texture_format gl_tex_format_argb8888 = {
>       MESA_FORMAT_ARGB8888,
>       8, 8, 8, 8, 0, 0, 0,
>       4,
>       ...
>    };
>
>    const struct gl_texture_format gl_tex_format_rgb565 = {
>       MESA_FORMAT_RGB565,
>       5, 6, 5, 0, 0, 0, 0,
>       2,
>       ...
>    };

Do these structs initialize the extractTexel function pointer?
I don't think they can. Or the driver would have to override
the defaults.

> and so on.
>
> We add a pointer to such a structure to struct gl_texture_image, and
> add a driver callback to allow the driver to select the best format.
> Thus, _mesa_TexImage*() has something like the following:
>
>    struct gl_texture_format *texFormat;
>
>    texFormat = (*ctx->Driver.ChooseTextureFormat)( ... );
>
>    texImage->texFormat = texFormat;
>
>    texImage->RedBits       = texFormat->redSize;
>    texImage->GreenBits     = texFormat->greenSize;
>    texImage->BlueBits      = texFormat->blueSize;
>    texImage->AlphaBits     = texFormat->alphaSize;
>    texImage->IntensityBits = texFormat->intensitySize;
>    texImage->LuminanceBits = texFormat->luminanceSize;
>    texImage->IndexBits     = texFormat->indexSize;

It seems redundant to store the bits/channel info in both structs, but
that's a minor issue.

> The Driver.TexImage* callbacks are now no longer allowed to fail.

Because it'll be up to core Mesa to unpack and store the texture image
under all circumstances?

I'm a bit worried about expecting core Mesa to do all this. Consider
a card that uses some sort of tiled texture memory layout. I think it
should be up to the driver to handle image conversion for something
like this and not burden core Mesa with strange conversion routines.

The idea behind the texutil.c functions was to simply provide some
helper routines for drivers. Drivers can take them or leave them.

> _mesa_TexImage* hands control over to the driver, which does any
> driver-specific work (such as the tdfx driver making sure the aspect
> ratio is okay), before calling the _mesa_convert_teximage helper
> function. I still have to work out the nicest way to handle this, as
> _mesa_convert_teximage or its equivalent has changed quite a bit. It
> may be that the core Mesa code actually does the image conversion with
> the information returned from the driver callback.

Core Mesa never directly calls the texutil.c functions at this time.

> Basically, _mesa_convert_teximage has a couple of layers of conversion
> function tables, depending on the input and output formats etc. This
> allows MMX-optimized conversion routines to be used in a variety of
> situations. Things like the current pixelstore attributes, whether the
> image needs rescaling, and whether the input and output types are
> supported are used to determine if a fast conversion can be done. If a
> fast conversion can be done, the function tables are used to select
> the correct function and we're done. If not, _mesa_convert_teximage
> will fall back to doing a generic image conversion.

OK.

> The critical difference here is that the driver selects the internal
> format to be used (like MESA_FORMAT_ARGB4444, MESA_FORMAT_AL88 etc),
> and the conversion function is guaranteed to return a tightly-packed
> image with that type. The conversion function is smart enough to pack
> RGB input data into RGBA textures, or alpha/luminance-only data into
> an AL88 texture and so on. We do not need extra internal types to
> handle this.

Are you saying something new here? This is the way things currently work.

> The software rasterizer uses the texFormat->extractTexel()-like
> functions to extract individual texels or blocks of texels. The SI has
> a number of texel extraction functions per format, to support textures
> with and without borders and so on. The correct one would be plugged
> into texImage->extract(). This means that the image can be converted
> once only, stored in a hardware-friendly format, and still accessed by
> the software rasterizer. I think this is nicer than having to
> unconvert the entire image when there's a software fallback.

I like this too. The trick is not penalizing the performance of software
Mesa (like the Xlib driver). I think that the optimized textured triangle
functions (for example) could detect when the texture image is in a
simple GLubyte, GL_RGB format and directly address texels. It would be
the unoptimized cases, like mipmapped textures, where we might be
incurring a new performance hit. Even that could probably be avoided
with some extra code.

> Brian, I'm not sure if you're going to get to this, and I know you
> said you were going to look into a texel extraction interface as well.
> I wanted to let you know where I'm going with this to avoid duplicate
> work and let you comment on my thoughts.

I hadn't done any work on this yet. We can do it together.

I think the first things to do are:

1. Add the ExtractTexel() function pointer to the gl_texture_image struct.

2. Remove the Driver.GetTexImage() code and write new code to use the
   ExtractTexel() function.

I'll need some clarifications on the issues above before doing more.

-Brian
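As an illustration of point 2, a generic glGetTexImage path could be
built on the per-image function pointer instead of a driver hook. The
helper name, the ExtractTexel() signature, and the assumption of RGBA
output are hypothetical:

    /* Hypothetical sketch: read back an entire image through
     * ExtractTexel(), one format-independent call per texel.
     */
    static void get_tex_image_rgba( const struct gl_texture_image *texImage,
                                    GLubyte *dest )
    {
       GLint i, j;
       for (j = 0; j < texImage->Height; j++) {
          for (i = 0; i < texImage->Width; i++) {
             texImage->ExtractTexel( texImage, i, j, dest );
             dest += 4;   /* RGBA output */
          }
       }
    }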
From: Gareth H. <ga...@va...> - 2001-02-06 01:02:15
Brian Paul wrote:
>
> > We define a set of these structures:
> >
> >    const struct gl_texture_format gl_tex_format_argb8888 = {
> >       MESA_FORMAT_ARGB8888,
> >       8, 8, 8, 8, 0, 0, 0,
> >       4,
> >       ...
> >    };
> >
> >    const struct gl_texture_format gl_tex_format_rgb565 = {
> >       MESA_FORMAT_RGB565,
> >       5, 6, 5, 0, 0, 0, 0,
> >       2,
> >       ...
> >    };
>
> Do these structs initialize the extractTexel function pointer?
> I don't think they can. Or the driver would have to override
> the defaults.

Yes they can. The driver would only need to provide extractTexel
functionality if it defined custom texture formats. Consider this: we
want to store a texture as ARGB4444. So, we point texImage->TexFormat
at gl_texformat_argb4444 (defined as above) and use the utility
functions to produce a tightly-packed ARGB4444 texture. The structure
includes one or more initialized extractTexel function pointers, which
are aimed at function(s) that know how to read ARGB4444 textures.
Depending on the mode (border/no border, 2D/3D is what the SI has), we
pick one of the texture format's extractTexel functions and plug it
into texImage->extractTexel(). I don't see the problem here.

> It seems redundant to store the bits/channel info in both structs, but
> that's a minor issue.

No. The texImage->TexFormat pointer gets initialized to the default
format (the one that core Mesa will produce if the driver doesn't
request a different format). GL_RGBA textures are gl_texformat_rgba8888
and so on. The driver can change this if needed, as I described.
There's an update_teximage_component_sizes() call that copies the sizes
from the format structure to the teximage structure. This is nice as
it's all done with a single entry point. If you change the format, you
just call the update function and you're done.

> > The Driver.TexImage* callbacks are now no longer allowed to fail.
>
> Because it'll be up to core Mesa to unpack and store the texture image
> under all circumstances?
>
> I'm a bit worried about expecting core Mesa to do all this. Consider
> a card that uses some sort of tiled texture memory layout. I think it
> should be up to the driver to handle image conversion for something
> like this and not burden core Mesa with strange conversion routines.

I'm still toying with a few ideas, but you make a good point. Drivers
should be able to provide custom texture formats, with their custom
extractTexel functions, as needed. What I want to avoid is the huge
performance hit we see from using the utility functions.

> The idea behind the texutil.c functions was to simply provide some
> helper routines for drivers. Drivers can take them or leave them.

Indeed.

> Core Mesa never directly calls the texutil.c functions at this time.

It might be nice to have the driver call a function, saying "I want this
texture in ARGB4444 format, go and do it" and have it never fail. It's
a small extension to the layered architecture I have in place now, that
basically adds the generic case support to the image conversion routines
as a kind of fallback path. I'll have to prototype this to see if it's
worth it. The current method isn't bad, don't get me wrong.

> > The critical difference here is that the driver selects the internal
> > format to be used (like MESA_FORMAT_ARGB4444, MESA_FORMAT_AL88 etc),
> > and the conversion function is guaranteed to return a tightly-packed
> > image with that type. The conversion function is smart enough to
> > pack RGB input data into RGBA textures, or alpha/luminance-only data
> > into an AL88 texture and so on. We do not need extra internal types
> > to handle this.
>
> Are you saying something new here? This is the way things currently work.

Yes. I've removed the F8_R8_G8_B8 format. We don't need it, or formats
like it. I've also extended the accepted inputs into a number of
formats to make it useful to more drivers, but this is a lesser point.

> > The software rasterizer uses the texFormat->extractTexel()-like
> > functions to extract individual texels or blocks of texels. The SI
> > has a number of texel extraction functions per format, to support
> > textures with and without borders and so on. The correct one would
> > be plugged into texImage->extract(). This means that the image can
> > be converted once only, stored in a hardware-friendly format, and
> > still accessed by the software rasterizer. I think this is nicer
> > than having to unconvert the entire image when there's a software
> > fallback.
>
> I like this too. The trick is not penalizing the performance of
> software Mesa (like the Xlib driver). I think that the optimized
> textured triangle functions (for example) could detect when the
> texture image is in a simple GLubyte, GL_RGB format and directly
> address texels. It would be the unoptimized cases, like mipmapped
> textures, where we might be incurring a new performance hit. Even
> that could probably be avoided with some extra code.

And as you say, the driver can store the texture in weird and wonderful
ways and there's still a consistent interface for accessing the
internals.

> I hadn't done any work on this yet. We can do it together.
>
> I think the first things to do are:
>
> 1. Add the ExtractTexel() function pointer to the gl_texture_image
>    struct.
>
> 2. Remove the Driver.GetTexImage() code and write new code to use the
>    ExtractTexel() function.
>
> I'll need some clarifications on the issues above before doing more.

My highest priority at the moment in relation to this work is to move
the Radeon driver over to this interface. Perhaps we can partition the
work after that's done.

-- Gareth
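The single entry point Gareth describes would presumably amount to
something like this sketch; the field and function names follow the
proposal in this thread, not necessarily any committed code:

    static void update_teximage_component_sizes( struct gl_texture_image *texImage )
    {
       const struct gl_texture_format *f = texImage->TexFormat;

       /* Copy the channel sizes from whichever format the image
        * currently points at, in one place.
        */
       texImage->RedBits       = f->redSize;
       texImage->GreenBits     = f->greenSize;
       texImage->BlueBits      = f->blueSize;
       texImage->AlphaBits     = f->alphaSize;
       texImage->IntensityBits = f->intensitySize;
       texImage->LuminanceBits = f->luminanceSize;
       texImage->IndexBits     = f->indexSize;
    }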
From: Gareth H. <ga...@va...> - 2001-02-06 01:26:58
Gareth Hughes wrote:
>
> > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> >
> > Because it'll be up to core Mesa to unpack and store the texture
> > image under all circumstances?
> >
> > I'm a bit worried about expecting core Mesa to do all this. Consider
> > a card that uses some sort of tiled texture memory layout. I think
> > it should be up to the driver to handle image conversion for
> > something like this and not burden core Mesa with strange conversion
> > routines.

I should add the following:

Cards that do tiled textures will generally tile them for you on the
texture upload blit. This is perhaps the only really efficient way to
do tiled textures, as tiling them manually would be a comparatively
expensive operation. This is true in all the cases I've seen, at least,
and my experience may be unfairly biased.

-- Gareth
From: Keith W. <ke...@va...> - 2001-02-06 02:12:05
Gareth Hughes wrote:
>
> Gareth Hughes wrote:
> >
> > > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> > >
> > > Because it'll be up to core Mesa to unpack and store the texture
> > > image under all circumstances?
> > >
> > > I'm a bit worried about expecting core Mesa to do all this.
> > > Consider a card that uses some sort of tiled texture memory layout.
> > > I think it should be up to the driver to handle image conversion
> > > for something like this and not burden core Mesa with strange
> > > conversion routines.
>
> I should add the following:
>
> Cards that do tiled textures will generally tile them for you on the
> texture upload blit. This is perhaps the only really efficient way to
> do tiled textures, as tiling them manually would be a comparatively
> expensive operation. This is true in all the cases I've seen, at
> least, and my experience may be unfairly biased.

My experience (i810) is the same. The i810 handles all tiling
transparently to the driver, and just imposes a few rules about
surfaces not crossing boundaries between tiled/untiled memory.

Keith
From: Keith W. <ke...@va...> - 2001-02-06 02:17:45
Gareth Hughes wrote:
>
> > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> >
> > Because it'll be up to core Mesa to unpack and store the texture
> > image under all circumstances?
> >
> > I'm a bit worried about expecting core Mesa to do all this. Consider
> > a card that uses some sort of tiled texture memory layout. I think
> > it should be up to the driver to handle image conversion for
> > something like this and not burden core Mesa with strange conversion
> > routines.
>
> I'm still toying with a few ideas, but you make a good point. Drivers
> should be able to provide custom texture formats, with their custom
> extractTexel functions, as needed. What I want to avoid is the huge
> performance hit we see from using the utility functions.

I'll chime in without having read the whole thread.

Allowing DD functions to fail is always a bad idea. It blurs the
distinction between the actual work required in core Mesa to implement
a function and the fallback case. These two pieces of code are at very
different levels of abstraction -- they are actually separated by a
whole level. You can get the same effect by bundling up the fallback
case and giving it a well-known name. If the driver can't handle a
case, it explicitly calls the fallback code.

This is what 'swrast' is all about. A good example is ctx->DD.Clear,
which used to be allowed to fail or perform a partial execution. As a
result, the core code needed either to implement software fallbacks or
to know about the existence of 'swrast' and call into it itself -- this
is pretty clunky. Now, the same behaviour is achieved simply by
requiring the driver to call into swrast if it is unable to fulfill the
request from core Mesa. To my eye this is cleaner, and the requirement
on the driver is pretty trivial.

Keith
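The Clear example Keith gives follows a pattern like this sketch: the
driver clears what the hardware can handle and hands the remaining
buffers to swrast by name, so core Mesa never sees a failure. The
hardware_clear_color() helper is hypothetical and the mask handling is
simplified for illustration:

    static void driverClear( GLcontext *ctx, GLbitfield mask, GLboolean all,
                             GLint x, GLint y, GLint width, GLint height )
    {
       if (mask & DD_FRONT_LEFT_BIT) {
          /* Fast hardware clear of the color buffer. */
          hardware_clear_color( ctx, x, y, width, height );
          mask &= ~DD_FRONT_LEFT_BIT;
       }

       /* Anything the hardware can't do goes to the well-known
        * software fallback; the driver calls it explicitly rather
        * than returning failure to core Mesa.
        */
       if (mask)
          _swrast_Clear( ctx, mask, all, x, y, width, height );
    }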
From: Gareth H. <ga...@va...> - 2001-02-06 04:19:06
Keith Whitwell wrote:
>
> I'll chime in without having read the whole thread.
>
> Allowing DD functions to fail is always a bad idea. It blurs the
> distinction between the actual work required in core Mesa to implement
> a function and the fallback case. These two pieces of code are at very
> different levels of abstraction -- they are actually separated by a
> whole level. You can get the same effect by bundling up the fallback
> case and giving it a well-known name. If the driver can't handle a
> case, it explicitly calls the fallback code.
>
> This is what 'swrast' is all about. A good example is ctx->DD.Clear,
> which used to be allowed to fail or perform a partial execution. As a
> result, the core code needed either to implement software fallbacks or
> to know about the existence of 'swrast' and call into it itself --
> this is pretty clunky. Now, the same behaviour is achieved simply by
> requiring the driver to call into swrast if it is unable to fulfill
> the request from core Mesa. To my eye this is cleaner, and the
> requirement on the driver is pretty trivial.

Thank you, Keith. This is exactly what I'm going for.

What I'd like to see is the Tex[Sub]Image* DD calls hand the raw image
data to the driver, and what gets hung off the gl_texture_image
structure is an image in whatever format the driver wants it in. It is
up to the driver to choose the format and to do the conversion, if
required. This is why I'd like the conversion utility functions to
accept any input, have conversion fastpaths for common cases in a
pluggable way (to allow MMX assembly), but have the generic case
handling as well.

I think this fits the spirit of 3.5 more than the current approach.

-- Gareth
From: Brian P. <br...@va...> - 2001-02-06 15:30:11
Gareth Hughes wrote:
>
> Keith Whitwell wrote:
> >
> > I'll chime in without having read the whole thread.
> >
> > Allowing DD functions to fail is always a bad idea. It blurs the
> > distinction between the actual work required in core Mesa to
> > implement a function and the fallback case. These two pieces of code
> > are at very different levels of abstraction -- they are actually
> > separated by a whole level. You can get the same effect by bundling
> > up the fallback case and giving it a well-known name. If the driver
> > can't handle a case, it explicitly calls the fallback code.
> >
> > This is what 'swrast' is all about. A good example is ctx->DD.Clear,
> > which used to be allowed to fail or perform a partial execution. As
> > a result, the core code needed either to implement software
> > fallbacks or to know about the existence of 'swrast' and call into
> > it itself -- this is pretty clunky. Now, the same behaviour is
> > achieved simply by requiring the driver to call into swrast if it is
> > unable to fulfill the request from core Mesa. To my eye this is
> > cleaner, and the requirement on the driver is pretty trivial.
>
> Thank you, Keith. This is exactly what I'm going for.
>
> What I'd like to see is the Tex[Sub]Image* DD calls hand the raw image
> data to the driver, and what gets hung off the gl_texture_image
> structure is an image in whatever format the driver wants it in. It is
> up to the driver to choose the format and to do the conversion, if
> required.

Right.

> This is why I'd like the conversion utility functions to
> accept any input, have conversion fastpaths for common cases in a
> pluggable way (to allow MMX assembly), but have the generic case
> handling as well.
>
> I think this fits the spirit of 3.5 more than the current approach.

Yes, this is all fine. I think we're in complete agreement with the
goals, but still not clear on the implementation. More in another
follow-up.

-Brian
From: Brian P. <br...@va...> - 2001-02-06 15:51:51
Gareth Hughes wrote:
>
> Brian Paul wrote:
> >
> > > We define a set of these structures:
> > >
> > >    const struct gl_texture_format gl_tex_format_argb8888 = {
> > >       MESA_FORMAT_ARGB8888,
> > >       8, 8, 8, 8, 0, 0, 0,
> > >       4,
> > >       ...
> > >    };
> > >
> > >    const struct gl_texture_format gl_tex_format_rgb565 = {
> > >       MESA_FORMAT_RGB565,
> > >       5, 6, 5, 0, 0, 0, 0,
> > >       2,
> > >       ...
> > >    };
> >
> > Do these structs initialize the extractTexel function pointer?
> > I don't think they can. Or the driver would have to override
> > the defaults.
>
> Yes they can. The driver would only need to provide extractTexel
> functionality if it defined custom texture formats. Consider this: we
> want to store a texture as ARGB4444. So, we point texImage->TexFormat
> at gl_texformat_argb4444 (defined as above) and use the utility
> functions to produce a tightly-packed ARGB4444 texture. The structure
> includes one or more initialized extractTexel function pointers, which
> are aimed at function(s) that know how to read ARGB4444 textures.
> Depending on the mode (border/no border, 2D/3D is what the SI has), we
> pick one of the texture format's extractTexel functions and plug it
> into texImage->extractTexel(). I don't see the problem here.

No problem. You didn't make the possibility of overriding explicit in
your first message and I wanted to make sure what the intention was.

> > It seems redundant to store the bits/channel info in both structs,
> > but that's a minor issue.
>
> No. The texImage->TexFormat pointer gets initialized to the default
> format (the one that core Mesa will produce if the driver doesn't
> request a different format). GL_RGBA textures are
> gl_texformat_rgba8888 and so on. The driver can change this if
> needed, as I described. There's an update_teximage_component_sizes()
> call that copies the sizes from the format structure to the teximage
> structure. This is nice as it's all done with a single entry point.
> If you change the format, you just call the update function and
> you're done.

I still don't see why the channel sizes have to be in both structures.
If the gl_texture_image struct always points to a gl_texture_format
struct, then glGetTexLevelParameter(GL_TEXTURE_RED_SIZE, &size) could
get its info from the gl_texture_format struct.

> > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> >
> > Because it'll be up to core Mesa to unpack and store the texture
> > image under all circumstances?
> >
> > I'm a bit worried about expecting core Mesa to do all this. Consider
> > a card that uses some sort of tiled texture memory layout. I think
> > it should be up to the driver to handle image conversion for
> > something like this and not burden core Mesa with strange conversion
> > routines.
>
> I'm still toying with a few ideas, but you make a good point. Drivers
> should be able to provide custom texture formats, with their custom
> extractTexel functions, as needed. What I want to avoid is the huge
> performance hit we see from using the utility functions.

I think the existing texutil code is pretty good for plain C. :)

Moving the image rescale operation into a separate function is something
I've wanted to do for a while. Also, the _mesa_convert_teximage() and
_mesa_convert_texsubimage() functions could be merged into one with a
little bit of work.

> > The idea behind the texutil.c functions was to simply provide some
> > helper routines for drivers. Drivers can take them or leave them.
>
> Indeed.

> > Core Mesa never directly calls the texutil.c functions at this time.
>
> It might be nice to have the driver call a function, saying "I want
> this texture in ARGB4444 format, go and do it" and have it never fail.
> It's a small extension to the layered architecture I have in place
> now, that basically adds the generic case support to the image
> conversion routines as a kind of fallback path. I'll have to prototype
> this to see if it's worth it. The current method isn't bad, don't get
> me wrong.

Right. glTexImage2D should do error checking, then call
Driver.TexImage2D(). The driver function could either do all the work
or call a fallback routine, like _mesa_store_teximage2D().

This sounds simple, but things like image unpacking, image transfer
ops, image rescaling, texture compression, etc. make it a little
complicated.

> My highest priority at the moment in relation to this work is to move
> the Radeon driver over to this interface. Perhaps we can partition the
> work after that's done.

I've started implementing the FetchTexel() function pointer stuff. I'm
in the process of removing the Driver.GetTexImage() code.

-Brian
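The control flow Brian outlines might be sketched as follows;
_mesa_store_teximage2d() is the fallback he names, while
driver_can_convert() and driver_convert_and_store() are hypothetical
stand-ins for whatever work the driver actually does:

    static void driverTexImage2D( GLcontext *ctx, GLenum target, GLint level,
                                  GLint internalFormat,
                                  GLint width, GLint height, GLint border,
                                  GLenum format, GLenum type,
                                  const GLvoid *pixels,
                                  const struct gl_pixelstore_attrib *packing,
                                  struct gl_texture_object *texObj,
                                  struct gl_texture_image *texImage )
    {
       if (driver_can_convert( internalFormat, format, type )) {
          /* Convert straight into the hardware layout. */
          driver_convert_and_store( texImage, pixels, packing );
       }
       else {
          /* Otherwise the driver calls the fallback explicitly --
           * the callback never "fails" back to core Mesa.
           */
          _mesa_store_teximage2d( ctx, target, level, internalFormat,
                                  width, height, border, format, type,
                                  pixels, packing, texObj, texImage );
       }
    }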
From: Gareth H. <ga...@va...> - 2001-02-06 16:13:47
Brian Paul wrote:
>
> > Yes they can. The driver would only need to provide extractTexel
> > functionality if it defined custom texture formats. Consider this:
> > we want to store a texture as ARGB4444. So, we point
> > texImage->TexFormat at gl_texformat_argb4444 (defined as above) and
> > use the utility functions to produce a tightly-packed ARGB4444
> > texture. The structure includes one or more initialized extractTexel
> > function pointers, which are aimed at function(s) that know how to
> > read ARGB4444 textures. Depending on the mode (border/no border,
> > 2D/3D is what the SI has), we pick one of the texture format's
> > extractTexel functions and plug it into texImage->extractTexel(). I
> > don't see the problem here.
>
> No problem. You didn't make the possibility of overriding explicit in
> your first message and I wanted to make sure what the intention was.

Okay, my mistake.

> I still don't see why the channel sizes have to be in both structures.
> If the gl_texture_image struct always points to a gl_texture_format
> struct, then glGetTexLevelParameter(GL_TEXTURE_RED_SIZE, &size) could
> get its info from the gl_texture_format struct.

When I was checking out the SI code I thought there was a good reason
for this. Perhaps I was mistaken.

> > I'm still toying with a few ideas, but you make a good point.
> > Drivers should be able to provide custom texture formats, with their
> > custom extractTexel functions, as needed. What I want to avoid is
> > the huge performance hit we see from using the utility functions.
>
> I think the existing texutil code is pretty good for plain C. :)

I'll accept that :-) See my post to dri-devel with some updated
scores...

> Moving the image rescale operation into a separate function is
> something I've wanted to do for a while.

Yes, this makes it so much nicer.

> Also, the _mesa_convert_teximage() and _mesa_convert_texsubimage()
> functions could be merged into one with a little bit of work.

You really don't want to do that. You can specifically optimize for
nice power-of-two dimensions with the full teximage convert, while you
have to handle the subimage with a little more care. In fact, the
driver should check for a subimage update that updates the full image
and call _mesa_convert_teximage() instead of _mesa_convert_texsubimage()
-- the tdfx driver does this now.

> Right. glTexImage2D should do error checking, then call
> Driver.TexImage2D(). The driver function could either do all the work
> or call a fallback routine, like _mesa_store_teximage2D().
>
> This sounds simple, but things like image unpacking, image transfer
> ops, image rescaling, texture compression, etc. make it a little
> complicated.

Sure, no argument there. It sounds like we agree on the fact that once
Driver.TexImage*D returns, we should have a valid texture image, even
if the driver has to call the fallback routine manually. Right?

-- Gareth
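The tdfx-style check Gareth refers to amounts to something like this
fragment; the convert_* wrappers are hypothetical stand-ins for the
real _mesa_convert_teximage()/_mesa_convert_texsubimage() calls and
their argument lists:

    /* A glTexSubImage update that covers the whole mipmap level can
     * take the faster full-image conversion path.
     */
    if (xoffset == 0 && yoffset == 0 &&
        width == texImage->Width && height == texImage->Height) {
       convert_full_teximage( texImage, pixels );      /* full-image path */
    }
    else {
       convert_texsubimage( texImage, xoffset, yoffset,
                            width, height, pixels );   /* general path */
    }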
From: Brian P. <br...@va...> - 2001-02-06 16:29:39
Gareth Hughes wrote:
>
> > Also, the _mesa_convert_teximage() and _mesa_convert_texsubimage()
> > functions could be merged into one with a little bit of work.
>
> You really don't want to do that. You can specifically optimize for
> nice power-of-two dimensions with the full teximage convert, while you
> have to handle the subimage with a little more care. In fact, the
> driver should check for a subimage update that updates the full image
> and call _mesa_convert_teximage() instead of
> _mesa_convert_texsubimage() -- the tdfx driver does this now.

How do power-of-two dimensions make for better optimization?

It's the case when the source image row stride equals the dest image
row stride and the stride equals the row length (all in bytes) that you
can simplify the conversion down to a single loop.

As far as I see it, the only real difference between full image
conversion and sub-image conversion is that the dest image stride may
not match the row length. We handle that now as-is.

The only reason I had separate _mesa_convert_teximage() and
_mesa_convert_texsubimage() functions was the weirdness involving image
rescaling.

> > Right. glTexImage2D should do error checking, then call
> > Driver.TexImage2D(). The driver function could either do all the
> > work or call a fallback routine, like _mesa_store_teximage2D().
> >
> > This sounds simple, but things like image unpacking, image transfer
> > ops, image rescaling, texture compression, etc. make it a little
> > complicated.
>
> Sure, no argument there. It sounds like we agree on the fact that once
> Driver.TexImage*D returns, we should have a valid texture image, even
> if the driver has to call the fallback routine manually. Right?

Right.

-Brian
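Brian's single-loop case, as a sketch: when both strides equal the
packed row length, the rows are contiguous and the 2D conversion
collapses into one pass. The convert_texels() helper and variable names
are illustrative:

    if (srcRowStride == bytesPerRow && dstRowStride == bytesPerRow) {
       /* Rows are tightly packed: convert the whole image in one run. */
       convert_texels( dst, src, width * height );
    }
    else {
       /* General case: convert row by row, stepping by each stride. */
       GLint row;
       for (row = 0; row < height; row++) {
          convert_texels( dst, src, width );
          src += srcRowStride;
          dst += dstRowStride;
       }
    }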
From: Gareth H. <ga...@va...> - 2001-02-07 01:42:38
Brian Paul wrote:
>
> How do power-of-two dimensions make for better optimization?
>
> It's the case when the source image row stride equals the dest image
> row stride and the stride equals the row length (all in bytes) that
> you can simplify the conversion down to a single loop.
>
> As far as I see it, the only real difference between full image
> conversion and sub-image conversion is that the dest image stride may
> not match the row length. We handle that now as-is.

Sigh. I need some sleep...

I've changed the parameters of the texsubimage call to take x, y,
width, height and pitch (like the Utah-style conversion functions). A
simple test of width == pitch is all we need, as you say. I'll add this
to the tdfx-3-1-0 branch Mesa code today.

-- Gareth
From: Brian P. <br...@va...> - 2001-02-06 21:50:55
I've just checked in my first round of texture interface changes.

Drivers can now hang their texture image data off of the
gl_texture_image struct's Data pointer. The FetchTexel() function
pointer in the gl_texture_image struct is now used to fetch texels by
the software texturing code.

The device driver functions for glTex[Sub]Image() have changed. They
no longer return true/false for success/failure. There are fallback
functions for these Driver functions in src/texstore.c.

I haven't implemented a gl_texture_format structure as Gareth
suggested. After Gareth has overhauled the texutil.c code, that may be
useful, but I'm not sure it'll be necessary. As it is now, the driver's
TexImage2D function (for example) simply has to fill in the RedBits,
GreenBits, etc. and FetchTexel() fields in struct gl_texture_image and
core Mesa is happy.

Next, I have to update the FX driver for these changes.

-Brian
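Per Brian's description, what a driver's TexImage2D has to do under the
new scheme reduces to filling in a few fields. This fragment assumes an
ARGB4444 hardware format; driverImage and fetch_texel_argb4444() are
hypothetical driver-side names:

    texImage->Data = driverImage;   /* texels in the hardware layout */

    texImage->RedBits   = 4;
    texImage->GreenBits = 4;
    texImage->BlueBits  = 4;
    texImage->AlphaBits = 4;

    /* Core Mesa's software texturing reads texels through this. */
    texImage->FetchTexel = fetch_texel_argb4444;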
From: Gareth H. <ga...@va...> - 2001-02-07 01:46:50
Brian Paul wrote:
>
> I've just checked in my first round of texture interface changes.
>
> Drivers can now hang their texture image data off of the
> gl_texture_image struct's Data pointer. The FetchTexel() function
> pointer in the gl_texture_image struct is now used to fetch texels by
> the software texturing code.
>
> The device driver functions for glTex[Sub]Image() have changed. They
> no longer return true/false for success/failure. There are fallback
> functions for these Driver functions in src/texstore.c.
>
> I haven't implemented a gl_texture_format structure as Gareth
> suggested. After Gareth has overhauled the texutil.c code, that may be
> useful, but I'm not sure it'll be necessary. As it is now, the
> driver's TexImage2D function (for example) simply has to fill in the
> RedBits, GreenBits, etc. and FetchTexel() fields in struct
> gl_texture_image and core Mesa is happy.

I'm pretty close to being done with the overhaul, so I'll be able to
merge the work back into Mesa CVS soon. I just have to make sure the
drivers are sane. I'll bring your changes into my DRI branch first, so
feel free to check it out.

-- Gareth