From: Gareth H. <ga...@va...> - 2001-02-06 01:02:15
Brian Paul wrote:
> 
> > We define a set of these structures:
> > 
> >    const struct gl_texture_format gl_tex_format_argb8888 = {
> >       MESA_FORMAT_ARGB8888,
> >       8, 8, 8, 8, 0, 0, 0,
> >       4,
> >       ...
> >    };
> > 
> >    const struct gl_texture_format gl_tex_format_rgb565 = {
> >       MESA_FORMAT_RGB565,
> >       5, 6, 5, 0, 0, 0, 0,
> >       2,
> >       ...
> >    };
> 
> Do these structs initialize the extractTexel function pointer?
> I don't think they can.  Or the driver would have to override
> the defaults.

Yes they can.  The driver would only need to provide its own
extractTexel functionality if it defined custom texture formats.

Consider this: we want to store a texture as ARGB4444.  So, we point
texImage->TexFormat at gl_texformat_argb4444 (defined as above) and use
the utility functions to produce a tightly-packed ARGB4444 texture.
The structure includes one or more initialized extractTexel function
pointers, which are aimed at functions that know how to read ARGB4444
textures.  Depending on the mode (border/no border, 2D/3D is what the
SI has), we pick one of the texture format's extractTexel functions and
plug it into texImage->extractTexel().

I don't see the problem here.
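To make the wiring concrete, here's roughly what I have in mind.  This
is just a sketch -- the field names and extract_argb4444() are
illustrative, not code that exists today:

   typedef void (*extract_texel_func)( const struct gl_texture_image *img,
                                       GLint col, GLint row, GLint slice,
                                       GLchan texel[4] );

   struct gl_texture_format {
      GLuint MesaFormat;          /* MESA_FORMAT_ARGB4444, etc */
      GLubyte RedBits, GreenBits, BlueBits, AlphaBits;
      GLubyte LuminanceBits, IntensityBits, IndexBits;
      GLuint TexelBytes;          /* bytes per texel, tightly packed */
      extract_texel_func Extract2D;        /* no border */
      extract_texel_func Extract2DBorder;  /* with border */
      extract_texel_func Extract3D;
      extract_texel_func Extract3DBorder;
   };

   static void extract_argb4444( const struct gl_texture_image *img,
                                 GLint col, GLint row, GLint slice,
                                 GLchan texel[4] );

   const struct gl_texture_format gl_texformat_argb4444 = {
      MESA_FORMAT_ARGB4444,
      4, 4, 4, 4, 0, 0, 0,
      2,
      extract_argb4444,
      /* ... the border/3D variants ... */
   };

   /* After the driver picks the format, the right extraction function
    * for the image's mode gets plugged in:
    */
   texImage->TexFormat = &gl_texformat_argb4444;
   texImage->extractTexel = texImage->Border
                          ? texImage->TexFormat->Extract2DBorder
                          : texImage->TexFormat->Extract2D;
   update_teximage_component_sizes( texImage );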
> It seems redundant to store the bits/channel info in both structs, but
> that's a minor issue.

No.  The texImage->TexFormat pointer gets initialized to the default
format (the one that core Mesa will produce if the driver doesn't
request a different format).  GL_RGBA textures get gl_texformat_rgba8888
and so on.  The driver can change this if needed, as I described.

There's an update_teximage_component_sizes() call that copies the sizes
from the format structure to the teximage structure.  This is nice, as
it's all done with a single entry point.  If you change the format, you
just call the update function and you're done.

> > The Driver.TexImage* callbacks are now no longer allowed to fail.
> 
> Because it'll be up to core Mesa to unpack and store the texture image
> under all circumstances?
> 
> I'm a bit worried about expecting core Mesa to do all this.  Consider
> a card that uses some sort of tiled texture memory layout.  I think it
> should be up to the driver to handle image conversion for something
> like this and not burden core Mesa with strange conversion routines.

I'm still toying with a few ideas, but you make a good point.  Drivers
should be able to provide custom texture formats, with their own custom
extractTexel functions, as needed.  What I want to avoid is the huge
performance hit we see from using the utility functions.

> The idea behind the texutil.c functions was to simply provide some
> helper routines for drivers.  Drivers can take them or leave them.

Indeed.

> Core Mesa never directly calls the texutil.c functions at this time.

It might be nice to have the driver call a function, saying "I want
this texture in ARGB4444 format, go and do it", and have it never fail.
It's a small extension to the layered architecture I have in place now,
one that basically adds generic-case support to the image conversion
routines as a kind of fallback path.  I'll have to prototype this to
see if it's worth it.  The current method isn't bad, don't get me
wrong.

> > The critical difference here is that the driver selects the internal
> > format to be used (like MESA_FORMAT_ARGB4444, MESA_FORMAT_AL88 etc),
> > and the conversion function is guaranteed to return a tightly-packed
> > image with that type.  The conversion function is smart enough to
> > pack RGB input data into RGBA textures, or alpha/luminance-only data
> > into an AL88 texture and so on.  We do not need extra internal types
> > to handle this.
> 
> Are you saying something new here?  This is the way things currently
> work.

Yes.  I've removed the F8_R8_G8_B8 format.  We don't need it, or
formats like it.  I've also extended the accepted inputs for a number
of formats to make them useful to more drivers, but this is a lesser
point.

> > The software rasterizer uses the texFormat->extractTexel()-like
> > functions to extract individual texels or blocks of texels.  The SI
> > has a number of texel extraction functions per format, to support
> > textures with and without borders and so on.  The correct one would
> > be plugged into texImage->extract().  This means that the image can
> > be converted once only, stored in a hardware-friendly format, and
> > interacted with by the software rasterizer.  I think this is nicer
> > than having to unconvert the entire image when there's a software
> > fallback.
> 
> I like this too.  The trick is not penalizing the performance of
> software Mesa (like the Xlib driver).  I think that the optimized
> textured triangle functions (for example) could detect when the
> texture image is in a simple GLubyte, GL_RGB format and directly
> address texels.  It would be the unoptimized cases, like mipmapped
> textures, where we might be incurring a new performance hit.  Even
> that could probably be avoided with some extra code.

And as you say, the driver can store the texture in weird and wonderful
ways and there's still a consistent interface for accessing the
internals.

> I haven't done any work on this yet.  We can do it together.
> 
> I think the first things to do are:
> 
> 1. Add the ExtractTexel() function pointer to the gl_texture_image
>    struct.
> 
> 2. Remove the Driver.GetTexImage() code and write new code to use the
>    ExtractTexel() function.
> 
> I'll need some clarifications on the issues above before doing more.

My highest priority at the moment in relation to this work is to move
the Radeon driver over to this interface.  Perhaps we can partition the
work after that's done.

-- Gareth
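P.S. For the "go and do it, never fail" entry point above, I'm
picturing something along these lines.  All of these names are made up
for the sake of illustration -- this is not the current texutil.c
interface:

   /* The driver names the Mesa format it wants; core Mesa tries the
    * optimized converters first and falls back to a slow but fully
    * general path, so the call as a whole cannot fail.
    */
   void
   _mesa_convert_teximage( struct gl_texture_image *texImage,
                           GLenum srcFormat, GLenum srcType,
                           const GLvoid *srcImage,
                           const struct gl_pixelstore_attrib *packing )
   {
      /* Fast path: texutil.c-style converters for the common
       * srcFormat/srcType combinations.
       */
      if ( convert_teximage_fast( texImage, srcFormat, srcType,
                                  srcImage, packing ) )
         return;

      /* Generic fallback: unpack through the existing image routines
       * and repack as texImage->TexFormat, one span at a time.
       */
      convert_teximage_generic( texImage, srcFormat, srcType,
                                srcImage, packing );
   }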