From: Gareth H. <ga...@va...> - 2001-03-06 16:18:28
Is the plan to support GLushort or GLfloat per component texture images,
as well as the regular GLubyte per component?  If so, I'll change what
were the old-style Mesa formats to be based on GLchan, which would give
us things like _mesa_format_rgba_chan or _mesa_format_default_rgba
instead of _mesa_format_rgba8888 and the like.

And I'm assuming that GLchan != GLubyte doesn't work at the moment,
correct?

-- Gareth
From: Brian P. <br...@va...> - 2001-03-06 16:46:26
Gareth Hughes wrote:
>
> Is the plan to support GLushort or GLfloat per component texture images,
> as well as the regular GLubyte per component?

With 3.5, drivers can implement _any_ format or type of texture images
they want.  They just have to provide the appropriate FetchTexel()
functions so the software fallbacks can operate.  FetchTexel() should
return GLchan values.

But yes, the s/w Mesa teximage routines should be written with the GLchan
type so that 16-bit and float color components will be possible.

> If so, I'll change what were the old-style Mesa formats to be based on
> GLchan, which would give us things like _mesa_format_rgba_chan or
> _mesa_format_default_rgba instead of _mesa_format_rgba8888 and the like.

Originally, the "Mesa formats" and texutil.[ch] were just helper routines
for drivers; core Mesa knew nothing about them.  It sounds like you're
bringing that into core Mesa.  I'm not sure of all the ramifications of
that.

> And I'm assuming that GLchan != GLubyte doesn't work at the moment,
> correct?

It was compiling about a month ago.  I started testing OSMesa with 16-bit
color channels but didn't do any verification.  It's definitely not ready
for prime time yet.

-Brian
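The FetchTexel() contract Brian describes could be sketched roughly as follows. This is a hypothetical illustration, not Mesa's actual declarations: the function name and signature are made up, but the idea matches the text, given texel coordinates, a driver-specific routine returns the texel's components as GLchan values so the software fallbacks can work with any storage format.

```c
typedef unsigned char GLchan;  /* assuming an 8-bit channel build */

/* Hypothetical fetch routine for a tightly packed RGBA image:
 * look up texel (i, j) and return its components as GLchan values.
 * Name and signature are illustrative only. */
static void fetch_rgba8888(const GLchan *image, int width,
                           int i, int j, GLchan texel[4])
{
    const GLchan *src = image + (j * width + i) * 4;  /* 4 bytes/texel */
    texel[0] = src[0];  /* R */
    texel[1] = src[1];  /* G */
    texel[2] = src[2];  /* B */
    texel[3] = src[3];  /* A */
}
```

A driver with a more exotic hardware layout would provide a different routine with the same shape, which is what lets the fallbacks stay format-agnostic.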
From: Gareth H. <ga...@va...> - 2001-03-07 02:16:15
Brian Paul wrote:
>
> With 3.5, drivers can implement _any_ format or type of texture images
> they want.  They just have to provide the appropriate FetchTexel()
> functions so the software fallbacks can operate.  FetchTexel() should
> return GLchan values.
>
> But yes, the s/w Mesa teximage routines should be written with the
> GLchan type so that 16-bit and float color components will be possible.

texImage->TexFormat can't be NULL, so there have to be texture formats
for the s/w texture images.  I'm trying to make this as clean as
possible, as the old "Mesa formats" were based on GLubyte per channel
only.

> Originally, the "Mesa formats" and texutil.[ch] were just helper
> routines for drivers; core Mesa knew nothing about them.  It sounds
> like you're bringing that into core Mesa.  I'm not sure of all the
> ramifications of that.

Only the texture format stuff.  The texutil code is used by drivers to
convert into hardware-friendly texture formats, but the texstore
utilities need to have corresponding gl_texture_format structures for
tightly-packed GLchan images.

Now, the gl_texture_image structure has a pointer to a gl_texture_format
structure that defines the internal format of the image.  The
gl_texture_format has the component sizes, bytes per texel, and custom
FetchTexel routines that are plugged into the main gl_texture_image
structure.

> It was compiling about a month ago.  I started testing OSMesa with
> 16-bit color channels but didn't do any verification.  It's definitely
> not ready for prime time yet.

Okay, I was just wondering if it was possible to test it out.  Some of
the colormac.h macros look a little wrong, particularly this one at 16
bits:

   #define UBYTE_TO_CHAN(b)  ((GLchan) (((b) << 8) | (b)))

-- Gareth
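A rough sketch of the descriptor Gareth outlines, component sizes, bytes per texel, and a fetch routine, might look like this. The struct layout and field names here are illustrative guesses, not the actual gl_texture_format declaration; the fetch routine shows how a packed hardware format (RGB565 as an example) could be expanded back to GLchan components.

```c
typedef unsigned char GLchan;      /* assuming an 8-bit channel build */
typedef unsigned short GLushort;

/* Illustrative format descriptor in the spirit of gl_texture_format:
 * per-component bit sizes, texel stride, and a custom fetch routine.
 * Field names are made up for this sketch. */
struct texture_format {
    int red_bits, green_bits, blue_bits, alpha_bits;
    int bytes_per_texel;
    void (*FetchTexel)(const void *image, int width,
                       int i, int j, GLchan texel[4]);
};

/* Fetch for a packed 16-bit RGB565 image: unpack each field and
 * rescale it to the full 0..255 GLchan range. */
static void fetch_rgb565(const void *image, int width,
                         int i, int j, GLchan texel[4])
{
    const GLushort t = ((const GLushort *) image)[j * width + i];
    texel[0] = (GLchan) (((t >> 11) & 0x1f) * 255 / 31);  /* R, 5 bits */
    texel[1] = (GLchan) (((t >>  5) & 0x3f) * 255 / 63);  /* G, 6 bits */
    texel[2] = (GLchan) ( (t        & 0x1f) * 255 / 31);  /* B, 5 bits */
    texel[3] = 255;                                       /* no alpha */
}

static const struct texture_format rgb565_format = {
    5, 6, 5, 0,   /* component sizes in bits */
    2,            /* bytes per texel */
    fetch_rgb565
};
```

A gl_texture_image would then just point at one of these descriptors, and core Mesa could fetch texels without knowing the driver's storage layout.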
From: Brian P. <br...@va...> - 2001-03-07 02:54:29
Gareth Hughes wrote:
>
> Brian Paul wrote:
> >
> > With 3.5, drivers can implement _any_ format or type of texture images
> > they want.  They just have to provide the appropriate FetchTexel()
> > functions so the software fallbacks can operate.  FetchTexel() should
> > return GLchan values.
> >
> > But yes, the s/w Mesa teximage routines should be written with the
> > GLchan type so that 16-bit and float color components will be possible.
>
> texImage->TexFormat can't be NULL, so there have to be texture formats
> for the s/w texture images.  I'm trying to make this as clean as
> possible, as the old "Mesa formats" were based on GLubyte per channel
> only.
>
> > Originally, the "Mesa formats" and texutil.[ch] were just helper
> > routines for drivers; core Mesa knew nothing about them.  It sounds
> > like you're bringing that into core Mesa.  I'm not sure of all the
> > ramifications of that.
>
> Only the texture format stuff.  The texutil code is used by drivers to
> convert into hardware-friendly texture formats, but the texstore
> utilities need to have corresponding gl_texture_format structures for
> tightly-packed GLchan images.
>
> Now, the gl_texture_image structure has a pointer to a gl_texture_format
> structure that defines the internal format of the image.  The
> gl_texture_format has the component sizes, bytes per texel, and custom
> FetchTexel routines that are plugged into the main gl_texture_image
> structure.

OK, this sounds good.  I see why you were asking.

> > It was compiling about a month ago.  I started testing OSMesa with
> > 16-bit color channels but didn't do any verification.  It's definitely
> > not ready for prime time yet.
>
> Okay, I was just wondering if it was possible to test it out.  Some of
> the colormac.h macros look a little wrong, particularly this one at 16
> bits:
>
>    #define UBYTE_TO_CHAN(b)  ((GLchan) (((b) << 8) | (b)))

I think it's correct.  To convert a GLubyte color in [0,255] to a 16-bit
GLchan value in [0,65535] you'd do:

   chan = b * 65535 / 255;

which is equivalent to:

   chan = (b << 8) | b;

Here's the proof:

   #include <assert.h>

   int main(int argc, char *argv[])
   {
      int b;
      for (b = 0; b < 256; b++) {
         int c1 = b * 65535 / 255;
         int c2 = (b << 8) | b;
         assert(c1 == c2);
      }
      return 0;
   }

:)

-Brian
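The identity Brian proves by exhaustion also has a short derivation: 65535/255 is exactly 257, so the scaling is b*257 = b*256 + b, and since b < 256 the two terms occupy disjoint bit positions, making + and | interchangeable. A sketch with each step asserted (the helper name here is hypothetical, not from colormac.h):

```c
#include <assert.h>

typedef unsigned short GLchan;  /* assuming a 16-bit channel build */

/* Expand an 8-bit value b in [0,255] to 16 bits by byte replication.
 * Exact because 65535/255 == 257, b*257 == (b << 8) + b, and the two
 * terms never share bits when b < 256. */
static GLchan ubyte_to_chan16(unsigned int b)
{
    assert(b * 65535 / 255 == b * 257);        /* exact scale factor */
    assert(b * 257 == ((b << 8) + b));         /* split into two bytes */
    assert(((b << 8) + b) == ((b << 8) | b));  /* disjoint bits, + == | */
    return (GLchan) ((b << 8) | b);
}
```

So the replication trick is an exact conversion, not an approximation of the divide.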
From: Gareth H. <ga...@va...> - 2001-03-07 03:06:53
Brian Paul wrote:
>
> OK, this sounds good.  I see why you were asking.

Cool :-)

> I think it's correct.  To convert a GLubyte color in [0,255] to a
> 16-bit GLchan value in [0,65535] you'd do:
>
>    chan = b * 65535 / 255;
>
> which is equivalent to:
>
>    chan = (b << 8) | b;
>
> Here's the proof:
>
>    #include <assert.h>
>    int main(int argc, char *argv[])
>    {
>       int b;
>       for (b = 0; b < 256; b++) {
>          int c1 = b * 65535 / 255;
>          int c2 = (b << 8) | b;
>          assert(c1 == c2);
>       }
>    }
>
> :)

I'm not disagreeing re: the math, I'm concerned about shifting GLubytes
up by 8.  Maybe it does automatic conversion to GLushort and thus you
won't overflow.  On testing this, it appears that this is the case.  Oh
well :-)

-- Gareth
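The behavior Gareth observed comes from C's integer promotions: an operand narrower than int is promoted to int (not to GLushort) before the shift, so (b << 8) on a GLubyte is computed at int width and the high byte survives. A minimal demonstration (the helper name is hypothetical):

```c
typedef unsigned char GLubyte;

/* The GLubyte operand of << is promoted to int before the shift, so
 * (b << 8) is evaluated at int width and no bits are lost.  Note that
 * sizeof((GLubyte)0xff << 8) == sizeof(int), confirming the promotion. */
static int shift_ubyte_up(GLubyte b)
{
    return (b << 8) | b;   /* computed as int, not unsigned char */
}
```

So the 8-bit operand never overflows here, though relying on promotion implicitly is exactly what the cast discussed next avoids.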
From: Brian P. <br...@va...> - 2001-03-07 03:12:43
Gareth Hughes wrote:
>
> Brian Paul wrote:
> >
> > OK, this sounds good.  I see why you were asking.
>
> Cool :-)
>
> > I think it's correct.  To convert a GLubyte color in [0,255] to a
> > 16-bit GLchan value in [0,65535] you'd do:
> >
> >    chan = b * 65535 / 255;
> >
> > which is equivalent to:
> >
> >    chan = (b << 8) | b;
> >
> > Here's the proof:
> >
> >    #include <assert.h>
> >    int main(int argc, char *argv[])
> >    {
> >       int b;
> >       for (b = 0; b < 256; b++) {
> >          int c1 = b * 65535 / 255;
> >          int c2 = (b << 8) | b;
> >          assert(c1 == c2);
> >       }
> >    }
> >
> > :)
>
> I'm not disagreeing re: the math, I'm concerned about shifting GLubytes
> up by 8.  Maybe it does automatic conversion to GLushort and thus you
> won't overflow.  On testing this, it appears that this is the case.
> Oh well :-)

gcc may be doing what I intend but it probably would be safer to put a
cast in there for the sake of other compilers.  Good catch.

-Brian
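One possible spelling of the cast Brian suggests, widening the operand explicitly before the shift so correctness doesn't hinge on implicit promotion rules. This is an illustrative sketch; the actual change made to colormac.h may differ.

```c
typedef unsigned short GLchan;  /* assuming a 16-bit channel build */

/* Widen b explicitly before shifting; the final cast narrows the
 * result back to GLchan.  (Illustrative spelling, not the committed
 * Mesa fix.) */
#define UBYTE_TO_CHAN(b)  ((GLchan) ((((unsigned int) (b)) << 8) | (b)))
```

The explicit cast documents the intent and stays correct even on a compiler whose promotion behavior differs from gcc's.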
From: Gareth H. <ga...@va...> - 2001-03-07 03:19:40
Brian Paul wrote:
>
> > I'm not disagreeing re: the math, I'm concerned about shifting GLubytes
> > up by 8.  Maybe it does automatic conversion to GLushort and thus you
> > won't overflow.  On testing this, it appears that this is the case.
> > Oh well :-)
>
> gcc may be doing what I intend but it probably would be safer to put
> a cast in there for the sake of other compilers.

Yeah, it never hurts to be explicit about these things.

> Good catch.

Thanks :-)

-- Gareth