From: Gareth H. <ga...@va...> - 2001-02-07 01:46:50

Brian Paul wrote:
>
> I've just checked in my first round of texture interface changes.
>
> Drivers can now hang their texture image data off of the gl_texture_image
> struct's Data pointer. The FetchTexel() function pointer in the
> gl_texture_image struct is now used to fetch texels by the software
> texturing code.
>
> The device driver functions for glTex[Sub]Image() have changed. They
> no longer return true/false for success/failure. There are fallback
> functions for these Driver functions in src/texstore.c.
>
> I haven't implemented a gl_texture_format structure as Gareth suggested.
> After Gareth's overhaul of the texutil.c code that may be useful, but
> I'm not sure it'll be necessary. As it is now, the driver's TexImage2D
> function (for example) simply has to fill in the RedBits, GreenBits,
> etc. and FetchTexel() fields in struct gl_texture_image and core Mesa
> is happy.

I'm pretty close to being done with the overhaul, so I'll be able to merge the work back into Mesa CVS soon. I just have to make sure the drivers are sane. I'll bring your changes into my DRI branch first, so feel free to check it out.

-- Gareth
From: Gareth H. <ga...@va...> - 2001-02-07 01:42:38

Brian Paul wrote:
>
> How do power-of-two dimensions make for better optimization?
>
> It's the case when the source image row stride equals the dest
> image row stride and the stride equals the row length (all in bytes)
> that you can simplify the conversion down to a single loop.
>
> As far as I see it, the only real difference between full image
> conversion and sub-image conversion is that the dest image stride
> may not match the row length. We handle that now as-is.

Sigh. I need some sleep... I've changed the parameters of the texsubimage call to take x, y, width, height and pitch (like the Utah-style conversion functions). A simple test of width == pitch is all we need, as you say. I'll add this to the tdfx-3-1-0 branch Mesa code today.

-- Gareth
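The width == pitch shortcut Brian and Gareth agree on above can be sketched in a few lines. This is an illustrative reconstruction, not Mesa's actual conversion code; the function name and the use of texel-granularity strides are assumptions for the example. When the destination rows are packed (pitch equals width), the per-row loop collapses into one contiguous copy:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical sketch of a texsubimage-style copy for a 16bpp format.
 * width/pitch are in texels; when width == pitch the rows are
 * contiguous and a single memcpy covers the whole (sub)image. */
static void convert_texsubimage_rgb565(unsigned short *dst,
                                       const unsigned short *src,
                                       int width, int height, int pitch)
{
    if (width == pitch) {
        /* Fast path: one big copy instead of a loop over rows. */
        memcpy(dst, src, (size_t)width * height * sizeof *dst);
    } else {
        /* General sub-image path: dest stride differs from row length. */
        for (int i = 0; i < height; i++)
            memcpy(dst + i * pitch, src + i * width,
                   (size_t)width * sizeof *dst);
    }
}
```

The same test is all a driver needs to decide between the full-image and sub-image paths.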
From: Brian P. <br...@va...> - 2001-02-06 21:50:55

I've just checked in my first round of texture interface changes.

Drivers can now hang their texture image data off of the gl_texture_image struct's Data pointer. The FetchTexel() function pointer in the gl_texture_image struct is now used to fetch texels by the software texturing code.

The device driver functions for glTex[Sub]Image() have changed. They no longer return true/false for success/failure. There are fallback functions for these Driver functions in src/texstore.c.

I haven't implemented a gl_texture_format structure as Gareth suggested. After Gareth's overhaul of the texutil.c code that may be useful, but I'm not sure it'll be necessary. As it is now, the driver's TexImage2D function (for example) simply has to fill in the RedBits, GreenBits, etc. and FetchTexel() fields in struct gl_texture_image and core Mesa is happy.

Next, I have to update the FX driver for these changes.

-Brian
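The contract Brian describes can be modeled with a toy structure: the driver owns the Data pointer and supplies channel sizes plus a FetchTexel callback that the software rasterizer uses to read texels back. The field names RedBits, GreenBits, Data and FetchTexel come from the mail; the exact struct layout, the Width field, and the fetch function below are invented for illustration:

```c
#include <assert.h>

/* Illustrative stand-in for Mesa's gl_texture_image; real layout differs. */
struct gl_texture_image_sketch {
    void *Data;          /* driver-owned texel storage */
    int Width;           /* row length in texels (assumed field) */
    int RedBits, GreenBits, BlueBits, AlphaBits;
    void (*FetchTexel)(const struct gl_texture_image_sketch *img,
                       int i, int j, unsigned char rgba[4]);
};

/* Example FetchTexel for a packed RGB565 image: expand to 8-bit RGBA. */
static void fetch_rgb565(const struct gl_texture_image_sketch *img,
                         int i, int j, unsigned char rgba[4])
{
    const unsigned short t =
        ((const unsigned short *)img->Data)[j * img->Width + i];
    rgba[0] = (unsigned char)((t >> 11) << 3);          /* red   */
    rgba[1] = (unsigned char)(((t >> 5) & 0x3f) << 2);  /* green */
    rgba[2] = (unsigned char)((t & 0x1f) << 3);         /* blue  */
    rgba[3] = 255;                                      /* opaque */
}
```

A driver's TexImage2D would fill in these fields after storing the image, and core Mesa's software texturing would call through FetchTexel without caring about the storage format.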
From: Keith W. <ke...@va...> - 2001-02-06 19:37:08

rt wrote:
>
> Okay. I see. A quick grep in src/FX doesn't reveal exactly where
> the fastpath stuff hooks into the driver framework. How does it, or
> rather, who calls fxDDFastPathInit()?
>
> rt

The init function is called at startup to initialize some static data. The fastpath code hooks into 'BuildPrecalcPipeline' or something like that. Consider looking at 3.5 or at least a DRI driver; the 3.4 fx driver is pretty crufty.

Keith
From: rt <rt...@um...> - 2001-02-06 18:33:31

Okay. I see. A quick grep in src/FX doesn't reveal exactly where the fastpath stuff hooks into the driver framework. How does it, or rather, who calls fxDDFastPathInit()?

rt

On Tue, 6 Feb 2001, Keith Whitwell wrote:
> rt wrote:
> >
> > > The drivers we have don't do this. Instead they take a separate approach of
> > > overriding the whole render stage. In the mesa-3-5 branch on dri cvs, the
> >
> > Is this the approach in 3.4?
>
> To a certain extent -- the 'fastpath' stuff in the dri drivers and in
> Mesa/src/FX overrides the whole pipeline, and does clipping, etc. in a
> hardware-optimized fashion. It doesn't attempt to make use of hardware strip
> support as it's not appropriate for the DrawElements case that is being
> accelerated. The mga 'eltpath' uses hardware indexed vertices, which is
> another way of reducing bus traffic.
>
> Keith
>
> _______________________________________________
> Mesa3d-dev mailing list
> Mes...@li...
> http://lists.sourceforge.net/lists/listinfo/mesa3d-dev

-------------
rt
734-332-4562
"Most people's lives are taken up with a great many trivial things that they don't really care about, but which they feel they have to do. I just don't do that." - esr
From: Brian P. <br...@va...> - 2001-02-06 16:29:39

Gareth Hughes wrote:
> > Also, the _mesa_convert_teximage() and _mesa_convert_texsubimage()
> > functions could be merged into one with a little bit of work.
>
> You really don't want to do that. You can specifically optimize for
> nice power-of-two dimensions with the full teximage convert, while you
> have to handle the subimage with a little more care. In fact, the
> driver should check for a subimage update that updates the full image
> and call _mesa_convert_teximage() instead of _mesa_convert_texsubimage()
> -- the tdfx driver does this now.

How do power-of-two dimensions make for better optimization? It's the case when the source image row stride equals the dest image row stride and the stride equals the row length (all in bytes) that you can simplify the conversion down to a single loop.

As far as I see it, the only real difference between full image conversion and sub-image conversion is that the dest image stride may not match the row length. We handle that now as-is. The only reason I had separate _mesa_convert_teximage() and _mesa_convert_texsubimage() functions was the weirdness involving image rescaling.

> > Right. glTexImage2D should do error checking, then call Driver.TexImage2D().
> > The driver function could either do all the work or call a fallback
> > routine, like _mesa_store_teximage2D().
> >
> > This sounds simple but things like image unpacking, image transfer ops,
> > image rescaling, texture compression, etc. make it a little complicated.
>
> Sure, no argument there. It sounds like we agree on the fact that once
> Driver.TexImage*D returns, we should have a valid texture image, even if
> the driver has to call the fallback routine manually. Right?

Right.

-Brian
From: Keith W. <ke...@va...> - 2001-02-06 16:17:34

rt wrote:
>
> > The drivers we have don't do this. Instead they take a separate approach of
> > overriding the whole render stage. In the mesa-3-5 branch on dri cvs, the
>
> Is this the approach in 3.4?

To a certain extent -- the 'fastpath' stuff in the dri drivers and in Mesa/src/FX overrides the whole pipeline, and does clipping, etc. in a hardware-optimized fashion. It doesn't attempt to make use of hardware strip support as it's not appropriate for the DrawElements case that is being accelerated. The mga 'eltpath' uses hardware indexed vertices, which is another way of reducing bus traffic.

Keith
From: Gareth H. <ga...@va...> - 2001-02-06 16:13:47

Brian Paul wrote:
> > Yes they can. The driver would only need to provide extractTexel
> > functionality if it defined custom texture formats. Consider this: we
> > want to store a texture as ARGB4444. So, we point texImage->TexFormat
> > at gl_texformat_argb4444 (defined as above) and use the utility
> > functions to produce a tightly-packed ARGB4444 texture. The structure
> > includes one or more initialized extractTexel function pointers, which
> > are aimed at function(s) that know how to read ARGB4444 textures.
> > Depending on the mode (border/no border, 2D/3D is what the SI has), we
> > pick one of the texture format's extractTexel functions and plug it into
> > texImage->extractTexel(). I don't see the problem here.
>
> No problem. You didn't make the possibility of overriding explicit in
> your first message and I wanted to make sure what the intention was.

Okay, my mistake.

> I still don't see why the channel sizes have to be in both structures.
> If the gl_texture_image struct always points to a gl_texture_format struct,
> then glGetTexLevelParameter(GL_TEXTURE_RED_SIZE, &size) could get its
> info from the gl_texture_format struct.

When I was checking out the SI code I thought there was a good reason for this. Perhaps I was mistaken.

> > I'm still toying with a few ideas, but you make a good point. Drivers
> > should be able to provide custom texture formats, with their custom
> > extractTexel functions, as needed. What I want to avoid is the huge
> > performance hit we see from using the utility functions.
>
> I think the existing texutil code is pretty good for plain C. :)

I'll accept that :-) See my post to dri-devel with some updated scores...

> Moving the image rescale operation into a separate function is something
> I've wanted to do for a while.

Yes, this makes it so much nicer.

> Also, the _mesa_convert_teximage() and _mesa_convert_texsubimage()
> functions could be merged into one with a little bit of work.

You really don't want to do that. You can specifically optimize for nice power-of-two dimensions with the full teximage convert, while you have to handle the subimage with a little more care. In fact, the driver should check for a subimage update that updates the full image and call _mesa_convert_teximage() instead of _mesa_convert_texsubimage() -- the tdfx driver does this now.

> Right. glTexImage2D should do error checking, then call Driver.TexImage2D().
> The driver function could either do all the work or call a fallback
> routine, like _mesa_store_teximage2D().
>
> This sounds simple but things like image unpacking, image transfer ops,
> image rescaling, texture compression, etc. make it a little complicated.

Sure, no argument there. It sounds like we agree on the fact that once Driver.TexImage*D returns, we should have a valid texture image, even if the driver has to call the fallback routine manually. Right?

-- Gareth
From: rt <rt...@um...> - 2001-02-06 16:00:17

> The drivers we have don't do this. Instead they take a separate approach of
> overriding the whole render stage. In the mesa-3-5 branch on dri cvs, the

Is this the approach in 3.4?

rt

-------------
rt
734-332-4562
"Most people's lives are taken up with a great many trivial things that they don't really care about, but which they feel they have to do. I just don't do that." - esr
From: Brian P. <br...@va...> - 2001-02-06 15:51:51

Gareth Hughes wrote:
> Brian Paul wrote:
> >
> > > We define a set of these structures:
> > >
> > > const struct gl_texture_format gl_tex_format_argb8888 = {
> > >    MESA_FORMAT_ARGB8888,
> > >    8, 8, 8, 8, 0, 0, 0,
> > >    4,
> > >    ...
> > > };
> > >
> > > const struct gl_texture_format gl_tex_format_rgb565 = {
> > >    MESA_FORMAT_RGB565,
> > >    5, 6, 5, 0, 0, 0, 0,
> > >    2,
> > >    ...
> > > };
> >
> > Do these structs initialize the extractTexel function pointer?
> > I don't think they can. Or the driver would have to override
> > the defaults.
>
> Yes they can. The driver would only need to provide extractTexel
> functionality if it defined custom texture formats. Consider this: we
> want to store a texture as ARGB4444. So, we point texImage->TexFormat
> at gl_texformat_argb4444 (defined as above) and use the utility
> functions to produce a tightly-packed ARGB4444 texture. The structure
> includes one or more initialized extractTexel function pointers, which
> are aimed at function(s) that know how to read ARGB4444 textures.
> Depending on the mode (border/no border, 2D/3D is what the SI has), we
> pick one of the texture format's extractTexel functions and plug it into
> texImage->extractTexel(). I don't see the problem here.

No problem. You didn't make the possibility of overriding explicit in your first message and I wanted to make sure what the intention was.

> > It seems redundant to store the bits/channel info in both structs, but
> > that's a minor issue.
>
> No. The texImage->TexFormat pointer gets initialized to the default
> format (the one that core Mesa will produce if the driver doesn't
> request a different format). GL_RGBA textures are gl_texformat_rgba8888
> and so on. The driver can change this if needed, as I described.
> There's an update_teximage_component_sizes() call that copies the sizes
> from the format structure to the teximage structure. This is nice as
> it's all done with a single entry point. If you change the format, you
> just call the update function and you're done.

I still don't see why the channel sizes have to be in both structures. If the gl_texture_image struct always points to a gl_texture_format struct, then glGetTexLevelParameter(GL_TEXTURE_RED_SIZE, &size) could get its info from the gl_texture_format struct.

> > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> >
> > Because it'll be up to core Mesa to unpack and store the texture image
> > under all circumstances?
> >
> > I'm a bit worried about expecting core Mesa to do all this. Consider
> > a card that uses some sort of tiled texture memory layout. I think it
> > should be up to the driver to handle image conversion for something
> > like this and not burden core Mesa with strange conversion routines.
>
> I'm still toying with a few ideas, but you make a good point. Drivers
> should be able to provide custom texture formats, with their custom
> extractTexel functions, as needed. What I want to avoid is the huge
> performance hit we see from using the utility functions.

I think the existing texutil code is pretty good for plain C. :)

Moving the image rescale operation into a separate function is something I've wanted to do for a while. Also, the _mesa_convert_teximage() and _mesa_convert_texsubimage() functions could be merged into one with a little bit of work.

> > The idea behind the texutil.c functions was to simply provide some
> > helper routines for drivers. Drivers can take them or leave them.
>
> Indeed.
>
> > Core Mesa never directly calls the texutil.c functions at this time.
>
> It might be nice to have the driver call a function, saying "I want this
> texture in ARGB4444 format, go and do it" and have it never fail. It's
> a small extension to the layered architecture I have in place now, that
> basically adds the generic case support to the image conversion routines
> as a kind of fallback path. I'll have to prototype this to see if it's
> worth it. The current method isn't bad, don't get me wrong.

Right. glTexImage2D should do error checking, then call Driver.TexImage2D(). The driver function could either do all the work or call a fallback routine, like _mesa_store_teximage2D(). This sounds simple but things like image unpacking, image transfer ops, image rescaling, texture compression, etc. make it a little complicated.

> My highest priority at the moment in relation to this work is to move
> the Radeon driver over to this interface. Perhaps we can partition the
> work after that's done.

I've started implementing the FetchTexel() function pointer stuff. I'm in the process of removing the Driver.GetTexImage() code.

-Brian
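The initializers quoted above imply a shape for the proposed gl_texture_format descriptor. The sketch below is a guess at what those fields mean: the seven numbers are read here as red/green/blue/alpha/luminance/intensity/index bit counts, followed by the texel size in bytes; the trailing "..." in the mail (presumably the extractTexel pointers and other fields) is deliberately left out, and the field names beyond those mentioned in the thread are invented:

```c
#include <assert.h>

/* Hypothetical format tokens standing in for Mesa's enum values. */
enum { MESA_FORMAT_ARGB8888, MESA_FORMAT_RGB565 };

/* Assumed reading of the initializer lists in the quoted mail:
 * channel bit counts, then bytes per texel. Real struct has more fields. */
struct gl_texture_format {
    int MesaFormat;
    int RedBits, GreenBits, BlueBits, AlphaBits;
    int LuminanceBits, IntensityBits, IndexBits;
    int TexelBytes;
};

static const struct gl_texture_format gl_tex_format_argb8888 = {
    MESA_FORMAT_ARGB8888,
    8, 8, 8, 8, 0, 0, 0,
    4,
};

static const struct gl_texture_format gl_tex_format_rgb565 = {
    MESA_FORMAT_RGB565,
    5, 6, 5, 0, 0, 0, 0,
    2,
};
```

One shared const descriptor per format is what lets texImage->TexFormat be a plain pointer swap when the driver picks a different internal format.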
From: Brian P. <br...@va...> - 2001-02-06 15:30:11

Gareth Hughes wrote:
> Keith Whitwell wrote:
> >
> > I'll chime in without having read the whole thread.
> >
> > Allowing DD functions to fail is always a bad idea. It blurs the distinction
> > between the actual work required in core Mesa to implement a function, and
> > the fallback case. These two pieces of code are at very different levels of
> > abstraction -- they are actually separated by a whole level. You can get the
> > same effect by bundling up the fallback case and giving it a well-known name.
> > If the driver can't handle a case, it explicitly calls the fallback code.
> >
> > This is what 'swrast' is all about. A good example is ctx->DD.Clear, which
> > used to be allowed to fail or perform a partial execution. As a result the
> > core code needed to either implement software fallbacks, or know about the
> > existence of 'swrast' and call into it itself -- this is pretty clunky. Now,
> > the same behaviour is simply achieved by requiring the driver to call into
> > swrast if it is unable to fulfill the request from core Mesa. To my eye this
> > is cleaner, and the requirement on the driver is pretty trivial.
>
> Thank you, Keith. This is exactly what I'm going for.
>
> What I'd like to see is the Tex[Sub]Image* DD calls hand the raw image
> data to the driver, and what gets hung off the gl_texture_image
> structure is an image in whatever format the driver wants it in. It is
> up to the driver to choose the format and to do the conversion, if
> required.

Right.

> This is why I'd like the conversion utility functions to
> accept any input, have conversion fastpaths for common cases in a
> pluggable way (to allow MMX assembly), but have the generic case
> handling as well.
>
> I think this fits the spirit of 3.5 more than the current approach.

Yes, this is all fine. I think we're in complete agreement with the goals, but still not clear on the implementation. More in another follow-up.

-Brian
From: Keith W. <ke...@va...> - 2001-02-06 15:12:50

Alan Hourihane wrote:
> On Mon, Feb 05, 2001 at 04:48:09PM -0700, Keith Whitwell wrote:
> > Dieter Nützel wrote:
> > >
> > > Hello Keith,
> > >
> > > I've tried to compile the DRI CVS mesa-3-5-branch and got this:
> > >
> > > ln -s /opt/Mesa/src/vtxfmt.c vtxfmt.c
> > > rm -f vtxfmt.h
> > > ln -s /opt/Mesa/src/vtxfmt.h vtxfmt.h
> > > make[4]: *** No rule to make target `/opt/Mesa/src/vtxfmt_tmp.h', needed by
> > > `vtxfmt_tmp.h'. Stop.
> > > make[4]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL/mesa/src'
> > > make[3]: *** [includes] Error 2
> > > make[3]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL'
> > > make[2]: *** [includes] Error 2
> > > make[2]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib'
> > > make[1]: *** [includes] Error 2
> > > make[1]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc'
> > > make: *** [World] Error 2
> > >
> > > Mesa CVS taken today.
> >
> > I updated the DRI branch on Sunday to fix this. I assume from your message
> > that you understand that you need an external checkout of Mesa CVS. In
> > addition you should know that only the i810 and radeon drivers are working at
> > this point.
>
> The problem still exists, Keith.
>
> I guess the solution is to remove the reference to vtxfmt_tmp.h from the
> Imakefile in xc/lib/GL/mesa/src anyway, as this file doesn't exist in the
> Mesa 3.5 sources.

OK. I guess I missed that one. I see you've fixed the problem -- thanks.

Keith
From: Keith W. <ke...@va...> - 2001-02-06 15:10:36

rt wrote:
>
> what about specialized hardware that performs the rendering based
> on a final list of vertices? how can a hardware driver deal with this
> situation? do hardware drivers operate on lists of vertices?

This is handled by overriding the decomposition to triangles at one of two levels. The render stage is parameterized with a table of functions which handle whole begin/end objects. In Mesa 3.5 these are called ctx->Driver.RenderTabXYZ. These are called for unclipped primitives only. You can override these to just emit vertices directly.

The drivers we have don't do this. Instead they take a separate approach of overriding the whole render stage. In the mesa-3-5 branch on dri cvs, the i810 and radeon drivers implement a specialized render stage that emits strips and other primitives directly to hardware in strip (or whatever) order. This is probably closest to what you are looking for.

Keith
From: Gareth H. <ga...@va...> - 2001-02-06 11:35:17

rt wrote:
>
> what about specialized hardware that performs the rendering based
> on a final list of vertices? how can a hardware driver deal with this
> situation? do hardware drivers operate on lists of vertices?

Perhaps you should take a look at some of the DRI drivers. Basically all of them do exactly this (mga, i810, r128 and radeon at least).

-- Gareth
From: Alan H. <aho...@va...> - 2001-02-06 10:45:03

On Tue, Feb 06, 2001 at 10:31:57AM +0000, Alan Hourihane wrote:
> On Mon, Feb 05, 2001 at 04:48:09PM -0700, Keith Whitwell wrote:
> > Dieter Nützel wrote:
> > >
> > > Hello Keith,
> > >
> > > I've tried to compile the DRI CVS mesa-3-5-branch and got this:
> > >
> > > ln -s /opt/Mesa/src/vtxfmt.c vtxfmt.c
> > > rm -f vtxfmt.h
> > > ln -s /opt/Mesa/src/vtxfmt.h vtxfmt.h
> > > make[4]: *** No rule to make target `/opt/Mesa/src/vtxfmt_tmp.h', needed by
> > > `vtxfmt_tmp.h'. Stop.
> > > make[4]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL/mesa/src'
> > > make[3]: *** [includes] Error 2
> > > make[3]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL'
> > > make[2]: *** [includes] Error 2
> > > make[2]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib'
> > > make[1]: *** [includes] Error 2
> > > make[1]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc'
> > > make: *** [World] Error 2
> > >
> > > Mesa CVS taken today.
> >
> > I updated the DRI branch on Sunday to fix this. I assume from your message
> > that you understand that you need an external checkout of Mesa CVS. In
> > addition you should know that only the i810 and radeon drivers are working at
> > this point.
>
> The problem still exists, Keith.
>
> I guess the solution is to remove the reference to vtxfmt_tmp.h from the
> Imakefile in xc/lib/GL/mesa/src anyway, as this file doesn't exist in the
> Mesa 3.5 sources.

There's another reference to m_debug_xform.h too.

I've just committed a fix to remove them from the Imakefiles as they don't exist in the Mesa 3.5 source tree.

Alan.
From: Alan H. <aho...@va...> - 2001-02-06 10:29:59

On Mon, Feb 05, 2001 at 04:48:09PM -0700, Keith Whitwell wrote:
> Dieter Nützel wrote:
> >
> > Hello Keith,
> >
> > I've tried to compile the DRI CVS mesa-3-5-branch and got this:
> >
> > ln -s /opt/Mesa/src/vtxfmt.c vtxfmt.c
> > rm -f vtxfmt.h
> > ln -s /opt/Mesa/src/vtxfmt.h vtxfmt.h
> > make[4]: *** No rule to make target `/opt/Mesa/src/vtxfmt_tmp.h', needed by
> > `vtxfmt_tmp.h'. Stop.
> > make[4]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL/mesa/src'
> > make[3]: *** [includes] Error 2
> > make[3]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL'
> > make[2]: *** [includes] Error 2
> > make[2]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib'
> > make[1]: *** [includes] Error 2
> > make[1]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc'
> > make: *** [World] Error 2
> >
> > Mesa CVS taken today.
>
> I updated the DRI branch on Sunday to fix this. I assume from your message
> that you understand that you need an external checkout of Mesa CVS. In
> addition you should know that only the i810 and radeon drivers are working at
> this point.

The problem still exists, Keith.

I guess the solution is to remove the reference to vtxfmt_tmp.h from the Imakefile in xc/lib/GL/mesa/src anyway, as this file doesn't exist in the Mesa 3.5 sources.

Alan.
From: rt <rt...@um...> - 2001-02-06 06:59:45

what about specialized hardware that performs the rendering based on a final list of vertices? how can a hardware driver deal with this situation? do hardware drivers operate on lists of vertices?

rt

On Tue, 6 Feb 2001, Gareth Hughes wrote:
> rt wrote:
> >
> > does PointFunc, QuadFunc, LineFunc and the like provide for
> > accelerated rasterization, or accelerated rendering, or both? it looks
> > like the 3Dfx driver accelerates the rendering - is this correct? the
> > xmesa driver, on the other hand, looks like it is concerned with
> > rasterization. if they provide for both, how does one get them to provide
> > for accelerated rendering working with a list of vertices?
>
> I'll take a stab at this...
>
> It kind of depends on which version of Mesa you're looking at.
> Essentially what used to happen was core Mesa would allow a
> rasterization-level driver (one that takes a batch of primitives and can
> render them natively, like the pre-T&L hardware drivers) to hook out the
> primitive functions. Drivers would supply functions if they could handle
> rendering with the current state.
>
> If not, they would set them to NULL and mark the IndirectTriangles
> bitfield with DD_TRI_SW_RASTERIZE and the like. Core Mesa would then
> plug in a software triangle function, which spits out spans that must be
> handled by the span-level interface. For hardware drivers, the
> implementation of the span-level interface typically involves taking a
> bunch of pixels and sticking them in the framebuffer (or vice versa, for
> read functions).
>
> In the latest, still-under-active-development Mesa code, if the driver
> hooks out the primitive functions and it can't handle the primitives
> natively, it must call the software rasterizer itself. The same thing
> happens with the software primitive functions, they still spit out spans
> that are copied to/from the framebuffer (typically).
>
> This is a pretty rough overview, completely from memory (so it may be
> slightly inaccurate). I'm a little confused by the terminology you use
> -- it's more a matter of rasterization-level versus
> dumb-framebuffer-level (or perhaps the original Voodoo Graphics)
> drivers, rather than rasterize versus render.
>
> -- Gareth
From: Gareth H. <ga...@va...> - 2001-02-06 04:40:31

rt wrote:
>
> does PointFunc, QuadFunc, LineFunc and the like provide for
> accelerated rasterization, or accelerated rendering, or both? it looks
> like the 3Dfx driver accelerates the rendering - is this correct? the
> xmesa driver, on the other hand, looks like it is concerned with
> rasterization. if they provide for both, how does one get them to provide
> for accelerated rendering working with a list of vertices?

I'll take a stab at this...

It kind of depends on which version of Mesa you're looking at. Essentially what used to happen was core Mesa would allow a rasterization-level driver (one that takes a batch of primitives and can render them natively, like the pre-T&L hardware drivers) to hook out the primitive functions. Drivers would supply functions if they could handle rendering with the current state.

If not, they would set them to NULL and mark the IndirectTriangles bitfield with DD_TRI_SW_RASTERIZE and the like. Core Mesa would then plug in a software triangle function, which spits out spans that must be handled by the span-level interface. For hardware drivers, the implementation of the span-level interface typically involves taking a bunch of pixels and sticking them in the framebuffer (or vice versa, for read functions).

In the latest, still-under-active-development Mesa code, if the driver hooks out the primitive functions and it can't handle the primitives natively, it must call the software rasterizer itself. The same thing happens with the software primitive functions, they still spit out spans that are copied to/from the framebuffer (typically).

This is a pretty rough overview, completely from memory (so it may be slightly inaccurate). I'm a little confused by the terminology you use -- it's more a matter of rasterization-level versus dumb-framebuffer-level (or perhaps the original Voodoo Graphics) drivers, rather than rasterize versus render.

-- Gareth
From: Gareth H. <ga...@va...> - 2001-02-06 04:19:06

Keith Whitwell wrote:
>
> I'll chime in without having read the whole thread.
>
> Allowing DD functions to fail is always a bad idea. It blurs the distinction
> between the actual work required in core Mesa to implement a function, and
> the fallback case. These two pieces of code are at very different levels of
> abstraction -- they are actually separated by a whole level. You can get the
> same effect by bundling up the fallback case and giving it a well-known name.
> If the driver can't handle a case, it explicitly calls the fallback code.
>
> This is what 'swrast' is all about. A good example is ctx->DD.Clear, which
> used to be allowed to fail or perform a partial execution. As a result the
> core code needed to either implement software fallbacks, or know about the
> existence of 'swrast' and call into it itself -- this is pretty clunky. Now,
> the same behaviour is simply achieved by requiring the driver to call into
> swrast if it is unable to fulfill the request from core Mesa. To my eye this
> is cleaner, and the requirement on the driver is pretty trivial.

Thank you, Keith. This is exactly what I'm going for.

What I'd like to see is the Tex[Sub]Image* DD calls hand the raw image data to the driver, and what gets hung off the gl_texture_image structure is an image in whatever format the driver wants it in. It is up to the driver to choose the format and to do the conversion, if required. This is why I'd like the conversion utility functions to accept any input, have conversion fastpaths for common cases in a pluggable way (to allow MMX assembly), but have the generic case handling as well.

I think this fits the spirit of 3.5 more than the current approach.

-- Gareth
From: Keith W. <ke...@va...> - 2001-02-06 02:17:45

Gareth Hughes wrote:
> > > The Driver.TexImage* callbacks are now no longer allowed to fail.
> >
> > Because it'll be up to core Mesa to unpack and store the texture image
> > under all circumstances?
> >
> > I'm a bit worried about expecting core Mesa to do all this. Consider
> > a card that uses some sort of tiled texture memory layout. I think it
> > should be up to the driver to handle image conversion for something
> > like this and not burden core Mesa with strange conversion routines.
>
> I'm still toying with a few ideas, but you make a good point. Drivers
> should be able to provide custom texture formats, with their custom
> extractTexel functions, as needed. What I want to avoid is the huge
> performance hit we see from using the utility functions.

I'll chime in without having read the whole thread.

Allowing DD functions to fail is always a bad idea. It blurs the distinction between the actual work required in core Mesa to implement a function, and the fallback case. These two pieces of code are at very different levels of abstraction -- they are actually separated by a whole level. You can get the same effect by bundling up the fallback case and giving it a well-known name. If the driver can't handle a case, it explicitly calls the fallback code.

This is what 'swrast' is all about. A good example is ctx->DD.Clear, which used to be allowed to fail or perform a partial execution. As a result the core code needed to either implement software fallbacks, or know about the existence of 'swrast' and call into it itself -- this is pretty clunky. Now, the same behaviour is simply achieved by requiring the driver to call into swrast if it is unable to fulfill the request from core Mesa. To my eye this is cleaner, and the requirement on the driver is pretty trivial.

Keith
From: Keith W. <ke...@va...> - 2001-02-06 02:12:05
|
Gareth Hughes wrote: > > Gareth Hughes wrote: > > > > > > The Driver.TexImage* callbacks are now no longer allowed to fail. > > > > > > Because it'll be up to core Mesa to unpack and store the texture image > > > under all circumstances? > > > > > > I'm a bit worried about expecting core Mesa to do all this. Consider > > > a card that uses some sort of tiled texture memory layout. I think it > > > should be up to the driver to handle image conversion for something > > > like this and not burden core Mesa with strange conversion routines. > > I should add the following: > > Cards that do tiled textures will generally tile them for you on the > texture upload blit. This is perhaps the only really efficient way to > do tiled textures, as tiling them manually would be a comparatively > expensive operation. This is true in all the cases I've seen, at least, > and my experience may be unfairly biased. My experience (i810) is the same. The i810 handles all tiling transparently to the driver, just imposes a few rules about surfaces not crossing boundaries between tiled/untiled memory. Keith |
From: Gareth H. <ga...@va...> - 2001-02-06 01:26:58
|
Gareth Hughes wrote: > > > > The Driver.TexImage* callbacks are now no longer allowed to fail. > > > > Because it'll be up to core Mesa to unpack and store the texture image > > under all circumstances? > > > > I'm a bit worried about expecting core Mesa to do all this. Consider > > a card that uses some sort of tiled texture memory layout. I think it > > should be up to the driver to handle image conversion for something > > like this and not burden core Mesa with strange conversion routines. I should add the following: Cards that do tiled textures will generally tile them for you on the texture upload blit. This is perhaps the only really efficient way to do tiled textures, as tiling them manually would be a comparatively expensive operation. This is true in all the cases I've seen, at least, and my experience may be unfairly biased. -- Gareth |
From: Gareth H. <ga...@va...> - 2001-02-06 01:02:15
|
Brian Paul wrote: > > > We define a set of these structures: > > > > const struct gl_texture_format gl_tex_format_argb8888 = { > > MESA_FORMAT_ARGB8888, > > 8, 8, 8, 8, 0, 0, 0, > > 4, > > ... > > }; > > > > const struct gl_texture_format gl_tex_format_rgb565 = { > > MESA_FORMAT_RGB565, > > 5, 6, 5, 0, 0, 0, 0, > > 2, > > ... > > }; > > Do these structs initialize the extractTexel function pointer? > I don't think they can. Or the driver would have to override > the defaults. Yes they can. The driver would only need to provide extractTexel functionality if it defined custom texture formats. Consider this: we want to store a texture as ARGB4444. So, we point texImage->TexFormat at gl_texformat_argb4444 (defined as above) and use the utility functions to produce a tightly-packed ARGB4444 texture. The structure includes one or more initialized extractTexel function pointers, which are aimed at function(s) that know how to read ARGB4444 textures. Depending on the mode (border/no border, 2D/3D is what the SI has), we pick one of the texture format's extractTexel functions and plug it into texImage->extractTexel(). I don't see the problem here. > It seems redundant to store the bits/channel info in both structs, but > that's a minor issue. No. The texImage->TexFormat pointer gets initialized to the default format (the one that core Mesa will produce if the driver doesn't request a different format). GL_RGBA textures are gl_texformat_rgba8888 and so on. The driver can change this if needed, as I described. There's an update_teximage_component_sizes() call that copies the sizes from the format structure to the teximage structure. This is nice as it's all done with a single entry point. If you change the format, you just call the update function and you're done. > > The Driver.TexImage* callbacks are now no longer allowed to fail. > > Because it'll be up to core Mesa to unpack and store the texture image > under all circumstances?
> > I'm a bit worried about expecting core Mesa to do all this. Consider > a card that uses some sort of tiled texture memory layout. I think it > should be up to the driver to handle image conversion for something > like this and not burden core Mesa with strange conversion routines. I'm still toying with a few ideas, but you make a good point. Drivers should be able to provide custom texture formats, with their custom extractTexel functions, as needed. What I want to avoid is the huge performance hit we see from using the utility functions. > The idea behind the texutil.c functions was to simply provide some > helper routines for drivers. Drivers can take them or leave them. Indeed. > Core Mesa never directly calls the texutil.c functions at this time. It might be nice to have the driver call a function, saying "I want this texture in ARGB4444 format, go and do it" and have it never fail. It's a small extension to the layered architecture I have in place now, that basically adds the generic case support to the image conversion routines as a kind of fallback path. I'll have to prototype this to see if it's worth it. The current method isn't bad, don't get me wrong. > > The critical difference here is that the driver selects the internal > > format to be used (like MESA_FORMAT_ARGB4444, MESA_FORMAT_AL88 etc), and > > the conversion function is guaranteed to return a tightly-packed image > > with that type. The conversion function is smart enough to pack RGB > > input data into RGBA textures, or alpha/luminance only data into an AL88 > > texture and so on. We do not need extra internal types to handle this. > > Are you saying something new here? This is the way things currently work. Yes. I've removed the F8_R8_G8_B8 format. We don't need it, or formats like it. I've also extended the accepted inputs into a number of formats to make it useful to more drivers, but this is a lesser point.
> > The software rasterizer uses the texFormat->extractTexel()-like > > functions to extract individual or blocks of texels. The SI has a > > number of texel extraction functions per format, to support textures > > with and without borders and so on. The correct one would be plugged > > into texImage->extract(). This means that the image can be converted > > once only, stored in a hardware-friendly format, and interacted with by > > the software rasterizer. I think this is nicer than having to unconvert > > the entire image when there's a software fallback. > > I like this too. The trick is not penalizing the performance of software > Mesa (like the Xlib driver). I think that the optimized textured triangle > functions (for example) could detect when the texture image is in a > simple GLubyte, GL_RGB format and directly address texels. It would be > the unoptimized cases, like mipmapped textures, where we might be > incurring a new performance hit. Even that could probably be avoided > with some extra code. And as you say, the driver can store the texture in weird and wonderful ways and there's still a consistent interface for accessing the internals. > I hadn't done any work on this yet. We can do it together. > > I think the first things to do are: > > 1. Add the ExtractTexel() function pointer to the gl_texture_image struct. > > 2. Remove the Driver.GetTexImage() code and write new code to use the > ExtractTexel() function. > > I'll need some clarifications on the issues above before doing more. My highest priority at the moment in relation to this work is to move the Radeon driver over to this interface. Perhaps we can partition the work after that's done. -- Gareth |
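[Editor's note: a minimal sketch of the format-descriptor scheme under discussion -- constant format structs carrying both the channel layout and texel-extraction functions, with the per-image struct pointing at one of them so the software rasterizer can read texels without knowing how the driver stored them. Struct and field names follow the mail loosely; they are not the real Mesa 3.5 declarations.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative format descriptor: channel sizes plus extract function(s).
 * A real descriptor would carry one extract function per mode
 * (border/no border, 2D/3D, as in the SI). */
struct gl_texture_format {
    int format_id;                               /* e.g. MESA_FORMAT_* */
    int red_bits, green_bits, blue_bits, alpha_bits;
    int texel_bytes;
    uint32_t (*extract_no_border)(const void *texels, int i);
};

/* Reads one ARGB4444 texel, expanding each 4-bit channel to 8 bits
 * and returning a packed ARGB8888 value. */
static uint32_t extract_argb4444(const void *texels, int i)
{
    uint16_t t = ((const uint16_t *)texels)[i];
    uint32_t a = (t >> 12) & 0xF, r = (t >> 8) & 0xF;
    uint32_t g = (t >> 4) & 0xF,  b = t & 0xF;
    return (a * 0x11u << 24) | (r * 0x11u << 16) | (g * 0x11u << 8) | (b * 0x11u);
}

/* One constant descriptor per format, with its extract function(s)
 * already initialized -- the driver only supplies its own for custom
 * formats. */
static const struct gl_texture_format tex_format_argb4444 = {
    /* format_id */ 1,
    4, 4, 4, 4,
    /* texel_bytes */ 2,
    extract_argb4444,
};

/* Per-image struct: points at a format, and at the extract function
 * chosen for its mode, so swrast has a uniform way to fetch texels. */
struct gl_texture_image {
    const struct gl_texture_format *TexFormat;
    uint32_t (*extractTexel)(const void *texels, int i);
    const void *Data;
};
```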
From: Keith W. <ke...@va...> - 2001-02-05 22:46:07
|
Dieter Nützel wrote: > > Hello Keith, > > I've tried to compile the DRI CVS mesa-3-5-branch and got this: > > ln -s /opt/Mesa/src/vtxfmt.c vtxfmt.c > rm -f vtxfmt.h > ln -s /opt/Mesa/src/vtxfmt.h vtxfmt.h > make[4]: *** No rule to make target `/opt/Mesa/src/vtxfmt_tmp.h', needed by > `vtxfmt_tmp.h'. Stop. > make[4]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL/mesa/src' > make[3]: *** [includes] Error 2 > make[3]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib/GL' > make[2]: *** [includes] Error 2 > make[2]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc/lib' > make[1]: *** [includes] Error 2 > make[1]: Leaving directory `/tmp/INSTALL/SOURCE/dri/xc/xc' > make: *** [World] Error 2 > > Mesa CVS taken today. I updated the DRI branch on Sunday to fix this. I assume from your message that you understand that you need an external checkout of Mesa CVS. In addition you should know that only the i810 and radeon drivers are working at this point. Keith |
From: rt <rt...@um...> - 2001-02-05 22:28:03
|
do PointFunc, QuadFunc, LineFunc and the like provide for accelerated rasterization, or accelerated rendering, or both? it looks like the 3Dfx driver accelerates the rendering - is this correct? the xmesa driver, on the other hand, looks like it is concerned with rasterization. if they provide for both, how does one get them to provide for accelerated rendering working with a list of vertices? thanks in advance, rt |