From: Corbin S. <mos...@gm...> - 2010-03-14 08:26:46
|
On Sat, Mar 13, 2010 at 3:13 AM, Peter Dolding <oi...@gm...> wrote: > http://ccache.samba.org/ is where part of this idea comes from. > > > Speeding up glsl conversion to gpu code will make things more effective. > There is no point running the glsl to native gpu conversions more times than > that are require particularly on devices or anything that is depending on > battery life. > > This brings me to the second half of the idea common storage framework. > > Directory struct that comes to mind for me. > > /usr/shared/galuim3d/<target>/<application name>/<shader>/<version as > filename> > > Target would contain glsl for like the raw glsl and like R300 for card > particular implementations as per a list. > > Of course version of compiler would have to be stuck at the start of the > pre built GPU code and checked on load. If out of date rebuild. > > Then like a opengl extension to request shared glsl code access. > galuim3dloader(application,shader,version) direct. > galuim3dloaderlatest(application,shader) just to load the newest version of > that shader. > > Few advantages of common glsl storage applications able to share glsl code > like they can share libraries making it simple to implement fancy features > on the gpu. Able to reduce how much glsl code has to be built to > particular cards. And the possibility of allowing applications to migrate > from card to card of different types without issues. Reason shader > information commonally exposed. > > Same kind of caching for opencl would also be good. No point wasting > cpu/gpu time running compilers when we can cache or store the results. > > Of course these ideas really were not possible to implement without access > to raw GPU native code. Already done, at least the parts that make sense. We have a system called CSO (Constant State Objects) that caches native shader objects. 
We don't store them as files for a handful of reasons; the largest reason is that shaders need to be recompiled depending on other state inside the driver. -- Corbin Simpson <Mos...@gm...> |
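Corbin's point — that the same shader source can require different native code depending on other driver state, so the cache must be keyed on state and not just on the source text — can be sketched in plain C. This is an illustrative toy, not the actual Mesa CSO implementation; all names (`cso_entry`, `native_shader`, the single `state_word`) are hypothetical stand-ins.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical compiled-shader handle; not a real Gallium type. */
struct native_shader { uint32_t id; };

/* Cache entry keyed on (source hash, state word).  The same GLSL source
 * can compile to different native code under different driver state,
 * which is why a plain on-disk source->binary cache is not enough. */
struct cso_entry {
    uint64_t key;
    struct native_shader *shader;
    struct cso_entry *next;
};

#define CSO_BUCKETS 64
static struct cso_entry *cso_table[CSO_BUCKETS];

/* FNV-1a hash of the shader source text. */
static uint64_t hash_source(const char *src)
{
    uint64_t h = 1469598103934665603ull;
    for (; *src; src++) {
        h ^= (unsigned char)*src;
        h *= 1099511628211ull;
    }
    return h;
}

static uint64_t make_key(const char *src, uint32_t state_word)
{
    /* Fold the relevant driver state into the key. */
    return hash_source(src) ^ ((uint64_t)state_word << 32);
}

struct native_shader *cso_lookup(const char *src, uint32_t state_word)
{
    uint64_t key = make_key(src, state_word);
    for (struct cso_entry *e = cso_table[key % CSO_BUCKETS]; e; e = e->next)
        if (e->key == key)
            return e->shader;
    return NULL;
}

void cso_insert(const char *src, uint32_t state_word, struct native_shader *s)
{
    uint64_t key = make_key(src, state_word);
    struct cso_entry *e = malloc(sizeof(*e));
    e->key = key;
    e->shader = s;
    e->next = cso_table[key % CSO_BUCKETS];
    cso_table[key % CSO_BUCKETS] = e;
}
```

The state word is the crux: two lookups with identical source but different state must hit different entries, which is exactly the property a file-per-shader layout would lose.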
From: Chia-I Wu <ol...@gm...> - 2010-03-14 06:53:08
|
On Fri, Mar 12, 2010 at 1:12 PM, Chia-I Wu <ol...@gm...> wrote: > On Fri, Mar 12, 2010 at 12:20 PM, Jakob Bornecrantz > <wal...@gm...> wrote: >> On Fri, Mar 12, 2010 at 3:00 AM, Chia-I Wu <ol...@gm...> wrote: >>> On Thu, Mar 11, 2010 at 12:15 PM, Chia-I Wu <ol...@gm...> wrote: >>>> This patch series adds st_api interface to st/mesa and st/vega, and >>>> switch st/egl and st/glx from st_public interface to the new interface. >>> I've pushed most of the this patch series to gallium-st-api. I'd like to have >>> this topic branch focus on the switch of st/egl and st/glx to use st_api. >>> Further works, such as the switch of st/dri, EGLImage extensions will happen >>> directly in master or some other topic branches, whichever suits better. >>> >>> The implementations of the new interfaces (st_api, st_framebuffer_iface) are >>> isolated in new files in each state tracker. The isolation makes it easier to >>> locate the changes. But more importantly, unlike the rest of a state tracker, >>> the interfaces might be called from different threads. I used whatever >>> existing mechanisms available to protect those callbacks, but when there is no >>> such mechanism, I ignored the issue mostly. >>> >>> I have one open question so far. With st_api, co state trackers no longer have >>> access to pipe_contexts. The pipe_contexts are usually used to implement >>> >>> * glXCopySubBufferMESA: copy a region of the back buffer to the front >>> * eglCopyBuffers: copy the render buffer of a surface to a pixmap >>> * SwapBuffers and FlushFront in DRI1 >>> >>> The copying is a framebuffer operation and it does not make sense to steal a >>> random API context for the copying. Remember that the co state trackers always >>> own the pipe_textures of a drawable in st_api. One way to solve the problem is >>> to let the co state trackers own pipe_contexts of their own. 
A co state tracker >>> may create its pipe_context (and probably a blitter context) on demand when >>> glXCopySubBufferMESA or eglCopyBuffers is called and do the copying directly. >>> >>> But as all the 3 cases need to do a pipe_screen::flush_frontbuffer after the >>> copying, a better approach might be to rename and enhance >>> pipe_screen::flush_frontbuffer to support them directly, or a mix of both >>> approaches. >> Hmm maybe flush_frontbuffer should be made into a context function, or >> take a pipe_context. But that would probably require >> st_framebuffer_iface::flush_front to take a st_context as well. Not >> that I have anything against that. >> For eglCopyBuffers the manager probably need a pipe_context for >> itself. If you don't want to randomly steal one that is currently >> bound, even then it is not guarantied to be any context created, so as >> a fallback you will probably always have a context around. I've changed st/egl and st/glx to create a pipe_context as needed to copy between pipe_textures. Only pipe_context::surface_copy is used. It seems to be the easiest way to do things right now. As a result, gallium-st-api passes as many piglit quick tests as the master branch does! > I am thinking change pipe_screen::flush_frontbuffer to > > void > pipe_screen::display_texture(struct pipe_screen *, > struct pipe_texture *, > void *winsys_drawable_handle, > unsigned x, unsigned y, > unsigned width, unsigned height); > > The pipe_texture must be created with DISPLAY_TARGET or SCANOUT. The type of > the opaque drawable handle is pipe_screen specific and is known to the co state > tracker, or whoever, created the pipe_screen. > > This solves the problem with glXCopySubBufferMESA and eglCopyBuffers. What > remain to be done for the DRI1 case is to define the DRI1 displaytarget and > switch st/dri to use it. > > Down to the lowest level, DRI1 displaytarget still faces the problem of > displaying a pipe_texture, given struct dri1_api. 
In the case > dri1_api::front_srf_locked is supported, it still needs to copy the texture. > > Therefore, a mixed approach seems be the best. With the change to > pipe_screen::flush_frontbuffer, glXCopySubBufferMESA and eglCopyBuffers do not > need a pipe_context. And a pipe_context will be created by st/dri when DRI1 is > used. This mixed approach requires less work. As DRI1 is phasing out, I think > it is a sensible approach. > > This is just some ideas. The switch of st/dri can be delayed until > gallium-st-api is merged. But if you are interested, I also have a local > branch that switches st/dri to use st_api, but only the DRI2 part. It is > functioning, but the patch is a little bit dirty.. > > -- > ol...@Lu... > -- ol...@Lu... |
From: Chia-I Wu <ol...@gm...> - 2010-03-14 06:46:09
|
On Fri, Mar 12, 2010 at 10:08 PM, José Fonseca <jfo...@vm...> wrote: > On Thu, 2010-03-11 at 21:12 -0800, Chia-I Wu wrote: >> On Fri, Mar 12, 2010 at 12:20 PM, Jakob Bornecrantz >> <wal...@gm...> wrote: >> > On Fri, Mar 12, 2010 at 3:00 AM, Chia-I Wu <ol...@gm...> wrote: >> >> On Thu, Mar 11, 2010 at 12:15 PM, Chia-I Wu <ol...@gm...> wrote: >> >>> This patch series adds st_api interface to st/mesa and st/vega, and >> >>> switch st/egl and st/glx from st_public interface to the new interface. >> >> I've pushed most of the this patch series to gallium-st-api. I'd like to have >> >> this topic branch focus on the switch of st/egl and st/glx to use st_api. >> >> Further works, such as the switch of st/dri, EGLImage extensions will happen >> >> directly in master or some other topic branches, whichever suits better. >> >> >> >> The implementations of the new interfaces (st_api, st_framebuffer_iface) are >> >> isolated in new files in each state tracker. The isolation makes it easier to >> >> locate the changes. But more importantly, unlike the rest of a state tracker, >> >> the interfaces might be called from different threads. I used whatever >> >> existing mechanisms available to protect those callbacks, but when there is no >> >> such mechanism, I ignored the issue mostly. >> >> >> >> I have one open question so far. With st_api, co state trackers no longer have >> >> access to pipe_contexts. The pipe_contexts are usually used to implement >> >> >> >> * glXCopySubBufferMESA: copy a region of the back buffer to the front >> >> * eglCopyBuffers: copy the render buffer of a surface to a pixmap >> >> * SwapBuffers and FlushFront in DRI1 >> >> >> >> The copying is a framebuffer operation and it does not make sense to steal a >> >> random API context for the copying. Remember that the co state trackers always >> >> own the pipe_textures of a drawable in st_api. One way to solve the problem is >> >> to let the co state trackers own pipe_contexts of their own. 
A co state tracker >> >> may create its pipe_context (and probably a blitter context) on demand when >> >> glXCopySubBufferMESA or eglCopyBuffers is called and do the copying directly. >> >> >> >> But as all the 3 cases need to do a pipe_screen::flush_frontbuffer after the >> >> copying, a better approach might be to rename and enhance >> >> pipe_screen::flush_frontbuffer to support them directly, or a mix of both >> >> approaches. >> > Hmm maybe flush_frontbuffer should be made into a context function, or >> > take a pipe_context. But that would probably require >> > st_framebuffer_iface::flush_front to take a st_context as well. Not >> > that I have anything against that. >> > For eglCopyBuffers the manager probably need a pipe_context for >> > itself. If you don't want to randomly steal one that is currently >> > bound, even then it is not guarantied to be any context created, so as >> > a fallback you will probably always have a context around. >> I am thinking change pipe_screen::flush_frontbuffer to >> >> void >> pipe_screen::display_texture(struct pipe_screen *, >> struct pipe_texture *, >> void *winsys_drawable_handle, >> unsigned x, unsigned y, >> unsigned width, unsigned height); >> >> The pipe_texture must be created with DISPLAY_TARGET or SCANOUT. The type of >> the opaque drawable handle is pipe_screen specific and is known to the co state >> tracker, or whoever, created the pipe_screen. > I'd support this. This is pretty much what I'd need for WGL too. > Regarding being pipe_screen vs pipe_context I don't have a strong > opinion about this, but I believe making it a context operation is > adequate. > Presents are translated either to a series of blits or a fullscreen > flip. Certain hardware deos not have dedicate blit engines, so having a > context for doing the blits via 3d is very handy. > On Windows XP presents don't require context. 
On Vista and later > presents already require a rendering context (I typically use the last > one that rendered to a surface). On X11 with software rasterizer, flush_frontbuffer (I use flush_frontbuffer in this mail, but it also applies to display_texture) is implemented through X(Shm)PutImage. When DRI2 is used, it should be implemented through DRI2CopyRegion. The required parameters for calling X(Shm)PutImage or DRI2CopyRegion are provided by screen-specific winsys_drawable_handle. The callback should generally be implemented with window system APIs. But, there is no efficient way to implement the callback without a pipe_context when DRI1 is used. DRI1 needs to copy the textures (fake front or back) to the real front buffer. Passing pipe_context as a member of winsys_drawable_handle seems to solve the problem, but I am not sure if it is considered a fix or a hack though. flush_frontbuffer is only implemented on software pipe drivers. What DRI1 faces might exist on other window systems (Windows?). It seems the callback cannot be generally implemented without a pipe_context. It should be either move the callback to pipe_context, or keep it in pipe_screen and pass a pipe_context as a member of winsys_drawable_handle for window systems that need it. I am leaning toward the latter right now. > Furthermore, now it is very easy for any state tracker entity to create > a dummy context for if needed, via pipe_screen::context_create(). -- ol...@Lu... |
From: <bug...@fr...> - 2010-03-14 05:22:57
|
http://bugs.freedesktop.org/show_bug.cgi?id=27065

Chia-I Wu <ol...@gm...> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
           Status  |NEW     |ASSIGNED

--- Comment #1 from Chia-I Wu <ol...@gm...> 2010-03-13 21:22:46 PST ---
Does it help to run "make clean" before "make"? Or simply

  $ rm -f src/gallium/state_trackers/egl/depend*

before "make"?

-- Configure bugmail: http://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug. |
From: Jeff S. <why...@ya...> - 2010-03-14 04:20:43
|
>From: Dan Nicholson <dbn...@gm...> >To: Jeff Smith <why...@ya...> >Cc: Brian Paul <br...@vm...>; David Miller <da...@da...>; "mes...@li..." <mes...@li...> >Sent: Sat, March 13, 2010 2:02:17 PM >Subject: Re: [Mesa3d-dev] xdemos build breakage... > >> If we just use X_{CFLAGS,LIBS}, then we don't have to do the dance >> with X11_{CFLAGS,LIBS} and it will work for manual overrides whether >> people have pkg-config or not. So, I'd suggest changing the first >> argument to PKG_CHECK_MODULES to just X and using X_{CFLAGS,LIBS} >> everywhere else. > >I went ahead and committed the patch with these changes since I needed >it for something else. See 8d86d395dcf6a5f192b6987485bb7aef49f1fefc. Except that AC_PATH_XTRA returns X_LIBS without '-lX11', while PKG_CHECK_MODULES returns X_LIBS with it. In the attached patch I add '-lX11' to the former. Of course, with '-lX11' as part of X_LIBS, the explicit '-lX11' can be removed from the places that use X_LIBS. -- Jeff |
From: <bug...@fr...> - 2010-03-14 03:50:15
|
http://bugs.freedesktop.org/show_bug.cgi?id=27065

           Summary: build failure: No rule to make target
                    `../../../../src/gallium/drivers/softpipe/sp_winsys.h',
                    needed by `x11/native_ximage.o'.
           Product: Mesa
           Version: git
          Platform: x86 (IA32)
        OS/Version: Linux (All)
            Status: NEW
          Severity: critical
          Priority: medium
         Component: Other
        AssignedTo: mes...@li...
        ReportedBy: Dav...@Mc...

I updated master with git pull rebase, ran autogen.sh and make. The build
dies with:

gmake[4]: Entering directory
`/home/ronis/Project/notar/X/mesa/src/gallium/state_trackers/egl'
gmake[4]: *** No rule to make target
`../../../../src/gallium/drivers/softpipe/sp_winsys.h', needed by
`x11/native_ximage.o'. Stop.

-- Configure bugmail: http://bugs.freedesktop.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug. |
From: Dan N. <dbn...@gm...> - 2010-03-13 20:03:41
|
On Fri, Mar 12, 2010 at 8:23 AM, Chris Ball <cj...@la...> wrote: > Hi, > > http://tinderbox.x.org/builds/2010-03-12-0016/logs/libGL/#build > > In file included from xlib_sw_winsys.c:42: > ../../../../src/gallium/include/state_tracker/xlib_sw_winsys.h:5:22: > error: X11/Xlib.h: No such file or directory > > (This regression appears to be new today. I'm building without a > system libX11 installed.) I'm glad you're doing that. I don't think mesa does a very good job of using the X packages the user said they wanted to use. Or, at least it doesn't do it consistently. Anyway, I committed something to hopefully fix it. We'll see on the next run. -- Dan |
From: Dan N. <dbn...@gm...> - 2010-03-13 20:02:28
|
On Sat, Mar 13, 2010 at 11:18 AM, Dan Nicholson <dbn...@gm...> wrote: > On Fri, Mar 12, 2010 at 5:25 PM, Jeff Smith <why...@ya...> wrote: >>>From: Dan Nicholson <dbn...@gm...> >> >>>To: Brian Paul <br...@vm...> >>>Cc: Jeff Smith <why...@ya...>; David Miller <da...@da...>; "mes...@li..." <mes...@li...> >>>Sent: Fri, March 12, 2010 10:51:29 AM >>>Subject: Re: [Mesa3d-dev] xdemos build breakage... >>> >>>>>That's not really the right thing, though. You're assuming that I have >>>>>libX11 in the same libdir as I'm installing to and I want to use it. >>>>>The fact is that configure uses pkg-config to check for x11 and other >>>>>libraries needed to link the demos. It certainly was working before >>>>>without requiring hardcoding things into the Makefiles. >>>> >>>> Oops, I didn't see your reply, Dan. I already committed Jeff's patch. If you have better fix, please revert. >>> >>>No problem. I'll look at it a little later and see if there's more of >>>a general fix from autoconf. I imagine it's not the last time we'll >>>see build breakage in the demos. >>> >>>-- >>>Dan >> >> >> Dan, >> Can you please review this patch? I believe it handles the case described. > > Yeah, I think this is a better way to handle it. Still not 100% > foolproof, and we've still got -lpthread kludged in there, but this > should work for more people. Some comments below. 
> > diff --git a/configs/autoconf.in b/configs/autoconf.in
> > index bf34f3b..66c1ee4 100644
> > --- a/configs/autoconf.in
> > +++ b/configs/autoconf.in
> > @@ -24,6 +24,8 @@ RADEON_CFLAGS = @RADEON_CFLAGS@
> >  RADEON_LDFLAGS = @RADEON_LDFLAGS@
> >  INTEL_LIBS = @INTEL_LIBS@
> >  INTEL_CFLAGS = @INTEL_CFLAGS@
> > +X11_LIBS = @X11_LIBS@
> > +X11_CFLAGS = @X11_CFLAGS@
> >
> >  # Assembler
> >  MESA_ASM_SOURCES = @MESA_ASM_SOURCES@
> > diff --git a/configure.ac b/configure.ac
> > index c5ff8dc..ccc3107 100644
> > --- a/configure.ac
> > +++ b/configure.ac
> > @@ -547,8 +547,16 @@ else
> >      x11_pkgconfig=no
> >  fi
> >  dnl Use the autoconf macro if no pkg-config files
> > -if test "$x11_pkgconfig" = no; then
> > +if test "$x11_pkgconfig" = yes; then
> > +    PKG_CHECK_MODULES([X11], [x11])
> > +else
> >      AC_PATH_XTRA
> > +    if test -z "$X11_CFLAGS"; then
> > +        X11_CFLAGS="$X_CFLAGS"
> > +    fi
> > +    if test -z "$X11_LIBS"; then
> > +        X11_LIBS="$X_LIBS -lX11"
> > +    fi
> >  fi
>
> If we just use X_{CFLAGS,LIBS}, then we don't have to do the dance
> with X11_{CFLAGS,LIBS} and it will work for manual overrides whether
> people have pkg-config or not. So, I'd suggest changing the first
> argument to PKG_CHECK_MODULES to just X and using X_{CFLAGS,LIBS}
> everywhere else.

I went ahead and committed the patch with these changes since I needed it for something else. See 8d86d395dcf6a5f192b6987485bb7aef49f1fefc. Thanks.

-- Dan |
From: Dan N. <dbn...@gm...> - 2010-03-13 19:18:13
|
On Fri, Mar 12, 2010 at 5:25 PM, Jeff Smith <why...@ya...> wrote: >>From: Dan Nicholson <dbn...@gm...> > >>To: Brian Paul <br...@vm...> >>Cc: Jeff Smith <why...@ya...>; David Miller <da...@da...>; "mes...@li..." <mes...@li...> >>Sent: Fri, March 12, 2010 10:51:29 AM >>Subject: Re: [Mesa3d-dev] xdemos build breakage... >> >>>>That's not really the right thing, though. You're assuming that I have >>>>libX11 in the same libdir as I'm installing to and I want to use it. >>>>The fact is that configure uses pkg-config to check for x11 and other >>>>libraries needed to link the demos. It certainly was working before >>>>without requiring hardcoding things into the Makefiles. >>> >>> Oops, I didn't see your reply, Dan. I already committed Jeff's patch. If you have better fix, please revert. >> >>No problem. I'll look at it a little later and see if there's more of >>a general fix from autoconf. I imagine it's not the last time we'll >>see build breakage in the demos. >> >>-- >>Dan > > > Dan, > Can you please review this patch? I believe it handles the case described. Yeah, I think this is a better way to handle it. Still not 100% foolproof, and we've still got -lpthread kludged in there, but this should work for more people. Some comments below. 
diff --git a/configs/autoconf.in b/configs/autoconf.in
index bf34f3b..66c1ee4 100644
--- a/configs/autoconf.in
+++ b/configs/autoconf.in
@@ -24,6 +24,8 @@ RADEON_CFLAGS = @RADEON_CFLAGS@
 RADEON_LDFLAGS = @RADEON_LDFLAGS@
 INTEL_LIBS = @INTEL_LIBS@
 INTEL_CFLAGS = @INTEL_CFLAGS@
+X11_LIBS = @X11_LIBS@
+X11_CFLAGS = @X11_CFLAGS@

 # Assembler
 MESA_ASM_SOURCES = @MESA_ASM_SOURCES@
diff --git a/configure.ac b/configure.ac
index c5ff8dc..ccc3107 100644
--- a/configure.ac
+++ b/configure.ac
@@ -547,8 +547,16 @@ else
     x11_pkgconfig=no
 fi
 dnl Use the autoconf macro if no pkg-config files
-if test "$x11_pkgconfig" = no; then
+if test "$x11_pkgconfig" = yes; then
+    PKG_CHECK_MODULES([X11], [x11])
+else
     AC_PATH_XTRA
+    if test -z "$X11_CFLAGS"; then
+        X11_CFLAGS="$X_CFLAGS"
+    fi
+    if test -z "$X11_LIBS"; then
+        X11_LIBS="$X_LIBS -lX11"
+    fi
 fi

If we just use X_{CFLAGS,LIBS}, then we don't have to do the dance with X11_{CFLAGS,LIBS} and it will work for manual overrides whether people have pkg-config or not. So, I'd suggest changing the first argument to PKG_CHECK_MODULES to just X and using X_{CFLAGS,LIBS} everywhere else.

-- Dan |
From: Maxim L. <max...@gm...> - 2010-03-13 18:29:05
|
On Thu, 2010-03-11 at 19:05 +0200, Maxim Levitsky wrote: > On Wed, 2010-03-10 at 13:04 +0100, Florian Mickler wrote: > > On Sun, 07 Mar 2010 12:39:15 +0200 > > Maxim Levitsky <max...@gm...> wrote: > > > > > On Sat, 2010-03-06 at 23:55 +0100, Florian Mickler wrote: > > > > On Sun, 07 Mar 2010 00:24:24 +0200 > > > > Maxim Levitsky <max...@gm...> wrote: > > > > > > > > > On Sun, 2010-03-07 at 00:05 +0200, Maxim Levitsky wrote: > > > > > > On Sat, 2010-03-06 at 22:35 +0100, Florian Mickler wrote: > > > > > > > On Sat, 06 Mar 2010 18:02:51 +0100 > > > > > > > Stephan Raue <mai...@op...> wrote: > > > > > > > > > > > > > > > looks this like my problems that i have reported some days ago with > > > > > > > > Subject "Problem using an Mesa based App with recent > > > > > > > > xorg/mesa/xf86-video-intel (loop?)" to Mesa-dev, xorg and intel-gfx list? > > > > > > > > > > > > > > > > i have still this issue, but i dont know what you need for informations > > > > > > > > to fix the issues? > > > > > > > > > > > > > > > > with ati driver i dont have problems, only here with intel driver on my > > > > > > > > Thinkpad X200t with intel HDA Graphics card > > > > > > > > > > > > > > > > > > > > I now see that compiz hangs in same way. > > > > > > > > > > > > Attached are backtrace of the compiz, and backtrace of etracer which did > > > > > > start full screen but became hung on resolution change. > > > > > > > > > > > > Best regards, > > > > > > Maxim Levitsky > > > > > > > > > > Other info that might help: > > > > > > > > > > I took a look at X and found that it was in normal waiting state > > > > > sleeping waiting for input. > > > > > > > > > > Also, I found when 'unstable' mesa would appear to work when I start the > > > > > X while 'stable' one is used. It was compiz. When compiz is running > > > > > using stable mesa, an game does change the resolution 'usualy' without > > > > > hang even if uses unstable mesa. 
> > > > > > > > > > Best regards, > > > > > Maxim Levitsky > > > > > > > > i found that the kernel updates for 2.6.34-rc1 did make the hang time > > > > out... this has to be some vblank issue, i assume... > > > > > > > > > Note that I did try git master of linus tree, and that didn't help with > > > the hang at all (now pulled it again, but don't see any changes in drm > > > code) > > > > > > Best regards, > > > Maxim Levisky > > > > yeah. i'm sorry. My issues vanished with current git versions of > > libdrm,mesa,xserver,xf86-video-intel ... while trying to find out(in > > vain) which part of the stack did fix the issue, i noticed that the > > xserver-patches in jesse's tree was which changed "hung" into > > "timeout"... > > > Note that I updated the stack today, but nothing changed. > > Best regards, > Maxim Levitsky > Also note that mesa master + xserver-1.7-branch work fine too. Now xserver-1.7-branch=5a2b3f36a05d1e0fcfd1b0f85d6584478ba24eda Best regards, Maxim Levitsky |
From: Keith W. <kei...@go...> - 2010-03-13 17:35:51
|
Sounds good to me - fewer driver directories to fix up after changes... Keith On Sat, Mar 13, 2010 at 5:29 PM, Luca Barbieri <lu...@lu...> wrote: > Currently the nv30 and nv40 Gallium drivers are very similar, and > contain about 5000 lines of essentially duplicate code. > > I prepared a patchset (which can be found at > http://repo.or.cz/w/mesa/mesa-lb.git/shortlog/refs/heads/unification+fixes) > which gradually unifies the drivers, one file per the commit. > > A new "nvfx" directory is created, and unified files are put there one by one. > After all patches are applied, the nv30 and nv40 directories are > removed and the only the new nvfx directory remains. > > The first patches unify the engine naming (s/curie/eng3d/g; > s/rankine/eng3d), and switch nv40 to use the NV34TCL_ constants. > Initial versions of this work changed renouveau.xml to create a new > "NVFXTCL" object, but the current version doesn't need any > renouveau.xml modification at all. > > The "unification+fixes" branch referenced above is the one that should > be tested. > The "unification" branch contains just the unification, with no > behavior changes, while "unification+fixes" also fixes swtnl and quad > rendering, allowing to better test the unification. Some cleanups on > top of the unfication are also included. > > That same repository also contains other branches with significant > improvements on top of the unification, but I'm still not proposing > them for inclusion as they need more testing and some fixes. > > While there are some branches in the Mesa repository that would > conflict with this, such branches seem to be popping up continuously > (and this is good!), so waiting until they are merged probably won't > really work. > > The conflicts are minimal anyway and the driver fixes can be very > easily reconstructed over the unified codebase. > > How about merging this? > Any objections? Any comments? 
|
From: Luca B. <lu...@lu...> - 2010-03-13 17:30:00
|
Currently the nv30 and nv40 Gallium drivers are very similar, and contain about 5000 lines of essentially duplicate code. I prepared a patchset (which can be found at http://repo.or.cz/w/mesa/mesa-lb.git/shortlog/refs/heads/unification+fixes) which gradually unifies the drivers, one file per the commit. A new "nvfx" directory is created, and unified files are put there one by one. After all patches are applied, the nv30 and nv40 directories are removed and the only the new nvfx directory remains. The first patches unify the engine naming (s/curie/eng3d/g; s/rankine/eng3d), and switch nv40 to use the NV34TCL_ constants. Initial versions of this work changed renouveau.xml to create a new "NVFXTCL" object, but the current version doesn't need any renouveau.xml modification at all. The "unification+fixes" branch referenced above is the one that should be tested. The "unification" branch contains just the unification, with no behavior changes, while "unification+fixes" also fixes swtnl and quad rendering, allowing to better test the unification. Some cleanups on top of the unfication are also included. That same repository also contains other branches with significant improvements on top of the unification, but I'm still not proposing them for inclusion as they need more testing and some fixes. While there are some branches in the Mesa repository that would conflict with this, such branches seem to be popping up continuously (and this is good!), so waiting until they are merged probably won't really work. The conflicts are minimal anyway and the driver fixes can be very easily reconstructed over the unified codebase. How about merging this? Any objections? Any comments? |
From: Maxim L. <max...@gm...> - 2010-03-13 14:52:49
|
This time I caught it early. 46450c1f3f93bf4dc96696fc7e0f0eb808d9c08a is first bad commit commit 46450c1f3f93bf4dc96696fc7e0f0eb808d9c08a Author: Eric Anholt <er...@an...> Date: Wed Mar 10 17:38:33 2010 -0800 i965: Do FS SLT, SGT, and friends using CMP, SEL instead of CMP, MOV, MOV. :040000 040000 d6abcec74652e20faf81feac8486cfb8ef979494 d5b5c11b472e463525965d9673c0170b0eb206f1 M src (Revert helps restore correct behaviour) This breaks several shaders in my examples folder, that I downloaded from GLSL tutorial site. (http://www.lighthouse3d.com/opengl/glsl/) Attached two example shaders. Best regards, Maxim Levitsky |
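For readers unfamiliar with the opcode being bisected here: SLT ("set on less than") and its friends write 1.0 or 0.0 depending on a comparison. A plain-C model of the two lowering strategies the commit switches between (the old CMP plus two predicated MOVs, versus CMP plus a single SEL) is shown below; both should compute the same value, which is why the regression Maxim reports points at the new code path rather than the idea itself. This is an illustrative scalar model, not the i965 backend code.

```c
/* Scalar model of the fragment-shader SLT opcode:
 *   result = (a < b) ? 1.0 : 0.0
 * The commit in question changed the instruction sequence used to
 * produce this value, not its intended meaning. */

/* Old sequence: CMP sets a flag, then two predicated MOVs. */
float slt_via_movs(float a, float b)
{
    float r = 0.0f;  /* MOV r, 0.0f (the "else" value) */
    if (a < b)       /* CMP sets the predicate */
        r = 1.0f;    /* predicated MOV r, 1.0f */
    return r;
}

/* New sequence: CMP sets the flag, one SEL picks between two sources. */
float slt_via_sel(float a, float b)
{
    return (a < b) ? 1.0f : 0.0f;
}
```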
From: Brian P. <bri...@gm...> - 2010-03-13 14:51:43
|
On Sat, Mar 13, 2010 at 3:20 AM, Maciej Cencora <m.c...@gm...> wrote:
> On Saturday, 13 March 2010 at 01:10:01, Brian Paul wrote:
>> On Fri, Mar 12, 2010 at 12:19 PM, Maciej Cencora <m.c...@gm...> wrote:
>>> Hi all,
>>>
>>> I've got some questions regarding FBOs in mesa. I hope you'll be able
>>> to answer at least some of them.
>>>
>>> The GLcontext structure holds pointers to four gl_framebuffers
>>> (DrawBuffer, ReadBuffer, WinSysDrawBuffer, WinSysReadBuffer) and one
>>> gl_renderbuffer (CurrentRenderbuffer). The gl_framebuffer struct
>>> contains, among other fields: _ColorDrawBuffers[], _ColorReadBuffer,
>>> _DepthBuffer, _StencilBuffer. Now, bearing in mind that r300 wraps
>>> all gl_renderbuffers and gl_texture_objects
>>
>> Do you mean that r300 subclasses gl_renderbuffer and
>> gl_texture_object? That's what I think you mean. The term "wrapping"
>> has a special meaning for renderbuffers (see below).
>
> Yes, r300 subclasses these structs (and additionally wraps texture
> objects with renderbuffers).
>
>>> (they contain the HW buffer objects that I'm interested in), what is
>>> the proper way to get to the read and draw buffers?
>>
>> The current "read" framebuffer is ctx->ReadBuffer. The current
>> drawing framebuffer is ctx->DrawBuffer.
>>
>> Then, if you're reading _colors_ the renderbuffer to use is
>> ctx->ReadBuffer->_ColorReadBuffer. If you want to read _stencil_
>> values, it would be ctx->ReadBuffer->_StencilBuffer. Similarly for
>> Z/depth, etc.
>
> That's what I'm doing, but sometimes it happens that
> ctx->ReadBuffer->_ColorReadBuffer is NULL (see
> drivers/dri/radeon/radeon_tex_copy.c). Is this a legal situation? What
> should I do if the user requested glCopyTexImage or glCopyPixels?

That shouldn't normally happen. Do you have a test program that exhibits it?

If a user specifies a read buffer (like GL_COLOR_ATTACHMENT0) that's
missing from the current FBO, the error
GL_FRAMEBUFFER_INCOMPLETE_READ_BUFFER_EXT should be generated during FBO
validation. In that case _ColorReadBuffer would be NULL.

>> To draw to color buffers (there may be more than one) you'll want
>> ctx->DrawBuffer->_ColorDrawBuffers[]. For stencil it's
>> ctx->DrawBuffer->_StencilBuffer, etc.
>>
>> Renderbuffers typically correspond to a region of video memory. How
>> that works is driver-dependent.
>>
>>> What operations use ctx->ReadBuffer besides glReadPixels,
>>> glCopyPixels and glCopyTex(Sub)Image?
>>
>> glCopyColorTable and maybe some other obscure functions. You got the
>> main ones.
>>
>>> What operations use ctx->DrawBuffer besides rendering operations
>>> (glBegin/glEnd, glDrawElements, glDrawArrays, ...)?
>>
>> That's basically it, plus glClear.
>>
>>> Am I correct that for ctx->DrawBuffer the _ColorReadBuffer field is
>>> unused, and for ctx->ReadBuffer _ColorDrawBuffers is unused?
>>
>> No, they're used - see above. The _ColorReadBuffer field depends on
>> the glReadBuffer() call. The _ColorDrawBuffers[] pointers depend on
>> the glDrawBuffer() and glDrawBuffersARB() functions.
>
> I still don't get it. I see a reason for using _ColorReadBuffer in the
> context of ctx->ReadBuffer but not in the context of ctx->DrawBuffer
> (and similarly for _ColorDrawBuffers).

Suppose I call glDrawBuffer(GL_FRONT_AND_BACK). That means we need to
render into two colorbuffers simultaneously (or four in stereo mode). The
render buffers in question will be found in
ctx->DrawBuffer->_ColorDrawBuffers[], and
ctx->DrawBuffer->_NumColorDrawBuffers = 2. If we didn't have those derived
fields/pointers we'd have to dig around through other FBO state to find
where we're supposed to draw to.

-Brian
From: Luca B. <luc...@gm...> - 2010-03-13 14:47:13
>> However, I still have doubts on the semantics of max_index in
>> pipe_vertex_buffer.
>> Isn't it better to _always_ set it to a valid value, even if it is
>> just (vb->buffer->size - vb->buffer_offset) / vb->stride ?
>
> Yes, indeed. Sorry I must have missed that point in the earlier
> emails. I would have thought that was what it was *always* set to
> (and thus perhaps redundant).

This doesn't seem to be the case, and if I understand correctly, it is the
source of the issues Corbin is facing.

Quoting Corbin:
> - I can't just use the vertex buffer sizes, because they're set to ~0

I assume he was referring to max_index.
From: Keith W. <kei...@go...> - 2010-03-13 14:30:42
On Sat, Mar 13, 2010 at 1:30 PM, Luca Barbieri <luc...@gm...> wrote:
>> Having user-buffers with undefined size establishes a connection
>> inside the driver between two things which could previously be fully
>> understood separately - the vertex buffer is now no longer fully
>> defined without reference to an index buffer. Effectively the user
>> buffers become just unqualified pointers and we are back to the GL
>> world pre-VBOs.
>
> Yes, indeed.
>
>> In your examples, scanning and uploading (or transforming) per-index
>> is only sensible in special cases - eg where there are very few
>> indices or sparse access to a large vertex buffer that hasn't already
>> been uploaded/transformed. But you can't even know if the vertex
>> buffer is sparse until you know how big it is, ie. what max_index
>> is...
>
> Reconsidering it, it does indeed seem sensible to always scan the
> index buffer for min/max if any elements are in user buffers: if we
> discover the access is dense we should upload, and otherwise do the
> index lookup in software (or perhaps remap the indices, but this isn't
> necessarily a good idea).
>
> And if we are uploading buffers, we must always do the scan to avoid
> generating segfaults due to out-of-bounds reads.
> Hardware without draw_elements support could do without the scan, but
> I think all Gallium-capable hardware supports it.
>
> So it seems the only case where we don't strictly need the scan is
> swtnl, and there the performance loss due to scanning is probably
> insignificant compared to the cost of actually transforming vertices.
>
> So, yes, I think you are right and the current solution is the best.
>
> However, I still have doubts on the semantics of max_index in
> pipe_vertex_buffer.
> Isn't it better to _always_ set it to a valid value, even if it is
> just (vb->buffer->size - vb->buffer_offset) / vb->stride ?

Yes, indeed. Sorry, I must have missed that point in the earlier emails.
I would have thought that was what it was *always* set to (and thus
perhaps redundant).

> It seems this would solve Corbin's problem and make a better
> interface, following your principle of having well-defined vertex
> buffers.
> The only cost is doing up to 8/16 divisions, but the driver may need
> to do them anyway.
>
> Perhaps we could amortize this by only creating/setting the
> pipe_vertex_buffers/elements on VBO/array change instead of on every
> draw call, as we seem to be doing now, in cases where it is possible.

Indeed, that would be an improvement on the current situation.

> An alternative option could be to remove it, and have the driver do
> the computation itself instead (and use draw_range_elements to pass a
> user-specified or scanned max value).

I think I prefer the first approach, to try and reduce rework everywhere.

Keith
From: Marcin B. <ma...@gm...> - 2010-03-13 13:35:52
2010/3/13 Karl Schultz <kar...@gm...>:
> In some locales, a comma can be used as the decimal separator. That is,
> 5,2 is a number between 5 and 6. (I think I have that right.) I would
> guess that the shader language, like C, wouldn't allow this form in
> code. So it makes sense to force the C locale when parsing numbers from
> shader source code, as the code does above.
>
> strtof doesn't show up until C99, and not all compilers support it,
> including the MSFT Windows compilers. Ian says that all usages of this
> function want a float anyway, so we may end up with something like:
>
> float
> _mesa_strtof( const char *s, char **end )
> {
> #ifdef _GNU_SOURCE
>    static locale_t loc = NULL;
>    if (!loc) {
>       loc = newlocale(LC_CTYPE_MASK, "C", NULL);
>    }
>    return (float) strtod_l(s, end, loc);
> #else
>    return (float) strtod(s, end);
> #endif
> }
>
> And then change all _mesa_strtod calls to _mesa_strtof.
>
> If Ian doesn't care for the casts here, then I'm fine with silencing
> warnings in Visual Studio with a compiler option.

The attached patch uses strtof when it is available.
From: Luca B. <luc...@gm...> - 2010-03-13 13:32:55
> Having user-buffers with undefined size establishes a connection
> inside the driver between two things which could previously be fully
> understood separately - the vertex buffer is now no longer fully
> defined without reference to an index buffer. Effectively the user
> buffers become just unqualified pointers and we are back to the GL
> world pre-VBOs.

Yes, indeed.

> In your examples, scanning and uploading (or transforming) per-index
> is only sensible in special cases - eg where there are very few
> indices or sparse access to a large vertex buffer that hasn't already
> been uploaded/transformed. But you can't even know if the vertex
> buffer is sparse until you know how big it is, ie. what max_index
> is...

Reconsidering it, it does indeed seem sensible to always scan the index
buffer for min/max if any elements are in user buffers: if we discover the
access is dense we should upload, and otherwise do the index lookup in
software (or perhaps remap the indices, but this isn't necessarily a good
idea).

And if we are uploading buffers, we must always do the scan to avoid
generating segfaults due to out-of-bounds reads.
Hardware without draw_elements support could do without the scan, but I
think all Gallium-capable hardware supports it.

So it seems the only case where we don't strictly need the scan is swtnl,
and there the performance loss due to scanning is probably insignificant
compared to the cost of actually transforming vertices.

So, yes, I think you are right and the current solution is the best.

However, I still have doubts on the semantics of max_index in
pipe_vertex_buffer.
Isn't it better to _always_ set it to a valid value, even if it is just
(vb->buffer->size - vb->buffer_offset) / vb->stride ?

It seems this would solve Corbin's problem and make a better interface,
following your principle of having well-defined vertex buffers.
The only cost is doing up to 8/16 divisions, but the driver may need to do
them anyway.

Perhaps we could amortize this by only creating/setting the
pipe_vertex_buffers/elements on VBO/array change instead of on every draw
call, as we seem to be doing now, in cases where it is possible.

An alternative option could be to remove it, and have the driver do the
computation itself instead (and use draw_range_elements to pass a
user-specified or scanned max value).
From: Corbin S. <mos...@gm...> - 2010-03-13 12:25:35
On Fri, Mar 12, 2010 at 6:53 PM, Roland Scheidegger <sr...@vm...> wrote:
> On 13.03.2010 03:20, Corbin Simpson wrote:
>> I've pushed a revert of the original patch, and an r300g patch that,
>> while not perfect, covers the common case that Wine hits.
>
> I think I don't really understand that (and the fix you did for r300g).
> Why can't you simply clamp maxIndex to the minimum of the submitted
> maxIndex and the vertex buffer's max index?
>
> Now you have this:
> maxIndex = MIN3(maxIndex, r300->vertex_buffer_max_index, count - minIndex);
>
> This is then used to set the hardware max index clamp. However, count
> could for example be 3, min index 0, but the actual vertices fetched 0,
> 15, 30 - as long as the vertex buffers are large enough this is
> perfectly legal, but as far as I can tell your patch would force the
> hardware to just fetch the 0th vertex (3 times). Count really tells you
> nothing at all about the index range (it would also be legal to have a
> huge count but a very small valid index range, if you fetch the same
> vertices repeatedly).

That's why I said that it's not perfect. :3

r300->vertex_buffer_max_index is, on this particular rendering path, *also*
bogus, which is too bad because it should be used instead of
maxIndex/minIndex here. But, as I said before, if it's too big then it
can't be used, and we need to err on the side of caution if we err at all.
One misrendered draw call is better than dropping the entire packet of
commands on the floor.

-- 
Only fools are easily impressed by what is only barely beyond their reach. ~ Unknown
Corbin Simpson <Mos...@gm...>
From: Keith W. <kei...@go...> - 2010-03-13 12:23:52
On Sat, Mar 13, 2010 at 11:40 AM, Luca Barbieri <luc...@gm...> wrote:
>> But for any such technique, the mesa state tracker will need to figure
>> out what memory is being referred to by those non-VBO vertex buffers,
>> and to do that requires knowing the index min/max values.
>
> Isn't the min/max value only required to compute a sensible value for
> the maximum user buffer length? (the base pointer is passed to
> gl*Pointer)

Yes, I think that's what I was trying to say.

> The fact is, we don't need to know how large the user buffer is if the
> CPU is accessing it (or if we have a very advanced driver that faults
> memory into the GPU VM on demand, and/or a mechanism to let the GPU
> share the process address space).

Even for software t&l it's pretty important; see below.

> As you said, this happens for instance with swtnl, but also with
> drivers that scan the index buffer and copy the referenced vertex for
> each index onto the GPU FIFO themselves (e.g. nv50 and experimental
> versions of nv30/nv40).

Having user-buffers with undefined size establishes a connection inside the
driver between two things which could previously be fully understood
separately - the vertex buffer is now no longer fully defined without
reference to an index buffer. Effectively the user buffers become just
unqualified pointers and we are back to the GL world pre-VBOs.

In your examples, scanning and uploading (or transforming) per-index is
only sensible in special cases - eg where there are very few indices, or
sparse access to a large vertex buffer that hasn't already been
uploaded/transformed. But you can't even know if the vertex buffer is
sparse until you know how big it is, ie. what max_index is...

Typical usage is the opposite - vertices are referenced more than once in a
mesh, and the efficient thing to do is:

- for software tnl, transform all the vertices in the vertex buffer
  (requires knowing max_index) and then apply the indices to the
  transformed vertices
- for hardware tnl, upload the vertex buffer in its entirety (or better
  still, the referenced subrange)

> So couldn't we pass ~0 or similar as the user buffer length, and have
> the driver use an auxiliary module on draw calls to determine the real
> length, if necessary?
> Of course, drivers that upload user buffers on creation (if any exist)
> would need to be changed to only do that on draw calls.

My feeling is that user buffers are a pretty significant concession to
legacy GL vertex arrays in the interface. If there was any change to them,
it would be more along the lines of getting rid of them entirely and
forcing the state trackers to use proper buffers for everything. They are a
nice way to accommodate old-style GL arrays, but they don't have many other
uses and are already on shaky ground semantically.

In summary, I'm not 100% comfortable with the current user-buffer concept,
and I'd be *really* uncomfortable with the idea of buffers whose size we
don't know, or which could change size when a new index buffer is bound...

Keith
From: Luca B. <luc...@gm...> - 2010-03-13 11:41:07
> But for any such technique, the mesa state tracker will need to figure
> out what memory is being referred to by those non-VBO vertex buffers,
> and to do that requires knowing the index min/max values.

Isn't the min/max value only required to compute a sensible value for the
maximum user buffer length? (the base pointer is passed to gl*Pointer)

The fact is, we don't need to know how large the user buffer is if the CPU
is accessing it (or if we have a very advanced driver that faults memory
into the GPU VM on demand, and/or a mechanism to let the GPU share the
process address space).

As you said, this happens for instance with swtnl, but also with drivers
that scan the index buffer and copy the referenced vertex for each index
onto the GPU FIFO themselves (e.g. nv50 and experimental versions of
nv30/nv40).

So couldn't we pass ~0 or similar as the user buffer length, and have the
driver use an auxiliary module on draw calls to determine the real length,
if necessary?
Of course, drivers that upload user buffers on creation (if any exist)
would need to be changed to only do that on draw calls.
From: Peter D. <oi...@gm...> - 2010-03-13 11:13:57
http://ccache.samba.org/ is where part of this idea comes from.

Speeding up GLSL conversion to GPU code will make things more effective.
There is no point running the GLSL-to-native-GPU conversion more times than
required, particularly on devices where anything depends on battery life.

This brings me to the second half of the idea: a common storage framework.
The directory structure that comes to mind for me:

/usr/shared/gallium3d/<target>/<application name>/<shader>/<version as filename>

Target would contain "glsl" for the raw GLSL, and targets like "R300" for
card-particular implementations, as per a list. Of course, the version of
the compiler would have to be stored at the start of the pre-built GPU code
and checked on load; if out of date, rebuild.

Then, something like an OpenGL extension to request shared GLSL code
access: gallium3dloader(application, shader, version) for direct access,
and gallium3dloaderlatest(application, shader) to load the newest version
of that shader.

A few advantages of common GLSL storage: applications can share GLSL code
like they share libraries, making it simple to implement fancy features on
the GPU; it reduces how much GLSL code has to be built for particular
cards; and it opens the possibility of allowing applications to migrate
from card to card of different types without issues, because the shader
information is exposed in a common way.

The same kind of caching for OpenCL would also be good. There is no point
wasting CPU/GPU time running compilers when we can cache or store the
results. Of course, these ideas really were not possible to implement
without access to raw GPU native code.

Peter Dolding
From: Christoph B. <e04...@st...> - 2010-03-13 10:40:58
On 13.03.2010 01:10, Xavier Chantry wrote:
> Signed-off-by: Xavier Chantry <cha...@gm...>
> ---
>  src/gallium/drivers/nv50/nv50_screen.c | 1 -
>  src/gallium/drivers/nv50/nv50_screen.h | 2 --
>  2 files changed, 0 insertions(+), 3 deletions(-)
>
> diff --git a/src/gallium/drivers/nv50/nv50_screen.c b/src/gallium/drivers/nv50/nv50_screen.c
> index 7e2e8aa..adf0d3b 100644
> --- a/src/gallium/drivers/nv50/nv50_screen.c
> +++ b/src/gallium/drivers/nv50/nv50_screen.c
> @@ -234,7 +234,6 @@ nv50_screen_create(struct pipe_winsys *ws, struct nouveau_device *dev)
>  	pscreen->context_create = nv50_create;
>
>  	nv50_screen_init_miptree_functions(pscreen);
> -	nv50_transfer_init_screen_functions(pscreen);
>
>  	/* DMA engine object */
>  	ret = nouveau_grobj_alloc(chan, 0xbeef5039,
> diff --git a/src/gallium/drivers/nv50/nv50_screen.h b/src/gallium/drivers/nv50/nv50_screen.h
> index d1bc80c..ec19ea6 100644
> --- a/src/gallium/drivers/nv50/nv50_screen.h
> +++ b/src/gallium/drivers/nv50/nv50_screen.h
> @@ -38,6 +38,4 @@ nv50_screen(struct pipe_screen *screen)
>  	return (struct nv50_screen *)screen;
>  }
>
> -void nv50_transfer_init_screen_functions(struct pipe_screen *);
> -
>  #endif

Pushed, thanks.
From: Maciej C. <m.c...@gm...> - 2010-03-13 10:20:30
On Saturday, 13 March 2010 at 01:10:01, Brian Paul wrote:
> On Fri, Mar 12, 2010 at 12:19 PM, Maciej Cencora <m.c...@gm...> wrote:
>> Hi all,
>>
>> I've got some questions regarding FBOs in mesa. I hope you'll be able
>> to answer at least some of them.
>>
>> The GLcontext structure holds pointers to four gl_framebuffers
>> (DrawBuffer, ReadBuffer, WinSysDrawBuffer, WinSysReadBuffer) and one
>> gl_renderbuffer (CurrentRenderbuffer). The gl_framebuffer struct
>> contains, among other fields: _ColorDrawBuffers[], _ColorReadBuffer,
>> _DepthBuffer, _StencilBuffer. Now, bearing in mind that r300 wraps all
>> gl_renderbuffers and gl_texture_objects
>
> Do you mean that r300 subclasses gl_renderbuffer and
> gl_texture_object? That's what I think you mean. The term "wrapping"
> has a special meaning for renderbuffers (see below).

Yes, r300 subclasses these structs (and additionally wraps texture objects
with renderbuffers).

>> (they contain the HW buffer objects that I'm interested in), what is
>> the proper way to get to the read and draw buffers?
>
> The current "read" framebuffer is ctx->ReadBuffer. The current
> drawing framebuffer is ctx->DrawBuffer.
>
> Then, if you're reading _colors_ the renderbuffer to use is
> ctx->ReadBuffer->_ColorReadBuffer. If you want to read _stencil_
> values, it would be ctx->ReadBuffer->_StencilBuffer. Similarly for
> Z/depth, etc.

That's what I'm doing, but sometimes it happens that
ctx->ReadBuffer->_ColorReadBuffer is NULL (see
drivers/dri/radeon/radeon_tex_copy.c). Is this a legal situation? What
should I do if the user requested glCopyTexImage or glCopyPixels?

> To draw to color buffers (there may be more than one) you'll want
> ctx->DrawBuffer->_ColorDrawBuffers[]. For stencil it's
> ctx->DrawBuffer->_StencilBuffer, etc.
>
> Renderbuffers typically correspond to a region of video memory. How
> that works is driver-dependent.
>
>> What operations use ctx->ReadBuffer besides glReadPixels, glCopyPixels
>> and glCopyTex(Sub)Image?
>
> glCopyColorTable and maybe some other obscure functions. You got the
> main ones.
>
>> What operations use ctx->DrawBuffer besides rendering operations
>> (glBegin/glEnd, glDrawElements, glDrawArrays, ...)?
>
> That's basically it, plus glClear.
>
>> Am I correct that for ctx->DrawBuffer the _ColorReadBuffer field is
>> unused, and for ctx->ReadBuffer _ColorDrawBuffers is unused?
>
> No, they're used - see above. The _ColorReadBuffer field depends on
> the glReadBuffer() call. The _ColorDrawBuffers[] pointers depend on
> the glDrawBuffer() and glDrawBuffersARB() functions.

I still don't get it. I see a reason for using _ColorReadBuffer in the
context of ctx->ReadBuffer but not in the context of ctx->DrawBuffer (and
similarly for _ColorDrawBuffers).

Thanks for the explanation,
Maciej

> It's important to understand the hierarchy of gl_framebuffers and
> gl_renderbuffers.
>
> Each framebuffer object has a collection of renderbuffers.
>
> Renderbuffers may also act as wrappers for gl_texture_object when
> doing render to texture. In this case, the renderbuffer acts as a 2D
> view into a level/slice/face of a texture object.
>
> Also, we have special depth/stencil renderbuffers which wrap combined
> depth/stencil buffers. See main/depthstencil.c. That's mainly for
> software rendering, though.
>
> -Brian
From: Keith W. <kei...@go...> - 2010-03-13 09:22:25
On Sat, Mar 13, 2010 at 4:33 AM, Luca Barbieri <luc...@gm...> wrote:
> Actually, why is the state tracker doing the min/max computation at all?
>
> If the driver does the index lookup itself, as opposed to using a
> hardware index buffer (e.g. the nouveau drivers do this in some cases),
> this is unnecessary and slow.
>
> Would completely removing the call to vbo_get_minmax_index break
> anything?
>
> Also, how about removing the max_index field in pipe_vertex_buffer?
> This seems to be set to the same value for all vertex buffers, and the
> value is then passed to draw_range_elements too.
> Isn't the value passed to draw_range_elements sufficient?

It's really only needed for the special (but common) case where the vertex
data isn't in a GL VBO at all, but just sitting in regular memory. The
state tracker currently has the task of wrapping those regions of memory
in user buffers, and needs this computation to figure out what memory to
wrap in that way.

User buffers are a useful concept for avoiding a data copy, and can be
implemented in one of three ways in the driver:

a) for drivers doing swtnl, just access the wrapped memory directly

b) for drivers with hwtnl but unsophisticated memory managers, upload the
   user-buffer contents to a real buffer, then use that

c) for drivers with sophisticated memory managers, instruct the kernel to
   allow the hardware to access these pages directly, and do whatever
   memory-management tricks are necessary to avoid the contents being
   clobbered by subsequent CPU accesses

It's a difficult trick to avoid extra data copies in a layered stack like
gallium, and user buffers are an attempt to avoid one source of them.

But for any such technique, the mesa state tracker will need to figure out
what memory is being referred to by those non-VBO vertex buffers, and to
do that requires knowing the index min/max values.

Keith