From: Sam G. <sa...@el...> - 2009-12-27 22:19:33
Hi,

I contribute to gNewSense, which aims to be a fully Free distro. We have this bug report [1] about the file glut.h in Mesa. The issue is that Mark J. Kilgard did not give permission to modify the file in his original license notice. This was reported and solved for the GLUT package in Debian [2]. So now we're wondering how this applies to Mesa.

I've contacted the FSF Licensing Lab about it and they said: "Please be careful not to read the notice any further than it goes. In particular: it only applies to what Mark wrote, and it only applies to whatever code of his existed in libglut when he sent his reply to Debian. Nothing anybody else wrote, and nothing that wasn't in libglut at the time."

So if we can show that your glut.h was initially based on that of the GLUT package, then we'd be a big step closer to a solution. I've compared the GLUT 3.7 tarball [3] to your initial commit [4], but there are some differences. Could you help the gNewSense project (and other free distros) with this? Thanks.

[1] http://bugs.gnewsense.org/Bugs/00365
[2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=131997
[3] ftp://ftp.sgi.com/sgi/opengl/glut/
[4] http://cgit.freedesktop.org/mesa/mesa/plain/include/GL/glut.h?id=afb833d4e89c312460a4ab9ed6a7a8ca4ebbfe1c
From: Rodrigo G. de A. <rod...@ya...> - 2009-12-27 15:52:27
Hi folks,

I have compiled the Mesa Cell driver, and it is great! Can I use this, rather than ps3fb, as the video driver? Thanks so much, and sorry for my bad English.
From: Bogey J. <bog...@ho...> - 2009-12-13 19:14:02
Hi all,

I am new to the Linux world, and I'm trying to build my "Linux from scratch" system. Every component installed well until mesalib, which seems to have a problem compiling the drivers on my system. I tried mesalib-7.6 and mesalib-7.7-RC2 (lib + GLUT packages) with the same compilation issue. libdrm-2.4.16 was previously installed with success. For configuration, I typed:

./configure --prefix=/usr --sysconfdir=/etc --mandir=/usr/share/man --localstatedir=/var

Here are the final lines of the compilation output, where the error messages begin:

In file included from i830_context.h:31, from i830_context.c:28:
../intel/intel_context.h:38:26: error: intel_bufmgr.h: No such file or directory
In file included from ../intel/intel_context.h:40, from i830_context.h:31, from i830_context.c:28:
../intel/intel_screen.h:81: error: expected specifier-qualifier-list before ‘dri_bufmgr’
In file included from i830_context.h:31, from i830_context.c:28:
../intel/intel_context.h:93: error: expected specifier-qualifier-list before ‘drm_intel_bo’
../intel/intel_context.h:164: error: expected declaration specifiers or ‘...’ before ‘dri_bo’
../intel/intel_context.h:180: error: expected specifier-qualifier-list before ‘dri_bufmgr’
../intel/intel_context.h:593: error: expected declaration specifiers or ‘...’ before ‘drm_intel_bo’
../intel/intel_context.h: In function ‘intel_bo_map_gtt_preferred’:
../intel/intel_context.h:596: error: ‘struct intel_context’ has no member named ‘intelScreen’
../intel/intel_context.h:597: warning: implicit declaration of function ‘drm_intel_gem_bo_map_gtt’
../intel/intel_context.h:597: error: ‘bo’ undeclared (first use in this function)
../intel/intel_context.h:597: error: (Each undeclared identifier is reported only once
../intel/intel_context.h:597: error: for each function it appears in.)
../intel/intel_context.h:599: warning: implicit declaration of function ‘drm_intel_bo_map’
../intel/intel_context.h: At top level:
../intel/intel_context.h:604: error: expected declaration specifiers or ‘...’ before ‘drm_intel_bo’
../intel/intel_context.h: In function ‘intel_bo_unmap_gtt_preferred’:
../intel/intel_context.h:606: error: ‘struct intel_context’ has no member named ‘intelScreen’
../intel/intel_context.h:607: warning: implicit declaration of function ‘drm_intel_gem_bo_unmap_gtt’
../intel/intel_context.h:607: error: ‘bo’ undeclared (first use in this function)
../intel/intel_context.h:609: warning: implicit declaration of function ‘drm_intel_bo_unmap’
In file included from i830_context.c:28:
i830_context.h: At top level:
i830_context.h:133: error: expected specifier-qualifier-list before ‘dri_bo’
i830_context.c: In function ‘i830CreateContext’:
i830_context.c:75: error: ‘struct intel_context’ has no member named ‘ViewportMatrix’
i830_context.c:85: error: ‘struct intel_context’ has no member named ‘no_rast’
i830_context.c:108: error: ‘struct intel_context’ has no member named ‘verts’
make[6]: *** [i830_context.o] Error 1
make[6]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src/mesa/drivers/dri/i915'
make[5]: *** [lib] Error 2
make[5]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src/mesa/drivers/dri/i915'
make[4]: *** [subdirs] Error 1
make[4]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src/mesa/drivers/dri'
make[3]: *** [default] Error 1
make[3]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src/mesa/drivers'
make[2]: *** [driver_subdirs] Error 2
make[2]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src/mesa'
make[1]: *** [subdirs] Error 1
make[1]: Leaving directory '/sources/suite/xws/Mesa-7.7-rc2/src'
make: *** [default] Error 1

So "make install" only brings errors like the ones above. It's highly probable that I've misunderstood something, and I couldn't find information about this kind of error. Does anyone have experience with issues like this?
From: Geng, X. (NIH/N. [C] <ge...@ni...> - 2009-12-05 16:48:25
-----Original Message-----
From: Brian Paul [mailto:br...@vm...]
Sent: Friday, December 04, 2009 10:14 AM
To: Geng, Xiujuan (NIH/NIDA) [C]
Cc: mes...@li...
Subject: Re: [Mesa3d-users] offscreen render segmentation fault with mesa 7.7

Geng, Xiujuan (NIH/NIDA) [C] wrote:
> Hi,
>
> I'm new to mesa and openGL. I'm trying to do offscreen rendering on a 64bit Suse Linux machine.
>
> I started from a mesa demo code progs/osdemos/osdemo.c. I copied the main part below:
>
> /* Create an RGBA-mode context */
> #if OSMESA_MAJOR_VERSION * 100 + OSMESA_MINOR_VERSION >= 305
>    /* specify Z, stencil, accum sizes */
>    ctx = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );
> #else
>    ctx = OSMesaCreateContext( OSMESA_RGBA, NULL );
> #endif
> if (!ctx) {
>    printf("OSMesaCreateContext failed!\n");
>    return 0;
> }
>
> /* Allocate the image buffer */
> buffer = malloc( Width * Height * 4 * sizeof(GLubyte) );
> if (!buffer) {
>    printf("Alloc image buffer failed!\n");
>    return 0;
> }
>
> /* Bind the buffer to the context and make it current */
> if (!OSMesaMakeCurrent( ctx, buffer, GL_UNSIGNED_BYTE, Width, Height )) {
>    printf("OSMesaMakeCurrent failed!\n");
>    return 0;
> }
>
> {
>    int z, s, a;
>    glGetIntegerv(GL_DEPTH_BITS, &z);
>    glGetIntegerv(GL_STENCIL_BITS, &s);
>    glGetIntegerv(GL_ACCUM_RED_BITS, &a);
>    printf("Depth=%d Stencil=%d Accum=%d\n", z, s, a);
> }
> render_image();
>
> I compiled the code with no problem:
> gcc osdemo.c -lglut -lGLU -lOSMesa -lm -o osdemo
>
> But when I run it:
> ./osdemo output.ppm 400 400
>
> it gave me a seg fault. I used gdb to debug it, and the error message is:
>
> (gdb) run
> Starting program: /data/380/gengx/tools/Mesa-7.5.1/progs/osdemos/osdemo output.ppm 400 400
> [Thread debugging using libthread_db enabled]
>
> Program received signal SIGSEGV, Segmentation fault.
> glGetIntegerv () at ../../../src/mesa/x86-64/glapi_x86-64.S:9444
> 9444    movq 2104(%rax), %r11
> Current language: auto; currently asm
>
> Our own offscreen code (also using "OSMesaCreateContextExt" and "OSMesaMakeCurrent") faced the same segmentation problem. Basically it gave a segmentation fault at whatever gl routine was first called in the code.
>
> We are currently using mesa 7.7.
>
> Any suggestion is appreciated! Thanks.

Can you run w/ valgrind and see if that finds anything? Does the original osdemo.c work properly? Maybe try disabling x86-64 optimizations or a debug build of Mesa.

-Brian

Brian,

Thanks for the suggestion. I got the seg fault from the original osdemo.c. I ran it with valgrind and didn't find anything in particular. We also ran the same osdemo.c code using Mesa 7.4 on Ubuntu 9.04; it gave the same segmentation fault. But if I modify the demo code to do on-screen rendering, e.g., using glutCreateWindow and rendering the image to the window instead of doing off-screen rendering with "OSMesaCreateContextExt" and "OSMesaMakeCurrent", it works without any segmentation fault.

We built Mesa with a regular version and a debug version. The library is under /usr/lib/debug/usr/lib64/libOSMesa.so.7.7.0.debug. I assume I compiled the demo code using the debug build of Mesa. Here is the command I used to compile osdemo:

gcc -g osdemo.c -lglut -lGLU -lOSMesa -lm -o osdemo

gdb gave the following error message:

Program received signal SIGSEGV, Segmentation fault.
glGetIntegerv () at ../../../src/mesa/x86-64/glapi_x86-64.S:9444
9444    movq 2104(%rax), %r11

Is it possible that there's some bug in OSMesaCreateContextExt or OSMesaMakeCurrent? I really need off-screen rendering. Thanks again.

Xiujuan
From: Brian P. <br...@vm...> - 2009-12-04 15:15:05
Geng, Xiujuan (NIH/NIDA) [C] wrote:
> Hi,
>
> I'm new to mesa and openGL. I'm trying to do offscreen rendering on a 64bit Suse Linux machine.
>
> I started from a mesa demo code progs/osdemos/osdemo.c. I copied the main part below:
>
> /* Create an RGBA-mode context */
> #if OSMESA_MAJOR_VERSION * 100 + OSMESA_MINOR_VERSION >= 305
>    /* specify Z, stencil, accum sizes */
>    ctx = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );
> #else
>    ctx = OSMesaCreateContext( OSMESA_RGBA, NULL );
> #endif
> if (!ctx) {
>    printf("OSMesaCreateContext failed!\n");
>    return 0;
> }
>
> /* Allocate the image buffer */
> buffer = malloc( Width * Height * 4 * sizeof(GLubyte) );
> if (!buffer) {
>    printf("Alloc image buffer failed!\n");
>    return 0;
> }
>
> /* Bind the buffer to the context and make it current */
> if (!OSMesaMakeCurrent( ctx, buffer, GL_UNSIGNED_BYTE, Width, Height )) {
>    printf("OSMesaMakeCurrent failed!\n");
>    return 0;
> }
>
> {
>    int z, s, a;
>    glGetIntegerv(GL_DEPTH_BITS, &z);
>    glGetIntegerv(GL_STENCIL_BITS, &s);
>    glGetIntegerv(GL_ACCUM_RED_BITS, &a);
>    printf("Depth=%d Stencil=%d Accum=%d\n", z, s, a);
> }
> render_image();
>
> I compiled the code with no problem:
> gcc osdemo.c -lglut -lGLU -lOSMesa -lm -o osdemo
>
> But when I run it:
> ./osdemo output.ppm 400 400
>
> it gave me a seg fault. I used gdb to debug it, and the error message is:
>
> (gdb) run
> Starting program: /data/380/gengx/tools/Mesa-7.5.1/progs/osdemos/osdemo output.ppm 400 400
> [Thread debugging using libthread_db enabled]
>
> Program received signal SIGSEGV, Segmentation fault.
> glGetIntegerv () at ../../../src/mesa/x86-64/glapi_x86-64.S:9444
> 9444    movq 2104(%rax), %r11
> Current language: auto; currently asm
>
> Our own offscreen code (also using "OSMesaCreateContextExt" and "OSMesaMakeCurrent") faced the same segmentation problem. Basically it gave a segmentation fault at whatever gl routine was first called in the code.
>
> We are currently using mesa 7.7.
>
> Any suggestion is appreciated! Thanks.

Can you run w/ valgrind and see if that finds anything? Does the original osdemo.c work properly? Maybe try disabling x86-64 optimizations or a debug build of Mesa.

-Brian
From: Geng, X. (NIH/N. [C] <ge...@ni...> - 2009-12-04 00:26:21
Hi,

I'm new to mesa and openGL. I'm trying to do offscreen rendering on a 64bit Suse Linux machine.

I started from a mesa demo code progs/osdemos/osdemo.c. I copied the main part below:

/* Create an RGBA-mode context */
#if OSMESA_MAJOR_VERSION * 100 + OSMESA_MINOR_VERSION >= 305
   /* specify Z, stencil, accum sizes */
   ctx = OSMesaCreateContextExt( OSMESA_RGBA, 16, 0, 0, NULL );
#else
   ctx = OSMesaCreateContext( OSMESA_RGBA, NULL );
#endif
if (!ctx) {
   printf("OSMesaCreateContext failed!\n");
   return 0;
}

/* Allocate the image buffer */
buffer = malloc( Width * Height * 4 * sizeof(GLubyte) );
if (!buffer) {
   printf("Alloc image buffer failed!\n");
   return 0;
}

/* Bind the buffer to the context and make it current */
if (!OSMesaMakeCurrent( ctx, buffer, GL_UNSIGNED_BYTE, Width, Height )) {
   printf("OSMesaMakeCurrent failed!\n");
   return 0;
}

{
   int z, s, a;
   glGetIntegerv(GL_DEPTH_BITS, &z);
   glGetIntegerv(GL_STENCIL_BITS, &s);
   glGetIntegerv(GL_ACCUM_RED_BITS, &a);
   printf("Depth=%d Stencil=%d Accum=%d\n", z, s, a);
}
render_image();

I compiled the code with no problem:

gcc osdemo.c -lglut -lGLU -lOSMesa -lm -o osdemo

But when I run it:

./osdemo output.ppm 400 400

it gives me a seg fault. I used gdb to debug it, and the error message is:

(gdb) run
Starting program: /data/380/gengx/tools/Mesa-7.5.1/progs/osdemos/osdemo output.ppm 400 400
[Thread debugging using libthread_db enabled]

Program received signal SIGSEGV, Segmentation fault.
glGetIntegerv () at ../../../src/mesa/x86-64/glapi_x86-64.S:9444
9444    movq 2104(%rax), %r11
Current language: auto; currently asm

Our own offscreen code (also using "OSMesaCreateContextExt" and "OSMesaMakeCurrent") faced the same segmentation problem. Basically it gave a segmentation fault at whatever gl routine was first called in the code.

We are currently using mesa 7.7.

Any suggestion is appreciated! Thanks.

Xiujuan
From: Owen K. <Owe...@sc...> - 2009-11-19 03:07:35
Ah no worries, thanks for checking it out. Not a priority; mainly just reporting it as a bug, or in case it was a very easy fix.

Owen.

Brian Paul wrote:
> OK, the problem is we're not doing sub-pixel adjustment of texcoords in the sprite rasterization code. Look in sprite_point() in s_points.c if interested. This can be fixed but it will take some work. I'll see if I can fix it when I get some spare time.
>
> -Brian
>
> Owen Kaluza wrote:
>> Hi Brian,
>>
>> Sure, here is the best illustration of the issue I could produce: 1000 points, aligned as you suggested.
>> If I bring the point size up from 1.0 the issue isn't as obvious, although still noticeable.
>> The attenuation seems to drop the point size then gradually increase it; you can see the points towards the back disappear then reappear.
>> Attached are the modified program and two screen shots, one using osmesa and the other with glut + the video card's gl drivers.
>>
>> Thanks,
>> Owen.
>>
>> Brian Paul wrote:
>>> Owen Kaluza wrote:
>>>> Hello,
>>>> I'm having trouble with point distance attenuation using OSMesa.
>>>> I'm rendering a lot of depth sorted, alpha blended, textured points and dark bands are appearing that are not there when I render with the system GL.
>>>>
>>>> I found the problem only occurs with point distance attenuation turned on.
>>>> If you look at the attached image you can see there are clearly defined bands; possibly the point size calculation is incorrect at certain distances, resulting in size jumps.
>>>>
>>>> I've attached a sample program that reproduces the problem. Also tried the latest Mesalib code (Mesa-7.7-devel-20091105) and it is still occurring.
>>>
>>> Could you prune down your test program a bit? Perhaps you could draw a series of points between the min/max Z positions and see how they look.
>>>
>>> -Brian
From: Brian P. <br...@vm...> - 2009-11-19 01:51:08
Available for testing at ftp://freedesktop.org/pub/mesa/7.6.1/

md5sums:

cde0f0491eb170422f0a30e2dcc4926c  MesaLib-7.6.1-rc1.tar.gz
36f7142f232bd1601cab34e5e425f829  MesaLib-7.6.1-rc1.tar.bz2
d78cfc9e360f9bbad7b5188fc2234b8f  MesaLib-7.6.1-rc1.zip
518f523e7daaacb7595c8a68d83e2b9e  MesaDemos-7.6.1-rc1.tar.gz
0ded6573bcda23b563643abf9e6ff228  MesaDemos-7.6.1-rc1.tar.bz2
81c37c367bfaa1f87d9b1576644821d3  MesaDemos-7.6.1-rc1.zip
c6d0405926adeef36d2fa03ee514e15f  MesaGLUT-7.6.1-rc1.tar.gz
bd1e94d5c09a8465f831c5ae30ef59b1  MesaGLUT-7.6.1-rc1.tar.bz2
790a29a21cf62ca88125f8810c9c224b  MesaGLUT-7.6.1-rc1.zip

Please test and report any problems ASAP. If there aren't any issues we'd like to release 7.6.1 on Friday or Saturday.

-Brian
From: tom f. <tf...@al...> - 2009-11-18 23:24:20
Hi John, sorry this took so long... kind of fell off my radar.

John Wythe <bit...@gm...> writes:
> On Sat, Nov 7, 2009 at 2:46 PM, tom fogal <tf...@al...> wrote:
> > Hi John,
> >
> > John Wythe <bit...@gm...> writes:
> >> I am encountering different rendering behavior between two seemingly compatible Linux environments. [. . .] Below are links to screen-shots and troubleshooting information:
> >>
> >> Screenshots of the issue:
> >> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
> >
> > These look (to me) like they might be Z-fighting issues.
> >
> > Is there any chance of requesting more resolution from the depth buffer? You would normally do this when choosing your glX visual.
>
> I've never heard of Z-fighting, but I can guess what it is. Probably the only way I can get more depth from the buffer is to hack at the wine opengl.dll implementation, since all the GL code is in the legacy app.

Right.

> However, I would think that this would not be necessary, as it was not on my desktop environment. I suppose it is possible something else is increasing the depth buffer resolution on my desktop.

The spec is worded in such a way that allows different implementations to return any among a set of `compatible' buffers. As an example, you might request a 16 bit depth buffer and get a 32bit depth buffer. Another implementation might actually give you the 16bit depth buffer. This can mask subtle bugs; an application might require a 24bit depth buffer, request a 16bit buffer, and through `luck', only be tested on systems that give 32bit depth buffers.

See the man page for `glXChooseVisual' for more information. This information should be in the glX spec too, of course.

> >> Server environment information:
> >> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
> >>
> >> Desktop environment information:
> >> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
> >
> > Unsurprisingly, your desktop X configuration is using XCB, probably with its libX11 `emulation' of a sorts, while your server configuration does not have XCB.
>
> I did some reading about XCB before my initial message and figured it was a non-issue since it seems to me like just a binding interface and an app would have to be written for it to use it; which wine must not be since it does not require it.

This is not true; XCB has an emulation layer of sorts that translates libX11 APIs to libXCB APIs.

> >> Instead I compiled Mesa using the xlib software driver. When using this libGL version the application continues to work just fine on my desktop.
> >
> > Are you absolutely certain you're using Mesa?
>
> I did not change my xorg.conf, only the LD_LIBRARY_PATH. The output of ldd glxinfo shows that the linker is using the mesa build of libGL, and glxinfo says the renderer is the Mesa X11 OpenGL renderer. From what I understand so far, that means it's using Mesa.
>
> Without the LD_LIBRARY_PATH override, glxinfo instead says Nvidia is the renderer.

Sound logic, I think. To be absolutely certain, of course, it'd be good to check how this works when you've got a `Driver' of "nv" in your xorg.conf, instead of "nvidia". `rmmod nvidia' if you can too (it seems to load itself automagically when needed anyway).

Cheers,

-tom
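For readers following tom's suggestion about requesting more depth-buffer resolution: this is roughly what it looks like at the glX level. A minimal sketch only; `dpy` and `screen` are assumed to come from the application's existing X setup.

    /* Minimal sketch: ask glXChooseVisual for at least a 24-bit depth
     * buffer.  `dpy` is an open Display* and `screen` its screen number
     * (both assumed to exist in the surrounding setup code). */
    #include <GL/glx.h>

    static XVisualInfo *choose_deep_visual(Display *dpy, int screen)
    {
       int attribs[] = {
          GLX_RGBA,
          GLX_DOUBLEBUFFER,
          GLX_DEPTH_SIZE, 24,   /* a minimum; the server may give more */
          None                  /* attribute list is None-terminated */
       };
       /* Returns NULL if no visual satisfies the request. */
       return glXChooseVisual(dpy, screen, attribs);
    }

Note that GLX_DEPTH_SIZE is a minimum, which is exactly the `compatible buffers' behavior described above: an implementation is free to return a visual with more depth bits than requested.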
From: Brian P. <br...@vm...> - 2009-11-18 15:54:36
OK, the problem is we're not doing sub-pixel adjustment of texcoords in the sprite rasterization code. Look in sprite_point() in s_points.c if interested. This can be fixed but it will take some work. I'll see if I can fix it when I get some spare time.

-Brian

Owen Kaluza wrote:
> Hi Brian,
>
> Sure, here is the best illustration of the issue I could produce: 1000 points, aligned as you suggested.
> If I bring the point size up from 1.0 the issue isn't as obvious, although still noticeable.
> The attenuation seems to drop the point size then gradually increase it; you can see the points towards the back disappear then reappear.
> Attached are the modified program and two screen shots, one using osmesa and the other with glut + the video card's gl drivers.
>
> Thanks,
> Owen.
>
> Brian Paul wrote:
>> Owen Kaluza wrote:
>>> Hello,
>>> I'm having trouble with point distance attenuation using OSMesa.
>>> I'm rendering a lot of depth sorted, alpha blended, textured points and dark bands are appearing that are not there when I render with the system GL.
>>>
>>> I found the problem only occurs with point distance attenuation turned on.
>>> If you look at the attached image you can see there are clearly defined bands; possibly the point size calculation is incorrect at certain distances, resulting in size jumps.
>>>
>>> I've attached a sample program that reproduces the problem. Also tried the latest Mesalib code (Mesa-7.7-devel-20091105) and it is still occurring.
>>
>> Could you prune down your test program a bit? Perhaps you could draw a series of points between the min/max Z positions and see how they look.
>>
>> -Brian
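To illustrate the sub-pixel adjustment Brian describes, here is a sketch of the general technique only, not the actual sprite_point() code in s_points.c: texcoords are derived from the sprite's true fractional corner rather than a corner rounded to integer pixels.

    /* Illustrative sketch only -- not Mesa's actual sprite_point() code.
     * Compute point-sprite texcoords for pixel (x, y) from the sprite's
     * true sub-pixel corner.  px/py are the point's window-space center
     * and `size` its diameter.  Rounding left/bottom to integers first
     * is what produces banding: texcoords then jump whenever the point
     * center crosses a pixel boundary. */
    static void sprite_texcoord(float px, float py, float size,
                                int x, int y, float *s, float *t)
    {
       float left   = px - 0.5f * size;   /* fractional sprite corner */
       float bottom = py - 0.5f * size;
       *s = ((float)x + 0.5f - left)   / size;  /* sample at pixel center */
       *t = ((float)y + 0.5f - bottom) / size;
    }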
From: Owen K. <Owe...@sc...> - 2009-11-18 02:20:07
Hi Brian,

Sure, here is the best illustration of the issue I could produce: 1000 points, aligned as you suggested.
If I bring the point size up from 1.0 the issue isn't as obvious, although still noticeable.
The attenuation seems to drop the point size then gradually increase it; you can see the points towards the back disappear then reappear.
Attached are the modified program and two screen shots, one using osmesa and the other with glut + the video card's gl drivers.

Thanks,
Owen.

Brian Paul wrote:
> Owen Kaluza wrote:
>> Hello,
>> I'm having trouble with point distance attenuation using OSMesa.
>> I'm rendering a lot of depth sorted, alpha blended, textured points and dark bands are appearing that are not there when I render with the system GL.
>>
>> I found the problem only occurs with point distance attenuation turned on.
>> If you look at the attached image you can see there are clearly defined bands; possibly the point size calculation is incorrect at certain distances, resulting in size jumps.
>>
>> I've attached a sample program that reproduces the problem. Also tried the latest Mesalib code (Mesa-7.7-devel-20091105) and it is still occurring.
>
> Could you prune down your test program a bit? Perhaps you could draw a series of points between the min/max Z positions and see how they look.
>
> -Brian
From: Brian P. <br...@vm...> - 2009-11-17 23:29:59
Owen Kaluza wrote:
> Hello,
> I'm having trouble with point distance attenuation using OSMesa.
> I'm rendering a lot of depth sorted, alpha blended, textured points and dark bands are appearing that are not there when I render with the system GL.
>
> I found the problem only occurs with point distance attenuation turned on.
> If you look at the attached image you can see there are clearly defined bands; possibly the point size calculation is incorrect at certain distances, resulting in size jumps.
>
> I've attached a sample program that reproduces the problem. Also tried the latest Mesalib code (Mesa-7.7-devel-20091105) and it is still occurring.

Could you prune down your test program a bit? Perhaps you could draw a series of points between the min/max Z positions and see how they look.

-Brian
From: vivek v. <viv...@gm...> - 2009-11-17 12:43:32
Please help. I want to know how a 3D API is implemented, and how it is divided into parts, e.g. (don't take this too seriously, as I don't know much) 3D math, the software driver approach, bug fixing, etc. I even tried to learn from Mesa 3.x, but I couldn't. Please help with any link, clue, etc.

Q. How, at the lowest level, does it interact with the hardware?

Thanks in advance,
vivek
From: Owen K. <Owe...@sc...> - 2009-11-17 03:57:36
Hello,

I'm having trouble with point distance attenuation using OSMesa. I'm rendering a lot of depth sorted, alpha blended, textured points, and dark bands are appearing that are not there when I render with the system GL.

I found the problem only occurs with point distance attenuation turned on. If you look at the attached image you can see there are clearly defined bands; possibly the point size calculation is incorrect at certain distances, resulting in size jumps.

I've attached a sample program that reproduces the problem. Also tried the latest Mesalib code (Mesa-7.7-devel-20091105) and it is still occurring.

Thanks,
Owen.
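For context, the fixed-function behavior being exercised here is defined by the OpenGL spec: with GL_POINT_DISTANCE_ATTENUATION coefficients (a, b, c) and eye-space distance d, the derived size is size * sqrt(1 / (a + b*d + c*d^2)), clamped to the supported point-size range. A sketch of that per-vertex computation (variable names are illustrative):

    #include <math.h>

    /* Point distance attenuation per the OpenGL spec.  The derived size
     * must vary smoothly with d; any quantization or rounding here would
     * show up as exactly the kind of bands described above. */
    static float attenuated_point_size(float size, float a, float b,
                                       float c, float d,
                                       float min_size, float max_size)
    {
       float derived = size * sqrtf(1.0f / (a + b * d + c * d * d));
       if (derived < min_size) derived = min_size;
       if (derived > max_size) derived = max_size;
       return derived;
    }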
From: guangshan c. <gc...@gm...> - 2009-11-13 17:23:08
Hi all,

I have failed to build Mesa on Mac OS 10.6 (Intel). The following are the steps I took:

Step 1: download all three source files:
MesaDemos-7.6.tar.gz
MesaGLUT-7.6.tar.gz
MesaLib-7.6.tar.gz

Step 2: untar the three files

Step 3: cd to Mesa-7.6

Step 4: run configure:

configure --prefix=/Users/guangshan/local/ncl \
  LDFLAGS="-L/usr/X11R6/lib" \
  --with-x PKG_CONFIG_PATH="/Users/guangshan/local/ncl/lib/pkgconfig /usr/X11R6/lib/pkgconfig"

Step 5: make

There are many warnings, but I got the following errors (I attached the log of make):

mklib: Making Darwin shared library: libOSMesa.7.6.dylib
Undefined symbols:
"__mesa_enable_2_1_extensions", referenced from: _OSMesaCreateContextExt in osmesa.o
"__tnl_run_pipeline", referenced from: _OSMesaCreateContextExt in osmesa.o
"__swsetup_CreateContext", referenced from: _OSMesaCreateContextExt in osmesa.o
"__mesa_new_renderbuffer", referenced from: _OSMesaCreateContextExt in osmesa.o
"__vbo_CreateContext", referenced from: _OSMesaCreateContextExt in osmesa.o
"__tnl_CreateContext", referenced from: .............................................
ld: symbol(s) not found

Why so many undefined symbols? They are all related to Mesa. Could anyone help me?

Thanks,
Guangshan
From: John W. <bit...@gm...> - 2009-11-08 01:55:39
On Sat, Nov 7, 2009 at 2:46 PM, tom fogal <tf...@al...> wrote:
> Hi John,
>
> John Wythe <bit...@gm...> writes:
>> I am encountering different rendering behavior between two seemingly compatible Linux environments. [. . .] Below are links to screen-shots and troubleshooting information:
>>
>> Screenshots of the issue:
>> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg
>
> These look (to me) like they might be Z-fighting issues.
>
> Is there any chance of requesting more resolution from the depth buffer? You would normally do this when choosing your glX visual.

I've never heard of Z-fighting, but I can guess what it is. Probably the only way I can get more depth from the buffer is to hack at the wine opengl.dll implementation, since all the GL code is in the legacy app. However, I would think that this would not be necessary, as it was not on my desktop environment. I suppose it is possible something else is increasing the depth buffer resolution on my desktop.

>> Server environment information:
>> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
>>
>> Desktop environment information:
>> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4
>
> Unsurprisingly, your desktop X configuration is using XCB, probably with its libX11 `emulation' of a sorts, while your server configuration does not have XCB.

I did some reading about XCB before my initial message and figured it was a non-issue, since it seems to me like just a binding interface and an app would have to be written for it to use it; which wine must not be, since it does not require it.

>> On my desktop I have a NVidia 8800GTS. To try and isolate the problem, I wanted to force my desktop to use the software renderer. For some unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has no effect.
>
> You're probably using NVIDIA's driver. Actually, you almost definitely are, because the only other options are `nv' and `nouveau', and of course the Mesa `swrast' driver. The former can't do 3D, and `nouveau' will crash when used for 3D -- if you're lucky -- AFAICT (never tried it myself).
> If you're using NVIDIA's driver, none of Mesa's environment variables matter.

This makes complete sense now. It did not strike me initially that using the nvidia driver removes mesa from the rendering pipeline. But that oversight is just me learning about the X architecture still.

>> Instead I compiled Mesa using the xlib software driver. When using this libGL version the application continues to work just fine on my desktop.
>
> Are you absolutely certain you're using Mesa?

I did not change my xorg.conf, only the LD_LIBRARY_PATH. The output of ldd glxinfo shows that the linker is using the mesa build of libGL, and glxinfo says the renderer is the Mesa X11 OpenGL renderer. From what I understand so far, that means it's using Mesa.

Without the LD_LIBRARY_PATH override, glxinfo instead says Nvidia is the renderer.

> I would recommend you remove any drivers your package manager supplies, as much as possible at least. This won't be fully possible on Ubuntu because the removal of all GL impls will make the package manager want to remove X, but at least remove all nvidia packages.

I'll try something like that if I get super desperate, but I don't wish to mess up my development environment. I wanted to get a third machine involved to test on. I was going to use an Ubuntu image on Amazon EC2, but for some reason, as soon as I call winetricks, the server locks up hard. I have to shut it down from EC2. When I get a chance, I might pursue something like this again to test with.

>> The server on the other hand, is a managed environment without root access. The default version of libGL caused the application to crash, which initially, I thought was due to an older version of Xvfb. After learning much more about xorg, I came to realize that it was not the version of Xvfb that made things marginally work, but rather the libGL version that was built as a result of building Xvfb/Mesa. Now I am only building Mesa and libXmu on the server and using the older Xvfb.
>
> CentOS, IMHO, is trash. Everything's too damn old on it; for software I work on, we're always hitting things like old compilers not accepting valid templates or similar. If you can update the toolchain, I would recommend as much. Or better yet, put a Debian stable / Ubuntu LTS / hell even openSUSE on the machine and save yourself the pain.

Yeah, but unfortunately there is nothing I can do. CentOS it has to be for now; it's a managed server. I'll try building my own private toolchain on the server, but I did not notice any errors during the build of Mesa. I might save the build log and look closer.

>> On the server experiencing the problem, I have set MESA_DEBUG=FP to try and get some debug information. I also tried to set LIBGL_DEBUG=verbose but that seems to have no effect on either machine. Two messages were encountered at various times -only- on the server:
>>
>> "Mesa warning: couldn't open libtxc_dxtn.so, software DXTn compression/decompression unavailable"
>> and
>> "Mesa warning: XGetGeometry failed!"
>>
>> I downloaded the libtxc_dxtn from
> [snip]
>
> I would not worry about it. Mesa will give that warning regardless of whether or not compressed textures are actually used. I encounter very few apps that actually use them (I suppose games would frequently, though?), and in any case the image you sent makes me think there's no texturing at all in your app, anyway.

I assumed as much, but just wanted to make sure.

>> Overall these two machines are
>> * Using the same version of Mesa
>> * Both using software rendering
>> * Both using the same version of wine
>>
>> Which leads me to believe this must be a subtle dependency problem, either at runtime or build time. At this point though, I would have no idea what could affect the rendering in such a way.
>
> My best guess is XCB/X11. Try configuring Mesa with --enable-xcb. If the app is threaded, --enable-glx-tls is probably a good idea as well.
>
> Beyond that my guess is issues with the ancient toolchain provided by CentOS.
>
> You might consider OSMesa for this use case as well. Though, I guess without source to the application, your only option would be to hack OSMesa into wine.

Thanks Tom. I'll take a look into these things. I guess I will have to try to understand XCB more, and how wine might use it implicitly or explicitly. These are definitely good ideas to try that I would not have thought of.

Cheers,
John
From: tom f. <tf...@al...> - 2009-11-07 19:42:56
Hi John,

John Wythe <bit...@gm...> writes:
> I am encountering different rendering behavior between two seemingly compatible Linux environments. [. . .] Below are links to screen-shots and troubleshooting information:
>
> Screenshots of the issue:
> http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg

These look (to me) like they might be Z-fighting issues.

Is there any chance of requesting more resolution from the depth buffer? You would normally do this when choosing your glX visual.

> Server environment information:
> http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt
>
> Desktop environment information:
> http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4

Unsurprisingly, your desktop X configuration is using XCB, probably with its libX11 `emulation' of a sorts, while your server configuration does not have XCB.

> On my desktop I have a NVidia 8800GTS. To try and isolate the problem, I wanted to force my desktop to use the software renderer. For some unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has no effect.

You're probably using NVIDIA's driver. Actually, you almost definitely are, because the only other options are `nv' and `nouveau', and of course the Mesa `swrast' driver. The former can't do 3D, and `nouveau' will crash when used for 3D -- if you're lucky -- AFAICT (never tried it myself).

If you're using NVIDIA's driver, none of Mesa's environment variables matter.

> Instead I compiled Mesa using the xlib software driver. When using this libGL version the application continues to work just fine on my desktop.

Are you absolutely certain you're using Mesa?

I would recommend you remove any drivers your package manager supplies, as much as possible at least. This won't be fully possible on Ubuntu because the removal of all GL impls will make the package manager want to remove X, but at least remove all nvidia packages.

> The server on the other hand, is a managed environment without root access. The default version of libGL caused the application to crash, which initially, I thought was due to an older version of Xvfb. After learning much more about xorg, I came to realize that it was not the version of Xvfb that made things marginally work, but rather the libGL version that was built as a result of building Xvfb/Mesa. Now I am only building Mesa and libXmu on the server and using the older Xvfb.

CentOS, IMHO, is trash. Everything's too damn old on it; for software I work on, we're always hitting things like old compilers not accepting valid templates or similar. If you can update the toolchain, I would recommend as much. Or better yet, put a Debian stable / Ubuntu LTS / hell even openSUSE on the machine and save yourself the pain.

> On the server experiencing the problem, I have set MESA_DEBUG=FP to try and get some debug information. I also tried to set LIBGL_DEBUG=verbose but that seems to have no effect on either machine. Two messages were encountered at various times -only- on the server:
>
> "Mesa warning: couldn't open libtxc_dxtn.so, software DXTn compression/decompression unavailable"
> and
> "Mesa warning: XGetGeometry failed!"
>
> I downloaded the libtxc_dxtn from
[snip]

I would not worry about it. Mesa will give that warning regardless of whether or not compressed textures are actually used. I encounter very few apps that actually use them (I suppose games would frequently, though?), and in any case the image you sent makes me think there's no texturing at all in your app, anyway.

> Overall these two machines are
> * Using the same version of Mesa
> * Both using software rendering
> * Both using the same version of wine
>
> Which leads me to believe this must be a subtle dependency problem, either at runtime or build time. At this point though, I would have no idea what could affect the rendering in such a way.

My best guess is XCB/X11. Try configuring Mesa with --enable-xcb. If the app is threaded, --enable-glx-tls is probably a good idea as well.

Beyond that my guess is issues with the ancient toolchain provided by CentOS.

You might consider OSMesa for this use case as well. Though, I guess without source to the application, your only option would be to hack OSMesa into wine.

HTH,

-tom
From: John W. <bit...@gm...> - 2009-11-07 03:45:46
Hello Mesa3d-users,

I am encountering different rendering behavior between two seemingly compatible Linux environments. After about a week of troubleshooting this, researching Google, mailing list archives, and bug trackers, I would be most grateful for any assistance from this list. Below are links to screen-shots and troubleshooting information:

Screenshots of the issue:
http://lh6.ggpht.com/_mTZwuLfG_iE/SvTnUfC0eWI/AAAAAAAAAB0/SUeL9K7CPcU/s800/screenshots.jpeg

Server environment information:
http://docs.google.com/View?id=ddkkm9rx_2fvwmsdpt

Desktop environment information:
http://docs.google.com/View?id=ddkkm9rx_3dgj28nf4

We have a legacy windows application (without source) that we are adapting to run under wine on a headless CentOS server to perform work in batches. We have a custom windows wrapper around this legacy application to make it scriptable and capture a couple of screen shots using GDI+ during the batch jobs. This works fine on my Ubuntu desktop, but on the server it appears that some of the GL polygons have their normals swapped (I'm not an expert graphics programmer, so I hope I have these terms correct).

On my desktop I have an NVidia 8800GTS. To try and isolate the problem, I wanted to force my desktop to use the software renderer. For some unknown reason setting LIBGL_ALWAYS_SOFTWARE=1 has no effect. Instead I compiled Mesa using the xlib software driver. When using this libGL version the application continues to work just fine on my desktop.

The server, on the other hand, is a managed environment without root access. The default version of libGL caused the application to crash, which initially I thought was due to an older version of Xvfb. After learning much more about xorg, I came to realize that it was not the version of Xvfb that made things marginally work, but rather the libGL version that was built as a result of building Xvfb/Mesa. Now I am only building Mesa and libXmu on the server and using the older Xvfb.

On the server experiencing the problem, I have set MESA_DEBUG=FP to try and get some debug information. I also tried to set LIBGL_DEBUG=verbose, but that seems to have no effect on either machine. Two messages were encountered at various times -only- on the server:

"Mesa warning: couldn't open libtxc_dxtn.so, software DXTn compression/decompression unavailable"
and
"Mesa warning: XGetGeometry failed!"

I downloaded the libtxc_dxtn from http://www.t2-project.org/packages/libtxc-dxtn.html but that did not seem to fix the problem. However, the debug message changed to:

"Mesa warning: software DXTn compression/decompression available"

Overall these two machines are
* Using the same version of Mesa
* Both using software rendering
* Both using the same version of wine

which leads me to believe this must be a subtle dependency problem, either at runtime or build time. At this point though, I would have no idea what could affect the rendering in such a way. Any suggestions would be greatly appreciated.

Thank you,
John
From: Ethan G. <ee...@fa...> - 2009-10-27 09:39:27
On Tue, 27 Oct 2009 11:33:21 +0800 Chia-I Wu <ol...@gm...> wrote:
> On Tue, Oct 27, 2009 at 01:45:56AM +0100, Florent Monnier wrote:
> > In my Linux system I have installed the proprietary drivers for my video card, but I would like to test a program with the Mesa rendering. (The driver doesn't seem to support GLSL and I would like to test a program with it.) So I have tried to do what I have done before with other libs: I installed Mesa with ./configure --prefix=/tmp/Mesa, and after the install I do: export LD_LIBRARY_PATH=/tmp/Mesa/lib. Then I run my program in the same session. With other libs this works fine, but here my program still runs with hardware acceleration. So you can guess the question: how can I test a program with Mesa without uninstalling my hardware drivers?
>
> With LD_LIBRARY_PATH set, you can
>
> $ ldd <your-program>
>
> to verify the libraries from mesa are used. If it already is, and you are under X, you can run
>
> $ glxinfo
>
> to see if it is doing direct rendering, and which renderer is used.
>
> If glxinfo reports direct rendering, you can make sure the software DRI driver is used by setting LIBGL_ALWAYS_SOFTWARE=1.
>
> Another way is to use an Xlib-based GLX emulation. You can configure mesa with
>
> $ ./configure --with-driver=xlib
>
> to have a libGL.so that does not talk GLX.

When I wanted to test software rendering I didn't know of the LIBGL_ALWAYS_SOFTWARE variable. I configured Mesa like that, but rather than mess with my system Mesa I configured it to install under /opt, like so:

$ ./configure --prefix=/opt/mesa-standalone --with-driver=xlib

Then when I want to test if something works with Mesa, I use a Bourne shell feature to set $LD_LIBRARY_PATH just for the one program:

$ LD_LIBRARY_PATH=/opt/mesa-standalone/lib program args

--
Ethan Grammatikidis

Those who are slower at parsing information must necessarily be faster at problem-solving.
From: Chia-I Wu <ol...@gm...> - 2009-10-27 05:21:39
On Tue, Oct 27, 2009 at 12:10 PM, asimov03 <agg...@gm...> wrote:
> I need some help. I am trying to find the source code of glReadPixels(...). I have looked and looked but am not sure which file contains it. I did find the .h with the function declaration, but not the code itself. I am using Visual Studio 2005 on WinXP. Any help in locating this function will be appreciated.

Mesa uses a dispatch table. glReadPixels will be dispatched to _mesa_ReadPixels immediately.
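To make the dispatch-table idea concrete, here is a simplified sketch; the types and names are illustrative, not Mesa's actual definitions, and plain int/unsigned stand in for the GL typedefs.

    /* Illustrative sketch of a GL dispatch table -- not Mesa's real code.
     * Each public entry point jumps through a per-context table, so a
     * call to glReadPixels lands in whatever function the driver
     * installed in its slot, e.g. _mesa_ReadPixels for the software
     * path. */
    typedef void (*ReadPixelsFunc)(int x, int y, int width, int height,
                                   unsigned format, unsigned type,
                                   void *pixels);

    struct dispatch_table {
       ReadPixelsFunc ReadPixels;
       /* ...one slot per GL entry point... */
    };

    /* Hypothetical accessor for the current context's table. */
    extern struct dispatch_table *current_dispatch(void);

    void glReadPixels_sketch(int x, int y, int width, int height,
                             unsigned format, unsigned type, void *pixels)
    {
       /* The entry point itself does no work; it forwards the call. */
       current_dispatch()->ReadPixels(x, y, width, height,
                                      format, type, pixels);
    }

This indirection is why the "source code of glReadPixels" is hard to find by grepping: the real implementation lives behind the table slot, not under the public name.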
From: asimov03 <agg...@gm...> - 2009-10-27 04:10:19
Hi Guys,

I need some help. I am trying to find the source code of glReadPixels(...). I have looked and looked but am not sure which file contains it. I did find the .h with the function declaration, but not the code itself. I am using Visual Studio 2005 on WinXP. Any help in locating this function will be appreciated.
From: Chia-I Wu <ol...@gm...> - 2009-10-27 03:32:06
On Tue, Oct 27, 2009 at 01:45:56AM +0100, Florent Monnier wrote:
> In my Linux system I have installed the proprietary drivers for my video card, but I would like to test a program with the Mesa rendering. (The driver doesn't seem to support GLSL and I would like to test a program with it.) So I have tried to do what I have done before with other libs: I installed Mesa with ./configure --prefix=/tmp/Mesa, and after the install I do: export LD_LIBRARY_PATH=/tmp/Mesa/lib. Then I run my program in the same session. With other libs this works fine, but here my program still runs with hardware acceleration. So you can guess the question: how can I test a program with Mesa without uninstalling my hardware drivers?

With LD_LIBRARY_PATH set, you can

$ ldd <your-program>

to verify the libraries from mesa are used. If it already is, and you are under X, you can run

$ glxinfo

to see if it is doing direct rendering, and which renderer is used.

If glxinfo reports direct rendering, you can make sure the software DRI driver is used by setting LIBGL_ALWAYS_SOFTWARE=1.

Another way is to use an Xlib-based GLX emulation. You can configure mesa with

$ ./configure --with-driver=xlib

to have a libGL.so that does not talk GLX.

--
Regards, olv
From: Florent M. <fmo...@li...> - 2009-10-27 01:04:12
Hi,

In my Linux system I have installed the proprietary drivers for my video card, but I would like to test a program with the Mesa rendering. (The driver doesn't seem to support GLSL and I would like to test a program with it.) So I have tried to do what I have done before with other libs: I installed Mesa with ./configure --prefix=/tmp/Mesa, and after the install I do: export LD_LIBRARY_PATH=/tmp/Mesa/lib. Then I run my program in the same session. With other libs this works fine, but here my program still runs with hardware acceleration.

So you can guess the question: how can I test a program with Mesa without uninstalling my hardware drivers?

Thanks in advance
From: Arthur M. <art...@in...> - 2009-10-11 13:44:22
Arthur Marsh wrote, on 11/10/09 23:48:

I forgot to mention that the instructions were the ones at: http://www.x.org/wiki/radeonhd:experimental_3D

> Thanks, it turns out that I had missed the second step below:
>
> How to build drm stuff:
>
> git clone git://anongit.freedesktop.org/~agd5f/drm
> cd drm
> git checkout -t -b r6xx-r7xx-3d origin/r6xx-r7xx-3d
> ./autogen.sh --prefix=$(pkg-config --variable=prefix libdrm) --libdir=$(pkg-config --variable=libdir libdrm) --includedir=$(pkg-config --variable=includedir libdrm)
> make
> sudo make install
>
> cd linux-core
> make
> sudo make install
>
> This required me to re-build my current kernel to provide all that the build process was looking for.
>
> After these steps and installing the new mesa I have a fair degree of success.
>
> glxgears with the radeon driver reports just under 1000 fps, and with the 1.3.0 radeonhd driver it reports just over 1000 fps.
>
> Now I can get 40 fps playing etracer at 1024x768 on a dual core Athlon 64 on a motherboard with an on-board Radeon HD 3200, but the keyboard response during play or something isn't quite right, as I'm getting very low speeds from the racing tux (i.e. not enough for tux to get airborne much).
From: Arthur M. <art...@in...> - 2009-10-11 13:19:11
Cooper Yuan wrote, on 11/10/09 17:57:
> Try to pull the latest mesa/drm.
> Cooper
>
> On Sat, Oct 10, 2009 at 7:09 PM, Arthur Marsh <art...@in...> wrote:
>> Hi, I did an apt-get source mesa and obtained the 7.6 sources, then added r600 into debian/rules and attempted to build, but it failed at this stage:
>>
>> gcc -c -I. -I../../../../../src/mesa/drivers/dri/common -Iserver -I../../../../../include -I../../../../../src/mesa -I../../../../../src/egl/main -I../../../../../src/egl/drivers/dri -I/usr/include/drm -Wall -g -O2 -Wall -Wmissing-prototypes -std=c99 -ffast-math -fno-strict-aliasing -fPIC -DUSE_X86_64_ASM -D_GNU_SOURCE -DPTHREADS -DHAVE_POSIX_MEMALIGN -DGLX_USE_TLS -DPTHREADS -DUSE_EXTERNAL_DXTN_LIB=1 -DIN_DRI_DRIVER -DGLX_DIRECT_RENDERING -DGLX_INDIRECT_RENDERING -DHAVE_ALIAS -DCOMPILE_R600 -DR200_MERGED=0 -DRADEON_COMMON=1 -DRADEON_COMMON_FOR_R600 r600_cmdbuf.c -o r600_cmdbuf.o
>> r600_cmdbuf.c: In function ‘r600_cs_emit’:
>> r600_cmdbuf.c:331: error: storage size of ‘cs_cmd’ isn’t known
>> r600_cmdbuf.c:332: error: array type has incomplete element type
>> r600_cmdbuf.c:353: error: ‘RADEON_CHUNK_ID_IB’ undeclared (first use in this function)
>> r600_cmdbuf.c:353: error: (Each undeclared identifier is reported only once
>> r600_cmdbuf.c:353: error: for each function it appears in.)
>> r600_cmdbuf.c:358: error: ‘RADEON_CHUNK_ID_RELOCS’ undeclared (first use in this function)
>> r600_cmdbuf.c:373: error: ‘DRM_RADEON_CS’ undeclared (first use in this function)
>> r600_cmdbuf.c:332: warning: unused variable ‘cs_chunk’
>> r600_cmdbuf.c:331: warning: unused variable ‘cs_cmd’

Thanks, it turns out that I had missed the second step below:

How to build drm stuff:

git clone git://anongit.freedesktop.org/~agd5f/drm
cd drm
git checkout -t -b r6xx-r7xx-3d origin/r6xx-r7xx-3d
./autogen.sh --prefix=$(pkg-config --variable=prefix libdrm) --libdir=$(pkg-config --variable=libdir libdrm) --includedir=$(pkg-config --variable=includedir libdrm)
make
sudo make install

cd linux-core
make
sudo make install

This required me to re-build my current kernel to provide all that the build process was looking for.

After these steps and installing the new mesa I have a fair degree of success.

glxgears with the radeon driver reports just under 1000 fps, and with the 1.3.0 radeonhd driver it reports just over 1000 fps.

Now I can get 40 fps playing etracer at 1024x768 on a dual core Athlon 64 on a motherboard with an on-board Radeon HD 3200, but the keyboard response during play or something isn't quite right, as I'm getting very low speeds from the racing tux (i.e. not enough for tux to get airborne much).