From: Juhana S. <ko...@ni...> - 2005-07-27 17:44:21
Hello. I found a reason why neither ATI nor NVIDIA provides us hardware
details: http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf

Regarding ATI: "This performance drop is almost entirely due to 8.2%
difference in the game test 4 result, which means that the test was also
detected and somehow altered by the ATI drivers."

Nvidia is worse: they have 8 cheats in their driver.

It is no wonder they don't want to release the hardware details. They
simply don't want a driver which does not contain the cheats.

Please continue developing reverse-engineered, open sourced drivers.

Juhana

--
http://music.columbia.edu/mailman/listinfo/linux-graphics-dev
for developers of open source graphics software
From: Roland S. <rsc...@hi...> - 2005-07-27 18:43:14
Juhana Sadeharju wrote:
> Hello. I found a reason why neither ATI nor NVIDIA provides us hardware
> details:
> http://www.futuremark.com/companyinfo/3dmark03_audit_report.pdf
>
> Regarding ATI: "This performance drop is almost entirely due to 8.2%
> difference in the game test 4 result, which means that the test was
> also detected and somehow altered by the ATI drivers."
>
> Nvidia is worse: they have 8 cheats in their driver.
Ugh, this is OLD news. You're 2 years late...

> It is no wonder they don't want to release the hardware details.
> They simply don't want a driver which does not contain the cheats.
Why would that make any difference for them? After all, the open-source
driver would be slower without the cheats, so they could provide an
additional reason why you should use the closed-source driver from them
(not that it wouldn't likely be faster anyway, even without cheating...).

Btw, both Nvidia and ATI use tricks which aren't really cheats, for
instance the "brilinear" filtering and aniso filtering optimizations
(and afaik aniso filtering isn't fully specified, so you can't even
really cheat there even if you wanted to, though there is some general
expectation of what it should do). You can control at least some of
these optimizations in the driver control panels (though every now and
then, usually when new cards are launched, disabling some of the
optimizations mysteriously won't work, a "bug" which is usually fixed
once the initial reviews of the cards are over...).
"Brilinear" is supported for r200 even in the open-source driver, though
it's manually controlled and certainly not used by default. At one point
I even experimented with autocompressing textures (which is what ati's
driver does), though I gave that up as its usefulness seemed limited
(and it's not opengl conformant).
And nowadays even app-detection "cheats" are usually not considered
evil, as long as the same output is guaranteed (and it's not just
optimizing for a benchmark run, e.g. the static clip planes nvidia did
for 3dmark03). Though it's probably something you'd want to stay far
away from in a driver developed by the community, as it certainly
increases driver complexity - you want good general case performance,
not lots of app-specific optimized paths just to increase performance
in those particular apps by 3%.

> Please continue developing reverse-engineered, open sourced drivers.
As time permits...

Roland
From: Patrick M. <pmc...@do...> - 2005-07-27 18:57:06
On Wednesday 27 July 2005 02:43 pm, Roland Scheidegger wrote:
> Juhana Sadeharju wrote:
> > Please continue developing reverse-engineered, open sourced drivers.
> As time permits...

Heh, the only thing I want is GL ARB fragment shaders accelerated as much
as possible by R200 hardware. I don't see that happening with ATI's binary
drivers, they only support the old ATI pre-ARB fragment shader interface.

--
Patrick "Diablo-D3" McFarland || pmc...@do...
"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
we'd all be running around in darkened rooms, munching magic pills and
listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
Inc, 1989
From: Roland S. <rsc...@hi...> - 2005-07-27 19:18:23
Patrick McFarland wrote:
> On Wednesday 27 July 2005 02:43 pm, Roland Scheidegger wrote:
>
>> Juhana Sadeharju wrote:
>>
>>> Please continue developing reverse-engineered, open sourced drivers.
>>
>> As time permits...
>
> Heh, the only thing I want is GL ARB fragment shaders accelerated as much
> as possible by R200 hardware. I don't see that happening with ATI's binary
> drivers, they only support the old ATI pre-ARB fragment shader interface.
I'm not sure if it would be useful to even try something like that. Not
only would you violate the spec (the hardware doesn't support the required
precision/range of values), but if you tried to compile a shader you'd
likely figure out it won't fit into these 8 instruction slots anyway (ok,
if we figured out how to pass the values from stage 1 to stage 2 we would
get 16 slots, and we could even do 1 level of dependent texture reads).
But there are probably a ton of things you couldn't directly fit to the
hardware and would need to expand to multiple instructions.

Roland
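[Archive note: for readers unfamiliar with the two-stage model Roland
describes, the sketch below shows roughly what the r200's native fragment
interface (GL_ATI_fragment_shader) looks like from the application side:
at most two passes, each with up to 8 texture sampling and 8 arithmetic
instructions. This is only an illustration, not code from the driver;
"shader_id" is assumed to come from glGenFragmentShadersATI(), and the
extension entry points are assumed to be resolved already (e.g. via
GL_GLEXT_PROTOTYPES or glXGetProcAddress).]

    #include <GL/gl.h>
    #include <GL/glext.h>

    /* Build a trivial "modulate two textures" shader; this uses one of
     * the 8 arithmetic slots of the first pass. */
    static void build_modulate_shader(GLuint shader_id)
    {
        glBindFragmentShaderATI(shader_id);
        glBeginFragmentShaderATI();

        /* First pass: sample texture units 0 and 1 into temp registers. */
        glSampleMapATI(GL_REG_0_ATI, GL_TEXTURE0_ARB, GL_SWIZZLE_STR_ATI);
        glSampleMapATI(GL_REG_1_ATI, GL_TEXTURE1_ARB, GL_SWIZZLE_STR_ATI);

        /* One arithmetic instruction: REG0 = REG0 * REG1. */
        glColorFragmentOp2ATI(GL_MUL_ATI, GL_REG_0_ATI, GL_NONE, GL_NONE,
                              GL_REG_0_ATI, GL_NONE, GL_NONE,
                              GL_REG_1_ATI, GL_NONE, GL_NONE);

        /* Issuing further SampleMapATI/PassTexCoordATI calls here would
         * start the second pass Roland mentions (the "16 slots" case),
         * which is also how the single level of dependent texture reads
         * is expressed. */

        glEndFragmentShaderATI();
    }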
From: Patrick M. <pmc...@do...> - 2005-07-27 19:34:10
On Wednesday 27 July 2005 03:18 pm, Roland Scheidegger wrote:
> Patrick McFarland wrote:
> > Heh, the only thing I want is GL ARB fragment shaders accelerated as much
> > as possible by R200 hardware. I don't see that happening with ATI's
> > binary drivers, they only support the old ATI pre-ARB fragment shader
> > interface.
>
> I'm not sure if it would be useful to even try something like that. Not
> only would you violate the spec (the hardware doesn't support the required
> precision/range of values), but if you tried to compile a shader you'd
> likely figure out it won't fit into these 8 instruction slots anyway (ok,
> if we figured out how to pass the values from stage 1 to stage 2 we would
> get 16 slots, and we could even do 1 level of dependent texture reads).
> But there are probably a ton of things you couldn't directly fit to the
> hardware and would need to expand to multiple instructions.

Even if we violate the precision/range stuff, being able to accelerate
simplistic shaders would be quite useful. It's better than not having a
software implementation of the shader pipeline.

Also, what stops you from splitting up a shader, and running the pieces
back to back over multiple passes? Can't you emulate longer shaders doing
that?

--
Patrick "Diablo-D3" McFarland || pmc...@do...
"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
we'd all be running around in darkened rooms, munching magic pills and
listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
Inc, 1989
From: Ian R. <id...@us...> - 2005-07-27 20:55:07
Patrick McFarland wrote:
> Even if we violate the precision/range stuff, being able to accelerate
> simplistic shaders would be quite useful. It's better than not having a
> software implementation of the shader pipeline.
The problem is that most shaders that use ARB_fp or NV_fp aren't
simplistic enough. It would be a *lot* of work to benefit 1% of
real-world shaders.

> Also, what stops you from splitting up a shader, and running the pieces
> back to back over multiple passes? Can't you emulate longer shaders
> doing that?
So, I looked into this really deeply in the past for other things. The
problem is that it gets *very* hard to deal with framebuffer blend modes.
If you have an arbitrary triangle list, triangles in the list may overlap.
If you have a framebuffer blend mode other than dst=src, you can't
multipass it (generally) without breaking up the triangle list and
sending one triangle at a time. It would not surprise me at all if the
performance there was close to that of a good software implementation.

This, BTW, is what ATI's "fbuffer" is all about.
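[Archive note: a toy numeric illustration of the overlap problem Ian
describes, assuming an additive framebuffer blend and a shader split into
"write x" (pass 1) followed by "square it" (pass 2); the numbers are made
up purely for illustration.]

    #include <stdio.h>

    /* Full shader: f(x) = x*x. Blend mode: additive.
     * Two overlapping fragments a and b land on the same pixel. */
    int main(void)
    {
        float a = 0.25f, b = 0.5f;

        /* Correct single pass: shade each fragment, then blend. */
        float single_pass = a * a + b * b;                    /* 0.3125 */

        /* Naive multipass: pass 1 writes the intermediates, which get
         * blended together in the framebuffer; pass 2 then squares
         * whatever is left in the buffer. */
        float intermediate    = a + b;
        float naive_multipass = intermediate * intermediate;  /* 0.5625 */

        printf("single pass %f vs naive multipass %f\n",
               single_pass, naive_multipass);
        return 0;
    }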
From: Patrick M. <pmc...@do...> - 2005-07-27 21:15:36
On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:
> Patrick McFarland wrote:
> > Even if we violate the precision/range stuff, being able to accelerate
> > simplistic shaders would be quite useful. It's better than not having a
> > software implementation of the shader pipeline.
>
> The problem is that most shaders that use ARB_fp or NV_fp aren't
> simplistic enough. It would be a *lot* of work to benefit 1% of
> real-world shaders.

I think ATI really screwed R200 owners then. The shader pipeline
ultimately is useless.

> > Also, what stops you from splitting up a shader, and running the pieces
> > back to back over multiple passes? Can't you emulate longer shaders
> > doing that?
>
> So, I looked into this really deeply in the past for other things. The
> problem is that it gets *very* hard to deal with framebuffer blend modes.
> If you have an arbitrary triangle list, triangles in the list may overlap.
> If you have a framebuffer blend mode other than dst=src, you can't
> multipass it (generally) without breaking up the triangle list and
> sending one triangle at a time. It would not surprise me at all if the
> performance there was close to that of a good software implementation.

So, how many games use blend modes other than dst=src? Also, even if it
isn't faster than a good software implementation, it's still less work
done by the CPU. I own a pretty outdated P3 550, and I'd rather have any
sort of boost I can get.

> This, BTW, is what ATI's "fbuffer" is all about.

I'm trying to find more information about this "fbuffer", but Google
isn't being too friendly.

--
Patrick "Diablo-D3" McFarland || pmc...@do...
"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
we'd all be running around in darkened rooms, munching magic pills and
listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
Inc, 1989
From: Matthias H. <mh...@su...> - 2005-07-28 15:08:56
On Jul 27, 05 17:14:25 -0400, Patrick McFarland wrote:
> On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:
> > > Also, what stops you from splitting up a shader, and running the pieces
> > > back to back over multiple passes? Can't you emulate longer shaders
> > > doing that?

No. Temporary registers will stab you in the back. And even with multiple
render targets (does the R200 support that already?) you wouldn't have
enough targets for the available number of registers.

And you do not *want* to support that. It would be dog slow. Better to
let the application choose a render path that does not need pixel
shaders and gives you a decent frame rate at worse quality. It can only
do so by recognizing that your card does *not* support pixel shaders.

The only valid approach would be to create another pixel shader
extension that only exposes features the R200 is able to implement. Have
fun hacking this yourself! It is tons of work. And there's no
application out there that would use it.

> > So, I looked into this really deeply in the past for other things. The
> > problem is that it gets *very* hard to deal with framebuffer blend modes.
> > If you have an arbitrary triangle list, triangles in the list may overlap.

You *could* work with offscreen memory and per-triangle multiple passes.
You don't want to, though.

> So, how many games use blend modes other than dst=src? Also, even if it isn't

All.

> > This, BTW, is what ATI's "fbuffer" is all about.
>
> I'm trying to find more information about this "fbuffer", but Google isn't
> being too friendly.

It's not Google's fault. Back in University I used to hack a lot for my
PhD thesis using the latest OpenGL features. fbuffer is something ATI
used in advertising and nowhere else. They claim they can store the
intermediate pixel shader state in this fbuffer and make multipass
shaders work transparently this way. I doubt it ever worked, because you
shouldn't have instruction limits in this case, but you do, and much
smaller limits than the GeForce has. I also seem to remember that the
fbuffer advertising only appeared with the R300. So no help for R200.

> Patrick "Diablo-D3" McFarland || pmc...@do...
> "Computer games don't affect kids; I mean if Pac-Man affected us as kids,
> we'd all be running around in darkened rooms, munching magic pills and
> listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
> Inc, 1989

I hate to say it, but some seem to do ;->>>

CU

Matthias

--
Matthias Hopf <mh...@su...>        __   __   __
Maxfeldstr. 5 / 90409 Nuernberg   (_   | |  (_   |__          ma...@ms...
Phone +49-911-74053-715           __)  |_|  __)  |__  labs    www.mshopf.de
From: Ian R. <id...@us...> - 2005-07-28 17:02:19
Matthias Hopf wrote:
> On Jul 27, 05 17:14:25 -0400, Patrick McFarland wrote:
>> On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:
>>
>>> This, BTW, is what ATI's "fbuffer" is all about.
>>
>> I'm trying to find more information about this "fbuffer", but Google isn't
>> being too friendly.
>
> It's not Google's fault. Back in University I used to hack a lot for my
> PhD thesis using the latest OpenGL features. fbuffer is something ATI
> used in advertising and nowhere else. They claim they can store the
> intermediate pixel shader state in this fbuffer and make multipass
> shaders work transparently this way.

Searching "fbuffer" gives nothing useful, but searching "f-buffer" does.
Try the first hit:

http://graphics.stanford.edu/projects/shading/pubs/hwws2001-fbuffer/
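[Archive note: the gist of that paper, very roughly: intermediate
fragment results are stored per rasterized fragment in rasterization
order, rather than per pixel, so overlapping fragments keep separate
entries and a later pass can consume them in the same order. Below is a
CPU-side toy model of the idea, not any real API; it reuses the
"square, then blend additively" split-shader example from earlier in the
thread.]

    #include <stdio.h>

    #define MAX_FRAGS 1024
    #define WIDTH     16

    /* Toy F-buffer: one entry per rasterized fragment, in order. */
    struct frag { int x, y; float intermediate; };

    static struct frag fbuf[MAX_FRAGS];
    static int nfrags;

    /* Pass 1: the first half of the split shader appends its result;
     * overlapping fragments get distinct entries instead of being
     * blended into each other. */
    static void pass1_emit(int x, int y, float first_half)
    {
        fbuf[nfrags].x = x;
        fbuf[nfrags].y = y;
        fbuf[nfrags].intermediate = first_half;
        nfrags++;
    }

    /* Pass 2: run the second half in the same order and only now blend
     * into the real framebuffer. */
    static void pass2_resolve(float *framebuffer)
    {
        for (int i = 0; i < nfrags; i++) {
            float shaded = fbuf[i].intermediate * fbuf[i].intermediate;
            framebuffer[fbuf[i].y * WIDTH + fbuf[i].x] += shaded;
        }
    }

    int main(void)
    {
        static float framebuffer[WIDTH * WIDTH];
        pass1_emit(3, 4, 0.25f);   /* fragment a */
        pass1_emit(3, 4, 0.5f);    /* overlapping fragment b */
        pass2_resolve(framebuffer);
        printf("pixel (3,4) = %f\n", framebuffer[4 * WIDTH + 3]); /* 0.3125 */
        return 0;
    }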
From: Roland S. <rsc...@hi...> - 2005-07-27 21:35:28
Patrick McFarland wrote:
> On Wednesday 27 July 2005 04:54 pm, Ian Romanick wrote:
>
>> Patrick McFarland wrote:
>>
>>> Even if we violate the precision/range stuff, being able to accelerate
>>> simplistic shaders would be quite useful. It's better than not having a
>>> software implementation of the shader pipeline.
>>
>> The problem is that most shaders that use ARB_fp or NV_fp aren't
>> simplistic enough. It would be a *lot* of work to benefit 1% of
>> real-world shaders.
>
> I think ATI really screwed R200 owners then. The shader pipeline
> ultimately is useless.
Erm, I think you're asking a bit too much of a card of this generation.
It's not like you could do it on a GF3, for example...
The only way a card of that generation could have supported such future,
unknown functionality is if the fragment pipeline had been a lot more
generic - but then again it probably would have been too slow to be
really useful, not to mention that the longer your program, the more
you'll generally be hurting from the lack of precision.
And I wouldn't say it's really useless. There ARE some apps out there
which indeed can make use of it quite well (doom3 for example, and there
are quite a few directx applications out there which use the equivalent
in directx (ps 1.4) too).

Roland
From: Patrick M. <pmc...@do...> - 2005-07-27 22:06:21
On Wednesday 27 July 2005 05:35 pm, Roland Scheidegger wrote:
> Patrick McFarland wrote:
> > I think ATI really screwed R200 owners then. The shader pipeline
> > ultimately is useless.
>
> Erm, I think you're asking a bit too much of a card of this generation.
> It's not like you could do it on a GF3, for example...
> The only way a card of that generation could have supported such future,
> unknown functionality is if the fragment pipeline had been a lot more
> generic - but then again it probably would have been too slow to be
> really useful, not to mention that the longer your program, the more
> you'll generally be hurting from the lack of precision.
> And I wouldn't say it's really useless. There ARE some apps out there
> which indeed can make use of it quite well (doom3 for example, and there
> are quite a few directx applications out there which use the equivalent
> in directx (ps 1.4) too).

So why can Doom 3 use R200 pixel shaders, and DRI can't?

--
Patrick "Diablo-D3" McFarland || pmc...@do...
"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
we'd all be running around in darkened rooms, munching magic pills and
listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
Inc, 1989
From: Adam J. <aj...@nw...> - 2005-07-27 22:16:24
On Wednesday 27 July 2005 18:05, Patrick McFarland wrote:
> So why can Doom 3 use R200 pixel shaders, and DRI can't?

Doom3's r200 shader pipeline gives different (read: worse) output than
their arb shader pipeline. They have the liberty of knowing what visual
quality they can sacrifice. The driver doesn't.

- ajax
From: Patrick M. <pmc...@do...> - 2005-07-28 03:40:39
On Wednesday 27 July 2005 06:16 pm, Adam Jackson wrote:
> On Wednesday 27 July 2005 18:05, Patrick McFarland wrote:
> > So why can Doom 3 use R200 pixel shaders, and DRI can't?
>
> Doom3's r200 shader pipeline gives different (read: worse) output than
> their arb shader pipeline. They have the liberty of knowing what visual
> quality they can sacrifice. The driver doesn't.

This is the sound of me hating ATI for making such useless pixel shaders.

--
Patrick "Diablo-D3" McFarland || pmc...@do...
"Computer games don't affect kids; I mean if Pac-Man affected us as kids,
we'd all be running around in darkened rooms, munching magic pills and
listening to repetitive electronic music." -- Kristian Wilson, Nintendo,
Inc, 1989
From: Alan G. <ala...@st...> - 2005-07-28 04:08:59
I spent all day dl'ing and installing:

#####
This is a pre-release version of the The X.Org Foundation X11.
X Window System Version 6.8.99.1
#####

And my reward for spending $40 on a card that appeared to be supported
by Linux?

#######################
atg@leenooks ~/Croquet0.3 $ ppracer
PPRacer 0.3.1 -- http://racer.planetpenguin.de
(c) 2004-2005 The PPRacer team
(c) 1999-2001 Jasmin F. Patry<jf...@su...>
PPRacer comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions.
See http://www.gnu.org/copyleft/gpl.html for details.

libGL warning: 3D driver returned no fbconfigs.
libGL error: InitDriver failed
libGL error: reverting to (slow) indirect rendering

%%% ppracer warning: Warning: Couldn't set 22050 Hz 16-bit audio
    Reason: Could not open sound device

atg@leenooks ~/Croquet0.3 $
######################

--
Friends don't let friends use GCC 3.4.4
GCC 3.3.6 produces code that's twice as fast on x86!
From: Adam J. <aj...@nw...> - 2005-07-28 04:16:37
On Thursday 28 July 2005 01:09, Alan Grimes wrote:
> I spent all day dl'ing and installing:
>
> #####
> This is a pre-release version of the The X.Org Foundation X11.
> X Window System Version 6.8.99.1
> #####
>
> And my reward for spending $40 on a card that appeared to be supported
> by Linux?

Why you installed a prerelease version of X, to get support for a card
(9250, right?) that was definitely supported in the last release, is a
bit of a mystery.

Why you also chose to install 6.8.99.1 when the snapshots are up to about
6.8.99.15 is more of a mystery.

> libGL warning: 3D driver returned no fbconfigs.
> libGL error: InitDriver failed
> libGL error: reverting to (slow) indirect rendering

This, however, is no mystery. The 2D driver in Xorg enables color tiling
by default, but the bundled DRI driver doesn't understand color tiling
yet. This will be fixed in 7.0, in the meantime:

Option "ColorTiling" "off"

- ajax
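[Archive note: for anyone hitting the same error, a sketch of where that
option goes, in the radeon "Device" section of xorg.conf; the Identifier
(and any BusID line) will of course differ per setup.]

    Section "Device"
        Identifier "Radeon 9250"
        Driver     "radeon"
        Option     "ColorTiling" "off"
    EndSection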
From: Ian R. <id...@us...> - 2005-07-28 05:46:32
Patrick McFarland wrote:
> On Wednesday 27 July 2005 06:16 pm, Adam Jackson wrote:
>
>> On Wednesday 27 July 2005 18:05, Patrick McFarland wrote:
>>
>>> So why can Doom 3 use R200 pixel shaders, and DRI can't?
>>
>> Doom3's r200 shader pipeline gives different (read: worse) output than
>> their arb shader pipeline. They have the liberty of knowing what visual
>> quality they can sacrifice. The driver doesn't.
>
> This is the sound of me hating ATI for making such useless pixel shaders.

Gee...when they came out they were the *most functional* pixel shaders
available. Why are you complaining that a 5-year-old chip doesn't have
the latest features? Uh....duh?
From: Wladimir v. d. L. <la...@gm...> - 2005-07-28 09:52:27
> This is the sound of me hating ATI for making such useless pixel shaders.

Hey hey, calm down a little there. Up until the R300, ATI has been at the
forefront of implementing new features on their chips, like:

- 3d textures. NVidia only came up with those in the FX5 series, ati had
  them already in the Radeon7500.
- Multi render target support. ATI r300 can do it, NVidia FX5 series cannot.
- Floating point textures. ATI r300 can do it perfectly, NVidia FX5
  series is limited to TEXTURE_RECTANGLE.
...

ATI's Linux drivers are a crippled bunch, that's for a fact, and that's
a big reason why open source r300 drivers get so much attention. But
don't offend their chip designers :)

--
Wladimir
Ogre3D Team (http://www.ogre3d.org)
From: Matthias H. <mh...@su...> - 2005-07-28 15:14:25
On Jul 28, 05 11:52:10 +0200, Wladimir van der Laan wrote:
> > This is the sound of me hating ATI for making such useless pixel shaders.

R200 did not really have pixel shaders. They had a configurable pixel
pipeline, that's different. Comparable to GeForce2, a little bit better.

> Hey hey, calm down a little there. Up until the R300, ATI has been at the
> forefront of implementing new features on their chips, like:
>
> - 3d textures. NVidia only came up with those in the FX5 series, ati had
>   them already in the Radeon7500.

The GeForce3 had 3D textures (except for an early sample we had at
University :-/ ), and IIRC this was before the Radeon7500.

> - Multi render target support. ATI r300 can do it, NVidia FX5 series cannot.

Right, the lack of multiple render targets sucked.

> - Floating point textures. ATI r300 can do it perfectly, NVidia FX5
>   series is limited to TEXTURE_RECTANGLE.

Well - sort of. R300 still does not do IEEE computations in its pixel
shader (I think R400 doesn't either), which gives you crappy results
for GPGPU applications.

> ATI's Linux drivers are a crippled bunch, that's for a fact, and that's
> a big reason why open source r300 drivers get so much attention. But
> don't offend their chip designers :)

Yep. They used to do good hardware. Have fallen behind a bit compared
to GeForce6, but not much.

> Wladimir
> Ogre3D Team (http://www.ogre3d.org)

Nice engine, BTW!

Matthias

--
Matthias Hopf <mh...@su...>        __   __   __
Maxfeldstr. 5 / 90409 Nuernberg   (_   | |  (_   |__          ma...@ms...
Phone +49-911-74053-715           __)  |_|  __)  |__  labs    www.mshopf.de
From: Roland S. <rsc...@hi...> - 2005-07-28 17:29:25
Matthias Hopf wrote:
> On Jul 28, 05 11:52:10 +0200, Wladimir van der Laan wrote:
>
>>> This is the sound of me hating ATI for making such useless pixel
>>> shaders.
>
> R200 did not really have pixel shaders. They had a configurable pixel
> pipeline, that's different. Comparable to GeForce2, a little bit
> better.
Looks better to me than GeForce3/4, really, if only for the dependent
texture read. You nowadays indeed have (directx) games which have PS 1.4
(thus r200) as minimum, but won't work on GF3/4.

> The GeForce3 had 3D textures (except for an early sample we had at
> University :-/ ), and IIRC this was before the Radeon7500.
If the Radeon7500 has it, the radeon 7200 (radeon sdr/ddr as it was
called) should have it too. And that was certainly way before geforce3.
It can also do some environment bump mapping (the open-source driver
can't, though).

> Well - sort of. R300 still does not do IEEE computations in its pixel
> shader (I think R400 doesn't either), which gives you crappy results
> for GPGPU applications.
True, but that's not really what these cards are intended for. R300 does
nice fast fp24 calculations, with FX5 you could choose between really
slow fp16 and even slower fp32 :-). Oh, or you could choose quite fast
int8...

>> ATI's Linux drivers are a crippled bunch, that's for a fact, and that's
>> a big reason why open source r300 drivers get so much attention. But
>> don't offend their chip designers :)
>
> Yep. They used to do good hardware. Have fallen behind a bit compared
> to GeForce6, but not much.
Well, r520 should do fp32, longer shaders and what not. The chip's late
though, we'll see.

Roland

(is it only me or is this all "slightly" offtopic?)
From: Matthias H. <mh...@su...> - 2005-08-10 12:38:08
On Jul 28, 05 19:29:19 +0200, Roland Scheidegger wrote:
> > R200 did not really have pixel shaders. They had a configurable pixel
> > pipeline, that's different. Comparable to GeForce2, a little bit
> > better.
> Looks better to me than GeForce3/4, really, if only for the dependent
> texture read. You nowadays indeed have (directx) games which have PS 1.4
> (thus r200) as minimum, but won't work on GF3/4.

GeForce3 didn't improve much here compared to GeForce2. You could have
dependent texture reads on GeForce3 as well (NV_texture_shader), but I
think that the later R2xx were a little bit more flexible.

> > The GeForce3 had 3D textures (except for an early sample we had at
> > University :-/ ), and IIRC this was before the Radeon7500.
> If the Radeon7500 has it, the radeon 7200 (radeon sdr/ddr as it was
> called) should have it too. And that was certainly way before geforce3.

It seems to have it as well. I don't remember the release dates, though.
I think I remember that the OpenGL drivers exposed this feature on the
GeForce first.

> > Well - sort of. R300 still does not do IEEE computations in its pixel
> > shader (I think R400 doesn't either), which gives you crappy results
> > for GPGPU applications.
> True, but that's not really what these cards are intended for. R300 does
> nice fast fp24 calculations, with FX5 you could choose between really
> slow fp16 and even slower fp32 :-). Oh, or you could choose quite fast
> int8...

Sort of. Well, fp16 was quite ok. And on FX6 this is really fast now.

> > Yep. They used to do good hardware. Have fallen behind a bit compared
> > to GeForce6, but not much.
> Well, r520 should do fp32, longer shaders and what not. The chip's late
> though, we'll see.

> (is it only me or is this all "slightly" offtopic?)

Yep, let's end it now :^)

Matthias

--
Matthias Hopf <mh...@su...>        __   __   __
Maxfeldstr. 5 / 90409 Nuernberg   (_   | |  (_   |__          ma...@ms...
Phone +49-911-74053-715           __)  |_|  __)  |__  labs    www.mshopf.de
From: Dave A. <ai...@li...> - 2005-07-28 00:17:18
> > So why can Doom 3 use R200 pixel shaders, and DRI can't?

And we currently don't implement the two extensions on r200 that Doom3
uses. I've still got a 90% finished ATI_fragment_shader implementation,
but I've had little time to pick it back up, and the only test code I had
was doom3. Unfortunately, when I enable fragment shaders it also needs
one of the other shader extensions we've implemented software-only, so it
started to look like a bigger job to accelerate it (if the r200 could do
it..).

Dave.

--
David Airlie, Software Engineer
http://www.skynet.ie/~airlied / airlied at skynet.ie
Linux kernel - DRI, VAX / pam_smb / ILUG