From: Bill W. <bil...@gr...> - 2000-09-22 13:56:30
|
Who is working on texture compression in Mesa? I have an implementation of
the ARB extension (mostly done by Brian Paul) and an implementation of the
GL_3DFX_texture_compression_FXT1 extension. These are in the tdfx-2-1-branch
branch of the DRI tree.

If someone else is working on this, and depends on the implementation, please
let me know. For something I am working on, the driver interface needs to
change. In particular, there is a routine which calculates the specific
texture compression format given a generic one. For example, given
GL_COMPRESSED_RGB_ARB, the routine might return GL_COMPRESSED_RGB_FXT1_3DFX.
This routine needs to take more parameters to support some additional
compressed texture formats.

Please let me know if you are working in this area, and I can give you more
details. The changes are generally pretty harmless, and I think they are
backward compatible, though I have not tested this.
|
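For concreteness, here is a minimal sketch of the kind of routine being
described. The function name and signature are hypothetical - the actual
Mesa/DRI hook may differ - but the GL tokens are the real ones from the two
extensions mentioned above:

    #include <GL/gl.h>

    /* Tokens from ARB_texture_compression and
     * 3DFX_texture_compression_FXT1, defined here in case the
     * local gl.h predates them. */
    #ifndef GL_COMPRESSED_RGB_ARB
    #define GL_COMPRESSED_RGB_ARB        0x84ED
    #define GL_COMPRESSED_RGBA_ARB       0x84EE
    #endif
    #ifndef GL_COMPRESSED_RGB_FXT1_3DFX
    #define GL_COMPRESSED_RGB_FXT1_3DFX  0x86B0
    #define GL_COMPRESSED_RGBA_FXT1_3DFX 0x86B1
    #endif

    /* Map a generic compressed internal format to a specific format the
     * hardware supports.  A 3dfx-style driver might choose FXT1: */
    GLenum chooseCompressedFormat(GLenum genericFormat)
    {
        switch (genericFormat) {
        case GL_COMPRESSED_RGB_ARB:
            return GL_COMPRESSED_RGB_FXT1_3DFX;
        case GL_COMPRESSED_RGBA_ARB:
            return GL_COMPRESSED_RGBA_FXT1_3DFX;
        default:
            return genericFormat; /* already specific, or not compressed */
        }
    }

The change Bill describes would add parameters to a routine like this (for
example, the base format or image dimensions) so the driver can choose among
several specific formats rather than exactly one per generic token.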
From: Brian P. <br...@va...> - 2000-09-22 15:38:50
|
Bill White wrote:
>
> Who is working on texture compression in Mesa? I have an implementation
> of the ARB extension (mostly done by Brian Paul) and an implementation of
> the GL_3DFX_texture_compression_FXT1 extension. These are in the
> tdfx-2-1-branch branch of the DRI tree.
>
> If someone else is working on this, and depends on the implementation,
> please let me know. For something I am working on, the driver interface
> needs to change. In particular, there is a routine which calculates the
> specific texture compression format given a generic one. For example,
> given GL_COMPRESSED_RGB_ARB, the routine might return
> GL_COMPRESSED_RGB_FXT1_3DFX. This routine needs to take more parameters
> to support some additional compressed texture formats.
>
> Please let me know if you are working in this area, and I can give you
> more details. The changes are generally pretty harmless, and I think they
> are backward compatible, though I have not tested this.

You and I are the only ones who've touched that code. Go ahead and make
your changes.

-Brian
|
From: Ian D R. <id...@cs...> - 2000-09-22 16:10:37
|
> Who is working on texture compression in Mesa? I have an implementation
> of the ARB extension (mostly done by Brian Paul) and an implementation of
> the GL_3DFX_texture_compression_FXT1 extension. These are in the
> tdfx-2-1-branch branch of the DRI tree.

Is this extension going to be supported only on 3dfx hardware, or will there
be a software fallback? It sure would be nice to make sure that one's code
works right with FXT1 textures without having to go out and buy a Voodoo
4/5/6. The same could also be done with S3TC format textures, I suppose.

If this is not work that is currently planned, could someone perhaps provide
some guidance for others that might want to try and tackle this?

--
"You must understand...that there are two ways of fighting: by law or by
force. The first way is natural to man, and the second to beasts. But as
the first way often proves inadequate one must have recourse to the second."
 -- Machiavelli, The Prince
|
From: Daryll S. <da...@va...> - 2000-09-22 17:43:26
|
On Fri, Sep 22, 2000 at 09:10:24AM -0700, Ian D Romanick wrote:
> Is this extension going to be supported only on 3dfx hardware, or will
> there be a software fallback? It sure would be nice to make sure that
> one's code works right with FXT1 textures without having to go out and
> buy a Voodoo 4/5/6. The same could also be done with S3TC format
> textures, I suppose.

I don't think there are plans for software fallbacks, but in reality you
are much better off using non-compressed textures than using software
fallbacks. What really happens is that other drivers that don't implement
the feature don't advertise it, and the application doesn't use it.

I am not a lawyer, but from what I understand FXT1 can be implemented in
software, because 3dfx made it open source and even provided code to do so.
S3TC is very heavily protected and I wouldn't want to go near it for a
software implementation for fear of legal issues.

> If this is not work that is currently planned, could someone perhaps
> provide some guidance for others that might want to try and tackle this?

Guidance: FXT1 should be portable, but isn't useful unless the hardware
you're using supports it. S3TC is supported on lots of hardware, but due to
legal constraints I wouldn't go near it.

- |Daryll
|
From: Ian D R. <id...@cs...> - 2000-09-22 18:28:55
|
> > Is this extension going to be supported only on 3dfx hardware, or will
> > there be a software fallback? It sure would be nice to make sure that
> > one's code works right with FXT1 textures without having to go out and
> > buy a Voodoo 4/5/6. The same could also be done with S3TC format
> > textures, I suppose.
>
> I don't think there are plans for software fallbacks, but in reality you
> are much better off using non-compressed textures than using software
> fallbacks. What really happens is that other drivers that don't implement
> the feature don't advertise it, and the application doesn't use it.

Well, it could be useful so that a person could write and test code that
uses that extension without having to have hardware that explicitly supports
it. I for one don't have the extra $$$ just sitting around to go out and
buy a Voodoo5, but I'd really like for my code to correctly support its
texture compression extensions.

> I am not a lawyer, but from what I understand FXT1 can be implemented in
> software, because 3dfx made it open source and even provided code to do
> so. S3TC is very heavily protected and I wouldn't want to go near it for
> a software implementation for fear of legal issues.

You are probably right there. It could possibly go either way, but I'd
rather not press my luck. :)

--
"The [300MHz P-II computer] is clearly not optimized for word processing.
But when we pushed it to produce high-end 3D graphics, it truly shined."
 -- Byte, Sept 97, pp. 33
What the hell does it take to do word processing?
http://www.cs.pdx.edu/~idr/
|
From: John S. <jo...@st...> - 2000-09-25 14:14:17
|
You could opt for a Voodoo4 so that you could test it against the same
hardware to see if you have compression working. That's one nice thing
about the VSA-100: it's a scalable product, so you don't have to go out and
blow $400 on the latest/greatest card to test out new functions. Just grab
the low end card and test it out... Should be out now/soon and only costs
around $100.

>Well, it could be useful so that a person could write and test code that
>uses that extension without having to have hardware that explicitly
>supports it. I for one don't have the extra $$$ just sitting around to go
>out and buy a Voodoo5, but I'd really like for my code to correctly
>support its texture compression extensions.

Which brings up another question: how long before speed isn't going to be
an issue anymore? I mean, if you can produce a card that'll do 1600x1200x32
at 60fps, how long is it going to take game creators to create a game with
enough detail to slow down the card? Not many of us play games with a 21"
monitor that will do 1600x1200. I think there would eventually have to be a
break point of production time vs. quality. Wow, look, this new card can
reproduce Toy Story quality animation... but how long is it going to take
them to make that game? If it takes 2 years to create a game, then the
functions of the 3d engine would already be out of date. This is probably
off topic, but it was an interesting conversation that came up at work a
few days ago.

John Strange
Systems Administrator
jo...@yo...
|
From: Nathan H. <na...@ma...> - 2000-09-25 14:39:10
|
On Mon, Sep 25, 2000 at 10:14:38AM -0400, John Strange wrote:
>
> Which brings up another question: how long before speed isn't going to be
> an issue anymore?

Never.

> I mean, if you can produce a card that'll do 1600x1200x32 at 60fps, how
> long is it going to take game creators to create a game with enough
> detail to slow down the card? Not many of us play games with a 21"
> monitor that will do 1600x1200.

People said the same thing about 320x200x256 games, and the same thing
about doom, and the same thing about quake, and now they're saying the
same thing about quake3. The lesson of this story is...

> I think there would eventually have to be a break point of production
> time vs. quality. Wow, look, this new card can reproduce Toy Story
> quality animation... but how long is it going to take them to make that
> game? If it takes 2 years to create a game, then the functions of the 3d
> engine would already be out of date. This is probably off topic, but it
> was an interesting conversation that came up at work a few days ago.

Production times are likely to increase. Hardware is getting much more
complicated and driver development is significantly harder than it was even
just a few years ago.
|
From: <ph...@bo...> - 2000-09-25 18:26:22
|
[ Nathan Hand writes ]
> > I mean, if you can produce a card that'll do 1600x1200x32 at 60fps, how
> > long is it going to take game creators to create a game with enough
> > detail to slow down the card? Not many of us play games with a 21"
> > monitor that will do 1600x1200.
>
> People said the same thing about 320x200x256 games, and the same thing
> about doom, and the same thing about quake, and now they're saying the
> same thing about quake3. The lesson of this story is...

The lesson of the story is: there IS a limit to the human eye, therefore
there IS a limit on this too. After all, increasing color depth from 8 to
16 to 32 bits stopped at 32 because that's all the eye can see.

Hopefully, more effort will then go into true 3d object acceleration and AI.
|
From: Gareth H. <ga...@va...> - 2000-09-25 23:24:17
|
Philip Brown wrote:
>
> The lesson of the story is: there IS a limit to the human eye, therefore
> there IS a limit on this too. After all, increasing color depth from 8 to
> 16 to 32 bits stopped at 32 because that's all the eye can see.

I don't think so. We need higher per-channel precision to handle all the
multipass rendering techniques being used these days. 8 bits per channel
won't cut it, I'm afraid - try doing say 20 passes with 32bpp (throughout
the pipeline) and you'll see what I mean.

I'd like to see full IEEE floats per channel - that would be fun :-)

-- Gareth
|
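A quick stand-alone illustration of Gareth's point (my own sketch, not Mesa
code): accumulate the same small contribution over 20 additive passes, once
in float and once rounded to 8 bits per channel after every pass, and watch
the quantized result drift:

    #include <stdio.h>

    int main(void)
    {
        float exact = 0.0f;          /* full-precision reference */
        unsigned char channel = 0;   /* one 8-bit channel of a 32bpp buffer */
        const float pass = 0.033f;   /* contribution of a single pass */
        float f;
        int i;

        for (i = 0; i < 20; i++) {
            exact += pass;
            /* An 8-bit framebuffer rounds to multiples of 1/255 each pass. */
            f = channel / 255.0f + pass;
            if (f > 1.0f)
                f = 1.0f;
            channel = (unsigned char)(f * 255.0f + 0.5f);
        }
        printf("float: %.4f  8-bit: %.4f\n", exact, channel / 255.0f);
        return 0;
    }

The 8-bit result lands around 0.63 against an exact 0.66 - roughly eight
levels of drift from rounding alone, before blending and dithering make
things worse.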
From: Jeff B. <jt...@jt...> - 2000-09-26 05:00:29
|
I've seen mention of this done at UNC actually. Pretty neat, but I can't
imagine performance is too hot.

Jeff

On Mon, Sep 25, 2000 at 07:23:41PM -0400, Gareth Hughes wrote:
> I'd like to see full IEEE floats per channel - that would be fun :-)

--
Jeff Brubaker - jt...@jt... - http://jtb.dyndns.org/~jtb/
|
From: Nathan H. <na...@ma...> - 2000-09-26 00:17:31
|
On Mon, Sep 25, 2000 at 11:26:03AM -0700, Philip Brown wrote:
> [ Nathan Hand writes ]
> > > I mean, if you can produce a card that'll do 1600x1200x32 at 60fps,
> > > how long is it going to take game creators to create a game with
> > > enough detail to slow down the card? Not many of us play games with a
> > > 21" monitor that will do 1600x1200.
> >
> > People said the same thing about 320x200x256 games, and the same thing
> > about doom, and the same thing about quake, and now they're saying the
> > same thing about quake3. The lesson of this story is...
>
> The lesson of the story is: there IS a limit to the human eye, therefore
> there IS a limit on this too.

Sure, but the limit isn't 60fps. True motion blur involves rendering at
200-300fps and downsampling. Full screen anti-aliasing at 1600x1200 would
require rendering at 6400x4800 for a bare minimum 4x4 interpolation. We
haven't even begun to use advanced rendering techniques such as (the
easiest example to imagine) parallel mirrored surfaces. Techniques like
this require rendering the same scene 20-30x. This means 9000fps! And
heaven knows what game developers will be doing once 32 bit stencil buffers
are common. Can you imagine real-time water? The mind begins to boggle.
None of this is possible without cards 100x faster than today.

> After all, increasing color depth from 8 to 16 to 32 bits stopped at 32
> because that's all the eye can see.

We *need* 64 bit colour. You need the extra precision to avoid roundoff
errors. Games developers are already asking for this feature! Ideally a
future card will have 128 bit colour, but today's CPUs aren't up to it.

> Hopefully, more effort will then go into true 3d object acceleration and
> AI.

AI is uninteresting to 3D. But today's cards aren't anywhere near where
games developers want them to be. They may seem pretty amazing, but the
developers have huge wishlists for the future. Don't be concerned about
future hardware taxing the imagination of "how can we use all this power".
It's not going to be like that. If anything it will be "damn these cards
still aren't fast enough".
|
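Putting rough numbers on that wishlist (my own back-of-the-envelope
arithmetic, using the figures from the message above):

    /* Required fill rate for the scenario above:
     * 1600x1200, 4x4 supersampling, ~30 passes, ~300fps. */
    #include <stdio.h>

    int main(void)
    {
        double pixels  = 1600.0 * 1200.0; /* base resolution */
        double samples = 16.0;            /* 4x4 supersampling */
        double passes  = 30.0;            /* e.g. mirrored surfaces */
        double fps     = 300.0;           /* motion-blur frame rate */

        printf("%.2e pixels/second\n", pixels * samples * passes * fps);
        /* Prints 2.76e+11 - a few hundred times the fill rate of a
         * top-end consumer card in 2000. */
        return 0;
    }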
From: Allen A. <ak...@po...> - 2000-10-02 14:58:28
|
On Mon, Sep 25, 2000 at 11:26:03AM -0700, Philip Brown wrote:
| After all, increasing color depth from 8 to 16 to 32 bits stopped at 32
| because that's all the eye can see.

Lots of 12-bit-per-color-channel (48-bit RGBA) systems exist, and even
16-bit gray-scale systems are useful. Glossing over *many* details, the
reason for this is that the eye's response is logarithmic and a linear
brightness scale with 256 entries doesn't approximate the eye's response
curve very effectively. This is particularly apparent at the low end of the
brightness scale, where banding is obvious on 32-bit RGBA displays, not so
obvious on 48-bit RGBA displays. Gamma correction helps, but is not
sufficient to eliminate this problem; only using more color bits solves it.

So in short, there are circumstances where the eye can perceive more than 8
bits per color channel, and these cases are even commercially significant.

Allen
|
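One way to see Allen's point in numbers (an illustration of mine, not his
code): near black, a single step on a linear 8-bit scale is a large relative
jump in brightness, while a 12-bit scale keeps the steps much closer to the
roughly one-percent contrast difference the eye can distinguish:

    #include <stdio.h>

    int main(void)
    {
        const double levels8 = 255.0, levels12 = 4095.0;
        int i;

        /* Relative brightness jump from linear level i to i+1, near black,
         * for 8-bit codes and their 12-bit equivalents. */
        for (i = 1; i <= 4; i++) {
            printf("8-bit level %d -> %d: +%5.1f%%   12-bit: +%5.2f%%\n",
                   i, i + 1, 100.0 / i, 100.0 * (levels8 / levels12) / i);
        }
        return 0;
    }

The 8-bit steps near black are jumps of 25-100% in brightness, which is why
banding shows up exactly where Allen says it does.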
From: Mark M. <mmu...@va...> - 2000-10-02 15:59:35
|
> -----Original Message-----
> From: dri...@li... [mailto:dri...@li...] On Behalf Of Allen Akin
> Sent: Monday, October 02, 2000 8:58 AM
> To: Philip Brown
> Cc: dri...@li...
> Subject: Re: [Dri-devel] Who is working on Texture Compression (other
> than me?)
>
> Lots of 12-bit-per-color-channel (48-bit RGBA) systems exist, and even
> 16-bit gray-scale systems are useful. Glossing over *many* details, the
> reason for this is that the eye's response is logarithmic and a linear
> brightness scale with 256 entries doesn't approximate the eye's response
> curve very effectively. This is particularly apparent at the low end of
> the brightness scale, where banding is obvious on 32-bit RGBA displays,
> not so obvious on 48-bit RGBA displays. Gamma correction helps, but is
> not sufficient to eliminate this problem; only using more color bits
> solves it.
>
> So in short, there are circumstances where the eye can perceive more than
> 8 bits per color channel, and these cases are even commercially
> significant.
>
> Allen

Agreed, Allen - the 32 bit impasse is mostly economic. In the simulation
training industry we often had to settle for 10 bits because of the
practicality of commodity RAMDACs - 10 is considered a compromise in the
striving for realism in low light and sensor simulation scenarios. At least
16 bits internal was required, as well as a 24 bit or better gray scale
channel. Of course the color operations have to be accurate, including
being perspectively correct, and there were nasty corner cases to deal
with. Only recently have the game cards been improving in their internal
accuracy, but the cost in gates is huge. It doesn't seem to be too long ago
when most game cards were 16 bit. Oddly enough, the industry I have seen
pushing for color depth now is the broadcast industry - as I understand it
they are behind the push for the 64 bit pixel.

Mark
|
From: Allen A. <ak...@va...> - 2000-10-02 16:24:22
|
On Mon, Oct 02, 2000 at 09:58:13AM -0600, Mark Mueller wrote:
| ... Oddly enough, the industry I have seen pushing for color depth now is
| the broadcast industry - as I understand it they are behind the push for
| the 64 bit pixel.

While I was at SGI, the film/video people were big proponents of deeper
pixels. For film, the A/D and D/A conversions plus just a few compositing
operations were enough to make 12 bits per color channel mandatory, and
more than 12 bits desirable.

Allen
|
From: Brian P. <br...@va...> - 2000-09-23 17:13:48
|
Ian D Romanick wrote:
>
> > Who is working on texture compression in Mesa? I have an implementation
> > of the ARB extension (mostly done by Brian Paul) and an implementation
> > of the GL_3DFX_texture_compression_FXT1 extension. These are in the
> > tdfx-2-1-branch branch of the DRI tree.
>
> Is this extension going to be supported only on 3dfx hardware, or will
> there be a software fallback? It sure would be nice to make sure that
> one's code works right with FXT1 textures without having to go out and
> buy a Voodoo 4/5/6. The same could also be done with S3TC format
> textures, I suppose.
>
> If this is not work that is currently planned, could someone perhaps
> provide some guidance for others that might want to try and tackle this?

We're not doing a software implementation of texture compression. It would
be impractical.

-Brian
|