From: John K. <jk...@gm...> - 2006-03-30 16:19:47
|
Do these drivers do anything to support subpixel rendering of text or
screen images? Is any of that built into the hardware acceleration, or
is it done only at the operating system level?

I think on the Windows side, some of the Nvidia drivers do subpixel
work at the driver level. |
From: Brian P. <bri...@tu...> - 2006-03-30 16:39:47
|
John Kheit wrote:
> Do these drivers do anything to support subpixel rendering of text
> or screen images? Is any of that built into the hardware acceleration,
> or is it done only at the operating system level?
>
> I think on the Windows side, some of the Nvidia drivers do subpixel
> work at the driver level.

Can you be more specific?

If you're asking about line/triangle rasterization, I believe vertex
coordinates are snapped/truncated to some sub-pixel fraction (rather
than whole pixel coords) in all hardware.

For text, are you asking about some form of antialiasing?

Or, do you have multisampling in mind?

-Brian |
From: John K. <jk...@gm...> - 2006-03-30 16:51:22
|
Sorry Brian, I should have been more specific. I mean more the final
output onto a screen: using an LCD/CRT's individual RGB subpixels for
antialiasing (or some other form of screen output enhancement). It
seems a lot of the 3D stuff in the GPU already employs sub-pixel
coordinates, so it would be nice if the actual output to the screen
took advantage of that.

On 3/30/06, Brian Paul <bri...@tu...> wrote:
>
> John Kheit wrote:
> > Do these drivers do anything to support subpixel rendering of text
> > or screen images? Is any of that built into the hardware acceleration,
> > or is it done only at the operating system level?
> >
> > I think on the Windows side, some of the Nvidia drivers do subpixel
> > work at the driver level.
>
> Can you be more specific?
>
> If you're asking about line/triangle rasterization, I believe vertex
> coordinates are snapped/truncated to some sub-pixel fraction (rather
> than whole pixel coords) in all hardware.
>
> For text, are you asking about some form of antialiasing?
>
> Or, do you have multisampling in mind?
>
> -Brian |
From: Brian P. <bri...@tu...> - 2006-03-31 16:33:23
|
John Kheit wrote:
> Sorry Brian, I should have been more specific. I mean more the final
> output onto a screen: using an LCD/CRT's individual RGB subpixels for
> antialiasing (or some other form of screen output enhancement). It
> seems a lot of the 3D stuff in the GPU already employs sub-pixel
> coordinates, so it would be nice if the actual output to the screen
> took advantage of that.

AFAIK, nobody's hardware does that.

When that kind of antialiasing is done for text, I think it's the job
of the font rendering code to do so.

-Brian |
From: <jk...@gm...> - 2006-03-31 18:51:16
|
I think some of the cards use the GPUs for scaling video (and perhaps
other optimizations), kind of like the nice upscaling done by some DVD
players. Nvidia calls it PureVideo:

http://www.nvidia.com/page/purevideo.html

"And the high-precision subpixel processing enables videos to be scaled
to any size, so that even small videos look like they were recorded in
high-resolution."

I'm sure ATI has something similar? I would guess that this kind of
thing could also be used for other things sent to it?

On Mar 31, 2006, at 11:33 AM, Brian Paul wrote:
> John Kheit wrote:
>> Sorry Brian, I should have been more specific. I mean more the
>> final output onto a screen: using an LCD/CRT's individual RGB
>> subpixels for antialiasing (or some other form of screen output
>> enhancement). It seems a lot of the 3D stuff in the GPU already
>> employs sub-pixel coordinates, so it would be nice if the actual
>> output to the screen took advantage of that.
>
> AFAIK, nobody's hardware does that.
>
> When that kind of antialiasing is done for text, I think it's the
> job of the font rendering code to do so.
>
> -Brian

Best regards,
John Kheit
E-mail: mailto:jk...@op...
AOL Instant Messenger: John Kheit |
From: Philipp K. K. <pk...@sp...> - 2006-03-31 19:29:26
|
jk...@gm... wrote:
> "And the high-precision subpixel processing enables videos to be scaled
> to any size, so that even small videos look like they were recorded in
> high-resolution."

Trilinear texture filtering should do that. It's supported on any
graphics card these days. It's more a matter of whether the video
player application uses it. |
From: Philip A. <phi...@ka...> - 2006-03-31 20:49:02
|
On Fri, Mar 31, 2006 at 01:51:03PM -0500, jk...@gm... wrote:
> I think some of the cards use the GPUs for scaling video (and perhaps
> other optimizations), kind of like the nice upscaling done by some DVD
> players. Nvidia calls it PureVideo:
> http://www.nvidia.com/page/purevideo.html
>
> "And the high-precision subpixel processing enables videos to be scaled
> to any size, so that even small videos look like they were recorded in
> high-resolution."
>
> I'm sure ATI has something similar? I would guess that this kind of
> thing could also be used for other things sent to it?
>
> On Mar 31, 2006, at 11:33 AM, Brian Paul wrote:
> > John Kheit wrote:
> > > Sorry Brian, I should have been more specific. I mean more the
> > > final output onto a screen: using an LCD/CRT's individual RGB
> > > subpixels for antialiasing (or some other form of screen output
> > > enhancement). It seems a lot of the 3D stuff in the GPU already
> > > employs sub-pixel coordinates, so it would be nice if the actual
> > > output to the screen took advantage of that.
> >
> > AFAIK, nobody's hardware does that.
> >
> > When that kind of antialiasing is done for text, I think it's the
> > job of the font rendering code to do so.

Is the original author talking about Cleartype-style antialiasing? (ie
using the RGB subpixels to get more {usually horizontal} resolution in
text rendering.)

Sounds like something you could do with a pixel shader perhaps.
Straight alpha-blending with the RENDER extension is already
accelerated on most hardware supported by DRI, isn't it?

Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt |
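For readers unfamiliar with the ClearType-style technique Phil mentions: the glyph is rasterized at three times the horizontal resolution, and each group of three coverage samples maps onto the R, G, and B stripes of one LCD pixel. A minimal sketch of that mapping (real implementations also low-pass filter across neighbouring subpixels to limit color fringing, which is omitted here):

```c
/* Sketch: ClearType-style use of an LCD's RGB stripes.  cov3x holds
 * glyph coverage rasterized at 3x horizontal resolution; each triple
 * becomes per-channel coverage for one screen pixel.  Assumes an RGB
 * (not BGR) stripe order. */
void subpixel_coverage(const float *cov3x, int screen_w,
                       float *r_cov, float *g_cov, float *b_cov)
{
    for (int x = 0; x < screen_w; x++) {
        r_cov[x] = cov3x[3 * x + 0];  /* red stripe   */
        g_cov[x] = cov3x[3 * x + 1];  /* green stripe */
        b_cov[x] = cov3x[3 * x + 2];  /* blue stripe  */
    }
}
```

The result is a coverage value per color component rather than per pixel, which is exactly why ordinary single-alpha blending is not enough downstream.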
From: <jk...@gm...> - 2006-03-31 21:40:45
|
Yes. But not only for text; for video and just about anything blasted
onto the screen. I think the Nvidia technology I linked to might do
that for everything that hits the LCD. Basically, the hardware would
give you a resolution boost for anything that can keep partial-pixel
measurements internally. Do any of the Linux drivers support this type
of thing? Thanks.

On Mar 31, 2006, at 3:48 PM, Philip Armstrong wrote:
>
> Is the original author talking about Cleartype-style antialiasing? (ie
> using the RGB subpixels to get more {usually horizontal} resolution in
> text rendering.)
>
> Sounds like something you could do with a pixel shader perhaps.
> Straight alpha-blending with the RENDER extension is already
> accelerated on most hardware supported by DRI, isn't it?
>
> Phil
>
> --
> http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
>
> _______________________________________________
> Dri-devel mailing list
> Dri...@li...
> https://lists.sourceforge.net/lists/listinfo/dri-devel

Best regards,
John Kheit
E-mail: mailto:jk...@op...
AOL Instant Messenger: John Kheit |
From: Keith P. <ke...@ke...> - 2006-04-01 17:48:35
|
On Fri, 2006-03-31 at 09:33 -0700, Brian Paul wrote:

> AFAIK, nobody's hardware does that.
>
> When that kind of antialiasing is done for text, I think it's the job
> of the font rendering code to do so.

It's not the construction of the glyphs that's at issue here, I don't
think. The glyphs are drawn to the screen using a separate alpha channel
for each component in the pixel, an operation which isn't directly
supported by the GL API at present. I don't know what we'd need in the
hardware for this to be efficient though; I believe it is possible to do
it today using three passes for each string, which seems horrendous
until you realize how slow it will be to do the same thing with the CPU.

--
kei...@in... |
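The operation Keith describes is OVER compositing where every color component carries its own alpha, which a single GL blend stage (one alpha per fragment) cannot express directly. A sketch of the per-channel math that each of his hypothetical three passes would implement for its one channel:

```c
/* Sketch of OVER compositing with component alpha: each channel has
 * its own mask value, so the blend must run independently per channel.
 * Standard GL blending only supplies one source alpha per fragment,
 * hence Keith's three-pass suggestion. */
float over_component(float src, float mask, float dst)
{
    /* dst' = src*mask + dst*(1 - mask) */
    return src * mask + dst * (1.0f - mask);
}

void over_rgb(const float src[3], const float mask[3], float dst[3])
{
    for (int i = 0; i < 3; i++)
        dst[i] = over_component(src[i], mask[i], dst[i]);
}
```

In a GL implementation, one plausible arrangement (an assumption, not a description of any shipped code) is to run three passes, each enabling a single channel with `glColorMask` and feeding that channel's mask value in as the source alpha.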
From: Nicolai H. <pre...@gm...> - 2006-04-01 18:09:02
|
On Friday 31 March 2006 19:49, Keith Packard wrote:
> On Fri, 2006-03-31 at 09:33 -0700, Brian Paul wrote:
>
> > AFAIK, nobody's hardware does that.
> >
> > When that kind of antialiasing is done for text, I think it's the job
> > of the font rendering code to do so.
>
> It's not the construction of the glyphs that's at issue here, I don't
> think. The glyphs are drawn to the screen using a separate alpha channel
> for each component in the pixel, an operation which isn't directly
> supported by the GL API at present. I don't know what we'd need in the
> hardware for this to be efficient though; I believe it is possible to do
> it today using three passes for each string, which seems horrendous
> until you realize how slow it will be to do the same thing with the CPU.

Surely you could just use an RGB texture instead of an ALPHA texture?
Then it's just a matter of setting the appropriate texture environments
and blending modes.

cu,
Nicolai |
From: David R. <da...@no...> - 2006-04-01 21:35:11
|
On Sat, 2006-04-01 at 20:08 +0200, Nicolai Haehnle wrote:
> On Friday 31 March 2006 19:49, Keith Packard wrote:
> > On Fri, 2006-03-31 at 09:33 -0700, Brian Paul wrote:
> >
> > > AFAIK, nobody's hardware does that.
> > >
> > > When that kind of antialiasing is done for text, I think it's the job
> > > of the font rendering code to do so.
> >
> > It's not the construction of the glyphs that's at issue here, I don't
> > think. The glyphs are drawn to the screen using a separate alpha channel
> > for each component in the pixel, an operation which isn't directly
> > supported by the GL API at present. I don't know what we'd need in the
> > hardware for this to be efficient though; I believe it is possible to do
> > it today using three passes for each string, which seems horrendous
> > until you realize how slow it will be to do the same thing with the CPU.
>
> Surely you could just use an RGB texture instead of an ALPHA texture?
> Then it's just a matter of setting the appropriate texture environments
> and blending modes.

Not really, as you can only pass one alpha value to the blending stage.
In the general case you need to do it in multiple passes. I've got code
in glitz for doing this in three passes.

For the case when we're using a solid source color and the OVER operator
(I think that's 99.9% of all text rendering in X today) we can actually
pass all alpha channels to the blending stage and achieve per-component
alpha blending with the solid source color in one pass by using GL blend
color. I've got code in glitz for doing this as well.

-David |
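David's one-pass trick for a solid text color can be sketched as follows. The per-channel glyph mask is rendered as the fragment color, the solid text color goes into the GL constant blend color via `glBlendColor`, and the blend is set to `glBlendFunc(GL_CONSTANT_COLOR, GL_ONE_MINUS_SRC_COLOR)` (this particular blend setup is my reading of the technique, not a quote of the glitz source). Per channel the hardware then computes:

```c
/* Sketch: single-pass component-alpha OVER with a solid text color.
 * With the mask in the fragment color and the text color in the GL
 * constant blend color, GL_CONSTANT_COLOR / GL_ONE_MINUS_SRC_COLOR
 * blending yields, independently per channel:
 *   dst' = text*mask + dst*(1 - mask)                              */
void blend_constant_color(const float text[3], const float mask[3],
                          float dst[3])
{
    for (int i = 0; i < 3; i++)
        dst[i] = text[i] * mask[i] + dst[i] * (1.0f - mask[i]);
}
```

Note this only works because the source color is constant; with a textured or gradient source, the mask and the color would both need to come from the fragment, and you are back to multiple passes.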