From: Allen A. <ak...@po...> - 2000-10-03 22:23:00
A little while ago I checked in a few minor changes, mostly related to automating tests for the presence of extensions in tbasic.{cpp,h} and glutils.{cpp,h}. However, I haven't received any email confirming the commits, so something at SourceForge may be awry. Thought I'd send you all a warning just in case.

Allen
From: Allen A. <ak...@po...> - 2000-09-30 00:27:16
On Fri, Sep 29, 2000 at 01:45:30PM -0600, Brian Paul wrote:
| I've checked in the code for a new Glean test, texCombine, which
| exercises the GL_EXT_texture_env_combine extension. ...

Yowza. That's one hairy testing problem.

| 1. There are WAY too many combinations to test them all. ...

What you've done looks like the only practical solution for a black-box test. If any of the hardware vendors are willing to talk about the internals of their implementations so that we could build a white-box test, then we could cut down on the combinatorial explosion. (Hint, hint.)

| 2. I'm using a fixed epsilon (1/32) to compare the rendered results
| to the expected results. I should probably use Allen's statistical
| utilities to evaluate the results.

The statistics approach pays off if you're testing a lot of different operand values with each combination of control settings -- for example, the way the blend function test runs a huge number of random colors through each combination of blend factors. That helps you catch data-dependent errors in the implementation, and tells apps how much accuracy they can depend on.

When you have a single set of operands for each control setting, the statistics would summarize results with the same operands used in different arithmetic operations. This does have some value, but I'd be wary of it, because it's sensitive to things over which you don't have much control (e.g., whether some control settings go through a high-precision floating-point software path rather than a low-precision fixed-point hardware path).

So I'd recommend one of these two approaches:

1. Pick an epsilon, probably based on color depth, because apps tend to assume that they get higher precision in deeper rendering targets (even though that's bogus in many cases).

2. Run a whole bunch of randomly-generated operands through each control setting, compute the statistics for the difference between actual and expected values, and express the result as the number of "significant" bits that make it through the computation. If the worst case is zero bits, then you've got a definite failure. Otherwise, the number of significant bits tells you something about the quality of the implementation.

In the latter case you'd probably want to store the statistics, rather than just pass/fail, in the database; that way the results from two different implementations can be compared. ("Vendor A's texture_env_combine is good to a full 8 bits, but vendor B's is only good to 6.")

| 3. The test requires OpenGL 1.2.1 (1.2 + ARB_multitexture) and I'm
| assuming that GL_EXT_texture_env_combine is defined by the GL
| headers. I should probably lift the latter dependency by defining
| the extension enums inside ttexcombine.cpp if gl.h or glext.h
| doesn't define them. OK, Allen?

Sounds like a good solution.

| 4. If the extension doesn't exist, the test returns a PASS result.

I probably would have used a NOTE, since that way you can grep the log to discover that something unusual has happened. I agree that FAIL isn't justified when an optional feature is missing.

| Allen, it might be nice if you could list extension dependencies
| in the test class constructor so that Glean would skip tests which
| depend on non-existent extensions.

I guess I'll add that, though I think it'll be a little weird-looking on some systems. (A window and rendering context will have to be created for every candidate drawing surface configuration so that the extension can be queried. There'll probably be a lot of flickering.)

| 5. The test actually runs 6 subtests: single texture GL_REPLACE,
| GL_ADD, GL_SIGNED_ADD, GL_MODULATE, GL_INTERPOLATE and a
| multitexture subtest. Perhaps I should report 6 pass/fail results
| instead of just one overall pass/fail. Can I do that when I'm
| deriving from the class BasicTest, Allen?

Nope; you'll need to derive from Test instead. (That's the problem I mentioned a week or two ago; once you change the result structure, you can't derive from BasicTest anymore.)

If there were a lot of calls for it, we could build a class that yielded, say, a vector of pass/fail results or a vector of floating-point values. I've tended to avoid such things, even though they would help cut down on code duplication, because I've tried to put a good bit of unique analysis into each test. In theory that could help diagnose bugs or make more meaningful comparisons between implementations.

Allen
From: Brian P. <br...@va...> - 2000-09-29 19:46:24
[resending this message]

I've checked in the code for a new Glean test, texCombine, which exercises the GL_EXT_texture_env_combine extension. I've been working on it over the past few nights and it's in pretty good shape now, but there are still a few issues:

1. There are WAY too many combinations to test them all. In fact, for N texture units you'd need to run 117,964,800 ^ N tests to hit every state combination! I've pruned down the test space but it might still be a bit large. With software rendering it takes about 43 seconds (on a PII/466) to test each visual.

2. I'm using a fixed epsilon (1/32) to compare the rendered results to the expected results. I should probably use Allen's statistical utilities to evaluate the results.

3. The test requires OpenGL 1.2.1 (1.2 + ARB_multitexture) and I'm assuming that GL_EXT_texture_env_combine is defined by the GL headers. I should probably lift the latter dependency by defining the extension enums inside ttexcombine.cpp if gl.h or glext.h doesn't define them. OK, Allen?

4. If the extension doesn't exist, the test returns a PASS result. Allen, it might be nice if you could list extension dependencies in the test class constructor so that Glean would skip tests which depend on non-existent extensions.

5. The test actually runs 6 subtests: single texture GL_REPLACE, GL_ADD, GL_SIGNED_ADD, GL_MODULATE, GL_INTERPOLATE and a multitexture subtest. Perhaps I should report 6 pass/fail results instead of just one overall pass/fail. Can I do that when I'm deriving from the class BasicTest, Allen?

-Brian
From: Brian P. <br...@va...> - 2000-09-29 17:21:18
Mail I sent yesterday to this list hasn't come through yet. This is a test.
From: Brian P. <br...@va...> - 2000-09-28 16:37:03
I've checked in the code for a new Glean test, texCombine, which exercises the GL_EXT_texture_env_combine extension. I've been working on it over the past few nights and it's in pretty good shape now, but there are still a few issues:

1. There are WAY too many combinations to test them all. In fact, for N texture units you'd need to run 117,964,800 ^ N tests to hit every state combination! I've pruned down the test space but it might still be a bit large. With software rendering it takes about 43 seconds (on a PII/466) to test each visual.

2. I'm using a fixed epsilon (1/32) to compare the rendered results to the expected results. I should probably use Allen's statistical utilities to evaluate the results.

3. The test requires OpenGL 1.2.1 (1.2 + ARB_multitexture) and I'm assuming that GL_EXT_texture_env_combine is defined by the GL headers. I should probably lift the latter dependency by defining the extension enums inside ttexcombine.cpp if gl.h or glext.h doesn't define them. OK, Allen?

4. If the extension doesn't exist, the test returns a PASS result. Allen, it might be nice if you could list extension dependencies in the test class constructor so that Glean would skip tests which depend on non-existent extensions.

5. The test actually runs 6 subtests: single texture GL_REPLACE, GL_ADD, GL_SIGNED_ADD, GL_MODULATE, GL_INTERPOLATE and a multitexture subtest. Perhaps I should report 6 pass/fail results instead of just one overall pass/fail. Can I do that when I'm deriving from the class BasicTest, Allen?

-Brian
From: Allen A. <ak...@po...> - 2000-09-13 22:39:12
On Wed, Sep 13, 2000 at 01:25:04PM -0600, Brian Paul wrote:
| It seems to me that BasicTest will be suitable for lots of rendering
| tests.

That would be good.

| > In the RGB test you use 0.5 as the pass/fail threshold.
|
| I was going to use your image registration and statistics classes
| but was a bit confused about how they're supposed to be used.

The registration stuff is probably overkill for this. It's mostly useful when you've got a complicated image that you can't analyze very well with a simple algorithm.

| > if (ErrorBits(fabs(expected - actual), r.config->g) >= 1.0)
| >     passed = false;
|
| Does ErrorBits take dithering into account? ...

No; it just takes an error value (the absolute difference of two normalized values, such as you can get by subtracting two GL_FLOAT color channel values) and the number of significant bits, and figures out how many of those bits are required to represent the error. Once you've got that, you make a judgement call as to whether the error is tolerable or not.

For most of the tests I've written, I allow the actual and expected values to differ by at most one least-significant bit; that's pretty strict, but there are apps out there that depend on accurate pixel arithmetic, so it's good to know whether an OpenGL implementation can support them.

As far as dithering goes, I usually just turn it off, which causes color values to be clamped and color index values to be rounded. But if you wanted to leave it enabled for this test, there's at least one easy way to handle it: the OpenGL spec says that a dithered color can differ from the actual color by at most one least-significant bit. So if you use ErrorBits() to compute the tested error, just increase the tolerance by 1.0 to take the worst-case dither into account.

You could read back an array of pixels and compute their average value, then compare the average to the expected value. That might allow you to eliminate the extra LSB of slop. But it also might not work on oddball machines that store gamma-corrected values in the framebuffer. I don't think there are many of those around these days, though.

Allen
From: Brian P. <br...@va...> - 2000-09-13 19:24:34
Allen Akin wrote:
> On Wed, Sep 13, 2000 at 11:10:27AM -0600, Brian Paul wrote:
> | I've added a simple new test to Glean called maskedClear. It exercises
> | glColorMask, glIndexMask, glDepthMask and glStencilMask with glClear.
> |
> | I believe I've followed the guidelines for new tests but would appreciate
> | another set of eyes taking a look at it.
>
> Looks great! Extremely useful. It's especially nice to finally see a
> test that gives us some coverage in color-index mode.
>
> I don't have any substantial suggestions; just comments and nits.
>
> I've limited myself to 8.3 filenames in the source directories, but
> that's probably unjustified paranoia these days. Let's stick with the
> longer names (tmaskedclear.cpp) and see if anyone complains.
>
> It's interesting that you chose to derive MaskedClearTest from
> BasicTest; so far MaskedClearTest is the only test class to do so.
> It's a good idea as long as the only result you need is pass/fail.
> (Once you change the Result class, you have to derive from Test
> rather than BasicTest, and duplicate some code. This is really a
> design flaw on my part. I've thought about building a template to
> reduce the code duplication, but some tests actually need to make
> subtle changes to the mostly-shared code, so I'm waiting for more
> inspiration to strike.)

It seems to me that BasicTest will be suitable for lots of rendering tests.

> In the RGB test you use 0.5 as the pass/fail threshold.

I was going to use your image registration and statistics classes but was a bit confused about how they're supposed to be used.

> That yields correct results on correct implementations of OpenGL
> (even for 1-bit-deep color channels), but might allow some
> questionable implementations to slip through. Other reasonable
> approaches are to use zero/nonzero (as you did for the depth buffer),
> or to use a "less than one LSB" threshold. The latter could be
> handled with code like this (for the green channel):
>
>     if (ErrorBits(fabs(expected - actual), r.config->g) >= 1.0)
>         passed = false;

Does ErrorBits take dithering into account? Clearing the color buffer is subject to dithering. I'm not sure that Mesa's 16bpp dithering is 100% up to spec. The regular GL conformance tests report a lot of failures in that mode for me.

> Oh, yeah, I just noticed that you use GL_TRUE and GL_FALSE for
> the result variable; that'll work, or you can just use the C++
> Booleans "true" and "false".

Doh! I'm in the habit of using GL datatypes all the time.

-Brian
From: Allen A. <ak...@po...> - 2000-09-13 18:40:49
On Wed, Sep 13, 2000 at 11:10:27AM -0600, Brian Paul wrote:
| I've added a simple new test to Glean called maskedClear. It exercises
| glColorMask, glIndexMask, glDepthMask and glStencilMask with glClear.
|
| I believe I've followed the guidelines for new tests but would appreciate
| another set of eyes taking a look at it.

Looks great! Extremely useful. It's especially nice to finally see a test that gives us some coverage in color-index mode.

I don't have any substantial suggestions; just comments and nits.

I've limited myself to 8.3 filenames in the source directories, but that's probably unjustified paranoia these days. Let's stick with the longer names (tmaskedclear.cpp) and see if anyone complains.

It's interesting that you chose to derive MaskedClearTest from BasicTest; so far MaskedClearTest is the only test class to do so. It's a good idea as long as the only result you need is pass/fail. (Once you change the Result class, you have to derive from Test rather than BasicTest, and duplicate some code. This is really a design flaw on my part. I've thought about building a template to reduce the code duplication, but some tests actually need to make subtle changes to the mostly-shared code, so I'm waiting for more inspiration to strike.)

In the RGB test you use 0.5 as the pass/fail threshold. That yields correct results on correct implementations of OpenGL (even for 1-bit-deep color channels), but might allow some questionable implementations to slip through. Other reasonable approaches are to use zero/nonzero (as you did for the depth buffer), or to use a "less than one LSB" threshold. The latter could be handled with code like this (for the green channel):

    if (ErrorBits(fabs(expected - actual), r.config->g) >= 1.0)
        passed = false;

Oh, yeah, I just noticed that you use GL_TRUE and GL_FALSE for the result variable; that'll work, or you can just use the C++ Booleans "true" and "false".

Thanks again for the contribution,
Allen
From: Brian P. <br...@va...> - 2000-09-13 17:09:54
I've added a simple new test to Glean called maskedClear. It exercises glColorMask, glIndexMask, glDepthMask and glStencilMask with glClear.

I believe I've followed the guidelines for new tests but would appreciate another set of eyes taking a look at it.

-Brian
From: Allen A. <ak...@po...> - 2000-08-11 21:47:15
On Fri, Aug 11, 2000 at 01:55:35PM -0700, Brian Sharp wrote:
| Just checking in to see how the test list work is going; I did an update a
| couple days ago and didn't get anything new...

Did you do the equivalent of "cvs update -d"? The document's in a new directory, and the option is required if you want to pick up new directories.

Rik posted a copy to this list a few days ago. I'll forward it to you.

Allen
From: Brian S. <br...@gl...> - 2000-08-11 20:52:30
Heya,

Just checking in to see how the test list work is going; I did an update a couple days ago and didn't get anything new... if any progress has been made, it'd be great if whoever has the most up-to-date version could check it in, so I could read it over and contribute feedback and other test ideas.

Later!
-Brian
From: Allen A. <ak...@po...> - 2000-08-08 14:38:19
Just an FYI - we've created the gle...@li... mailing list. Notice of checkins to the glean CVS tree will be posted to that list, so you can track them if you wish. To subscribe, visit

http://lists.sourceforge.net/mailman/listinfo/glean-patches

Allen
From: Rik F. <fa...@va...> - 2000-08-08 09:18:18
Below is the HTML version of a preliminary draft of:

    glean: An OpenGL Test and Benchmarking Suite

This paper discusses potential glean users and the sorts of tests they'd find helpful. The paper is a work in progress, and the most up-to-date version will always be available in the doc/sgml directory of the glean CVS tree.
From: Allen A. <ak...@po...> - 2000-08-05 01:08:11
Since we're going to build a real results repository at GLSetup, I don't see any particular reason to keep the informal "results" directory in the glean distribution. It's out-of-date, anyway. Anyone object to deleting it from the CVS tree?

Allen
From: Brian S. <br...@gl...> - 2000-08-04 17:14:31
At 11:18 PM 8/3/00 -0400, Gareth Hughes wrote:
>Allen Akin wrote:
> >
> > Welcome, Brian!
>
>Hear, hear! It's great to be working with you, Brian.

And likewise! Is everyone here on glean-dev? (If so, you'll get two copies of this email, but better safe than sorry.)

Anyway, it's great to be working on this stuff; so far my thoughts are to work on test design (i.e. getting a big list of comprehensive, orthogonal tests) for the next month or so and then actually do implementation after that. The main reason for that is that I'm in Oakland working directly with Chris (and in close proximity to you guys) until mid-September, after which I'm moving back to New Hampshire until June (ah, college...). So Allen says a list-o-tests is in the works, which is great.

I've got a substantial amount of time to work on this stuff (my only other job is the part-time contracting for 3dfx and my ARB working group involvement) so hopefully we can make some serious progress!

-Brian
From: Gareth H. <ga...@va...> - 2000-08-04 15:25:36
Rik, how's the proposal stuff going? Do you need any more info from me? I can't remember if I've sent you all the stuff you asked for.

-- Gareth
From: Allen A. <ak...@po...> - 2000-06-15 21:27:24
In the top level of the glean source tree on SourceForge I've checked in a file named JOBS containing suggestions for new tests and infrastructure changes. Everyone please feel free to implement the suggestions found there or add new ones.

Thanks!
Allen
From: Brian P. <br...@pr...> - 2000-06-15 17:31:19
Allen Akin wrote:
> On Thu, Jun 15, 2000 at 12:35:11PM +1000, Leath Muller wrote:
> | Can I just ask what type of tests are sought? Mostly performance or
> | conformance? ...

I've got a few ideas for conformance tests I'd like:

1. Test glDrawBuffer, especially the GL_FRONT_AND_BACK and GL_NONE options with glDrawPixels, points, lines, triangles.

2. Test glColorMask on front buffer and back buffer, with glDrawPixels, points, lines, triangles.

3. Test glPolygonMode in all permutations. Perhaps also combine this with glCullFace.

4. Test two-sided lighting, also with glPolygonMode permutations.

5. Test glTexEnv modes. For extra credit, test with multitexture.

-Brian
From: Allen A. <ak...@po...> - 2000-06-15 17:11:12
On Thu, Jun 15, 2000 at 12:35:11PM +1000, Leath Muller wrote:
| Can I just ask what type of tests are sought? Mostly performance or
| conformance? ...

I'm not getting a lot of feedback from potential users, so this is mostly a guess. But correctness seems to matter more than performance. John Carmack has complained about drivers being incomplete and/or incorrect, and quite a few bugs have been mentioned on opengl-gamedev. I think I ought to concentrate for a while on problems that have actually stopped apps, since if there are problems elsewhere, either people haven't run into them or they've found workarounds.

| ...I notice the current tests for conformance check the
| accuracy from readback, but I really can't do that sort of stuff as
| I don't have source conformance information to compare to (other than
| what would be available in glean) so I was going to rely on performance
| to ensure that all went Ok...

Correctness tests are a lot harder to write than performance tests. I've been thinking about biting the bullet and writing a reference pipeline to provide "golden images," but (a) that's a huge task, and (b) it still has to accommodate substantial variations in hardware -- for example, some devices have perspective-correct color interpolation, and others don't, but both approaches are acceptable.

The current correctness tests try to be clever about sanity-checking the resulting images without actually requiring a pixel-by-pixel match. For example, the blending tests use image registration to try and find the closest match between the test image and the target image. They also summarize results statistically, so you can tell whether things are OK on average. The vertex-specification tests code unique IDs into color values for each of tens of thousands of triangles, and check that all the IDs are actually present in the final image. Some tests draw things two different ways that should be equivalent in practice (e.g. triangles vs. tristrips) and compare the resulting images. Just one test that I can think of (the 2D drawing test) really checks carefully for pixel exactness.

It would be nice to have some tests for popular new extensions that might not be well-covered in the standard OpenGL conformance tests. Multitexturing, for example.

Allen
From: <Lea...@en...> - 2000-06-15 02:35:46
> I can make one for you (it's pretty easy), so let me know if you want it.

No, don't do that if it's out of your way...

> But if you want to check in your changes without having to send them
> to me first (and I tend to be a bottleneck these days), you'll
> eventually need to get CVS working.

I will boot into Linux tonight and grab the source tree from there... I don't have the CVS stuff set up under Windows, that's all... I might look at getting WinCVS tonight again (I lost it... :)

Can I just ask what type of tests are sought? Mostly performance or conformance? I notice the current tests for conformance check the accuracy from readback, but I really can't do that sort of stuff as I don't have source conformance information to compare to (other than what would be available in glean), so I was going to rely on performance to ensure that all went OK...

Leathal.
From: Allen A. <ak...@po...> - 2000-06-15 02:16:27
On Thu, Jun 15, 2000 at 12:11:11PM +1000, Leath Muller wrote:
| > I've added several tests in the past few weeks, but haven't packaged
| > up a new release, so they're only available if you pick up the current
| > CVS snapshot.
|
| Ergh -- I like the zipped up archive... ;)

I can make one for you (it's pretty easy), so let me know if you want it. But if you want to check in your changes without having to send them to me first (and I tend to be a bottleneck these days), you'll eventually need to get CVS working.

Allen
From: <Lea...@en...> - 2000-06-15 02:11:37
> I've talked to Chris Hecker about setting one up. He's busy with
> other things at the moment, but the more critical issue seems to be
> getting additional tests into glean. It needs a larger test base for
> the repository to be really useful.

I've downloaded the source and am having a look at it to be able to create more tests -- I should be able to roll some out over the next few weeks/months hopefully.

> I've added several tests in the past few weeks, but haven't packaged
> up a new release, so they're only available if you pick up the current
> CVS snapshot.

Ergh -- I like the zipped up archive... ;)

Leathal.
From: Allen A. <ak...@po...> - 2000-06-14 01:58:10
On Wed, Jun 14, 2000 at 09:51:38AM +1000, Leath Muller wrote:
| Is there a repository of Glean results as yet anywhere? I notice
| the glean.sourceforge.net page states that one should be set up,
| but no reference is made after that...

I've talked to Chris Hecker about setting one up. He's busy with other things at the moment, but the more critical issue seems to be getting additional tests into glean. It needs a larger test base for the repository to be really useful.

I've added several tests in the past few weeks, but haven't packaged up a new release, so they're only available if you pick up the current CVS snapshot.

| Many people on this list? :)

15.

Allen
From: <Lea...@en...> - 2000-06-13 23:52:04
Is there a repository of Glean results as yet anywhere? I notice the glean.sourceforge.net page states that one should be set up, but no reference is made after that...

Many people on this list? :)

Leathal.
From: Adam H. <ad...@ne...> - 2000-01-25 18:27:46
To get around the image.h problem I had to modify the standard Be headers. I changed

    #include <image.h>

to

    #include <kernel/image.h>

in the file /boot/develop/headers/be/support/Archivable.h

I'll work up and check in a README.BeOS that explains the current caveats for BeOS.

--
Adam Haberlach             | "Be a realist. The glass is twice
ad...@ne...                | as large as it needs to be."
http://www.newsnipple.com/ |   --Greg Kaiser