From: Gareth H. <ga...@va...> - 2000-12-30 17:15:17
Just wanted to say that the work Rik did to factor out a lot of the common code has meant that writing new tests (and indeed learning how to write tests) is significantly easier. I've written a simple test of the GL scissor which I'll commit in a day or two, and it was a breeze!

-- Gareth
From: Allen A. <ak...@po...> - 2000-12-30 16:42:38
On Sat, Dec 30, 2000 at 05:10:06AM -0500, Rik Faith wrote:
| It should have some tweaking if anyone cares about how public: and
| private: are indented...

No strong opinions here.

Allen
From: Rik F. <fa...@va...> - 2000-12-30 10:10:11
On Sat 30 Dec 2000 16:11:31 +1100, Gareth Hughes <ga...@va...> wrote:
> I'm sure it's a case of RTFM, but is the glean Emacs mode defined in any
> of the glean docs/READMEs? Rik, could you forward the Lisp code? I'm
> assuming it's just something like linux-c.

I thought it was better to request a "glean" mode instead of relying on the linux-c mode. That way, everyone can configure their own mode for glean. This is what I use:

(defun glean-mode ()
  "C mode with adjusted defaults for use with GLEAN."
  (interactive)
  (c-mode)
  (c-set-style "K&R")
  (setq c-basic-offset 8))

It should have some tweaking if anyone cares about how public: and private: are indented...
From: Gareth H. <ga...@va...> - 2000-12-30 05:16:08
I'm sure it's a case of RTFM, but is the glean Emacs mode defined in any of the glean docs/READMEs? Rik, could you forward the Lisp code? I'm assuming it's just something like linux-c.

-- Gareth
From: Allen A. <ak...@po...> - 2000-12-25 00:16:44
On Wed, Dec 20, 2000 at 04:48:00PM -0500, Rik Faith wrote:
| I've made changes on the rik-0-1-branch that I think simplify writing
| tests.

This looks superb; I'm looking forward to trying it out. Thanks!

Allen
From: Allen A. <ak...@po...> - 2000-12-24 04:38:12
My network connection is up for only an hour or two a day, at unpredictable times, so I'm having trouble responding to anything in a timely fashion. I've signed up for new service starting 1/2/2001, so I'm hoping things will be fixed then.

Allen
From: Allen A. <ak...@po...> - 2000-12-24 04:36:14
On Fri, Dec 22, 2000 at 05:56:58PM -0500, Brian Sharp wrote:
| .... As far as glVertex-style rendering, I suspect it'll become even
| more important as ISVs use non-polygonal dynamic representations that can
| be converted to polygonal representations iteratively (like bezier patches
| or whatnot) as it allows for far better pipelining between the CPU
| generating the polygons and the graphics card rendering them. ...

That sounds very plausible to me.

| ... I'd probably just validate the data once and store a
| pointer and use that...

Something like that should work. I could put it on my list of back-burner projects; you've probably got enough on your plate for now.

Allen
From: Brian S. <br...@gl...> - 2000-12-22 22:55:39
At 10:59 AM 12/21/00 -0800, you wrote:
> geomrend looks pretty nice. I did want to raise one warning flag,
> though. As GLVERTEX_MODE rendering is currently handled, a lot of
> cycles are spent checking the data format, fetching data, etc. This
> isn't a problem for correctness tests, but it introduces enough
> overhead that we should be wary of using it for performance tests.

Yes. I wrote geomrend just to work for now; it definitely isn't optimal for performance. Good point on watching out for using it in performance tests.

As far as glVertex-style rendering, I suspect it'll become even more important as ISVs use non-polygonal dynamic representations that can be converted to polygonal representations iteratively (like bezier patches or whatnot), as it allows for far better pipelining between the CPU generating the polygons and the graphics card rendering them. So, yes, it's worth fixing up. I'd probably just validate the data once, store a pointer, and use that; as for going as far as using Duff's Device, well... Duff's Device still scares me. ;-)

> About geomutil: In the new sphere-generation utilities, don't worry
> about returning the address of the first element of a vector to
> represent the start address of an array. I hear ANSI is going to
> standardize that as part of the required semantics for vectors.

Good to hear. Given the STL requirements on running times, the only way for a vector not to store its data in a contiguous array like that is if it does something gratuitous or stupid, like storing the data backwards or arbitrarily padding it.

.b
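[Note: the &v[0] idiom discussed above looks like the following in practice. This is a minimal sketch; the helper drawVertices and the placeholder data are invented for illustration. The contiguity guarantee mentioned in the thread was indeed adopted (in C++03; C++11 later added v.data()).]

#include <vector>
#include <GL/gl.h>

// Hypothetical helper: submit tightly packed XYZ vertices to GL.
static void drawVertices(const GLfloat* verts, GLsizei count)
{
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glEnableClientState(GL_VERTEX_ARRAY);
        glDrawArrays(GL_TRIANGLES, 0, count);
        glDisableClientState(GL_VERTEX_ARRAY);
}

void example()
{
        std::vector<GLfloat> v(9, 0.0f);        // placeholder: 3 vertices

        // &v[0] is the start of one contiguous array -- the semantics
        // the thread says ANSI planned to require for vector.
        if (!v.empty())
                drawVertices(&v[0], static_cast<GLsizei>(v.size() / 3));
}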
From: Allen A. <ak...@po...> - 2000-12-22 21:30:44
My network is back up (for who knows how long), so I'm finally grabbing recent updates and taking a look at them.

geomrend looks pretty nice. I did want to raise one warning flag, though. As GLVERTEX_MODE rendering is currently handled, a lot of cycles are spent checking the data format, fetching data, etc. This isn't a problem for correctness tests, but it introduces enough overhead that we should be wary of using it for performance tests.

At SGI we used to see a lot of apps with poor rendering performance because the inner loops that delivered data via glVertex et al. burned too many CPU cycles on things other than shoveling the data to the graphics card. That tended to give the glVertex-style commands a bad name. However, they can have some major advantages (particularly with respect to cache coherency) if a developer needs to optimize application data structures for purposes other than rendering. That's why it's worth using techniques like Duff's device (illustrated in tvtxperf.cpp) when you really want to measure performance of glVertex-style data transfer.

We can retrofit some faster rendering techniques for GLVERTEX_MODE without changing the class interface, so I don't think we need to take any special action at the moment. We just need to be aware of the issue.

About geomutil: In the new sphere-generation utilities, don't worry about returning the address of the first element of a vector to represent the start address of an array. I hear ANSI is going to standardize that as part of the required semantics for vectors.

Allen
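[Note: for readers unfamiliar with the technique Allen mentions: Duff's device unrolls a loop while still handling counts that aren't a multiple of the unroll factor, by nesting the loop inside a switch. The sketch below shows only the general idiom applied to glVertex-style submission; it is not the actual code in tvtxperf.cpp, and the data layout is assumed.]

#include <GL/gl.h>

// Send 'count' XYZ vertices (count > 0), unrolled four at a time.
// Must be called between glBegin() and glEnd().
static void sendVertices(const GLfloat* verts, int count)
{
        const GLfloat* p = verts;
        int n = (count + 3) / 4;                 // loop iterations
        switch (count % 4) {                     // jump into the loop body
        case 0: do {    glVertex3fv(p); p += 3;  // each case falls through
        case 3:         glVertex3fv(p); p += 3;
        case 2:         glVertex3fv(p); p += 3;
        case 1:         glVertex3fv(p); p += 3;
                } while (--n > 0);
        }
}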
From: Allen A. <ak...@po...> - 2000-12-21 18:37:35
On Thu, Dec 21, 2000 at 09:09:06AM -0500, Brian Sharp wrote:
| At 08:30 AM 12/21/00 -0500, you wrote:
| > Perhaps I should put in an example where all messages are logged to a
| > results buffer and sent to both the results file (in runOne) and to
| > env->log (in logOne)? Would that be helpful?
|
| That would be cool. Also, there's a semantic issue I don't quite
| understand as to when you want to use env->log and when you want to
| record things in the results file.

I think it's only marginally useful to store the env->log messages in the results file. It would allow us to reconstruct a log if the original one had been lost, but it doesn't really make it easier to compare two results databases.

I'm convinced that as the number of tests grows, it'll become much more important to compare results of two runs than to examine results for a single run. That's how we'll determine which features change from release to release, which features don't work reliably across all drivers, etc.

Allen
From: Allen A. <ak...@po...> - 2000-12-21 17:02:28
On Wed, Dec 20, 2000 at 12:10:15PM -0500, Rik Faith wrote:
| Is anyone seeing the paths test failing when run as part of a complete
| test suite (but not failing when run individually)?
Yes, I am. When run as part of the complete suite, I get this failure:
paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 34
Stencil Test should have had no effect but actually modified the fragment
When run alone, the test passes.
Allen
From: Rik F. <fa...@al...> - 2000-12-21 15:42:29
On Thu 21 Dec 2000 09:09:06 -0500, Brian Sharp <br...@gl...> wrote:
> At 08:30 AM 12/21/00 -0500, you wrote:
> > Perhaps I should put in an example where all messages are logged to a
> > results buffer and sent to both the results file (in runOne) and to
> > env->log (in logOne)? Would that be helpful?
>
> That would be cool. Also, there's a semantic issue I don't quite
> understand as to when you want to use env->log and when you want to
> record things in the results file.

I think it would be great to always record everything in the results file and then be able to play it back, perhaps even with another ./glean option (i.e., one that says "reproduce the runtime output"). I suspect that people aren't always doing this because it's hard -- perhaps I can add some support to make it easy. I'll take a look.
From: Rik F. <fa...@va...> - 2000-12-21 15:22:08
On Mon 18 Dec 2000 13:47:56 -0700, Brian Paul <br...@va...> wrote:
> I spent some time over the past days looking at the polygon offset
> test and why it's failing with Mesa (s/w rendering and h/w rendering).
>
> First, as a sanity check, I ran pgos on my Indigo2 High IMPACT system
> and it failed 7 of the 9 tests. This led me to question the test
> itself. I'm also curious what kind of system Angus Dorbie originally
> tested with.
>
> Digging deeper, I found that the triangles which are being rendered are
> _very_ tiny. There are 121200 vertices in the sphere. I reduced the
> subdiv1 constant from 50 down to 5 (for 1320 vertices). The triangles
> are still pretty small, mind you. But after I fixed a Mesa bug in the
> s/w rasterizer, Mesa 3.5 passed all the basic offset tests. But it's
> still failing the punchthrough test.
>
> By rendering such tiny triangles I think the test is stressing the
> accuracy of sub-pixel triangle rendering more so than polygon offset.
>
> I'm not sure I understand what the purpose of the punchthrough test is.
> It seems to me that the typical use of polygon offset is to render
> some geometry, set the offset, then rerender the same geometry. If
> there are vertex transformations between the rendering passes (a la
> punchthrough), that could change the rasterization parameters enough
> to foil polygon offset.
>
> With Mesa, punchthrough is failing at the outermost edge of the spheres.
> The dz/dx and dz/dy partial derivatives are large at the edge, and I'm
> not surprised that sub-pixel triangles are having some rasterization
> problems there.
>
> Mesa's polygon offset seems to be working fine in practice in our
> various DRI drivers with games and such, but the pgos test is
> failing on all of them, AFAIK.
>
> I think the test needs to be reexamined, and probably changed.
> Comments?

It seems reasonable to relax the test and make the changes you propose above, since it seems that (except for punchthrough) we still understand what we're testing and how the results can vary.
From: Brian S. <br...@gl...> - 2000-12-21 14:07:51
At 08:30 AM 12/21/00 -0500, you wrote:
> Perhaps I should put in an example where all messages are logged to a
> results buffer and sent to both the results file (in runOne) and to
> env->log (in logOne)? Would that be helpful?

That would be cool. Also, there's a semantic issue I don't quite understand as to when you want to use env->log and when you want to record things in the results file.

.b
From: Rik F. <fa...@al...> - 2000-12-21 13:30:49
On Thu 21 Dec 2000 08:16:14 -0500, Brian Sharp <br...@gl...> wrote:
> At 09:44 AM 12/18/00 -0800, you wrote:
> > Does it record those failure details in the results database, so that
> > they can be checked against future runs? (It's nice to see exactly
> > what's different from one driver version or card to the next, which is
> > something you can't get by examining the log generated at test time.)
>
> I output it with env->log, which it looks like only prints it out when the
> test runs but doesn't save it anywhere; I'll have to check and see how to
> store it in the results.

Perhaps I should put in an example where all messages are logged to a results buffer and sent to both the results file (in runOne) and to env->log (in logOne)? Would that be helpful?
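[Note: a sketch of what Rik is proposing might look like the following. Only runOne/logOne, env->log, and the putresults/getresults stream hooks come from this thread; the class, its members, and the serialization format are hypothetical.]

#include <iostream>
#include <string>

// Hypothetical result type that buffers every message a test emits, so
// the text lands in the results file (via putresults) and can also be
// replayed to env->log in logOne.
class BufferedResult /* : public BaseResult */ {
public:
        bool pass;
        std::string messages;   // accumulated by runOne

        void putresults(std::ostream& s) const {
                s << pass << '\n' << messages.size() << '\n' << messages;
        }
        bool getresults(std::istream& s) {
                std::string::size_type len;
                s >> pass >> len;
                s.ignore(1);    // skip the newline after the length
                messages.resize(len);
                if (len)
                        s.read(&messages[0], static_cast<std::streamsize>(len));
                return s.good();
        }
};

// runOne appends to r.messages instead of printing; logOne then does
//         env->log << r.messages;
// so a playback option could reproduce the runtime output from the
// results file alone.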
From: Brian S. <br...@gl...> - 2000-12-21 13:15:08
At 09:44 AM 12/18/00 -0800, you wrote:
> Does it record those failure details in the results database, so that
> they can be checked against future runs? (It's nice to see exactly
> what's different from one driver version or card to the next, which is
> something you can't get by examining the log generated at test time.)

I output it with env->log, which it looks like only prints it out when the test runs but doesn't save it anywhere; I'll have to check and see how to store it in the results.

.b
From: Brian P. <br...@va...> - 2000-12-20 23:26:51
Rik Faith wrote:
> On Wed 20 Dec 2000 12:10:15 -0500, Rik Faith <fa...@va...> wrote:
> > Is anyone seeing the paths test failing when run as part of a complete
> > test suite (but not failing when run individually)?
> >
> > I'm trying to make sure my branch behaves like the trunk, but I can't
> > ever get the trunk to pass a paths test when it's run as part of a
> > complete suite...
>
> OK, I found a bug on my branch and now I fail in the same way that the
> trunk fails. This is good and bad.
>
> The good part is that the branch is ready for final review. I'll send
> out another posting soon.
>
> The bad part is that paths will fail after maskedClear for some old
> version of software Mesa. If you see paths failing with a new version,
> try this:
>
> ./glean -t +paths <-- always passes
>
> ./glean -t +maskedClear+paths <-- always fails (on old software Mesa):
>
> paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
> Stencil Test should have had no effect but actually modified the fragment
> paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
> Stencil Test should have had no effect but actually modified the fragment
> paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
> Stencil Test should have had no effect but actually modified the fragment
> paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
> Stencil Test should have had no effect but actually modified the fragment
>
> I haven't tried this on a recent Mesa, so it's probably nothing to worry
> about.

I seem to recall a color mask problem in an older version of Mesa. The glean path test has worked fine for me since Mesa 3.4, at least.

-Brian
From: Rik F. <fa...@va...> - 2000-12-20 21:48:03
I've made changes on the rik-0-1-branch that I think simplify writing
tests. I've merged the trunk in and I'd like to merge this work back to
the trunk tomorrow, assuming no one objects.
HOW TO CHECK OUT THE BRANCH
To check out the new branch, use -r rik-0-1-branch on the command line.
E.g.,
    cvs -d:pserver:ano...@cv...:/cvsroot/glean login
    cvs -d:pserver:ano...@cv...:/cvsroot/glean \
        co -d glean.rik -r rik-0-1-branch glean
HOW TO MAKE A SIMPLE TEST
tbasic.h shows how to make a simple test. The results are in a
different class from the test class, so you will derive your results
from BaseResult and add two methods for getting and putting the
information that is part of YOUR class -- BaseResult will handle the
header:
class BasicResult: public BaseResult {
public:
        bool pass;

        void putresults(ostream& s) const {
                s << pass << '\n';
        }
        bool getresults(istream& s) {
                s >> pass;
                return s.good();
        }
};
The test will be derived from the BaseTest template. I've made a macro
to help with this -- the expanded version is below:
class BasicTest: public BaseTest<BasicResult> {
public:
        GLEAN_CLASS(BasicTest, BasicResult);
};
This expands into:
class BasicTest: public BaseTest<BasicResult> {
public:
        BasicTest(const char* aName, const char* aFilter,
                  const char* aDescription):
                BaseTest(aName, aFilter, aDescription) {
                fWidth  = WIDTH;
                fHeight = HEIGHT;
                testOne = ONE;
        }
        BasicTest(const char* aName, const char* aFilter,
                  const char* anExtensionList,
                  const char* aDescription):
                BaseTest(aName, aFilter, anExtensionList, aDescription) {
                fWidth  = WIDTH;
                fHeight = HEIGHT;
        }
        virtual ~BasicTest() {}

        virtual void runOne(BasicResult& r, Window& w);
        virtual void compareOne(BasicResult& oldR, BasicResult& newR);
        virtual void logOne(BasicResult& r);
};
Then you make the rest of the test:
void BasicTest::runOne(BasicResult& r, Window&) { r.pass = true; }

void BasicTest::logOne(BasicResult& r) {
        logPassFail(r);
        logConcise(r);
}

void BasicTest::compareOne(BasicResult& oldR, BasicResult& newR) {
        comparePassFail(oldR, newR);
}
[NOTE: you do *NOT* provide your own run and compare functions -- these
you get from BaseTest. If you only want to run for one configuration,
then you set testOne = true in your constructor.]
RATIONALE
My idea is that runOne won't print anything and that logOne will always
be called after runOne. This seemed reasonable since it would be nice
to be able to have the comparison function print out _exactly_ the same
messages that the test itself prints out. I know that a lot of tests
don't do this now, and I think part of the reason is that it's so hard
to get information into and out of a results file. It is my sincere
hope that the new form of the Results class will make it trivially easy
to save, log, and print all of the information you'd like to. If it's
not easy enough, I'd like to make it easier if anyone can figure out
how.
[Note that if you just want a pass/fail, you can derive from BasicTest.
However, it's really simple now to derive from BaseTest<YourResult> and
make your results as complex or as simple as you'd like. You can start
by copying the tbasic.* files -- they're minimal and simple.]
FUTURE WORK
I think what we also need is a logStats() method that will print out all
the interesting information. This can be used in compareOne when
verbosity is on:
if (env->options.verbosity) {
        env->log << env->options.db1Name << ':';
        logStats(oldR);
        env->log << env->options.db2Name << ':';
        logStats(newR);
}
and can mean that a single logOne function would be sufficient for
most tests, e.g.:
void BasicPerfTest::logOne(BasicPerfResult& r) {
        logPassFail(r);
        logConcise(r);
        logStats(r);
}
(In which case I'd like to make this part of BaseTest and *NOT* have the
code duplicated in each individual test.)
I'd also like to standardize logging more and reduce code duplication
there, too, but that seemed like more work than I have time for right
now.
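[Note: to make the interface above concrete, here is a hypothetical test whose result carries a measurement as well as a pass flag. Only BaseResult, BaseTest, GLEAN_CLASS, runOne/logOne/compareOne, logPassFail/logConcise/comparePassFail, and env come from Rik's description; the timing field and all the names are invented for illustration.]

class TimingResult: public BaseResult {
public:
        bool pass;
        double msecs;           // invented: elapsed time in milliseconds

        void putresults(ostream& s) const {
                s << pass << ' ' << msecs << '\n';
        }
        bool getresults(istream& s) {
                s >> pass >> msecs;
                return s.good();
        }
};

class TimingTest: public BaseTest<TimingResult> {
public:
        GLEAN_CLASS(TimingTest, TimingResult);
};

void TimingTest::runOne(TimingResult& r, Window&) {
        r.msecs = 0.0;          // a real test would time something here
        r.pass = true;
}

void TimingTest::logOne(TimingResult& r) {
        logPassFail(r);
        logConcise(r);
        env->log << "\ttime = " << r.msecs << " ms\n";
}

void TimingTest::compareOne(TimingResult& oldR, TimingResult& newR) {
        comparePassFail(oldR, newR);
        if (env->options.verbosity && oldR.msecs != newR.msecs)
                env->log << "\ttime changed from " << oldR.msecs
                         << " to " << newR.msecs << " ms\n";
}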
From: Rik F. <fa...@va...> - 2000-12-20 18:22:13
On Wed 20 Dec 2000 12:10:15 -0500,
Rik Faith <fa...@va...> wrote:
> Is anyone seeing the paths test failing when run as part of a complete
> test suite (but not failing when run individually)?
>
> I'm trying to make sure my branch behaves like the trunk, but I can't
> ever get the trunk to pass a paths test when it's run as part of a
> complete suite...
OK, I found a bug on my branch and now I fail in the same way that the
trunk fails. This is good and bad.
The good part is that the branch is ready for final review. I'll send
out another posting soon.
The bad part is that paths will fail after maskedClear for some old
version of software Mesa. If you see paths failing with a new version,
try this:
./glean -t +paths <-- always passes
./glean -t +maskedClear+paths <-- always fails (on old software Mesa):
paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
Stencil Test should have had no effect but actually modified the fragment
paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
Stencil Test should have had no effect but actually modified the fragment
paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
Stencil Test should have had no effect but actually modified the fragment
paths: FAIL rgb8, db, z16, s8, accrgba16, win+pmap, id 0
Stencil Test should have had no effect but actually modified the fragment
I haven't tried this on a recent Mesa, so it's probably nothing to worry
about.
From: Rik F. <fa...@va...> - 2000-12-20 17:10:22
Is anyone seeing the paths test failing when run as part of a complete test suite (but not failing when run individually)? I'm trying to make sure my branch behaves like the trunk, but I can't ever get the trunk to pass a paths test when it's run as part of a complete suite...
From: Brian P. <br...@va...> - 2000-12-19 15:01:34
Allen Akin wrote:
> Did you change Mesa so that the offset value is scaled by the minimum
> resolvable difference for the current Visual? Last I checked that
> seemed to be the main problem, but I admit I haven't examined it in
> detail.

Yes, I did that. But I'm not convinced it's a done deal.

-Brian
From: Allen A. <ak...@po...> - 2000-12-19 05:23:21
Did you change Mesa so that the offset value is scaled by the minimum resolvable difference for the current Visual? Last I checked that seemed to be the main problem, but I admit I haven't examined it in detail.

Allen
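[Note: for context on "minimum resolvable difference": the OpenGL specification computes the window-coordinate depth offset for a polygon as

    o = m * factor + r * units

where factor and units are the glPolygonOffset() arguments, m is the maximum depth slope of the polygon (approximately max(|dz/dx|, |dz/dy|)), and r is the smallest implementation-dependent value guaranteed to produce a resolvable depth difference for the current framebuffer configuration. Scaling the units term by r appropriately for each Visual's depth precision is the behavior under discussion.]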
From: Allen A. <ak...@po...> - 2000-12-19 01:36:05
On Sun, Dec 17, 2000 at 11:08:14PM -0800, bs...@so... wrote:
| Everything seems to be working fine. I forced it to fail and it reports
| the errors specifically (what mode, arrays locked or unlocked, in a display
| list or not, etc.)

Does it record those failure details in the results database, so that they can be checked against future runs? (It's nice to see exactly what's different from one driver version or card to the next, which is something you can't get by examining the log generated at test time.)

Allen
From: Brian P. <br...@va...> - 2000-12-18 20:47:48
Brian Paul wrote:
> I spent some time over the past days looking at the polygon offset
> test and why it's failing with Mesa (s/w rendering and h/w rendering).
>
> First, as a sanity check, I ran pgos on my Indigo2 High IMPACT system
> and it failed 7 of the 9 tests. This led me to question the test
> itself. I'm also curious what kind of system Angus Dorbie originally
> tested with.

Something funny is going on with my Indigo2 tests. I reran with a different version of our DRI libGL and now it passed the first six tests (basic polygon offset) but failed all the punchthrough tests -- the same as s/w Mesa. I'm getting an X protocol error when I display remotely, but I think that's a different issue.

-Brian
From: Brian P. <br...@va...> - 2000-12-18 20:39:15
I spent some time over the past days looking at the polygon offset test and why it's failing with Mesa (s/w rendering and h/w rendering).

First, as a sanity check, I ran pgos on my Indigo2 High IMPACT system and it failed 7 of the 9 tests. This led me to question the test itself. I'm also curious what kind of system Angus Dorbie originally tested with.

Digging deeper, I found that the triangles which are being rendered are _very_ tiny. There are 121200 vertices in the sphere. I reduced the subdiv1 constant from 50 down to 5 (for 1320 vertices). The triangles are still pretty small, mind you. But after I fixed a Mesa bug in the s/w rasterizer, Mesa 3.5 passed all the basic offset tests. But it's still failing the punchthrough test.

By rendering such tiny triangles I think the test is stressing the accuracy of sub-pixel triangle rendering more so than polygon offset.

I'm not sure I understand what the purpose of the punchthrough test is. It seems to me that the typical use of polygon offset is to render some geometry, set the offset, then rerender the same geometry. If there are vertex transformations between the rendering passes (a la punchthrough), that could change the rasterization parameters enough to foil polygon offset.

With Mesa, punchthrough is failing at the outermost edge of the spheres. The dz/dx and dz/dy partial derivatives are large at the edge, and I'm not surprised that sub-pixel triangles are having some rasterization problems there.

Mesa's polygon offset seems to be working fine in practice in our various DRI drivers with games and such, but the pgos test is failing on all of them, AFAIK.

I think the test needs to be reexamined, and probably changed. Comments?

-Brian
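[Note: as a reference point for the "typical use" Brian describes, the canonical polygon-offset pattern looks roughly like the following. This is a generic sketch, not glean's pgos test; drawGeometry() is a placeholder for submitting the same primitives in both passes.]

#include <GL/gl.h>

void drawGeometry();    // placeholder: draw identical geometry each pass

// Typical polygon-offset use: push the filled surface slightly away from
// the viewer so that a second pass over the same geometry (e.g. outlines)
// wins the depth test instead of z-fighting.
void drawWithOffset()
{
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(1.0f, 1.0f);    // offset = m*factor + r*units
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
        drawGeometry();
        glDisable(GL_POLYGON_OFFSET_FILL);

        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
        drawGeometry();                 // same vertices, no offset

        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);      // restore state
}

Punchthrough differs in that the geometry is re-transformed between the passes, which is why small rasterization differences can defeat the offset.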