Hi,
I'm pretty new to OpenGL and Mesa (I am still reading my copy of the
Red Book...). If I picked the wrong place for my question, please let me know.
I'm trying to find out whether I can use Mesa to render polygons
to offscreen images. In particular, I'm interested in the accuracy of
the anti-aliasing routines.
To find out, I followed the example in
MesaDemos/progs/osdemos/osdemo32.c and changed the function
render_image() to the one quoted at the end of this mail. I basically
draw a single polygon with three vertices.
Because the top edge of my polygon is almost, but not quite,
horizontal, I expect (theoretically) to find a row of pixels with
gradually increasing values in the buffer. However, the total number
of unique pixel values is at most 23. I inspect only the "R" color
component, since I'm interested only in greyscale images, and I assume
that anti-aliasing treats all color components equally.
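I count the distinct values roughly like this (a condensed sketch of my
check, not the exact code I run; it reads back only the red channel,
WIDTH and HEIGHT are the same #defines as in render_image() at the end
of this mail, and the file already includes stdio.h/stdlib.h and the GL
headers, as in the demo):

static int cmp_float( const void *a, const void *b )
{
   float fa = *(const float *) a;
   float fb = *(const float *) b;
   return (fa > fb) - (fa < fb);
}

static void count_unique_r_values( void )
{
   static GLfloat red[WIDTH * HEIGHT];
   int i, unique = 1;

   /* read back only the red channel, as floats, so the read-back
      itself does not add any extra quantization */
   glReadPixels(0, 0, WIDTH, HEIGHT, GL_RED, GL_FLOAT, red);

   /* sort, then count runs of equal values */
   qsort(red, WIDTH * HEIGHT, sizeof(GLfloat), cmp_float);
   for (i = 1; i < WIDTH * HEIGHT; i++)
      if (red[i] != red[i - 1])
         unique++;

   printf("number of unique R values: %d\n", unique);
}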
I suspect that the anti-aliasing routine is written for speed,
producing a result that is merely visually acceptable. Is there a way
to increase the accuracy of the anti-aliasing routine? Or do I have to
turn to more GPGPU/shader-style solutions when I'm more interested in
accuracy than in speed?
I use Mesa inside a VirtualBox VM, so I guess it's all software rendering.
Thanks a lot,
Martijn Sanderse
#define WIDTH 2500
#define HEIGHT 20

static void render_image( void )
{
   glViewport(0, 0, WIDTH, HEIGHT);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluOrtho2D(0, WIDTH, 0, HEIGHT);   /* one unit per pixel */

   glEnable(GL_POLYGON_SMOOTH);
   glEnable(GL_BLEND);
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
   glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);

   glClearColor(0.0, 0.0, 0.0, 0.0);  /* black */
   glClear(GL_COLOR_BUFFER_BIT);

   glColor3f(1.0, 1.0, 1.0);          /* white */
   glPushMatrix();
   glBegin(GL_POLYGON);
   glVertex2d(0.0, 10.0);
   glVertex2d(2500.0, 11.0);          /* top edge slope = 1/2500 */
   glVertex2d(2500.0, 10.0);
   glEnd();
   glPopMatrix();

   glFinish();
}
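In case it helps: the rest of my program is basically the unchanged
demo. Condensed, the off-screen setup looks roughly like this (from
memory, so the context creation and channel type may differ from the
real osdemo32.c; I show a plain 8-bit RGBA buffer here):

#include <stdio.h>
#include <stdlib.h>
#include "GL/osmesa.h"
#include "GL/glu.h"

int main( int argc, char *argv[] )
{
   OSMesaContext ctx;
   void *buffer;

   /* plain 8-bit-per-channel RGBA context; osdemo32.c itself may
      request a deeper channel type */
   ctx = OSMesaCreateContext( OSMESA_RGBA, NULL );
   if (!ctx) {
      printf("OSMesaCreateContext failed!\n");
      return 1;
   }

   buffer = malloc( WIDTH * HEIGHT * 4 * sizeof(GLubyte) );
   if (!OSMesaMakeCurrent( ctx, buffer, GL_UNSIGNED_BYTE, WIDTH, HEIGHT )) {
      printf("OSMesaMakeCurrent failed!\n");
      return 1;
   }

   render_image();
   count_unique_r_values();   /* the sketch from earlier in this mail */

   free( buffer );
   OSMesaDestroyContext( ctx );
   return 0;
}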