From: Craig <cr...@to...> - 2000-09-29 22:14:44
I am trying to use Mesa without the aid of any window toolkit such as GLUT, GLX, GTK+, or whatever. If you really want to know why, I can tell you in another email, but on to my question....

If I use any OpenGL/Mesa functions without using GLUT and its initialization routines, I get a segfault on the very first GL call I make. I am guessing that the GLUT routines malloc some space for buffers and so on that I will have to do manually. Is this correct? And if so, how do I do this?

Here is some example code that illustrates what I am trying to do. Just assume that everything below is in main():

    /* select clearing (background) color */
    glClearColor(0.0, 0.0, 0.0, 0.0);    /* segfaults here */

    /* initialize viewing values */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);

    /* clear all pixels */
    glClear(GL_COLOR_BUFFER_BIT);

    /* draw white polygon (rectangle) */
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_POLYGON);
        glVertex3f(0.25, 0.25, 0.0);
        glVertex3f(0.75, 0.25, 0.0);
        glVertex3f(0.75, 0.75, 0.0);
        glVertex3f(0.25, 0.75, 0.0);
    glEnd();

    /* start processing OpenGL routines */
    glFinish();

    /* specify which buffer to read from */
    glReadBuffer(GL_FRONT);
    glReadPixels(0, 0, MY_WIDTH, MY_HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, glbuffer);

    /* do something with glbuffer... */

I sure hope someone here can help me. Although I could very well just read through the GLUT source code, I would rather keep my sanity. I will be happy to send any additional information that you deem necessary.

Later,
--
     \     Craig "Cowboy" McDaniel
    /_\    Software Engineer
   /_/_\   Internet Tool & Die
  /_/_/_\  cr...@to...
 /_/_/_/_\ (502) 584-8665 ext 108
From: barrero <ba...@ir...> - 2000-09-30 13:53:55
Craig wrote:
> I am trying to use Mesa without the aid of any window toolkit such as GLUT,
> GLX, GTK+, or whatever. If you really want to know why, I can tell you in
> another email, but on to my question....
> If I use any OpenGL/Mesa functions without using GLUT and its initialization
> routines, I get a segfault on the very first GL call I make.
> I am guessing that the GLUT routines malloc some space for buffers and so on
> that I will have to do manually. Is this correct? and if so, how do I do this??
> Here is some example code that illustrates what I am trying to do. Just
> assume that everything below is in main():
>     /* select clearing (background) color */
>     glClearColor(0.0, 0.0, 0.0, 0.0);    /* segfaults here */
>     /* initialize viewing values */
>     glMatrixMode(GL_PROJECTION);
> ....

The problem is that you're trying to use Mesa/OpenGL without creating an OpenGL context! If you don't want to use GLUT, I suggest that you take a look at the examples of using the different interfaces (xmesa, svga, glx) in the Mesa xdemos directory.

Dany
--
-------------------------------------------------------------------------
 Daniel Barrero                 | "Who has been in the sky, will
 Engineer, Dreamer, Flyer....   |  always walk looking at it,
 e-mail:                        |  wishing to go back."
    Ba...@ir...                 |             Leonardo DaVinci
    da...@ma...                 |
 web home page:                 |  Well.. it's something like that :)
    http://members.tripod.com/dbarrero
-------------------------------------------------------------------------
From: Andreas E. <eh...@ly...> - 2000-10-04 15:42:31
On Fri, Sep 29, 2000 at 06:14:34PM -0400, Craig wrote:
> I am trying to use Mesa without the aid of any window toolkit such as GLUT,
> GLX, GTK+, or whatever. If you really want to know why, I can tell you in
> another email, but on to my question....

The way you are doing this won't work. You need to tell OpenGL/X where you want to do your rendering. If you want to use OpenGL in X you are pretty much stuck with GLX. Note that GLX isn't a window toolkit; it is rather the glue between platform-independent OpenGL and X.

I recommend http://www-europe.sgi.com/software/opengl/glandx/xlib/xlib.html for information on how to use OpenGL in X without using GLUT. (Or SDL for that matter, or qtgl or whatever.)

regards
Andreas Ehliar
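For reference, a minimal GLX setup along the lines Andreas describes might look like the sketch below. This is an illustrative sketch, not a complete program: error handling is minimal, the visual attributes are one plausible choice, and the window size is arbitrary. The key point is the glXCreateContext/glXMakeCurrent pair, which is what the code in the original question was missing.

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);       /* connect to the X server */
    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }

    /* ask for a GL-capable RGBA, double-buffered visual */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* create an X window that uses that visual */
    Colormap cmap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                    vi->visual, AllocNone);
    XSetWindowAttributes swa;
    swa.colormap = cmap;
    swa.event_mask = ExposureMask;
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 300, 300, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);

    /* the OpenGL context: without this, any GL call can crash */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, win, ctx);

    /* from here on, plain GL calls are safe */
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}
```

Running this requires an X server and linking against -lGL -lX11, so it only demonstrates the structure of the calls.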
From: Greg F. <je...@mu...> - 2000-10-04 20:11:06
> On Fri, Sep 29, 2000 at 06:14:34PM -0400, Craig wrote:
> > I am trying to use Mesa without the aid of any window toolkit such as GLUT,
> > GLX, GTK+, or whatever. If you really want to know why, I can tell you in
> > another email, but on to my question....

Perhaps you could use Mesa's off-screen rendering (OSMesa) driver.

I believe there's also a null frame buffer driver for X which would allow a GLX-based program to run (whether it uses GLUT, GTK+, Motif, Xt, etc.), but would throw away the output.

FYI: GLX isn't a window toolkit, it's the glue between Mesa/OpenGL and the windowing system.

-G
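For what it's worth, the OSMesa route Greg suggests could be sketched roughly as below. This assumes Mesa was built with the off-screen driver and is linked as -lOSMesa; the buffer size is an arbitrary example, and error checks are minimal.

```c
#include <stdio.h>
#include <stdlib.h>
#include "GL/osmesa.h"
#include "GL/gl.h"

#define WIDTH  256
#define HEIGHT 256

int main(void)
{
    /* create an off-screen rendering context -- no X server needed */
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx) {
        fprintf(stderr, "OSMesaCreateContext failed\n");
        return 1;
    }

    /* the application owns the color buffer: 4 bytes (RGBA) per pixel */
    GLubyte *buffer = malloc(WIDTH * HEIGHT * 4 * sizeof(GLubyte));

    /* bind buffer and context; GL calls are legal after this point */
    if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, WIDTH, HEIGHT)) {
        fprintf(stderr, "OSMesaMakeCurrent failed\n");
        return 1;
    }

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the scene here ... */
    glFinish();

    /* the rendered image now sits in 'buffer'; no window, no glReadPixels */

    free(buffer);
    OSMesaDestroyContext(ctx);
    return 0;
}
```

Since OSMesa renders straight into a buffer the application owns, it fits the "render off screen, then copy the pixels somewhere else" use case with no windowing system at all.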
From: Craig <cr...@to...> - 2000-10-04 22:00:07
This is long but you will probably find it very interesting.

I have been getting lots of great suggestions to my previous question and pointers on where to look. Now I have time to explain what I am trying to do (which I think is pretty cool) and maybe I can get some more advice.

Imagine a wall of monitors (I'll use a 2x2 wall for example). They are stacked like so:

    +--------++--------+
    |        ||        |
    |  PE 0  ||  PE 2  |
    |        ||        |
    +--------++--------+
    +--------++--------+
    |        ||        |
    |  PE 1  ||  PE 3  |
    |        ||        |
    +--------++--------+

Each monitor is connected to one computer (PE == Processing Element). The goal here is a virtual display space that consists of all four monitors. Each PE must know where it is in the wall so it knows which portion of the wall to display. (When you add animation, synchronization becomes a major issue, but other people have come up with a brilliant solution for that, and it isn't really important for this conversation.)

Now imagine that a high-res image is displayed very small on PE 0, and it can be moved (using a mouse) around the wall, sometimes displaying on multiple monitors (if it is spanning PE 0 and PE 2, for example). Or you can use the left and right mouse buttons to zoom in and out, so that the image takes up more than the entire video wall space, or only one pixel. All this has already been done by someone else. They have written a library (VWlib) which handles this (though it also makes use of special hardware for synchronization). There are SVGAlib and frame buffer versions of VWlib, but I am using the X version. It uses a struct called "vwbuf" which can be drawn anywhere on the wall (scale and position can be controlled), or it can run full screen (meaning the full wall space). A vwbuf is simply a two-dimensional RGB color buffer.

My Masters Thesis in Computer Science is to run OpenGL apps on this setup. You might think of it as a toolkit (like GLUT) for the video wall. I am not looking for the most optimal way - I do want to finish it in a reasonable amount of time :) So I am going for the basics. I want to be able to render an OpenGL scene off screen, then read the buffer from OpenGL and copy it to a vwbuf, since it is already set up to handle the video wall display issues.

I will probably run this full screen (full wall) and use the mouse and keyboard inputs to control OpenGL transformations (VWlib already has simple routines for reading from mouse and keyboard).

Here would be the pseudo-code for one frame of animation:

- Sync up all PEs (they are all finished drawing the last frame)
- Read input from mouse and keyboard
- Apply OpenGL transformations
- Render the result off-screen
- Copy the OpenGL buffer to vwbuf
- Each PE repaints its display

There are probably better ways to integrate OpenGL and VWlib more tightly, but again, I am just trying to finish. Someone else can follow up on my work with another thesis and make it better. Perhaps OpenGL can be told to only render a part of the final 2D result, rather than having each PE render the whole scene. Certainly that would be more optimal and make the overall performance of the wall better.

Feel free to discuss the "optimal" scenario, but please give me your thoughts on how to approach the basic solution.

Thanks,
--
     \     Craig "Cowboy" McDaniel
    /_\    Software Engineer
   /_/_\   Internet Tool & Die
  /_/_/_\  cr...@to...
 /_/_/_/_\ (502) 584-8665 ext 108
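One detail worth flagging in the "copy the OpenGL buffer to vwbuf" step: glReadPixels (and OSMesa's buffer) store rows bottom-to-top, while most 2-D image buffers run top-to-bottom. If vwbuf is top-down (an assumption -- check VWlib), each row has to be flipped during the copy. A sketch of that copy for GL_RGB/GL_UNSIGNED_BYTE pixels:

```c
#include <string.h>

/* Copy a bottom-up RGB pixel buffer (as returned by glReadPixels with
 * GL_RGB / GL_UNSIGNED_BYTE and a pack alignment of 1) into a top-down
 * destination buffer. width and height are in pixels; 3 bytes per pixel. */
void copy_gl_to_topdown(const unsigned char *gl, unsigned char *dst,
                        int width, int height)
{
    int row;
    size_t stride = (size_t)width * 3;   /* bytes per row of RGB pixels */
    for (row = 0; row < height; row++) {
        /* GL row 0 is the bottom of the image; dst row 0 is the top */
        memcpy(dst + (size_t)(height - 1 - row) * stride,
               gl + (size_t)row * stride,
               stride);
    }
}
```

Note that glReadPixels pads rows to GL_PACK_ALIGNMENT (4 by default), so either call glPixelStorei(GL_PACK_ALIGNMENT, 1) first or account for the padding in the stride.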
From: Brian P. <br...@va...> - 2000-10-05 14:30:57
Craig wrote:
> This is long but you will probably find it very interesting.
>
> [... description of the 2x2 video wall, VWlib, and the vwbuf struct snipped;
> see the previous message ...]
>
> Here would be the pseudo-code for one frame of animation:
>
> - Sync up all PEs (they are all finished drawing the last frame)
> - Read input from mouse and keyboard
> - Apply OpenGL transformations
> - Render the result off-screen
> - Copy the OpenGL buffer to vwbuf
> - Each PE repaints its display
>
> There are probably better ways to integrate OpenGL and VWlib more tightly,
> but again, I am just trying to finish. Someone else can follow up on my work
> with another thesis and make it better. Perhaps OpenGL can be told to only
> render a part of the final 2D result, rather than having each PE render the
> whole scene. Certainly that would be more optimal and make the overall
> performance of the wall better.
>
> Feel free to discuss the "optimal" scenario, but please give me your
> thoughts on how to approach the basic solution.

The July/August 2000 issue of IEEE Computer Graphics featured articles on alternative display technologies. Several of the articles discussed multi-machine OpenGL rendering. I recommend that you read it.

The basic approach taken by one of them is this: create a new OpenGL-like API layer which broadcasts OpenGL commands to N OpenGL machines. For example:

    void glEnable(GLenum cap)
    {
        int i;
        for (i = 0; i < N; i++) {
            /* send "glEnable" message to machine[i] */
        }
    }

Intercept glViewport commands to set up the image tiling. If someone calls glViewport(0, 0, 2000, 2000) and each screen is 1000x1000, you'd send glViewport(0, 0, 1000, 1000) to each of the four machines (using your example).

Intercept glFrustum, glOrtho, etc. commands to set up image tiling as well. This is a little trickier, but suppose someone calls glFrustum(-x, x, -y, y, near, far). You'd break that frustum down into 4 sub-frustum calls, sort of like:

    glFrustum(-x, 0, -y, 0, near, far);  /* lower-left,  PE 1 */
    glFrustum( 0, x, -y, 0, near, far);  /* lower-right, PE 3 */
    glFrustum(-x, 0,  0, y, near, far);  /* upper-left,  PE 0 */
    glFrustum( 0, x,  0, y, near, far);  /* upper-right, PE 2 */

My TR (Tile Rendering) library that I wrote years ago (visit my homepage) does exactly this.

glGet*() commands are an issue too if your N OpenGL machines aren't identical. glGetDoublev(GL_PROJECTION_MATRIX, ...), for example, will return different results on each machine, so state like that will have to be maintained separately in the 'host' library.

This approach allows many OpenGL apps to run on a multi-screen system. But if you have a specific OpenGL application you want to run on a multi-screen display, the odds are good that a more specialized/custom solution would be more efficient.

-Brian
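Brian's 2x2 glFrustum split generalizes to any grid of tiles. A small pure helper (a sketch; Brian's TR library mentioned above does the real thing, including overlap and border handling) that computes the sub-frustum for tile (col, row) of a cols x rows wall, with (0, 0) as the lower-left tile:

```c
/* Compute the glFrustum parameters for one tile of a tiled display.
 * (left, right, bottom, top) describe the full frustum; the tile at
 * (col, row) -- with (0, 0) the lower-left tile -- gets an equal slice.
 * near and far are shared by all tiles, so they are not computed here. */
void tile_frustum(double left, double right, double bottom, double top,
                  int cols, int rows, int col, int row,
                  double *t_left, double *t_right,
                  double *t_bottom, double *t_top)
{
    double w = (right - left) / cols;    /* width of one tile's slice */
    double h = (top - bottom) / rows;    /* height of one tile's slice */
    *t_left   = left + col * w;
    *t_right  = left + (col + 1) * w;
    *t_bottom = bottom + row * h;
    *t_top    = bottom + (row + 1) * h;
}
```

Each PE then calls glFrustum(t_left, t_right, t_bottom, t_top, near, far). For Brian's 2x2 example with the full frustum (-x, x, -y, y), tile (0, 0) comes out as (-x, 0, -y, 0), matching his lower-left call.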