pyopengl-users Mailing List for PyOpenGL (Page 31)
Brought to you by: mcfletch
From: Jonathan W. <wr...@es...> - 2011-01-17 17:16:03
|
Hello,

I was looking for a compiled version of Togl for Windows 7 64-bit, to use with Python 2.7 and PyOpenGL 3.0.1. This took all afternoon, so I'm cross-posting what I found so Google might help me better in the future.

A Togl17.dll file for win64 (amd64) can be found in the binary distributions of Netgen:

    http://sourceforge.net/projects/netgen-mesher/files/netgen-mesher/Additional%20Files/TclTkTixTogl-w64.zip

It can be installed by copying the DLL to:

    C:\python27\lib\site-packages\OpenGL\Tk\togl-win32\togl17.dll

Then add a pkgIndex.tcl file containing:

    package ifneeded Togl 1.7 [list load [file join $dir Togl17.dll]]

This appears to work for a cursory test using my own application.

Cheers,
Jon
From: Chris B. <Chr...@no...> - 2011-01-15 01:23:53
|
On 1/14/2011 12:22 PM, Pedro Miranda wrote:
> But is someone sure that the proposed strategy to use NaN to separate
> different road segments is viable?

Actually, no, sorry -- now that I think about it more, I think we found that NaNs behaved differently on different video cards, so that may not be safe.

> [[(-45005.21, 168113.25), (-45003.77, 168112.71), (-44995.30, 168109.057)]]
>
> How can I convert this lineVert to a numpy array and create a "big"
> array containing all lineVerts?

numpy arrays are fixed-size, so your best bet is to build a big list first, and then convert it to a numpy array. Something like:

    for i in xrange(num_segments):
        lineVert.extend(self.shp.read_object(i).vertices())
    line_array = np.array(lineVert, dtype=np.float32)

That will put them all sequentially together. In real use, you will want to either:

- create one numpy array for each connected set of segments -- you can do that by seeing if the endpoint of one is the start point of the next, or
- keep track of the indexes where the breaks occur; I think you can then put it all in a VBO or vertex array and index into it to draw each contiguous line.

Sorry, I didn't write the OpenGL part of MapRoom, so I have only a hazy idea of how this works.

> Is there any upper limit to the maximum number of vertices a vertex
> array can contain?

Probably only memory limits.

> d) VBO
> ------
> The next step in the development of the application (after the map
> layer is smoothly drawn in OpenGL) is to draw a series of routes on
> top of the map (static). However, in this case this data can be more
> dynamic because the user can select the time interval of the data that
> is shown. I will definitely consider in that case using VBO.

Though I suspect that the route data is MUCH smaller, so performance may not be an issue -- VBOs may be a good way to go anyway.

> e) number of vertices per objMap (vertex array)
> -----------------------------------------------
> I was checking and there are:
> - 7 vertices on average per vertex array
> - 2 vertices is the minimum number of vertices per vertex array
> - 272 vertices is the maximum number of vertices per vertex array
>
> So the average is pretty low and making so many calls to OpenGL
> routines is certainly not good.

Yes -- that is your key problem: you need to join together at least the segments that are contiguous. I'd test that and see what the numbers look like.

> f) maproom
> ----------
> Thank you for sending the link to MapRoom. I will install it and check
> how it works. The API contains lots of useful functionality!
> Collaboration seems a good idea; let's see how things evolve.

OK -- it's been sleeping a bit since Dan Helfman left us -- but we are picking it up again, and probably hiring a new developer to work on it -- anyone looking for work in Seattle? I'll try to see if I can add loading of shapefiles like yours next week -- we do want that eventually.

> FYI: I have recently found that Quantum GIS has Python bindings as
> well. You can find more information at
> http://www.qgis.org/pyqgis-cookbook/

Yup -- QGIS is really nice, and can be used as a toolkit for building custom apps. We're not using it because we have data that does not fit into the standard GIS data model, and we want to be able to draw fast and interactively edit big data sets -- QGIS doesn't (or didn't when we evaluated it) do that very well. Perhaps it's a good option for you, though.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959 voice
7600 Sand Point Way NE   (206) 526-6329 fax
Seattle, WA 98115        (206) 526-6317 main reception
Chr...@no...
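Chris's two options -- one array per connected run, or one big array plus break indexes -- can be sketched like this (a minimal illustration; `pack_polylines` and the sample segments are invented for the example):

```python
import numpy as np

def pack_polylines(polylines):
    """Concatenate variable-length polylines into one (N, 2) float32 array,
    recording where each polyline starts so each can still be drawn alone."""
    counts = [len(p) for p in polylines]
    # first vertex index of each polyline within the big array
    starts = np.concatenate(([0], np.cumsum(counts)[:-1])).astype(np.int32)
    verts = np.array([xy for p in polylines for xy in p], dtype=np.float32)
    return verts, starts, np.array(counts, dtype=np.int32)

segments = [[(0.0, 0.0), (1.0, 0.0)],
            [(1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]]
verts, starts, counts = pack_polylines(segments)
# verts.shape == (5, 2); starts == [0, 2]; counts == [2, 3]
```

The `starts`/`counts` pair is exactly the bookkeeping needed to later draw each contiguous line out of one shared vertex array or VBO.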
From: Pedro M. <pm...@gm...> - 2011-01-14 20:23:08
|
Hi all,

First of all, thank you for all your comments and fast replies. Second, I am sorry for replying late, but I was away for the day. I should also add that I am not a professional programmer and I was not aware of how some of these mechanisms work. To avoid spamming the mailing list I will try to address all the comments you have sent in a single e-mail.

a) numpy arrays
---------------
Before starting this thread I tried to make an implementation using numpy arrays, but I guess I was not successful. As many of you point out, this should be the way to go for managing GIS data. In your comments you state that I could make a "big" numpy array containing all the road segments. For me this is OK because this is just map data that I will not be changing (static). But is someone sure that the proposed strategy to use NaN to separate different road segments is viable?

The following code to read a line of the shapefile returns a list of vertices for a given road segment:

    lineVert = self.shp.read_object(i).vertices()

A print of lineVert will give, for example:

    [[(-45005.21, 168113.25), (-45003.77, 168112.71), (-44995.30, 168109.057)]]

How can I convert this lineVert to a numpy array and create a "big" array containing all lineVerts? Is there any upper limit to the maximum number of vertices a vertex array can contain?

b) Python list creation
-----------------------
I am just loading the shapefile and creating the list of lists of tuples at the initialization of the application. Since this is static data I do this once, at the beginning. I do not allow the user to load a new map during runtime; besides, that is not interesting for my application.

c) profiling
------------
I was not aware of RunSnakeRun for profiling Python. Thank you for sending this -- it was useful! Analysing the output I found that the majority of the time (almost 11 seconds, or 60% of the total) was spent in the function "as...@li...", which is part of python/OpenGL/Arrays. I think this function does the conversion that has been mentioned here. The second most important cost was due to "wra...@wr...". If anyone is interested I can send a copy of the profile file.

d) VBO
------
The next step in the development of the application (after the map layer is smoothly drawn in OpenGL) is to draw a series of routes on top of the map. In this case the data can be more dynamic, because the user can select the time interval of the data that is shown. I will definitely consider using VBOs in that case. Does someone know a good tutorial on using VBOs in dynamic scenarios?

e) number of vertices per objMap (vertex array)
-----------------------------------------------
I was checking, and there are:

- 7 vertices on average per vertex array
- 2 vertices is the minimum number of vertices per vertex array
- 272 vertices is the maximum number of vertices per vertex array

So the average is pretty low, and making so many calls to OpenGL routines is certainly not good.

f) maproom
----------
Thank you for sending the link to MapRoom. I will install it and check how it works. The API contains lots of useful functionality! Collaboration seems a good idea; let's see how things evolve.

FYI: I have recently found that Quantum GIS has Python bindings as well. You can find more information at http://www.qgis.org/pyqgis-cookbook/

Thank you!

Pedro

-------- Original message --------
> Date: Thu, 13 Jan 2011 20:46:45 -0700
> From: Ian Mallett <geo...@gm...>
> To: Chris Barker <Chr...@no...>
> CC: pyopengl-users <pyo...@li...>
> Subject: Re: [PyOpenGL-Users] Pyopengl - slow performance using
> [...]
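The conversion asked about in a) is direct, since each lineVert is a list holding one list of (x, y) tuples. A minimal sketch (the sample data is the lineVert printed above, duplicated to stand in for many segments):

```python
import numpy as np

# lineVert as returned by read_object(i).vertices():
# a list holding one list of (x, y) tuples
lineVert = [[(-45005.21, 168113.25), (-45003.77, 168112.71),
             (-44995.30, 168109.057)]]

# one segment -> an (N, 2) float32 array
seg = np.asarray(lineVert[0], dtype=np.float32)

# "big" array: collect segments in a plain list first, concatenate once
all_segments = [seg, seg]  # stand-ins for many read_object() results
big = np.concatenate(all_segments).astype(np.float32)
# big.shape == (6, 2)
```

Concatenating once at load time avoids rebuilding arrays on every paint event.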
From: Ian M. <geo...@gm...> - 2011-01-14 03:46:51
|
On Thu, Jan 13, 2011 at 7:45 PM, Chris Barker <Chr...@no...> wrote:
> On 1/13/2011 3:17 PM, Ian Mallett wrote:
>>>> doing this. Python lists are just linked lists,
>>>
>>> no, they are not -- they are internally regular C arrays.
>>
>> But they are resizable, which implies they are either linked lists or
>> array lists--because I doubt the pointers are copied over to a brand-new
>> array just big enough each time a single element is added or removed.
>
> A bit OT, but they handle re-sizing by over-allocating when appended to.
> So most of the time you can append to a list without any memory
> allocation or copying, but as it grows, it does need to do that once in
> a while.

Right -- this data structure is called an array list.

You're absolutely correct that Python users should avoid it for very high performance. I usually stick with NumPy arrays for all but the most dead-simple stuff. They also tend to be compatible with other packages -- in particular, I've used them to tie very nicely into PyOpenGL's VBO class:

    my_vbo = vbo.VBO(numpy_1x3_array, 'f')

In fact, they seemed to be the only thing that actually did work. And, speed aside, NumPy provides some truly great functionality when it comes to working with arrays -- like element-wise operations, anyone?

Ian
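The array layout that makes this work is just "float32, C-contiguous": one solid block of memory that can go to the driver without per-element conversion. A small sketch of the shape Ian's VBO call expects (data values are arbitrary; the commented lines assume PyOpenGL is installed and a GL context is current):

```python
import numpy as np

# Vertex data as an (N, 3) float32, C-contiguous array
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]], dtype=np.float32)

assert verts.flags["C_CONTIGUOUS"]   # one solid block of memory
assert verts.itemsize == 4           # 32-bit floats, matching GLfloat

# With a live GL context, wrapping it is one line:
# from OpenGL.arrays import vbo
# my_vbo = vbo.VBO(verts)
```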
From: Chris B. <Chr...@no...> - 2011-01-14 02:45:26
|
On 1/13/2011 3:17 PM, Ian Mallett wrote:
>>> doing this. Python lists are just linked lists,
>>
>> no, they are not -- they are internally regular C arrays.
>
> But they are resizable, which implies they are either linked lists or
> array lists--because I doubt the pointers are copied over to a brand-new
> array just big enough each time a single element is added or removed.

A bit OT, but they handle re-sizing by over-allocating when appended to. So most of the time you can append to a list without any memory allocation or copying, but as it grows, it does need to do that once in a while. Linked lists would be great for appending, but horrible for indexing and slicing.

But anyway, the original point is correct -- Python lists don't store the data in a way that is compatible with OpenGL, so a lot of work has to be done to create OpenGL objects from lists. numpy arrays are really just wrappers around blocks of memory, so they can be used to move data in and out of OpenGL efficiently.

-Chris
From: Ian M. <geo...@gm...> - 2011-01-13 23:17:18
|
Hi,

If you're binding a new vertex array for each individual line, then that is certainly your problem. I once had an application where I changed vertex buffers a couple of times per object -- maybe 50 bindings -- and the program was too slow to be usable. The problem had to be solved with interleaved vertex arrays/VBOs. 350,000 vertex buffer bindings, and I see why you have performance issues!

You can solve this by batching more of your edges into a single vertex array. If you batched all of them into a single vertex array, I'd predict somewhere around 5-15 fps, because you're still drawing these things with vertex arrays -- i.e., transferring all the data across the graphics bus each frame. Even if you have to duplicate vertices, you'll still be far better off.

AFTER you've done this, to make your application realtime, you can transition to VBOs, where you'll transfer the data once. I recommend using the OpenGL.arrays.vbo class. VBOs are the "best" solution, and will even let you do fancy stuff like update the data (or only a part of it). An easier solution is display lists -- although they're technically deprecated in the versions of OpenGL that nobody likes. After this, you'll probably be around 50-70 fps. Keep in mind that 350,000 *lines* is a hefty load, because each line (internally) requires two triangles.

On Thu, Jan 13, 2011 at 3:49 PM, Chris Barker <Chr...@no...> wrote:
> On 1/13/2011 3:50 AM, Alejandro Segovia wrote:
>> I find it puzzling that you didn't notice any speedup at all after
>> doing this. Python lists are just linked lists,
>
> no, they are not -- they are internally regular C arrays.

But they are resizable, which implies they are either linked lists or array lists -- because I doubt the pointers are copied over to a brand-new array just big enough each time a single element is added or removed.

Ian
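Batching as Ian describes amounts to concatenating the strips and keeping per-strip offsets and lengths, which is exactly what glMultiDrawArrays consumes: one bind, one call. A hedged sketch (the strip data is invented; the commented GL calls assume a current context):

```python
import numpy as np

# Hypothetical small line strips (lists of (x, y) vertices)
strips = [[(0, 0), (1, 0)], [(1, 1), (2, 1), (3, 2)], [(4, 0), (5, 1)]]

# One vertex array for all strips...
verts = np.array([v for s in strips for v in s], dtype=np.float32)

# ...plus per-strip offsets/lengths, so the whole map is a single
# bind and a single draw call instead of thousands.
count = np.array([len(s) for s in strips], dtype=np.int32)
first = np.concatenate(([0], np.cumsum(count)[:-1])).astype(np.int32)

# glVertexPointerf(verts)
# glMultiDrawArrays(GL_LINE_STRIP, first, count, len(strips))
```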
From: Chris B. <Chr...@no...> - 2011-01-13 22:49:13
|
On 1/13/2011 3:29 AM, pm...@gm... wrote:
> I am developing an application using PyOpenGL to read data from a shape
> file and to draw it in 2D using OpenGL. However, the performance is
> really bad; it takes up to 20 seconds to draw the map. I am using
> Ubuntu and an nVidia NVS 3100M.

You may be interested in this:

    https://bitbucket.org/dhelfman/maproom/wiki/Home

It aims to be a general-purpose map viewer/manipulator library for custom mapping applications, written in Python, with PyOpenGL, and some Cython for performance bottlenecks. We don't have shapefile line-segment support yet, but the pieces are all there, so it wouldn't be hard to add, and we've addressed your performance issues. How far are you going with this? Maybe an opportunity for collaboration?

On 1/13/2011 3:50 AM, Alejandro Segovia wrote:
> I find it puzzling that you didn't notice any speedup at all after
> doing this. Python lists are just linked lists,

no, they are not -- they are internally regular C arrays.

> which means they are probably not consecutive in memory, but rather,
> they have to be converted to an array at each call to drawArrays.

however, they are arrays of pointers to PyObjects, so you are right: each PyObject needs to be converted, and that can be slow -- you don't want to do that on every paint event.

On 1/13/2011 3:56 AM, Jonathan Hartley wrote:
> Personally I highly recommend using RunSnakeRun to draw diagrams from
> the output of stdlib cProfile.

cool! thanks, I hadn't seen that before -- looks really useful.

> I'd guess that part (a) of the code, constructing the arrays, is
> probably pretty expensive, since you are processing every individual
> vertex. Importantly, this code should only need to be run at
> application start-up, not every frame, is this correct?

or at data loading. MapRoom is a bit pokey when you load a large data set, but then the drawing is plenty fast.

> I can see that the value of objMap represents a vertex array, but how
> many vertices are generally in it? You may find substantial
> acceleration from increasing the number of vertices in an array,

Yup -- I think this is your problem -- you won't want to have to draw each line segment separately, and I'll bet you have a lot. As a rule, if you have much looping in Python inside your drawing code, it's going to be slow.

> the number of times this loop needs to iterate. In order to do this,
> you will need to put the arrays for several roads into a single array.

yup -- I *think* you can put NaNs in between segments to create a gap and draw a bunch with one line-drawing call. Or, if you know which segments are connected, you can re-join them, essentially.

HTH,
-Chris
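The "re-join connected segments" idea can be sketched in plain Python (a greedy, exact-match version only; `join_contiguous` and the sample roads are invented, and real shapefile data may need tolerance-based matching and non-adjacent lookup):

```python
def join_contiguous(segments):
    """Merge each polyline into the previous one when the previous
    polyline's end point equals this one's start point."""
    merged = []
    for seg in segments:
        if merged and merged[-1][-1] == seg[0]:
            merged[-1].extend(seg[1:])   # drop the shared vertex
        else:
            merged.append(list(seg))
    return merged

roads = [[(0, 0), (1, 0)], [(1, 0), (2, 1)], [(5, 5), (6, 5)]]
print(join_contiguous(roads))
# [[(0, 0), (1, 0), (2, 1)], [(5, 5), (6, 5)]]
```

Fewer, longer polylines means fewer draw calls, which is the point of the exercise.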
From: Jonathan H. <ta...@ta...> - 2011-01-13 12:16:30
|
On 13/01/2011 11:50, Alejandro Segovia wrote:
> I find it puzzling that you didn't notice any speedup at all after
> doing this. Python lists are just linked lists, which means they are
> probably not consecutive in memory, but rather, they have to be
> converted to an array at each call to drawArrays.

Hey. I'm guessing this implies there is something else going on which is massively slower, swamping the change from lists to numpy arrays. This hypothesis is borne out by the reported performance of 20 secs per frame. :-)

--
Jonathan Hartley    Made of meat.    http://tartley.com
ta...@ta...    +44 7737 062 225    twitter/skype: tartley
From: Jonathan H. <ta...@ta...> - 2011-01-13 12:16:30
|
On 13/01/2011 11:29, pm...@gm... wrote:
> Hi all,
>
> I am developing an application using PyOpenGL to read data from a shape
> file and to draw it in 2D using OpenGL. However, the performance is
> really bad; it takes up to 20 seconds to draw the map. I am using
> Ubuntu and an nVidia NVS 3100M.
>
> The shape file being read has the following characteristics:
>
> - 50,000 entries containing the descriptions of road segments; each
>   road segment has the associated geometry (a set of (x, y) points)
> - +/- 350,000 points in total
>
> A simplified version of the code is shown below:
>
> a) to get data from the shapefile (using pyshapelib) and store it in a
>    list of lists of tuples (Python):
>
>     self.shp = shapelib.ShapeFile(filename)
>     for i in range(self.shp.info()[0]):
>         lineVert = self.shp.read_object(i).vertices()
>         tempArray = []
>         for j in lineVert[0]:
>             tempArray.append([j[0], j[1]])
>         self.array.append(tempArray)
>
> b) to draw the data using PyOpenGL:
>
>     glEnableClientState(GL_VERTEX_ARRAY)
>     # obtained from the shapefile (array element)
>     arrayList = self.map.getArrayList()
>     if len(arrayList) > 0:
>         for objMap in arrayList:
>             self.drawPolyline(objMap)
>
>     def drawPolyline(self, inArray):
>         glVertexPointerf(inArray)
>         glDrawArrays(GL_LINE_STRIP, 0, len(inArray))
>
> - Do you have any hints how to improve the performance dramatically and
>   make it usable? I tried several options but none worked out.
>
> - Do I need to execute the OpenGL code in C?

Hey there,

Have you profiled or timed your code to see precisely which parts are taking all the time? Personally, I highly recommend using RunSnakeRun to draw diagrams from the output of stdlib cProfile. Let me know if you'd like to do this but are having problems, and I'll help you out.

Since GIS data is mostly static (i.e. not animated), you don't need to process every vertex every frame, so I would be very surprised if you had to move from Python to C for performance reasons. I am confident that you will be able to optimise this code to be fast.

I'd guess that part (a) of the code, constructing the arrays, is probably pretty expensive, since you are processing every individual vertex. Importantly, this code should only need to be run at application start-up, not every frame -- is this correct?

Also, in part (b) of the code, you loop over the objects in arrayList:

    for objMap in arrayList:
        self.drawPolyline(objMap)

I can see that the value of objMap represents a vertex array, but how many vertices are generally in it? You may find substantial acceleration from increasing the number of vertices in an array, and hence decreasing the number of times this loop needs to iterate. In order to do this, you will need to put the arrays for several roads into a single array. I don't know whether it will be useful to put the vertices for *all* roads into a single array -- there may be an upper limit on the useful size of vertex arrays. Doubtless someone cleverer than me on the list will know this.

Also, a very minor point:

    if len(arrayList) > 0:
        for objMap in arrayList:
            self.drawPolyline(objMap)

If arrayList is a normal Python iterable (e.g. a list), then this could be written more simply as:

    for objMap in arrayList:
        self.drawPolyline(objMap)

Best regards,

Jonathan
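Profiling as Jonathan suggests can be done from the command line (`python -m cProfile -o out.prof app.py`, then open out.prof in RunSnakeRun) or inline. A small self-contained sketch with a stand-in workload (`build_arrays` is invented for illustration):

```python
import cProfile
import io
import pstats

def build_arrays():
    # Stand-in for the expensive shapefile-to-vertex-list conversion
    data = [[(float(i), float(i + 1))] * 7 for i in range(1000)]
    return [pt for seg in data for pt in seg]

profiler = cProfile.Profile()
profiler.enable()
verts = build_arrays()
profiler.disable()

# Print the top entries by cumulative time; alternatively,
# profiler.dump_stats("out.prof") writes a file RunSnakeRun can open.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```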
From: Alejandro S. <as...@gm...> - 2011-01-13 11:52:35
|
Hello Pedro,

On Jan 13, 2011, at 9:29 AM, pm...@gm... wrote:
> I am developing an application using pyopenGL to read data from a shape
> file and to draw it in 2D using openGl. However, the performance is
> really bad; it takes up to 20 seconds to draw the map.
> [...]
> - Do you have any hints how to improve the performance dramatically and
>   make it usable? I tried several options but none worked out. For
>   instance I tried to replace python lists with numpy arrays but the
>   performance was not significantly changed.

I find it puzzling that you didn't notice any speedup at all after doing this. Python lists are just linked lists, which means they are probably not consecutive in memory, but rather, they have to be converted to an array at each call to drawArrays. Are you 100% certain you were converting from a Python list to a numpy array just once, and not at every frame?

The other thing I noticed is that you are making several calls to drawArrays. This might be required by your app's logic, but if the data is actually static, I would consider making just one big array and supplying all the data at once to OpenGL.

Finally, I would also consider using VBOs to prevent copying vertex data from main memory to the GPU several times over (especially if your data is static).

> - Do I need to execute the OpenGL code in C? Although I could do this,
>   I still need to use Python as a scripting language due to other
>   dependencies of the implementation.

I don't think so. As long as you avoid doing a lot of calculations in Python and avoid using deprecated features, you should be fine. Please consider revising the usage of numpy arrays.

Good luck!

Alejandro.-
From: <pm...@gm...> - 2011-01-13 11:29:36
|
Hi all,

I am developing an application using PyOpenGL to read data from a shape file and to draw it in 2D using OpenGL. However, the performance is really bad; it takes up to 20 seconds to draw the map. I am using Ubuntu and an nVidia NVS 3100M.

The shape file being read has the following characteristics:

- 50,000 entries containing the descriptions of road segments; each road segment has the associated geometry (a set of (x, y) points)
- +/- 350,000 points in total

A simplified version of the code is shown below:

a) to get data from the shapefile (using pyshapelib) and store it in a list of lists of tuples (Python):

    ...
    self.shp = []
    self.extents = []
    self.array = []
    ...
    self.shp = shapelib.ShapeFile(filename)
    for i in range(self.shp.info()[0]):
        lineVert = self.shp.read_object(i).vertices()
        tempArray = []
        for j in lineVert[0]:
            tempArray.append([j[0], j[1]])
        self.array.append(tempArray)

b) to draw the data using PyOpenGL:

    ...
    glEnableClientState(GL_VERTEX_ARRAY)
    # obtained from the shapefile (array element)
    arrayList = self.map.getArrayList()
    if len(arrayList) > 0:
        for objMap in arrayList:
            self.drawPolyline(objMap)

    def drawPolyline(self, inArray):
        #glColor(1.0, 0.0, 0.0)
        glVertexPointerf(inArray)
        glDrawArrays(GL_LINE_STRIP, 0, len(inArray))

- Do you have any hints on how to improve the performance dramatically and make it usable? I tried several options but none worked out. For instance, I tried to replace the Python lists with numpy arrays but the performance was not significantly changed.

- Do I need to execute the OpenGL code in C? Although I could do this, I still need to use Python as a scripting language due to other dependencies of the implementation.

Thank you in advance for your help!

Best Regards,

Pedro
From: Jon N. <jo...@gm...> - 2011-01-09 17:55:03
|
Hi all,

I'm having problems using the function glGenVertexArrays, despite the fact that I've correctly imported OpenGL.GL.ARB.vertex_array_object. Here's the output I get from my (simple) program:

    Traceback (most recent call last):
      File "cube.py", line 24, in <module>
        batch.begin(GL_TRIANGLES, 36)
      File "cube.py", line 20, in begin
        glGenVertexArrays(1, self.vertexArrayObject)
      File "c:\Python26\lib\site-packages\OpenGL\platform\baseplatform.py", line 340, in __call__
        self.__name__, self.__name__,
    OpenGL.error.NullFunctionError: Attempt to call an undefined function glGenVertexArrays,
    check for bool(glGenVertexArrays) before calling

If it makes any difference, I'm on Windows 7. Any ideas?

Thank you,
Jon Nye
From: Alejandro S. <as...@gm...> - 2011-01-05 01:31:34
|
Hello Andrew, On Tue, Jan 4, 2011 at 7:31 PM, Merrill, Andrew <Mer...@ca...> wrote: > I am attempting to use glInterleavedArrays, and encountering intermittent > access violation exceptions. When I run my program, it will run fine most > of the time. However, some of the time it will immediately exit with the > following traceback: > > > > Traceback (most recent call last): > > File "C:\Users\merrilla\Documents\Python\3d\crash-test.py", line 67, in > <module> > > animate() > > File "C:\Users\merrilla\Documents\Python\3d\crash-test.py", line 62, in > animate > > shape.draw() > > File "C:\Users\merrilla\Documents\Python\3d\crash-test.py", line 49, in > draw > > glInterleavedArrays(GL_C3F_V3F, 0, None) > > File "C:\Python26\lib\site-packages\OpenGL\latebind.py", line 45, in > __call__ > > return self._finalCall( *args, **named ) > > File "C:\Python26\lib\site-packages\OpenGL\wrapper.py", line 1217, in > wrapperCall > > result = self.wrappedOperation( *cArguments ) > > WindowsError: exception: access violation writing 0x05F89C5C > > > > I get this error about 10-20% of the times that I run the program. If the > error occurs, it occurs immediately; I’ve never seen the program run for a > while and then produce this error. > > > > I’m using PyOpenGl 3.0.1 with Python 2.6.6 on a Windows 7 (32 bit) system. > The OpenGL version is 3.0.0. > > > > I’ve attached the program source code. > > > > If you have any ideas about what is happening here, I would greatly > appreciate any help you can provide. > >From reading your mail I got the feeling you might be provoking a segmentation fault somewhere. I've taken a look at your code and there are several things that could be causing a problem like this. Now, I wasn't able to reproduce the crash (Mac OS X 10.6.5, PyOpenGL 3.0.1), but you may want to check this things just in case. 1) When calling glInterleavedArrays, you are supplying "None" as the pointer to the beginning of the data. 
This might be OK, but what you really want is a pointer equal to 0x0. I would suggest importing ctypes and using c_void_p(0) instead. 2) I noticed you are using "dataArray = OpenGL.arrays.lists.ListHandler().asArray(dataList, GL_FLOAT)" to create a linear memory array with the contents of the python list with the data. You might want to try using numpy arrays instead, as in "dataArray = numpy.array(dataList, dtype=numpy.float32)", and then casting the array to void*. 3) Finally, I ran into issues with VBOs myself using PyOpenGL 3.0.0. This was, apparently, because of a bug in that version. I would suggest you update to 3.0.1 and see if the issue persists. I've passed the Python interpreter through Valgrind while running your code and it didn't report any access violations, so this makes me lean towards #3, but I would also check the other two. Good luck! Alejandro.- ------------------------------------------------------------------------------ > Learn how Oracle Real Application Clusters (RAC) One Node allows customers > to consolidate database storage, standardize their database environment, > and, > should the need arise, upgrade to a full multi-node Oracle RAC database > without downtime or disruption > http://p.sf.net/sfu/oracle-sfdevnl > _______________________________________________ > PyOpenGL Homepage > http://pyopengl.sourceforge.net > _______________________________________________ > PyOpenGL-Users mailing list > PyO...@li... > https://lists.sourceforge.net/lists/listinfo/pyopengl-users > > -- Alejandro Segovia Azapian Director, Algorithmia: Visualization & Acceleration http://web.algorithmia.net |
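[Editor's sketch of suggestion #2 above — the vertex data here is made up for illustration, not from Andrew's attachment: build the interleaved GL_C3F_V3F data as a contiguous numpy float32 array, which PyOpenGL can hand to GL directly.]

```python
import numpy as np

# Hypothetical GL_C3F_V3F data: 3 color floats then 3 position floats
# per vertex, interleaved in one flat list (3 vertices here).
dataList = [
    1.0, 0.0, 0.0,  0.0, 0.0, 0.0,
    0.0, 1.0, 0.0,  1.0, 0.0, 0.0,
    0.0, 0.0, 1.0,  0.0, 1.0, 0.0,
]
dataArray = np.array(dataList, dtype=np.float32)

# A contiguous float32 block is what the C side expects. In
# client-array mode PyOpenGL accepts the array itself, e.g.
#   glInterleavedArrays(GL_C3F_V3F, 0, dataArray)
# whereas with a bound VBO one would pass an explicit zero offset:
#   glInterleavedArrays(GL_C3F_V3F, 0, ctypes.c_void_p(0))
```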
From: Christopher B. <Chr...@no...> - 2010-12-23 19:47:14
|
On 12/22/10 5:25 PM, Derakon wrote: > I did eventually get the zoom factor right. Turns out it's determined > by the ratio of the size of the tile in pixels (512) and the size of > the tile in OpenGL units (1000). Which makes sense; it just required a > bunch of fiddling to sort out. yup -- seems simple in retrospect, but hard to get right. > Yeah, our use cases are basically 1) looking at details of tiles we've > imaged earlier, and 2) zooming way out so we can pan around quickly to > zoom in on other tiles. So there's no real need for intermediate zoom > levels. well, it's a matter of how big a range of zoom you need to support. > I've set up the megatiles to be able to hold about 150 normal > tiles (so a ~10x decrease in resolution) and it looks just fine. I'm sure it will look fine and, with GL's texture scaling built in, perform fine. In our case, we are doing a set of tiles for each factor-of-two zoom level. That's probably a lot more than necessary, however. We did that because that's a standard way to tile map images. In the case we have currently implemented, they're just re-scaled versions of the same image, so there's no good reason to do it. However, it is common with maps to have the maps rendered with different levels of detail at different zoom levels, so you really do need all those layers. See Google Maps for an example. It sounds like you have a fine solution for your problem. > I'm not certain I understand your question. The function I'm > describing here is the "we've just taken a new image with our camera; > now we need to add it to the mosaic" function. That also implies that > we need to add it to any appropriate megatiles, so that later when we > draw the megatile, the new image will be there. I think it's similar -- except that you have full resolution tiles, and a 1/10 resolution tile, whereas we have a bigger set -- a set of tiles at full resolution, one at 1/2 resolution, 1/4, etc, all the way down to whatever gives us a single 256x256 tile. 
> With the old behavior, we were rendering each tile that intersected > the view every frame. This meant that when you zoomed far out, we had > to iterate over thousands of tiles and render them, which turned out > to be costly. The goal is to avoid having to tell OpenGL to render > large numbers of textures, yup. > Boy howdy, you ain't kidding. At least it works now. Good work -- it's satisfying when it does work right! -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
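[Editor's note: the factor-of-two tile pyramid described above can be sketched as a level-selection function. This is a toy sketch, not maproom's actual code; the names and the zoom convention are made up.]

```python
import math

def pick_tile_level(zoom, max_level):
    """Choose a tile pyramid level for a given zoom factor.

    Level 0 is full resolution and each level halves it, as in the
    standard map-tile scheme. `zoom` is on-screen pixels per full-res
    texel (1.0 = 1:1, smaller = zoomed further out).
    """
    if zoom >= 1.0:
        return 0
    level = int(math.floor(-math.log2(zoom)))
    return min(level, max_level)
```

At zoom 0.1 this picks level 3 (1/8 resolution), so GL's texture filtering never has to downscale by more than another factor of two.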
From: Derakon <de...@gm...> - 2010-12-23 01:25:49
|
I did eventually get the zoom factor right. Turns out it's determined by the ratio of the size of the tile in pixels (512) and the size of the tile in OpenGL units (1000). Which makes sense; it just required a bunch of fiddling to sort out. Thanks for the help all. More stuff inline. On Wed, Dec 22, 2010 at 10:29 AM, Christopher Barker <Chr...@no...> wrote: > Sorry for the slow response -- lots going on here. > > Derakon wrote: >> The basic flow of my code is: >> >> Main render function: >> if zoomed far out: >> render each megatile in view >> else: >> render each individual tile in view > > That's pretty much what we do, except we have many levels of zoom -- it > looks like you have only two. Yeah, our use cases are basically 1) looking at details of tiles we've imaged earlier, and 2) zooming way out so we can pan around quickly to zoom in on other tiles. So there's no real need for intermediate zoom levels. I've set up the megatiles to be able to hold about 150 normal tiles (so a ~10x decrease in resolution) and it looks just fine. > >> Add new tile function: >> find megatile(s) that this tile overlaps >> render tile to megatiles > > ahh -- that is a bit different. rather than rendering each tile each time, > you render to a megatile, then just render that. What is the advantage of > that, why not just render the tiles directly? I suppose that does take a > Python loop, but I don't think that's very slow in this case. Our tiles are > 256X256, so for a 1000X1000 screen, that's 16 tiles --not many at all. I'm not certain I understand your question. The function I'm describing here is the "we've just taken a new image with our camera; now we need to add it to the mosaic" function. That also implies that we need to add it to any appropriate megatiles, so that later when we draw the megatile, the new image will be there. With the old behavior, we were rendering each tile that intersected the view every frame. 
This meant that when you zoomed far out, we had to iterate over thousands of tiles and render them, which turned out to be costly. The goal is to avoid having to tell OpenGL to render large numbers of textures, which is accomplished by pre-rendering to the megatiles. Then each frame, we draw the megatiles to the screen, instead of drawing the normal tiles. When we're zoomed in close enough to see details, then we still render the normal tiles directly, but that's cheap because we can cull out all the tiles that don't intersect the view. > >> Something is going wrong in the "render to megatile" code, since when >> I render the megatile later in the main render function, its scale is >> off. But I can't figure out why. > > It is a real pain to get the math right for that kind of thing. All I can > suggest is some really tedious debugging/print statements, and check the > math at each step. > Boy howdy, you ain't kidding. At least it works now. > -Chris > -Chris |
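[Editor's sketch of the zoom-factor relationship Derakon describes, using the numbers from the thread (512-pixel tiles spanning 1000 canvas units); the function name is made up.]

```python
TILE_PIXELS = 512     # tile size in texture pixels (from the thread)
TILE_UNITS = 1000.0   # tile size in OpenGL/canvas units (from the thread)

def canvas_to_texels(x_canvas, tile_origin):
    """Map a canvas coordinate into the texel space of a tile whose
    left edge sits at tile_origin; the scale factor is simply the
    pixels-per-unit ratio of the tile."""
    return (x_canvas - tile_origin) * (TILE_PIXELS / TILE_UNITS)
```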
From: Christopher B. <Chr...@no...> - 2010-12-22 18:29:23
|
Sorry for the slow response -- lots going on here. Derakon wrote: > I took a look at your code, but it looks like you're doing this all > "manually" while I'm trying to leverage OpenGL to do the work for me. I'm not sure that's true. We do break the process down into two stages: 1) generate the tiles at different zoom levels. 2) render them. 1) is done without using OpenGL at all. Perhaps we should have used it, but we've had in mind being able to use different rendering back-ends, for instance for PDF generation or printing, so we wanted to be able to have the tiles without any OpenGL. We've also separated the generation of tiles from rendering because we may well want to use tiles that are pre-generated, such as from OpenStreetMap and the like. 2) The rendering stage is using OpenGL as much as I know how. For a given zoom level, we need to determine which scale of tiles to use, which specific tiles to use, how they are scaled to OpenGL coords, then pass them in. Isn't that what you are doing? > The basic flow of my code is: > > Main render function: > if zoomed far out: > render each megatile in view > else: > render each individual tile in view That's pretty much what we do, except we have many levels of zoom -- it looks like you have only two. > Add new tile function: > find megatile(s) that this tile overlaps > render tile to megatiles ahh -- that is a bit different. rather than rendering each tile each time, you render to a megatile, then just render that. What is the advantage of that, why not just render the tiles directly? I suppose that does take a Python loop, but I don't think that's very slow in this case. Our tiles are 256X256, so for a 1000X1000 screen, that's 16 tiles --not many at all. > Something is going wrong in the "render to megatile" code, since when > I render the megatile later in the main render function, its scale is > off. But I can't figure out why. It is a real pain to get the math right for that kind of thing. 
All I can suggest is some really tedious debugging/print statements, and check the math at each step. good luck, -Chris > -Chris > > On Thu, Dec 16, 2010 at 4:31 PM, Christopher Barker > <Chr...@no...> wrote: >> On 12/16/10 4:26 PM, Christopher Barker wrote: >>> Sorry, no time to try to take a look at your code, but we have a similar >>> system in maproom, for rendering tiles at different zoom levels, maybe it >>> will help you figure out your issue: >>> >>> https://bitbucket.org/dhelfman/maproom/wiki/Home >> I think this is the relevant file: >> >> https://bitbucket.org/dhelfman/maproom/src/668ea464624e/maproomlib/plugin/Tile_set_layer.py -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
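[Editor's sketch: the "which specific tiles to use" step described above amounts to intersecting the view rectangle with a tile grid. A minimal sketch assuming axis-aligned, fixed-size tiles; this is not the actual maproom code.]

```python
import math

def tiles_in_view(view_rect, tile_size):
    """Indices (i, j) of grid tiles intersecting view_rect.

    view_rect is (xmin, ymin, xmax, ymax) in world units; tile (i, j)
    covers [i*tile_size, (i+1)*tile_size) in x and likewise in y.
    """
    xmin, ymin, xmax, ymax = view_rect
    eps = 1e-9  # keep an exactly-touching right/top edge out of the next tile
    i_range = range(int(math.floor(xmin / tile_size)),
                    int(math.floor((xmax - eps) / tile_size)) + 1)
    j_range = range(int(math.floor(ymin / tile_size)),
                    int(math.floor((ymax - eps) / tile_size)) + 1)
    return [(i, j) for i in i_range for j in j_range]
```

With 256-unit tiles and a 1000x1000 view this yields the 16 tiles mentioned above.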
From: Utku A. <utk...@gm...> - 2010-12-20 14:51:45
|
Hello Dirk, You are right, I am using a frame buffer with a texture as depth attachment. I've used ATI's GPU Studio to copy out the actual texture that holds the depth values; here it is: http://img228.imageshack.us/img228/3865/dc98pstexture0t.png Z-fighting should take place in the compare phase of the shadow-map shader, in the 2nd pass I think; I am not sure what could produce such a texture. On Mon, Dec 20, 2010 at 4:20 PM, Dirk Reiners <dir...@gm...> wrote: > > Hi Utku, > > are you sure this is a depth buffer? It looks more like a framebuffer to me, and > the artifacts look a lot like z-fighting, where you have two polygons at almost > the same positions. > > Just my $.02 > > Dirk > > > ------------------------------------------------------------------------------ > Lotusphere 2011 > Register now for Lotusphere 2011 and learn how > to connect the dots, take your collaborative environment > to the next level, and enter the era of Social Business. > http://p.sf.net/sfu/lotusphere-d2d > _______________________________________________ > PyOpenGL Homepage > http://pyopengl.sourceforge.net > _______________________________________________ > PyOpenGL-Users mailing list > PyO...@li... > https://lists.sourceforge.net/lists/listinfo/pyopengl-users > |
From: Dirk R. <dir...@gm...> - 2010-12-20 14:20:58
|
Hi Utku, are you sure this is a depth buffer? It looks more like a framebuffer to me, and the artifacts look a lot like z-fighting, where you have two polygons at almost the same positions. Just my $.02 Dirk |
From: Utku A. <utk...@gm...> - 2010-12-20 13:53:39
|
Hello, I've been working on shadow maps using GLSL, but the first-pass render result, which is a depth-only render, is pretty much unusable; it almost looks like a wireframe projection of the scene from the light's point of view. Here is the image: http://img80.imageshack.us/img80/2889/95488546.png Since my far and near planes are not set unreasonably far apart, I don't think depth buffer precision should be a problem. The depth texture is similar to the shadow, so it is not a self-shadowing issue; the first pass seems to be the problem. I have put together a test case whose only dependency is PyOpenGL; the shaders are inside the file (Shader::build method).

# shadow mapping test
# utkualtinkaya at gmail
# shader is from http://www.fabiensanglard.net/shadowmapping/index.php
import sys
import math
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *
from OpenGL.GL.shaders import *
from OpenGL.GL.framebufferobjects import *

class Camera:
    def __init__(self):
        self.rotx, self.roty = math.pi/4, math.pi/4
        self.distance = 100
        self.moving = False
        self.ex, self.ey = 0, 0
        self.size = (800, 600)

    def load_matrices(self):
        glViewport(0, 0, *self.size)
        y = math.cos(self.roty) * self.distance
        x = math.sin(self.roty) * math.cos(self.rotx) * self.distance
        z = math.sin(self.roty) * math.sin(self.rotx) * self.distance
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        gluPerspective(45.0, self.size[0]/float(self.size[1]), 1, 1000)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        gluLookAt(x, y, z, 0, 0, 0, 0, 1, 0)

    def on_mouse_button(self, b, s, x, y):
        self.moving = not s
        self.ex, self.ey = x, y
        if b in [3, 4]:
            dz = (1 if b == 3 else -1)
            self.distance += self.distance/15.0 * dz

    def on_mouse_move(self, x, y, z=0):
        if self.moving:
            self.rotx += (x-self.ex) / 300.0
            self.roty += -(y-self.ey) / 300.0
            self.ex, self.ey = x, y

    def set_size(self, w, h):
        self.size = w, h

class Shader():
    def __init__(self):
        self.is_built = False
        self.uniforms = {}

    def build(self):
        self.program = compileProgram(
            compileShader('''
                uniform mat4 camMatrix;
                uniform mat4 shadowMatrix;
                varying vec4 depthProjection;
                uniform bool useShadow;
                void main() {
                    gl_Position = camMatrix * gl_ModelViewMatrix * gl_Vertex;
                    depthProjection = shadowMatrix * gl_ModelViewMatrix * gl_Vertex;
                    gl_FrontColor = gl_Color;
                }
            ''', GL_VERTEX_SHADER),
            compileShader('''
                varying vec4 depthProjection;
                uniform sampler2D shadowMap;
                uniform bool useShadow;
                void main() {
                    float shadow = 1.0;
                    if (useShadow) {
                        vec4 shadowCoord = depthProjection / depthProjection.w;
                        // shadowCoord.z -= 0.0003;
                        float distanceFromLight = texture2D(shadowMap, shadowCoord.st).z;
                        if (depthProjection.w > 0.0)
                            shadow = distanceFromLight < shadowCoord.z ? 0.5 : 1.0;
                    }
                    gl_FragColor = shadow * gl_Color;
                }
            ''', GL_FRAGMENT_SHADER),)
        self.is_built = True
        self.uniforms['camMatrix'] = glGetUniformLocation(self.program, 'camMatrix')
        self.uniforms['shadowMatrix'] = glGetUniformLocation(self.program, 'shadowMatrix')
        self.uniforms['shadowMap'] = glGetUniformLocation(self.program, 'shadowMap')
        self.uniforms['useShadow'] = glGetUniformLocation(self.program, 'useShadow')
        print self.uniforms

    def use(self):
        if not self.is_built:
            self.build()
        glUseProgram(self.program)

class Test:
    def __init__(self):
        glutInit(sys.argv)
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_ALPHA | GLUT_DEPTH)
        glutInitWindowSize(800, 600)
        glutInitWindowPosition(1120/2, 100)
        self.window = glutCreateWindow("Shadow Test")
        self.cam = Camera()
        self.light = Camera()
        self.cam.set_size(800, 600)
        self.light.set_size(2048, 2048)
        self.light.distance = 100
        self.shader = Shader()
        self.initialized = False

    def setup(self):
        self.initialized = True
        glClearColor(0, 0, 0, 1.0)
        glDepthFunc(GL_LESS)
        glEnable(GL_DEPTH_TEST)
        self.fbo = glGenFramebuffers(1)
        self.shadowTexture = glGenTextures(1)
        glBindFramebuffer(GL_FRAMEBUFFER, self.fbo)
        w, h = self.light.size
        glActiveTexture(GL_TEXTURE5)
        glBindTexture(GL_TEXTURE_2D, self.shadowTexture)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, None)
        glDrawBuffer(GL_NONE)
        glReadBuffer(GL_NONE)
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, self.fbo, 0)
        FBOstatus = glCheckFramebufferStatus(GL_FRAMEBUFFER)
        if FBOstatus != GL_FRAMEBUFFER_COMPLETE:
            print("GL_FRAMEBUFFER_COMPLETE_EXT failed, CANNOT use FBO\n")
        glBindFramebuffer(GL_FRAMEBUFFER, 0)
        #glActiveTexture(GL_TEXTURE0)

    def draw(self):
        glPushMatrix()
        glTranslate(0, 10, 0)
        glColor4f(0, 1, 1, 1)
        glutSolidCube(5)
        glPopMatrix()
        glPushMatrix()
        glColor4f(0.5, 0.5, .5, 1)
        glScale(100, 1, 100)
        glutSolidCube(1)
        glPopMatrix()

    def apply_camera(self, cam):
        cam.load_matrices()
        model_view = glGetDoublev(GL_MODELVIEW_MATRIX)
        projection = glGetDoublev(GL_PROJECTION_MATRIX)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glMultMatrixd(projection)
        glMultMatrixd(model_view)
        glUniformMatrix4fv(self.shader.uniforms['camMatrix'], 1, False,
                           glGetFloatv(GL_MODELVIEW_MATRIX))
        glLoadIdentity()

    def shadow_pass(self):
        glUniform1i(self.shader.uniforms['useShadow'], 0)
        glBindFramebuffer(GL_FRAMEBUFFER, self.fbo)
        glClear(GL_DEPTH_BUFFER_BIT)
        glCullFace(GL_FRONT)
        self.apply_camera(self.light)
        self.draw()
        glBindFramebuffer(GL_FRAMEBUFFER, 0)

    def final_pass(self):
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        self.light.load_matrices()
        model_view = glGetDoublev(GL_MODELVIEW_MATRIX)
        projection = glGetDoublev(GL_PROJECTION_MATRIX)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        bias = [0.5, 0.0, 0.0, 0.0,
                0.0, 0.5, 0.0, 0.0,
                0.0, 0.0, 0.5, 0.0,
                0.5, 0.5, 0.5, 1.0]
        glLoadMatrixd(bias)
        glMultMatrixd(projection)
        glMultMatrixd(model_view)
        glUniformMatrix4fv(self.shader.uniforms['shadowMatrix'], 1, False,
                           glGetFloatv(GL_MODELVIEW_MATRIX))
        glActiveTexture(GL_TEXTURE5)
        glBindTexture(GL_TEXTURE_2D, self.shadowTexture)
        glUniform1i(self.shader.uniforms['shadowMap'], 5)
        glUniform1i(self.shader.uniforms['useShadow'], 1)
        self.apply_camera(self.cam)
        glLoadIdentity()
        glCullFace(GL_BACK)
        self.draw()

    def render(self):
        if not self.initialized:
            self.setup()
        self.shader.use()
        self.shadow_pass()
        self.final_pass()
        glutSwapBuffers()

    def mouse_move(self, *args):
        self.cam.on_mouse_move(*args)
        self.light.on_mouse_move(*args)

    def mouse_button(self, b, *args):
        if b == 0:
            self.light.on_mouse_button(b, *args)
        else:
            self.cam.on_mouse_button(b, *args)

    def main(self):
        glutDisplayFunc(self.render)
        glutIdleFunc(self.render)
        glutMouseFunc(self.mouse_button)
        glutMotionFunc(self.mouse_move)
        glutReshapeFunc(self.cam.set_size)
        #self.setup()
        glutMainLoop()

if __name__ == '__main__':
    test = Test()
    test.main()
|
From: Derakon <de...@gm...> - 2010-12-17 21:25:02
|
I took a look at your code, but it looks like you're doing this all "manually" while I'm trying to leverage OpenGL to do the work for me. The basic flow of my code is: Main render function: if zoomed far out: render each megatile in view else: render each individual tile in view Add new tile function: find megatile(s) that this tile overlaps render tile to megatiles Something is going wrong in the "render to megatile" code, since when I render the megatile later in the main render function, its scale is off. But I can't figure out why. -Chris On Thu, Dec 16, 2010 at 4:31 PM, Christopher Barker <Chr...@no...> wrote: > On 12/16/10 4:26 PM, Christopher Barker wrote: >> >> Sorry, no time to try to take a look at your code, but we have a similar >> system in maproom, for rendering tiles at different zoom levels, maybe it >> will help you figure out your issue: >> >> https://bitbucket.org/dhelfman/maproom/wiki/Home > > I think this is the relevant file: > > https://bitbucket.org/dhelfman/maproom/src/668ea464624e/maproomlib/plugin/Tile_set_layer.py > > -Chris > > > -- > Christopher Barker, Ph.D. > Oceanographer > > Emergency Response Division > NOAA/NOS/OR&R (206) 526-6959 voice > 7600 Sand Point Way NE (206) 526-6329 fax > Seattle, WA 98115 (206) 526-6317 main reception > > Chr...@no... > |
From: Christopher B. <Chr...@no...> - 2010-12-17 00:31:59
|
On 12/16/10 4:26 PM, Christopher Barker wrote: > Sorry, no time to try to take a look at your code, but we have a similar > system in maproom, for rendering tiles at different zoom levels, maybe it > will help you figure out your issue: > > https://bitbucket.org/dhelfman/maproom/wiki/Home I think this is the relevant file: https://bitbucket.org/dhelfman/maproom/src/668ea464624e/maproomlib/plugin/Tile_set_layer.py -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Christopher B. <Chr...@no...> - 2010-12-17 00:26:29
|
On 12/16/10 3:15 PM, Derakon wrote: > a microscope slide. These images are tiled in a mosaic viewer, and the > user can pan and zoom about in the viewer. Once we get a few thousand > tiles, the viewer starts bogging down because it has to try to draw > all of those tiles when the user is zoomed out. I want to modify the > viewer to pre-render the tiles using framebuffers at a low level of > detail (packing many tiles onto each pre-rendered megatile); then, > when the user zooms out, I can switch from rendering each tile > individually to using the pre-rendered megatiles with no apparent loss > of detail but a reduction in orders of magnitude of the number of > textures that OpenGL has to deal with. > > Unfortunately, I'm running into some scaling and offsetting issues -- Sorry, no time to try to take a look at your code, but we have a similar system in maproom, for rendering tiles at different zoom levels, maybe it will help you figure out your issue: https://bitbucket.org/dhelfman/maproom/wiki/Home -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no... |
From: Derakon <de...@gm...> - 2010-12-16 23:25:28
|
Er, my mistake: the megatiles render too *big* and I have to scale them down. Each one renders in the right location (the lower-left corner of the megatile is the same location in "canvas space" and "megatile space") but is too large, so other pixels don't map properly. -Chris On Thu, Dec 16, 2010 at 3:15 PM, Derakon <de...@gm...> wrote: > After my previous success in getting framebuffers to run at all in my > program, I spent some time getting them to work *properly*...and I'm > stumped. To recap: > > I have a program that uses a camera to take images of the contents of > a microscope slide. These images are tiled in a mosaic viewer, and the > user can pan and zoom about in the viewer. Once we get a few thousand > tiles, the viewer starts bogging down because it has to try to draw > all of those tiles when the user is zoomed out. I want to modify the > viewer to pre-render the tiles using framebuffers at a low level of > detail (packing many tiles onto each pre-rendered megatile); then, > when the user zooms out, I can switch from rendering each tile > individually to using the pre-rendered megatiles with no apparent loss > of detail but a reduction in orders of magnitude of the number of > textures that OpenGL has to deal with. > > Unfortunately, I'm running into some scaling and offsetting issues -- > the megatiles render too small; I need to scale them up by a factor of > about 1.958 to get them close to the right size. Obviously they should > be rendering in exactly the right spot, but since I don't know why > their scale is off I don't know what the proper fix is (the 1.958 is > just a hack to get them approximately right). > > I've uploaded a standalone app that shows off the problem here: > > http://derakon.dyndns.org/~chriswei/temp/mosaicapp.tgz > > It depends on PyOpenGL, wx, and numpy. I'm using the framebuffers in > GL.EXT here (old version of OpenGL). > > Controls: > Left-click: prints out location of click in canvas space. 
> Alt-left-click: pan > Shift-alt-left-click: zoom > Right-click: spawn a new tile > > The megatiles include some debugging rendering; every 400 units of > canvas space, they render some text marking the location, as well as a > single pixel marking the exact location. If you zoom in on some text > you should see the point to the left of the left parenthesis; you can > click on that to compare what the megatile thinks the location is to > what the canvas knows the location is. > > As far as code is concerned, mosaicViewer.GLViewer.OnPaint(), > mosaicTile.MegaTile.prerenderTiles(), and mosaicTile.MegaTile.render() > should have everything; the rest is just infrastructure. I've uploaded > just those three functions to a pastebin here: > > http://paste.ubuntu.com/544652/ > > This is a mishmash of very old code and code I've written myself; > sorry about the style clash. I'm doing my best to clean things up as I > come to understand what they are. > > If any of you have any ideas about what could be going wrong, I'd love > to hear them. > > -Chris > |
From: Derakon <de...@gm...> - 2010-12-16 23:15:23
|
After my previous success in getting framebuffers to run at all in my program, I spent some time getting them to work *properly*...and I'm stumped. To recap: I have a program that uses a camera to take images of the contents of a microscope slide. These images are tiled in a mosaic viewer, and the user can pan and zoom about in the viewer. Once we get a few thousand tiles, the viewer starts bogging down because it has to try to draw all of those tiles when the user is zoomed out. I want to modify the viewer to pre-render the tiles using framebuffers at a low level of detail (packing many tiles onto each pre-rendered megatile); then, when the user zooms out, I can switch from rendering each tile individually to using the pre-rendered megatiles with no apparent loss of detail but a reduction in orders of magnitude of the number of textures that OpenGL has to deal with. Unfortunately, I'm running into some scaling and offsetting issues -- the megatiles render too small; I need to scale them up by a factor of about 1.958 to get them close to the right size. Obviously they should be rendering in exactly the right spot, but since I don't know why their scale is off I don't know what the proper fix is (the 1.958 is just a hack to get them approximately right). I've uploaded a standalone app that shows off the problem here: http://derakon.dyndns.org/~chriswei/temp/mosaicapp.tgz It depends on PyOpenGL, wx, and numpy. I'm using the framebuffers in GL.EXT here (old version of OpenGL). Controls: Left-click: prints out location of click in canvas space. Alt-left-click: pan Shift-alt-left-click: zoom Right-click: spawn a new tile The megatiles include some debugging rendering; every 400 units of canvas space, they render some text marking the location, as well as a single pixel marking the exact location. 
If you zoom in on some text you should see the point to the left of the left parenthesis; you can click on that to compare what the megatile thinks the location is to what the canvas knows the location is. As far as code is concerned, mosaicViewer.GLViewer.OnPaint(), mosaicTile.MegaTile.prerenderTiles(), and mosaicTile.MegaTile.render() should have everything; the rest is just infrastructure. I've uploaded just those three functions to a pastebin here: http://paste.ubuntu.com/544652/ This is a mishmash of very old code and code I've written myself; sorry about the style clash. I'm doing my best to clean things up as I come to understand what they are. If any of you have any ideas about what could be going wrong, I'd love to hear them. -Chris |
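[Editor's note, not a diagnosis of the bug above, just a sketch of the bookkeeping such a prerender pass needs (all names hypothetical): the FBO pass's ortho projection must cover exactly the megatile's canvas rectangle while the viewport matches its texture size; any mismatch between the two shows up later as a stray scale factor on everything drawn into the megatile.]

```python
def prerender_params(origin, span_units, size_pixels):
    """Ortho bounds and viewport for rendering canvas content into a
    square megatile texture.

    origin is the megatile's lower-left corner in canvas units,
    span_units its width/height in canvas units, and size_pixels its
    texture resolution. Returns ((left, right, bottom, top), (w, h))
    intended for glOrtho and glViewport respectively; with these
    matched, no extra scale factor should be needed when the megatile
    is later drawn over the same canvas rectangle.
    """
    x0, y0 = origin
    ortho = (x0, x0 + span_units, y0, y0 + span_units)
    viewport = (size_pixels, size_pixels)
    return ortho, viewport
```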
From: Ian M. <geo...@gm...> - 2010-12-08 02:56:58
|
Hi, I've never seen this behavior before. Like ever. So I'm confused. You can look at the source, as you're doing. You may also want to check the way you do your NURBS setup. It occurs to me that if something is happening in that call, it may be because it's not getting data in an anticipated way. In all honesty, I haven't done NURBS rendering in years, so I'm sorry, but I can't be much help there. Ian |