Re: [PyOpenGL-Users] Pyopengl - slow performance using
From: Pedro M. <pm...@gm...> - 2011-01-14 20:23:08
Hi all,

First of all, thank you for all your comments and fast replies. Second, I am sorry for the late reply, but I was away for the day. I should also add that I am not a professional programmer and was not aware of how some of these mechanisms work. To avoid spamming the mailing list I will try to address all the comments you have sent in a single e-mail.

a) Numpy arrays
---------------
Before starting this thread I tried to make an implementation using numpy arrays, but I guess I was not successful. As many of you point out, this should be the way to go for managing GIS data. In your comments you state that I could make a "big" numpy array containing all the road segments. That is fine for me, because this is just map data that I will not be changing (it is static). But is anyone sure that the proposed strategy of using NaN to separate different road segments is viable?

The following code reads one line of the shapefile and returns a list of vertices for a given road segment:

    lineVert = self.shp.read_object(i).vertices()

A print of lineVert gives, for example:

    [[(-45005.21, 168113.25), (-45003.77, 168112.71), (-44995.30, 168109.057)]]

How can I convert this lineVert to a numpy array and build a "big" array containing all lineVerts? Is there an upper limit on the number of vertices a vertex array can contain?

b) Python list creation
-----------------------
I load the shapefile and create the list of lists of tuples once, at application initialization. Since this is static data, doing it once at startup is enough; I do not allow the user to load a new map at runtime, and that is not interesting for my application anyway.

c) Profiling
------------
I was not aware of the RunSnakeRun tool for profiling Python. Thank you for sending this -- it was useful!
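For what it's worth, the "big array with NaN separators" idea from section a) can be sketched like this with plain NumPy (the segment values below are made up to mimic the lineVert output; whether GL drivers reliably break a line strip at NaN vertices is exactly the open question, so it needs testing on the target hardware):

```python
import numpy as np

# Hypothetical road segments, in the shape returned by
# shp.read_object(i).vertices(): lists of (x, y) tuples.
segments = [
    [(-45005.21, 168113.25), (-45003.77, 168112.71), (-44995.30, 168109.057)],
    [(-44990.00, 168100.00), (-44985.50, 168095.20)],
]

# Build one big (N, 2) array with a NaN row between segments, so the
# whole map could be submitted in a single draw call instead of one
# call per segment.
separator = np.full((1, 2), np.nan)
parts = []
for seg in segments:
    parts.append(np.asarray(seg, dtype=np.float64))
    parts.append(separator)
big = np.concatenate(parts[:-1])  # drop the trailing separator

print(big.shape)  # 3 + 1 (NaN row) + 2 vertices -> (6, 2)
```

On the size question: the array itself is only limited by memory, though very large maps are usually split into chunks for culling anyway.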
Analysing the output, I found that the majority of the time (almost 11 seconds, or 60% of the total) was spent in the function "as...@li...", which is part of python/OpenGL/Arrays. I think this function does the conversion that has been mentioned here. The second most important cost was due to "wra...@wr...". If anyone is interested I can send a copy of the profile file.

d) VBO
------
The next step in the development of the application (after the map layer is drawn smoothly in OpenGL) is to draw a series of routes on top of the (static) map. In this case, however, the data can be more dynamic, because the user can select the time interval of the data that is shown. I will definitely consider using VBOs for that. Does anyone know a good tutorial on using VBOs in dynamic scenarios?

e) Number of vertices per objMap (vertex array)
-----------------------------------------------
I checked, and there are:
- 7 vertices per vertex array on average
- 2 vertices in the smallest vertex array
- 272 vertices in the largest vertex array
So the average is pretty low, and making so many calls to OpenGL routines is certainly not good.

f) maproom
----------
Thank you for sending the link to maproom. I will install it and check how it works. The API contains lots of useful functionality! Collaboration seems a good idea; let's see how things evolve.

FYI: I recently found that Quantum GIS has Python bindings as well. You can find more information at http://www.qgis.org/pyqgis-cookbook/

Thank you!
Pedro

-------- Original Message --------
> Date: Thu, 13 Jan 2011 20:46:45 -0700
> From: Ian Mallett <geo...@gm...>
> To: Chris Barker <Chr...@no...>
> CC: pyopengl-users <pyo...@li...>
> Subject: Re: [PyOpenGL-Users] Pyopengl - slow performance using
>
> On Thu, Jan 13, 2011 at 7:45 PM, Chris Barker <Chr...@no...> wrote:
>
> > On 1/13/2011 3:17 PM, Ian Mallett wrote:
> > > > doing this.
> > > > Python lists are just linked lists,
> > >
> > > no, they are not -- they are internally regular C arrays.
> > >
> > > But they are resizable, which implies they are either linked lists or
> > > array lists--because I doubt the pointers are copied over to a brand-new
> > > array just big enough each time a single element is added or removed.
> >
> > A bit OT, but they handle re-sizing by over-allocating when appended to.
> > So most of the time you can append to a list without any memory
> > allocation or copying, but as it grows, it does need to do that once in
> > a while.
>
> Right--this data structure is called an array list.
>
> You're absolutely correct that Python users should avoid it for very high
> performance. I usually stick with NumPy arrays for all but the most dead
> simple stuff. They also tend to be compatible with other packages--in
> particular, I've used them to tie very nicely into PyOpenGL's VBO class:
>
> my_vbo = vbo.VBO(numpy_1x3_array, 'f')
>
> In fact, they seemed to be the only thing that actually did work. And,
> speed aside, NumPy provides some truly great functionality when it comes
> to working with arrays--like element-wise operations, anyone?
>
> Ian
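To give a concrete taste of the element-wise operations Ian mentions: the whole map layer can be shifted and scaled without any Python-level loop over vertices (the coordinates and the 0.001 scale below are made up for illustration):

```python
import numpy as np

# A small (N, 2) array of map vertices (illustrative values).
verts = np.array([[-45005.21, 168113.25],
                  [-45003.77, 168112.71],
                  [-44995.30, 168109.057]])

# Move the layer to a local origin and scale it; each expression
# operates on every row at once, in C, not in a Python loop.
origin = verts.min(axis=0)
local = (verts - origin) * 0.001

# OpenGL wants contiguous single-precision data; this is also the form
# you would hand to PyOpenGL's vbo.VBO class (with usage='GL_DYNAMIC_DRAW'
# for route data that changes at runtime).
gl_ready = np.ascontiguousarray(local, dtype=np.float32)

print(gl_ready.shape)  # (3, 2)
```

Working in a local coordinate frame like this also sidesteps the precision loss of storing large UTM-style coordinates directly in float32.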