Very simple question
I have a bundled graph
I want to study it by selecting bundled edges and exploring the subgraphs created by this selection
I use the "select nodes/edges in a rectangle"
I draw a little rectangle and then...
I wait more than a minute for the selection of only several hundred edges out of a graph of 1,200 nodes and 23,000 edges, and the window is greyed out during this time.
How could I improve this on my system? I would rather not have to buy new hardware...
thanks
Tulip 4.5
Ubuntu 13.10
This is an excellent question, and it is the basis of my recent study and aggressive self-teaching of OpenGL. Buying new hardware will not really get you much of a difference in performance, because of how much work each rendering call does.
Basically, I trace Tulip's slow rendering performance to some poor code in GlScene.cpp. On every call to its render function, it clears the depth and color buffers (on the graphics card) and does a pass over every single node and edge (or rather, every entity as a simple entity), calling that object's draw function. At least within GlNode, the draw function has no notion of any graphics memory that was initialized earlier; it sends the data to the graphics card all over again. There is a limit on how fast you can transfer data between the CPU and the graphics card, and that transfer is the bottleneck. Ideally, a graphics application would initialize all of that memory once at the beginning, let the graphics card do the heavy lifting of geometry processing and rasterization, and only have a stream of pixels coming back to be shown to the user.
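To make the difference concrete, here is a rough, hypothetical sketch of the two patterns. This is not Tulip's actual code; the Vertex type and function names are made up, and I'm assuming an extension loader such as GLEW provides the buffer-object entry points:

    #include <GL/glew.h>
    #include <vector>

    struct Vertex { float x, y, z; };

    // Slow pattern: every vertex is pushed through the driver again
    // on every single frame (roughly what per-entity draw calls do).
    void drawImmediate(const std::vector<Vertex>& verts) {
        glBegin(GL_POINTS);
        for (size_t i = 0; i < verts.size(); ++i)
            glVertex3f(verts[i].x, verts[i].y, verts[i].z);
        glEnd();
    }

    // Faster pattern: upload the geometry into GPU memory once...
    GLuint uploadOnce(const std::vector<Vertex>& verts) {
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                     &verts[0], GL_STATIC_DRAW);
        return vbo;
    }

    // ...then each frame only issues one cheap draw call against it.
    void drawBuffered(GLuint vbo, GLsizei count) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), 0);
        glDrawArrays(GL_POINTS, 0, count);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

The point is only that the expensive upload happens once, instead of on every render call.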
The second biggest problem needing attention is the signalling/messaging setup. You may not realize it, but even clicking on the window actually triggers multiple rendering calls behind the scenes. I think this comes from the interaction between the Tulip developers' custom signal management (their observer/listener pattern) and the various slots connected to Qt's signals.
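One common way to deal with that kind of redundant redrawing is to coalesce all the "something changed" notifications into a single repaint per pass of the event loop. Here's a rough, hypothetical sketch of the idea; the class and slot names are made up, this is not how Tulip is actually structured, and I'm assuming a plain Qt 4 QGLWidget:

    #include <QGLWidget>
    #include <QTimer>

    class GraphView : public QGLWidget {
        Q_OBJECT
    public:
        explicit GraphView(QWidget* parent = 0)
            : QGLWidget(parent), redrawPending(false) {}

    public slots:
        // Connect every observer/listener notification to this slot.
        void onGraphChanged() {
            if (!redrawPending) {
                redrawPending = true;
                // Schedule exactly one repaint for the next event-loop pass.
                QTimer::singleShot(0, this, SLOT(doRedraw()));
            }
        }

    private slots:
        void doRedraw() {
            redrawPending = false;
            updateGL();   // one render call for any number of notifications
        }

    private:
        bool redrawPending;
    };

The details don't matter much; the point is that many change notifications result in one render per event-loop pass instead of one render each.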
Lastly, there is a fair bit of performance lost to the way the OpenGL API is used in this code region; a careful choice of the exact OpenGL calls, and a rewrite of how the OpenGL data is buffered, guided by a more precise understanding of the rendering pipeline, is exactly what the doctor ordered.
All said, identifying the problem is really only the start. Actually getting the fix into code is a different story, and I'll probably spend my free summer hours, or possibly even time after the summer, working to get this improvement out.
With the same software configuration (Tulip 4.5 and Ubuntu 13.10) on an older computer, a dual-core Dell Latitude laptop from 2009 with no graphics card, the selection is reasonably fast, one or two seconds. The exact same task on the same file takes more than a minute on a very recent quad-core Dell, also with no graphics card.
So Tulip is not the problem!
What could I do?
You haven't said specifically what you think is wrong in this latest reply, but you seem to be saying that this new information rules out my diagnosis of the bottleneck. The program running faster on the machine without a graphics card doesn't mean I'm wrong. I'm fairly certain OpenGL has to fall back to CPU-bound software rendering in order to work at all on a machine without a graphics card.
Once again, the bottleneck I identified in Tulip is the repeated transfer, on every single render call, of all the coordinate and shape information needed to render from the CPU to the GPU. If there is no GPU, that specific bottleneck disappears, but without a graphics card you also lose the acceleration it provides, so rendering will never be as fast as it could be. Besides, how often do you see games with far more objects, all of them moving, run smoothly on a machine with a graphics card? The number of triangles and the amount of data Tulip has to process is pretty insignificant by comparison, which informally corroborates what I have already told you.
If you want to try buying a better system to get around the problem, then go right ahead, but I don't see how specifically avoiding a graphics card would improve the situation in the long run, because what you really want is 24+ frames per second, and you're just not going to get that the way Tulip is written. I have no other advice, but if you want to run Tulip under a profiler, as I already have, there's nothing stopping you.
What do you mean by "run Tulip under a profiler"?
A profiler is a tool that gathers diagnostic information about performance; it tells you which functions are called and how much time is spent in each of them. On Ubuntu, callgrind (part of Valgrind) and perf are common examples.
It doesn't make sense for developers to hand-write their own timing code everywhere. Instead, a tool is used either to instrument the application dynamically or to rewrite it so that the measurement calls are inserted automatically.
Your explanations don't really help me with this problem, because I probably don't have the skills to debug with developer tools.
Is it enough to rely on the indications from the "Tulip crash handler"?
And then how would I exploit them?
In addition, I also have a problem with selecting a node and its surrounding edges, and I just noticed that the slow-selection problem occurs on the same graph where I have that problem, and not on another graph, as described here: https://sourceforge.net/p/auber/discussion/206283/thread/4a4e5839/
So now my question becomes: what is wrong with this graph number 1 on Lille?
Well, I'm sorry, I can't do much more than tell you that you'll have to wait until the rendering implementation has been improved.
Also, I have no idea about Lille or the other graph issue. The selection is probably slow on one graph and not the other because the numbers of nodes and edges are different. Remember, each render call is linear in the number of nodes n and edges e; O(n + e) gets noticeably slower as the node and edge counts grow.
I got a clue from a Tulip developer:
when labels are removed, the selection happens in a much more reasonable time.
(Some improvements in this area have been implemented since version 4.5.)
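For anyone who wants to try the same thing programmatically instead of through the GUI, something along these lines should blank out the labels. This is only a sketch: I'm assuming the standard "viewLabel" visual property and the setAllNodeValue/setAllEdgeValue methods of Tulip's StringProperty, and I haven't verified it against the 4.5 sources.

    #include <tulip/Graph.h>
    #include <tulip/StringProperty.h>

    // Sketch: blank out all node and edge labels so rendering and
    // selection no longer have to deal with label geometry.
    void clearLabels(tlp::Graph *graph) {
        tlp::StringProperty *labels =
            graph->getProperty<tlp::StringProperty>("viewLabel");
        labels->setAllNodeValue("");
        labels->setAllEdgeValue("");
    }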