From: Ali T. <ali...@gm...> - 2015-12-30 22:23:49

Thank you Andrea for helping out with this. I will make the change on
the github repository.

Thanks again,
/ali

From: Andrea M. <ma...@di...> - 2015-12-30 21:47:24

Dear Ali,

When compiling (I am using boost 1_58_0) I get the following error:

    test/../src/edmonds_optimum_branching_impl.hpp:7:10: fatal error:
    'boost/property_map.hpp' file not found
    #include <boost/property_map.hpp>
             ^
    1 error generated.

I think this include is not needed, since you are including
boost/property_map/property_map.hpp. Indeed, if you delete/comment this
include, everything compiles and the program passes the test.

Ciao,
Andrea
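
The fix is exactly what Andrea describes: drop the stale include and keep the canonical one. A minimal sketch of the corrected include block follows (the real edmonds_optimum_branching_impl.hpp has more context around these lines):

```cpp
// Sketch of the fix at the top of edmonds_optimum_branching_impl.hpp.
// The old all-in-one header was removed from Boost itself in later
// releases, so only the second include is needed.

// #include <boost/property_map.hpp>            // delete/comment this line
#include <boost/property_map/property_map.hpp>  // sufficient on its own
```
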
From: Ali T. <ali...@gm...> - 2015-12-29 22:32:14

Hi Andrea,

Thanks for reporting this! I believe you are correct in both the
description of the bug and the required fix.

I no longer use sourceforge to keep my repositories. I have moved to
github. Please switch to using the following git repository instead:
https://github.com/atofigh/edmonds-alg

I have already committed a patch with your suggested fix on the master
branch. Could you do me a favor and run all the tests for me (this code
was written a long time ago and I currently do not have boost
installed)? You just need to compile the test.cpp file and run the
test.py script in the test/ directory. If all tests pass, I will then
create a new release tag.

Thanks again and happy new year to you too!
/ali

From: Andrea M. <ma...@di...> - 2015-12-29 13:13:27

Dear All,

Thanks for this nice implementation of the Edmonds Algorithm. We have
been lucky to have found it and we are really happy to use it.

I think there is a small problem that sometimes gives a segmentation
fault. The problem is in how the number of nodes is computed, namely in
the function void operator()() in the file
src/edmonds_optimum_branching_impl.hpp. The number of nodes is computed
as 1 + the maximum index among the targets of all the edges. This is
not correct if the vertex with the maximum index has indegree 0. I
think it is sufficient to add, after the line

    max_vertex_idx = std::max(max_vertex_idx, index[target(e, g)]);

also

    max_vertex_idx = std::max(max_vertex_idx, index[source(e, g)]);

It is certainly a very small problem, but since our input had this bad
shape, we spent a while finding it out.

Thanks again for the nice job.
With my very best regards,
Happy new year
Andrea Marino
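
A minimal sketch of the corrected computation, for reference. Only the two std::max lines are from the report above; the surrounding function, its name, and the loop are assumptions, not the actual code of edmonds_optimum_branching_impl.hpp:

```cpp
#include <algorithm>
#include <boost/graph/graph_traits.hpp>
#include <boost/property_map/property_map.hpp>
#include <boost/tuple/tuple.hpp>

// Hypothetical helper illustrating the fix: the largest vertex index
// must be taken over both endpoints of every edge, since a vertex with
// indegree 0 (e.g., a root) may never appear as a target.
template <class Graph, class IndexMap>
typename boost::property_traits<IndexMap>::value_type
count_vertices(const Graph &g, IndexMap index)
{
    typename boost::property_traits<IndexMap>::value_type max_vertex_idx = 0;
    typename boost::graph_traits<Graph>::edge_iterator ei, ei_end;
    for (boost::tie(ei, ei_end) = edges(g); ei != ei_end; ++ei) {
        max_vertex_idx = std::max(max_vertex_idx, index[target(*ei, g)]);
        // The added line: also consider the source of each edge.
        max_vertex_idx = std::max(max_vertex_idx, index[source(*ei, g)]);
    }
    return max_vertex_idx + 1; // indices range over 0..max_vertex_idx
}
```
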
From: Ali T. <ali...@gm...> - 2010-04-07 18:21:48

It's funny that you should mention this now. I have been thinking of
asking on the mailing lists if there is any interest in making it part
of BGL. So, yes, I'm definitely interested. It will take a little more
work for the code to be really finished though, and it is a bit of bad
timing at the moment since I have just moved to Canada to start a new
job. But in a few weeks I will have more time to spare and could work
on the code.

/Ali

From: Jeremiah W. <jew...@os...> - 2010-04-07 18:05:47

I am one of the maintainers of the Boost Graph Library, and I noticed
that you wrote an extension to it that appears to be compatibly
licensed. Are you interested in merging your code into BGL? It would
likely just require relicensing your code under the Boost Software
License and making sure you have documentation and test cases in the
appropriate format. We can take care of converting to use bjam and the
other administrative stuff.

-- Jeremiah Willcock

From: Ali T. <ali...@gm...> - 2010-01-11 14:42:29

A minor release of edmonds-alg, version 1.1.2, is now available. This
release fixes a critical bug that resulted in incorrect output under
certain conditions. Please replace any older versions with version
1.1.2.

Best wishes,
/Ali Tofigh

From: Ali T. <ali...@gm...> - 2010-01-11 14:03:44

Dear Luigi,

On Sun, Dec 6, 2009 at 11:02, Luigi Laura <la...@di...> wrote:
> Hi,
> sorry to bother you;
> I think I found a small bug in your program.

Indeed, you did find a bug, and a serious one at that! I have now added
it to the tracker on sourceforge. I believe that the problem has an
easy solution and will hopefully upload a new corrected version as soon
as possible.

Thanks for reporting the bug!

Best wishes
/Ali Tofigh

From: Luigi L. <la...@di...> - 2009-12-06 10:34:51

Hi,
sorry to bother you; I think I found a small bug in your program.

I slightly modified the "sparse-example.cpp" file into
"sparse-example2.cpp", which I attach to this mail. Note that the graph
is the only change. Below I paste the output (where you can also see
the graph):

--------------------------------------------------
This is the graph:
(0, 6)  1
(1, 0)  1
(2, 1)  1
(3, 1)  30
(4, 5)  1
(5, 3)  1
(5, 4)  30
(6, 3)  30
(6, 4)  1
(6, 5)  30
This is the maximum branching
(1, 0)  1
(3, 1)  30
(6, 3)  30
(5, 4)  30
(6, 5)  30
(0, 6)  1
--------------------------------------------------

Note that vertex 2 is not in the branching and, most important, the
branching includes a cycle: 1 -> 0 -> 6 -> 3 -> 1 (therefore it is
difficult to still call it a branching :-(

Note also that if the arc (2,1) had been in the solution instead of
(3,1), the result would have been a branching (probably the optimal
one; I didn't check).

Thank you in advance for any answer.
Best regards,
Luigi Laura
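
Since sparse-example2.cpp itself was an attachment and is not in the archive, here is a sketch reconstructing the test graph from the output above with BGL's adjacency_list. The setup is an assumption modeled on the project's sparse-example.cpp, not the attached file:

```cpp
#include <iostream>
#include <utility>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/graph_traits.hpp>
#include <boost/tuple/tuple.hpp>

int main()
{
    typedef boost::adjacency_list<
        boost::vecS, boost::vecS, boost::directedS, boost::no_property,
        boost::property<boost::edge_weight_t, int> > Graph;

    const int n = 7; // vertices 0..6
    std::pair<int, int> edge_list[] = {
        {0, 6}, {1, 0}, {2, 1}, {3, 1}, {4, 5},
        {5, 3}, {5, 4}, {6, 3}, {6, 4}, {6, 5}};
    int weights[] = {1, 1, 1, 30, 1, 1, 30, 30, 1, 30};

    Graph g(edge_list, edge_list + 10, weights, n);

    // Print the graph in the same format as the mail above.
    boost::graph_traits<Graph>::edge_iterator ei, ei_end;
    for (boost::tie(ei, ei_end) = edges(g); ei != ei_end; ++ei)
        std::cout << "(" << source(*ei, g) << ", " << target(*ei, g)
                  << ")\t" << get(boost::edge_weight, g, *ei) << "\n";

    // The branching itself would then be computed by passing g, its
    // vertex_index and edge_weight maps, and an output iterator to
    // edmonds_optimum_branching, as in the project's sparse-example.cpp.
    return 0;
}
```
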
From: Ali T. <ali...@gm...> - 2009-09-23 16:12:34

A minor release of edmonds-alg, version 1.1.1, is now available. This
release fixes a critical bug and updates the SConstruct file.

Best wishes,
/Ali Tofigh

From: Ali T. <ali...@gm...> - 2008-11-11 20:21:16

Version 1.1.0 of edmonds-alg has now been released. An example of how
to use edmonds-alg with the boost graph library has been added to the
documentation. A few compilation bugs have also been fixed.

For instructions on downloading and installing, see
http://edmonds-alg.sourceforge.net.

Cheers,
/Ali

From: Vladimir Y. <vol...@cs...> - 2008-08-31 04:56:16

Hi Ali,

1. Yes, certainly one cannot find branchings faster than one can read
the graph :-) Having the O(n log n) algorithm will only improve your
implementation for dense graphs if you read a graph significantly
faster than you compute the branchings, and I don't know if this is
the case.

2. My implementation of cycle contractions using union-find failed on
your test. I'm now on vacation/conference in Europe for 3 weeks, so it
may take some time till I fix and post it.

3. The purpose of decrementing the edge weight of the edges in a cycle
is only to make all the weights equal, so that any branching/
arborescence can be replaced by one having a single entry into the
cycle. Hence this least-costly-edge reduction is completely
unnecessary.

4. The main idea of the O(n log n) algorithm is very similar to the
O(m log n) one. The main difference is that in O(m log n) they did not
use a Fibonacci heap, which was not available then. In the O(n log n)
paper they use a Fibonacci heap to store the sets of edges adjacent to
nodes and also notice that, with a Fibonacci heap, one does not have
to delete an item from one heap in order to move it to another;
instead a whole tree can be moved between heaps (nodes) in O(1). Then
the only operation taking log n time would be finding critical edges
(computing the MIN/MAX of the heap), and there are at most n of these.

5. Thanks a lot for the examples for sparse graphs. They will really
help.

Best,
Vlad

On Fri, Aug 22, 2008 at 2:35 PM, Ali Tofigh <ali...@gm...> wrote:
> On Tue, Aug 19, 2008 at 07:48, Vladimir Yanovsky <vol...@cs...> wrote:
>>
>> Btw, Tarjan decreases the weight of all incoming edges by the
>> difference between the cycle's edge weight and the weight of the
>> cheapest edge in the cycle. Subtracting the weight of the cycle edge
>> from the weights of all edges sharing with it the target should be
>> good as well and will save an additional arithmetic operation per
>> edge. By induction, this will not cause any edge weights to become
>> negative as a result.
>
> This only works when finding maximum spanning arborescences. It does
> not work for maximum branchings, where the negative edge weights are
> essential for the algorithm to work. (Also, there is nothing that
> prevents the incoming graph from having negative edges in the first
> place.)
>
>> Moreover, the decrement can be performed when we select a critical
>> edge - not waiting for the cycle collapse stage - just decrease all
>> the incoming edges by its weight. The weights of edges on weakly
>> connected components then become zero. This is faster than going
>> through all cycle edges to decrement due to cache locality.
>
> We could do this optimization by treating the case of MSAs and
> maximum branchings separately. But I would rather get the code
> working for sparse graphs first, and do such optimizations later when
> we've gotten most of the bugs out of the way.
>
>> Regarding the O(VlogV) algorithm, it should be faster than O(V^2)
>> even for dense graphs.
>
> But dense graphs can never have better time complexity than O(|V|^2)
> simply because they have O(|V|^2) edges, and you always have to at
> least read the entire input! The time complexity of Tarjan's
> '86-paper is O(|V| log |V| + |E|).
>
>> I'll work on its implementation as we get O(ElogV) ready - it's very
>> similar and the constants in O look low.
>
> If I remember correctly, the algorithm in the '86-paper isn't just a
> simple modification of the older algorithm. It is much more different
> than simply using modified F-heaps to keep track of incoming edges.
> The core data structures used in book-keeping are very different
> (again, this is just from memory). I remember thinking, while reading
> the '86-paper, that the constant in the time complexity may actually
> be quite large. I don't recall why I thought so, perhaps because
> there was so much book-keeping needed, or perhaps because I had read
> that F-heaps actually are not very fast in practice. This, however,
> is from "Introduction to algorithms" by Cormen et al 1989, so that
> information may be dated.
>
>> Do you have some toy example of creating a sparse graph of V
>> vertices, initially no edges, so that I can add edges myself and
>> have your implementation run on it? This will help a lot in my
>> understanding of the Boost-related stuff of the implementation.
>
> I have created an example on how to create a sparse graph using the
> adjacency_list class of BGL and calling edmonds_optimum_branching. It
> is now committed into the subversion repository at sourceforge.
> You'll find it in trunk/doc/examples. Let me know if you have
> problems compiling the code.
>
> Cheers,
> /Ali

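To illustrate point 4, here is a sketch of the mergeable-heap idea using today's boost::heap::fibonacci_heap (which did not exist as a public library at the time of this thread; the mail refers to Boost's internal pending/ headers). The Edge type and the weights are made up; this is not the edmonds-alg implementation:

```cpp
#include <iostream>
#include <boost/heap/fibonacci_heap.hpp>

// Each contracted component keeps its incoming edges in a Fibonacci
// heap ordered by weight; collapsing a cycle melds the heaps of its
// components by splicing root lists (constant time in the classical
// analysis) instead of deleting and re-inserting items one by one.
struct Edge {
    int source, target;
    double weight;
    bool operator<(const Edge &other) const { return weight < other.weight; }
};

typedef boost::heap::fibonacci_heap<Edge> IncomingEdges; // max-heap

int main()
{
    IncomingEdges a, b;
    a.push(Edge{0, 2, 5.0});
    a.push(Edge{1, 2, 7.0});
    b.push(Edge{3, 4, 9.0});

    // Collapsing a cycle containing the components of a and b:
    a.merge(b); // b is emptied into a

    // The critical (maximum-weight) incoming edge of the merged
    // component; finding it is the only O(log n) operation needed.
    std::cout << "critical edge weight: " << a.top().weight << "\n";
    return 0;
}
```
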
From: Vladimir Y. <vol...@cs...> - 2008-08-19 05:48:12

No, I have not yet finished incorporating it into edmonds-alg - I have
to change the weight accesses to use property maps. I will post the
results when ready.

Regarding the O(N^2) complexity, you are right. The number of list
merge operations is |V|-1, as each merge decreases the number of
remaining vertices (SCCs) by 1.

Btw, Tarjan decreases the weight of all incoming edges by the
difference between the cycle's edge weight and the weight of the
cheapest edge in the cycle. Subtracting the weight of the cycle edge
from the weights of all edges sharing with it the target should be
good as well and will save an additional arithmetic operation per
edge. By induction, this will not cause any edge weights to become
negative as a result. Moreover, the decrement can be performed when we
select a critical edge - not waiting for the cycle collapse stage -
just decrease all the incoming edges by its weight. The weights of
edges on weakly connected components then become zero. This is faster
than going through all cycle edges to decrement due to cache locality.

Regarding the O(VlogV) algorithm, it should be faster than O(V^2) even
for dense graphs. I'll work on its implementation as we get O(ElogV)
ready - it's very similar and the constants in O look low.

Do you have some toy example of creating a sparse graph of V vertices,
initially no edges, so that I can add edges myself and have your
implementation run on it? This will help a lot in my understanding of
the Boost-related stuff of the implementation.

Thanks,
Vlad

From: Ali T. <ali...@gm...> - 2008-08-18 15:13:07

On Sun, Aug 17, 2008 at 04:49, Vladimir Yanovsky <vol...@cs...> wrote:
> I modified pending/disjoint_sets.hpp to have a fast and dirty
> implementation of "disjoint sets with weights", which are used by
> Tarjan's implementation to update edge weights when collapsing
> cycles. Currently the implementation updates every incoming edge to
> every strongly connected component vertex on a cycle being collapsed
> (this is particularly inefficient when the graph becomes dense and, I
> think, leads to Omega(VE) complexity).

Actually, I'm quite sure that the time complexity is O(V^2) for the
algorithm in the current implementation. You seem to have overlooked
the fact that the sizes of the lists containing the incoming edges are
at most |V|: each list contains at most one outgoing edge for each
strongly connected component. So we are not actually updating _every_
incoming edge, but at most |V| of them. A careful examination also
reveals that the total number of list merge operations is O(V) (I
think 2*|V| is the actual upper bound). Since each list contains at
most |V| edges, the total time of merging the lists is O(V^2).
Updating the weights before or during the merges clearly makes no
difference in terms of time complexity.

Still, I'm very interested to know if there is any dramatic change in
the running time with your implementation of disjoint sets with
weights. Have you run any tests?

> Another thing that should be done is to use Boost's Fibonacci or
> Relaxed heaps as a container for edges so that we have O(E log V)
> time. Finally, the Fibonacci heap implementation of the O(E log V)
> algorithm can later be modified into O(V log V).

Yes, I agree. I'm aware of the article describing this. I just haven't
had any use for it myself since all my graphs are dense. I'll need
some (continuous) time on my hands to be able to do that, though of
course now I have a motivation since someone may actually use it.

> Any comments, help, especially on correct declaration/interfacing
> with Boost is greatly appreciated - my Boost experience is tiny and
> writing function bodies is much easier for me than the headers.

A great resource is of course the boost users mailing list where more
knowledgeable people can help with Boost matters. That would be my
first choice of resource when dealing with Boost-specific issues.

> One question about the current implementation. Why are all data
> structures initialized with (max_vertex_idx + 1) elements and Union
> Finds with 2*(max_vertex_idx + 1)? What are these x2 and "+1" needed
> for? Should not max_vertex_idx be enough?

Since the vertex indices range from zero to (and including)
max_vertex_idx, there are max_vertex_idx + 1 actual vertices. The
disjoint set datastructure in Boost needs to know the maximum number
of disjoint sets; in our case, the weakly and strongly connected
components. The upper bound on the number of created components is
2*|V|.

Cheers,
/Ali

From: Vladimir Y. <vol...@cs...> - 2008-08-17 02:49:34

Hello,

I'm currently working on making the implementation more efficient,
particularly for sparse graphs.

I modified pending/disjoint_sets.hpp to have a fast and dirty
implementation of "disjoint sets with weights", which are used by
Tarjan's implementation to update edge weights when collapsing cycles.
Currently the implementation updates every incoming edge to every
strongly connected component vertex on a cycle being collapsed (this
is particularly inefficient when the graph becomes dense and, I think,
leads to Omega(VE) complexity).

Another thing that should be done is to use Boost's Fibonacci or
Relaxed heaps as a container for edges so that we have O(E log V)
time. Finally, the Fibonacci heap implementation of the O(E log V)
algorithm can later be modified into O(V log V).

Any comments, help, especially on correct declaration/interfacing with
Boost is greatly appreciated - my Boost experience is tiny and writing
function bodies is much easier for me than the headers.

One question about the current implementation. Why are all data
structures initialized with (max_vertex_idx + 1) elements and Union
Finds with 2*(max_vertex_idx + 1)? What are these x2 and "+1" needed
for? Should not max_vertex_idx be enough?

Thanks,
Vladimir

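For reference, this is roughly what "disjoint sets with weights" can look like: a union-find in which a constant can be added to the weight of every element of a set in O(1). This is an illustrative sketch, not Vladimir's actual patch to pending/disjoint_sets.hpp:

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Union-find storing, for each element, a weight offset relative to
// its parent; the root's offset acts as a set-wide additive term, so
// "add d to every weight in this set" is a single O(1) update.
struct WeightedDisjointSets {
    std::vector<int> parent;
    std::vector<double> delta; // offset relative to the parent

    explicit WeightedDisjointSets(int n) : parent(n), delta(n, 0.0) {
        std::iota(parent.begin(), parent.end(), 0);
    }

    // Find with path compression, folding offsets so that delta[x]
    // stays relative to the root (the root's own delta is kept apart).
    int find(int x) {
        if (parent[x] == x) return x;          // x is a root
        int p = parent[x];
        if (parent[p] == p) return p;          // parent is the root already
        int root = find(p);                    // delta[p] now relative to root
        delta[x] += delta[p];
        parent[x] = root;
        return root;
    }

    // Current weight of x: path offset plus the set-wide term.
    double weight(int x) {
        int root = find(x);
        return x == root ? delta[x] : delta[x] + delta[root];
    }

    // Add d to the weight of every element in x's set, in O(1).
    void add_to_set(int x, double d) { delta[find(x)] += d; }

    // Merge two sets (union-by-rank omitted to keep the sketch short).
    void link(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        delta[a] -= delta[b]; // preserve the weights of a's elements
        parent[a] = b;
    }
};

int main() {
    WeightedDisjointSets ds(4);
    ds.add_to_set(0, 5.0);  // set {0} gets +5
    ds.link(0, 1);          // now {0, 1}
    ds.add_to_set(1, 2.0);  // the whole set gets +2
    assert(ds.weight(0) == 7.0 && ds.weight(1) == 2.0);
    return 0;
}
```
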
From: Ali T. <ali...@gm...> - 2008-08-16 13:01:23

This post attempts to answer some questions received about the current
implementation of edmonds-alg (version 1.0).

> 1) You decrement weights of edges incoming to a vertex while
> contracting a cycle by actually accessing each of them and decreasing
> the values. Why? I think this is a great performance hit, especially
> for dense graphs. Tarjan's paper is supposed to use a union-find DS
> with values so that the decrement is O(1) per vertex.
The implementation given in Tarjan's paper assumes the input graph is
sparse and uses a priority queue datastructure to keep track of the
incoming edges of the strongly connected components. Such an
implementation gives a time complexity of O(m log n) where m is the
number of edges and n is the number of vertices. For dense graphs the
time complexity becomes O(n^2 log n). In order to reduce this to
O(n^2), the incoming edges are instead kept sorted by their tails in a
list (as suggested in the same article by Tarjan). Also, for each
vertex v at most one edge whose tail is v is kept in the list, and if
there are several such edges then one with optimal weight is
kept. Therefore, during the merging of the lists the actual weight of
each edge must be known. The time complexity remains the same
irrespective of whether the weights are updated before or during the
merging of the lists.
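
A sketch of the merge step this answer describes: each component's incoming edges are kept sorted by tail, with at most one maximum-weight edge per tail, so two lists merge in O(|V|). The types and names are made up for illustration; this is not the edmonds-alg source:

```cpp
#include <cstddef>
#include <vector>

struct InEdge {
    int tail;
    double weight;
};

// Merge two incoming-edge lists, each sorted by tail with at most one
// edge per tail, like the merge step of mergesort.
std::vector<InEdge> merge_incoming(const std::vector<InEdge> &a,
                                   const std::vector<InEdge> &b)
{
    std::vector<InEdge> merged;
    std::size_t i = 0, j = 0;
    while (i < a.size() || j < b.size()) {
        if (j == b.size() || (i < a.size() && a[i].tail < b[j].tail))
            merged.push_back(a[i++]);
        else if (i == a.size() || b[j].tail < a[i].tail)
            merged.push_back(b[j++]);
        else { // same tail: keep only the edge with optimal weight
            merged.push_back(a[i].weight >= b[j].weight ? a[i] : b[j]);
            ++i; ++j;
        }
    }
    return merged; // still sorted by tail, one edge per tail
}
```
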
> 2) I also could not understand your edge sort (not that it will be
> necessary for the sparse implementation, as I need either O(n log n)
> from '85 or original Tarjan's O(m log n)), but I'm still curious. (My
> C++ is bad, so this could explain my problem.)
> Why don't you just go through all edges in order of their sources and
> put them into buckets corresponding to the destinations - there is a
> bucket for every node. Will this be linear? Or is this what you are
> doing? (Maybe a newbie question - I use your work as my tutorial to
> boost.)
Well, I'm using a standard radix sort algorithm to sort the edges. The
time complexity is linear for this algorithm.
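
For the curious, a sketch of the linear-time idea: two stable bucket-sort passes (a radix sort with the vertex index as the digit), first by source and then by target. The names are made up for illustration; this is not the code from the implementation:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

typedef std::pair<std::size_t, std::size_t> Edge; // (source, target)

std::vector<Edge> radix_sort_edges(const std::vector<Edge> &edges,
                                   std::size_t n_vertices)
{
    std::vector<std::vector<Edge> > buckets(n_vertices);
    std::vector<Edge> result(edges);

    // Pass 0: bucket by source; pass 1: bucket by target. Each pass is
    // stable, so after pass 1 the edges are sorted by (target, source).
    for (int pass = 0; pass < 2; ++pass) {
        for (std::size_t i = 0; i < result.size(); ++i) {
            const Edge &e = result[i];
            buckets[pass == 0 ? e.first : e.second].push_back(e);
        }
        result.clear();
        for (std::size_t v = 0; v < n_vertices; ++v) {
            result.insert(result.end(), buckets[v].begin(), buckets[v].end());
            buckets[v].clear();
        }
    }
    return result; // O(|E| + |V|) total time
}
```
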
> 3) P.S. Minor comment: in test.cpp
>
>     if (abs(max_weight) >= numeric_limits<int>::max() / n_vertices ||
>         abs(min_weight) >= numeric_limits<int>::max() / n_vertices)
>     {
>         cerr << "weights too large\n";
>         exit(1);
>     }
>
> checking min_weight seems not necessary.
Well, the whole thing seems a bit paranoid to me ;), but still... If we
are going to be paranoid, let's be properly paranoid: we're not
checking min_weight, we are checking the absolute value of min_weight,
which could, in theory, be greater than the absolute value of
max_weight.
> 4) Do you have a test case or some toy example of your implementation
> with Vertex and vertex_idx_t being two different incompatible types?
> I'm totally new to Boost and confused between the two. Both are used
> to index into arrays. Also you declare a Vertex type but never use it
> in the code.
In Boost's Graph Library (BGL), a vertex_descriptor (which I have
typedefed to Vertex) is not required to be an integral type. There
are, however, algorithms that need to treat vertices more or less as
integers. In general, these algorithms require the caller to provide a
property_map that maps the vertices to the integers 0, 1, ..., n-1.
The type of integer returned by the property map is
property_traits<property_map_type>::value_type, which I have typedefed
to vertex_idx_t for readability and to save typing.

As for using a Vertex to index into arrays, that is simply a bug. I'd
appreciate it if you could point out where in the code a Vertex is
used to index into an array so that I can fix it.
BGL relies heavily on Boost's property_map concepts, so it's
worthwhile getting acquainted with that section of Boost. If you still
need an example of an implementation where vertex_descriptor is not an
integral type, let me know and I will get to work on it.
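
A small self-contained illustration of the distinction (not code from edmonds-alg): with vecS vertex storage, an adjacency_list provides a built-in vertex_index property map taking each vertex_descriptor to an integer in 0..n-1:

```cpp
#include <iostream>
#include <boost/graph/adjacency_list.hpp>
#include <boost/property_map/property_map.hpp>
#include <boost/tuple/tuple.hpp>

int main()
{
    typedef boost::adjacency_list<boost::listS, boost::vecS,
                                  boost::directedS> Graph;
    typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
    typedef boost::property_map<Graph, boost::vertex_index_t>::type IndexMap;

    Graph g(3);
    IndexMap index = get(boost::vertex_index, g);

    boost::graph_traits<Graph>::vertex_iterator vi, vi_end;
    for (boost::tie(vi, vi_end) = vertices(g); vi != vi_end; ++vi) {
        Vertex v = *vi;                // a descriptor, not necessarily an int
        std::cout << index[v] << "\n"; // always an integer in 0..n-1
    }
    return 0;
}
```
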
> 5) Also, an example of a sparse graph with weights stored
> space-efficiently (not in an array) on which I can run your algorithm
> and test my changes will also help a lot.
Well, this depends a lot on how you want to store your graphs in the
first place. Again, you need to understand Boost's property map
concepts. But get back to me if you really want an example and I'll
try to come up with something.
Also, if you modify the algorithm with good results, consider sending
a patch and/or a description of the changes and I'll try to
incorporate those into the code. This way other users can also benefit.
Cheers,
/Ali
From: Ali T. <ali...@gm...> - 2008-06-19 15:53:14

Version 1.0.0 of edmonds-alg has now been released. The code has been
completely rewritten on top of concepts in the Boost Graph Library.
The algorithm is templatized to support various versions of the
algorithm, such as minimum vs maximum weight branchings, and
branchings vs spanning trees. In the future, a special version for
sparse graphs may be implemented. The probability of this feature ever
appearing will be proportional to feedback and demand from users.

The web page of the project has also been updated to reflect the new
release. For instructions on downloading and installing, see
http://edmonds-alg.sourceforge.net.

Cheers,
/Ali

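Based on this description and the examples mentioned later in this archive, a call to the templatized algorithm looks roughly like the sketch below. The header name and the meaning and order of the three bool template parameters are assumptions, so check the project's documentation and sparse-example.cpp before relying on them:

```cpp
#include <iostream>
#include <iterator>
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include "edmonds_optimum_branching.hpp" // assumed header name

int main()
{
    typedef boost::adjacency_list<
        boost::vecS, boost::vecS, boost::directedS, boost::no_property,
        boost::property<boost::edge_weight_t, double> > Graph;
    typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
    typedef boost::graph_traits<Graph>::edge_descriptor Edge;

    Graph g(3);
    add_edge(0, 1, 2.0, g);
    add_edge(1, 2, 1.0, g);
    add_edge(2, 1, 3.0, g);

    // Hypothetical parameter meanings: maximize weight (vs minimize),
    // plain branching (vs attempt to span), dense-graph version.
    std::vector<Edge> branching;
    edmonds_optimum_branching<true, false, true>(
        g,
        get(boost::vertex_index, g),
        get(boost::edge_weight, g),
        static_cast<Vertex *>(0), static_cast<Vertex *>(0), // no forced roots
        std::back_inserter(branching));

    for (std::size_t i = 0; i < branching.size(); ++i)
        std::cout << source(branching[i], g) << " -> "
                  << target(branching[i], g) << "\n";
    return 0;
}
```
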
From: Erik <eri...@sb...> - 2007-12-17 14:14:16

Just testing the mailing list.

cheers,
Erik