mulan-list Mailing List for Mulan (Page 3)
Brought to you by:
stevelaskaridis,
tsoumakas
From: Nadav S. <ns...@gm...> - 2015-05-27 08:44:46

Oh, and one more thing: after calling model.makePrediction() I get a MultiLabelOutput object ("output"). What exactly is the boolean array of predictions? Is it output.getBipartition()?

Thanks!
Nadav.
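For what it's worth, getting the boolean predictions out of a trained Mulan learner looks roughly like the sketch below. The file names and the choice of MLkNN are placeholders, not anything prescribed by the thread; getBipartition() returns one boolean per label (true = predicted relevant), in the label order of the dataset.

```java
import mulan.classifier.MultiLabelLearner;
import mulan.classifier.MultiLabelOutput;
import mulan.classifier.lazy.MLkNN;
import mulan.data.MultiLabelInstances;
import weka.core.Instance;

public class PredictionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder file names -- substitute your own dataset.
        MultiLabelInstances train = new MultiLabelInstances("train.arff", "labels.xml");
        MultiLabelLearner model = new MLkNN();
        model.build(train);

        Instance instance = train.getDataSet().instance(0);
        MultiLabelOutput output = model.makePrediction(instance);

        // The boolean relevance vector asked about in the message above:
        // one entry per label, true = predicted relevant.
        boolean[] bipartition = output.getBipartition();
        // Per-label confidence scores, when the learner provides them.
        double[] confidences = output.getConfidences();
    }
}
```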
From: Nadav S. <ns...@gm...> - 2015-05-27 07:38:01

Hi Greg,

thanks for your answer. I'll clarify my question. In my data each sample has a dataset_ID that tells me which dataset it belongs to (all of the datasets have the same features and labels). Instead of cross-validating by splitting the data into k folds (i.e. on each iteration excluding one fold while training and then predicting that fold), I'd like to split the data into the datasets mentioned (using their dataset ID) and then, on each iteration:

- Exclude one of the datasets (i.e. exclude all samples that belong to that dataset) and train the model on the remaining data.
- Generate predictions for the dataset left out.

In the end I'd like to compare the predicted matrix to the original one and get all relevant performance measurements. This is essentially the same as cross-validating with k folds, only that now each fold is a different dataset.

Any suggestion on how to do this with Mulan?

Also, just to be sure: what exactly is the return value of the evaluate function?

Thank you,
Nadav.
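The leave-one-dataset-out bookkeeping described above is not built into Mulan, but the fold arithmetic is easy to do by hand. A minimal sketch in plain Java (the datasetIds array is a hypothetical stand-in for each sample's dataset_ID): group sample indices by dataset ID, then treat each group in turn as the held-out test fold.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LeaveOneDatasetOut {
    /** Groups sample indices by their dataset ID; each group is one "fold". */
    public static Map<String, List<Integer>> foldsById(String[] datasetIds) {
        Map<String, List<Integer>> folds = new LinkedHashMap<>();
        for (int i = 0; i < datasetIds.length; i++) {
            folds.computeIfAbsent(datasetIds[i], k -> new ArrayList<>()).add(i);
        }
        return folds;
    }

    /** Training indices = every sample NOT in the held-out dataset. */
    public static List<Integer> trainIndices(String[] datasetIds, String heldOut) {
        List<Integer> train = new ArrayList<>();
        for (int i = 0; i < datasetIds.length; i++) {
            if (!datasetIds[i].equals(heldOut)) {
                train.add(i);
            }
        }
        return train;
    }
}
```

For each held-out ID one would then copy the corresponding rows into per-fold MultiLabelInstances objects, train the learner on the training rows, and evaluate on the held-out rows; averaging the per-fold results reproduces what cross-validation does for random folds.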
From: Grigorios T. <gr...@cs...> - 2015-05-26 20:53:05

Hi,

> I have about 20 data sets, and I'd like to perform a leave-study-out cross-validation, i.e. on each iteration I want to train the model on all data sets but one, and then cross validate using the one data set left out. Is there a built-in way to do that in Mulan?

I guess you mean some kind of meta-learning model or hyper-parameter optimization approach, otherwise leave-study-out does not really make sense. In any case, Mulan does not support this.

> If not, should I use the Evaluator.evaluate function in order to cross validate the one data set left out?

I can't say I understand this question. In leave-study-out cross-validation, you should be learning on the 19 datasets and predicting on the 20th; why would you want to do cross-validation on the 20th dataset? I suppose you mean to evaluate a model on the 20th dataset. In this case, the Evaluator.evaluate function would do the trick.

Hope this helps,
Greg
From: Nadav S. <ns...@gm...> - 2015-05-26 14:52:38

Hi everyone,

I have about 20 data sets, and I'd like to perform a leave-study-out cross-validation, i.e. on each iteration I want to train the model on all data sets but one, and then cross validate using the one data set left out. Is there a built-in way to do that in Mulan? If not, should I use the Evaluator.evaluate function in order to cross validate the one data set left out?

Thanks!
Nadav.
From: Grigorios T. <gr...@cs...> - 2015-04-24 20:04:57

Dear Libo,

> There have been some discussions about empty prediction on test instances as it seems to create problems, based on (Spyromitros et al., 2008; Liu et al., 2015). I do not find any real concrete examples in those papers and I wonder if the former Mulan thread is relevant to the problem: [Mulan-list] how to express "negative example"/"no class" instances http://sourceforge.net/p/mulan/mailman/message/26537043/

The problem is concrete. Some algorithms, such as BR, may indeed output an all-zeros vector (0/1 = irrelevant/relevant label). However, in most multi-label learning tasks, all examples have at least 1 label. So, you could say that outputting all zeros should be avoided, and at least the most confident label should be considered relevant (Spyromitros et al., 2008). On the other hand, if the algorithm is too uncertain, forcing it to predict at least one label might lead to false positive predictions. The thread you mention above actually points to a counter-example: a domain application where there are indeed training examples with none of the labels being relevant. In this case, outputting an all-zeros vector is ok.

> But what if, for instance, a dataset with all instances have n number of labels that are all binary (for each instance there will always be exactly n labels)? All single label values with either +1 or -1 have their own meaning. In this case would it be possible to have empty prediction outputs?

Typically -1 (or 0) means the label is not relevant to the example, while +1 means the example is relevant. So, by empty prediction outputs we would mean in this case an all -1 vector of predictions. If however the labels have binary values, such as Gender={Female,Male}, as in the case you bring up here, then indeed empty prediction is not meaningful!

Regards,
Greg
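The fix Greg attributes to Spyromitros et al. (2008) -- when the bipartition comes back all-zeros, promote the single most confident label -- can be sketched as a plain post-processing step on a bipartition/confidence pair. This is a sketch of the idea, not Mulan's own implementation:

```java
public class EmptyPredictionFix {
    /**
     * If no label was predicted relevant, mark the label with the highest
     * confidence as relevant. Returns the (possibly modified) bipartition.
     */
    public static boolean[] forceAtLeastOneLabel(boolean[] bipartition, double[] confidences) {
        for (boolean b : bipartition) {
            if (b) {
                return bipartition; // at least one label predicted; nothing to do
            }
        }
        // All-zeros output: promote the single most confident label.
        int best = 0;
        for (int i = 1; i < confidences.length; i++) {
            if (confidences[i] > confidences[best]) {
                best = i;
            }
        }
        bipartition[best] = true;
        return bipartition;
    }
}
```

As Greg notes, this heuristic only makes sense in domains where every example is guaranteed at least one relevant label; otherwise the all-zeros output should be left alone.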
From: Libo Li <dan...@gm...> - 2015-04-24 16:16:53

Dear Mulan users and maintainers:

There have been some discussions about empty prediction on test instances, as it seems to create problems, based on (Spyromitros et al., 2008; Liu et al., 2015). I do not find any real concrete examples in those papers, and I wonder if this former Mulan thread is relevant to the problem: [Mulan-list] how to express "negative example"/"no class" instances http://sourceforge.net/p/mulan/mailman/message/26537043/

Without a concrete example I find it hard to figure out what exactly the problem is. In some domains the number of labels per instance differs, and some of the instances do not have a label at all. In this setting it is reasonable to think that some test instances will have no labels at all, thus empty label predictions. But what about, for instance, a dataset where all instances have n labels that are all binary (for each instance there will always be exactly n labels)? Every single label value, either +1 or -1, has its own meaning. In this case would it be possible to have empty prediction outputs?

I hope I expressed myself clearly. Thanks for your attention!
Libo

Spyromitros, Eleftherios, Grigorios Tsoumakas, and Ioannis Vlahavas. "An empirical study of lazy multilabel classification algorithms." Artificial Intelligence: Theories, Models and Applications. Springer Berlin Heidelberg, 2008. 401-406.

Liu, Shuhua Monica, and Jiun-Hung Chen. "An empirical study of empty prediction of multi-label classification." Expert Systems with Applications (2015).
From: Leonardo L. <lli...@gm...> - 2015-03-04 14:39:16

Thanks Greg and Lefteris for this valuable information. Lefteris, I'll follow your advice and use the same Mulan dataset format.

Kind Regards,
Leonardo
From: Eleftherios Spyromitros-X. <esp...@cs...> - 2015-03-04 10:19:53

Hi,

I just wanted to add that the implementation of most multi-target regression methods assumes that the targets are placed in the last positions of the dataset. So please use accordingly formatted datasets if you plan to use Mulan for regression.

Regards,
Lefteris

--
Eleftherios Spyromitros-Xioufis
PhD Candidate - Department of Informatics
Aristotle University of Thessaloniki
--
Research Associate - Information Technologies Institute
Centre for Research and Technology Hellas
WWW: http://users.auth.gr/espyromi
From: Grigorios T. <gr...@cs...> - 2015-03-03 20:10:57

Hi,

Multi-target learning in general means that there are multiple target variables. Mulan traditionally addressed multi-label learning, which means multiple *binary* target variables. Recently Mulan has also started to address multi-target *regression*, which means multiple *numeric* target variables.

Example popular datasets for both tasks can be found here: http://mulan.sourceforge.net/datasets.html

Regards,
Greg
From: Leonardo L. <lli...@gm...> - 2015-03-03 15:49:37

Hi,

Generally, what is the difference between a "multi-target learning" dataset and a "multi-label" dataset in terms of dataset design?

What are famous example datasets for multi-target learning and multi-label learning?

Thank you.
From: Grigorios T. <gr...@cs...> - 2015-02-18 20:32:55

Dear Aftab,

I answer you inline. On 17/2/2015 8:27 πμ, Aftab Hassan wrote:

> 1. I now understand that all the labels (target variables) should be binary. But how do I define the training and test data? Say I want to predict values for the two labels predvar1 and predvar2, do I assign a '?' for these variables in the test data? I was however looking at emotions.arff and I don't see any question mark anywhere. Does Mulan internally separate out the training and test data, and we just have to give values for every row?

Emotions is a training set, part of which can be used as a test set. If you have unknown values, i.e. ?, then you are talking about an "unlabeled" set. Please have a look at: http://mulan.sourceforge.net/starting.html

> 2. Also, say two labels that I have are interdependent. Say I am predicting the age of a person, so I have binary labels like Age_30, Age_60, Age_90. How do I ensure that the multi-label classification does not classify the person under two labels, since a person cannot come under both age categories?

You are talking about mutually exclusive labels. Currently there is no way to ensure that only one will get predicted. This should actually be one variable with 4 values, going back to your previous email's question, which is already answered.

> 3. The Mulan homepage also talks about hierarchies. I did not understand this. (Is this somewhere related to my question 2 above?)

This is different. A hierarchy does not mean that only one of the children of a label should be predicted.

Best,
Greg
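For reference, the XML labels file used by the Mulan data format (documented at http://mulan.sourceforge.net/format.html) simply lists one label element per target variable. A minimal file for the two hypothetical labels discussed in this thread might look like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<labels xmlns="http://mulan.sourceforge.net/labels">
    <label name="predvar1"></label>
    <label name="predvar2"></label>
</labels>
```

The label names must match attribute names in the accompanying ARFF file, and (as noted above for multi-target use) those attributes are conventionally the last ones in the dataset.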
From: Aftab H. <aft...@gm...> - 2015-02-17 06:27:37

Thanks Grigorios for the reply, a lot of things were clear from it.

I have a couple of follow-up questions:

1. I now understand that all the labels (target variables) should be binary. But how do I define the training and test data? Say I want to predict values for the two labels predvar1 and predvar2, do I assign a '?' for these variables in the test data? I was however looking at emotions.arff and I don't see any question mark anywhere. Does Mulan internally separate out the training and test data, and we just have to give values for every row?

2. Also, say two labels that I have are interdependent. Say I am predicting the age of a person, so I have binary labels like Age_30, Age_60, Age_90. How do I ensure that the multi-label classification does not classify the person under two labels, since a person cannot come under both age categories?

3. The Mulan homepage also talks about hierarchies. I did not understand this. (Is this somewhere related to my question 2 above?)
From: Grigorios T. <gr...@cs...> - 2015-02-16 20:56:21
|
Hi,

We have never so far added Mulan to the central repository. Probably Meka has added us as a third-party library. We are, however, working towards using Gradle as a build tool and towards a submission to the central repository, but we cannot give an accurate estimate of when this will be achieved. Help is welcome.

Greg

On 16/2/2015 4:35 μμ, Kenan wrote:
> Hi,
>
> I was wondering when the newest version of Mulan will be available on
> mvnrepository.com?
>
> Best regards,
> Kenan Imamovic
|
From: Kenan <ke...@gm...> - 2015-02-16 14:35:40
|
Hi,

I was wondering when the newest version of Mulan will be available on mvnrepository.com?

Best regards,
Kenan Imamovic
|
From: Grigorios T. <gr...@cs...> - 2015-02-16 07:48:57
|
Dear Aftab,

I answer your questions inline.

On 16/2/2015 4:33 πμ, Aftab Hassan wrote:
> I was working on a multi-label classification problem using
> multi-label k-nearest neighbour (MLkNN) in Mulan.
>
> Your library is fantastic, but not a lot of resources or documentation
> are available. I was trying out different things and got some results,
> but I'm not sure if I'm right or totally wrong. I have a few questions.
>
> 1. Say I want to predict the two variables predvar1 and predvar2: am I
> supposed to give these in the xml file?

There are two ways you can do this:
a) Just have these two variables as the last two variables of your dataset and call the MultiLabelInstances constructor with the second argument being the number of output variables, e.g. new MultiLabelInstances(arffFilename, 2);
b) Put these in an xml file according to our schema (http://mulan.sourceforge.net/format.html) and call the constructor you are already using.

> 2. One of the variables I want to predict, say predvar1, can take three
> values: 0, 1 or 2. When I give these in the xml file, I get the error
> "The format of label attribute 'predvar1' is not valid". However, if I
> do the same for a variable which can take only two values, 0 or 1, it
> works fine and gives me some accuracy and other metrics. Why is this?

Mulan addresses primarily multi-label learning tasks, i.e. all target variables should be binary. Recently we have also been addressing problems with all target variables being numeric. Having mixed types of target variables, as well as nominal attributes like {0, 1, 2}, is a future goal.

> 3. Also, for the variables which I want to predict, should I give a
> '?' in the arff file?

Yes, this is Weka's way of indicating unknown values.

> 4. If I give fewer than three labels in the xml file, it gives me an
> error (using mulan 1.3).

Fewer than three labels doesn't make sense for RAkEL.

Cheers,
Greg
|
From: Aftab H. <aft...@gm...> - 2015-02-16 02:33:50
|
I was working on a multi-label classification problem using multi-label k-nearest neighbour (MLkNN) in Mulan.

Your library is fantastic, but not a lot of resources or documentation are available. I was trying out different things and got some results, but I'm not sure if I'm right or totally wrong. I have a few questions.

1. Say I want to predict the two variables predvar1 and predvar2: am I supposed to give these in the xml file?

2. One of the variables I want to predict, say predvar1, can take three values: 0, 1 or 2. When I give these in the xml file, I get the error "The format of label attribute 'predvar1' is not valid". However, if I do the same for a variable which can take only two values, 0 or 1, it works fine and gives me some accuracy and other metrics. Why is this?

3. Also, for the variables which I want to predict, should I give a '?' in the arff file?

4. If I give fewer than three labels in the xml file, it gives me an error (using mulan 1.3).

This is the code I'm using:

import mulan.classifier.lazy.MLkNN;
import mulan.classifier.meta.RAkEL;
import mulan.classifier.transformation.LabelPowerset;
import mulan.data.MultiLabelInstances;
import mulan.evaluation.Evaluator;
import mulan.evaluation.MultipleEvaluation;
import weka.classifiers.trees.J48;
import weka.core.Utils;

public class MulanExp1 {
    public static void main(String[] args) throws Exception {
        String arffFilename = Utils.getOption("arff", args); // e.g. -arff emotions.arff
        String xmlFilename = Utils.getOption("xml", args);   // e.g. -xml emotions.xml

        MultiLabelInstances dataset = new MultiLabelInstances(arffFilename, xmlFilename);

        RAkEL learner1 = new RAkEL(new LabelPowerset(new J48()));
        MLkNN learner2 = new MLkNN();

        Evaluator eval = new Evaluator();
        MultipleEvaluation results;
        int numFolds = 10;

        results = eval.crossValidate(learner1, dataset, numFolds);
        System.out.println(results);
        results = eval.crossValidate(learner2, dataset, numFolds);
        System.out.println(results);
    }
}

PS: I also posted this question on SO.

Thanks a lot
|
From: Eleftherios Spyromitros-X. <esp...@cs...> - 2015-02-09 12:16:54
|
Dear Daniel,

A large number of multi-label classification techniques (most of them implemented in Mulan) belong to the problem transformation family (see for instance [Tsoumakas and Katakis. Multi-label classification: An overview. 2006]). Since all these methods tackle the problem by transforming it into binary or multi-class classification, the comprehensibility of the model comes down to the comprehensibility of the underlying single-label algorithm.

Of course, there are also algorithm adaptation methods that produce very interpretable models, e.g. multi-objective decision trees and rules. You can find such algorithms in the Clus library (http://dtai.cs.kuleuven.be/clus/), for instance. Note also that Mulan implements a wrapper for Clus.

Best,
Lefteris

On 9/2/2015 1:58 μμ, Libo Li wrote:
> Dear Ilze,
>
> Many thanks for the prompt reply! I only know a few of the techniques,
> e.g. MLkNN, random-k and ML instance-based LR. Could you please name a
> few rule-based/comprehensible methods so I can look into the articles?
> Thanks a lot!
>
> Regards,
> Daniel
>
> 2015-02-09 12:39 GMT+01:00 Ilze Birzniece <i.b...@gm...>:
>> Hi Daniel,
>> Not all techniques are black boxes. I used rule algorithms, e.g. JRIP
>> with Binary Relevance, particularly when I needed explainability, and
>> rules provide it. It is not the only option, however.
>>
>> Best regards,
>> Ilze
>>
>> 2015-02-08 17:28 GMT+02:00 Libo Li <dan...@gm...>:
>>> Dear Mulan users:
>>>
>>> I have been studying multi-label classification problems for some
>>> time and the idea of multi-labeling is nice. However, most of the
>>> techniques are black boxes, and I just wonder if there is a way to
>>> extract information about the influential attributes, just to make
>>> the model more comprehensible and to provide some analytical
>>> interpretation. Thanks!
>>>
>>> Regards,
>>> Daniel

--
Eleftherios Spyromitros-Xioufis
PhD Candidate - Department of Informatics
Aristotle University of Thessaloniki
--
Research Associate - Information Technologies Institute
Centre for Research and Technology Hellas
WWW: http://users.auth.gr/espyromi
|
From: Libo Li <dan...@gm...> - 2015-02-09 11:58:52
|
Dear Ilze,

Many thanks for the prompt reply! I only know a few of the techniques, e.g. MLkNN, random-k and ML instance-based LR. Could you please name a few rule-based/comprehensible methods so I can look into the articles? Thanks a lot!

Regards,
Daniel

2015-02-09 12:39 GMT+01:00 Ilze Birzniece <i.b...@gm...>:
> Hi Daniel,
> Not all techniques are black boxes. I used rule algorithms, e.g. JRIP
> with Binary Relevance, particularly when I needed explainability, and
> rules provide it. It is not the only option, however.
>
> Best regards,
> Ilze
>
> 2015-02-08 17:28 GMT+02:00 Libo Li <dan...@gm...>:
>> Dear Mulan users:
>>
>> I have been studying multi-label classification problems for some
>> time and the idea of multi-labeling is nice. However, most of the
>> techniques are black boxes, and I just wonder if there is a way to
>> extract information about the influential attributes, just to make
>> the model more comprehensible and to provide some analytical
>> interpretation. Thanks!
>>
>> Regards,
>> Daniel
|
From: Ilze B. <i.b...@gm...> - 2015-02-09 11:40:09
|
Hi Daniel,

Not all techniques are black boxes. I used rule algorithms, e.g. JRIP with Binary Relevance, particularly when I needed explainability, and rules provide it. It is not the only option, however.

Best regards,
Ilze

2015-02-08 17:28 GMT+02:00 Libo Li <dan...@gm...>:
> Dear Mulan users:
>
> I have been studying multi-label classification problems for some time
> and the idea of multi-labeling is nice. However, most of the techniques
> are black boxes, and I just wonder if there is a way to extract
> information about the influential attributes, just to make the model
> more comprehensible and to provide some analytical interpretation.
> Thanks!
>
> Regards,
> Daniel
|
From: Libo Li <dan...@gm...> - 2015-02-08 15:28:36
|
Dear Mulan users:

I have been studying multi-label classification problems for some time and the idea of multi-labeling is nice. However, most of the techniques are black boxes, and I just wonder if there is a way to extract information about the influential attributes, just to make the model more comprehensible and to provide some analytical interpretation. Thanks!

Regards,
Daniel
|
From: Grigorios T. <gr...@cs...> - 2015-01-16 08:44:10
|
Dear Alexandor,

There is no such classifier in Mulan at the moment, but we are working on implementing one, based on this paper: http://link.springer.com/chapter/10.1007%2F978-3-642-12127-2_2

Regards,
Greg

On 5/1/2015 3:03 μμ, Alexandor mulu wrote:
> Hi,
> I am a new user of Mulan. I want to build a classifier that outputs the
> average prediction from two classifiers. For example, RAkEL and MLkNN
> output a score between 0 and 1 for each label. I want a classifier that
> outputs the average of both classifiers' scores.
> After that I want to evaluate the accuracy of the classifier. Any help?
> Thanks!!!
>
> Best Regards!
|
From: Alexandor m. <ale...@gm...> - 2015-01-05 13:03:55
|
Hi,

I am a new user of Mulan. I want to build a classifier that outputs the average prediction from two classifiers. For example, RAkEL and MLkNN output a score between 0 and 1 for each label. I want a classifier that outputs the average of both classifiers' scores. After that I want to evaluate the accuracy of the classifier. Any help? Thanks!!!

Best Regards!
|
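As noted elsewhere in the thread, Mulan has no built-in combiner for this, but the averaging step itself is simple. A minimal self-contained sketch follows; the class and method names are illustrative, and in Mulan the per-label scores would come from each learner's MultiLabelOutput.getConfidences().

```java
import java.util.Arrays;

public class AveragingSketch {
    // Average two per-label confidence vectors and threshold the result
    // to obtain a boolean bipartition (illustrative, not a Mulan class).
    public static boolean[] averageVote(double[] a, double[] b, double threshold) {
        boolean[] bipartition = new boolean[a.length];
        for (int i = 0; i < a.length; i++) {
            bipartition[i] = (a[i] + b[i]) / 2.0 >= threshold;
        }
        return bipartition;
    }

    public static void main(String[] args) {
        double[] rakelScores = {0.9, 0.2, 0.6};  // hypothetical RAkEL confidences
        double[] mlknnScores = {0.7, 0.4, 0.3};  // hypothetical MLkNN confidences
        // averages are 0.8, 0.3, 0.45, so only the first label is predicted
        System.out.println(Arrays.toString(averageVote(rakelScores, mlknnScores, 0.5)));
    }
}
```

To evaluate such a combiner with Mulan's Evaluator, one would wrap this logic in a subclass of MultiLabelLearnerBase that builds both underlying learners and averages their confidences in makePredictionInternal.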
From: Eleftherios Spyromitros-X. <esp...@cs...> - 2014-12-30 15:35:07
|
Dear Jeiran,

All transformation-based multi-target regression methods implemented in Mulan (including ERCC and MTSC) implement the getModel() method. This method returns a String representation of the model (or models) built for each individual target. This is done by calling the toString() method of the underlying base regressor. Therefore, it depends on what Weka's regression algorithms return in their toString() method. If you use LinearRegression, for instance, you will get what you want.

Below is a small illustrative example that uses the edm dataset and the ST method. Make sure you use the latest Mulan version from the git repository, because there was a bug (which I just fixed) in the getModel() method of ERC.

String dataset = "C:/Jeiran/edm.arff";
MultiLabelInstances train = new MultiLabelInstances(dataset, 2);
LinearRegression lr = new LinearRegression();
SingleTargetRegressor st = new SingleTargetRegressor(lr);
st.build(train);
System.out.println(st.getModel());

This gives you the following output:

-- Model for target DFlow:
Linear Regression Model

DFlow =
    -1.444  * ASM_A_MeanT +
     2.7637 * ASD_A_SDevT +
    -1.2339 * BSM_B_MeanT +
     0.8236 * BSD_B_SDevT +
    -0.9913 * CSM_C_MeanT +
     0.7356 * ISM_I_MeanT +
     1.0654 * BLM_B_MeanT +
    -0.6474 * BLD_B_SDevT +
     0.991  * CLM_C_MeanT +
     0.7737 * CLD_C_SDevT +
    -0.4861 * ILM_I_MeanT +
    -7.3142

-- Model for target DGap:
Linear Regression Model

DGap =
    -2.2829 * ASD_A_SDevT +
     1.0479 * BSM_B_MeanT +
     1.39   * CSM_C_MeanT +
    -0.3682 * ISM_I_MeanT +
    -1.9213 * ALM_A_MeanT +
     2.7593 * ALD_A_SDevT +
    -3.5648 * BLM_B_MeanT +
    -1.1544 * BLD_B_SDevT +
    -0.8381 * CLM_C_MeanT +
     1.5768 * ILM_I_MeanT +
    -12.4362

On 28/12/2014 2:05 πμ, Jeiran Choupan wrote:
> Dear Eleftherios,
>
> Thank you for all the efforts you have made for the library.
>
> I have a question related to the multi-target model h.
>
> Is it possible in your code to see (and save) the features' weight
> values (in other words, feature importance levels, or feature
> statistical maps), something like SVM's weight values or random
> forest's Gini importance levels for each feature?
>
> Could you please point me to the line of code where this is
> calculated? I assume it is different for ERCC and MTSC.
>
> I really appreciate your help.
>
> Happy New Year
>
> Kind regards,
> Jeiran Choupan

--
Eleftherios Spyromitros-Xioufis
PhD Candidate - Department of Informatics
Aristotle University of Thessaloniki
--
Research Associate - Information Technologies Institute
Centre for Research and Technology Hellas
WWW: http://users.auth.gr/espyromi
|
From: Jeiran C. <j.c...@uq...> - 2014-12-28 03:44:31
|
Dear Eleftherios,

Thank you for all the efforts you have made for the library.

I have a question related to the multi-target model h.

Is it possible in your code to see (and save) the features' weight values (in other words, feature importance levels, or feature statistical maps), something like SVM's weight values or random forest's Gini importance levels for each feature?

Could you please point me to the line of code where this is calculated? I assume it is different for ERCC and MTSC.

I really appreciate your help.

Happy New Year

Kind regards,
Jeiran Choupan

PhD candidate | Centre for Advanced Imaging & Queensland Brain Institute
The University of Queensland | Brisbane, Australia
P: (+61) 7 334 60363
W: http://cai.uq.edu.au
|
From: Grigorios T. <gr...@cs...> - 2014-11-25 04:43:58
|
Dear Vivian,

Mulan does not include semi-supervised learning methods. As it is open source, you can study its code and extend it in that direction if you so desire.

Regards,
Greg

On 21/11/2014 10:30 πμ, 念昔 ? wrote:
> Hi,
> First, thank you for the effort that you put into the Mulan library.
> I'm new to using Mulan and I'm trying to do multi-label classification
> using semi-supervised learning, namely a self-learning method. How can
> I modify the code of the Mulan API? For example, how do I set the
> percentage of initially labeled instances, and how are the unlabeled
> instances then labeled? I'm actually working from this paper:
> Araken M. Santos, Anne M. P. Canuto, "Using semi-supervised learning
> in multi-label classification problems",
> http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6252800
> Thank you for your help,
> Vivian
|
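An extension in that direction would follow the usual self-training loop: train on the labeled pool, pseudo-label the most confident unlabeled instances, move them into the labeled pool, and repeat. A self-contained skeleton of that loop is sketched below; the Scorer interface stands in for a trained Mulan learner, and in a real implementation it would be re-fit on the growing labeled pool at the start of every round.

```java
import java.util.ArrayList;
import java.util.List;

public class SelfTrainingSketch {
    // Stand-in for a trained learner's confidence in its prediction
    // for one instance (in Mulan this would come from the learner).
    interface Scorer { double confidence(double[] instance); }

    // Promote unlabeled instances whose prediction confidence clears
    // the threshold into the labeled pool; returns how many were
    // promoted. Note: for brevity the scorer here is fixed, whereas
    // real self-training re-trains it after every round.
    public static int selfTrain(List<double[]> labeled, List<double[]> unlabeled,
                                Scorer scorer, double threshold, int maxRounds) {
        int promoted = 0;
        for (int round = 0; round < maxRounds && !unlabeled.isEmpty(); round++) {
            List<double[]> confident = new ArrayList<>();
            for (double[] x : unlabeled) {
                if (scorer.confidence(x) >= threshold) {
                    confident.add(x);  // pseudo-label accepted
                }
            }
            if (confident.isEmpty()) {
                break;  // nothing confident left: stop early
            }
            labeled.addAll(confident);      // grow the labeled pool
            unlabeled.removeAll(confident); // shrink the unlabeled pool
            promoted += confident.size();
        }
        return promoted;
    }
}
```

The paper referenced in the quoted message additionally controls the percentage of initially labeled instances; that split would be made before calling a loop like this one.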