This list is closed, nobody may subscribe to it.
From: James L. <j.l...@au...> - 2009-05-12 00:40:12

Hi Stefan et al.,

If I may just make a short point here, I have to disagree with Stefan. It seems that stochastic simulations would be within the scope that we'd like MIASE to cover (and I think I recall this being discussed in Auckland). If we were describing a stochastic simulation, a particular simulation experiment described in a MIASE-compliant format would need to be allowed to have different results between iterations of the simulation. This is just one example, but I am sure there are others that would be relevant to us.

Kind regards,
James

> I agree with Nicolas that we should ask for reproducibility. Repeating
> a simulation is insufficient. If a simulation experiment which is
> described in a MIASE-compliant format is allowed to have different
> results, we have not specified anything, at least in my opinion. If the
> result is not reproducible, something is wrong with the initial
> experiment, the second experiment or the MIASE-compliant description.
> Either of the 3 cases can be resolved through further investigation,
> which is the normal scientific process.
>
> Thanks,
> Stefan
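James's point can be sketched in a few lines of Python (a toy decay model; the function and names here are hypothetical illustrations, not part of MIASE or any tool discussed in the thread): a stochastic simulation is not expected to give identical trajectories from run to run, unless the random-number seed is itself recorded as part of the experiment description.

```python
import random

def simulate_decay(n0, p, steps, seed=None):
    """Toy stochastic decay: at each step, every remaining molecule
    independently decays with probability p. Returns the trajectory
    of molecule counts (length steps + 1)."""
    rng = random.Random(seed)
    n, traj = n0, [n0]
    for _ in range(steps):
        n -= sum(1 for _ in range(n) if rng.random() < p)
        traj.append(n)
    return traj

# Two runs of the "same" experiment with unrecorded randomness will
# generally disagree step by step...
run_a = simulate_decay(1000, 0.05, 50)
run_b = simulate_decay(1000, 0.05, 50)

# ...whereas recording the seed as part of the experiment description
# makes the run exactly repeatable.
assert simulate_decay(1000, 0.05, 50, seed=42) == simulate_decay(1000, 0.05, 50, seed=42)
```

This is exactly why a description format for stochastic experiments has to choose between demanding statistical agreement across runs and pinning down the randomness explicitly.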
From: Stefan H. <sh...@vb...> - 2009-05-11 17:30:35

Hello Nicolas,

On Mon, 11 May 2009 16:07:34 +0100 (BST) "Nicolas Le Novere" <le...@eb...> wrote:

> > I agree with Nicolas that we should ask for reproducibility.
>
> I think you got me wrong. I actually advocated the opposite. And I
> believed I was reflecting the Auckland consensus, that said MIASE did
> not deal with correctness.
>
> > Repeating a simulation is insufficient. If a simulation experiment
> > which is described in a MIASE-compliant format is allowed to have
> > different results, we have not specified anything, at least in my
> > opinion.
>
> Yes, we specified a way to discover the discrepancy, which is exactly
> what a materials and methods section is for. Results are described in
> another part of the paper, called ... results. We should encourage the
> use of standard formats to describe numerical results, but I do not
> believe the results themselves are part of a "simulation experiment
> description". And even less the correctness criteria to compare two
> sets of results.

I think we actually agree; I was not precise enough. We need to distinguish between the use of MIASE and its scope. The scope of MIASE is to specify repeatable simulations. The use is to determine the reproducibility of scientific results.

Thanks,
Stefan

--
Stefan Hoops, Ph.D.
Senior Project Associate
Virginia Bioinformatics Institute - 0477
Virginia Tech
Bioinformatics Facility II
Blacksburg, VA 24061, USA
Phone: (540) 231-1799
Fax: (540) 231-2606
Email: sh...@vb...
From: Stefan H. <sh...@vb...> - 2009-05-11 14:02:07

Hello All,

On Mon, 11 May 2009 09:22:38 +0100 Nicolas Le novère <le...@eb...> wrote:

> Dagmar Köhn wrote:
>
> > Regarding the reproducibility...
> > Well, being a bit picky, and because we had kind of a related
> > discussion with Frank and Nicolas, I checked the term
> > reproducibility, which is often defined as: "the ability of a test
> > or experiment to be accurately reproduced, or replicated, by
> > someone else working independently." Re-occurring in all the
> > definitions is the fact that in order to be reproducible, an
> > experiment has to be run by several groups/under different lab
> > conditions. I am not sure, but tend to think that "someone else"
> > might not only include another user on the same simulation tool,
> > but also a different simulation/computational environment?!
>
> There is nothing in your definition that says the tools have to be
> different. The investigators must be different. All this is not at
> all specific to computational biology. The experimental sciences have
> something called materials and methods in papers, which allow someone
> using the same materials, and the same methods, to produce the same
> results and come to the same conclusion. If one finally finds the
> Higgs boson using the LHC, the experiment will have to be reproduced
> many times. I am not certain another multi-billion accelerator will
> be built for that.
>
> > Maybe we want to avoid the discussion on reproducibility and just
> > state that the repetition of experiments is enabled, which is
> > easier to achieve as repeatable does not demand different
> > conditions per definition. ("Repeatability is the variation in
> > measurements taken by a single person or instrument on the same
> > item and under the same conditions. A measurement may be said to be
> > repeatable when this variation is smaller than some agreed limit.")
>
> I really believe people won't understand the difference between
> reproducibility and repeatability (I am not certain I do).
>
> Regarding the use of different tools, two situations are clear (and
> were discussed in Auckland):
> 1) The simulation description is encoded in a standard language, and
> that language is supported by several simulation tools. No problem.
> 2) The simulation tool is a black-box. In that case, the simulation
> description should describe exactly how to use the black-box, but
> also be sufficient to permit running the simulation on another tool,
> including a black-box.
>
> The grey area is the simulation that is run with an open code (not a
> black-box), but cannot be encoded in a standard format. What do we
> require/request/advise/suggest here? My personal inclination is to
> merge it with case 1) above. Since the code is open, people can
> reproduce the simulation (yes, it would take a lot of work, time,
> energy and money. But I am not sure we can use that argument at all).
>
> So at the end we would have open codes and black-boxes. In the first
> case, we request the use of standard formats when a relevant standard
> exists. In the second case, we ask for reproducibility.

I agree with Nicolas that we should ask for reproducibility. Repeating a simulation is insufficient. If a simulation experiment which is described in a MIASE-compliant format is allowed to have different results, we have not specified anything, at least in my opinion. If the result is not reproducible, something is wrong with the initial experiment, the second experiment, or the MIASE-compliant description. Either of the three cases can be resolved through further investigation, which is the normal scientific process.

Thanks,
Stefan

--
Stefan Hoops, Ph.D.
Senior Project Associate
Virginia Bioinformatics Institute - 0477
Virginia Tech
Bioinformatics Facility II
Blacksburg, VA 24061, USA
Phone: (540) 231-1799
Fax: (540) 231-2606
Email: sh...@vb...
From: Richard A. <ra...@st...> - 2009-05-11 09:40:56

Just on the reproducibility/repeatability issue. I did a quick survey of what the wet-lab standards are, looking at instructions for authors, and a quick survey of other MIBBI projects.

Looking at 4 top experimental journals (Cell, Nature MSB, Journal of Cell Biology, EMBO Journal), the only requirement for 'Materials & Methods' sections is that sufficient detail is given so that 'all experimental procedures can be repeated by others', i.e., no mention of reproducing results.

And a quick survey of 4 MIBBI projects (MISFISHIE, MIAPE-GE, MIMIX and MIARE): the first three have goals of enabling understanding/the ability to repeat an experiment, but only MIARE claims as its goal: 'Minimum Information About an RNAi Experiment (MIARE) is a set of reporting guidelines which describes the minimum information that should be reported about an RNAi experiment to enable the unambiguous interpretation and REPRODUCTION of the results' (my caps). The MIARE spec has excruciating detail about all aspects of an RNAi experiment.

So, if we want to claim a MIASE file aims to allow results to be reproduced, we would need to include lots of detail. Or is there a fundamental distinction between computer simulations and experiments, in that simulations are inherently more reproducible and thus do not need such a level of detail? Or should the onus be on the curator of the MIASE file, who could check how robust the results were, using different software, different OSes etc., and come up with some sort of 'reproducibility index', ranging from 'these results are only achieved using MyGreatSoftware version 2.3.1beta on MacOSX 10.5' to 'these results are reproducible using any standard implementation of algorithm X'? But this would be quite a large burden on curators/model builders to evaluate the reproducibility.

Richard

> Mike Cooling wrote:
>> Dagmar,
>>
>> For 3) off the top of my head I would think 'reproducible' could mean:
>>
>> Starting with the MIASE description, and access to the software and
>> hardware required to implement the algorithm(s) suggested therein, can
>> I reproduce the 'desired output' within a 'reasonable' level of
>> tolerance. 'Desired output' might be a data set, a figure, or some
>> relationship derived from those (for stochastic simulations, maybe the
>> 'desired output' is a particular pattern or trend line emerging from a
>> figure plotted from data from multiple runs).
>
> OK, I agree on the desired output.
> Question is how detailed we would define the 'level of tolerance'.
> Maybe we want to leave it as "vague" as this and let the user decide?
>
> Regarding the reproducibility...
> Well, being a bit picky, and because we had kind of a related
> discussion with Frank and Nicolas, I checked the term reproducibility,
> which is often defined as: "the ability of a test or experiment to be
> accurately reproduced, or replicated, by someone else working
> independently." Re-occurring in all the definitions is the fact that
> in order to be reproducible, an experiment has to be run by several
> groups/under different lab conditions.
> I am not sure, but tend to think that "someone else" might not only
> include another user on the same simulation tool, but also a different
> simulation/computational environment?!
>
> Maybe we want to avoid the discussion on reproducibility and just
> state that the repetition of experiments is enabled, which is easier
> to achieve as repeatable does not demand different conditions per
> definition. ("Repeatability is the variation in measurements taken by
> a single person or instrument on the same item and under the same
> conditions. A measurement may be said to be repeatable when this
> variation is smaller than some agreed limit.")
>
> Doing so, we will say that following the MIASE guidelines, experiments
> can be repeated. We do not state, though, that they are reproducible.
> As a consequence, exchanging them between simulation tools will not be
> guaranteed to be possible.
>
> Is that limitation how far we want to go? Or do we want to include the
> reproducibility statement as well? In that case, we probably have to
> define the "agreed limit" or "level of tolerance" for a reproduced
> simulation result for ourselves and the readers.
>
> BTW, I am in favor of including the reproducibility, as this is what
> makes MIASE most exciting ... :-)
>
> Dagmar
>
>> For 5) I also think the two levels might be better as two 'use
>> cases', perhaps towards opposing ends of a continuum, more in line
>> with the idea expressed in:
>>
>> "Information on elements of the simulation system that may have minor
>> but non-substantive effects on simulation outcome, such as the CPU or
>> operating system used, could also be provided but are outside the
>> scope of MIASE and are not considered part of the minimum
>> requirements for reproducibility."
>>
>> My 2c,
>>
>> -----Original Message-----
>> From: Dagmar Köhn [mailto:da...@eb...]
>> Sent: Friday, 8 May 2009 8:49 p.m.
>> To: mia...@li...; p.n...@au...;
>> p.h...@au...; Andrew Miller; j.l...@au...;
>> r.b...@au...; Mike Cooling; Catherine Lloyd;
>> e.c...@au...; David Nickerson; ala...@dp...;
>> db...@mc...; dcook@u.washington.edu; Stefan Hoops; bza...@in...;
>> hsauro@u.washington.edu; ped...@ma...; Dominic Tolle;
>> Vijayalakshmi Chelliah
>> Subject: MIASE paper
>>
>> Dear all,
>>
>> there have been some improvements on the paper (hopefully!), thanks
>> to many helpful comments by Mike Cooling and Frank Bergman (thanks!).
>> I have attached an updated version of the paper draft for you (odt &
>> pdf).
>>
>> Besides some remaining smaller issues, there are also some more
>> striking ones which need further discussion. Summarised further down
>> and waiting for your comments. Surely I did forget to put some
>> issues, so feel free to open the discussion on them!
>>
>> If you have time to go through the paper (again), I'd say that the
>> motivation, "what is not in the scope of MIASE", the repressilator
>> example (which will be enhanced according to a suggestion by MC
>> towards describing post-processing also), and discussion sections are
>> fine for now. But the rules and the sections belonging to them need
>> to be read again (especially "Information on the model" /
>> "- simulation")!
>>
>> Best,
>> Dagmar
>>
>> issues & required opinions:
>>
>> 1) future direction: Do you want me to include some comments on other
>> existing approaches for the description of simulation experiments, or
>> is it irrelevant as we are talking about MIs?
>>
>> 2) the glossary: ... needs to be filled with definitions for
>> simulation, experiment, MIASE-compliance, (model, model instance).
>> Suggestions welcome.
>>
>> 3) results: the notion of correctness/reproducibility.
>> In the paper we currently say that
>> "The Minimum Information About a Simulation Experiment (MIASE) is a
>> set of guidelines that serve as a recommendation for information to
>> be provided together with models in order to enable fast, simple and
>> reproducible simulation."
>> But we do not define nor say what "reproduce"/"reproducible" means to
>> us.
>>
>> Also, we say that
>> "The description of a simulation can lead to different levels of
>> correctness. To be MIASE compliant, a description should reproduce
>> the result of an experiment in a correct manner."
>> But we do not specify what "correct" means for us (when is a
>> simulation result actually really corresponding to an earlier
>> result?, thinking also about stochastic simulations).
>>
>> This directly relates to the next issue:
>> 5) Information on the simulation: the MIASE-compliance level
>> First, I thought it was a good idea to have different levels of MIASE
>> compliance in order to make a difference between levels of detail of
>> the simulation description. But I think this is counterproductive as
>> we call the whole thing an "MI", which should either be minimum or
>> not!? So I suppose we should agree on the MI and not on different
>> levels of it.
>> However, there was the idea of using 2 different MIASE levels (which
>> we could re-use as 2 different use cases where MIASE-compliant
>> descriptions differ a lot, but both are "correct"):
>> [comment Mike Cooling]
>> "I suspect there could be two levels of MIASE information (this is
>> orthogonal to what Mike Hucka was suggesting as in your point on
>> progressive guidelines). One might be conceptual and mention the
>> algorithm used (not implementation), and general processes to achieve
>> the example result. The second level might be more an inventory of
>> how the example result was actually achieved, listing the
>> implementation of said algorithm(s).
>> The first level would be used by those wanting to provide some
>> transparency and quality assessment, and allow those without the
>> exact same technology some kind of shot at reproducing the same (or
>> similar) results. The second ('implementation') level would provide
>> total transparency, which could become useful if a) one doesn't have
>> much time and just wants to exactly reproduce an example output, or
>> b) one tries to use one's own technology to produce the example
>> result via the first level but for some reason can't - the exact
>> details of an example run would be available to help the keen track
>> down the difference between their output and the example."
>>
>> 6) MIASE rules: please, could everyone go through them again and make
>> sure they all make sense, they are "complete" and correspond to what
>> was said in Auckland.
>>
>> 7) open comments from the Auckland meeting
>> There are some of Nicolas' comments on the Auckland meeting left that
>> I did not yet include in the paper, because basically I was not sure
>> what they meant to say. Those are:
>> - (in the rules section, including) Geometry, Space discretization,
>> Physics and Constitutive parameters (PH)
>> - (in the guidelines section, talking about MIASE applicability)
>> Could domain-specific information be encoded in an external
>> ontology? (PH)
>> - (in information on the simulation) "We need to mention procedures
>> to ensure MIASE-compliance. This may be a problem in the case of
>> proprietary code. A repository or a registry of simulation codes may
>> be a way." -> this goes to the compliance discussion actually!
>>
>> 8) KiSAO
>> We had a bit of discussion on whether KiSAO at its current state
>> could satisfy the needs for MIASE-compliant algorithm specification.
>> But I think we found a workaround for the example given in the paper,
>> and the rest of the discussion about KiSAO should go to
>> miase-discuss... Frank!?

--
Dr Richard Adams
Senior Software Developer, Computational Systems Biology Group,
University of Edinburgh
Tel: 0131 650 8285
email: ric...@ed...

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
From: Nicolas Le n. <le...@eb...> - 2009-05-11 08:23:03

Dagmar Köhn wrote:

> Regarding the reproducibility...
> Well, being a bit picky, and because we had kind of a related
> discussion with Frank and Nicolas, I checked the term reproducibility,
> which is often defined as: "the ability of a test or experiment to be
> accurately reproduced, or replicated, by someone else working
> independently." Re-occurring in all the definitions is the fact that
> in order to be reproducible, an experiment has to be run by several
> groups/under different lab conditions.
> I am not sure, but tend to think that "someone else" might not only
> include another user on the same simulation tool, but also a different
> simulation/computational environment?!

There is nothing in your definition that says the tools have to be different. The investigators must be different. All this is not at all specific to computational biology. The experimental sciences have something called materials and methods in papers, which allow someone using the same materials, and the same methods, to produce the same results and come to the same conclusion. If one finally finds the Higgs boson using the LHC, the experiment will have to be reproduced many times. I am not certain another multi-billion accelerator will be built for that.

> Maybe we want to avoid the discussion on reproducibility and just
> state that the repetition of experiments is enabled, which is easier
> to achieve as repeatable does not demand different conditions per
> definition. ("Repeatability is the variation in measurements taken by
> a single person or instrument on the same item and under the same
> conditions. A measurement may be said to be repeatable when this
> variation is smaller than some agreed limit.")

I really believe people won't understand the difference between reproducibility and repeatability (I am not certain I do).

Regarding the use of different tools, two situations are clear (and were discussed in Auckland):

1) The simulation description is encoded in a standard language, and that language is supported by several simulation tools. No problem.

2) The simulation tool is a black-box. In that case, the simulation description should describe exactly how to use the black-box, but also be sufficient to permit running the simulation on another tool, including a black-box.

The grey area is the simulation that is run with an open code (not a black-box), but cannot be encoded in a standard format. What do we require/request/advise/suggest here? My personal inclination is to merge it with case 1) above. Since the code is open, people can reproduce the simulation (yes, it would take a lot of work, time, energy and money. But I am not sure we can use that argument at all).

So at the end we would have open codes and black-boxes. In the first case, we request the use of standard formats when a relevant standard exists. In the second case, we ask for reproducibility.

--
Nicolas LE NOVERE, Computational Neurobiology, EMBL-EBI,
Wellcome-Trust Genome Campus, Hinxton CB101SD UK
Mob: +447833147074, Tel: +441223494521, Fax: 468
Skype: n.lenovere, AIM: nlenovere, MSN: nle...@ho... (NOT email)
http://www.ebi.ac.uk/~lenov/, http://www.ebi.ac.uk/compneur/
From: Dagmar K. <da...@eb...> - 2009-05-11 07:53:58

Mike Cooling wrote:
> Dagmar,
>
> For 3) off the top of my head I would think 'reproducible' could mean:
>
> Starting with the MIASE description, and access to the software and
> hardware required to implement the algorithm(s) suggested therein, can
> I reproduce the 'desired output' within a 'reasonable' level of
> tolerance. 'Desired output' might be a data set, a figure, or some
> relationship derived from those (for stochastic simulations, maybe the
> 'desired output' is a particular pattern or trend line emerging from a
> figure plotted from data from multiple runs).

OK, I agree on the desired output. The question is how detailed we would define the 'level of tolerance'. Maybe we want to leave it as "vague" as this and let the user decide?

Regarding the reproducibility... Well, being a bit picky, and because we had kind of a related discussion with Frank and Nicolas, I checked the term reproducibility, which is often defined as: "the ability of a test or experiment to be accurately reproduced, or replicated, by someone else working independently." Re-occurring in all the definitions is the fact that in order to be reproducible, an experiment has to be run by several groups/under different lab conditions. I am not sure, but tend to think that "someone else" might not only include another user on the same simulation tool, but also a different simulation/computational environment?!

Maybe we want to avoid the discussion on reproducibility and just state that the repetition of experiments is enabled, which is easier to achieve as repeatable does not demand different conditions per definition. ("Repeatability is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. A measurement may be said to be repeatable when this variation is smaller than some agreed limit.")

Doing so, we will say that following the MIASE guidelines, experiments can be repeated. We do not state, though, that they are reproducible. As a consequence, exchanging them between simulation tools will not be guaranteed to be possible.

Is that limitation how far we want to go? Or do we want to include the reproducibility statement as well? In that case, we probably have to define the "agreed limit" or "level of tolerance" for a reproduced simulation result for ourselves and the readers.

BTW, I am in favor of including the reproducibility, as this is what makes MIASE most exciting ... :-)

Dagmar

> For 5) I also think the two levels might be better as two 'use
> cases', perhaps towards opposing ends of a continuum, more in line
> with the idea expressed in:
>
> "Information on elements of the simulation system that may have minor
> but non-substantive effects on simulation outcome, such as the CPU or
> operating system used, could also be provided but are outside the
> scope of MIASE and are not considered part of the minimum
> requirements for reproducibility."
>
> My 2c,
>
> -----Original Message-----
> From: Dagmar Köhn [mailto:da...@eb...]
> Sent: Friday, 8 May 2009 8:49 p.m.
> To: mia...@li...; p.n...@au...;
> p.h...@au...; Andrew Miller; j.l...@au...;
> r.b...@au...; Mike Cooling; Catherine Lloyd;
> e.c...@au...; David Nickerson; ala...@dp...;
> db...@mc...; dcook@u.washington.edu; Stefan Hoops; bza...@in...;
> hsauro@u.washington.edu; ped...@ma...; Dominic Tolle;
> Vijayalakshmi Chelliah
> Subject: MIASE paper
>
> Dear all,
>
> there have been some improvements on the paper (hopefully!), thanks
> to many helpful comments by Mike Cooling and Frank Bergman (thanks!).
> I have attached an updated version of the paper draft for you (odt &
> pdf).
>
> Besides some remaining smaller issues, there are also some more
> striking ones which need further discussion. Summarised further down
> and waiting for your comments. Surely I did forget to put some issues,
> so feel free to open the discussion on them!
>
> If you have time to go through the paper (again), I'd say that the
> motivation, "what is not in the scope of MIASE", the repressilator
> example (which will be enhanced according to a suggestion by MC
> towards describing post-processing also), and discussion sections are
> fine for now. But the rules and the sections belonging to them need
> to be read again (especially "Information on the model" /
> "- simulation")!
>
> Best,
> Dagmar
>
> issues & required opinions:
>
> 1) future direction: Do you want me to include some comments on other
> existing approaches for the description of simulation experiments, or
> is it irrelevant as we are talking about MIs?
>
> 2) the glossary: ... needs to be filled with definitions for
> simulation, experiment, MIASE-compliance, (model, model instance).
> Suggestions welcome.
>
> 3) results: the notion of correctness/reproducibility.
> In the paper we currently say that
> "The Minimum Information About a Simulation Experiment (MIASE) is a
> set of guidelines that serve as a recommendation for information to
> be provided together with models in order to enable fast, simple and
> reproducible simulation."
> But we do not define nor say what "reproduce"/"reproducible" means to
> us.
>
> Also, we say that
> "The description of a simulation can lead to different levels of
> correctness. To be MIASE compliant, a description should reproduce
> the result of an experiment in a correct manner."
> But we do not specify what "correct" means for us (when is a
> simulation result actually really corresponding to an earlier
> result?, thinking also about stochastic simulations).
>
> This directly relates to the next issue:
> 5) Information on the simulation: the MIASE-compliance level
> First, I thought it was a good idea to have different levels of MIASE
> compliance in order to make a difference between levels of detail of
> the simulation description. But I think this is counterproductive as
> we call the whole thing an "MI", which should either be minimum or
> not!? So I suppose we should agree on the MI and not on different
> levels of it.
> However, there was the idea of using 2 different MIASE levels (which
> we could re-use as 2 different use cases where MIASE-compliant
> descriptions differ a lot, but both are "correct"):
> [comment Mike Cooling]
> "I suspect there could be two levels of MIASE information (this is
> orthogonal to what Mike Hucka was suggesting as in your point on
> progressive guidelines). One might be conceptual and mention the
> algorithm used (not implementation), and general processes to achieve
> the example result. The second level might be more an inventory of
> how the example result was actually achieved, listing the
> implementation of said algorithm(s).
> The first level would be used by those wanting to provide some
> transparency and quality assessment, and allow those without the
> exact same technology some kind of shot at reproducing the same (or
> similar) results. The second ('implementation') level would provide
> total transparency, which could become useful if a) one doesn't have
> much time and just wants to exactly reproduce an example output, or
> b) one tries to use one's own technology to produce the example
> result via the first level but for some reason can't - the exact
> details of an example run would be available to help the keen track
> down the difference between their output and the example."
>
> 6) MIASE rules: please, could everyone go through them again and make
> sure they all make sense, they are "complete" and correspond to what
> was said in Auckland.
>
> 7) open comments from the Auckland meeting
> There are some of Nicolas' comments on the Auckland meeting left that
> I did not yet include in the paper, because basically I was not sure
> what they meant to say. Those are:
> - (in the rules section, including) Geometry, Space discretization,
> Physics and Constitutive parameters (PH)
> - (in the guidelines section, talking about MIASE applicability)
> Could domain-specific information be encoded in an external
> ontology? (PH)
> - (in information on the simulation) "We need to mention procedures
> to ensure MIASE-compliance. This may be a problem in the case of
> proprietary code. A repository or a registry of simulation codes may
> be a way." -> this goes to the compliance discussion actually!
>
> 8) KiSAO
> We had a bit of discussion on whether KiSAO at its current state
> could satisfy the needs for MIASE-compliant algorithm specification.
> But I think we found a workaround for the example given in the paper,
> and the rest of the discussion about KiSAO should go to
> miase-discuss... Frank!?
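The "agreed limit" / "level of tolerance" question above can be made concrete with a toy sketch (all names hypothetical, and a deliberately trivial model, dy/dt = -y, standing in for a real simulation experiment): two independently produced results "reproduce" each other only relative to an explicitly stated tolerance, so the tolerance has to be part of the comparison criterion.

```python
import math

def euler(y0, t_end, h):
    """Forward Euler for dy/dt = -y; returns y(t_end)."""
    y = y0
    for _ in range(round(t_end / h)):
        y += h * (-y)
    return y

def reproduces(a, b, rtol):
    """One possible 'agreed limit': relative difference below rtol."""
    return abs(a - b) <= rtol * max(abs(a), abs(b))

reference = math.exp(-1.0)        # exact solution y(1) for y0 = 1
result = euler(1.0, 1.0, 0.001)   # an independent numerical 'repeat'

# The two results are not bitwise equal, yet they agree within a
# stated tolerance -- whether this counts as 'reproduced' depends
# entirely on the agreed limit.
assert result != reference
assert reproduces(result, reference, rtol=1e-3)
assert not reproduces(result, reference, rtol=1e-6)
```

Whether the limit is left "vague" for the user to decide, or fixed by the guidelines, it is this `rtol`-style parameter that the discussion above is really about.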
From: Dagmar K. <da...@eb...> - 2009-05-11 05:46:38

Hi Richard,

thanks for your comments. I hope you don't mind me forwarding them to the list, as I think there are some good discussion points in there.

Dagmar
From: Mike C. <m.c...@au...> - 2009-05-10 21:57:26
|
Dagmar, For 3) off the top of my head I would think 'reproducible' could mean: Starting with the MIASE description, and access to the software and hardware required to implement the algorithm(s) suggested therein, can I reproduce the 'desired output' within a 'reasonable' level of tolerance. 'Desired output' might be a data set, a figure, or some relationship derived from those (for stochastic simulations, maybe the 'desired output' is a particular pattern or trend line emerging from a figure plotted from data from multiple runs). For 5) I also think the two levels might be better as two 'use cases', perhaps towards opposing ends of a continuum, more in line with the idea expressed in: "Information on elements of the simulation system that may have minor but non-substantive effects on simulation outcome, such as the CPU or operating system used, could also be provided but are outside the scope of MIASE and are not considered part of the minimum requirements for reproducibility." My 2c, -----Original Message----- From: Dagmar Köhn [mailto:da...@eb...] Sent: Friday, 8 May 2009 8:49 p.m. To: mia...@li...; p.n...@au...; p.h...@au...; Andrew Miller; j.l...@au...; r.b...@au...; Mike Cooling; Catherine Lloyd; e.c...@au...; David Nickerson; ala...@dp...; db...@mc...; dcook@u.washington.edu; Stefan Hoops; bza...@in...; hsauro@u.washington.edu; ped...@ma...; Dominic Tolle; Vijayalakshmi Chelliah Subject: MIASE paper Dear all, there have been some improvements on the paper (hopefully!), thanks to many helpful comments by Mike Cooling and Frank Bergman (thanks!). I have attached an updated version of the paper draft for you (odt & pdf). Besides some remaining smaller issues, there are also some more striking ones which need further discussions. Summarised further down and waiting for your comments. Surely, I did forget to put issues, so feel free to open the discussion on them! 
If you have time to go through the paper (again), I'd say that the motivation, "what is not in the scope of miase", the repressilator example (which will be enhanced according to a suggestion by MC towards describing post-processing also), and discussion sections are fine for now. But the rules and the sections belonging to them need to be read again (especially "Information on the model" / "- simulation")!

Best, Dagmar

issues & required opinions:

1) future direction: Do you want me to include some comments on other existing approaches for the description of simulation experiments, or is it irrelevant as we are talking about MIs?

2) the glossary: ... needs to be filled with definitions for simulation, experiment, MIASE-compliance, (model, model instance). Suggestions welcome.

3) results: the notion of correctness/reproducibility. In the paper we currently say that "The Minimum Information About a Simulation Experiment (MIASE) is a set of guidelines that serve as a recommendation for information to be provided together with models in order to enable fast, simple and reproducible simulation." But we do not define nor say what "reproduce"/"reproducible" means to us. Also, we say that "The description of a simulation can lead to different levels of correctness. To be MIASE compliant, a description should reproduce the result of an experiment in a correct manner." But we do not specify what "correct" means for us (when does a simulation result actually correspond to an earlier result?, thinking also about stochastic simulations). This directly relates to the next issue:

5) Information on the simulation: the MIASE-compliance level. First, I thought it was a good idea to have different levels of MIASE compliance in order to make a difference between the levels of detail of the simulation description. But I think this is counter-productive, as we call the whole thing an "MI", which should either be minimum or not!? So I suppose we should agree on the MI and not on different levels of it. However, there was the idea of using 2 different MIASE levels (which we could re-use as 2 different use cases where MIASE-compliant descriptions differ a lot, but both are "correct"):

[comment Mike Cooling] "I suspect there could be two levels of MIASE information (this is orthogonal to what Mike Hucka was suggesting as in your point on progressive guidelines). One might be conceptual and mentions the algorithm used (not implementation), and general processes to achieve the example result. The second level might be more an inventory of how the example result was actually achieved, listing the implementation of said algorithm(s). The first level would be used by those wanting to provide some transparency and quality assessment, and allow those without the exact same technology some kind of shot at reproducing the same (or similar) results. The second ('implementation') level would provide total transparency, which could become useful if a) one doesn't have much time and just wants to exactly reproduce an example output, or b) one tries to use one's own technology to produce the example result via the first level but for some reason can't - the exact details of an example run would be available to help the keen track down the difference between their output and the example."

6) MIASE rules: please, could everyone go through them again and make sure they all make sense, they are "complete" and correspond to what was said in Auckland.

7) open comments from the Auckland meeting. There are some of Nicolas' comments on the Auckland meeting left that I did not yet include in the paper, because basically I was not sure what they meant to say. Those are:
- (in the rules section, including) Geometry, Space discretization, Physics and Constitutive parameters (PH)
- (in the guidelines section, talking about MIASE applicability) Could domain-specific information be encoded in an external ontology? (PH)
- (in information on the simulation) "We need to mention procedures to ensure MIASE-compliance. This may be a problem in the case of proprietary code. A repository or a registry of simulation codes may be a way." -> this goes to the compliance discussion actually!

8) KiSAO: We had a bit of discussion on whether KiSAO in its current state could satisfy the needs for MIASE-compliant algorithm specification. But I think we found a workaround for the example given in the paper, and the rest of the discussion about KiSAO should go to miase-discuss... Frank!? |
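The stochastic-reproducibility question raised in issue 3 can be made concrete with a toy sketch (illustrative only, not taken from the paper): with a fixed random seed a stochastic run is exactly repeatable, while across seeds only statistical properties of the output are comparable.

```python
import random

def noisy_decay(x0, rate, steps, seed):
    """Toy stochastic decay: each step removes a random fraction of x."""
    rng = random.Random(seed)
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x -= rate * x * rng.random()
        trajectory.append(x)
    return trajectory

# Same seed: bit-for-bit identical trajectories (exactly repeatable).
assert noisy_decay(100.0, 0.1, 50, seed=42) == noisy_decay(100.0, 0.1, 50, seed=42)

# Different seeds: individual trajectories differ, so "correct" can only
# mean agreement of statistical properties, e.g. the mean endpoint.
endpoints = [noisy_decay(100.0, 0.1, 50, seed=s)[-1] for s in range(200)]
mean_endpoint = sum(endpoints) / len(endpoints)
print(mean_endpoint)
```

This is the distinction a MIASE glossary entry for "reproducible" would have to settle for stochastic simulations: exact repetition given a seed, versus statistical agreement across runs.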
From: Dagmar K. <da...@eb...> - 2009-05-08 08:52:50
|
Dear all,

there have been some improvements on the paper (hopefully!), thanks to many helpful comments by Mike Cooling and Frank Bergmann (thanks!). I have attached an updated version of the paper draft for you (odt & pdf). Besides some remaining smaller issues, there are also some more striking ones which need further discussion, summarised further down and waiting for your comments. Surely I forgot to list some issues, so feel free to open the discussion on them!

If you have time to go through the paper (again), I'd say that the motivation, "what is not in the scope of miase", the repressilator example (which will be enhanced according to a suggestion by MC towards describing post-processing also), and discussion sections are fine for now. But the rules and the sections belonging to them need to be read again (especially "Information on the model" / "- simulation")!

Best, Dagmar

issues & required opinions:

1) future direction: Do you want me to include some comments on other existing approaches for the description of simulation experiments, or is it irrelevant as we are talking about MIs?

2) the glossary: ... needs to be filled with definitions for simulation, experiment, MIASE-compliance, (model, model instance). Suggestions welcome.

3) results: the notion of correctness/reproducibility. In the paper we currently say that "The Minimum Information About a Simulation Experiment (MIASE) is a set of guidelines that serve as a recommendation for information to be provided together with models in order to enable fast, simple and reproducible simulation." But we do not define nor say what "reproduce"/"reproducible" means to us. Also, we say that "The description of a simulation can lead to different levels of correctness. To be MIASE compliant, a description should reproduce the result of an experiment in a correct manner." But we do not specify what "correct" means for us (when does a simulation result actually correspond to an earlier result?, thinking also about stochastic simulations). This directly relates to the next issue:

5) Information on the simulation: the MIASE-compliance level. First, I thought it was a good idea to have different levels of MIASE compliance in order to make a difference between the levels of detail of the simulation description. But I think this is counter-productive, as we call the whole thing an "MI", which should either be minimum or not!? So I suppose we should agree on the MI and not on different levels of it. However, there was the idea of using 2 different MIASE levels (which we could re-use as 2 different use cases where MIASE-compliant descriptions differ a lot, but both are "correct"):

[comment Mike Cooling] "I suspect there could be two levels of MIASE information (this is orthogonal to what Mike Hucka was suggesting as in your point on progressive guidelines). One might be conceptual and mentions the algorithm used (not implementation), and general processes to achieve the example result. The second level might be more an inventory of how the example result was actually achieved, listing the implementation of said algorithm(s). The first level would be used by those wanting to provide some transparency and quality assessment, and allow those without the exact same technology some kind of shot at reproducing the same (or similar) results. The second ('implementation') level would provide total transparency, which could become useful if a) one doesn't have much time and just wants to exactly reproduce an example output, or b) one tries to use one's own technology to produce the example result via the first level but for some reason can't - the exact details of an example run would be available to help the keen track down the difference between their output and the example."

6) MIASE rules: please, could everyone go through them again and make sure they all make sense, they are "complete" and correspond to what was said in Auckland.

7) open comments from the Auckland meeting. There are some of Nicolas' comments on the Auckland meeting left that I did not yet include in the paper, because basically I was not sure what they meant to say. Those are:
- (in the rules section, including) Geometry, Space discretization, Physics and Constitutive parameters (PH)
- (in the guidelines section, talking about MIASE applicability) Could domain-specific information be encoded in an external ontology? (PH)
- (in information on the simulation) "We need to mention procedures to ensure MIASE-compliance. This may be a problem in the case of proprietary code. A repository or a registry of simulation codes may be a way." -> this goes to the compliance discussion actually!

8) KiSAO: We had a bit of discussion on whether KiSAO in its current state could satisfy the needs for MIASE-compliant algorithm specification. But I think we found a workaround for the example given in the paper, and the rest of the discussion about KiSAO should go to miase-discuss... Frank!? |
From: James L. <j.l...@au...> - 2009-04-29 03:24:41
|
Hi Dagmar et al., Looks very good :) I probably don't have much to say on the technical side of things but I do have one minor note, which is that CellML does not actually stand for "Cell Markup Language." The name is just "CellML." Kind regards, James Lawson Dagmar Köhn wrote: > I mixed up journal names ... I also put it wrong in the paper draft. > > Sorry for the confusion and thanks for pointing it out, Nicolas. > It has to be PLoS Computational Biology. > Dagmar > > > Nicolas Le novère wrote: >> da...@eb... wrote: >> >> >>> The most current version is attached as odt and pdf file. Please feel >>> free to read, comment, rewrite. >>> >> >> Dagmar, >> >> Thanks a lot for putting that together. >> >> >>> Nicolas suggested to aim at handing the guidelines in at PLOS >>> Biotechnology. I do hope that finds everybody's agreement, otherwise >>> please let me know. >>> >> >> I think you meant PLoS Computational Biology. >> >> Cheers, >> >> > |
From: Dagmar K. <da...@eb...> - 2009-04-24 21:54:42
|
I mixed up journal names ... I also put it wrong in the paper draft. Sorry for the confusion and thanks for pointing it out, Nicolas. It has to be PLoS Computational Biology. Dagmar Nicolas Le novère wrote: > da...@eb... wrote: > > >> The most current version is attached as odt and pdf file. Please feel >> free to read, comment, rewrite. >> > > Dagmar, > > Thanks a lot for putting that together. > > >> Nicolas suggested to aim at handing the guidelines in at PLOS >> Biotechnology. I do hope that finds everybody's agreement, otherwise >> please let me know. >> > > I think you meant PLoS Computational Biology. > > Cheers, > > |
From: Nicolas Le n. <le...@eb...> - 2009-04-24 21:38:51
|
da...@eb... wrote: > The most current version is attached as odt and pdf file. Please feel > free to read, comment, rewrite. Dagmar, Thanks a lot for putting that together. > Nicolas suggested to aim at handing the guidelines in at PLOS > Biotechnology. I do hope that finds everybody's agreement, otherwise > please let me know. I think you meant PLoS Computational Biology. Cheers, -- Nicolas LE NOVERE, Computational Neurobiology, EMBL-EBI, Wellcome-Trust Genome Campus, Hinxton CB101SD UK, Mob:+447833147074, Tel:+441223494521 Fax:468,Skype:n.lenovere,AIM:nlenovere,MSN:nle...@ho...(NOT email) http://www.ebi.ac.uk/~lenov/, http://www.ebi.ac.uk/compneur/ |
From: <da...@eb...> - 2009-04-24 16:53:08
|
Dear all, Based on the discussions about MIASE we had during meetings in 2008 and especially due to the many comments during the Auckland meeting 2009, we have written down a first proposal for the MIASE guidelines. The most current version is attached as odt and pdf file. Please feel free to read, comment, rewrite. If you are able to make changes using openoffice or any other similar software, please try to track your changes in the document, that will be very helpful when putting the comments together. You are also welcome to E-mail comments either to me or to the whole list. Nicolas suggested to aim at handing the guidelines in at PLOS Biotechnology. I do hope that finds everybody's agreement, otherwise please let me know. Best regards, Dagmar |
From: David N. <dav...@gm...> - 2009-04-15 12:59:37
|
Hi all,

I have started to jot down some notes of the discussions that took place at the workshop in Auckland last week with regard to the use of CellML models with SED-ML (http://apps.sourceforge.net/mediawiki/miase/index.php?title=Auckland-2009-SED-ML). Currently the notes are a bit sparse, but if people can take a peek and either edit directly or post questions to this list, that would be great.

Thanks,
David. |
From: Tommaso M. <ma...@co...> - 2009-04-13 11:02:40
|
Call for Papers

Please find attached the 2nd Call For Papers for:

************************************************************************************
International Workshop on High Performance Computational Systems Biology (HiBi 2009)
Trento, Italy, October 14-16, 2009
http://www.cosbi.eu/hibi09/
************************************************************************************

The HiBi (High performance computational systems Biology) workshop establishes a forum to link researchers in the areas of parallel computing and computational systems biology. One of the main limitations in managing models of biological systems comes from the fundamental difference between the high parallelism evident in biochemical reactions and the sequential environments employed for the analysis of these reactions. Such limitations affect all varieties of continuous, deterministic, discrete and stochastic models, undermining the applicability of simulation techniques and analysis of biological models. The goal of HiBi is therefore to bring together researchers in the fields of high performance computing and computational systems biology. Experts from around the world will present their current work, discuss profound challenges, new ideas, results, applications and their experience relating to key aspects of high performance computing in biology.

IMPORTANT DEADLINES:
paper submission deadline (abstract only)  April 24, 2009
paper submission deadline                  May 1, 2009
acceptance notification                    June 23, 2009
poster abstract deadline                   July 10, 2009
revised papers due                         July 15, 2009
early registration deadline                July 31, 2009

Best regards,
HiBi organizing committee

********************************************************************************
We apologize if you received multiple copies of this Call for Papers. Please feel free to distribute it to those who might be interested.
********************************************************************************

--------------------------------------------------------------------------------
International Workshop on High Performance Computational Systems Biology (HiBi 2009)
October 14-16, 2009
Trento, Italy
--------------------------------------------------------------------------------

2nd Call for Papers (http://www.cosbi.eu/hibi09/cfp.pdf)
HiBi09 poster (http://www.cosbi.eu/hibi09/poster.pdf)

The HiBi (High performance computational systems Biology) workshop establishes a forum to link researchers in the areas of parallel computing and computational systems biology. One of the main limitations in managing models of biological systems comes from the fundamental difference between the high parallelism evident in biochemical reactions and the sequential environments employed for the analysis of these reactions. Such limitations affect all varieties of continuous, deterministic, discrete and stochastic models, undermining the applicability of simulation techniques and analysis of biological models. The goal of HiBi is therefore to bring together researchers in the fields of high performance computing and computational systems biology. Experts from around the world will present their current work and discuss profound challenges, new ideas, results, applications and their experience relating to key aspects of high performance computing in biology.
Topics of interest include, but are not limited to:
- Parallel Stochastic simulation
- Biological and Numerical parallel computing
- Parallel and distributed architectures
- Emerging processing architectures (e.g., Cell processors, GPUs, mixed CPU-FPGA)
- Parallel Model Checking techniques
- Parallel parameters estimation
- Parallel algorithms for biological analysis
- Application of concurrency theory to biology
- Parallel visualization algorithms
- Web-services and internet computing for e-Science
- Tools and applications

Submission

HiBi welcomes submissions for:

Research papers. Papers will be no more than 10 pages long. Additional material for the aid of the reviewers (e.g., proofs) has to be placed in a clearly marked appendix that is not included in the page limit. HiBi referees are at liberty to ignore appendices, and papers must be understandable without them. The paper will be presented as a regular talk.

Tool papers. Papers will be no more than 2 pages long. The paper should describe a new relevant tool together with its features, a comparison with previous tools, an evaluation, or any other information that shows the novelty and the relevance of the tool. The paper will be presented as a tutorial in a specific conference session.

Poster abstracts. A 1-page abstract has to present the poster. The abstract presents interesting recent results or novel ideas that are not quite ready for a regular full-length paper. A poster associated with the paper will be shown in the main conference room, and a dedicated session will be arranged.

Proceedings

All papers submitted to HiBi will be evaluated by at least three reviewers on the basis of their originality, technical quality, scientific or practical impact on the field, and adequacy of references. Submissions must be in English. All accepted papers will appear in the Conference Proceedings published by the IEEE Computing Society Press, and must be presented at the conference by one of the authors.
Papers will need to be in 8.5" x 11", Two-Column Format. Formatting instructions and templates can be obtained from the IEEE Conference Publishing Service (http://www.computer.org/portal/pages/cscps/cps/cps_forms.html). Authors should prepare an Adobe Acrobat PDF version of their papers and submit it together with any other source files. Submission of papers to HiBi09 will be done through the EasyChair conference system (https://www.easychair.org/login.cgi?conf=hibi09). Submissions not adhering to the specified format and length may be rejected immediately, without review.

Special Issues and Prizes

HiBi strongly encourages student participation. Two student awards will be available:
- TC-SIM Best Student Research Paper Award
- SERSC Best Student Tool Paper Award

HiBi also works to spread relevant results. Two special issues will be arranged:
- Briefings in Bioinformatics: a selection of the best research papers will be considered for a special issue of Briefings in Bioinformatics
- SERSC Journal: a selection of the best posters will be invited to submit an extended abstract to the International Journal of Advanced Science and Technology or to the International Journal of Bio-Science and Bio-Technology

Program Committee Members:
- Tommaso Mazza (CoSBi, Italy) - chair
- Mateo Valero (Technical University of Catalonia, Spain) - co-chair
- Yutaka Akiyama (Tokyo Institute of Technology, Japan)
- Paolo Ballarini (CoSBi, Italy)
- Gianfranco Balbo (University of Torino, Italy)
- Lubos Brim (Masaryk University, Czech Republic)
- Kevin Burrage (University of Oxford, UK)
- Hidde De Jong (INRIA Rhône-Alpes, France)
- François Fages (INRIA Paris-Rocquencourt, France)
- Fabrizio Gagliardi (Microsoft Research Cambridge, United Kingdom)
- Martin Leucker (Technische Universität München, Germany)
- Gethin Norman (Oxford University Computing Laboratory, United Kingdom)
- Dave Parker (Oxford University Computing Laboratory, United Kingdom)
- Davide Prandi (CoSBi, Italy)
- Assaf Schuster (Israel Institute of Technology, Israel)
- Koichi Takahashi (RIKEN, Japan)
- David Torrents Arenales (Barcelona Supercomputing Center, Spain)
- Adelinde Uhrmacher (University of Rostock, Germany)

Information related to HiBi 2009 is available at the official HiBi 2009 Web site: http://www.cosbi.eu/hibi09/

--
Tommaso Mazza, Ph. D.
The Microsoft Research - University of Trento Centre for Computational and Systems Biology
piazza Manci 17
38100 Povo, Trento
Tel.: +39 0461 882833
Fax.: +39 0461 882814
E-mail: ma...@co...
Web page: http://www.cosbi.eu/ |
From: Nicolas Le n. <le...@eb...> - 2009-04-06 05:11:07
|
Ion noted that inserting XML is not elegant at the moment, since the XPath of target identifies the piece of XML to replace: inserting a species in SBML means replacing the whole listOfSpecies.

One easy solution would be to add an attribute changeType that could take two values, "insertion" and "replacement" (note that deletion is a replacement by nothing).

changeType="insertion" means insert the "newXML" at the address specified by the target attribute.
changeType="replacement" means replace the XML at the address specified by the target attribute with the "newXML".

--
Nicolas LE NOVERE, Computational Neurobiology, EMBL-EBI, Wellcome-Trust Genome Campus, Hinxton CB101SD UK, Mob:+447833147074, Tel:+441223494521
Fax:468,Skype:n.lenovere,AIM:nlenovere,MSN:nle...@ho...(NOT email)
http://www.ebi.ac.uk/~lenov/, http://www.ebi.ac.uk/compneur/ |
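A hypothetical sketch of how the proposed attribute might look in a change element (illustrative syntax only: the element names, the changeType attribute and the SBML paths are assumptions, not part of any agreed SED-ML draft):

```xml
<!-- Hypothetical: insert a new species without replacing the whole
     listOfSpecies; changeType distinguishes the two operations. -->
<change target="/sbml/model/listOfSpecies"
        changeType="insertion">
  <newXML>
    <species id="S2" initialConcentration="0.5"/>
  </newXML>
</change>

<!-- Hypothetical: replace one existing species in place; deletion
     would be a replacement with an empty newXML. -->
<change target="/sbml/model/listOfSpecies/species[@id='S1']"
        changeType="replacement">
  <newXML>
    <species id="S1" initialConcentration="2.0"/>
  </newXML>
</change>
```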
From: Frank B. <fbergman@u.washington.edu> - 2009-03-27 10:48:34
|
thank you :)

Frank

On Mar 27, 2009, at 10:10 AM, Richard Adams wrote:
>
> In response to Frank's Xpath comments, and my own
> http://sourceforge.net/mailarchive/forum.php?thread_name=020a01c8c5df%2406ce6e70%24146b4b50%24%40washington.edu&forum_name=miase-discuss
>
> I've added an 'sbml' prefix to the example sedml models' XPath strings.
>
> Richard
>
> --
> Dr Richard Adams
> Senior Software Developer,
> Computational Systems Biology Group,
> University of Edinburgh
> Tel: 0131 650 8285
> email : ric...@ed...
>
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336.
>
> _______________________________________________
> Miase-discuss mailing list
> Mia...@li...
> https://lists.sourceforge.net/lists/listinfo/miase-discuss |
From: Richard A. <ra...@st...> - 2009-03-27 10:10:48
|
In response to Frank's Xpath comments, and my own
http://sourceforge.net/mailarchive/forum.php?thread_name=020a01c8c5df%2406ce6e70%24146b4b50%24%40washington.edu&forum_name=miase-discuss

I've added an 'sbml' prefix to the example sedml models' XPath strings.

Richard

--
Dr Richard Adams
Senior Software Developer,
Computational Systems Biology Group,
University of Edinburgh
Tel: 0131 650 8285
email : ric...@ed...

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. |
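For readers unfamiliar with why the 'sbml' prefix matters: an XPath-style expression does not match elements sitting in a default namespace unless a prefix is explicitly bound to the namespace URI. A minimal Python sketch (the toy document and paths are invented for illustration; the namespace URI is the SBML Level 2 one):

```python
import xml.etree.ElementTree as ET

# A minimal SBML-like document whose elements live in the SBML namespace.
SBML_NS = "http://www.sbml.org/sbml/level2"
doc = ET.fromstring(
    '<sbml xmlns="http://www.sbml.org/sbml/level2">'
    '<model><listOfSpecies>'
    '<species id="S1" initialConcentration="1.0"/>'
    '</listOfSpecies></model>'
    '</sbml>'
)

# An unprefixed path silently matches nothing, because every element
# carries the SBML namespace:
assert doc.find("model/listOfSpecies/species") is None

# Binding the 'sbml' prefix to the namespace URI makes the lookup work:
ns = {"sbml": SBML_NS}
species = doc.find("sbml:model/sbml:listOfSpecies/sbml:species", ns)
print(species.get("id"))  # S1
```

This is exactly the failure mode the unprefixed example XPath strings would have hit against a real SBML file.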
From: Frank B. <fbergman@u.washington.edu> - 2009-03-22 23:54:38
|
> Frank, so you mean something like:
>
> String sedMLDoc.getModel(String modelId);
>
Almost, while constructing models you'd still want to deal with properties of the changed model. Something like:

changedModel = new Model(sourceModelReference);
changedModel.AddChange(new ChangeAttribute(sbmlId, newValue));

but then you'd have:

changedModel.ToSBML() / changedModel.ToCellML() / changedModel.ToModelString()

which return you the actual model string.

> which would hide all the need for getting XPath terms, and return
> a String of the model?

That would be more direct, for sure. If I'm consuming a Miase description, I'd definitely not be interested in the nitty-gritty of applying the Xpath queries, that's what the library should do.

Cheers
Frank

> Richard
>
>> Just to say that I agree with everything Frank wrote (hopefully I
>> understood right what he meant ;-)).
>>
>> Dagmar
>>
>> Frank Bergmann wrote:
>>> Hello ...
>>>
>>>> Hi,
>>>> Just a query about the listOfModels element - is it assumed to be
>>>> the case that, for a given sedml document, it will describe a single
>>>> 'reference' model version, which will be the first model in the list,
>>>> and subsequent entries in the list will refer to that base model,
>>>> applying changes (as is the case with the example 'oscillation to
>>>> chaos' model)? Or, is it possible that a given sedml document could
>>>> refer to >1 reference model?
>>>
>>> Definitely ... you can (and should be able to) refer to more than one model.
>>>
>>>> I'm asking from the point of view of how a self-contained .miase
>>>> archive file might look. (By self contained, I mean that the
>>>> application using it will be able to access all the information needed
>>>> to run a simulation without having to query repositories - with the
>>>> help of a supporting library.) In the first case above, it would
>>>> only ever contain a single model file, in the second case, it could
>>>> contain multiple models.
>>>
>>> In my implementation, the self-contained .miase file still has the option
>>> of referring to external sources. Thus there could be 0 or more models
>>> contained in the archive.
>>>
>>>> Also, do we want to allow archives which contain all model variants as
>>>> complete models, or are we going to stick with a 'reference' version
>>>> of the model, and generate new, altered versions of the model by
>>>> applying Xpath changes on the fly at runtime? I assume the latter, but
>>>> I can't find anything in the current examples, or in the CMSB2008.pdf
>>>> to distinguish between these possibilities.
>>>
>>> I hoped to keep the .miase file as simple as possible. It really just
>>> provides the option of storing (locally) referenced files conveniently.
>>> Apart from that there should not be more logic needed to interpret a .miase
>>> file, as compared to the original experiment description. For that reason
>>> I'd opt for making the changes on the fly, they won't be expensive to do.
>>> While writing out you can opt to refer to the altered models if you so
>>> desire, however with that you lose the information that the second model
>>> is actually the first model with a handful of changes ...
>>>
>>>> From the point of view of supporting library methods, I was thinking
>>>> something along the lines of:
>>>>
>>>> MiaseArchive archive = libSedml.readArchive(String filepathToMiaseArchive);
>>>>
>>>> // A MiaseArchive enables access to a sedml document, and model file(s?)
>>>> SedmlDocument document = archive.getSedMlDocument();
>>>> // access Sedml DOM via
>>>> document.getSedMlModel();
>>>>
>>>> // access model file (or is it possible to have > 1 model file here??)
>>>
>>> YES it is ... :) ...
>>>
>>>> // if the latter, need method to iterate over multiple model files.
>>>> String referenceModelContents = archive.getModelAsString();
>>>>
>>>> // now, get XPath change from SedML model and apply it
>>>> ListOfChanges changes = sedMl.getListOfModels.getModelById(modelID).getListOfChanges();
>>>> String alteredModelContents = document.applyChanges(referenceModelContents, changes);
>>>
>>> Instead of
>>>
>>> document.applyChanges(referenceModelContents, changes);
>>>
>>> I'd prefer to just grab the changed model. That's what I did in my
>>> implementation, for a user there is hardly a difference on the two model
>>> types. They have the option of getting the model source from both, and in
>>> that case the changed model applies the changes automatically.
>>>
>>> Best
>>> Frank
>
> --
> Dr Richard Adams
> Senior Software Developer,
> Computational Systems Biology Group,
> University of Edinburgh
> Tel: 0131 650 8285
> email : ric...@ed...
>
> The University of Edinburgh is a charitable body, registered in
> Scotland, with registration number SC005336. |
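The "apply the changes on the fly" idea discussed in this thread can be sketched in a few lines, here in Python rather than the Java-style API of the messages (the function name, the tuple-based change description and the toy model are all invented for illustration):

```python
import copy
import xml.etree.ElementTree as ET

def apply_changes(reference_model, changes):
    """Return a changed copy of the model; the reference model is untouched.

    `changes` is a list of (path, attribute, new_value) tuples standing in
    for the XPath-based change descriptions of a SED-ML document.
    """
    changed = copy.deepcopy(reference_model)
    for path, attr, new_value in changes:
        target = changed.find(path)
        if target is None:
            raise ValueError("change target not found: %s" % path)
        target.set(attr, new_value)
    return changed

# A toy, namespace-free model so the paths stay readable.
reference = ET.fromstring('<model><parameter id="k1" value="0.1"/></model>')
variant = apply_changes(reference, [("parameter[@id='k1']", "value", "0.5")])

print(reference.find("parameter").get("value"))  # 0.1  (reference unchanged)
print(variant.find("parameter").get("value"))    # 0.5
```

Because the reference model is copied rather than edited in place, the archive keeps the information that the second model is the first model plus a handful of changes, which is the point Frank makes above.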
From: Richard A. <ra...@st...> - 2009-03-22 23:36:11
|
Oh great, that all sounds fine then. I'll try to look at your example next week, Dagmar.

Frank, so you mean something like:

String sedMLDoc.getModel(String modelId);

which would hide all the need for getting XPath terms and return a String of the model? That would be more direct, for sure.

Richard

> Just to say that I agree with everything Frank wrote (hopefully I
> understood right what he meant ;-)).
>
> Dagmar
>
> Frank Bergmann wrote:
>> Hello ...
>>
>>> Hi,
>>> Just a query about the listOfModels element - is it assumed to be
>>> the case that, for a given sedml document, it will describe a single
>>> 'reference' model version, which will be the first model in the list,
>>> and subsequent entries in the list will refer to that base model,
>>> applying changes (as is the case with the example 'oscillation to
>>> chaos' model)? Or is it possible that a given sedml document could
>>> refer to >1 reference model?
>>
>> Definitely ... you can (and should be able to) refer to more than one model.
>>
>>> I'm asking from the point of view of how a self-contained .miase
>>> archive file might look. (By self-contained, I mean that the
>>> application using it will be able to access all the information needed
>>> to run a simulation without having to query repositories - with the
>>> help of a supporting library.) In the first case above, it would
>>> only ever contain a single model file; in the second case, it could
>>> contain multiple models.
>>
>> In my implementation, the self-contained .miase file still has the option
>> of referring to external sources. Thus there could be 0 or more models
>> contained in the archive.
>>
>>> Also, do we want to allow archives which contain all model variants as
>>> complete models, or are we going to stick with a 'reference' version
>>> of the model and generate new, altered versions of the model by
>>> applying XPath changes on the fly at runtime? I assume the latter, but
>>> I can't find anything in the current examples, or in the CMSB2008.pdf,
>>> to distinguish between these possibilities.
>>
>> I hoped to keep the .miase file as simple as possible. It really just
>> provides the option of storing (locally) referenced files conveniently.
>> Apart from that, there should not be more logic needed to interpret a .miase
>> file than to interpret the original experiment description. For that reason
>> I'd opt for making the changes on the fly; they won't be expensive to do.
>> While writing out, you can opt to refer to the altered models if you so
>> desire; however, with that you lose the information that the second model
>> is actually the first model with a handful of changes ...
>>
>>> From the point of view of supporting library methods, I was thinking
>>> something along the lines of:
>>>
>>> MiaseArchive archive = libSedml.readArchive(String filepathToMiaseArchive);
>>>
>>> // A MiaseArchive enables access to a sedml document, and model file(s?)
>>> SedmlDocument document = archive.getSedMlDocument();
>>> // access Sedml DOM via
>>> document.getSedMlModel();
>>>
>>> // access model file (or is it possible to have > 1 model file here??)
>>
>> YES it is ... :) ...
>>
>>> // if the latter, need a method to iterate over multiple model files.
>>> String referenceModelContents = archive.getModelAsString();
>>>
>>> // now, get XPath change from SedML model and apply it
>>> ListOfChanges changes =
>>>     sedMl.getListOfModels().getModelById(modelID).getListOfChanges();
>>> String alteredModelContents =
>>>     document.applyChanges(referenceModelContents, changes);
>>
>> Instead of
>>
>> document.applyChanges(referenceModelContents, changes);
>>
>> I'd prefer to just grab the changed model. That's what I did in my
>> implementation; for a user there is hardly a difference between the two model
>> types. They have the option of getting the model source from both, and in
>> that case the changed model applies the changes automatically.
>>
>> Best
>> Frank
>>
>> _______________________________________________
>> Miase-discuss mailing list
>> Mia...@li...
>> https://lists.sourceforge.net/lists/listinfo/miase-discuss

--
Dr Richard Adams
Senior Software Developer,
Computational Systems Biology Group,
University of Edinburgh
Tel: 0131 650 8285
email : ric...@ed...

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
|
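The convenience accessor Richard sketches here (a `getModel(String modelId)` that hides the change list) can be illustrated with a self-contained toy. All class and method names below are invented for illustration, and the SED-ML changes are abstracted as plain string transformations rather than real XPath-addressed edits:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy sketch (hypothetical names): a document holding reference model
// sources plus per-model change operations; getModel(id) applies the
// changes so callers never see them.
public class SedMLDoc {
    private final Map<String, String> sources = new HashMap<>();
    private final Map<String, List<UnaryOperator<String>>> changes = new HashMap<>();

    public void addModel(String id, String xml) {
        sources.put(id, xml);
        changes.put(id, new ArrayList<>());
    }

    // Register a change; in real SED-ML this would be an XPath-addressed
    // change element, abstracted here as a string transformation.
    public void addChange(String id, UnaryOperator<String> change) {
        changes.get(id).add(change);
    }

    // The convenience accessor: returns the model with all of its
    // registered changes already applied.
    public String getModel(String modelId) {
        String model = sources.get(modelId);
        for (UnaryOperator<String> c : changes.get(modelId)) {
            model = c.apply(model);
        }
        return model;
    }

    public static void main(String[] args) {
        SedMLDoc doc = new SedMLDoc();
        doc.addModel("m1", "<model k=\"0.1\"/>");
        doc.addChange("m1", s -> s.replace("0.1", "0.5"));
        System.out.println(doc.getModel("m1")); // <model k="0.5"/>
    }
}
```

This matches Frank's preference of "just grabbing the changed model": the altered variant is never stored, only derived on demand from the reference source.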
From: Dagmar K. <da...@eb...> - 2009-03-22 18:27:25
|
... it is in the svn; sedml-tmp.xsd is updated. Changed: so far, only one plot of each type (2d/3d/report) could occur within the listOfOutput; I have corrected that and set it to unbounded.

Dagmar
|
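Dagmar's fix corresponds to a repeatable choice in the schema. A hypothetical fragment illustrating the idea (element names here are assumed from the discussion, not copied from sedml-tmp.xsd; the point is `maxOccurs="unbounded"`):

```xml
<!-- Hypothetical names; maxOccurs="unbounded" lets any number of
     each output type appear within listOfOutput. -->
<xs:element name="listOfOutput">
  <xs:complexType>
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element ref="plot2D"/>
      <xs:element ref="plot3D"/>
      <xs:element ref="report"/>
    </xs:choice>
  </xs:complexType>
</xs:element>
```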
From: Dagmar K. <da...@eb...> - 2009-03-22 11:45:28
|
Just to say that I agree with everything Frank wrote (hopefully I understood right what he meant ;-)).

Dagmar

Frank Bergmann wrote:
> Hello ...
>
>> Hi,
>> Just a query about the listOfModels element - is it assumed to be
>> the case that, for a given sedml document, it will describe a single
>> 'reference' model version, which will be the first model in the list,
>> and subsequent entries in the list will refer to that base model,
>> applying changes (as is the case with the example 'oscillation to
>> chaos' model)? Or is it possible that a given sedml document could
>> refer to >1 reference model?
>
> Definitely ... you can (and should be able to) refer to more than one model.
>
>> I'm asking from the point of view of how a self-contained .miase
>> archive file might look. (By self-contained, I mean that the
>> application using it will be able to access all the information needed
>> to run a simulation without having to query repositories - with the
>> help of a supporting library.) In the first case above, it would
>> only ever contain a single model file; in the second case, it could
>> contain multiple models.
>
> In my implementation, the self-contained .miase file still has the option
> of referring to external sources. Thus there could be 0 or more models
> contained in the archive.
>
>> Also, do we want to allow archives which contain all model variants as
>> complete models, or are we going to stick with a 'reference' version
>> of the model and generate new, altered versions of the model by
>> applying XPath changes on the fly at runtime? I assume the latter, but
>> I can't find anything in the current examples, or in the CMSB2008.pdf,
>> to distinguish between these possibilities.
>
> I hoped to keep the .miase file as simple as possible. It really just
> provides the option of storing (locally) referenced files conveniently.
> Apart from that, there should not be more logic needed to interpret a .miase
> file than to interpret the original experiment description. For that reason
> I'd opt for making the changes on the fly; they won't be expensive to do.
> While writing out, you can opt to refer to the altered models if you so
> desire; however, with that you lose the information that the second model
> is actually the first model with a handful of changes ...
>
>> From the point of view of supporting library methods, I was thinking
>> something along the lines of:
>>
>> MiaseArchive archive = libSedml.readArchive(String filepathToMiaseArchive);
>>
>> // A MiaseArchive enables access to a sedml document, and model file(s?)
>> SedmlDocument document = archive.getSedMlDocument();
>> // access Sedml DOM via
>> document.getSedMlModel();
>>
>> // access model file (or is it possible to have > 1 model file here??)
>
> YES it is ... :) ...
>
>> // if the latter, need a method to iterate over multiple model files.
>> String referenceModelContents = archive.getModelAsString();
>>
>> // now, get XPath change from SedML model and apply it
>> ListOfChanges changes =
>>     sedMl.getListOfModels().getModelById(modelID).getListOfChanges();
>> String alteredModelContents =
>>     document.applyChanges(referenceModelContents, changes);
>
> Instead of
>
> document.applyChanges(referenceModelContents, changes);
>
> I'd prefer to just grab the changed model. That's what I did in my
> implementation; for a user there is hardly a difference between the two model
> types. They have the option of getting the model source from both, and in
> that case the changed model applies the changes automatically.
>
> Best
> Frank
|
From: Frank B. <fbergman@u.washington.edu> - 2009-03-22 02:05:00
|
Hello ...

> Hi,
> Just a query about the listOfModels element - is it assumed to be
> the case that, for a given sedml document, it will describe a single
> 'reference' model version, which will be the first model in the list,
> and subsequent entries in the list will refer to that base model,
> applying changes (as is the case with the example 'oscillation to
> chaos' model)? Or is it possible that a given sedml document could
> refer to >1 reference model?

Definitely ... you can (and should be able to) refer to more than one model.

> I'm asking from the point of view of how a self-contained .miase
> archive file might look. (By self-contained, I mean that the
> application using it will be able to access all the information needed
> to run a simulation without having to query repositories - with the
> help of a supporting library.) In the first case above, it would
> only ever contain a single model file; in the second case, it could
> contain multiple models.

In my implementation, the self-contained .miase file still has the option
of referring to external sources. Thus there could be 0 or more models
contained in the archive.

> Also, do we want to allow archives which contain all model variants as
> complete models, or are we going to stick with a 'reference' version
> of the model and generate new, altered versions of the model by
> applying XPath changes on the fly at runtime? I assume the latter, but
> I can't find anything in the current examples, or in the CMSB2008.pdf,
> to distinguish between these possibilities.

I hoped to keep the .miase file as simple as possible. It really just
provides the option of storing (locally) referenced files conveniently.
Apart from that, there should not be more logic needed to interpret a .miase
file than to interpret the original experiment description. For that reason
I'd opt for making the changes on the fly; they won't be expensive to do.
While writing out, you can opt to refer to the altered models if you so
desire; however, with that you lose the information that the second model
is actually the first model with a handful of changes ...

> From the point of view of supporting library methods, I was thinking
> something along the lines of:
>
> MiaseArchive archive = libSedml.readArchive(String filepathToMiaseArchive);
>
> // A MiaseArchive enables access to a sedml document, and model file(s?)
> SedmlDocument document = archive.getSedMlDocument();
> // access Sedml DOM via
> document.getSedMlModel();
>
> // access model file (or is it possible to have > 1 model file here??)

YES it is ... :) ...

> // if the latter, need a method to iterate over multiple model files.
> String referenceModelContents = archive.getModelAsString();
>
> // now, get XPath change from SedML model and apply it
> ListOfChanges changes =
>     sedMl.getListOfModels().getModelById(modelID).getListOfChanges();
> String alteredModelContents =
>     document.applyChanges(referenceModelContents, changes);

Instead of

document.applyChanges(referenceModelContents, changes);

I'd prefer to just grab the changed model. That's what I did in my
implementation; for a user there is hardly a difference between the two model
types. They have the option of getting the model source from both, and in
that case the changed model applies the changes automatically.

Best
Frank
|
From: Richard A. <ra...@st...> - 2009-03-22 01:17:34
|
Hi,

Just a query about the listOfModels element - is it assumed to be the case that, for a given sedml document, it will describe a single 'reference' model version, which will be the first model in the list, and subsequent entries in the list will refer to that base model, applying changes (as is the case with the example 'oscillation to chaos' model)? Or is it possible that a given sedml document could refer to >1 reference model?

I'm asking from the point of view of how a self-contained .miase archive file might look. (By self-contained, I mean that the application using it will be able to access all the information needed to run a simulation without having to query repositories - with the help of a supporting library.) In the first case above, it would only ever contain a single model file; in the second case, it could contain multiple models.

Also, do we want to allow archives which contain all model variants as complete models, or are we going to stick with a 'reference' version of the model and generate new, altered versions of the model by applying XPath changes on the fly at runtime? I assume the latter, but I can't find anything in the current examples, or in the CMSB2008.pdf, to distinguish between these possibilities.

From the point of view of supporting library methods, I was thinking something along the lines of:

MiaseArchive archive = libSedml.readArchive(String filepathToMiaseArchive);

// A MiaseArchive enables access to a sedml document, and model file(s?)
SedmlDocument document = archive.getSedMlDocument();
// access Sedml DOM via
document.getSedMlModel();

// access model file (or is it possible to have > 1 model file here??)
// if the latter, need a method to iterate over multiple model files.
String referenceModelContents = archive.getModelAsString();

// now, get XPath change from SedML model and apply it
ListOfChanges changes =
    sedMl.getListOfModels().getModelById(modelID).getListOfChanges();
String alteredModelContents =
    document.applyChanges(referenceModelContents, changes);

// now use your model-specific library to parse the altered model and run the simulation.

If we can clarify this I can put some examples into svn. I can send a class diagram if that would be more explanatory.

Cheers
Richard

--
Dr Richard Adams
Senior Software Developer,
Computational Systems Biology Group,
University of Edinburgh
Tel: 0131 650 8285
email : ric...@ed...

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
|
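The "apply XPath changes on the fly" step discussed in this thread can be sketched with nothing but the standard javax.xml APIs. This is only an illustration of the technique, not the proposed libSedml API: the class and method names below are invented, and a real SED-ML change may target elements as well as attributes.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class ChangeApplier {
    /**
     * Apply a single changeAttribute-style change: set the attribute
     * located by the XPath expression to a new value, and return the
     * serialized altered model. (Hypothetical helper, for illustration.)
     */
    public static String applyAttributeChange(String modelXml,
                                              String xpathToAttribute,
                                              String newValue) throws Exception {
        // Parse the reference model from a string.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new org.xml.sax.InputSource(new StringReader(modelXml)));

        // Locate the target attribute node via XPath and overwrite its value.
        XPath xpath = XPathFactory.newInstance().newXPath();
        Node attr = (Node) xpath.evaluate(xpathToAttribute, doc, XPathConstants.NODE);
        attr.setNodeValue(newValue);

        // Serialize the altered DOM back to a string.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String model = "<model><parameter id=\"k1\" value=\"0.1\"/></model>";
        String altered = applyAttributeChange(model,
                "/model/parameter[@id='k1']/@value", "0.5");
        System.out.println(altered);
    }
}
```

Since the alteration is a single parse/evaluate/serialize pass, this also supports Frank's point that applying changes at runtime "won't be expensive to do".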
From: Richard A. <ra...@st...> - 2009-03-06 12:03:18
|
Hi Frank,

Like you said, it would be useful to help tools decide if they could handle model file content. The format could be very simple, similar to a jar manifest, e.g.:

Description: circadian clock simulations, using different photo-periods
Experiment description: Mysedmlfile.xml
Model Types: sbml, cellml (comma-separated list of model types)

but perhaps you're right, it can be left for now.

Cheers
Richard

>> Other than that, the format Frank proposed seems quite effective -
>> Frank, I hope to be able to test out some .miase files generated by the
>> java library on your website soon (at least before the Biomodels
>> meeting).
>
> That would be great,
> thanks
>
> best
> Frank
>
>> Thanks
>> Richard

--
Dr Richard Adams
Senior Software Developer,
Computational Systems Biology Group,
University of Edinburgh
Tel: 0131 650 8281/8285
email : ric...@ed...

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
|
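Richard's jar-manifest-style idea can be prototyped in a few lines. A sketch under stated assumptions: the keys ("Description", "Experiment description", "Model Types") are the hypothetical entries from his example, and the class and helper names are invented here:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal parser for a jar-style "Key: value" manifest; key names follow
// Richard's proposed (not finalised) format.
public class MiaseManifest {
    public static Map<String, String> parse(String text) {
        Map<String, String> entries = new LinkedHashMap<>();
        for (String line : text.split("\\r?\\n")) {
            int colon = line.indexOf(':');
            if (colon < 0) continue;                 // skip malformed lines
            entries.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        return entries;
    }

    // Split the "Model Types" entry into its comma-separated list, so a
    // tool can check up front whether it supports every model format.
    public static List<String> modelTypes(Map<String, String> entries) {
        List<String> types = new ArrayList<>();
        for (String t : entries.getOrDefault("Model Types", "").split(",")) {
            if (!t.trim().isEmpty()) types.add(t.trim());
        }
        return types;
    }

    public static void main(String[] args) {
        String manifest = String.join("\n",
            "Description: circadian clock simulations, using different photo-periods",
            "Experiment description: Mysedmlfile.xml",
            "Model Types: sbml, cellml");
        System.out.println(modelTypes(parse(manifest))); // [sbml, cellml]
    }
}
```

Only the "Model Types" value is split on commas, so free-text values such as the Description may themselves contain commas.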