This list is closed; nobody may subscribe to it.
Messages archived per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2008 | 1 | 35 | 41 | 4 | 19 | 26 | 3 | 2 | 2 | 1 | | 3 |
| 2009 | 49 | 15 | 17 | 7 | 26 | 1 | 5 | | | 1 | | |
| 2010 | | 1 | 29 | 4 | 31 | 46 | | 5 | 3 | 2 | 15 | |
| 2011 | 8 | 1 | 6 | 10 | 17 | 23 | 5 | 3 | 28 | 41 | 20 | 1 |
| 2012 | 20 | 15 | 1 | 1 | 8 | 3 | 9 | 10 | 1 | 2 | 5 | 8 |
| 2013 | 2 | 1 | | 16 | 13 | 6 | 1 | 2 | 3 | 2 | 6 | 2 |
| 2014 | 4 | 5 | 15 | 16 | | 6 | 3 | 2 | 1 | | 13 | 8 |
| 2015 | 7 | | 3 | | 6 | 24 | 3 | 10 | 36 | 3 | | 39 |
| 2016 | 9 | 38 | 25 | 3 | 12 | 5 | 40 | 13 | 4 | | | 2 |
| 2017 | | | | 2 | 29 | 26 | 12 | | | 4 | | |
From: Nicolas Le n. <le...@eb...> - 2010-03-03 19:34:21
[I move this to sed-ml-discuss for reasons that I hope are obvious]

Frank Bergmann wrote:
> Nicolas Le Novere wrote:
>> As you know there is another library developed by Richard, jlibsedml.
>> I suspect users will expect the same kinds of services from the two
>> libraries. Would there be possibilities for you guys to discuss and
>> document an API that would be common, and re-usable by anyone wanting
>> to develop support for SED-ML in another language? This API would be
>> libsedml, and the other libraries would be instances, like jlibsedml,
>> nlibsedml, plibsedml etc. Whether this should be hosted via sed-ml or
>> via libsedml on sourceforge would be left to be discussed by you
>> developers.
>
> This does sound interesting and I'm definitely open for discussions
> about a generic API. Though I'm not sure yet how much implementing the
> API in one language would help in the implementation for another. Just
> take the two examples we have here: in my implementation I took the
> object model as a guideline in order to implement the library I needed,
> and I made use of other libraries I wrote before to allow for the
> mathematical transformations. The approach for jlibsedml, and please
> correct me if I'm wrong here Richard, was to use JAXB, that is, code
> generation from a schema. While the two APIs will be similar, as the
> schema and the object model hopefully agree with each other, the actual
> programming code will be different.

This discussion is probably too technical for me. What I meant was that:
1) The libraries should be developed in sync, to cover the same versions.
2) A user expects to find the same services offered by the different libraries.

One way could be to move in the same direction as libSBML and JSBML, with the API defined by a list of tests.

--
Nicolas LE NOVERE, Computational Neurobiology, EMBL-EBI, Wellcome-Trust Genome Campus, Hinxton CB101SD UK, Mob: +447833147074, Tel: +441223494521, Fax: 468, Skype: n.lenovere, AIM: nlenovere, MSN: nle...@ho... (NOT email), http://www.ebi.ac.uk/~lenov/, http://www.ebi.ac.uk/compneur/, @lenovere
From: Dagmar W. <dag...@un...> - 2010-02-26 20:29:42
Dear all,

it has been very quiet on the mailing list recently, but nevertheless some work has been done on MIASE and SED-ML.

First of all, you might have noticed that you are now receiving mails from a sed...@li... mailing list. I have moved everyone from the miase-discuss mailing list to sed-ml-discuss; please remove yourself if you were only interested in MIASE development and/or feel you are not interested in the future SED-ML development process and language updates. The reason for the new mailing list is that we have split the SED-ML project on Sourceforge from the MIASE project. As many of you know, we submitted the MIASE guidelines paper just recently (currently under review). That seemed like a good time for SED-ML to have its own project and discussion list from now on. Please note that MIASE does not have a mailing list at the moment. All the archived miase-discuss mails are still available from the SED-ML project website at http://sourceforge.net/mailarchive/forum.php?forum_name=sed-ml-discuss .

In line with the new SF project, we have also restructured the SED-ML homepage, which is now available as part of the biomodels.net initiative at http://biomodels.net/sed-ml .

During the last months we have started to work on a SED-ML specification. Currently, Nicolas Le Novere and I are preparing a draft version which we would like to discuss with you during the biomodels.net meeting in Seattle in April ( http://sbml.org/Events/Hackathons/The_2010_SBML-BioModels.net_Hackathon ). We will have a break-out session dedicated to SED-ML there. The specification will describe SED-ML Level 1 Version 1, that is, the language version discussed back in Okinawa, corresponding to the UML class diagram of October 2008. While we prepare that specification, we hope to continue working toward a full version that would support more features, incorporating the discussion outcomes of previous meetings.

That's the news for now. Please keep me informed of possible SED-ML support on your side, and I will be happy to add it to the website. I hope to see many of you in the SED-ML break-out session in Seattle for discussion of the SED-ML specification draft.

Yours,
Dagmar
From: Dagmar K. <dag...@un...> - 2009-10-08 15:56:08
Hello all,

for those interested in the MIASE guidelines development, I have set up a flow chart (http://en.wikipedia.org/wiki/Flowchart) to describe the current MIASE requirements. I would be very happy for suggestions on enhancing the diagram and for the spotting of mistakes in it - both regarding the workflow itself and regarding the naming of processes and decisions.

Best,
Dagmar
From: Frank B. <fbergman@u.washington.edu> - 2009-07-24 15:37:50
On Jul 24, 2009, at 4:19 AM, Dagmar Koehn wrote:
> Richard Adams wrote:
>> Hi Dagmar
>> The system of having a supported schema and an experimental one
>> seems fine, especially at this early stage. I don't think we want to
>> have more than 2 though, it would be too confusing. Is there any
>> reason not to include the new simulation classes?
>
> Not for me, I thought you were voting for it ;-)

I'm fine with the OM as it is on SF (looking at the PDF). I have not started to implement it, as I'm not sure we have actually started to discuss the new classes. The only issue I see currently is with the AnySimulation class, which has arbitrary SimulationProperty elements attached. As it is not regulated at all, I'm not sure what to do with them. I'd also like to see an example of how the new Ranges would work... And finally it might make sense for backward compatibility to keep our uniform time course simulation class... but that is optional :)

So far my thoughts
cheers
Frank

> But then I got you wrong.
> Sorry.
>
>> Richard
> [...]
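Frank's concern about the unregulated SimulationProperty elements is easy to see in a concrete instance. The fragment below is purely hypothetical - the element and attribute spellings are invented here to mirror the class names in the draft UML, and the draft schema may serialise them differently - but it shows how an open key/value bag lets two tools encode the same conceptual experiment with disjoint, mutually unintelligible property sets:

```xml
<!-- Hypothetical AnySimulation instance; all names invented for illustration -->
<anySimulation id="sim1" name="custom sweep">
  <!-- arbitrary, unconstrained key/value properties -->
  <simulationProperty name="solver" value="my-tool-specific-integrator"/>
  <simulationProperty name="tolerance" value="1e-6"/>
  <!-- another tool might instead write: -->
  <!-- <simulationProperty name="integrator" value="cvode"/> -->
  <!-- <simulationProperty name="absTol" value="0.000001"/> -->
</anySimulation>
```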
From: Dagmar K. <dag...@un...> - 2009-07-24 11:19:35
Richard Adams wrote:
> Hi Dagmar
> The system of having a supported schema and an experimental one
> seems fine, especially at this early stage. I don't think we want to
> have more than 2 though, it would be too confusing. Is there any
> reason not to include the new simulation classes?

Not for me, I thought you were voting for it ;-) But then I got you wrong. Sorry.

> Richard
> [...]
From: Richard A. <ra...@st...> - 2009-07-24 09:32:58
Hi Dagmar

The system of having a supported schema and an experimental one seems fine, especially at this early stage. I don't think we want to have more than 2 though, it would be too confusing. Is there any reason not to include the new simulation classes?

Richard

> Hej Richard,
>
> sorry for the late reply...
> Not sure what you mean by "first MIASE paper", but
> I agree that v0r1 is pretty outdated.
> [...]

--
Dr Richard Adams
Senior Software Developer, Computational Systems Biology Group, University of Edinburgh
Tel: 0131 650 8285
email: ric...@ed...
From: Dagmar K. <dag...@un...> - 2009-07-24 09:10:52
Hej Richard,

sorry for the late reply... Not sure what you mean by "first MIASE paper", but I agree that v0r1 is pretty outdated. I would say that sed-tmp is accepted well enough to be turned into v0r2. But then we should include the new simulation classes as well. I am reluctant to put out a new "release" of the schema that does not use the most current one (i.e. including the changed simulation classes).

To my knowledge, apart from you, Frank currently has an implementation, so I'd like to know what he thinks about updating the schema to the new version? Frank?

(Maybe the same question goes to Ion?! How far are you with SED-ML support?)

Best,
Dagmar

Richard Adams wrote:
> Hi Dagmar,
> Is there any intention/need to add these changes to the Sedml schema
> just yet, or is it better to wait, especially if the first MIASE
> paper will only cover standard simulations?
> [...]
From: Richard A. <ra...@st...> - 2009-07-15 23:16:07
Hi Dagmar,

Is there any intention/need to add these changes to the Sedml schema just yet, or is it better to wait, especially if the first MIASE paper will only cover standard simulations?

Also I was wondering if the 'sedml-tmp' schema is now accepted enough to become the current schema? At present we have the original version-0-release-1, which now seems rather outdated, since it does not support notes and annotations. E.g., in order for all the example models to be compliant with the version-0-release-1 schema, we need to add support for notes via a SedBase extension, and also to add 'maxOccurs="unbounded"' attributes into the listOfOutputs definition. Is this worth doing, or should we just make the sedml-tmp into version-0-release-2?

Cheers
Richard

> Dear all,
>
> we had some discussion on extending the SED-ML simulation class during
> the CellML combined workshop in Auckland.
> [...]

--
Dr Richard Adams
Senior Software Developer, Computational Systems Biology Group, University of Edinburgh
Tel: 0131 650 8285
email: ric...@ed...
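For readers not steeped in XML Schema, the issue Richard raises is that an element inside an xs:sequence may occur exactly once by default, so without maxOccurs="unbounded" a listOfOutputs would validate with only a single output child. A minimal sketch of the change might look like the following (the element and type names here are guesses at the v0r1 schema, shown only to locate the maxOccurs attribute):

```xml
<!-- Sketch only; the actual v0r1 element/type names may differ -->
<xs:element name="listOfOutputs">
  <xs:complexType>
    <xs:sequence>
      <!-- maxOccurs="unbounded" lets a document declare any number of outputs -->
      <xs:element name="output" type="Output"
                  minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```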
From: Dagmar K. <da...@eb...> - 2009-06-16 08:24:34
Dear all,

we had some discussion on extending the SED-ML simulation class during the CellML combined workshop in Auckland. Frank and I tried to come up with a good class structure to map bifurcation analyses and steady state analyses. The corresponding UML diagram can be found on sourceforge (PDF): http://miase.svn.sourceforge.net/viewvc/miase/sed-ml/documents/sed-om/sedom-tmp.pdf?revision=115

I marked the changed classes in red. We now have three main simulation classes, namely: BifurcationSearch1D (a bifurcation analysis over a parameter with a uniform range), TimeCourse (time courses with a uniform, vector or functional range) and SteadyStateParameterScan1D (again over 1 parameter with different ranges).

Questions are:
1) Do you think this is a good way/structure of mapping steady state and bifurcation experiments to SED-ML?
2) Frank suggested that the AnySimulation class should be removed from the diagram as (1) the use of already defined classes should be encouraged, and (2) self-defined simulation experiment classes cannot be reused anyway... He also mentioned that (3) it would still be possible to describe an experiment that is not representable in SED-ML so far: it could always be described inside the notes/annotation element. I thought, however, that it might be useful to have at least a structure (even if very general) for self-defined experiment types. But maybe it is sufficient to state in the documentation that "not-representable experiments should be defined in the corresponding notes/annotation". Are there any opinions on that?

Best,
Dagmar
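As a rough illustration of how the three proposed classes might look in a document, here is a hypothetical instance fragment. Only the class names (BifurcationSearch1D, TimeCourse, SteadyStateParameterScan1D) and the range kinds come from the UML draft above; every element and attribute spelling below is invented for illustration and would be fixed by the eventual schema:

```xml
<!-- Hypothetical serialisation of the three proposed simulation classes -->
<listOfSimulations>
  <!-- time course over a uniform range of output points -->
  <timeCourse id="tc1">
    <uniformRange start="0" end="100" numberOfPoints="1000"/>
  </timeCourse>
  <!-- bifurcation analysis sweeping one parameter over a uniform range -->
  <bifurcationSearch1D id="bif1" parameter="k1">
    <uniformRange start="0.1" end="10" numberOfPoints="100"/>
  </bifurcationSearch1D>
  <!-- steady-state scan over one parameter, here with an explicit vector range -->
  <steadyStateParameterScan1D id="scan1" parameter="Vmax">
    <vectorRange values="0.5 1.0 2.0 4.0"/>
  </steadyStateParameterScan1D>
</listOfSimulations>
```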
From: Beard, D. <db...@mc...> - 2009-05-29 20:00:32
All,

I am a bit reluctant to join this discussion late, and worried that I have not caught all of the points that have been brought up. However, I have finally got to go through the paper and I have some comments. I started marking up the paper by editing the text, but I realized that I was essentially rewriting every sentence. I concluded that maybe that is not the thing for me to do at this stage, for two reasons: First, I don't think you want this paper to be "written by a committee". Rather, I think that it would be better written with a single voice, and since you have done a great job of getting this going, it will be great if that voice is yours. That said, I don't think that the paper is in shape yet for editing. So, I will limit my comments to the big picture.

In that spirit, I think that an overall goal here is to maintain enough flexibility in these guidelines to keep them practical, with enough rigorous strength to make them a powerful standard. My biggest concern is with the Rules. I do not see how the 3 points outlined are entirely clear and ideally organized. For one thing, I am unsure of how the various subheadings (A, B, C) fit under the headings 1, 2, and 3. For example, 1-A and 1-C are mathematical requirements, while 1-B is an implementation issue. Yet the main headings 1, 2, and 3 are all focused on technical implementation. I would suggest a different organization, perhaps separating mathematical issues from computational/implementation issues. For example, how about the following:

Rules for MIASE...
------------------
The overall guiding principle is that reported simulations must be unambiguously reproducible. To do this, it is required that:

1. The mathematical specification of a model must be complete.
   A. If it is claimed that the model/simulation has a unique solution, then all governing equations, parameter values, and necessary conditions (initial conditions and/or boundary conditions) are provided.
   B. For models that include probabilistic/stochastic elements (e.g., chemical master equation, Langevin equation, etc.), the initial state as well as all equations/rules used to generate a trajectory are provided.
   C. ...

2. All relevant information on the computational implementation must be provided.
   A. The simulation algorithm needed to perform the experiment must be clearly identified.
   B. ...
------------------

I think that all or nearly all of your points could be organized in this manner, if you think this makes any sense. So, what do you all think?

Dan

-----Original Message-----
From: Dagmar Köhn [mailto:da...@eb...]
Sent: Friday, May 29, 2009 1:04 AM
To: Poul Nielsen
Cc: mia...@li...; p.n...@au...; p.h...@au...; Vijayalakshmi Chelliah; j.l...@au...; hsauro@u.washington.edu; ped...@ma...; Catherine Lloyd; e.c...@au...; bza...@in...; ala...@dp...; Beard, Dan; Stefan Hoops
Subject: Re: Dagmar Köhn: MIASE comments

[...]
From: Dagmar K. <da...@eb...> - 2009-05-29 06:04:33
Dear Poul,

having looked at your suggested changes, there is one thing remaining where I tend not to agree. It concerns the use of repeat/reproduce. It is probably a minor thing, but I just wanted to inform you in case you strongly disagree on the first point and want to object. The last comment I found on the topic in my mail folder was by Stefan, saying that: "We need to distinguish between the use of MIASE and its scope. The scope of MIASE is to specify repeatable simulations. The use is to determine reproducibility of scientific results."

You suggested 3 changes of "repeat" into "reproduce" in the document; those were the following:

1) (Information on the Models) "The MIASE guidelines demand that those changes are defined within the experiment description in a way that allows their later repetition (i.e. be traceable), leading to the same instance of the model."
Here, I'd say that we are talking about the scope of MIASE rather than the use. I think 'repeat' fits better, so I'd like to leave it.

2) (Discussion) "A quantitative model is only useful when it can be simulated in a meaningful way - and in order to do so, model users must be supported in setting up and repeating simulation experiments. Once simulations can be easily reproduced, scientists will start to use models and their simulations as given. The simulations become reliable sources for the composition of models of larger systems which can be re-used instead of starting from the very beginning over and over again."
Here, I agree to change 'repeat' to 'reproduce', i.e.: "A quantitative model is only useful when it can be simulated in a meaningful way - and in order to do so, model users must be supported in setting up and reproducing simulation experiments. Once simulations can be easily reproduced, scientists will start to use models and their simulations as given. The simulations become reliable sources for the composition of models of larger systems which can be re-used instead of starting from the very beginning over and over again."

Best,
Dagmar

Poul Nielsen wrote:
> Dear Dagmar
>
> I have suggested a couple of minor changes to the MIASE document.
> These should be visible as I have turned 'Track Changes' on, as
> suggested. It is not clear to me how the altered document should be
> uploaded, so I am sending my version directly to you.
>
> Best wishes
> Poul
From: Jonathan C. <jon...@co...> - 2009-05-22 11:58:14
Dagmar Köhn wrote:
> Hej Jonathan,
>
>> Rules:
>>
>> 1. Parts A and C are very similar. I would be inclined to include
>> boundary conditions within A. Peter Hunter's comment refers to
>> PDE-based models, where the same model (in terms of PDEs) can be
>> discretised in many ways (even using different weak forms). How
>> much of this should be considered part of the model, and how
>> much part of the simulation algorithm, is debatable. This is
>> also covered to some extent by 2.E. So it might be best to
>> remove 1.C entirely, and just have an explanatory comment about
>> PDE models.
>
> Hmm, not sure I agree or not. In any case it would make sense to swap
> the order of the rules, like 1B-1A-1C.

That sounds reasonable.

> Whether or not to merge 1A and 1C I cannot decide.

The mention of 'initial conditions' in 1A does imply time course simulations, which might be too restrictive (this goes back to the scope of MIASE). What about boundary value problems, for instance?

> I'd propose:
> D. If a model is referenced as a piece of implementation code, then
> all information needed to simulate it correctly must be provided.
> 1. Use of open code is encouraged as much as possible, as it is the
> best way to evaluate the quality of a simulation.
> 2. If using closed code (black box), then all information needed to
> simulate it correctly must be provided. Several independent codes
> must provide the same result.

The phrase "all information needed to simulate it correctly must be provided" is still repeated. But then, it's important enough that it could be worth emphasising. I don't have a strong opinion on this.

>> Information on the Models:
>>
>> * I'd be inclined to agree with MC's comment that little needs to
>> be said about changing parameters. Perhaps just "Model changes
>> do not only include atomic changes such as the assignment of new
>> values (such as covered under rule 1.A.)."
>
> I think mentioning some concrete examples makes it easier for the
> reader. But if I am the only one, we can well shorten the paragraph.

I don't think it's a big issue.

>> Information on the Simulations:
>>
>> * Regarding parameter scans, perhaps "such as the range of
>> parameters considered in a parameter scan"
>
> Could not find what this relates to?

It was related to the comment "find a better expression for step size". So change "That includes but is not limited to the simulation algorithm, or information such as the step size in the case of parameter scans, for example." to "That includes but is not limited to the simulation algorithm, or information such as the range of parameters considered in a parameter scan, for example."

Jonathan
From: Jonathan C. <jon...@co...> - 2009-05-22 11:14:38
Dagmar Köhn wrote:
> MODEL
> - a set of mathematical equations and associated data representing the
> operation or features of a biological process or system (Ref?)

I don't have a reference for this - I invented it based on wikipedia and dictionary definitions.

Jonathan
From: Dagmar K. <da...@eb...> - 2009-05-20 15:44:18
Hej Jonathan,

> Rules:
>
> 1. Parts A and C are very similar. I would be inclined to include
> boundary conditions within A. Peter Hunter's comment refers to
> PDE-based models, where the same model (in terms of PDEs) can be
> discretised in many ways (even using different weak forms). How
> much of this should be considered part of the model, and how
> much part of the simulation algorithm, is debatable. This is
> also covered to some extent by 2.E. So it might be best to
> remove 1.C entirely, and just have an explanatory comment about
> PDE models.

Hmm, not sure I agree or not. In any case it would make sense to swap the order of the rules, like 1B-1A-1C. Whether or not to merge 1A and 1C I cannot decide.

> 2. For C I don't think the seed is required if results are only
> required to be reproducible within a tolerance specified by the
> simulation author. Of course, if they wish exactly matching
> results, they will need to provide the seed. D.a is rather
> repetitive. Perhaps D should change to read:
> D. If a model is referenced as a piece of implementation
> code, then all information needed to simulate it correctly
> must be provided.
> 1. Use of open code is encouraged as much as possible,
> as it is the best way to evaluate the quality of a
> simulation.
> 2. If closed code (black boxes) is used, then several
> independent codes should provide the same result.

I'd propose:
D. If a model is referenced as a piece of implementation code, then all information needed to simulate it correctly must be provided.
1. Use of open code is encouraged as much as possible, as it is the best way to evaluate the quality of a simulation.
2. If using closed code (black box), then all information needed to simulate it correctly must be provided. Several independent codes must provide the same result.

> Information on the Models:
>
> * I'd be inclined to agree with MC's comment that little needs to
> be said about changing parameters. Perhaps just "Model changes
> do not only include atomic changes such as the assignment of new
> values (such as covered under rule 1.A.)."

I think mentioning some concrete examples makes it easier for the reader. But if I am the only one, we can well shorten the paragraph.

> Information on the Simulations:
>
> * Regarding parameter scans, perhaps "such as the range of
> parameters considered in a parameter scan"

Could not find what this relates to?

> * Regarding KiSAO, perhaps add a comment that such vocabularies
> are still at a very early stage of development?

Added.

> * Regarding levels of compliance, I think the focus should indeed
> be on defining a single minimum standard. However, we can still
> encourage people to provide more information!
> * I think, as I mentioned in my previous email and as MC has
> commented, a MIASE-compliant description should also specify a
> tolerance level on results. This would then define what is meant
> by 'correct'. This could even be made a rule.

I think there is disagreement between people on the list, thus both points need to be discussed :-)

Dagmar

> On 08/05/09 09:48, Dagmar Köhn wrote:
>> Dear all,
>>
>> there have been some improvements on the paper (hopefully!), thanks to
>> many helpful comments by Mike Cooling and Frank Bergman (thanks!). I
>> have attached an updated version of the paper draft for you (odt & pdf).
>>
>> Besides some remaining smaller issues, there are also some more
>> striking ones which need further discussion, summarised further down
>> and waiting for your comments. Surely I forgot to include some issues,
>> so feel free to open the discussion on them!
>>
>> If you have time to go through the paper (again), I'd say that the
>> motivation, "what is not in the scope of MIASE", the repressilator
>> example (which will be enhanced according to a suggestion by MC towards
>> also describing post-processing), and discussion sections are fine for
>> now. But the rules and the sections belonging to them need to be read
>> again (especially "Information on the model" / "- simulation")!
>>
>> Best,
>> Dagmar
>>
>> Issues & required opinions:
>>
>> 1) future direction: Do you want me to include some comments on other
>> existing approaches for the description of simulation experiments, or
>> is it irrelevant as we are talking about MIs?
>>
>> 2) the glossary: ...needs to be filled with definitions for simulation,
>> experiment, MIASE-compliance, (model, model instance). Suggestions
>> welcome.
>>
>> 3) results: the notion of correctness/reproducibility.
>> In the paper we currently say that "The Minimum Information About a
>> Simulation Experiment (MIASE) is a set of guidelines that serve as a
>> recommendation for information to be provided together with models in
>> order to enable fast, simple and reproducible simulation." But we do
>> not define nor say what "reproduce"/"reproducible" means to us.
>> Also, we say that "The description of a simulation can lead to
>> different levels of correctness. To be MIASE compliant, a description
>> should reproduce the result of an experiment in a correct manner." But
>> we do not specify what "correct" means for us (when is a simulation
>> result actually really corresponding to an earlier result? - thinking
>> also about stochastic simulations).
>>
>> This directly relates to the next issue:
>> 5) Information on the simulation: the MIASE-compliance level.
>> First, I thought it was a good idea to have different levels of MIASE
>> compliance in order to make a difference between levels of detail of
>> the simulation description. But I think this is counter-productive as
>> we call the whole thing an "MI", which should either be minimum or
>> not!? So I suppose we should agree on the MI and not on different
>> levels of it.
>> However, there was the idea of using 2 different MIASE levels (which we
>> could re-use as 2 different use cases where MIASE-compliant
>> descriptions differ a lot, but both are "correct"):
>> [comment Mike Cooling]
>> "I suspect there could be two levels of MIASE information (this is
>> orthogonal to what Mike Hucka was suggesting in your point on
>> progressive guidelines). One might be conceptual and mention the
>> algorithm used (not the implementation), and the general processes to
>> achieve the example result. The second level might be more an inventory
>> of how the example result was actually achieved, listing the
>> implementation of said algorithm(s).
>> The first level would be used by those wanting to provide some
>> transparency and quality assessment, and allow those without the exact
>> same technology some kind of shot at reproducing the same (or similar)
>> results. The second ('implementation') level would provide total
>> transparency, which could become useful if a) one doesn't have much
>> time and just wants to exactly reproduce an example output, or b) one
>> tries to use one's own technology to produce the example result via the
>> first level but for some reason can't - the exact details of an example
>> run would be available to help the keen track down the difference
>> between their output and the example."
>>
>> 6) MIASE rules: please, could everyone go through them again and make
>> sure they all make sense, they are "complete", and they correspond to
>> what was said in Auckland.
>>
>> 7) open comments from the Auckland meeting
>> There are some of Nicolas' comments on the Auckland meeting left that I
>> did not yet include in the paper, because basically I was not sure what
>> they meant to say. Those are:
>> - (in the rules section, including) Geometry, Space discretization,
>> Physics and Constitutive parameters (PH)
>> - (in the guidelines section, talking about MIASE applicability) Could
>> domain-specific information be encoded in an external ontology? (PH)
>> - (in information on the simulation) "We need to mention procedures to
>> ensure MIASE-compliance. This may be a problem in the case of
>> proprietary code. A repository or a registry of simulation codes may be
>> a way." -> this actually goes to the compliance discussion!
>>
>> 8) KiSAO
>> We had a bit of discussion on whether KiSAO at its current state could
>> satisfy the needs for MIASE-compliant algorithm specification. But I
>> think we found a workaround for the example given in the paper, and the
>> rest of the discussion about KiSAO should go to miase-discuss... Frank!?
From: Dagmar K. <da...@eb...> - 2009-05-20 15:26:34
As we decided on having a glossary, here are the terms which I propose to define. Suggestions for the definitions so far are included. Discussion is open.

SIMULATION
- an experiment performed on a model [Cellier 1991]

EXPERIMENT
- a test under controlled conditions that is made to examine the validity of a hypothesis, or determine the efficacy of something previously untried. [American Heritage® Dictionary of the English Language, Fourth Edition]
- the process of extracting data from a system by exerting it through its inputs [Cellier 1991]

MODEL
- A model M for a System S and an Experiment E is everything to which E can be applied in order to answer questions about S [Cellier 1991]
- a set of mathematical equations and associated data representing the operation or features of a biological process or system (Ref?)

MIASE
- Minimum Information About a Simulation Experiment

MIASE-COMPLIANT
- a simulation description that follows the MIASE guidelines

FULLY MIASE-COMPLIANT
- (well, here we have to see whether or not we want different levels of MIASE-compliance)
From: Dagmar K. <dag...@un...> - 2009-05-20 15:08:28
Hej Frank,

David disagreed on this paragraph from the motivation (one of your favorites in the paper as well):

"Imagine that a modeler wants to use the model, as it is, as a working module to build a larger system. In an ideal world he would want to retrieve the model from a model repository and explore it using the same settings the model's creator used, in order to determine whether this model is suitable for his needs. Currently this is not possible, and the user would have to put effort into understanding the deep mathematics of the model, verify with the paper to discover the simulation algorithm and implementation that were used, and how the parameters would have to be changed in order to arrive at the desired behavior of the model."

He proposed to rewrite it to something like (draft):

"In an ideal world the model development workflow would follow these steps: (1) query model repositories for existing work that can be utilized; (2) add in modifications and new model components; (3) apply the complete model(s) in one or many simulation experiments [Nickerson & Buist, Phil Trans 2009]. While some initial work has been completed in early investigations of the technologies which can be applied to achieve this [Nickerson et al., Bioinformatics 2008; Nickerson & Buist, Prog Biophys & Molec Biol 2008], the focus of the models used in these developments has been limited to cellular physiology. Effort is now required to extend the concepts developed in this earlier work to encompass the larger range of models addressed by MIASE. Achieving this goal will enable the above workflow to be utilized by model authors, model users, and tool developers, removing the barriers imposed by current workflows (such as the effort required for understanding the deep mathematics of the model, verification with the paper to discover the simulation algorithm and implementation that were used, and how the parameters would have to be changed in order to arrive at the desired behavior of the model)."

My personal feeling is that the workflow description is quite specific, and I am not sure I would like to narrow the paper's motivation section down so much to that workflow. I also do not know whether this workflow would be a commonly accepted view of model development in general, as I have seen quite different ones. And I'd suggest not getting into any discussions about model development workflows with any reviewers. But I agree that we should mention part of this work.

What do you think?
Dagmar
From: Dagmar K. <da...@eb...> - 2009-05-20 14:52:15
Hej David,

David Nickerson wrote:
>> One more thing here, we should not enforce the use of MIRIAM. We should
>> only recommend people to refer to MIRIAM compliant models, if possible.
>
> yeah, I was really just thinking that since MIRIAM addresses
> availability, licensing, etc, we can leave it up to MIRIAM and not
> need to address it in MIASE. Is it really that bad to enforce the use
> of MIRIAM compliant model descriptions?

I think it is. But we could discuss that on the list if other people agree with you :-)

>> 4) The Repressilator example
>> You suggested to do "a cell physiology type example as something more
>> familiar to the CellML community. Perhaps something along the lines of
>> calculating APDs in a parameter scan?"
>> The more examples we have, the better. And the more diverse, even better.
>> So, if you volunteer to set up the example :-) ... I would not say no :-)
>> (BTW, Viji is currently doing an example that includes post-processing
>> of the output.)
>
> cool. I'm kind of hoping someone might volunteer :) I think the
> current example plus Viji's one would be sufficient, but it would be
> nice to have more. If I get a chance I'll give it a try.

I just left this comment in here as a reminder, in case you want to do the example.

Dagmar
From: David N. <dav...@gm...> - 2009-05-15 10:47:28
> 2) "Whether or not the simulation result matches reality and whether or
> not an experiment can be conducted on a certain model will neither be
> described nor tested."
> You argued that a description is tied to a certain model or number of
> models, which makes the comment on scope irrelevant, i.e. whether an
> experiment can be run on the model. But I think what is meant here is to
> point out that there is no validation of whether or not it does make
> sense to run the simulation on the referenced model.

ahhhh... ok, I get that now. Perhaps the "can be conducted on a certain model" can be changed a bit to make this clearer. Maybe "should be conducted"?

> 3) Rules
> 1.B. "If the model is not encoded in a standard format, then the model
> code must be made available to the user."
> You said that: "having a model encoded in a standard format doesn't
> imply that it is freely available. Perhaps it might be better to state
> that model descriptions must be MIRIAM compliant?"
> You are right in the sense that there are cases where models are encoded
> in a standard format, but they are still not available... But I think
> the rule was not addressing the distribution policy...
> I am wondering if the statement in 1., "...All models used in the
> experiment must be named and contain a reference to a model source.",
> might not be sufficient?

agreed, that should be enough.

> Maybe it would make sense to change it to "a reference to a freely
> available model source". (If we want to force that at all)
> I actually think that people who do not want to share their models might
> not want to share their simulation experiments either ;-)

that's true, but I guess we want to address the case where people want to share their work with models hard-coded into proprietary software. I like the comment (can't recall where it was) that things would just be better with freely available models in standard formats, but we're not imposing such in order to be MIASE compliant.

> One more thing here, we should not enforce the use of MIRIAM. We should
> only recommend people to refer to MIRIAM compliant models, if possible.

yeah, I was really just thinking that since MIRIAM addresses availability, licensing, etc, we can leave it up to MIRIAM and not need to address it in MIASE. Is it really that bad to enforce the use of MIRIAM compliant model descriptions?

> 4) The Repressilator example
> You suggested to do "a cell physiology type example as something more
> familiar to the CellML community. Perhaps something along the lines of
> calculating APDs in a parameter scan?"
> The more examples we have, the better. And the more diverse, even better.
> So, if you volunteer to set up the example :-) ... I would not say no :-)
> (BTW, Viji is currently doing an example that includes post-processing
> of the output.)

cool. I'm kind of hoping someone might volunteer :) I think the current example plus Viji's one would be sufficient, but it would be nice to have more. If I get a chance I'll give it a try.

> 5) Discussion
> By "The sole annotation of models is not sufficient to promote reuse of
> existing biological knowledge." I meant "the annotation of the models
> alone is not sufficient, you also need the information on the simulation
> as well". Maybe I could actually change it to "The annotation of
> \models\ only"...

sounds good.

Thanks,
David.
From: Dagmar K. <da...@eb...> - 2009-05-15 08:59:31
|
Hej David,

thanks a lot for your comments. The tracking of changes worked very well,
so I should have found all of your comments. I have some remaining remarks:

1) "What is not in the scope of MIASE"
Currently we say that "output descriptions" are not in scope, but of course
you were right that we are mentioning some output information like step
sizes, so I changed it to "Graphical output descriptions" as you suggested
(just to let you know).

2) "Whether or not the simulation result matches reality and whether or not
an experiment can be conducted on a certain model will neither be described
nor tested."
You argued that a description is tied to a certain model or number of
models, which makes the comment on scope irrelevant, i.e. whether an
experiment can be run on the model. But I think what is meant here is to
point out that there is no validation of whether or not it does make sense
to run the simulation on the referenced model.

3) Rules
1.B. "If the model is not encoded in a standard format, then the model code
must be made available to the user."
You said that: "having a model encoded in a standard format doesn't imply
that it is freely available. Perhaps might be better to state that model
descriptions must be MIRIAM compliant?"
You are right in the sense that there are cases where models are encoded in
a standard format, but they are still not available... But I think the rule
was not addressing the distribution policy...
I am wondering whether the statement in 1, "...All models used in the
experiment must be named and contain a reference to a model source.", might
be sufficient?
Maybe it would make sense to change it to "a reference to a freely
available model source". (If we want to force that at all)
I actually think that people who do not want to share their models might
not want to share their simulation experiments either ;-)
One more thing here, we should not enforce the use of MIRIAM. We should
only recommend people to refer to MIRIAM compliant models, if possible.

4) The Repressilator example
You suggested to do "a cell physiology type example as something more
familiar to the CellML community. Perhaps something along the lines of
calculating APDs in a parameter scan?"
The more examples we have, the better. And the more diverse, the better
still.
So, if you volunteer to set up the example :-) ... I would not say no :-)
(BTW, Viji is currently doing an example that includes post-processing of
the output.)

5) Discussion
By "The sole annotation of models is not sufficient to promote reuse of
existing biological knowledge." I meant "the annotation of the models alone
is not sufficient, you also need information on the simulation as well".
Maybe I could actually change it to "The annotation of \models\ only"...

That's all that remained unclear. Thanks again for the comments. They'll be
included in the next paper update :-)

Dagmar


David Nickerson wrote:
> Hi Dagmar,
>
> Sorry to be out of touch for the last few weeks, but I have finally
> had a chance to go through the manuscript. I have attached my modified
> version - hopefully all the tracking/comments etc. work :) Now to look
> into those discussion points...
>
>
> Cheers,
> David.
|
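As a purely illustrative reading of rule 1 discussed above ("all models used
in the experiment must be named and contain a reference to a model source"),
a simulation description could carry a named model together with a
resolvable source reference that a tool fetches before running the
experiment. The record layout, the function name, and the BioModels URL in
this Python sketch are assumptions made for illustration, not anything
defined by MIASE or SED-ML:

    # Hypothetical model reference for a MIASE-style description; the field
    # names and the URL are illustrative assumptions only.
    from urllib.request import urlopen

    model_reference = {
        "name": "Repressilator",
        # One possible source: a public repository entry for the model.
        "source": "https://www.ebi.ac.uk/biomodels/BIOMD0000000012",
    }

    def fetch_model(reference):
        """Retrieve the referenced model, failing loudly if unavailable."""
        with urlopen(reference["source"]) as response:
            return response.read()

Whether the source must also be *freely* available is exactly the point
left open in the exchange above.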
From: Dagmar K. <da...@eb...> - 2009-05-14 05:29:44
|
Nicolas Le Novère wrote:
> Andrew Miller wrote:
>
>> If models are particularly sensitive to details like the exact rounding
>> of floating point numbers, minor algorithmic details, and so on, and the
>> author can't explain why we should believe one set of results but not
>> the other, then that means that the results of the model should be
>> considered unreliable.
>
> If the model is KNOWN to be sensitive, then this has to be described. But
> that cannot be part of any simulation description. Asking for too much
> spurious information will result in no one following the MIASE
> guidelines. There are precedents.
>
>> However, one minimum information item we could add to address this,
>> while remaining on scope, would be:
>> "F. Where the author is aware that the model will produce different
>> results in a different simulation environment or on a different
>> computational platform, an explanation of why the model should be run on
>> the specified platform in order to properly achieve the purpose of the
>> modeling."
>
> That sounds perfect

Agree. Although there was a verb missing:

"F. Where the author is aware that the model will produce different results
in a different simulation environment or on a different computational
platform, an explanation of why the model should be run on the specified
platform in order to properly achieve the purpose of the modeling must be
given."

If there are no further objections, I'll add the new rule to the paper and
declare the issue solved?!

Dagmar
|
From: Nicolas Le n. <le...@eb...> - 2009-05-13 07:02:23
|
Andrew Miller wrote:
> If models are particularly sensitive to details like the exact rounding
> of floating point numbers, minor algorithmic details, and so on, and the
> author can't explain why we should believe one set of results but not
> the other, then that means that the results of the model should be
> considered unreliable.

If the model is KNOWN to be sensitive, then this has to be described. But
that cannot be part of any simulation description. Asking for too much
spurious information will result in no one following the MIASE guidelines.
There are precedents.

> However, one minimum information item we could add to address this,
> while remaining on scope, would be:
> "F. Where the author is aware that the model will produce different
> results in a different simulation environment or on a different
> computational platform, an explanation of why the model should be run on
> the specified platform in order to properly achieve the purpose of the
> modeling."

That sounds perfect.

--
Nicolas LE NOVERE, Computational Neurobiology, EMBL-EBI, Wellcome-Trust
Genome Campus, Hinxton CB101SD UK, Mob:+447833147074, Tel:+441223494521
Fax:468,Skype:n.lenovere,AIM:nlenovere,MSN:nle...@ho...(NOT email)
http://www.ebi.ac.uk/~lenov/, http://www.ebi.ac.uk/compneur/
|
From: Andrew M. <ak....@au...> - 2009-05-12 20:40:54
|
Dagmar Köhn wrote:
> I know I should let go, but I can't :-)
> Those things are just not clear to me.
> I'll try to put forward simple theses and hopefully can *explain* how I
> was interpreting comments.
> Please, correct me.
>
> I agree that Stefan got Nicolas wrong, who was advocating for
> repeatability.
> I agree with Stefan that repetition is not sufficient.
> I also agree that "if a simulation experiment which is described in a
> MIASE compliant format is allowed to have different results we have not
> specified anything" (in the sense that we have not specified the MIASE
> rules appropriately).
> I think that Nicolas interpreted "If a simulation experiment which is
> described in a MIASE compliant format is allowed to have different
> results we have not specified anything at least in my opinion." as
> Stefan demanding to repeat a simulation experiment on different
> simulators in order to judge it MIASE-compliant (and associated that
> with the term "correct").
> If that was indeed Stefan's reason for reproducibility, I disagree with
> him.
>
> The reason why I would like to see reproducibility enabled is that I
> would be very happy if a MIASE compliant simulation experiment
> description was general enough to be applied to more than exactly the
> same simulation tool simulating exactly the same model and running on
> exactly the same machine with exactly the same algorithm/integrator and
> so forth...
> I don't need a MIASE specification for correctness of simulation
> results, but for RE-USABILITY (and I don't think "correctness" should be
> part of this discussion about repeatability/reproducibility at all).
>
> Therefore, I'd like to have the MIASE rules set up in a way that enables
> reproducibility rather than repeatability only.
>
> That need not hold for all simulations, but should be possible in
> general.

Hi,

If models are particularly sensitive to details like the exact rounding of
floating point numbers, minor algorithmic details, and so on, and the
author can't explain why we should believe one set of results but not the
other, then that means that the results of the model should be considered
unreliable.

When considering how to deal with this, I think we should be careful not to
grow the scope of MIASE too much; the scope should properly be to ensure
people disclose enough information about their model to allow the
scientific process to take place, and not to establish good modelling or
scientific practices directly.

However, one minimum information item we could add to address this, while
remaining on scope, would be:

"F. Where the author is aware that the model will produce different results
in a different simulation environment or on a different computational
platform, an explanation of why the model should be run on the specified
platform in order to properly achieve the purpose of the modeling."

Best regards,
Andrew

> Dagmar
>
> PS: Stefan, I'd say that for the purpose "to determine reproducibility
> of scientific results" you would rather USE SED-ML than MIASE, so if
> the question is how to use SED-ML to verify simulation results, then it
> should imho be discussed elsewhere.
>
>
> Stefan Hoops wrote:
>
>> Hello Nicolas,
>>
>> On Mon, 11 May 2009 16:07:34 +0100 (BST)
>> "Nicolas Le Novere" <le...@eb...> wrote:
>>
>>>> I agree with Nicolas that we should ask for reproducibility.
>>>
>>> I think you got me wrong. I actually advocated the opposite. And I
>>> believed I was reflecting the Auckland consensus, that said MIASE did
>>> not deal with correctness.
>>>
>>>> Repeating
>>>> a simulation is insufficient. If a simulation experiment which is
>>>> described in a MIASE compliant format is allowed to have different
>>>> results we have not specified anything at least in my opinion.
>>>
>>> Yes, we specified a way to discover the discrepancy, which is exactly
>>> what a materials and methods is for. Results are described in another
>>> part of the paper, called ... results. We should encourage the use of
>>> standard formats to describe numerical results, but I do not believe
>>> the results themselves are part of a "simulation experiment
>>> description". And even less the correctness criteria to compare two
>>> sets of results.
>>
>> I think we actually agree, I was not precise enough. We need to
>> distinguish between the use of MIASE and its scope. The scope of MIASE
>> is to specify repeatable simulations. The use is to determine
>> reproducibility of scientific results.
>>
>> Thanks,
>> Stefan
|
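The rounding sensitivity Andrew refers to is easy to demonstrate:
floating-point addition is not associative, so the same sum evaluated in a
different order (as a different compiler or simulator might do) can round
to a different value. A minimal Python illustration, using no assumptions
beyond IEEE 754 double arithmetic:

    # Floating-point addition is not associative: the same three numbers,
    # summed in a different order, round differently.
    x = 1e-16

    left = (x + 1.0) - 1.0   # x is below the half-ulp of 1.0 and is lost: 0.0
    right = x + (1.0 - 1.0)  # the cancellation happens first: 1e-16

    print(left == right)     # False on IEEE 754 doubles

A model whose conclusions visibly depend on such last-bit effects is
exactly the kind of case rule F would ask the author to explain.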
From: Jonathan C. <jon...@co...> - 2009-05-12 08:27:24
|
On 12/05/09 07:37, Dagmar Köhn wrote:
> Hi James,
>
> that is why we put "within a 'reasonable' level of tolerance" in the
> paper - meaning the reproduction of a stochastic result would be called
> successful even if the actual result numbers do not map 1:1, but the
> result does comply with the original experiment within a tolerance of
> whatever limit...
> So, I suppose with "different" Stefan does not mean "not identical" but
> rather "not within a certain deviation for the case of stochastic
> simulation".
>
> Dagmar
>
> (still the question is how precise to be when defining the accepted
> deviation)

Presumably this would vary on a case-by-case basis, and we should require
authors to specify the tolerance they expect for their results?

Jonathan

> James Lawson wrote:
>
>> Hi Stefan et al.
>>
>> If I may just make a short point here, I have to disagree with Stefan.
>>
>> It seems that stochastic simulations would be within the scope that
>> we'd like MIASE to cover (and I think I recall this being discussed in
>> Auckland). If we were describing a stochastic simulation, a particular
>> simulation experiment described in a MIASE compliant format would need
>> to be allowed to have different results between iterations of the
>> simulation.
>>
>> This is just one example, but I am sure there are others that would be
>> relevant to us.
>>
>> Kind regards,
>> James
>>
>>> I agree with Nicolas that we should ask for reproducibility. Repeating
>>> a simulation is insufficient. If a simulation experiment which is
>>> described in a MIASE compliant format is allowed to have different
>>> results we have not specified anything, at least in my opinion. If the
>>> result is not reproducible something is wrong with the initial
>>> experiment, the second experiment or the MIASE compliant description.
>>> Either of the three cases can be resolved through further
>>> investigation, which is the normal scientific process.
>>>
>>> Thanks,
>>> Stefan
|
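Jonathan's per-result tolerance could look something like the sketch
below. The record layout, the numbers, and the choice of a relative
tolerance are assumptions made for illustration; MIASE does not define
any of them:

    # Hypothetical author-declared tolerances, one per reported result.
    expected_results = [
        # (output name, reported value, declared relative tolerance)
        ("steady_state_P1", 42.7, 0.01),  # deterministic: tight tolerance
        ("mean_peak_count", 15.2, 0.10),  # stochastic: looser tolerance
    ]

    def reproduces(reported, reproduced, rel_tol):
        """True if the reproduced value falls within the declared tolerance."""
        return abs(reproduced - reported) <= rel_tol * abs(reported)

    for name, value, tol in expected_results:
        # A reproduction that is 0.5% off passes both declared tolerances.
        print(name, reproduces(value, value * 1.005, tol))

The point of declaring the tolerance alongside the result is that the
acceptance criterion travels with the experiment description instead of
being reinvented by whoever attempts the reproduction.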
From: Dagmar K. <da...@eb...> - 2009-05-12 07:55:47
|
I know I should let go, but I can't :-)
Those things are just not clear to me.
I'll try to put forward simple theses and hopefully can *explain* how I was
interpreting comments.
Please, correct me.


I agree that Stefan got Nicolas wrong, who was advocating for
repeatability.
I agree with Stefan that repetition is not sufficient.
I also agree that "if a simulation experiment which is described in a
MIASE compliant format is allowed to have different results we have not
specified anything" (in the sense that we have not specified the MIASE
rules appropriately).
I think that Nicolas interpreted "If a simulation experiment which is
described in a MIASE compliant format is allowed to have different results
we have not specified anything at least in my opinion." as Stefan
demanding to repeat a simulation experiment on different simulators in
order to judge it MIASE-compliant (and associated that with the term
"correct").
If that was indeed Stefan's reason for reproducibility, I disagree with
him.

The reason why I would like to see reproducibility enabled is that I would
be very happy if a MIASE compliant simulation experiment description was
general enough to be applied to more than exactly the same simulation tool
simulating exactly the same model and running on exactly the same machine
with exactly the same algorithm/integrator and so forth...
I don't need a MIASE specification for correctness of simulation results,
but for RE-USABILITY (and I don't think "correctness" should be part of
this discussion about repeatability/reproducibility at all).

Therefore, I'd like to have the MIASE rules set up in a way that enables
reproducibility rather than repeatability only.

That need not hold for all simulations, but should be possible in general.

Dagmar

PS: Stefan, I'd say that for the purpose "to determine reproducibility of
scientific results" you would rather USE SED-ML than MIASE, so if the
question is how to use SED-ML to verify simulation results, then it should
imho be discussed elsewhere.


Stefan Hoops wrote:
> Hello Nicolas,
>
> On Mon, 11 May 2009 16:07:34 +0100 (BST)
> "Nicolas Le Novere" <le...@eb...> wrote:
>
>>> I agree with Nicolas that we should ask for reproducibility.
>>
>> I think you got me wrong. I actually advocated the opposite. And I
>> believed I was reflecting the Auckland consensus, that said MIASE did
>> not deal with correctness.
>>
>>> Repeating
>>> a simulation is insufficient. If a simulation experiment which is
>>> described in a MIASE compliant format is allowed to have different
>>> results we have not specified anything at least in my opinion.
>>
>> Yes, we specified a way to discover the discrepancy, which is exactly
>> what a materials and methods is for. Results are described in another
>> part of the paper, called ... results. We should encourage the use of
>> standard formats to describe numerical results, but I do not believe
>> the results themselves are part of a "simulation experiment
>> description". And even less the correctness criteria to compare two
>> sets of results.
>
> I think we actually agree, I was not precise enough. We need to
> distinguish between the use of MIASE and its scope. The scope of MIASE
> is to specify repeatable simulations. The use is to determine
> reproducibility of scientific results.
>
> Thanks,
> Stefan
|
From: Dagmar K. <da...@eb...> - 2009-05-12 06:38:30
|
Hi James,

that is why we put "within a 'reasonable' level of tolerance" in the paper
- meaning the reproduction of a stochastic result would be called
successful even if the actual result numbers do not map 1:1, but the result
does comply with the original experiment within a tolerance of whatever
limit...
So, I suppose with "different" Stefan does not mean "not identical" but
rather "not within a certain deviation for the case of stochastic
simulation".

Dagmar

(still the question is how precise to be when defining the accepted
deviation)


James Lawson wrote:
> Hi Stefan et al.
>
> If I may just make a short point here, I have to disagree with Stefan.
>
> It seems that stochastic simulations would be within the scope that
> we'd like MIASE to cover (and I think I recall this being discussed in
> Auckland). If we were describing a stochastic simulation, a particular
> simulation experiment described in a MIASE compliant format would need
> to be allowed to have different results between iterations of the
> simulation.
>
> This is just one example, but I am sure there are others that would be
> relevant to us.
>
> Kind regards,
> James
>
>> I agree with Nicolas that we should ask for reproducibility. Repeating
>> a simulation is insufficient. If a simulation experiment which is
>> described in a MIASE compliant format is allowed to have different
>> results we have not specified anything, at least in my opinion. If the
>> result is not reproducible something is wrong with the initial
>> experiment, the second experiment or the MIASE compliant description.
>> Either of the three cases can be resolved through further
>> investigation, which is the normal scientific process.
>>
>> Thanks,
>> Stefan
|
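One way to make "within a 'reasonable' level of tolerance" operational for
stochastic simulations is to compare summary statistics over ensembles of
runs rather than individual trajectories. The Python sketch below shows
one possible convention only; the stand-in run function, the seed scheme,
and the 2% threshold are all illustrative assumptions, not anything
prescribed by MIASE:

    import random
    import statistics

    def run_once(seed):
        # Stand-in for one stochastic simulation run of the experiment.
        return random.Random(seed).gauss(10.0, 1.0)

    def ensemble_mean(seeds):
        return statistics.mean(run_once(s) for s in seeds)

    original = ensemble_mean(range(0, 200))
    reproduced = ensemble_mean(range(1000, 1200))

    # Accept the reproduction if the ensemble means agree within a declared
    # tolerance; the 2% used here is an arbitrary illustrative choice.
    print(abs(original - reproduced) <= 0.02 * abs(original))

Comparing distributions rather than single runs is what allows a
description to permit "different results between iterations", as James
puts it, while still defining when a reproduction has succeeded.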