From: Sarah K. <ske...@ca...> - 2013-10-01 15:50:03
> Sarah, in another thread, you have argued about allowing flexibility for modelers, and I think this is the same issue.

I completely disagree. The XML structure of the SBML is not going to limit a modeller - who is likely to use some software interface to create/use a model and will not be at all concerned about whether there is an arrays element inside an initialAssignment (as per Lucian's example) or not. I am arguing from the point of view of software reading/writing/parsing/validating the resulting files in a fashion that allows us to keep our assurance to people that their software does not have to support every package.

I'm not saying we must try to keep validity after reduction in its entirety - I'm just saying that if packages can wholesale rewrite the validation rules of core, then validation of any model using a package you do not understand becomes impossible. That is not necessarily helpful to software developers.

> We should not put strong limitations on the arrays package simply to maintain validating after reduction as it makes the package less usable.

I'm not suggesting a limitation; I'm arguing that we need to try harder to find a solution :-)

Sarah
From: Sarah K. <ske...@ca...> - 2013-10-01 15:47:27
Hi Chris

> Note that listOfIndices though is different. You need an index on each SBaseRef attribute of an object. For initial assignments, rate rules, assignment rules, species references, and event assignments, this is straightforward because there is just the one, the variable being assigned. However, for some other elements, there can be multiple references and thus multiple indices required.

Sorry, I'm confused - since you can have a listOfIndices, surely this encompasses the different indices?

> Another example, which you will find in the slides I presented at COMBINE, is for a replacement where you may need an index to say which species is replacing, an index to say in which model it is replaced, and even an index to say which species is being replaced.

Actually, since this slide does not explicitly show the indices, it does not really make anything clear :-(

Sarah
From: Chris J. M. <my...@ec...> - 2013-10-01 14:08:32
Added the editors to this discussion; perhaps it should move to that list (or maybe the discuss list). I agree that this needs more discussion.

My view has always been that maintaining validity after stripping a "required" package is very difficult, and I'm not sure it is worth the effort. While I understand the point that a model may still mean something to someone without understanding the package, I really feel that the only compelling case for this is stripping layout/render, for which required is false. In all other cases, if you strip the package, you are left with a model for which you have lost important semantics. I wish we had decided that "required=true" means that a tool that loads a model with a required package it does not understand must stop right there and throw an error. It is quite dangerous to go any further, in my opinion.

Now, for the specific case of arrays, the good news is that, assuming we provide a flatten routine in libsbml/jsbml, a software tool can easily convert an SBML file with Comp and Arrays into a model without these packages and then carry on with analysis. This is not true of other packages such as qual or FBC.

All this being said, if we can come up with a reasonable way of keeping validity after stripping, I'm fine with it. However, I would prefer not to require that we come up with such a way if it proves too difficult.

Sarah, in another thread, you have argued about allowing flexibility for modelers, and I think this is the same issue. We should not put strong limitations on the arrays package simply to maintain validity after reduction, as it makes the package less usable.

Chris

On Oct 1, 2013, at 4:53 AM, Sarah Keating <sar...@te...> wrote:

> I'm with Lucian - in the sense that it would be nice to find a way to have the constructs without violating a core validation rule.
>
>>> Well, L3V2 would not require a model to be valid SBML after removing a package, just valid XML. So, L3V2 would allow two initial assignments to the same variable.
>
> That statement needs clarifying - L3V2 would allow two initial assignments to the same variable ONLY if the original model had a package that, when stripped, left it with two assignments to the same variable.
>
> I think we need to be very careful about this!
>
> It actually needs even further clarification - because it would only be legitimate if the initial assignments contained elements/attributes from the package that had been stripped.
>
> The relaxation-of-validity-after-reduction conversation has centred around SIdRefs (i.e. the dangling reference issue). I don't think anyone has actually said it won't matter if any of the validation rules no longer hold. This is actually a wider discussion than I think we have had as editors.
>
> From the point of view of someone validating a model where they do not understand a package: if we say any core rule may be voided by a package, this would mean that the model must always be considered valid, because if you do not understand the package and it may have voided any core rules - you just do not know anything any more. I think this is a step too far ...
>
> Sarah
From: Chris J. M. <my...@ec...> - 2013-10-01 13:55:04
Hi Sarah,

I'm okay with your plan to extend SBase with listOfDimensions and then having validation warnings for uses that we do not want to require support for initially.

Note that listOfIndices, though, is different. You need an index on each SBaseRef attribute of an object. For initial assignments, rate rules, assignment rules, species references, and event assignments, this is straightforward because there is just the one, the variable being assigned. However, for some other elements, there can be multiple references and thus multiple indices required. Consider, for example, a species. It has a compartment, for which you may need an index if the compartment is an array. It also has units, which may require an index. Finally, it has a conversion factor, which may be an arrayed parameter. Another example, which you will find in the slides I presented at COMBINE, is for a replacement where you may need an index to say which species is replacing, an index to say in which model it is replaced, and even an index to say which species is being replaced. So, unfortunately, indices cannot be a simple extension of SBase, since the number of indices required is different for various SBML elements.

JSBML folks: I've added you to this list because we are using JSBML now and would love to see array support in JSBML as soon as possible :-).

Chris

On Oct 1, 2013, at 4:29 AM, Sarah Keating <ske...@ca...> wrote:

> I would be happier with the approach of adding listOfDimensions/listOfIndices to SBase and then explicitly stating in the spec that there is no current use case/meaning established for the use of these on particular (explicitly listed) elements. List the elements that should not have them, rather than listing the elements that can currently have array constructs.
>
> These could then indeed be stated as validation rules/warnings.
>
> This means that if someone sees a use case that we have not currently considered, the spec does not preclude them from using arrays, rather than them having to wait for a new version of the specification.
>
> The major advantage of extending SBase is that other packages automatically get the extension - and thus, rather than waiting for other packages to develop specifications that state which elements can have array constructs, people could use arrays with other packages as they saw fit/necessary. The packages could then, in their next iteration, provide information about elements within that package on which it would not make sense to have array constructs.
>
> Chris mentioned he really wants arrays of comp submodels. If SBase is not extended, then technically he would have to wait for a new comp spec to detail that he could have arrays of submodels; I'm guessing he would rather not do that :-)
>
> Sarah
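To make the multiple-indices point above concrete, here is a minimal sketch of a species whose compartment and conversionFactor attributes each need their own index. The element and attribute names (arrays:index, arrays:referencedAttribute, arrays:arrayDimension) are assumptions for illustration, not syntax quoted from the draft under discussion:

  <species id="s" compartment="c" conversionFactor="k"
           hasOnlySubstanceUnits="false" boundaryCondition="false" constant="false">
    <arrays:listOfIndices>
      <!-- index into the arrayed compartment "c" -->
      <arrays:index arrays:referencedAttribute="compartment" arrays:arrayDimension="0">
        <math xmlns="http://www.w3.org/1998/Math/MathML"> <ci> i </ci> </math>
      </arrays:index>
      <!-- index into the arrayed conversion-factor parameter "k" -->
      <arrays:index arrays:referencedAttribute="conversionFactor" arrays:arrayDimension="0">
        <math xmlns="http://www.w3.org/1998/Math/MathML"> <ci> i </ci> </math>
      </arrays:index>
    </arrays:listOfIndices>
  </species>

Each index names the attribute it resolves, which is why the number of indices depends on the host element rather than being fixed by SBase.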
From: Sarah K. <sar...@te...> - 2013-10-01 11:12:32
I'm with Lucian - in the sense that it would be nice to find a way to have the constructs without violating a core validation rule.

>> Well, L3V2 would not require a model to be valid SBML after removing a package, just valid XML. So, L3V2 would allow two initial assignments to the same variable.

That statement needs clarifying - L3V2 would allow two initial assignments to the same variable ONLY if the original model had a package that, when stripped, left it with two assignments to the same variable.

I think we need to be very careful about this!

It actually needs even further clarification - because it would only be legitimate if the initial assignments contained elements/attributes from the package that had been stripped.

The relaxation-of-validity-after-reduction conversation has centred around SIdRefs (i.e. the dangling reference issue). I don't think anyone has actually said it won't matter if any of the validation rules no longer hold. This is actually a wider discussion than I think we have had as editors.

From the point of view of someone validating a model where they do not understand a package: if we say any core rule may be voided by a package, this would mean that the model must always be considered valid, because if you do not understand the package and it may have voided any core rules - you just do not know anything any more. I think this is a step too far ...

Sarah
From: Sarah K. <ske...@ca...> - 2013-10-01 10:30:29
I would be happier with the approach of adding listOfDimensions/listOfIndices to SBase and then explicitly stating in the spec that there is no current use case/meaning established for the use of these on particular (explicitly listed) elements. List the elements that should not have them, rather than listing the elements that can currently have array constructs.

These could then indeed be stated as validation rules/warnings.

This means that if someone sees a use case that we have not currently considered, the spec does not preclude them from using arrays, rather than them having to wait for a new version of the specification.

The major advantage of extending SBase is that other packages automatically get the extension - and thus, rather than waiting for other packages to develop specifications that state which elements can have array constructs, people could use arrays with other packages as they saw fit/necessary. The packages could then, in their next iteration, provide information about elements within that package on which it would not make sense to have array constructs.

Chris mentioned he really wants arrays of comp submodels. If SBase is not extended, then technically he would have to wait for a new comp spec to detail that he could have arrays of submodels; I'm guessing he would rather not do that :-)

Sarah
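If listOfDimensions really is inherited from SBase, the arrays-of-submodels case mentioned above could be written without any change to the comp specification, roughly as sketched below. The dimension syntax (arrays:dimension, arrays:arrayDimension, arrays:size pointing at a constant parameter "n") is an assumption for illustration, not agreed syntax:

  <comp:listOfSubmodels>
    <comp:submodel comp:id="cell" comp:modelRef="cellModel">
      <arrays:listOfDimensions>
        <!-- "n" is assumed to be a constant parameter giving the number of submodel instances -->
        <arrays:dimension arrays:arrayDimension="0" arrays:size="n"/>
      </arrays:listOfDimensions>
    </comp:submodel>
  </comp:listOfSubmodels>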
From: Lucian S. <luc...@gm...> - 2013-09-30 23:32:20
On Mon, Sep 30, 2013 at 4:20 PM, Chris J. Myers <my...@ec...> wrote:

> On Sep 30, 2013, at 11:06 AM, Lucian Smith <luc...@gm...> wrote:
>
>> Well, hmm. There's a couple of issues here. One, is it legal to accept the changes that 'arrays' is proposing, and two, is it a good idea?
>>
>> For the first, I don't think there'll be any difference between L3v1 and L3v2. Both will continue to have the rule, "You may not have two initial assignments/rules that assign to the same variable." As proposed, the arrays package would relax that restriction to say something like "You may not have two initial assignments/rules that assign to the same *part* of the same variable." This is a new thing that no other package has attempted to do so far: actually lift a restriction set in place by core (or another package). Is it legal to do so? I think it probably is--ultimately, the specifications are nothing but a set of rules, and there's no rule of spec writing that I know of that says a later rule can't modify an earlier one.
>
> Well, L3V2 would not require a model to be valid SBML after removing a package, just valid XML. So, L3V2 would allow two initial assignments to the same variable. This would mean that tools that did not understand arrays may not be able to simulate the model, which may be okay since it is likely not a meaningful simulation anyway. So, it is legal in L3V2 but illegal in L3V1.

Hmm. I'm not on the editors' list right now, but if the proposal is actually that packages may repeal any validation rule at all, that's a pretty big swing away from the 'validity after reduction' rule. However, since that rule was not actually in L3v1, nobody should have to wait for L3v2 if it's being changed--it's instead a rule from the 'principles for package development' page.

> My concern is with replacements in comp. This would likely get very awkward to do in the way you suggest above and for very little benefit.
>
> The thing to keep in mind is that arrays are like comp in that I suspect many tool developers will be willing to support arrays assuming there is a flatten routine. In that case, they do not need to "really" support it, just flatten and simulate that. In that case, all these issues go away.

Well, they go away for that tool, but not for libsbml or other tools.

-Lucian
From: Chris J. M. <my...@ec...> - 2013-09-30 23:20:30
On Sep 30, 2013, at 11:06 AM, Lucian Smith <luc...@gm...> wrote:

> Well, hmm. There's a couple of issues here. One, is it legal to accept the changes that 'arrays' is proposing, and two, is it a good idea?
>
> For the first, I don't think there'll be any difference between L3v1 and L3v2. Both will continue to have the rule, "You may not have two initial assignments/rules that assign to the same variable." As proposed, the arrays package would relax that restriction to say something like "You may not have two initial assignments/rules that assign to the same *part* of the same variable." This is a new thing that no other package has attempted to do so far: actually lift a restriction set in place by core (or another package). Is it legal to do so? I think it probably is--ultimately, the specifications are nothing but a set of rules, and there's no rule of spec writing that I know of that says a later rule can't modify an earlier one.

Well, L3V2 would not require a model to be valid SBML after removing a package, just valid XML. So, L3V2 would allow two initial assignments to the same variable. This would mean that tools that did not understand arrays may not be able to simulate the model, which may be okay since it is likely not a meaningful simulation anyway. So, it is legal in L3V2 but illegal in L3V1.

> The second issue is whether it's a good idea or not. This is where the 'validity after reduction' rule comes in, because quite apart from legality, it's an additional design restriction put in place by the editors who thought it was a good idea. They could lift or impose it at any time; it's not tied to the L3v1 vs. L3v2 discussion. It is true that some of the changes in L3v2 are being proposed to make it easier for packages to change the rules, but these changes are very specific and most have to do with SIdRefs; the proposed changes would not actually affect the 'two rules about one variable' problem.
>
> So... *is* it a good idea to stick with the 'validity after reduction' rule in this case? I think we could probably find ways around the 'multiple initial assignments' rule by condensing everything to a single element. We could change the example in the arrays proposal from page 15 to be something like:
>
> <initialAssignment variable="x">
>   <math> <!-- Fake math element that gets ignored because it's overridden by: --> </math>
>   <listOfArrayAssignments>
>     <arrayAssignment>
>       <orderedListOfDimensions> [same]
>       <orderedListOfIndices> [same]
>       <math><cn>5.7</cn></math>
>     </arrayAssignment>
>     <arrayAssignment>
>       <orderedListOfDimensions> [same]
>       <orderedListOfIndices> [same]
>       <math><cn>3.2</cn></math>
>     </arrayAssignment>
>   </listOfArrayAssignments>
> </initialAssignment>
>
> What this wouldn't cover is the case where the modeler wanted initial assignments for part of the array, but (say) an assignment rule for the rest of the array. But is that something we want to support? And is there some situation I'm not thinking of where bundling everything together *wouldn't* work?

My concern is with replacements in comp. This would likely get very awkward to do in the way you suggest above and for very little benefit.

The thing to keep in mind is that arrays are like comp in that I suspect many tool developers will be willing to support arrays assuming there is a flatten routine. In that case, they do not need to "really" support it, just flatten and simulate that. In that case, all these issues go away.

Chris
From: Lucian S. <luc...@gm...> - 2013-09-30 17:06:51
Well, hmm. There's a couple of issues here. One, is it legal to accept the changes that 'arrays' is proposing, and two, is it a good idea?

For the first, I don't think there'll be any difference between L3v1 and L3v2. Both will continue to have the rule, "You may not have two initial assignments/rules that assign to the same variable." As proposed, the arrays package would relax that restriction to say something like "You may not have two initial assignments/rules that assign to the same *part* of the same variable." This is a new thing that no other package has attempted to do so far: actually lift a restriction set in place by core (or another package). Is it legal to do so? I think it probably is--ultimately, the specifications are nothing but a set of rules, and there's no rule of spec writing that I know of that says a later rule can't modify an earlier one.

The second issue is whether it's a good idea or not. This is where the 'validity after reduction' rule comes in, because quite apart from legality, it's an additional design restriction put in place by the editors who thought it was a good idea. They could lift or impose it at any time; it's not tied to the L3v1 vs. L3v2 discussion. It is true that some of the changes in L3v2 are being proposed to make it easier for packages to change the rules, but these changes are very specific and most have to do with SIdRefs; the proposed changes would not actually affect the 'two rules about one variable' problem.

So... *is* it a good idea to stick with the 'validity after reduction' rule in this case? I think we could probably find ways around the 'multiple initial assignments' rule by condensing everything to a single element. We could change the example in the arrays proposal from page 15 to be something like:

<initialAssignment variable="x">
  <math> <!-- Fake math element that gets ignored because it's overridden by: --> </math>
  <listOfArrayAssignments>
    <arrayAssignment>
      <orderedListOfDimensions> [same]
      <orderedListOfIndices> [same]
      <math><cn>5.7</cn></math>
    </arrayAssignment>
    <arrayAssignment>
      <orderedListOfDimensions> [same]
      <orderedListOfIndices> [same]
      <math><cn>3.2</cn></math>
    </arrayAssignment>
  </listOfArrayAssignments>
</initialAssignment>

What this wouldn't cover is the case where the modeler wanted initial assignments for part of the array, but (say) an assignment rule for the rest of the array. But is that something we want to support? And is there some situation I'm not thinking of where bundling everything together *wouldn't* work?

-Lucian

On Mon, Sep 30, 2013 at 1:53 AM, Sarah Keating <ske...@ca...> wrote:

> Hi Guys
>
> Having listened to the discussion at COMBINE (sorry I wasn't there even virtually), am I correct in thinking that the arrays package will aim at targeting L3V2 core - basically so that it does not have to jump through the validity-after-reduction hoop :-)
>
> Thanks
>
> Sarah
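For context, the construct being debated (the page-15 style that the condensed listOfArrayAssignments form above is meant to replace) would look roughly like two initialAssignment elements for the same symbol, distinguished only by arrays markup. The element and attribute names here (arrays:listOfIndices, arrays:index, arrays:referencedAttribute, arrays:arrayDimension) are assumptions for illustration rather than text from the proposal:

  <listOfInitialAssignments>
    <!-- assigns element 0 of the arrayed symbol "x" -->
    <initialAssignment symbol="x">
      <arrays:listOfIndices>
        <arrays:index arrays:referencedAttribute="symbol" arrays:arrayDimension="0">
          <math xmlns="http://www.w3.org/1998/Math/MathML"> <cn type="integer"> 0 </cn> </math>
        </arrays:index>
      </arrays:listOfIndices>
      <math xmlns="http://www.w3.org/1998/Math/MathML"> <cn> 5.7 </cn> </math>
    </initialAssignment>
    <!-- assigns element 1 of the same symbol: a second initialAssignment for "x",
         which is what collides with the core one-assignment-per-variable rule -->
    <initialAssignment symbol="x">
      <arrays:listOfIndices>
        <arrays:index arrays:referencedAttribute="symbol" arrays:arrayDimension="0">
          <math xmlns="http://www.w3.org/1998/Math/MathML"> <cn type="integer"> 1 </cn> </math>
        </arrays:index>
      </arrays:listOfIndices>
      <math xmlns="http://www.w3.org/1998/Math/MathML"> <cn> 3.2 </cn> </math>
    </initialAssignment>
  </listOfInitialAssignments>

Stripping the arrays namespace from this fragment leaves two plain assignments to "x", which is exactly the validity-after-reduction problem the thread is debating.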
From: Chris J. M. <my...@ec...> - 2013-09-30 15:01:10
It is okay to put listOfDimensions on SBase. However, I would still like to consider having some validation rules to eliminate some uses, at least initially. We can always remove validation rules, but once we open it up to all things, we cannot take that away. Besides ListOf objects, there are things like SpeciesReferences, Triggers, Delays, KineticLaws, etc. - basically child objects for which I cannot currently think of a use case and which will complicate implementations.

I think a good way forward is to be flexible in design, as Sarah suggests, but also to start with initial limitations via validation rules. In this way, we can enable faster acceptance of at least a limited package. Otherwise, we may be waiting some time on a pair of implementations.

Chris

On Sep 30, 2013, at 3:31 AM, Sarah Keating <ske...@ca...> wrote:

> Hi Guys
>
> When Chris talked at COMBINE he mentioned that one of the reasons for not extending SBase with the listOfDimensions was that it would not be meaningful in some cases. He specifically used ListOf objects as an example.
>
> I think we need to be careful about imposing restrictions on the models that a modeller can describe just because at the moment *we* do not see a rationale for something. This is an accusation that has been levelled at SBML from time to time.
>
> I could actually see a use case for having dimensions on a ListOf object. Say you have 100 species that take part in 500 reactions and you want to express the fact that each species/reaction is an array of 10. This would mean putting the listOfDimensions on each species/reaction. You could put the listOfDimensions on the ListOfSpecies and the ListOfReactions and convey the same thing, surely. Obviously your simulation implementation would be different, but as far as I can see this would be a valid use case.
>
> It is also not unprecedented to have things in SBML for which we do not have a current use case. For example, every SBase object can have an sboTerm attribute BUT there are some objects for which there are no appropriate SBO terms.
>
> Sarah
From: Chris J. M. <my...@ec...> - 2013-09-30 14:55:44
Perhaps for the final release, but I would like to start prototyping as soon as possible. The main thing I need for that is the ability to extend the MathML subset to include the array functions. Dimensions and index would be nice, but I can get around those with annotations to start with. As for validity, that is also something we can get around by having a flatten-out-arrays function, which we have already been using.

Thanks,

Chris

On Sep 30, 2013, at 2:53 AM, Sarah Keating <ske...@ca...> wrote:

> Hi Guys
>
> Having listened to the discussion at COMBINE (sorry I wasn't there even virtually), am I correct in thinking that the arrays package will aim at targeting L3V2 core - basically so that it does not have to jump through the validity-after-reduction hoop :-)
>
> Thanks
>
> Sarah
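As a rough illustration of the kind of MathML extension being asked for here, element selection could reuse the selector operator that already exists in MathML content markup, along the lines of the sketch below. Whether the draft actually adopts selector, and how the dimension identifier "i" is declared, are assumptions on the editor's part:

  <math xmlns="http://www.w3.org/1998/Math/MathML">
    <apply>
      <!-- select element i of the arrayed symbol x, i.e. x[i] -->
      <selector/>
      <ci> x </ci>
      <ci> i </ci>
    </apply>
  </math>

Allowing such an apply inside kinetic laws and rules is essentially the "extend the MathML subset" step mentioned above.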
From: Sarah K. <ske...@ca...> - 2013-09-30 09:32:28
Hi Guys

When Chris talked at COMBINE he mentioned that one of the reasons for not extending SBase with the listOfDimensions was that it would not be meaningful in some cases. He specifically used ListOf objects as an example.

I think we need to be careful about imposing restrictions on the models that a modeller can describe just because at the moment *we* do not see a rationale for something. This is an accusation that has been levelled at SBML from time to time.

I could actually see a use case for having dimensions on a ListOf object. Say you have 100 species that take part in 500 reactions and you want to express the fact that each species/reaction is an array of 10. This would mean putting the listOfDimensions on each species/reaction. You could put the listOfDimensions on the ListOfSpecies and the ListOfReactions and convey the same thing, surely. Obviously your simulation implementation would be different, but as far as I can see this would be a valid use case. (A sketch of this follows below.)

It is also not unprecedented to have things in SBML for which we do not have a current use case. For example, every SBase object can have an sboTerm attribute BUT there are some objects for which there are no appropriate SBO terms.

Sarah
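A hypothetical sketch of the ListOf use case described above - a single dimension attached to the listOfSpecies so that every species it contains becomes an array of 10. The arrays:dimension syntax, and the assumption that its size refers to a constant parameter (here "n10"), are illustrative guesses rather than settled draft syntax:

  <listOfSpecies>
    <arrays:listOfDimensions>
      <!-- every species below is implicitly an array whose size is given by the parameter "n10" -->
      <arrays:dimension arrays:arrayDimension="0" arrays:size="n10"/>
    </arrays:listOfDimensions>
    <species id="s1" compartment="c" hasOnlySubstanceUnits="false"
             boundaryCondition="false" constant="false"/>
    <species id="s2" compartment="c" hasOnlySubstanceUnits="false"
             boundaryCondition="false" constant="false"/>
    <!-- ... remaining species ... -->
  </listOfSpecies>

The alternative under discussion would repeat the same listOfDimensions inside each of the 100 species and 500 reactions.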
From: Sarah K. <ske...@ca...> - 2013-09-30 08:54:02
Hi Guys

Having listened to the discussion at COMBINE (sorry I wasn't there even virtually), am I correct in thinking that the arrays package will aim at targeting L3V2 core - basically so that it does not have to jump through the validity-after-reduction hoop :-)

Thanks

Sarah
From: Sarah K. <ske...@ca...> - 2013-09-19 10:37:23
The comp and arrays session will take place at 14.00 CEST on Friday 20th. The dynamics session will take place at 16.00 CEST on Friday 20th.

You can join and interact via a Google hangout but will need to go through some steps beforehand. It is best to do this in advance, as experience shows there can be a time delay between issuing invitations and them being received. Details of what to do are available here:

https://plus.google.com/115309720146468337003/posts/BTWtebCeHbL

Thanks

Sarah
From: Chris J. M. <my...@ec...> - 2013-09-04 22:02:11
Hi,

Lucian and I will be chairing sessions on Friday afternoon at COMBINE to discuss the Comp, Arrays, and Dynamic packages. Here are some of the things that I think would be interesting to discuss:

Comp package:

1) Issues that Lucian raised about the interplay between replacements and deletions of parent and child objects.
2) Is it possible to come up with a set of rules on replacements and deletions to ensure (or increase the probability) that a model is valid after flattening? What to do about stale references?
3) We have recently started using the comp implementation in JSBML, but we have had some issues with differences from the libsbml implementation that we would like to discuss.
4) What is needed in the next version of comp?

Arrays package:

1) Are people satisfied with the current approach or are changes needed?
2) Should arrays be in the core? Not having them in the core and requiring validity after removing the package causes some problems, such as an inability to have multiple initial assignments for different array entries.
3) How will arrays interact with comp? There are some tricky issues with references in replacements and deletions.
4) What support is needed in libsbml/JSBML to allow array prototyping to begin?

Dynamic package:

1) What biological processes must a dynamic package provide support for?
2) What is required to support the modeling of these processes? Events for birth and death? A notion of a cell's location to model movement? What else?
3) What objects need to be "dynamic"? All SBML elements or perhaps just submodels?

Ok, I think this is good for a start. Please send any additional discussion items you would like to add.

See you in Paris!

Chris
From: Sarah K. <ske...@ca...> - 2013-09-02 07:36:20
Hi members of the arrays mailing list

http://co.mbine.org/events/COMBINE_2013

During the COMBINE meeting we are trying to schedule a breakout session on the arrays package (probably together with the dynamics package). It looks like Friday afternoon (either 14.00-15.30 or 16.00-17.30 CEST) would provide the most likely timeslots.

If you are attending COMBINE and want to be in this session but these times are problematic for you, please let us know with details of when you would be available. We cannot guarantee to satisfy everyone's requirements, but obviously having the session with as many interested parties present as possible would be optimal.

It would also be useful to know if you are attending COMBINE and are likely to attend this session, so we can make sure the relevant people are informed if there are any last-minute changes.

Also, if you are not attending COMBINE but would like to attend this breakout session remotely, please let us know as well. Note that remote connection for breakouts is likely to be via Skype on someone's laptop, so we cannot guarantee amazing quality, but we can try to connect people who would like to attend remotely.

Thanks

SBML Editors
From: Chris J. M. <my...@ec...> - 2013-04-05 01:30:19
Hi,

I've tried to address some of your comments in this version. Please let me know if you have further comments.

Thanks,

Chris
From: Chris J. M. <my...@ec...> - 2013-04-04 22:33:38
Hi Stuart,

Thanks for the feedback. I do apologize about its rough state. I was planning to use it just as a conversation starter and then work out the details when there was some broad agreement. It is basically a combination of parts of the old proposals and of discussions that we have had at recent meetings.

> I've been through the spec and I have to say I found it a bit confusing. I'm not at all clear how the index class is supposed to work. Sorry.

The idea of the index class is to allow one to index into an array in an assignment. The example on pages 7 and 8 is doing the following:

  for (i=1; i < n; i++) { y(11-i) = x(i) }

(sorry, the "m" should have been an "n"). Therefore, the index is being used as the math to calculate the array index for the element being assigned to. Is this more clear?

> It might be worth expanding some of the explanations and annotating the examples inline (I use <!-- XML comments --> for that). Also, being explicit about the package namespace will help make it clear what is arrays package markup and what is core.

Good points.

> I took a look at the Shapiro proposal and I couldn't work out how much of it you reuse. It seems to me that the explicit array definitions and references are broadly OK and could be pretty much re-used as is. You changed the dimensions element. Was there a reason? The implicit stuff seems nice to me, but probably too complex for an initial implementation.

We stuck primarily with the explicit stuff, with some modifications to fix some issues we found during our discussions. I agree the implicit stuff seems nice but, we thought, a bit tricky for version one. We changed from a lower limit and an upper limit to just having one size term during one of our discussions. I don't recall the rationale (does anyone else?). I don't think it is a big deal either way. Perhaps it was simply that having one value is simpler and more like many programming languages.

> Anyway, those are my limited thoughts so far. Thanks again.
>
> One other question is how it would work with libSBML. For example, since parameters are of type double in the core, how do you get the values of parameters that are array types? The method Parameter::getValue() const returns a double. Presumably it would need to be modified to return a pointer to an array of arbitrary dimensions?

The getValue() method returns a field in SBML core which is still a single value. Therefore, it would still return a single value. If you want the elements of your array to have different values, then you will need to use an initial assignment or another mechanism to change their values. Remember, though, that the values of parameters during a simulation are NOT stored in libsbml. The simulator must create its own data structures to store these values as they evolve. The actual libsbml support would simply be some get/set functions for indices and dimensions. The objects that need these added to them are listed in the document.

Please let me know if you have further confusions and/or you see any problems with this being able to work with distributions.

Cheers,

Chris
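Purely as an illustration of the index math described above (the page 7-8 example itself is not reproduced in this thread, so every name here - arrays:dimension, arrays:index, arrays:referencedAttribute, the use of MathML's selector - is an assumption), an assignment of x(i) into y(11-i) could be written along these lines:

  <assignmentRule variable="y">
    <arrays:listOfDimensions>
      <!-- the loop variable i, running over a dimension whose size is the parameter "n" -->
      <arrays:dimension arrays:id="i" arrays:arrayDimension="0" arrays:size="n"/>
    </arrays:listOfDimensions>
    <arrays:listOfIndices>
      <!-- the element of y being assigned: y(11 - i) -->
      <arrays:index arrays:referencedAttribute="variable" arrays:arrayDimension="0">
        <math xmlns="http://www.w3.org/1998/Math/MathML">
          <apply> <minus/> <cn type="integer"> 11 </cn> <ci> i </ci> </apply>
        </math>
      </arrays:index>
    </arrays:listOfIndices>
    <!-- the value assigned: x(i), using MathML's selector operator -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <apply> <selector/> <ci> x </ci> <ci> i </ci> </apply>
    </math>
  </assignmentRule>

The index math computes where the result goes, while the rule's own math computes the value - which is the separation the index class is meant to provide.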
From: Chris J. M. <my...@ec...> - 2012-12-17 23:09:56
Hi,

Thanks for agreeing to participate in the discussion about the new SBML arrays package. To get the ball rolling, I'm attaching a VERY rough draft of a specification for the arrays package. Any and all feedback is most welcome.

Thanks again,

Chris