From: Robert M. <mu...@an...> - 2019-11-21 18:33:55
|
I didn’t have a chance to look at where minBoundary and maxBoundary are used, but for a 3D sampled-field geometry (voxels), an object might have a one voxel width in one dimension, in which case min=max for that dimension.
Best,
Bob
=======================
Robert F. Murphy, Ph.D.
Ray and Stephanie Lane Professor of Computational Biology and Professor of Biological Sciences, Biomedical Engineering, and Machine Learning
Head, Computational Biology Department, School of Computer Science, Carnegie Mellon University
7723 Gates and Hillman Centers, 5000 Forbes Ave., Pittsburgh, PA 15213
Phone: 412-268-3480, FAX: 412-268-2977, email: mu...@cm..., Skype: rfmurphy1953
Personal: Home Page (http://www.andrew.cmu.edu/user/murphy), Twitter @murphy2537
Comp Bio Department: Facebook (cmucbd), Twitter @CMUCompBio
Honorary Professor, University of Freiburg
Senior Fellow, Allen Institute for Cell Science
On Nov 21, 2019, at 9:53 AM, Lucian Smith <luc...@gm...> wrote:
> Yes, this would be the same error as max<min--it's just a question of whether the error should trigger if 'max<min' or if 'max<=min'. Sounds like I'll make it so that min must be strictly less than max (unless anyone else has an opposing use-case). Thanks! -Lucian
|
|
From: Lucian S. <luc...@gm...> - 2019-11-21 14:53:38
|
Yes, this would be the same error as max<min--it's just a question of whether the error should trigger if 'max<min' or if 'max<=min'. Sounds like I'll make it so that min must be strictly less than max (unless anyone else has an opposing use-case). Thanks!
-Lucian
On Thu, Nov 21, 2019 at 12:29 AM Frank Bergmann <fbe...@ca...> wrote:
> Hello Lucian,
>
> I can see no particular usecase for the minBoundary and maxBoundary being the same. It would basically fix the coordinate to a fixed value for when it would appear in formulas.
>
> As for the severity of the error, whether it should be a warning or an error, what are you currently doing. Is it an error not to have the max boundary defined? What happens if max < min? This seems to be in the same category.
>
> Cheers
> Frank
>
> *From:* Lucian Smith <luc...@gm...>
> *Sent:* Wednesday, November 20, 2019 6:28 PM
> *To:* The SBML L3 Spatial Processes and Geometries package discussion list <sbm...@li...>
> *Subject:* [sbml-spatial] Quick question: axis min <= max, or min < max?
>
> Hello! I'm in the process of adding validation rules to the spatial spec, and have a question: When a CoordinateComponent (the Geometry axis) has a minBoundary and a maxBoundary, is it OK if the two boundaries are equal? This is essentially reducing the dimensionality of the Geometry, but perhaps it would be useful in some situations?
>
> -Lucian
|
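As an aside for implementers, the rule settled on above (boundaryMin strictly less than boundaryMax) is straightforward to check. The following is a minimal sketch in plain Python; the dictionaries stand in for parsed CoordinateComponent objects, so the field names are illustrative assumptions and this is not the libSBML API.

# Sketch of the rule discussed above: boundaryMin must be strictly less
# than boundaryMax for every CoordinateComponent.  Plain dicts stand in
# for parsed CoordinateComponent objects; this is not the libSBML API.
def check_coordinate_components(coordinate_components):
    """Return a list of error strings; an empty list means the check passes."""
    errors = []
    for cc in coordinate_components:
        cc_id = cc.get("id", "<unnamed>")
        min_val = cc.get("boundaryMin")
        max_val = cc.get("boundaryMax")
        if min_val is None or max_val is None:
            errors.append("CoordinateComponent '%s': missing boundary" % cc_id)
        elif not (min_val < max_val):
            # Covers both max < min and the min == max case discussed above.
            errors.append(
                "CoordinateComponent '%s': boundaryMin (%s) must be strictly "
                "less than boundaryMax (%s)" % (cc_id, min_val, max_val))
    return errors

# Example: an axis with min == max is reported, matching the decision above.
print(check_coordinate_components(
    [{"id": "x", "boundaryMin": 0.0, "boundaryMax": 10.0},
     {"id": "y", "boundaryMin": 5.0, "boundaryMax": 5.0}]))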
|
From: Frank B. <fbe...@ca...> - 2019-11-21 08:28:51
|
Hello Lucian,
I can see no particular use case for the minBoundary and maxBoundary being the same. It would basically fix the coordinate to a fixed value for when it would appear in formulas.
As for the severity of the error, whether it should be a warning or an error, what are you currently doing? Is it an error not to have the max boundary defined? What happens if max < min? This seems to be in the same category.
Cheers
Frank
From: Lucian Smith <luc...@gm...>
Sent: Wednesday, November 20, 2019 6:28 PM
To: The SBML L3 Spatial Processes and Geometries package discussion list <sbm...@li...>
Subject: [sbml-spatial] Quick question: axis min <= max, or min < max?
Hello! I'm in the process of adding validation rules to the spatial spec, and have a question: When a CoordinateComponent (the Geometry axis) has a minBoundary and a maxBoundary, is it OK if the two boundaries are equal? This is essentially reducing the dimensionality of the Geometry, but perhaps it would be useful in some situations?
-Lucian
|
|
From: Lucian S. <luc...@gm...> - 2019-11-20 17:28:16
|
Hello! I'm in the process of adding validation rules to the spatial spec, and have a question: When a CoordinateComponent (the Geometry axis) has a minBoundary and a maxBoundary, is it OK if the two boundaries are equal? This is essentially reducing the dimensionality of the Geometry, but perhaps it would be useful in some situations? -Lucian |
|
From: Michael H. <mh...@li...> - 2019-10-31 22:30:13
|
I just wanted to add that it's exciting to see this progress and the publication of works that use what you have all been developing for so long!
MH
On 23 Oct 2019, at 21:19, Lucian Smith wrote:
> Hey, everyone! Congratulations to Akira for publishing a paper about XitoSBML:
> https://www.frontiersin.org/articles/10.3389/fgene.2019.01027/full
|
|
From: Lucian S. <luc...@gm...> - 2019-10-24 04:20:08
|
Hey, everyone! Congratulations to Akira for publishing a paper about XitoSBML:
https://www.frontiersin.org/articles/10.3389/fgene.2019.01027/full
It seems like now is a good time to finalize the spec. There was a Google Summer of Code student who added spatial validation to JSBML this last summer, and as part of that work, they went through the spec and pulled out some new validation rules that weren't automatically generated by Sarah's 'Deviser'. I've since gone through and added more; the final result is at:
https://docs.google.com/document/d/13lIj_KXmGxlI-F2TWkXOnF78erLyPLYrXT-fagnxFvk/edit#
I'm currently working on integrating the new validation rules into the spec itself, and into libsbml. It would be great if anyone working on SBML-spatial could check through the list to make sure the rules I'm coming up with make sense in the context of your own program.
In general, though, everything seems to be in place, and everything is working for people. What I would like to do is sit down at HARMONY in March (in Cambridge) and go through the spec item by item and ensure that everything in it has been implemented by someone, and that if two groups implement support for it, that they properly exchange the information. If we do that, I think we can call the spec finalized by the end of the meeting!
So, between now and then (any time in the next five months), it would be super helpful if people could:
* Check the new validation rules to make sure they seem OK.
* Check https://docs.google.com/spreadsheets/d/1DwcNlQxIK_zf8gAQx6tVst28dMyd3bdFtwXaHqGg118/edit#gid=0 for anything that needs to be updated
* Finish implementing anything you had planned to implement support for.
* Check to see how your tool supports the models at https://github.com/sbmlteam/sbml-spatial-models
* Upload (or email me) any other models your tool creates that you think might be useful to test.
With that, we should be able to finalize at HARMONY, and publications like Akira's should be even better positioned to make an impact. Thank you all for all your efforts over the years!
-Lucian
|
|
From: Robert M. <mu...@an...> - 2019-08-01 13:29:06
|
Thanks for this discussion. The proposed text and addition are fine with me.
Best,
Bob
=======================
Robert F. Murphy, Ph.D.
Ray and Stephanie Lane Professor of Computational Biology and Professor of Biological Sciences, Biomedical Engineering, and Machine Learning
Head, Computational Biology Department, School of Computer Science, Carnegie Mellon University
7723 Gates and Hillman Centers, 5000 Forbes Ave., Pittsburgh, PA 15213
Phone: 412-268-3480, FAX: 412-268-2977, email: mu...@cm..., Skype: rfmurphy1953
Personal: Home Page (http://www.andrew.cmu.edu/user/murphy), Twitter @murphy2537
Comp Bio Department: Facebook (cmucbd), Twitter @CMUCompBio
Honorary Professor, University of Freiburg
Senior Fellow, Allen Institute for Cell Science
On Aug 1, 2019, at 2:54 AM, Frank Bergmann <fbe...@ca...> wrote:
> How's this for text for the 'compression' attributes:
|
|
From: Frank B. <fbe...@ca...> - 2019-08-01 06:54:18
|
> How's this for text for the 'compression' attributes:
I think that makes it all clear and it works for me.
Thanks
Frank
|
|
From: Lucian S. <luc...@gm...> - 2019-07-31 19:18:00
|
How's this for text for the 'compression' attributes:
The required compression attribute is of type CompressionKind. It is used to specify the compression used when encoding the data, and can have the value “uncompressed” if no compression was used, or “deflated” if the deflation algorithm was used to compress the data. The deflation compression algorithm to be used is gzip, which adds a header to the deflated data. This algorithm is freely available. The version of the data to be compressed is the string version of the values in the array, which may consist of numbers, whitespace, commas, and semicolons.
And, for the 'dataType' attribute:
The dataType attribute is of type DataKind and is optional. It is used to specify the type of the data being stored, so that the uncompressed data can be stored in an appropriate storage type. The three main value types are “uint” for unsigned integers, “int” for signed integers, and “double” for double-precision floating point values. For backwards compatibility, and for cases where storage space might be an issue, other values may also be used: “float” to indicate single-precision (32-bit) floating point values, and “uint8”, “uint16”, and “uint32” to indicate 8-bit, 16-bit, and 32-bit unsigned integer values, respectively.
|
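To make the proposed 'compression' wording concrete, here is a minimal sketch of deflating and re-inflating the string version of a samples array with gzip, as described above. It is an illustration only (plain Python, not libSBML's implementation), and it deliberately says nothing about how the compressed bytes are then embedded in the SBML document.

# Sketch of the 'deflated' CompressionKind as proposed above: gzip-compress
# the string version of the array values, and recover them on the way back.
# Illustration only; not libSBML's implementation.
import gzip

def deflate_samples(values):
    # Serialize to the whitespace-separated string form, then gzip it.
    text = " ".join(str(v) for v in values)
    return gzip.compress(text.encode("ascii"))

def inflate_samples(blob, value_type=float):
    # Un-gzip the blob and parse the numbers back out; commas and
    # semicolons are treated as separators, per the proposed text.
    text = gzip.decompress(blob).decode("ascii")
    for sep in (",", ";"):
        text = text.replace(sep, " ")
    return [value_type(tok) for tok in text.split()]

compressed = deflate_samples([0, 0, 1.5, 2.5, 0, 0])
print(inflate_samples(compressed))  # [0.0, 0.0, 1.5, 2.5, 0.0, 0.0]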
|
From: Lucian S. <luc...@gm...> - 2019-07-31 16:50:32
|
On Wed, Jul 31, 2019 at 12:07 AM Frank Bergmann <fbe...@ca...> wrote:
> > That seems reasonable.
> >
> > Are there ever going to be sampledField objects with negative integers? If so, we clearly need 'integer' as a type.
>
> I don’t see a reason to preclude it. In fact it was one of the first questions I received from a colleague after reading the spec, that it seemed odd not to allow for a signed type as well.
>
> > If the point of the attribute is 'how much space to allocate', that seems like it might actually be useful to have our different types of unsigned integers, but if not, maybe we could just go to 'uint', 'int', and 'float'?
> >
> > If we care about space for everything, and negative numbers are a possibility, we could expand our current list of 'uint8', 'uint16', 'uint32', 'float', and 'double' to also include 'int8', 'int16', and 'int32'.
>
> I would say that as far as the spec is concerned we should not worry overly much about space and just go for the basic types uint, int, double. Even 'float' seems like an outlier, as it suggests that we just as well might have a 'byte' type as 'uint8' is a predominant type we use for image data.
So, we have two options here. One is that we remain backwards-compatible, and allow the old list, but add 'uint' and 'int', and encourage people to use those two and 'double'. The other is if we break backwards compatibility, and reduce the old list to 'uint', 'int', and 'double' alone.
Given the current state of implementation, I'm inclined to do the former. Does anyone object?
> > Does libsbml compression currently compress strings or values?
>
> libSBML currently compresses strings.
Excellent. I'll update the spec accordingly.
-Lucian
|
|
From: Frank B. <fbe...@ca...> - 2019-07-31 07:06:54
|
> That seems reasonable.
>
> Are there ever going to be sampledField objects with negative integers? If so, we clearly need 'integer' as a type.
I don’t see a reason to preclude it. In fact it was one of the first questions I received from a colleague after reading the spec, that it seemed odd not to allow for a signed type as well.
> If the point of the attribute is 'how much space to allocate', that seems like it might actually be useful to have our different types of unsigned integers, but if not, maybe we could just go to 'uint', 'int', and 'float'?
>
> If we care about space for everything, and negative numbers are a possibility, we could expand our current list of 'uint8', 'uint16', 'uint32', 'float', and 'double' to also include 'int8', 'int16', and 'int32'.
I would say that as far as the spec is concerned we should not worry overly much about space and just go for the basic types uint, int, double. Even 'float' seems like an outlier, as it suggests that we just as well might have a 'byte' type as 'uint8' is a predominant type we use for image data.
> Does libsbml compression currently compress strings or values?
libSBML currently compresses strings.
Cheers
Frank
> -Lucian
|
|
From: Lucian S. <luc...@gm...> - 2019-07-30 21:54:10
|
That seems reasonable.
Are there ever going to be sampledField objects with negative integers? If
so, we clearly need 'integer' as a type. If the point of the attribute is
'how much space to allocate', that seems like it might actually be useful
to have our different types of unsigned integers, but if not, maybe we
could just go to 'uint', 'int', and 'float'?
If we care about space for everything, and negative numbers are a
possibility, we could expand our current list of 'uint8', 'uint16',
'uint32', 'float', and 'double' to also include 'int8', 'int16', and
'int32'.
Does libsbml compression currently compress strings or values?
-Lucian
On Tue, Jul 30, 2019 at 1:05 AM Frank Bergmann <fbe...@ca...> wrote:
> Hello Lucian,
>
>
>
> Indeed the compression happens on the string level. (And just the
> compression) On the other hand I’m against ditching the dataKind attribute
> entirely, since applications may want to know what (uncompressed) data type
> to expect, so as not to loose data. You still want to get your doubles out
> if you put doubles in, and you need an indication that this is what you
> were doing. (Similarly you don’t want to needlessly allocate huge double
> arrays, when all you want is to get image data out).
>
>
>
> Best
>
> Frank
>
>
>
> *From:* Lucian Smith <luc...@gm...>
> *Sent:* Tuesday, July 30, 2019 12:04 AM
> *To:* The SBML L3 Spatial Processes and Geometries package discussion
> list <sbm...@li...>
> *Subject:* Re: [sbml-spatial] Compression & sampled fields.
>
>
>
> If I'm understanding this correctly, you are saying that in current known
> implementations, the only compression that happens is on the *string*
> level, and not for the values themselves?
>
>
>
> And is this true for the three objects for which there can be
> compression: SpatialPoints, ParametricObject, and SampledField?
>
>
>
> If so, we should probably ditch the 'dataKind' attribute entirely, and
> just say that compression happens on the string level everywhere, and
> define how to do that. We don't need to worry about int/unsigned
> int/double if we're just dealing with the raw string.
>
>
>
> -Lucian
>
>
>
>
>
>
>
>
>
> On Tue, Jul 16, 2019 at 6:07 AM Frank T. Bergmann <fbe...@ca...>
> wrote:
>
> Currently the spec on sampled fields has a flag, as to whether the
> encoded data is compressed or not. Since this feature has caused
> issues in the past (data exchanged by different tools could not be
> reliably decoded), we either need more specification of the feature,
> or I wonder whether we should drop it altogether.
>
> While I agree that the sample field elements could get rather bit, I
> wonder about the rational of compressing just this string, rather then
> the whole file.
>
> In any case, currently the spec only says:
>
> -----
>
> In 3.3.7: The \primtype{CompressionKind} primitive data type is used
> in the definition of the \SampledField and \ParametricObject classes.
> It is derived from type \primtype{string} and its values are
> restricted to being one of the following possibilities:
> \val{uncompressed}, and \val{deflated}. Attributes of type
> \primtype{CompressionKind} cannot take on any other values. The
> meaning of these values is discussed in the context of the classes'
> definitions in \sec{sampledfield-class} and
> \sec{parametricobject-class}.
>
>
>
> And then in 3.45.1: The required \token{compression} attribute is of
> type \primtype{CompressionKind}. It is used to specify the compression
> used when encoding the data, and can have the value \val{uncompressed}
> if no compression was used, or \val{deflated} if the deflation
> algorithm was used to compress the text version of the data.
>
> -----
>
>
>
> How precisely the encoding takes place is not described. libSBML used
> to only read samples directly into int arrays. For decompression it
> would directly reinterpret those integers as bytes and was
> decompressing them. Coming back to this in trying to use the double
> sample fields, I had to rethink that implementation. Not wanting to
> get into things like byte ordering for doubles, I now just take the
> whole string representation of the element text representing the array
> data, and compress (using gzip deflate) or uncompress (using gzip
> inflate) it as is. In testing the compressed examples provided by
> Robert Murphy, it seems to also be the approach taken by
> CellOrganizer. In any case we should be describing that explicitly.
>
> Cheers
> Frank
|
|
From: Frank B. <fbe...@ca...> - 2019-07-30 08:06:53
|
Indeed that works for me. I had to look at your commit, as your formula got garbled, but the commit with:
$samples[x + numSamples1*y + numsamples1*numSamples2*z]$
Works for me.
Thanks
Frank
From: Lucian Smith <luc...@gm...>
Sent: Tuesday, July 30, 2019 12:48 AM
To: The SBML L3 Spatial Processes and Geometries package discussion list <sbm...@li...>
Subject: Re: [sbml-spatial] Addressing elements in sampled fields
This is a good suggestion. I've updated the spec to add the following paragraph to section 3.50.7:
The order of data points in the Samples should be the same as the dimensionality of the object, that is, first by the first (x) dimension, then by the second (y) dimension (if present), and then by the third (z) dimension (if present). Thus, the array is indexed such that to access data point (x, y, z), one would look in entry:
samples[x + numSamples1*y + numSamples1*numSamples2*z]
Will this work?
-Lucian
On Tue, Jul 16, 2019 at 6:09 AM Frank T. Bergmann <fbe...@ca...<mailto:fbe...@ca...>> wrote:
I wonder if we should add a bit of information on how to access
individual elements from the sample field. This is currently not
written down, but I think it should be:
We currently have (considering only the uncompressed case for now)
sampleLength many space separated values. These are separated into
numSamples1, numSamples2 and numSamples3. So there are numSample1
entries along the first dimension (x), numSample2 entries for the
second dimension (y) and numSample3 entries along the third dimension
(z), so to access the element at position x,y,z you would address them
using:
array[x + numSamples1*y + numSamples1*numSamples2*z]
current text in spec:
The \token{numSamples1}, \token{numSamples2}, and \token{numSamples3}
attributes represent the number of samples in each of the coordinate
components. (e.g. numX, numY, numZ) in an image dataset. These
attributes are of type \primtype{positive int} and are required to
specify the \SampledField. The samples are assumed to be uniformly
sampled. It is required to have as many \token{numSamples} attributes
as there are \CoordinateComponent elements in the \Geometry, with
\token{numSamples1} defined if there is a \token{cartesianX} element;
\token{numSamples2} defined if there is a \token{cartesianY} element,
and \token{numSamples3} defined if there is a \token{cartesianZ}
element, each attribute corresponding to the \CoordinateComponent with
the respective \token{type}.
Cheers
Frank
|
|
From: Frank B. <fbe...@ca...> - 2019-07-30 08:04:32
|
Hello Lucian,
Indeed the compression happens on the string level. (And just the compression) On the other hand I’m against ditching the dataKind attribute entirely, since applications may want to know what (uncompressed) data type to expect, so as not to lose data. You still want to get your doubles out if you put doubles in, and you need an indication that this is what you were doing. (Similarly you don’t want to needlessly allocate huge double arrays, when all you want is to get image data out).
Best
Frank
From: Lucian Smith <luc...@gm...>
Sent: Tuesday, July 30, 2019 12:04 AM
To: The SBML L3 Spatial Processes and Geometries package discussion list <sbm...@li...>
Subject: Re: [sbml-spatial] Compression & sampled fields.
If I'm understanding this correctly, you are saying that in current known implementations, the only compression that happens is on the *string* level, and not for the values themselves?
And is this true for the three objects for which there can be compression: SpatialPoints, ParametricObject, and SampledField?
If so, we should probably ditch the 'dataKind' attribute entirely, and just say that compression happens on the string level everywhere, and define how to do that. We don't need to worry about int/unsigned int/double if we're just dealing with the raw string.
-Lucian
On Tue, Jul 16, 2019 at 6:07 AM Frank T. Bergmann <fbe...@ca...<mailto:fbe...@ca...>> wrote:
Currently the spec on sampled fields has a flag, as to whether the
encoded data is compressed or not. Since this feature has caused
issues in the past (data exchanged by different tools could not be
reliably decoded), we either need more specification of the feature,
or I wonder whether we should drop it altogether.
While I agree that the sample field elements could get rather bit, I
wonder about the rational of compressing just this string, rather then
the whole file.
In any case, currently the spec only says:
-----
In 3.3.7: The \primtype{CompressionKind} primitive data type is used
in the definition of the \SampledField and \ParametricObject classes.
It is derived from type \primtype{string} and its values are
restricted to being one of the following possibilities:
\val{uncompressed}, and \val{deflated}. Attributes of type
\primtype{CompressionKind} cannot take on any other values. The
meaning of these values is discussed in the context of the classes'
definitions in \sec{sampledfield-class} and
\sec{parametricobject-class}.
And then in 3.45.1: The required \token{compression} attribute is of
type \primtype{CompressionKind}. It is used to specify the compression
used when encoding the data, and can have the value \val{uncompressed}
if no compression was used, or \val{deflated} if the deflation
algorithm was used to compress the text version of the data.
-----
How precisely the encoding takes place is not described. libSBML used
to only read samples directly into int arrays. For decompression it
would directly reinterpret those integers as bytes and was
decompressing them. Coming back to this in trying to use the double
sample fields, I had to rethink that implementation. Not wanting to
get into things like byte ordering for doubles, I now just take the
whole string representation of the element text representing the array
data, and compress (using gzip deflate) or uncompress (using gzip
inflate) it as is. In testing the compressed examples provided by
Robert Murphy, it seems to also be the approach taken by
CellOrganizer. In any case we should be describing that explicitly.
Cheers
Frank
|
|
From: Lucian S. <luc...@gm...> - 2019-07-29 22:48:18
|
This is a good suggestion. I've updated the spec to add the following
paragraph to section 3.50.7:
The order of data points in the Samples should be the same as the
dimensionality of the object, that is, first by the first (x) dimension,
then by the second (y) dimension (if present), and then by the third (z)
dimension (if present). Thus, the array is indexed such that to access
data point (x, y, z), one would look in entry:
samples[x + numSamples1*y + numSamples1*numSamples2*z]
Will this work?
-Lucian
On Tue, Jul 16, 2019 at 6:09 AM Frank T. Bergmann <fbe...@ca...>
wrote:
> I wonder if we should add a bit of information on how to access
> individual elements from the sample field. This is currently not
> written down, but I think it should be:
>
> We currently have (considering only the uncompressed case for now)
> sampleLength many space separated values. These are separated into
> numSamples1, numSamples2 and numSamples3. So there are numSample1
> entries along the first dimension (x), numSample2 entries for the
> second dimension (y) and numSample3 entries along the third dimension
> (z), so to access the element at position x,y,z you would address them
> using:
>
> array[x + numSamples1*y + numSamples1*numSamples2*z]
>
>
> current text in spec:
>
> The \token{numSamples1}, \token{numSamples2}, and \token{numSamples3}
> attributes represent the number of samples in each of the coordinate
> components. (e.g. numX, numY, numZ) in an image dataset. These
> attributes are of type \primtype{positive int} and are required to
> specify the \SampledField. The samples are assumed to be uniformly
> sampled. It is required to have as many \token{numSamples} attributes
> as there are \CoordinateComponent elements in the \Geometry, with
> \token{numSamples1} defined if there is a \token{cartesianX} element;
> \token{numSamples2} defined if there is a \token{cartesianY} element,
> and \token{numSamples3} defined if there is a \token{cartesianZ}
> element, each attribute corresponding to the \CoordinateComponent with
> the respective \token{type}.
>
> Cheers
> Frank
|
|
From: Lucian S. <luc...@gm...> - 2019-07-29 22:04:46
|
If I'm understanding this correctly, you are saying that in current known
implementations, the only compression that happens is on the *string*
level, and not for the values themselves?
And is this true for the three objects for which there can be compression:
SpatialPoints, ParametricObject, and SampledField?
If so, we should probably ditch the 'dataKind' attribute entirely, and just
say that compression happens on the string level everywhere, and define how
to do that. We don't need to worry about int/unsigned int/double if we're
just dealing with the raw string.
-Lucian
On Tue, Jul 16, 2019 at 6:07 AM Frank T. Bergmann <fbe...@ca...>
wrote:
> Currently the spec on sampled fields has a flag, as to whether the
> encoded data is compressed or not. Since this feature has caused
> issues in the past (data exchanged by different tools could not be
> reliably decoded), we either need more specification of the feature,
> or I wonder whether we should drop it altogether.
>
> While I agree that the sample field elements could get rather bit, I
> wonder about the rational of compressing just this string, rather then
> the whole file.
>
> In any case, currently the spec only says:
>
> -----
>
> In 3.3.7: The \primtype{CompressionKind} primitive data type is used
> in the definition of the \SampledField and \ParametricObject classes.
> It is derived from type \primtype{string} and its values are
> restricted to being one of the following possibilities:
> \val{uncompressed}, and \val{deflated}. Attributes of type
> \primtype{CompressionKind} cannot take on any other values. The
> meaning of these values is discussed in the context of the classes'
> definitions in \sec{sampledfield-class} and
> \sec{parametricobject-class}.
>
>
>
> And then in 3.45.1: The required \token{compression} attribute is of
> type \primtype{CompressionKind}. It is used to specify the compression
> used when encoding the data, and can have the value \val{uncompressed}
> if no compression was used, or \val{deflated} if the deflation
> algorithm was used to compress the text version of the data.
>
> -----
>
>
>
> How precisely the encoding takes place is not described. libSBML used
> to only read samples directly into int arrays. For decompression it
> would directly reinterpret those integers as bytes and was
> decompressing them. Coming back to this in trying to use the double
> sample fields, I had to rethink that implementation. Not wanting to
> get into things like byte ordering for doubles, I now just take the
> whole string representation of the element text representing the array
> data, and compress (using gzip deflate) or uncompress (using gzip
> inflate) it as is. In testing the compressed examples provided by
> Robert Murphy, it seems to also be the approach taken by
> CellOrganizer. In any case we should be describing that explicitly.
>
> Cheers
> Frank
|
|
From: 森田慶一 <mor...@ke...> - 2019-07-17 02:21:28
|
|
From: Frank T. B. <fbe...@ca...> - 2019-07-16 13:09:12
|
I wonder if we should add a bit of information on how to access
individual elements from the sample field. This is currently not
written down, but I think it should be:
We currently have (considering only the uncompressed case for now)
sampleLength many space separated values. These are separated into
numSamples1, numSamples2 and numSamples3. So there are numSample1
entries along the first dimension (x), numSample2 entries for the
second dimension (y) and numSample3 entries along the third dimension
(z), so to access the element at position x,y,z you would address them
using:
array[x + numSamples1*y + numSamples1*numSamples2*z]
current text in spec:
The \token{numSamples1}, \token{numSamples2}, and \token{numSamples3}
attributes represent the number of samples in each of the coordinate
components. (e.g. numX, numY, numZ) in an image dataset. These
attributes are of type \primtype{positive int} and are required to
specify the \SampledField. The samples are assumed to be uniformly
sampled. It is required to have as many \token{numSamples} attributes
as there are \CoordinateComponent elements in the \Geometry, with
\token{numSamples1} defined if there is a \token{cartesianX} element;
\token{numSamples2} defined if there is a \token{cartesianY} element,
and \token{numSamples3} defined if there is a \token{cartesianZ}
element, each attribute corresponding to the \CoordinateComponent with
the respective \token{type}.
Cheers
Frank
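As a sanity check on the addressing convention above, the following sketch (plain Python, illustration only, not from the spec or from libSBML) shows that walking the flat sample array with x varying fastest, then y, then z reproduces exactly the formula array[x + numSamples1*y + numSamples1*numSamples2*z].

# Sketch of the SampledField addressing discussed above: the flat array is
# ordered with x fastest, then y, then z, so element (x, y, z) sits at
# samples[x + numSamples1*y + numSamples1*numSamples2*z].
numSamples1, numSamples2, numSamples3 = 4, 3, 2      # nx, ny, nz
samples = list(range(numSamples1 * numSamples2 * numSamples3))
assert len(samples) == numSamples1 * numSamples2 * numSamples3  # sampleLength

def sample_at(x, y, z):
    # Flat-index formula from the discussion above.
    return samples[x + numSamples1 * y + numSamples1 * numSamples2 * z]

# Rebuild a nested [z][y][x] view by walking the flat list in storage order
# (x fastest); it must agree with the formula at every grid point.
it = iter(samples)
nested = [[[next(it) for x in range(numSamples1)]
           for y in range(numSamples2)]
          for z in range(numSamples3)]
assert all(nested[z][y][x] == sample_at(x, y, z)
           for z in range(numSamples3)
           for y in range(numSamples2)
           for x in range(numSamples1))
print(sample_at(3, 2, 1))  # 23, the last element in this example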
|
|
From: Frank T. B. <fbe...@ca...> - 2019-07-16 13:06:56
|
Currently the spec on sampled fields has a flag, as to whether the
encoded data is compressed or not. Since this feature has caused
issues in the past (data exchanged by different tools could not be
reliably decoded), we either need more specification of the feature,
or I wonder whether we should drop it altogether.
While I agree that the sample field elements could get rather big, I
wonder about the rationale of compressing just this string, rather than
the whole file.
In any case, currently the spec only says:
-----
In 3.3.7: The \primtype{CompressionKind} primitive data type is used
in the definition of the \SampledField and \ParametricObject classes.
It is derived from type \primtype{string} and its values are
restricted to being one of the following possibilities:
\val{uncompressed}, and \val{deflated}. Attributes of type
\primtype{CompressionKind} cannot take on any other values. The
meaning of these values is discussed in the context of the classes'
definitions in \sec{sampledfield-class} and
\sec{parametricobject-class}.
And then in 3.45.1: The required \token{compression} attribute is of
type \primtype{CompressionKind}. It is used to specify the compression
used when encoding the data, and can have the value \val{uncompressed}
if no compression was used, or \val{deflated} if the deflation
algorithm was used to compress the text version of the data.
-----
How precisely the encoding takes place is not described. libSBML used
to only read samples directly into int arrays. For decompression it
would directly reinterpret those integers as bytes and was
decompressing them. Coming back to this in trying to use the double
sample fields, I had to rethink that implementation. Not wanting to
get into things like byte ordering for doubles, I now just take the
whole string representation of the element text representing the array
data, and compress (using gzip deflate) or uncompress (using gzip
inflate) it as is. In testing the compressed examples provided by
Robert Murphy, it seems to also be the approach taken by
CellOrganizer. In any case we should be describing that explicitly.
Cheers
Frank
|
|
From: Frank T. B. <fbe...@ca...> - 2019-07-16 13:00:36
|
Originally, samplefields were used only to encode the image data for
sample field geometries. However, in order to facilitate the encoding
of fields for initial conditions, we did add the double and float
types as well. It came to mind that having only unsigned int types (3
of them!) but no signed one seemed a bit odd.
I would propose to add an ‘integer’ option. Additionally, I wonder if
we really need 3 unsigned int types, would not one be sufficient? It
seems like tools would be able to interconvert between smaller types
if needed?
The current spec section is 3.3.5:
The \primtype{DataKind} primitive data type is used in the definition
of the \SampledField and \ParametricGeometry classes. It is derived
from type \primtype{string} and its values are restricted to being one
of the following possibilities: \val{double}, \val{float},
\val{uint8}, \val{uint16}, and \val{uint32}. Attributes of type
\primtype{DataKind} cannot take on any other values. The meaning of
these values is discussed in the context of the classes' definitions
in \sec{sampledfield-class} and \sec{parametricgeometry-class}.
cheers
Frank
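For orientation, here is a small sketch of how a tool might map these DataKind values onto concrete storage types when parsing an uncompressed value string. The mapping is an assumption about a reasonable implementation (it also includes the generic 'int' and 'uint' values proposed in this thread, with assumed 32-bit widths); it is not defined by the spec or by libSBML, and it uses NumPy purely for convenience.

# Sketch: map dataType (DataKind) values onto storage types and parse an
# uncompressed, whitespace-separated samples string into an array.
# The mapping below is illustrative, not something the spec defines; the
# widths chosen for the generic 'int'/'uint' values are assumptions.
import numpy as np

DATAKIND_TO_DTYPE = {
    "uint8": np.uint8,
    "uint16": np.uint16,
    "uint32": np.uint32,
    "uint": np.uint32,    # assumed width for the proposed generic 'uint'
    "int": np.int32,      # assumed width for the proposed generic 'int'
    "float": np.float32,
    "double": np.float64,
}

def parse_samples(text, data_type="double"):
    # Split the element text on whitespace and cast to the requested type.
    return np.array(text.split(), dtype=DATAKIND_TO_DTYPE[data_type])

print(parse_samples("0 0 255 128 0 0", data_type="uint8"))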
|
|
From: bhavye j. <bh...@gm...> - 2019-05-20 16:09:38
|
Dear All,
I am Bhavye Jain, an undergraduate computer science student at the Indian Institute of Technology (Roorkee) in India. I have recently completed my sophomore year, and I am participating in the Google Summer of Code, 2019, under the organization National Resource for Network Biology (NRNB <https://nrnb.org>). My project for this summer is "Validation of Spatial Systems Biology Models in Java(TM) <https://github.com/nrnb/GoogleSummerOfCode/issues/120>." This project is being mentored by Nicolas Rodriguez, Thomas M. Hamm, and Andreas Dräger.
The JSBML offline validator is a self-contained SBML validation facility which checks syntax and internal consistency of the SBML file. My project aims to improve this facility by building the validation constraints required to validate SBML models that use a spatial component. Currently, the offline validator is incomplete and not suitable for production use. Validation rules for spatial are taken from the latest specification of the package. For details on the project, consider reading my proposal for GSoC 2019.
The URLs for my project proposal, blog, and the GitHub repository are provided below:
Project Proposal: https://tinyurl.com/y2bg9l3b
Blog: https://gsoc19-spatial.blogspot.com
GitHub repository: https://github.com/sbmlteam/jsbml
Thank You.
--
Bhavye Jain
B.Tech Student
Department of Computer Science and Engineering
Indian Institute of Technology, Roorkee
|
|
From: Lucian S. <luc...@gm...> - 2019-03-26 21:19:10
|
Hello! We're all set up here at HARMONY to have remote attendees, thanks to Mike Hucka. Here's the URL:
https://hangouts.google.com/call/OGU19VPit_Omt4pzP4T4AEEE
-Lucian
|
|
From: Lucian S. <luc...@gm...> - 2019-02-19 18:09:25
|
On Tue, Feb 19, 2019 at 10:08 AM Robert Murphy <mu...@an...> wrote:
> Dear all:
>
> I’d love to participate in the discussion but I won’t be at HARMONY this year. Can I join by zoom or equivalent?
Absolutely. Will you be free Tuesday afternoon Pacific? (I think it'll be Pacific Standard at that point.)
-Lucian
|
|
From: Robert M. <mu...@an...> - 2019-02-19 18:04:52
|
Dear all:
I’d love to participate in the discussion but I won’t be at HARMONY this year. Can I join by zoom or equivalent?
Best,
Bob
=======================
Robert F. Murphy, Ph.D.
Ray and Stephanie Lane Professor of Computational Biology and Professor of Biological Sciences, Biomedical Engineering, and Machine Learning
Head, Computational Biology Department, School of Computer Science, Carnegie Mellon University
7723 Gates and Hillman Centers, 5000 Forbes Ave., Pittsburgh, PA 15213
Phone: 412-268-3480, FAX: 412-268-2977, email: mu...@cm..., Skype: rfmurphy1953
Personal: Home Page (http://www.andrew.cmu.edu/user/murphy), Twitter @murphy2537
Comp Bio Department: Facebook (cmucbd), Twitter @CMUCompBio
Honorary Professor, University of Freiburg
Senior Fellow, Allen Institute for Cell Science
On Feb 19, 2019, at 4:22 AM, Frank T. Bergmann <fra...@gm...> wrote:
> I’ll be there on the Tuesday
|
|
From: Frank T. B. <fra...@gm...> - 2019-02-19 09:22:28
|
I’ll be there on the Tuesday
Cheers
Frank
From: Lucian Smith <luc...@gm...>
Sent: Monday, February 18, 2019 9:47 PM
To: The SBML L3 Spatial Processes and Geometries package discussion list <sbm...@li...>
Subject: [sbml-spatial] Meeting at HARMONY
Hey, everyone! We should claim a time at HARMONY to meet, both with whoever is there in Pasadena, and with whoever would like to join online. Does anyone have any schedule restrictions? Might Tuesday, March 26th work for people?
-Lucian
|