sparrow-devel Mailing List for Sparrow: Java Data Objects (Page 2)
Status: Inactive
From: Joel S. <jo...@ik...> - 2001-11-14 16:30:33
On Wed, 2001-11-14 at 02:56, Geir Ove Grønmo wrote:

> Exactly. :)
>
> Two questions:
>
> 1. Has anybody started on the design of the JDO query support?

Not yet. You're welcome to do it.

> 2. Is there a computer-readable form of the JDO Query BNF available anywhere?

Not sure, but you can download it from the JDO page. If it's in PDF format, it should be convertible to another format. What format do you need it in?

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Geir O. G. <gr...@on...> - 2001-11-14 10:56:36
* Andy Lewis

| >> As you can see I would like to see an abstract query representation first, and then map to the actual querying syntax. This should make it much easier to extend and optimize the queries generated for the persistent backend - and of course code reuse.
| >
| > What sort of abstract query representation are you referring to? The object model you refer to creating? Sounds good.

Yes, the object instance that is the result of having parsed the JDO query. The class of this instance would be an implementation of the abstract query object model interfaces. This query instance would then be interpreted by a backend-specific query builder that, for example, would generate the SQL statement to be issued against a relational database.

| This is exactly what is needed here. The only way to really achieve the pluggability that is being discussed would be to create something that abstract. The query parser can make very few assumptions about the type of data store it will be processed against.

Exactly. :)

Two questions:

1. Has anybody started on the design of the JDO query support?

2. Is there a computer-readable form of the JDO Query BNF available anywhere?

Cheers,
Geir O.
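To illustrate the backend-specific query builder idea from the message above, here is a rough Java sketch. All of the interface and class names are hypothetical placeholders, not existing Sparrow code or JDO spec types; a real builder would of course consult the O/R mapping metadata.

    // Hypothetical abstract query model produced by the JDO query parser;
    // the names are illustrative only.
    interface QueryNode {
        Object accept(QueryVisitor visitor);
    }

    interface QueryVisitor {
        Object visitComparison(String field, String op, Object value);
        Object visitAnd(QueryNode left, QueryNode right);
    }

    // Backend-specific builder: interprets the query model and emits an SQL
    // WHERE clause fragment for a relational backend.
    class SqlWhereBuilder implements QueryVisitor {
        public Object visitComparison(String field, String op, Object value) {
            // A real builder would map field -> column via the O/R metadata
            // and register the value for later JDBC parameter binding.
            return field + " " + op + " ?";
        }
        public Object visitAnd(QueryNode left, QueryNode right) {
            return "(" + left.accept(this) + " AND " + right.accept(this) + ")";
        }
    }

An OQL or pure object-store builder would implement the same visitor interface against the same parsed query, which is where the code reuse comes from.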
From: Andy L. <aj...@as...> - 2001-11-13 18:40:29
>> As you can see I would like to see an abstract query representation first, and then map to the actual querying syntax. This should make it much easier to extend and optimize the queries generated for the persistent backend - and of course code reuse.
>
> What sort of abstract query representation are you referring to? The object model you refer to creating? Sounds good.

This is exactly what is needed here. The only way to really achieve the pluggability that is being discussed would be to create something that abstract. The query parser can make very few assumptions about the type of data store it will be processed against.

>> Instead of developing this myself as proprietary code I would very much like to contribute to Sparrow's querying framework.
>
> Sounds great. We would welcome it.

Fantastic...
From: Joel S. <jo...@ik...> - 2001-11-13 17:42:11
Geir Ove Grønmo wrote:

> Is "making it a framework to ease the development of further JDO implementations" a goal of the project?

As part of the development process, we're going to build something that would be a good framework, yes. As we discussed a couple weeks ago, our goal will be a very modular design. You could plug in, mix and match implementations of different modules. That's the goal anyway.

> It is my hope that Sparrow will be something similar.
>
> Lately I've developed an object-relational mapping framework, and have been thinking about how to represent queries in terms of object models. JDO Query looks like a perfect fit. The nice thing about doing this is that the query is independent of how the object model is persisted.
>
> So, the actual tasks that I've identified and would like to pursue are:
>
>  - Design an object model for JDO queries (using Java interfaces). Implementations of this object model will then be used for representing the queries in terms of objects.
>
>  - Write a JDO query parser, whose output is a JDO Query object model instance.
>
>  - A query builder for each persistent backend. The query builder would traverse the query object model to generate a query optimized for the persistent backend, e.g. SQL or OQL. Ideally this query builder would create queries tuned for individual RDBMSes, like PostgreSQL or Oracle, and perhaps also use statistics, genetic algorithms and indexes to ensure optimal performance.
>
> As you can see I would like to see an abstract query representation first, and then map to the actual querying syntax. This should make it much easier to extend and optimize the queries generated for the persistent backend - and of course code reuse.

What sort of abstract query representation are you referring to? The object model you refer to creating? Sounds good.

> Instead of developing this myself as proprietary code I would very much like to contribute to Sparrow's querying framework.

Sounds great. We would welcome it.

-joel
From: Geir O. G. <gr...@on...> - 2001-11-13 10:42:02
Hello all!

I've been reading this list for about half a year, but have not posted until now. I'll start by introducing myself: 28 years old, Norwegian. Particular interests are knowledge databases and persistence frameworks. Co-founder and developer for Ontopia[1], a Norwegian company specializing in knowledge representation tools using a new standard called Topic Maps. [Pretty cool stuff.]

Anyway, my current interest in Sparrow is the querying functionality, but also that it is [to become] a complete JDO implementation - if not to say a set of implementations(?). :)

Is "making it a framework to ease the development of further JDO implementations" a goal of the project? I can vaguely remember that Tech Trader writes something similar in their Kodo JDO documentation. They seem to be targeting the OEM market from what I could see. They're selling their JDO framework to customers who can then create their own JDO implementations. This is very much in the same spirit as jxDBCon[2], which states that jxDBCon is "a framework to ease the development of JDBC drivers". It is my hope that Sparrow will be something similar.

Lately I've developed an object-relational mapping framework, and have been thinking about how to represent queries in terms of object models. JDO Query looks like a perfect fit. The nice thing about doing this is that the query is independent of how the object model is persisted.

So, the actual tasks that I've identified and would like to pursue are:

 - Design an object model for JDO queries (using Java interfaces). Implementations of this object model will then be used for representing the queries in terms of objects.

 - Write a JDO query parser, whose output is a JDO Query object model instance.

 - A query builder for each persistent backend. The query builder would traverse the query object model to generate a query optimized for the persistent backend, e.g. SQL or OQL. Ideally this query builder would create queries tuned for individual RDBMSes, like PostgreSQL or Oracle, and perhaps also use statistics, genetic algorithms and indexes to ensure optimal performance.

As you can see I would like to see an abstract query representation first, and then map to the actual querying syntax. This should make it much easier to extend and optimize the queries generated for the persistent backend - and of course code reuse.

Instead of developing this myself as proprietary code I would very much like to contribute to Sparrow's querying framework.

All the best,
Geir O.

[1] http://www.ontopia.net
[2] http://sourceforge.net/projects/jxdbcon/
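As a very rough illustration of the first two tasks listed in the message above, the Java interfaces might take a shape like the following. Every name here is a hypothetical placeholder rather than anything from the JDO specification or existing Sparrow code.

    // Root of a parsed query: a candidate class plus a filter expression tree.
    interface ParsedQuery {
        Class getCandidateClass();
        Expression getFilter();      // null would mean "no filter"
    }

    // A node in the filter expression tree.
    interface Expression {
        String getOperator();        // e.g. "==", "&&", a field access, a literal
        Expression[] getOperands();
    }

    // Turns JDOQL filter text into the object model above.
    interface JdoQueryParser {
        ParsedQuery parse(Class candidateClass, String filter);
    }

    // One implementation per persistent backend (SQL, OQL, object store...).
    interface QueryBuilder {
        // Translates the abstract query into whatever the backend executes.
        Object buildNativeQuery(ParsedQuery query);
    }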
From: Joel S. <jo...@ik...> - 2001-11-12 09:01:55
> > Tech Trader is being used by Kodo (and looks like it was developed by someone from the same company) and so they might not like us using it in a competing product.
>
> They released it BSD - they had to expect it would be used. If it is the best technical choice I wouldn't rule it out because of who wrote it.

True.

> > The one from ConvenientStore: I'm concerned it might have been done just for that purpose and might not be a general-use, maintained library.
>
> Not as familiar with it.

I looked through them all again and it appears that they are all similar enough that I can't tell much of a difference. I'm thinking of using the one that's already in ConvenientStore, as there is also work on an enhancer using it already available there, too.

I would suggest we keep the underlying library calls separated so it would be easy to swap in a different bytecode manipulator in the future. It would be nice to have a higher-level API on top of the bytecode manipulator anyway.

> > I took a look at jclasslib and it looked a little too low level for what I'm interested in. Also it was developed by one person, it seems.
> >
> > So, I'm thinking gnu.bytecode is what to use. There is a community using it. There might be a license issue but:
> >
> > 1) I'll ask them if it would be okay for us to use it in an LGPL project.
> >
> > 2) I've been considering switching to GPL for Sparrow.
>
> That would concern me. That definitely limits the ability to use the tool in commercial situations. Realize that the FSF does not take a casual view of the license and tends towards the strictest interpretation of it. I remain a fan of BSD-style licenses.

Yeah, I'm still undecided regarding the license. I'm even considering doing a dual-license thing--release it GPL generally, and negotiate other licenses to support further development--similar to what Sleepycat Software has done with Berkeley DB. Unfortunately, that would mean contributors would have to assign copyright over.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
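One possible shape for the separation suggested above: hide the bytecode manipulator behind a small project-owned interface, and confine the gnu.bytecode (or other library) calls to one adapter class. The names below are hypothetical, and the adapter body is only a placeholder:

    // Facade seen by the rest of the enhancer code; nothing outside the
    // adapter ever imports the underlying bytecode library.
    interface ClassEnhancer {
        // Takes the raw bytes of a compiled class and returns the bytes of
        // the same class with persistence support woven in.
        byte[] enhance(String className, byte[] classFile);
    }

    // One adapter per library, e.g. a gnu.bytecode-backed implementation.
    // Swapping libraries later means writing one new adapter class, not
    // touching the callers.
    class GnuBytecodeEnhancer implements ClassEnhancer {
        public byte[] enhance(String className, byte[] classFile) {
            // ... parse classFile with the library, add the required fields
            // and methods, and re-emit the modified bytes ...
            return classFile;   // placeholder
        }
    }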
From: Joel S. <jo...@ik...> - 2001-11-12 08:30:29
On Sat, 2001-11-10 at 18:33, Andy Lewis wrote:

> ok, so as a starting point: Ant, JUnit, AspectJ, Log4J (I definitely agree with this one)

I would suggest, though, using Log4J simply as the logger that we call from within the aspects. I'm not sure I want Log4J objects/calls scattered around the code. Have to think about that, though.

> How about Xerces as an XML parser? Other preferences?

I would suggest JAXP. It's been a little while since I've messed with Xerces, but last I was looking at it (and yes, it's been over a year) it had known memory leaks and such.

-joel
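For reference, going through JAXP keeps the parser pluggable: code touches only the javax.xml.parsers factory API, and Xerces (or any other compliant parser) can be dropped in or swapped via configuration. A minimal example; the MetadataLoader class name and the idea of parsing an XML metadata file here are just for illustration:

    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import java.io.File;

    public class MetadataLoader {
        // Parses an XML metadata file; the concrete parser is whatever
        // implementation JAXP locates at runtime.
        public static Document load(File metadataFile) throws Exception {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setValidating(false);   // turn on once a DTD is in place
            DocumentBuilder builder = factory.newDocumentBuilder();
            return builder.parse(metadataFile);
        }
    }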
From: Andy L. <aj...@as...> - 2001-11-11 04:28:07
Joel Shellman wrote:

> Tech Trader is being used by Kodo (and looks like it was developed by someone from the same company) and so they might not like us using it in a competing product.

They released it BSD - they had to expect it would be used. If it is the best technical choice I wouldn't rule it out because of who wrote it.

> The one from ConvenientStore: I'm concerned it might have been done just for that purpose and might not be a general-use, maintained library.

Not as familiar with it.

> So the choice is really between gnu.bytecode and jclasslib (unless someone knows of another one?).

Not currently.

> I took a look at jclasslib and it looked a little too low level for what I'm interested in. Also it was developed by one person, it seems.
>
> So, I'm thinking gnu.bytecode is what to use. There is a community using it. There might be a license issue but:
>
> 1) I'll ask them if it would be okay for us to use it in an LGPL project.
>
> 2) I've been considering switching to GPL for Sparrow.

That would concern me. That definitely limits the ability to use the tool in commercial situations. Realize that the FSF does not take a casual view of the license and tends towards the strictest interpretation of it. I remain a fan of BSD-style licenses.

> 3) We can release our code LGPL and the bytecode enhancer would just be dependent on GPLed code. Which means if someone wanted to use Sparrow in a situation that conformed to LGPL but not GPL, they would have to rewrite that part to use a different bytecode manipulation library.
From: Andy L. <aj...@as...> - 2001-11-11 02:33:54
Joel Shellman wrote:

> I know not everyone is like me, but many people do join OSS projects to learn--so adding the ability to learn how to use AspectJ might be a valuable proposition for them.

I'm not strongly averse - in fact, contrary to what I'm sure my earlier comment sounded like, provided the decision is made with a solid attempt to understand the implications, and a determination that those are acceptable outcomes, I'm fine with it. I see it definitely adding value, though I am not sure it will do as much as you hope...

> Right. I think Ant as a build tool is pretty standard these days and I don't know any reason to choose something else.
>
> Same thing with JUnit. Unless someone has a better suggestion, I think we'll stick with JUnit.
>
> I was planning on using Log4J for logging... might still want to, but I think it would be very valuable to use aspects for that. Maybe both together...

OK, so as a starting point: Ant, JUnit, AspectJ, Log4J (I definitely agree with this one). How about Xerces as an XML parser? Other preferences?

> At this point, I think a "general consensus reached via mailing list (or some other tool) punctuated by the benevolent dictator (me) stamping approval (so we know when it's decided) or resolving disputes (so we can get to a decision when things get hairy)" would work. How does that sound? I think in most cases general consensus will govern.

That's clear. Consensus when consensus can be reached, and an identified authoritarian role for resolution of durable conflicts.
From: Joel S. <jo...@ik...> - 2001-11-10 23:37:16
Okay, with the new domain name we now have, I think we'll use that as our package structure to make things simple. So under org.sparrowdb.* will be all things Sparrow. Initially, I think we'll probably want:

org.sparrowdb
org.sparrowdb.core: core implementation of spec classes
org.sparrowdb.or: classes specific to O/R mappings
org.sparrowdb.or.query: query engine classes specific to O/R mapping
org.sparrowdb.odb: classes specific to object storage
org.sparrowdb.odb.query: query engine classes specific to object storage

We could then have multiple vendor-specific classes under:

org.sparrowdb.vendor.[vendor name]
org.sparrowdb.vendor.[vendor name].[module]

And other appropriate packages following the same paradigm. Any comments? I'm going to rethink it again, but I wanted to get a conversation started.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Joel S. <jo...@ik...> - 2001-11-10 23:33:35
Tech Trader is being used by Kodo (and looks like it was developed by someone from the same company) and so they might not like us using it in a competing product.

The one from ConvenientStore: I'm concerned it might have been done just for that purpose and might not be a general-use, maintained library.

So the choice is really between gnu.bytecode and jclasslib (unless someone knows of another one?).

I took a look at jclasslib and it looked a little too low level for what I'm interested in. Also it was developed by one person, it seems.

So, I'm thinking gnu.bytecode is what to use. There is a community using it. There might be a license issue but:

1) I'll ask them if it would be okay for us to use it in an LGPL project.

2) I've been considering switching to GPL for Sparrow.

3) We can release our code LGPL and the bytecode enhancer would just be dependent on GPLed code. Which means if someone wanted to use Sparrow in a situation that conformed to LGPL but not GPL, they would have to rewrite that part to use a different bytecode manipulation library.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Joel S. <jo...@ik...> - 2001-11-09 20:14:54
On Fri, 2001-11-09 at 11:53, Andy Lewis wrote:

> I have often found it frustrating when using an open source package and finding that the prerequisites to using it are high. Simplicity has great value here. I would definitely consider it valid for a development tool for removable code though. I don't think broader use of it here would be of enough value to impose the requirement.

Right... here's the other side. It kind of goes with what I mentioned in my goals email--I want to build the best software. If that means using some technology that isn't known by as many people but will significantly increase the elegance and quality of the code... I might have to err on the side of using it even if it means less help from the developer community.

And also, if I was investigating an OSS project and found they used AspectJ, I might consider jumping in for just that reason--to see a real-world example of using it so I could learn it better. So... there are two sides of the coin. And if we provide good documentation and explanation (okay... I know... I wish) it could arguably be easier for people to jump in because the code is not cluttered with all the crosscutting concerns (logging, etc.).

I know not everyone is like me, but many people do join OSS projects to learn--so adding the ability to learn how to use AspectJ might be a valuable proposition for them.

> This leads into a discussion of the development tool set - Ant? JUnit? Coding standards, if any?

Right. I think Ant as a build tool is pretty standard these days and I don't know any reason to choose something else.

Same thing with JUnit. Unless someone has a better suggestion, I think we'll stick with JUnit.

I was planning on using Log4J for logging... might still want to, but I think it would be very valuable to use aspects for that. Maybe both together...

> Not to mention project governance. What is the decision making process?

At this point, I think a "general consensus reached via mailing list (or some other tool) punctuated by the benevolent dictator (me) stamping approval (so we know when it's decided) or resolving disputes (so we can get to a decision when things get hairy)" would work. How does that sound? I think in most cases general consensus will govern.

> > 1) Should we use AspectJ at all?
>
> If it can truly add value, yes.
>
> > 2) Should we use it for only removable code?
>
> I would be inclined to say yes, only for removable code.
>
> > 3) Should we use it for "prefer non-removable" code?
>
> See above.
>
> > 4) Should we use it as much as it is useful (which I would see as far more than #3)?
>
> See above.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
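A small sketch of the "both together" idea from the message above: an AspectJ aspect does the tracing and calls Log4J underneath, so the persistence classes themselves never reference the logger. The package pattern and aspect name are assumptions, not existing code, and this is only one way to cut it:

    import org.apache.log4j.Logger;

    // Removable tracing aspect: compile it in for debug builds, leave it out
    // of release builds, and the core classes stay free of logging calls.
    public aspect TraceLogging {
        private static final Logger log = Logger.getLogger("sparrow.trace");

        // Every public method in the (hypothetical) org.sparrowdb packages.
        pointcut traced(): execution(public * org.sparrowdb..*.*(..));

        before(): traced() {
            if (log.isDebugEnabled()) {
                log.debug("enter " + thisJoinPointStaticPart.getSignature());
            }
        }

        after() returning: traced() {
            if (log.isDebugEnabled()) {
                log.debug("exit " + thisJoinPointStaticPart.getSignature());
            }
        }
    }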
From: Andy L. <aj...@as...> - 2001-11-09 19:54:12
I have often found it frustrating when using an open source package and finding that the prerequisites to using it are high. Simplicity has great value here. I would definitely consider it valid for a development tool for removable code though. I don't think broader use of it here would be of enough value to impose the requirement.

This leads into a discussion of the development tool set - Ant? JUnit? Coding standards, if any? Not to mention project governance. What is the decision making process?

> 1) Should we use AspectJ at all?

If it can truly add value, yes.

> 2) Should we use it for only removable code?

I would be inclined to say yes, only for removable code.

> 3) Should we use it for "prefer non-removable" code?

See above.

> 4) Should we use it as much as it is useful (which I would see as far more than #3)?

See above.

> Any questions, concerns?
>
> -joel
From: Joel S. <jo...@ik...> - 2001-11-09 19:13:25
On Fri, 2001-11-09 at 10:58, Andy Lewis wrote:

> I've looked at it in detail - very cool concept. It does pre-compile-time source code modification.
>
> Overall, the aspect-oriented concepts are good, and AspectJ is impressive. As a tool for test harnesses and imposing simple aspects, it would do very well. There are also Ant tasks for it readily available.

Great. Glad to hear it. Immediately, I see it useful for:

1) Debug logging (removable)
2) Non-debug logging (prefer non-removable)
3) Error handling (some removable, some prefer non-removable)
4) Security (non-removable)
5) Profiling (removable)
6) Testing (removable)

I'm sure there is more (especially things we could do with core non-removable code). However, the question is whether we should rely on it for non-removable code. Obviously the removable stuff is not such a big deal--someone could choose not to compile it in and the project still works. And even for the non-removable stuff above, someone could remove it and it still might work (except that it might not do security checks, or might not handle or report errors gracefully, etc., but it would still be functional).

So:

1) Should we use AspectJ at all?
2) Should we use it for only removable code?
3) Should we use it for "prefer non-removable" code?
4) Should we use it as much as it is useful (which I would see as far more than #3)?

Any questions, concerns?

-joel
From: Andy L. <aj...@as...> - 2001-11-09 18:58:54
I've looked at it in detail - very cool concept. It does pre-compile-time source code modification.

Overall, the aspect-oriented concepts are good, and AspectJ is impressive. As a tool for test harnesses and imposing simple aspects, it would do very well. There are also Ant tasks for it readily available.

Joel Shellman wrote:

> Anyone else ever heard of it?
>
> http://www.aspectj.com
>
> It has some very powerful uses. There's a small concern in that if we use it, it might be more difficult for others to get up to speed on the project, and it might make debugging more difficult (though I think it would make it MUCH easier because that's one of the things it does best).
>
> Anyway, you might want to take a look if you haven't heard of it. Very interesting. Crosscutting issues are often disorganized and a mess to keep consistent.
From: Joel S. <jo...@ik...> - 2001-11-09 18:44:54
Anyone else ever heard of it?

http://www.aspectj.com

It has some very powerful uses. There's a small concern in that if we use it, it might be more difficult for others to get up to speed on the project, and it might make debugging more difficult (though I think it would make it MUCH easier because that's one of the things it does best).

Anyway, you might want to take a look if you haven't heard of it. Very interesting. Crosscutting issues are often disorganized and a mess to keep consistent.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Joel S. <jo...@ik...> - 2001-11-09 17:48:18
On Fri, 2001-11-09 at 09:19, Andy Lewis wrote:

> > I'm not sure what you consider "basic", but for a serious file store, there is some complexity to the paging and indexing algorithms and structures. Though much work has already gone into that, and so much of it may be borrowed from other projects.
>
> Again, poor communication. Clearly what you are thinking of is much closer to a true ODBMS than just a simple file store (which is what I was thinking). I guess from my viewpoint, I would see something as complex as that as a separate project from the JDO implementation, rather than its focus. I would be inclined to move forward with a JDBC variant with something like HypersonicSQL for the initial phase, then add the ODBMS plugin. I've written database systems like you are discussing, paged, indexed, etc. Not small undertakings. Just a good indexing scheme can be very time consuming to develop and test. Again, I would be hesitant to make the JDO implementation itself dependent on the development of a complete database engine.

Right. Dependent? No. However, a full, true ODBMS is our goal. It's actually one of the primary reasons I started the Sparrow project. There are soooo many O/R mapping attempts out there, but I see a desperate need (at least for us, if no one else) for an excellent OSS object database. The only one I could find was Ozone. As you said, there's a lot of work behind that; however, I think we can build off of already existing code bases, so we'll be a long way there already.

> > Right. I agree I think there is a lot we can do there. And with JDO, I think we have a lot more power than, for example, entity beans do--because we can know exactly what is changed when and so have far more information about what we can cache and such.
>
> I agree....
>
> <rant>I wish all of the people out there building their entire system on fine-granularity EJBs would read the spec!!! It is specifically intended for LARGE-granularity objects, and only really adds true scalability value if you need to be able to passivate pending transactions. How many projects REALLY have THAT need? Outside of that it adds dependence on complex and generally expensive tools, runtime overhead in most cases, and makes debugging and profiling much more difficult. People should use it where it is needed, and not treat it as the end-all of the universe.</rant>

Agreed. And people have often made the argument for entity EJBs saying that they get a bad rap because people don't use them the way they're intended and so end up seeing problems.

> Among other things, JDO is what should be used for the myriad of supporting classes that sit behind a large-granularity EJB.

Which is one reason I see JDO as more interesting than EJBs at this point--many more applications, I think.

> > > Ok... so, I am not even a member of this project, I know. But I want to see it succeed. I am open to all comments, flames, etc.
> >
> > Would you like me to add you? If so, what's your sourceforge id?
> >
> > I greatly appreciate your input.
>
> Given that you do not claim that even 7 of the 12 members of the project are active developers, sure. I will do whatever I can to help.
>
> My id is ascii27net

Thanks, you've been added.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
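To illustrate the "we know exactly what is changed when" point from the exchange above: in rough terms, an enhanced setter routes every write through the state manager, which then knows precisely which field of which instance is dirty and can feed that to the cache. The snippet below is a deliberately simplified picture, not the actual enhancement contract the JDO spec defines:

    // Simplified stand-in for the real state manager interface.
    interface FieldStateManager {
        void fieldChanged(Object instance, int fieldNumber, Object oldValue, Object newValue);
    }

    class Customer {
        private transient FieldStateManager stateManager;   // attached by the runtime
        private String name;                                 // field number 0

        public void setName(String name) {
            // Roughly what the bytecode enhancer would insert: the state
            // manager learns exactly which field changed and to what value,
            // so the cache only has to invalidate or write back that field.
            if (stateManager != null) {
                stateManager.fieldChanged(this, 0, this.name, name);
            }
            this.name = name;
        }
    }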
From: Andy L. <aj...@as...> - 2001-11-09 17:20:22
Joel Shellman wrote:

> developers, let alone enough for 7 "teams".

OK... not having 7 developers would make 7 teams a problem - touché.

> > The query engine effort has a huge amount of work to do before anything useful can even be done with the other components.
>
> I'm not sure I agree with this. If you look at Ozone, you'll see a useful object database that has no query engine presently. We can make a working object database without the query engine. Obviously, a query engine makes things easier and much more powerful, though, and is crucial.

I was unclear here - let me rephrase: the query engine has a huge amount of work to do before anything useful can be done with the query engine that involves the other components. I agree that although crucial, very few other features depend on it.

> > The File and Data stores should have a fairly simple job - basic I/O.
>
> I'm not sure what you consider "basic", but for a serious file store, there is some complexity to the paging and indexing algorithms and structures. Though much work has already gone into that, and so much of it may be borrowed from other projects.

Again, poor communication. Clearly what you are thinking of is much closer to a true ODBMS than just a simple file store (which is what I was thinking). I guess from my viewpoint, I would see something as complex as that as a separate project from the JDO implementation, rather than its focus. I would be inclined to move forward with a JDBC variant with something like HypersonicSQL for the initial phase, then add the ODBMS plugin. I've written database systems like you are discussing, paged, indexed, etc. Not small undertakings. Just a good indexing scheme can be very time consuming to develop and test. Again, I would be hesitant to make the JDO implementation itself dependent on the development of a complete database engine.

> Right, those types of things were what I was referring to for version 2 and such.

Agreed... future.

> > this is where the project sinks or swims.
>
> I don't think that's where it'll sink or swim considering that we plan on using it whether it can do that type of thing or not. I do agree it will be very useful to do that and that much of the other work we can probably get from other projects.

OK... understandable given the plan for the complete backend engine. As a stand-alone JDO tool, though, to be used to plug into other data sources, I retain my opinion. :)

> Right. I agree I think there is a lot we can do there. And with JDO, I think we have a lot more power than, for example, entity beans do--because we can know exactly what is changed when and so have far more information about what we can cache and such.

I agree....

<rant>I wish all of the people out there building their entire system on fine-granularity EJBs would read the spec!!! It is specifically intended for LARGE-granularity objects, and only really adds true scalability value if you need to be able to passivate pending transactions. How many projects REALLY have THAT need? Outside of that it adds dependence on complex and generally expensive tools, runtime overhead in most cases, and makes debugging and profiling much more difficult. People should use it where it is needed, and not treat it as the end-all of the universe.</rant>

Among other things, JDO is what should be used for the myriad of supporting classes that sit behind a large-granularity EJB.

> > Ok... so, I am not even a member of this project, I know. But I want to see it succeed. I am open to all comments, flames, etc.
>
> Would you like me to add you? If so, what's your sourceforge id?
>
> I greatly appreciate your input.

Given that you do not claim that even 7 of the 12 members of the project are active developers, sure. I will do whatever I can to help.

My id is ascii27net
From: Joel S. <jo...@ik...> - 2001-11-09 16:53:59
On Fri, 2001-11-09 at 07:42, Andy Lewis wrote:

> A couple of comments on this - I think there is a good bit more parallel activity that can occur.

I didn't mean to imply a lack of parallelism, just an idea of what we might focus on first and such. Consider--we don't even have 7 active developers, let alone enough for 7 "teams".

> I would break it into parallel sub-teams:
>
> 1 - Core Team - this would be for the PersistenceManager and Factory, StateManager, and the JDOHelper class
> 2 - Bytecode Enhancer - command line and Java API, and then class loader
> 3 - Query Engine - this will probably be a longer development cycle, start early
> 4 - File Store - this is the simplest pluggable backend module
> 5 - Relational Store - this can actually be deferred a bit, but will be of more value in the end than the File Store
> 6 - Mapper - impedance handling - O/R, hierarchical, etc.
> 7 - Cache Manager - I break this out now because even though a simple implementation will work, a tunable cache later will be key for performance

Right, and there would be even more finely grained pluggable modules within most of those divisions.

> Basic dependencies between teams:
>
> The Core can be fully developed and tested without the enhancer, using hand-coded PersistenceAware classes, which is a spec'd feature anyway, and will be much cleaner for testing and debugging. This is where the spec compliance is most critical. By decoupling it from the rest of the components, you should be able to reach lifecycle compliance much quicker.
>
> The bytecode enhancer can also be done independently using a simple test harness for the resulting classes to verify functionality. This is also a good idea anyway for testing and debugging. Although a critical part of the spec, this is really a developer convenience to avoid hand-coding the interface implementation and JDOHelper calls in every class. It may even be worth considering a tool that would impose the implementation on a class SOURCE file, allowing a developer who needs a hand-coded instance to customize from there.
>
> The query engine effort has a huge amount of work to do before anything useful can even be done with the other components.

I'm not sure I agree with this. If you look at Ozone, you'll see a useful object database that has no query engine presently. We can make a working object database without the query engine. Obviously, a query engine makes things easier and much more powerful, though, and is crucial.

> The File and Data stores should have a fairly simple job - basic I/O.

I'm not sure what you consider "basic", but for a serious file store, there is some complexity to the paging and indexing algorithms and structures. Though much work has already gone into that, and so much of it may be borrowed from other projects.

> This is also an area where pluggability pays off the most. Later there can be data stores for LDAP, XML, optimizations for specific RDBMSs, as well as possible abstractions for things like CVS or DAV. It might even be worth looking at some of the VFS concepts from the NetBeans codebase.

Right, those types of things were what I was referring to for version 2 and such.

> The Mapper will initially only have to worry about object/relational - but what about when you want to map an object graph to XML, which is instead hierarchical? This is a piece where a lot of work has been done to draw upon (Castor, JBoss JAWS, etc.), some standards exist (ODMG), and there are some interesting commercial references (VBSF, Kodo). IMHO, this is where the project sinks or swims.

I don't think that's where it'll sink or swim, considering that we plan on using it whether it can do that type of thing or not. I do agree it will be very useful to do that and that much of the other work we can probably get from other projects.

> The cache can be initially stubbed in to always be empty - doesn't get any easier than that. In the future, however, there are immense possibilities. People often seem to think that a basic weak-reference cache is all you can do - not true. You can tune caching parameters for how long, or on what terms, an object stays after references expire, how often to check for optimistic locking conflicts, etc. You can also establish separate caches and cache policies for different objects. This is the real power. Small data sets, for example, that are regularly used and rarely updated can cache longer and check infrequently. This adds a lot of power to tune for hotspots.

Right. I agree I think there is a lot we can do there. And with JDO, I think we have a lot more power than, for example, entity beans do--because we can know exactly what is changed when and so have far more information about what we can cache and such.

> Interfaces between the various components would also need to be clearly established fairly early, such as: what API does the query engine use for accessing the data store? There is some serious abstraction work to be done.
>
> Another key point - the APIs for configuring the mapping and caching, etc., should all be documented and exposed, even though they are proprietary to this project. If someone wants to configure the O/R with an XML fragment from somewhere else, let them....
>
> Also, every module should have a basic todo list, and I would recommend module leads.
>
> Ok... so, I am not even a member of this project, I know. But I want to see it succeed. I am open to all comments, flames, etc.

Would you like me to add you? If so, what's your sourceforge id?

I greatly appreciate your input.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Andy L. <aj...@as...> - 2001-11-09 15:43:30
A couple of comments on this - I think there is a good bit more parallel activity that can occur.

I would break it into parallel sub-teams:

1 - Core Team - this would be for the PersistenceManager and Factory, StateManager, and the JDOHelper class
2 - Bytecode Enhancer - command line and Java API, and then class loader
3 - Query Engine - this will probably be a longer development cycle, start early
4 - File Store - this is the simplest pluggable backend module
5 - Relational Store - this can actually be deferred a bit, but will be of more value in the end than the File Store
6 - Mapper - impedance handling - O/R, hierarchical, etc.
7 - Cache Manager - I break this out now because even though a simple implementation will work, a tunable cache later will be key for performance

Basic dependencies between teams:

The Core can be fully developed and tested without the enhancer, using hand-coded PersistenceAware classes, which is a spec'd feature anyway, and will be much cleaner for testing and debugging. This is where the spec compliance is most critical. By decoupling it from the rest of the components, you should be able to reach lifecycle compliance much quicker.

The bytecode enhancer can also be done independently using a simple test harness for the resulting classes to verify functionality. This is also a good idea anyway for testing and debugging. Although a critical part of the spec, this is really a developer convenience to avoid hand-coding the interface implementation and JDOHelper calls in every class. It may even be worth considering a tool that would impose the implementation on a class SOURCE file, allowing a developer who needs a hand-coded instance to customize from there.

The query engine effort has a huge amount of work to do before anything useful can even be done with the other components.

The File and Data stores should have a fairly simple job - basic I/O. This is also an area where pluggability pays off the most. Later there can be data stores for LDAP, XML, optimizations for specific RDBMSs, as well as possible abstractions for things like CVS or DAV. It might even be worth looking at some of the VFS concepts from the NetBeans codebase.

The Mapper will initially only have to worry about object/relational - but what about when you want to map an object graph to XML, which is instead hierarchical? This is a piece where a lot of work has been done to draw upon (Castor, JBoss JAWS, etc.), some standards exist (ODMG), and there are some interesting commercial references (VBSF, Kodo). IMHO, this is where the project sinks or swims.

The cache can be initially stubbed in to always be empty - doesn't get any easier than that. In the future, however, there are immense possibilities. People often seem to think that a basic weak-reference cache is all you can do - not true. You can tune caching parameters for how long, or on what terms, an object stays after references expire, how often to check for optimistic locking conflicts, etc. You can also establish separate caches and cache policies for different objects. This is the real power. Small data sets, for example, that are regularly used and rarely updated can cache longer and check infrequently. This adds a lot of power to tune for hotspots. (A rough sketch of per-class cache policies follows this message.)

Interfaces between the various components would also need to be clearly established fairly early, such as: what API does the query engine use for accessing the data store? There is some serious abstraction work to be done.

Another key point - the APIs for configuring the mapping and caching, etc., should all be documented and exposed, even though they are proprietary to this project. If someone wants to configure the O/R with an XML fragment from somewhere else, let them....

Also, every module should have a basic todo list, and I would recommend module leads.

Ok... so, I am not even a member of this project, I know. But I want to see it succeed. I am open to all comments, flames, etc.

thanks...

Andy Lewis

Joel Shellman wrote:

> Sorry again for the delay. Here is a very rough idea of a roadmap up until version 1.0. Please comment. After version 0.3.0 we might have separate timelines for the O/R and file store versions--they can each progress along the following versions at their own pace. As they will be written as pluggable modules, this should work fine. So we could have a 1.0.0 file store version completed before the 1.0.0 O/R mapping was completed. Care of course will have to be taken during updates to the common code base.
>
> 0.0.1: Let's get a build system set up, technologies generally agreed on (and which codebases to begin with for those that already exist that we can borrow from), and spec classes stubbed out.
>
> 0.1.0: The first meaningful package that will compile and do something. Let's have the general architecture sketched out with classes. Things can compile and build against this and act just as if they were using JDO... but nothing will persist or actually do anything.
>
> 0.2.0: Bytecode enhancer done (or at least sufficient to be meaningful) and classloader implemented to use the enhancer (using config files). Still nothing persists, but calls start actually getting farther than stubs--maybe as far as the StateManager.
>
> 0.3.0: StateManager and PersistenceManager are completed and call through to persistence store stubs.
>
> Parallel development for O/R mapping and file store persistence begins in earnest after version 0.3.0.
>
> 0.4.0: Persistence store classes written, but not necessarily hooked up to storage yet.
>
> 0.5.0: Basic persistent connector completed. Objects can now be made persistent.
>
> 0.5.5: Enable dynamically making things persistent by running the entire app under a special classloader.
>
> 0.6.0: Ensure transactional integrity and the object lifecycle work properly, robustly, and in a spec-compliant way.
>
> 0.7.0: Query engine started--functional for the most basic uses.
>
> 0.8.0: Query engine mostly completed.
>
> 0.9.0: Mostly workable version; only minor features might be missing that will still go into 1.0.0. Mostly switch into bugfix/testing mode at this point until release.
>
> Version 1.0 will be a solid working version, though the focus will be on getting it functional and usable--not necessarily spec complete. This version should be something that companies could then take and use.
>
> Version 2.0 will be where we build in higher enterprise-class functions and scalability and be spec complete.
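As referenced in the cache discussion above, here is a rough sketch of what per-class cache policies could look like. Every name and number is hypothetical; the point is only that lifetime and conflict-check interval become per-class tuning knobs rather than global constants:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical per-class cache tuning: small, rarely updated data sets get
    // long lifetimes and infrequent conflict checks; hot, volatile classes get
    // the opposite. Nothing here is real Sparrow code; it only shows the shape.
    public class CachePolicy {
        private final long maxLifetimeMillis;           // how long an instance may stay cached
        private final long conflictCheckIntervalMillis; // how often to re-check optimistic locks

        public CachePolicy(long maxLifetimeMillis, long conflictCheckIntervalMillis) {
            this.maxLifetimeMillis = maxLifetimeMillis;
            this.conflictCheckIntervalMillis = conflictCheckIntervalMillis;
        }

        public long getMaxLifetimeMillis() { return maxLifetimeMillis; }
        public long getConflictCheckIntervalMillis() { return conflictCheckIntervalMillis; }
    }

    class CacheManager {
        private final Map policies = new HashMap();     // Class -> CachePolicy
        private final CachePolicy defaultPolicy = new CachePolicy(60 * 1000L, 10 * 1000L);

        // e.g. registerPolicy(CountryCode.class, new CachePolicy(24*3600*1000L, 3600*1000L));
        public void registerPolicy(Class persistentClass, CachePolicy policy) {
            policies.put(persistentClass, policy);
        }

        public CachePolicy policyFor(Class persistentClass) {
            CachePolicy policy = (CachePolicy) policies.get(persistentClass);
            return policy != null ? policy : defaultPolicy;
        }
    }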
From: Joel S. <jo...@ik...> - 2001-11-09 07:48:23
Sorry again for the delay. Here is a very rough idea of a roadmap up until version 1.0. Please comment. After version 0.3.0 we might have separate timelines for the O/R and file store versions--they can each progress along the following versions at their own pace. As they will be written as pluggable modules, this should work fine. So we could have a 1.0.0 file store version completed before the 1.0.0 O/R mapping was completed. Care of course will have to be taken during updates to the common code base.

0.0.1: Let's get a build system set up, technologies generally agreed on (and which codebases to begin with for those that already exist that we can borrow from), and spec classes stubbed out.

0.1.0: The first meaningful package that will compile and do something. Let's have the general architecture sketched out with classes. Things can compile and build against this and act just as if they were using JDO... but nothing will persist or actually do anything.

0.2.0: Bytecode enhancer done (or at least sufficient to be meaningful) and classloader implemented to use the enhancer (using config files). Still nothing persists, but calls start actually getting farther than stubs--maybe as far as the StateManager.

0.3.0: StateManager and PersistenceManager are completed and call through to persistence store stubs.

Parallel development for O/R mapping and file store persistence begins in earnest after version 0.3.0.

0.4.0: Persistence store classes written, but not necessarily hooked up to storage yet.

0.5.0: Basic persistent connector completed. Objects can now be made persistent.

0.5.5: Enable dynamically making things persistent by running the entire app under a special classloader.

0.6.0: Ensure transactional integrity and the object lifecycle work properly, robustly, and in a spec-compliant way.

0.7.0: Query engine started--functional for the most basic uses.

0.8.0: Query engine mostly completed.

0.9.0: Mostly workable version; only minor features might be missing that will still go into 1.0.0. Mostly switch into bugfix/testing mode at this point until release.

Version 1.0 will be a solid working version, though the focus will be on getting it functional and usable--not necessarily spec complete. This version should be something that companies could then take and use.

Version 2.0 will be where we build in higher enterprise-class functions and scalability and be spec complete.

--
Joel Shellman
iKestrel, Inc.
http://www.ikestrel.com/
From: Andy L. <aj...@as...> - 2001-10-27 13:08:08
Joel Shellman wrote:

> Sparrow DB's design and development goals (not prioritized at this point):
>
> ) In general: small (in footprint and resource usage), fast, and [...]

Definitely.

> ) Ease/Transparency of use. I want to make the typical use require little or no configuration whatsoever. Someone creates an object and it's persistent. That's it--no other work. I realize there might have to be something marking it as persistent somewhere, but maybe I can get around that as well. Have to find out. I did some thinking about that a while ago (thinking classloader trickery) but wasn't quite able to come up with it yet.

A certain amount of this will be a challenge given that the spec itself defines the mapping file. Tools for automatically producing or updating the files will help down this path, though. As far as the classloader stuff goes, it should be 100% possible to have a classloader implementation of the bytecode enhancer.

> ) Everything (within reason and usefulness) tunable. The idea of course [...]

Hear, hear! One of the best systems I have seen for tuning is Sybase System 11 (and up) - OK, maybe it lacks usability in the tuning, but the power is incredible. Caching strategies and sizes, locking parameters, transaction isolation in the cache, etc. There are a ton of things that can be tuned.

> Also, as much as possible tunable at runtime. It's an extreme nuisance (inefficient, tedious) to have to restart a server just to retune and tweak it.

Agreed.

> ) Initial primary audience: single-server web applications running inside a servlet container. Obviously keeping in mind the wide range of other uses, but we need to start somewhere with a specific audience in mind to help keep us thinking about the real-world issues that will come up.

Being in that initial target audience, I am good with that :)

> ) Clustering and such is definitely planned and should be kept in our minds during development, but will not be done in the initial release.

Some implementation models for clustered applications will still be possible even without specific feature support in the JDO package (beyond good locking strategies, that is).

> ) Optimize for response time. This might be a controversial issue, but I [...]

Cool.

> ) Modularity: as much as possible have different parts of it easily pluggable, able to be developed/tested separately. It would be preferable to be able to swap modules in and out at runtime. I really don't like having to restart servers. I have long had a goal to write an infrastructure that could swap modules in and out to the point that you could completely upgrade the entire thing without ever restarting it--so you would never have to restart the JVM--ever. It's a lofty goal, but if we could achieve it, it would be extremely useful, especially in web situations where you never ever want downtime. I know there are a lot of classloader issues and probably some security issues involved here, but if we can't do the ideal, we'll come as close as possible.

Absolutely - there is a lot that can be done here with classloaders to make this work, and a great deal of example code as well.

> ) Development methodology: I lean toward some of the ideals of eXtreme Programming and generally will follow those ideas in development, though I do generally prefer to design a little farther ahead than XP recommends, I think (though I am no XP expert).

I must admit that XP doesn't strike me as a revolutionary approach - more a refinement of some of the early RAD models with a really cool name. Most of the larger, fully structured lifecycles seem to be bent on making every programmer equal and in doing so equate them all to the least common denominator. XP is a much better starting point, I think, though some of the concepts I still question.
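For the classloader implementation of the enhancer mentioned above, the basic shape is a loader that runs class bytes through the enhancer before defining the class. A rough sketch, assuming the hypothetical ClassEnhancer facade (byte[] enhance(String, byte[])) discussed elsewhere in this thread; a real version would also override loadClass() so persistence-capable classes are not delegated to the parent loader first:

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Illustrative only: enhances classes on the fly as they are loaded,
    // so no separate build-time enhancement step is strictly required.
    public class EnhancingClassLoader extends ClassLoader {
        private final ClassEnhancer enhancer;   // hypothetical facade over the bytecode library

        public EnhancingClassLoader(ClassLoader parent, ClassEnhancer enhancer) {
            super(parent);
            this.enhancer = enhancer;
        }

        protected Class findClass(String name) throws ClassNotFoundException {
            String resource = name.replace('.', '/') + ".class";
            InputStream in = getParent().getResourceAsStream(resource);
            if (in == null) {
                throw new ClassNotFoundException(name);
            }
            try {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buffer = new byte[4096];
                for (int n; (n = in.read(buffer)) != -1; ) {
                    out.write(buffer, 0, n);
                }
                byte[] enhanced = enhancer.enhance(name, out.toByteArray());
                return defineClass(name, enhanced, 0, enhanced.length);
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }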
From: Andy L. <aj...@as...> - 2001-10-27 12:56:25
Joel Shellman wrote:

> > Let me ask another long-term question - is the objective of Sparrow to be a quality OSS JDO implementation? Or is it, as is the attitude with the JBoss group for example, intended to be better than its commercial competitors as well?
>
> That's an interesting question... I hadn't really looked at it from a competition standpoint. My personal goal is to create the "best possible software" (how's that for ambitious?). The result of course is that it would most likely be "better than its commercial competitors". Also, considering that I envision iKestrel, Inc. providing commercial support for Sparrow, there will be a significant incentive to be better than competitors from a business standpoint as well.
>
> So... if your question is, are we here just to make an OSS JDO impl because there isn't one yet? No. We're here to make a JDO impl that is going to be used seriously.
>
> -joel

I like that answer. One of the things about the JBoss community that I appreciate the most is their unwillingness to be second best. They firmly believe that OSS can outperform the "corporate" model for what they are working on, and are holding themselves to a feature and quality level that is in line with that. The key point was, of course, that this is something meant to actually be used.
From: Joel S. <jo...@ik...> - 2001-10-27 07:16:49
I'm sorry this is several days late. Things just came up and swamped me this week. Plus, writer's block of course took its toll. This is the very rough draft of my ideas, which can be formalized in the future with all of your input. Here's a start on goals; I'll try to get a rough roadmap submitted as well.

Sparrow DB's design and development goals (not prioritized at this point):

) In general: small (in footprint and resource usage), fast, and flexible. "Slick" and "sleek" are the feelings I want users to have: the "wow, this is cool" and the "wow, this is fast". Hopefully both within the first 5 minutes of using it (see: ease/transparency of use).

) Ease/Transparency of use. I want to make the typical use require little or no configuration whatsoever. Someone creates an object and it's persistent. That's it--no other work. I realize there might have to be something marking it as persistent somewhere, but maybe I can get around that as well. Have to find out. I did some thinking about that a while ago (thinking classloader trickery) but wasn't quite able to come up with it yet.

) Everything (within reason and usefulness) tunable. The idea of course is that the defaults should be fine for at least our initial primary audience, and for as wide a range of applications as possible. Part of the reasoning for this is that it actually can make development easier--i.e., if in doubt, make it configurable so you can easily change it later. Also, as much as possible tunable at runtime. It's an extreme nuisance (inefficient, tedious) to have to restart a server just to retune and tweak it.

) Initial primary audience: single-server web applications running inside a servlet container. Obviously keeping in mind the wide range of other uses, but we need to start somewhere with a specific audience in mind to help keep us thinking about the real-world issues that will come up.

) Clustering and such is definitely planned and should be kept in our minds during development, but will not be done in the initial release.

) Optimize for response time. This might be a controversial issue, but I would like to see response time optimized for, as opposed to throughput, initially. Of course these aren't mutually exclusive, but sometimes issues might come up where we have to choose. And hopefully we will have sufficient tuning capabilities eventually to switch between optimizing for response time and optimizing for throughput. I just see that the initial primary audience I mention above is going to be most interested in fast response time.

) Modularity: as much as possible have different parts of it easily pluggable, able to be developed/tested separately. It would be preferable to be able to swap modules in and out at runtime. I really don't like having to restart servers. I have long had a goal to write an infrastructure that could swap modules in and out to the point that you could completely upgrade the entire thing without ever restarting it--so you would never have to restart the JVM--ever. It's a lofty goal, but if we could achieve it, it would be extremely useful, especially in web situations where you never ever want downtime. I know there are a lot of classloader issues and probably some security issues involved here, but if we can't do the ideal, we'll come as close as possible. (A rough sketch of the classloader side of this follows this message.)

) Development methodology: I lean toward some of the ideals of eXtreme Programming and generally will follow those ideas in development, though I do generally prefer to design a little farther ahead than XP recommends, I think (though I am no XP expert).
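For the runtime module-swapping goal above, one classloader-based sketch: each pluggable module implements an interface loaded by the main classloader, while the implementation itself is loaded from its own jar through a dedicated URLClassLoader. The ModuleLoader name and the single-jar assumption are illustrative only, not a committed design:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class ModuleLoader {
        // Loads a module implementation from its own jar. Swapping the module
        // at runtime means calling this again with a new jar; the old loader
        // and its classes become collectable once nothing references them.
        public static Object loadModule(URL moduleJar, String implClassName) throws Exception {
            URLClassLoader loader = new URLClassLoader(new URL[] { moduleJar },
                    ModuleLoader.class.getClassLoader());
            Class implClass = loader.loadClass(implClassName);
            return implClass.newInstance();   // caller casts to the module's interface
        }
    }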
From: Joel S. <jo...@ik...> - 2001-10-23 05:05:03
On Mon, 2001-10-22 at 21:41, Andy Lewis wrote:

> That actually helped a great deal - I hadn't seen that before. On the theory that you are open to suggestions, of course: I was working towards decoupling the primary components. I admit that my objectives were a bit aggressive, but not unachievable. I had segmented out the back-end repository, the O/R mapper, the cache manager, and the transaction manager as separate pluggable modules. Once the core JDO system was in place and stable, each of these pieces could then be independently enhanced, improved, or replaced, without impacting the rest. I was also working towards a bytecode enhancer implementation that was embeddable in a classloader.

You took the words right out of my mouth.

> Let me ask another long-term question - is the objective of Sparrow to be a quality OSS JDO implementation? Or is it, as is the attitude with the JBoss group for example, intended to be better than its commercial competitors as well?

That's an interesting question... I hadn't really looked at it from a competition standpoint. My personal goal is to create the "best possible software" (how's that for ambitious?). The result of course is that it would most likely be "better than its commercial competitors". Also, considering that I envision iKestrel, Inc. providing commercial support for Sparrow, there will be a significant incentive to be better than competitors from a business standpoint as well.

So... if your question is, are we here just to make an OSS JDO impl because there isn't one yet? No. We're here to make a JDO impl that is going to be used seriously.

-joel