objectbridge-developers Mailing List for ObJectRelationalBridge (Page 47)
Brought to you by: thma
From: Bischof, R. <rai...@ed...> - 2001-11-19 22:09:38
All,

I just had a closer look at the locking stuff as it is implemented right now. The only class that makes locks persistent is the AbstractLockStrategy. Unfortunately all LockStrategies inherit from it, which limits re-use of the useful code in and around the LockStrategies.

What about the following approach for the new in-memory locking:

* We extract those parts of the AbstractLockStrategy that store / retrieve the persistent locks. These parts go into a PersistentLockServiceImpl which implements a LockService interface. We create a similar class InMemoryLockServiceImpl that does the same without accessing the DB. (Does anybody have a better name for the last class?)
* We change the AbstractLockStrategy not to perform DB access itself but to use an implementation of the LockService interface for these activities.
* The LockStrategyFactory configures the LockStrategies it creates to use the appropriate LockService implementation (persistent / in-memory).

That way we can re-use the complete LockMgrDefaultImpl, LockEntry, LockStrategyFactory, the different LockStrategies and their configuration. Most of the logic in AbstractLockStrategy can be re-used in the PersistentLockServiceImpl. We should not have any API changes at all.

It seems to be a clean approach. Have I missed something? Looks just too easy - I'm sure I missed something... Any comments?

Thomas, is this similar to your intended approach or do you have another idea? I guess it's not that much work for you to provide me a method "String getLockServiceClass()" in PersistenceBrokerFactoryConfiguration, right? That way I only have one area to work on and don't have to find my way through the configuration stuff as well. Or do we need different locking approaches for different classes? Then I have no idea how to get that information per class, as I have not really looked into the XML configuration classes... ;-)

Regarding the timing: I hope to find some time for this at the end of next week, so it might have a chance to go into the next release if it passes the unit tests.

Thx
rb

Rainer Bischof
EDS - Electronic Data Systems
European Automotive Solution Center - Distributed Solutions
Email: rai...@ed...

-----Original Message-----
From: Thomas Mahler [mailto:tho...@ho...]
Sent: Monday, 19 November 2001 12:56
To: Bischof, Rainer
Cc: obj...@li...
Subject: Re: [OJB-developers] features

Hi Rainer,

> I don't know if you have been working on this: What about the single VM
> LockManager?

Yes, I just started working on it yesterday. I used the well established Interface/ConfigurableFactory pattern to make it pluggable.

> If you have not started I will try my best to implement it. I need it
> sometime in December as the LockMgr is the main performance bottleneck for
> us...
> If you have already started but have other priorities you could probably
> provide me your stuff and I will continue...

Oh, that will speed up things a lot! I will clean up my code a bit and then post you a current snapshot!

> Btw: I have rarely seen a project with a project lead as responsive as you!
> Keep up the good work.

Thank you, these compliments speed up things even more, as they result in better motivation of the "OJB bottleneck" ;-)

-- Thomas
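[Editor's sketch] The split Rainer proposes - persistence-aware code hidden behind a LockService interface, with persistent and in-memory implementations - could look roughly like the following. The method signatures are illustrative guesses, not OJB's actual API; only the names LockService, PersistentLockServiceImpl and InMemoryLockServiceImpl come from the mail, and the sketch uses modern java.util.concurrent for brevity.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the proposed LockService split; method names
// are illustrative, not taken from the actual OJB codebase.
interface LockService {
    boolean addLock(Object objectId, String txId);
    boolean removeLock(Object objectId, String txId);
    String getLockOwner(Object objectId);
}

// In-memory variant: keeps locks in a map instead of a DB table.
// A PersistentLockServiceImpl would implement the same interface
// with INSERT/DELETE statements against the lock table.
class InMemoryLockServiceImpl implements LockService {
    private final Map<Object, String> locks = new ConcurrentHashMap<>();

    public boolean addLock(Object objectId, String txId) {
        // putIfAbsent returns null only when no lock existed before
        return locks.putIfAbsent(objectId, txId) == null;
    }

    public boolean removeLock(Object objectId, String txId) {
        // two-arg remove only removes when the lock is held by this tx
        return locks.remove(objectId, txId);
    }

    public String getLockOwner(Object objectId) {
        return locks.get(objectId);
    }
}

public class LockServiceDemo {
    public static void main(String[] args) {
        LockService service = new InMemoryLockServiceImpl();
        System.out.println(service.addLock("Article#42", "tx1"));    // true: lock acquired
        System.out.println(service.addLock("Article#42", "tx2"));    // false: already locked by tx1
        System.out.println(service.removeLock("Article#42", "tx1")); // true: tx1 releases
        System.out.println(service.addLock("Article#42", "tx2"));    // true: free again
    }
}
```

With this shape, the AbstractLockStrategy would only ever talk to the LockService interface, exactly as the mail suggests, and the factory picks which implementation to inject.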
From: Thomas M. <tho...@ho...> - 2001-11-19 20:09:45
Hi Rainer,

> I don't know if you have been working on this: What about the single VM
> LockManager?

Yes, I just started working on it yesterday. I used the well established Interface/ConfigurableFactory pattern to make it pluggable.

> If you have not started I will try my best to implement it. I need it
> sometime in December as the LockMgr is the main performance bottleneck for
> us...
> If you have already started but have other priorities you could probably
> provide me your stuff and I will continue...

Oh, that will speed up things a lot! I will clean up my code a bit and then post you a current snapshot!

> Btw: I have rarely seen a project with a project lead as responsive as you!
> Keep up the good work.

Thank you, these compliments speed up things even more, as they result in better motivation of the "OJB bottleneck" ;-)

-- Thomas
From: Bischof, R. <rai...@ed...> - 2001-11-19 19:31:37
Hi Thomas,

<snip>
> Path expressions are an important feature and I hope to start on them
> after m:n and automatic foreign key assignment. So maybe January 2002 is
> a good estimate.
>
> For the time being OJB supports a direct SQL interface that allows
> Object loading based on arbitrary SQL statements.

Works pretty well, especially if you use the ClassDescriptor to get the field and table names, to be independent of schema modifications.

> OJB is based on the contributions of volunteers and of my efforts to keep
> everything in sync and to maintain the overall design. If there are no
> contributions coming in, I have to implement things on my own.
> That's OK for me, but I'm obviously the OJB "bottleneck".
> Once I start to commit myself to certain deadlines the pressure on this
> bottleneck grows!
>
> But OJB should be fun to work on and not be stress!
>
> Of course people intending to use OJB need some reliability on when they
> can use the tool for certain features.

I don't know if you have been working on this: What about the single VM LockManager? If you have not started I will try my best to implement it. I need it sometime in December as the LockMgr is the main performance bottleneck for us... If you have already started but have other priorities you could probably provide me your stuff and I will continue...

Btw: I have rarely seen a project with a project lead as responsive as you! Keep up the good work.

Thx

Rainer Bischof
EDS - Electronic Data Systems
European Automotive Solution Center - Distributed Solutions
Email: rai...@ed...
From: Thomas M. <tho...@ho...> - 2001-11-19 18:36:39
Hi Georg,

thanks for your interest in OJB!

Georg Schneider wrote:
> Hi,
> I am working on a project where we have been using Castor so far.
> Unfortunately polymorphism is not very high on Castor's development agenda,
> and since I also regard this as one of the basic features of any
> object-oriented design effort I was very pleased to find it in OJB. I have
> been looking through the OJB documentation and I was quite impressed with
> the clear design. I have also taken a look at your TODO list and I was
> wondering if you could give a rough time estimate for the inclusion of the
> following features:
> - m:n relations

I will start working on support for m:n relations this week. When there are no other priority 1 tasks interfering, there should be something running by the 7th of December.

> - path expressions in OQL

Path expressions are an important feature and I hope to start on them after m:n and automatic foreign key assignment. So maybe January 2002 is a good estimate.

For the time being OJB supports a direct SQL interface that allows Object loading based on arbitrary SQL statements.

> I would appreciate very much if you could give me some idea about the time
> horizon.

In general I don't like to make promises on when a certain feature will be implemented. OJB is based on the contributions of volunteers and of my efforts to keep everything in sync and to maintain the overall design. If there are no contributions coming in, I have to implement things on my own. That's OK for me, but I'm obviously the OJB "bottleneck". Once I start to commit myself to certain deadlines, the pressure on this bottleneck grows!

But OJB should be fun to work on and not be stress!

Of course people intending to use OJB need some reliability on when they can use the tool for certain features.

-- Thomas

> thanks in advance
>
> Georg
>
> _______________________________________________
> Objectbridge-developers mailing list
> Obj...@li...
> https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
From: Georg S. <ge...@me...> - 2001-11-19 16:05:11
Hi,

I am working on a project where we have been using Castor so far. Unfortunately polymorphism is not very high on Castor's development agenda, and since I also regard this as one of the basic features of any object-oriented design effort I was very pleased to find it in OJB. I have been looking through the OJB documentation and I was quite impressed with the clear design. I have also taken a look at your TODO list and I was wondering if you could give a rough time estimate for the inclusion of the following features:

- m:n relations
- path expressions in OQL

I would appreciate very much if you could give me some idea about the time horizon.

thanks in advance

Georg
From: Mahler T. <tho...@it...> - 2001-11-14 18:25:13
Hi all,

> -----Original Message-----
> From: Bischof, Rainer [mailto:rai...@ed...]
> Sent: Tuesday, 13 November 2001 22:09
>
> > This cleanup is already on the todo list. I will try to do it without
> > changes to the PersistenceBroker API.
> > I agree that it will be easier to do this myself than to delegate it and
> > to reintegrate the results...
>
> From my point of view API changes are justified if they lead to better
> usability. Especially since these changes only mean throwing Exceptions
> where they haven't been thrown before. I know this leads to code changes
> in existing applications, but that's something we have to live with in
> order to enhance reliability, which should be the main focus. It's just
> that IMHO OJB has limited reliability if it just ignores Exceptions which
> arise somewhere in the framework.

I agree. All I wanted to say is: all PersistenceBroker methods are already declared with throws clauses (PersistenceBrokerException or derived Exceptions). I will try to throw no other exceptions than these if possible, to reduce modifications to client code.

> Another point - next feature request ;-) :
> What about adding a method to the Broker which deletes objects that match
> a query (& again an SQL statement)?
> Right now it's somewhat tedious to cycle through a collection and delete
> all objects by hand. The broker can even optimize this: if the descriptor
> indicates that affected objects have no dependent objects, it can directly
> issue a delete statement without instantiating all objects (it needs to
> clear the cache for this class, though).

OK, I will add this to the to-do list!

Thomas

> Thanks,
> rb
>
> Rainer Bischof
> EDS - Electronic Data Systems
> European Automotive Solution Center - Distributed Solutions
> Email: rai...@ed...
>
> _______________________________________________
> Objectbridge-developers mailing list
> Obj...@li...
> https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
From: Charles A. <cha...@hp...> - 2001-11-14 08:52:12
Hello,

> There will be no problems if different threads are using different
> RowReaders for the same class.
> But: there may be problems if two threads are manipulating a
> ClassDescriptor simultaneously.
> thread 1 says: use RowReaderDefaultImpl,
> thread 2 says: use PagingRowReader
> thread 1 acquires a ResultSetIterator and will be using a
> PagingRowReader although it asked for a default RowReader.
>
> You'll have to implement some additional access management to
> avoid such errors.

Looking at the code, as I understand it - and please correct me if I'm wrong - you could also get this situation:

thread 1 says: do a query on sample.class, and (by default) uses a RowReaderDefaultImpl
thread 1 says: start to iterate on the results
thread 2 says: set sample.class rowreader to PagingRowReader, and do a query
thread 2 says: start to iterate on the results
thread 1 says: get me another object.

I think at this point, thread 1 will get the object using the PagingRowReader - not the RowReaderDefaultImpl. The reason I think this is this line in the RsIterator.getObjectFromResultSet method:

Object result = m_mif.getRowReader().readObjectFrom(m_rs, m_mif);

The result of which is that on each Iterator.next() execution, the RowReader is fetched from the ClassDescriptor. And that could be problematic. As I say, I could be wrong, and if so, please accept my apologies.

> A general remark:
> I'm not quite sure what you want to do with your paging component.
> My concept of a paging display is as follows. [...]

Ah. My requirement is a little more sophisticated than that. I need to be able to present the user with a search form, to build up the query criteria. When the user hits 'search', I need to be able to get the first 'page' of - say - 10 objects. I also need to display a list of page numbers, to enable the user to jump to a specified page. This will be in a web application; I do not want to keep the ResultSet open over more than one HTTP request. It will also be a common requirement in many different searches, which is why it needs to be a generic component.

I have attached a jar file containing my source code so far, containing the following classes:

domain.DebtorMaster: a simple domain class for searching.
pager.PagingReader: a simple rowreader that has a flag that can prevent objects from being materialized.
pager.QueryCount: an interface for getting the number of results from a query.
pager.QueryCountImpl: an implementation of the above.
pager.QueryPager: provides a way of getting the number of pages in a query, and retrieving a specified page. This is quite well javadoc-ed.
test.Main: ignore this - just an experimentation class that I meant to take out of the jar and forgot to.
test.TestQueryCount: a test of the QueryCountImpl object.
test.TestQueryPager: an example (and test) of using the pager.

This is work-in-progress. The result objects are not registered to a transaction. I have commented out the use of the PagingReader, due to the NullPointerException being thrown as I mentioned in my last email (as well as my threading concerns).

Finally, I attach a screen dump of a test screen of this component in our application: hopefully it will make it all a little clearer. (Note that the search criteria are currently ignored; indeed, the search criteria don't even apply to the results displayed! Also, the PageNo and PageSize fields are just for debugging.)

I hope this is of some use to someone, and I would be happy for any of this code to be incorporated into OJB (if it is useful).

Best Wishes,

Charles.

P.S. I hope people don't mind binaries being sent to this list; they're not very big. If you do mind, please accept my apologies.

This email and any attachments are strictly confidential and are intended solely for the addressee. If you are not the intended recipient you must not disclose, forward, copy or take any action in reliance on this message or its attachments. If you have received this email in error please notify the sender as soon as possible and delete it from your computer systems. Any views or opinions presented are solely those of the author and do not necessarily reflect those of HPD Software Limited or its affiliates.

At present the integrity of email across the internet cannot be guaranteed and messages sent via this medium are potentially at risk. All liability is excluded to the extent permitted by law for any claims arising as a result of the use of this medium to transmit information by or to HPD Software Limited or its affiliates.
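[Editor's sketch] The page arithmetic a component like Charles's QueryPager needs - number of pages from a result count, and the first record of a given page - is simple but easy to get wrong at the boundaries. A small illustration (the class and method names are hypothetical; only the math is the point):

```java
// Page arithmetic for a QueryPager-style component; names are
// hypothetical, this only demonstrates the boundary math.
public class PageMathDemo {
    // ceiling division: 25 results at 10 per page -> 3 pages
    static int pageCount(int resultCount, int pageSize) {
        return (resultCount + pageSize - 1) / pageSize;
    }

    // first record index (0-based, inclusive) of a 1-based page number,
    // i.e. how many records to skip before materializing the page
    static int firstRecord(int pageNo, int pageSize) {
        return (pageNo - 1) * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(pageCount(25, 10));  // 3: partial last page still counts
        System.out.println(pageCount(30, 10));  // 3: exact multiple, no empty page
        System.out.println(pageCount(0, 10));   // 0: no results, no pages
        System.out.println(firstRecord(3, 10)); // 20: skip two full pages
    }
}
```

The skip count from firstRecord is exactly what Charles wants to advance past without instantiating objects.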
From: Mahler T. <tho...@it...> - 2001-11-14 08:50:07
Hi all,

> -----Original Message-----
> From: Christian Sell [mailto:chr...@ne...]
>
> I would like to re-emphasize my suggestion to move connection management
> issues out of OJB's domain. There are other tools/infrastructures which
> already make this their focus and do a very good job. Let's focus OJB on
> object/relational mapping.

That's a good argument. I want to support only simple things like obtaining JDBC Connections through a JDBC Driver or requesting Connections from a JNDI lookup. I think it's a good idea to make the ConnectionManager pluggable, to allow implementations that use other mechanisms to obtain JDBC Connections.

> I would even suggest dropping the per-class
> connection mechanism, as it is very problematic without 2 phase commit.

You are right. Some time ago someone suggested implementing 2P commit to make the per-class connection mechanism safe. I answered that I would rather drop the per-class mechanism than struggle with a 2P solution inside OJB. My suggestion: just drop the per-class JdbcDescriptors from the repository DTD. Thus we have a clean API. On the other hand, we may provide the per-class mechanism as a non-documented feature for special application scenarios (e.g. transferring objects between databases).

> anyone?
>
> regards, Christian
>
> msg 1
> ======
> sorry for jumping in (but I guess that's what we're here for). I personally
> don't think ConnectionManager is worth much consideration. IMO connections
> should not be "managed" inside OJB at all. Also, I feel the idea of
> maintaining connection information (optionally) on a per-class level is
> rather questionable.
>
> reason 1:
> management of transactions becomes rather difficult if you have objects
> from classes with different connection specs participating in the same OJB
> transaction. This would require a full 2PC implementation and coordination.
> I don't see OJB addressing this, which can cause major problems.

see my comment above.

> reason 2:
> you are bound to get into conflicts with runtime infrastructures
> (Connection Pools, J2EE containers) which make exactly this (management
> and coordination of external resources) their primary focus. Along this
> line, I haven't found any place where connections are closed in OJB. In a
> pooling environment (any app server) this is a major issue, as connections
> should always be closed (and thus released to the pool) as early as
> possible (i.e. immediately after commit/rollback).

OJB keeps all Connections open to have the prepared statements "hot" and avoid multiple prepares.

> BTW, why is PersistenceBrokerImpl calling setAutoCommit(true/false) all
> the time? Setting it to false once and for all should suffice, shouldn't
> it?

In its early days the PersistenceBroker did not have methods for beginning and committing transactions. It simply used the JDBC autocommit feature. After introducing beginTransaction() and commitTransaction() I had to do all this setAutoCommit(...) switching to allow clients to work with or without PersistenceBroker transactions. If we decide that client applications MUST call beginTransaction() before any other PB API calls, we can drop this switching stuff.

> msg 2
> ======
> I will introduce a cache-configuration element (something like that) under
> the class node in the new repository schema. If there is a class-specific
> element, use that, otherwise use the global default.

That's a good idea!

> IMO it is always good
> to maintain the cache on a per-class basis, as this is an easy and safe
> way to reduce Map lookup operations.

Yup!

Thomas

> _______________________________________________
> Objectbridge-developers mailing list
> Obj...@li...
> https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
From: Bischof, R. <rai...@ed...> - 2001-11-13 21:08:46
Thomas,

> > BTW:
> > I keep running into traps where Exceptions are silently ignored. I
> > guess this should change, but that would lead to lots of API changes as
> > methods have to declare Exceptions that they haven't thrown before.
> > This would also lead to lots of small modifications to many classes,
> > so I guess this can only be done by you as it will be too much work to
> > integrate many small changes into your concurrently changing codebase.
> > Or we need a code-freeze for some time after the next release so that
> > a volunteer can do it.
> >
> > What do you think?
>
> This cleanup is already on the todo list. I will try to do it without
> changes to the PersistenceBroker API.
> I agree that it will be easier to do this myself than to delegate it and
> to reintegrate the results...

From my point of view API changes are justified if they lead to better usability. Especially since these changes only mean throwing Exceptions where they haven't been thrown before. I know this leads to code changes in existing applications, but that's something we have to live with in order to enhance reliability, which should be the main focus. It's just that IMHO OJB has limited reliability if it just ignores Exceptions which arise somewhere in the framework.

Another point - next feature request ;-) :
What about adding a method to the Broker which deletes objects that match a query (& again an SQL statement)? Right now it's somewhat tedious to cycle through a collection and delete all objects by hand. The broker can even optimize this: if the descriptor indicates that affected objects have no dependent objects, it can directly issue a delete statement without instantiating all objects (it needs to clear the cache for this class, though).

Thanks,
rb

Rainer Bischof
EDS - Electronic Data Systems
European Automotive Solution Center - Distributed Solutions
Email: rai...@ed...
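[Editor's sketch] The delete-by-query semantics Rainer requests, including the no-dependent-objects optimization, can be modeled with a toy in-memory broker. The types below are simplified stand-ins, not OJB's real PersistenceBroker, Query or ClassDescriptor API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Toy model of a "delete all objects matching a query" broker method.
// All types are simplified stand-ins for OJB's real API.
class ToyBroker<T> {
    private final List<T> table = new ArrayList<>();  // stands in for a DB table
    private final List<T> cache = new ArrayList<>();  // stands in for the object cache
    private final boolean hasDependentObjects;        // would come from the class descriptor

    ToyBroker(boolean hasDependentObjects) {
        this.hasDependentObjects = hasDependentObjects;
    }

    void store(T obj) { table.add(obj); cache.add(obj); }

    // The optimization Rainer describes: with no dependent objects,
    // issue one bulk delete and clear the cache, instead of
    // materializing and deleting each matching object individually.
    int deleteByQuery(Predicate<T> query) {
        int before = table.size();
        if (hasDependentObjects) {
            // must materialize each match to cascade the delete
            List<T> matches = new ArrayList<>();
            for (T obj : table) if (query.test(obj)) matches.add(obj);
            for (T obj : matches) { table.remove(obj); cache.remove(obj); }
        } else {
            table.removeIf(query); // "DELETE FROM ... WHERE ..." in one statement
            cache.clear();         // cache for this class must be invalidated
        }
        return before - table.size();
    }

    int size() { return table.size(); }
}

public class DeleteByQueryDemo {
    public static void main(String[] args) {
        ToyBroker<String> broker = new ToyBroker<>(false);
        broker.store("keep-1");
        broker.store("drop-1");
        broker.store("drop-2");
        System.out.println(broker.deleteByQuery(s -> s.startsWith("drop"))); // 2 deleted
        System.out.println(broker.size());                                   // 1 remains
    }
}
```

The cache.clear() in the bulk branch is the price of the optimization the mail mentions: the broker cannot know which cached instances the WHERE clause hit.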
From: Christian S. <chr...@ne...> - 2001-11-13 20:51:51
Hello all, and hello Thomas in particular.

At the end of this message I have included 2 previous messages which I accidentally sent to Rainer Bischof only, instead of to the whole list. Rainer replied to them earlier and cc'ed the list, but I have included them just in case.

I would like to re-emphasize my suggestion to move connection management issues out of OJB's domain. There are other tools/infrastructures which already make this their focus and do a very good job. Let's focus OJB on object/relational mapping. I would even suggest dropping the per-class connection mechanism, as it is very problematic without 2 phase commit.

anyone?

regards, Christian

msg 1
======
sorry for jumping in (but I guess that's what we're here for). I personally don't think ConnectionManager is worth much consideration. IMO connections should not be "managed" inside OJB at all. Also, I feel the idea of maintaining connection information (optionally) on a per-class level is rather questionable.

reason 1:
management of transactions becomes rather difficult if you have objects from classes with different connection specs participating in the same OJB transaction. This would require a full 2PC implementation and coordination. I don't see OJB addressing this, which can cause major problems.

reason 2:
you are bound to get into conflicts with runtime infrastructures (Connection Pools, J2EE containers) which make exactly this (management and coordination of external resources) their primary focus. Along this line, I haven't found any place where connections are closed in OJB. In a pooling environment (any app server) this is a major issue, as connections should always be closed (and thus released to the pool) as early as possible (i.e. immediately after commit/rollback).

BTW, why is PersistenceBrokerImpl calling setAutoCommit(true/false) all the time? Setting it to false once and for all should suffice, shouldn't it?

msg 2
======
I will introduce a cache-configuration element (something like that) under the class node in the new repository schema. If there is a class-specific element, use that, otherwise use the global default. IMO it is always good to maintain the cache on a per-class basis, as this is an easy and safe way to reduce Map lookup operations.
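[Editor's sketch] The class-specific-or-global-default lookup Christian describes for the cache configuration reduces to a single map lookup with a fallback. A minimal illustration (the element and method names are made up, not the actual repository schema; PermanentObjectCacheImpl is the cache class mentioned elsewhere in this thread):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of "use the class-specific cache setting if present,
// otherwise fall back to the global default". Names are invented for the
// example; only PermanentObjectCacheImpl appears in the thread itself.
public class CacheConfigDemo {
    private final Map<String, String> perClassCache = new HashMap<>();
    private final String globalDefault;

    CacheConfigDemo(String globalDefault) {
        this.globalDefault = globalDefault;
    }

    void setCacheForClass(String className, String cacheImpl) {
        perClassCache.put(className, cacheImpl);
    }

    // one Map lookup per class, as Christian suggests
    String cacheImplFor(String className) {
        return perClassCache.getOrDefault(className, globalDefault);
    }

    public static void main(String[] args) {
        CacheConfigDemo config = new CacheConfigDemo("ObjectCacheDefaultImpl");
        config.setCacheForClass("DebtorMaster", "PermanentObjectCacheImpl");
        System.out.println(config.cacheImplFor("DebtorMaster"));   // class-specific
        System.out.println(config.cacheImplFor("SomeOtherClass")); // global default
    }
}
```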
From: Thomas M. <tho...@ho...> - 2001-11-13 18:12:13
Hi Rainer,

"Bischof, Rainer" wrote:
> Hi Thomas,
> I found a superfluous Exception: ojb.broker.query.CriteriaException.
> It is only declared to be thrown by Criteria.addOrCriteria() but it is
> never thrown at all. It should be removed, I guess.
> This leads to some simple modifications in DListImpl, DSetImpl and
> OJBSearchFilter. Please find attached all affected sources.

Thanks, I will remove it!

> BTW:
> I keep running into traps where Exceptions are silently ignored. I
> guess this should change, but that would lead to lots of API changes as
> methods have to declare Exceptions that they haven't thrown before.
> This would also lead to lots of small modifications to many classes,
> so I guess this can only be done by you as it will be too much work to
> integrate many small changes into your concurrently changing codebase.
> Or we need a code-freeze for some time after the next release so that
> a volunteer can do it.
>
> What do you think?

This cleanup is already on the todo list. I will try to do it without changes to the PersistenceBroker API. I agree that it will be easier to do this myself than to delegate it and to reintegrate the results... So, I just need some spare time... If there are urgent things that are not working, I can provide you with quick bug fixes.

Thomas

> Rainer Bischof
> EDS - Electronic Data Systems
> European Automotive Solution Center - Distributed Solutions
> Email: rai...@ed...
>
> Attachments: Criteria.java, OJBSearchFilter.java, DSetImpl.java,
> DListImpl.java (application/octet-stream, quoted-printable)
From: Mahler T. <tho...@it...> - 2001-11-13 17:02:47
Hi Charles,

> 1. When my RowReader returns null,
> RsIterator.getObjectFromResultSet still
> tries to create an Identity, look it up in the cache etc. Creating an
> Identity from a null reference results in a NullPointerException. This is
> easily fixable, I believe, just by changing the appropriate part of the
> method to
>
> Object result = m_mif.getRowReader().readObjectFrom(m_rs, m_mif);
> if (result != null) {
>     ...
> }

I will add this to the todo list!

> 2. I'm a little concerned about the ClassDescriptor.setRowReader in a
> multi-threaded environment. What would happen if one thread
> was using a RowReaderDefaultImpl, and another was using my
> PagingRowReader - on the same class? I think that would cause problems.

There will be no problems if different threads are using different RowReaders for the same class.
But: there may be problems if two threads are manipulating a ClassDescriptor simultaneously.
thread 1 says: use RowReaderDefaultImpl,
thread 2 says: use PagingRowReader
thread 1 acquires a ResultSetIterator and will be using a PagingRowReader although it asked for a default RowReader.

You'll have to implement some additional access management to avoid such errors.

A general remark:
I'm not quite sure what you want to do with your paging component. My concept of a paging display is as follows. You have a ResultSetIterator which may produce any number of Objects. You don't want to display all those objects immediately, but in pages of, say, 20 objects. The paging component should support forward and backward navigation ("show last 20 items", "show next 20 items"). I would implement this as follows:

public class Pager {
    private ResultSetIterator iter;
    private Vector items = new Vector();
    private int offset = 0;
    private int pagesize;

    public Pager(ResultSetIterator i, int size) {
        iter = i;
        pagesize = size;
    }

    public Collection getNextPage() {
        Collection page = new Vector();
        for (int i = 0; i < pagesize; i++) {
            page.add(iter.next());
            offset++;
        }
        items.addAll(page);
        return page;
    }

    public Collection getPreviousPage() {
        Collection page = new Vector();
        int oldoffset = offset;
        offset = offset - pagesize;
        if (offset < 0) {
            offset = 0;
        }
        for (int i = offset; i < oldoffset; i++) {
            page.add(items.get(i));
        }
        return page;
    }
}

cheers,
Thomas

> Thanks, and sorry to keep bothering you,
>
> Charles.
From: Bischof, R. <rai...@ed...> - 2001-11-12 20:43:04
Christian,

re 1: OJB does manage multiple connections to different DBs, but it does not implement a full 2PC. This of course may introduce inconsistencies in the DBs if OJB runs into trouble when committing a transaction on multiple DBs.

re 2: I also see that OJB uses only simple caching for connections and does not close connections (at least I haven't found that either). It keeps the connection open all the time. For me not a big deal, but in other environments (e.g. a public Internet site) this might be a problem.

I personally would also favour managing connections outside of OJB, but I guess Thomas intended this as a feature to map different objects to different DBs. I think if we can switch the internal connection caching off, to have the ConnectionFactory called for each new transaction, it would be OK. If the ConnectionFactory is enhanced to not only create connections but also close them, you have full control over the connections.

Btw: You wrote this mail to me personally, although I guess it was intended for the dev-list. The list-mgr is somewhat strange as you can't just hit the reply-button ;-)

rb

-----Original Message-----
From: Christian Sell [mailto:chr...@ne...]
Sent: Monday, 12 November 2001 19:53
To: Bischof, Rainer
Subject: Re: [OJB-developers] ConnectionManager

sorry for jumping in (but I guess that's what we're here for). I personally don't think ConnectionManager is worth much consideration. IMO connections should not be "managed" inside OJB at all. Also, I feel the idea of maintaining connection information (optionally) on a per-class level is rather questionable.

reason 1:
management of transactions becomes rather difficult if you have objects from classes with different connection specs participating in the same OJB transaction. This would require a full 2PC implementation and coordination. I don't see OJB addressing this, which can cause major problems.

reason 2:
you are bound to get into conflicts with runtime infrastructures (Connection Pools, J2EE containers) which make exactly this (management and coordination of external resources) their primary focus. Along this line, I haven't found any place where connections are closed in OJB. In a pooling environment (any app server) this is a major issue, as connections should always be closed (and thus released to the pool) as early as possible (i.e. immediately after commit/rollback).

BTW, why is PersistenceBrokerImpl calling setAutoCommit(true/false) all the time? Setting it to false once and for all should suffice, shouldn't it?

Christian's 2c

----- Original Message -----
From: "Bischof, Rainer" <rai...@ed...>
To: "objectbridge" <obj...@li...>
Sent: Monday, November 12, 2001 11:25 AM
Subject: [OJB-developers] ConnectionManager

> Thomas,
>
> what about making the ConnectionManager pluggable?
> This would enable me to use user specific connections. E.g. in
> getConnectionForClassDescriptor I determine the user session that the
> currently executing thread is working for and return a user specific
> connection.
> The already pluggable ConnectionFactory does not really solve this, as it
> is only called once to create a connection for each classdescriptor.
>
> The problem is that we have to use session specific connections since
> there is no dummy ID the application can use for all user requests. This
> is because of security restrictions in our environment.
>
> Any objections?
>
> Rainer
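[Editor's sketch] The session-specific connections Rainer asks for - "determine the user session that the currently executing thread is working for and return a user specific connection" - could be sketched with a thread-bound lookup along these lines. The interface and all names here are illustrative, not OJB's actual ConnectionFactory/ConnectionManager API, and plain Strings stand in for java.sql.Connection:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a pluggable factory handing out one connection per user
// session, with the session determined from the executing thread.
// Names are illustrative, not OJB's real API; Strings mock Connections.
interface SessionConnectionFactory {
    String connectionFor(String sessionId);
}

class PerSessionConnectionFactory implements SessionConnectionFactory {
    // which user session the current thread is serving
    static final ThreadLocal<String> CURRENT_SESSION = new ThreadLocal<>();

    private final Map<String, String> connections = new ConcurrentHashMap<>();

    public String connectionFor(String sessionId) {
        // one (mock) connection per session, created lazily
        return connections.computeIfAbsent(sessionId, id -> "connection-for-" + id);
    }

    // what a getConnectionForClassDescriptor-style hook would call
    // instead of returning one shared connection
    public String connectionForCurrentThread() {
        return connectionFor(CURRENT_SESSION.get());
    }
}

public class SessionConnectionDemo {
    public static void main(String[] args) {
        PerSessionConnectionFactory factory = new PerSessionConnectionFactory();
        PerSessionConnectionFactory.CURRENT_SESSION.set("alice");
        System.out.println(factory.connectionForCurrentThread());
        PerSessionConnectionFactory.CURRENT_SESSION.set("bob");
        System.out.println(factory.connectionForCurrentThread());
    }
}
```

In a real web application the servlet filter that begins the request would set CURRENT_SESSION, and the factory would also have to close connections when the session ends, as discussed above.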
From: Bischof, R. <rai...@ed...> - 2001-11-12 20:31:51
|
Christian,

I agree. The MetaObjectCacheImpl was just a quick fix to allow per-class caching as I need it right now, or to be more precise: I just needed the PermanentObjectCacheImpl for some special classes which are really expensive to re-create from the DB. So I fully support this!

rb

-----Original Message-----
From: Christian Sell [mailto:chr...@ne...]
Sent: Monday, November 12, 2001 20:00
To: Bischof, Rainer
Subject: Re: [OJB-developers] Cache Implementations

I will introduce a cache-configuration element (something like that) under the class node in the new repository schema. If there is a class-specific element, use that, otherwise use the global default. IMO it is always good to maintain the cache on a per-class basis, as this is an easy and safe way to reduce Map lookup operations.

- Christian

----- Original Message -----
From: "Bischof, Rainer" <rai...@ed...>
To: "objectbridge" <obj...@li...>
Sent: Monday, November 12, 2001 11:10 AM
Subject: [OJB-developers] Cache Implementations

> All,
>
> please find attached two implementations of the ObjectCache:
>
> MetaObjectCacheImpl is a cache which allows the developer to configure
> separate cache implementations for different classes / inheritance trees.
>
> PermanentObjectCacheImpl is a cache that never automatically removes
> objects from the cache, regardless of memory consumption, time, etc.
> This is useful in conjunction with the MetaObjectCacheImpl if you have
> objects of a certain type which are few in number but very expensive to
> re-create from the DB. That way you can be sure to have them in the
> cache at all times.
>
> Thomas,
> please have a look and feel free to do with it whatever you like.
>
> Cheers
> Rainer
|
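The per-class routing idea discussed above can be sketched briefly. The types below are cut-down stand-ins, not OJB's real ObjectCache API (which keys on Identity objects and has different signatures): a never-evicting cache plays the role of PermanentObjectCacheImpl, and a router plays the role of MetaObjectCacheImpl.

```java
// Cut-down stand-in for OJB's ObjectCache interface (real API differs).
interface ObjectCacheSketch {
    void cache(Object key, Object value);
    Object lookup(Object key);
}

// Never evicts -- the "PermanentObjectCacheImpl" idea for objects that are
// few in number but expensive to re-create from the DB.
class PermanentCacheSketch implements ObjectCacheSketch {
    private final java.util.Map<Object, Object> store = new java.util.HashMap<>();
    public void cache(Object key, Object value) { store.put(key, value); }
    public Object lookup(Object key) { return store.get(key); }
}

// The "MetaObjectCacheImpl" idea: route to a per-class cache if one was
// configured, otherwise fall back to a default implementation.
class MetaCacheSketch {
    private final java.util.Map<Class<?>, ObjectCacheSketch> perClass = new java.util.HashMap<>();
    private final ObjectCacheSketch defaultCache;

    MetaCacheSketch(ObjectCacheSketch defaultCache) { this.defaultCache = defaultCache; }

    void register(Class<?> clazz, ObjectCacheSketch cache) { perClass.put(clazz, cache); }

    private ObjectCacheSketch cacheFor(Class<?> clazz) {
        ObjectCacheSketch c = perClass.get(clazz);
        return (c != null) ? c : defaultCache;
    }

    void cache(Class<?> clazz, Object key, Object value) { cacheFor(clazz).cache(key, value); }
    Object lookup(Class<?> clazz, Object key) { return cacheFor(clazz).lookup(key); }
}
```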
From: Charles A. <cha...@hp...> - 2001-11-12 15:24:33
|
Hi all;

[snip]
> I would suggest to use the PersistenceBroker.getIteratorByQuery(...).
> This will give you full control on when and how many objects are to
> instantiated. It will also reduce memory load as there are only
> references to the objects in the current page (say 20) and not to all
> objects in the collection (say 10000). This will help the Java Garbage
> collector to do a better job.

Is there a way, using getIteratorByQuery, of *not* instantiating the objects on a .next call? I don't think there is, but I wanted to check to be sure. I would like to be able to skip the first n records of the record set without instantiation of the objects; this may well be more resource-friendly.

Cheers,

Charles.

This email and any attachments are strictly confidential and are intended solely for the addressee. If you are not the intended recipient you must not disclose, forward, copy or take any action in reliance on this message or its attachments. If you have received this email in error please notify the sender as soon as possible and delete it from your computer systems. Any views or opinions presented are solely those of the author and do not necessarily reflect those of HPD Software Limited or its affiliates. At present the integrity of email across the internet cannot be guaranteed and messages sent via this medium are potentially at risk. All liability is excluded to the extent permitted by law for any claims arising as a result of the use of this medium to transmit information by or to HPD Software Limited or its affiliates.
|
From: Mahler T. <tho...@it...> - 2001-11-12 13:46:41
|
Hi Rainer,

thanks for your code. I will have a look at it and include it in the distribution.

cheers,

Thomas

-----Original Message-----
From: Bischof, Rainer [mailto:rai...@ed...]
Sent: Monday, November 12, 2001 11:11
To: objectbridge
Subject: [OJB-developers] Cache Implementations

All,

please find attached two implementations of the ObjectCache:

MetaObjectCacheImpl is a cache which allows the developer to configure separate cache implementations for different classes / inheritance trees.

PermanentObjectCacheImpl is a cache that never automatically removes objects from the cache, regardless of memory consumption, time, etc. This is useful in conjunction with the MetaObjectCacheImpl if you have objects of a certain type which are few in number but very expensive to re-create from the DB. That way you can be sure to have them in the cache at all times.

Thomas,
please have a look and feel free to do with it whatever you like.

Cheers
Rainer
|
From: Bischof, R. <rai...@ed...> - 2001-11-12 10:25:41
|
Thomas,

what about making the ConnectionManager pluggable? This would enable me to use user-specific connections. E.g. in getConnectionForClassDescriptor I determine the user session that the currently executing thread is working for and return a user-specific connection. The already pluggable ConnectionFactory does not really solve this, as it is only called once to create a connection for each ClassDescriptor.

The problem is that we have to use session-specific connections since there is no dummy ID the application can use for all user requests. This is because of security restrictions in our environment.

Any objections?

Rainer
|
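Rainer's per-session idea can be sketched as follows. All names are illustrative and none of this is OJB's API: the String returned is a stand-in for a real java.sql.Connection that would be opened with user-specific credentials, and the user session is resolved per thread via a ThreadLocal.

```java
// Sketch of a pluggable, session-aware connection manager: the calling
// thread's user is resolved first, then a user-specific connection is
// returned (opened lazily on first use).
class SessionConnectionManagerSketch {
    private static final ThreadLocal<String> currentUser = new ThreadLocal<String>();
    private final java.util.Map<String, String> connectionsByUser = new java.util.HashMap<>();

    static void bindUser(String user) { currentUser.set(user); }

    synchronized String getConnectionForClassDescriptor(String classDescriptorName) {
        String user = currentUser.get();
        String con = connectionsByUser.get(user);
        if (con == null) {
            con = "connection[" + user + "]";  // open a user-specific JDBC connection here
            connectionsByUser.put(user, con);
        }
        return con;
    }
}
```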
From: Thomas M. <tho...@ho...> - 2001-11-11 22:40:42
|
Hi Christian,

Christian Sell wrote:
> Hello,
>
> After having looked into the mapping repository for a while, I would like to
> discuss a few preliminary propositions. This is what I have come up with:
>
> 1. base redesign on XML schema
> ===========
> I would really like to base the redesign on XML schema, as I already
> proposed in an earlier discussion thread. The reasons are the following:
>
> XML schema is a complete data modeling language, which can not at all be
> said of DTD. One example, which bears rather heavily on the current design
> of the repository, is the fact that using DTD, identity and referential
> constraints can only be expressed globally, using the ID and IDREF
> properties. This (and not the parser implementation as I suggested earlier)
> has brought about the need to introduce identifiers for many elements, which
> had to duplicate structural information (e.g. "Table1.Column1" for a column
> of name "Column1" defined in table "Table1"). Referential constraints cannot
> be expressed type-safely in DTD.

That's an important point pro schema!

> With XML schema all this is not an issue. Constraints can be defined such
> that they only apply within a certain scope (e.g. column names must be
> unique within the scope of the enclosing table). If we really want parser
> validation, there are nowadays parsers that support XML schema (xerces 1.4
> does).

Sounds good. I see only one disadvantage (not really a killer argument): the xerces parser is rather big. I'd like to keep the OJB distributions as small as possible.

> 2. remove id attributes
> ============
> From this follows that I would suggest removing all extra id attributes
> where possible, and using "natural" constraints (like column name, etc.).
> Where no such natural identifiers exist (foreign key names, primary key
> names), they can be introduced within the given scope.

Yes, there are superfluous ids (like the ClassDescriptor ids).
BUT FieldDescriptor ids are required for ordering the list of FieldDescriptors of a Class. Important for Statement binds etc.

> 2a. keep database schema information separately?
> ===========
> This was an issue in the earlier thread. I personally lean towards keeping
> database schema information as part of the repository, but separated from the
> mappings. Of course, this duplicates the schema information from the DBMS,
> and may be tedious to write, but I think it will help in the long run.
> Especially with regard to the specification of foreign keys, primary keys
> and the like, I think it is better to keep them in one place than to carry
> them along with the mapping - which would also imply the potential of
> idiosyncrasies and errors.
>
> An existing DBMS schema can easily be imported via JDBC.

I'm also not sure what the best solution is. If we really can get rid of the duplicate structural information with XML schema, separating the table descriptions will be quite elegant. Reverse engineering of an RDBMS will be much simpler this way. (Even generation of DDL information will be much simpler this way.) OTOH: even the mother of all persistence layers (TOPLink) does not separate database schema info from mapping data. And keeps things simple (stupid). When Ivan first brought in the idea of separating table descriptors from class descriptors I liked his idea very much. Then I applied his suggestion to the repository.xml and saw that it produced a lot of redundancy and did not help to make things clearer. But this was mostly due to the limitations of DTD you mentioned above!

> 3. change relationship specification
> ==============
> I suggest changing the relationship specifications (collection and reference)
> such that they contain the following:
>
> a) an optional "inverse" specification (like ODMG IDL) for relationships
> which are bidirectionally navigable
> b) a foreign key reference, which points to a foreign key specification from
> the DBMS schema part

Are M:N relationships included here?

> 4. Remove Connection information?
> ==============
> I am not sure on this, just a thought - why not let the application
> establish the JDBC connection, and hand it in to the OJB API?

Yes, I have thought in this direction too. I think we should support both options: 1. declaration in the repository, 2. providing JDBC Connections programmatically.

> that's it for now. I would like to hear your opinion on this. Also, we should
> determine how to proceed from here. I am aware that my suggestion amounts to
> a rather extensive redesign, so if and before we settle on it, we should
> make sure it fits into the overall timeframe and interfaces well with other
> development progress. I think I could provide the design and parser
> implementation within 3 weeks (depending on the amount of further
> discussion).

I'm planning to provide support for m:n relationships soon. This is the only place where I expect conflicting activities regarding the XML redesign. But this will be only a minor issue. So it should be safe to start on 1.), 2.) and 2a.) at any time. Regarding the advanced topics under 3.) we'll get more trouble. Please let me finish the work on automatic assignment of foreign key values first, before you start on this. (approx. 3-4 weeks from now)

thanks,
Thomas
|
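The scoped-constraint argument in the thread above (column names need only be unique within their enclosing table, which a DTD's global ID/IDREF mechanism cannot express) can be sketched in XML Schema roughly like this. Element and attribute names are illustrative, not a proposed OJB repository syntax:

```xml
<xs:element name="table">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="column" maxOccurs="unbounded">
        <xs:complexType>
          <xs:attribute name="name" type="xs:string" use="required"/>
        </xs:complexType>
      </xs:element>
    </xs:sequence>
    <xs:attribute name="name" type="xs:string" use="required"/>
  </xs:complexType>
  <!-- scoped identity constraint: column names must be unique only
       within the enclosing table, not document-wide -->
  <xs:unique name="uniqueColumnNamePerTable">
    <xs:selector xpath="column"/>
    <xs:field xpath="@name"/>
  </xs:unique>
</xs:element>
```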
From: Christian S. <chr...@ne...> - 2001-11-11 19:32:16
|
Hello,

After having looked into the mapping repository for a while, I would like to discuss a few preliminary propositions. This is what I have come up with:

1. base redesign on XML schema
===========
I would really like to base the redesign on XML schema, as I already proposed in an earlier discussion thread. The reasons are the following:

XML schema is a complete data modeling language, which can not at all be said of DTD. One example, which bears rather heavily on the current design of the repository, is the fact that using DTD, identity and referential constraints can only be expressed globally, using the ID and IDREF properties. This (and not the parser implementation as I suggested earlier) has brought about the need to introduce identifiers for many elements, which had to duplicate structural information (e.g. "Table1.Column1" for a column of name "Column1" defined in table "Table1"). Referential constraints cannot be expressed type-safely in DTD.

With XML schema all this is not an issue. Constraints can be defined such that they only apply within a certain scope (e.g. column names must be unique within the scope of the enclosing table). If we really want parser validation, there are nowadays parsers that support XML schema (xerces 1.4 does).

2. remove id attributes
============
From this follows that I would suggest removing all extra id attributes where possible, and using "natural" constraints (like column name, etc.). Where no such natural identifiers exist (foreign key names, primary key names), they can be introduced within the given scope.

2a. keep database schema information separately?
===========
This was an issue in the earlier thread. I personally lean towards keeping database schema information as part of the repository, but separated from the mappings. Of course, this duplicates the schema information from the DBMS, and may be tedious to write, but I think it will help in the long run.
Especially with regard to the specification of foreign keys, primary keys and the like, I think it is better to keep them in one place than to carry them along with the mapping - which would also imply the potential of idiosyncrasies and errors.

An existing DBMS schema can easily be imported via JDBC.

3. change relationship specification
==============
I suggest changing the relationship specifications (collection and reference) such that they contain the following:

a) an optional "inverse" specification (like ODMG IDL) for relationships which are bidirectionally navigable
b) a foreign key reference, which points to a foreign key specification from the DBMS schema part

4. Remove Connection information?
==============
I am not sure on this, just a thought - why not let the application establish the JDBC connection, and hand it in to the OJB API?

That's it for now. I would like to hear your opinion on this. Also, we should determine how to proceed from here. I am aware that my suggestion amounts to a rather extensive redesign, so if and before we settle on it, we should make sure it fits into the overall timeframe and interfaces well with other development progress. I think I could provide the design and parser implementation within 3 weeks (depending on the amount of further discussion).

regards,
Christian Sell
|
From: Thomas M. <tho...@ho...> - 2001-11-06 17:39:23
|
Hi Theo,

I haven't heard of any attempts in this direction. I personally have no intention to do such a port. But I have no objections if someone tries to work on this.

thanks for your interest,

Thomas

Theo Albers wrote:
> Hi Thomas,
>
> I dived into your ObjectRelationalBridge sources and I was wondering if
> someone is working on a port to MS.Net? Of course you must be thinking "who
> wants to port Java to C#", but...sometimes you'll have to!
>
> Kind regards,
>
> Theo Albers
> Unit 4 Agresso
|
From: Charles A. <cha...@hp...> - 2001-11-05 14:54:50
|
Hi

> >Is there any workaround anyone can think of ? I'm sort
> > of toying with the idea of creating a class that would create the
> > select count(*) from x where xxx statement from the OQL query....
>
> Have a look at the Distribution page. There is a new preliminary
> release, that supports "report queries".
> ((http://prdownloads.sourceforge.net/objectbridge/ojb-0.7.207-src.tgz))
> You can perform any SQL select and just a Collection of Object[] is
> returned, that represent the ResultSet-rows.

OK. I've got a hardcoded SQL query to work, and return a count. This Is Good.

But now, I want to go from a Query to a count; in other words, I want to re-use the class descriptors to build the actual SQL statement. In a quick-hack-to-try-it-out type way, I did the following.

<----------------- Start Of Code Snippet ------------------------------->
Transaction tx = odmg.newTransaction();
tx.begin();
OQLQuery oqlQuery = odmg.newOQLQuery();
oqlQuery.create("select i from " + DebtorMaster.class.getName()
    + " where name like \"F%\"");
tx.commit();

Query brokerQuery = ((OQLQueryImpl) oqlQuery).getQuery();
ClassDescriptor cld = DescriptorRepository.getInstance()
    .getDescriptorFor(brokerQuery.getSearchClass());
String whereClause = SqlGenerator.getInstance()
    .asSQLStatement(brokerQuery.getCriteria(), cld);
String sql = "select count(*) from " + cld.getTableName();
if (whereClause.length() > 0) {
    sql += " where " + whereClause;
}
System.out.println("sql = " + sql);

PreparedStatement stmt =
    broker.getStatementManager().getPreparedStatement(cld, sql);
broker.getStatementManager().bindStatement(stmt, brokerQuery.getCriteria(), cld, 1);
ResultSet rs = stmt.executeQuery();
rs.first();
long count = rs.getLong(1);
rs.close();
System.out.println("count = " + count);
<----------------- End Of Code Snippet ------------------------------->

I could not use the getReportQueryIteratorBySQL as that takes a SQL statement, not a PreparedStatement, as a parameter.
This is fair enough; but to re-use the SQL generation bits-and-pieces in ojb.broker.accesslayer, I need to use a prepared statement.

Ultimately, I am aiming to have an object that has an interface like this:

public interface QueryCount {
    public void setQuery(ojb.broker.query.Query query);
    public long getNumberOfResults();
}

This way, it should work with OQL queries, broker by-example and broker by-criteria queries.

Am I going about this the right way?

With thanks,

Charles.
|
From: Thomas M. <tho...@ho...> - 2001-11-03 19:24:48
|
Hi Christian,

Christian Sell wrote:
> I think, when a relationship is resolved, you will almost always end up
> accessing all the objects therein. Your argument about the "fourth element"
> is not convincing to me, as there is no notion of ordering in relationships.
> You would always have to iterate the relationship and decide based on the
> object state which one you need. And, if you only need a few particular
> objects from a relationship, you should use an explicit query in the first
> place. My point: when a relationship is resolved, the underlying semantics
> are that ALL objects are requested.

Good argument. I added this as a feature request to the todo list.

> What I do see is that an iteration over a relationship could be dropped
> before the end is reached because some condition is met, e.g. because the
> user stops paging in the list she is presented with. This should be handled
> by a cursor-based relationship iterator, i.e. the iterator is internally
> based on a JDBC ResultSet (an SQL cursor), and objects are only instantiated
> when iterated over.

That's a good idea. OJB already supports Iterators (PersistenceBroker.getIteratorByQuery(...)). It's not difficult to write lazy collections (based on such iterators) instead of just exhausting the Iterator to fill a Vector, as it is done now in PersistenceBroker.getCollectionByQuery(...). I'll add this to the todo list.

> > Class IDs are superfluous. Field IDs are necessary to provide an order
> > relation for the Comparator used for sorting.
> > There will be a complete review and cleanup of the existing DTD once we
> > have included all 1.0 features to the existing DTD (n:m relationships,
> > automatic foreign key assignment, JNDI naming)
>
> Looking at the repository, I also see some need for clean-up. The things
> that immediately come to my mind were:
>
> - make the structure XML-like. Currently, the repository syntax is a hybrid
> between XML and Java properties files.
> As I see it, this is due to your
> parser, which only works on the first level (it does not use a stack
> internally). Therefore, you have introduced stuff like "<url.protocol>",
> where it should really be "<url><protocol></protocol></url>". I really
> think this should be changed asap. I could donate a SAX ContentHandler
> base class which makes this rather easy.

Ivan provided a clean design for a DTD. (I guess you'll find it in the mailing list archive. If not, I can post it to you.) When all features required for release 1.0 are there, I'm planning a rewrite of the DTD and parser. Your help is of course welcome.

> - introduce field-level ConversionStrategies. This is a must IMO.
> - introduce the "jndi-lookup" subelement for the JdbcConnectionDescriptor

Already on the todo list.

> > > Q4: ==================================
> > > Are there any plans to remove the need for separately mapping foreign key
> > > attributes? I find this a rather annoying requirement, which clearly
> > > defeats the notion of "transparence".
> >
> > I'm not sure if I understand you right?
> > Currently OJB requires setting foreign key attributes manually.
> > (E.g. we create a new A with 20 B objects in a collection attribute.
> > a.primary_key is computed automatically with the <autoincrement>
> > features. But we have to set the b.foreign_key attributes referencing
> > a.primary_key manually.)
> > This is of course annoying and has to be replaced by some automatic
> > solution. This feature is already on the todo list. But I have not
> > started working on it.
>
> Well, my point is going a bit further. I would even want to avoid the
> b.foreign_key attribute altogether, as it clearly is a database artifact
> that has no significance to the object model. After all, B already has a
> my_a attribute.
> If you want to persist a plain Java class which was not
> designed with OJB in mind (which is what transparence is about), it will
> most certainly not have the b.foreign_key attribute. I am sure this can be
> done rather easily, as the value is there in the target object's PK, and can
> be retrieved through metadata during storage operations (done this before).

OJB started as a MAPPING tool. That is, we assumed that there is a full database model. The developers I talked to had no objections against having primary key attributes and foreign key attributes in the persistent classes. Obviously those things don't belong to the domain model, but they can be clearly marked as not being part of the domain model (e.g. by declaring them as "private transient"). As most OO designs don't allow direct access to attributes but only through getters and setters, these "technical" attributes can be completely hidden from the application developer. Thus I decided to consider this kind of transparency as "nice to have" but not essential. Regarding your idea to eliminate the need for foreign key attributes, I agree that this should be possible without too much effort.

> Any help is appreciated a lot!
>
> I am currently making up my mind with respect to an object persistence
> solution for my projects. TopLink is not an option for obvious reasons, and

TopLink is really crowded with an immense mass of really good features. Also the mapping Workbench is really great. But it's so expensive. In my company we have developed an abstraction layer that encapsulates persistence layers to let projects decide whether they want to use TopLink or OJB. It's only a configuration entry to be changed...

> EJB 2.0 is giving me a headache with all those interfaces and helper classes
> you need to make it work, and the lack of O/R features like inheritance &
> polymorphism.

I agree!
THAT is poor design (not my harmless foreign key attributes ;-)

> I also owe a lot to the OSS community, so it would be about
> time to make a contribution myself.

That's great.

> As I would plan to use OJB in production, one of my main requirements
> would be to remove all the performance-critical issues like singular
> loading, and all other excess DBMS calls (like those someone else
> mentioned about DLists, which allocate sequence values for themselves
> and all elements - yuck).

OK, DLists must work this way. With a better SequenceManager (as it comes with the next release) they won't be a big problem. What may become a performance bottleneck is the LockManager. Currently it's based on a DB table. That is, all locking, unlocking, checking for locks etc. produces database traffic. For heavy-duty apps a separate LockManager server is a must. For the singlevm mode an in-memory LockManager would be sufficient.

> Second would be the removal of non-transparency,
> and a JDO interface is also quite important.

The next release will be called 0.7. It will provide a completed client/server architecture (now the MetaData layer will also be accessed remotely). With this release there will be a solid persistence kernel working that could be used for a JDO implementation.

I see 2 possible approaches for the JDO implementation:
1. Start coding and make as much "copy-paste" reuse from the ODMG implementation as possible.
2. Start with a refactoring:
- Extract everything from the ODMG implementation that might be useful for JDO too. Build a generic Object Transaction Kernel based on this refactoring.
- Reimplement ODMG based on this new Kernel.
- Implement JDO based on the new Kernel.

This would be a layered architecture with
- the OJB PersistenceBroker as base,
- the OJB Object Transaction Kernel (OTK) (based on the Broker),
- ODMG and JDO implementations built on top of the OTK.

This is obviously the best way to go but will be a lot of work too...
> Some of this would possibly require a redesign/reworking of the kernel. If
> we could come to an agreement (possibly after more discussion), I may be
> willing to make a significant time investment in the near future (towards
> the end of the year). What do you think?

Sounds great. Of course we will need some more clarification on details. But as mentioned before: I'm really glad to get support!

cheers,
Thomas

> regards,
> Christian
|
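The lazy-collection idea from the start of this mail can be sketched briefly. LazyListSketch is not OJB code: the iterator stands in for a cursor-based getIteratorByQuery result, and elements are materialized only on first access instead of being exhausted into a Vector up front.

```java
// Lazy collection backed by an iterator: pull elements from the underlying
// (cursor-like) source only when an index is first accessed.
class LazyListSketch {
    private final java.util.Iterator<Object> source;
    private final java.util.List<Object> fetched = new java.util.ArrayList<>();

    LazyListSketch(java.util.Iterator<Object> source) { this.source = source; }

    Object get(int index) {
        while (fetched.size() <= index && source.hasNext()) {
            fetched.add(source.next());   // materialized on demand
        }
        return fetched.get(index);
    }

    int fetchedSoFar() { return fetched.size(); }
}
```

If the user stops paging after the first few elements, the remaining rows are never pulled from the cursor at all.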
From: Thomas M. <tho...@ho...> - 2001-11-03 17:20:43
|
Hi Satish,

Sorry for the problems OJB is causing! This is really a severe bug. I'll have a look at it asap and will keep you updated.

--Thomas

sa...@na... wrote:
> Hi Thomas,
>
> I have run into a particularly simple but nasty bug with
> SequenceManager, very much on the same lines as the previous bugs I ran
> into with stale connections.
>
> Basically, several database records have been overwritten (and have
> several people annoyed at me right now) as follows:
>
> I use broker.getUniqueId(classname) to get the next id from the OJB_SEQ
> table. However, broker.getUniqueId(..) calls
> m_SequenceManager.getUniqueId(..) in
> singlevm/PersistenceBrokerDefaultImpl.java.
>
> The m_SequenceManager in the broker instance is obtained by using
> SequenceManagerFactory.getSequenceManager(this) in the broker
> constructor.
>
> However, SequenceManagerFactory.getSequenceManager(broker) returns the
> static SEQMAN that it has created the first time it was used. The
> broker argument is not used except for the first time. So, SEQMAN uses
> a broker instance that will potentially become stale due to idle
> connection timeout in MySQL. Even without the timeouts,
> brokerInstance3.getUniqueId(..) will use brokerInstance1 for actually
> reading the next id from the db, which does not seem right.
>
> One fix that will be thread-safe is to add the broker as an argument to
> SequenceManager.getUniqueId(classname, fieldname, broker).
>
> Let me know what you think.
>
> If you do have a quick fix/workaround for this, please let me know.
> I am under the gun right now.
>
> -----------------------------------------------------------------
>
> Now, if you are interested, here is how several records have been
> overwritten due to this bug:
>
> After a while after the web server was started, the cached broker
> instance in SEQMAN became stale and stopped updating the OJB_SEQ table
> (let's say at sequence number = 50).
> However, due to poor exception/error handling in
> SequenceManagerDefaultImpl.getNextId(), getNextId() returned the
> in-memory copy of lastId+1 and hence everything seemed to be OK and did
> not throw the exception back to the application.
>
> Now, a few days later (let's say we were at seq# = 150 at this time),
> the web server was restarted to update to ojb-0.5.200 and then the bad
> value (50) was read from OJB_SEQ and was updated for a while. So, the
> entries starting at 50 started getting overwritten.
>
> Thanks,
> Satish.
|
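Satish's proposed fix — pass the calling broker into getUniqueId(...) instead of caching one statically — can be sketched as follows. BrokerStub and the in-memory map are stand-ins for OJB's real broker and the OJB_SEQ table; the point is that the sequence manager holds no broker reference that can go stale.

```java
// Stand-in for a PersistenceBroker with its own live DB connection;
// the map plays the role of the OJB_SEQ table.
class BrokerStub {
    private final java.util.Map<String, Integer> ojbSeq = new java.util.HashMap<>();

    int readAndIncrement(String className) {
        Integer last = ojbSeq.get(className);
        int next = (last == null) ? 1 : last + 1;
        ojbSeq.put(className, next);
        return next;
    }
}

// Sequence manager that keeps no cached broker: each call goes through the
// caller's own (live) broker, so a stale static instance cannot silently
// hand out duplicate ids.
class SequenceManagerSketch {
    synchronized int getUniqueId(String className, BrokerStub caller) {
        return caller.readAndIncrement(className);
    }
}
```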
From: Thomas M. <tho...@ho...> - 2001-11-02 12:45:05
|
Hello Charles,

Charles Anthony wrote:
> Hello once more,
>
> Is it possible to do a count() in an OQL expression ?

The OQL standard allows all the kinds of aggregation that come with SQL.

> Or rather, is it possible to do a count() in OJB's implementation of OQL ?

Unfortunately, no! The OJB OQL implementation is not complete. It only supports the simple selection mechanism that is implemented in the ojb.broker.Query package. The OJB OQL parser translates OQL to Query expressions, which are then executed with PersistenceBroker.getCollectionByQuery(ManageableCollectionClass, Query).

> As far as I can see, it isn't possible at the moment.

You are right. The OJB mechanisms are good for selection and navigation, but not for aggregations etc.

> Basically, I'm trying to give an end user the ability to do a flexible
> search that could return many hundreds of rows; I need to be able to page
> the results. I can get 'blocks' of objects quite simply (by skipping the
> first n objects returned in a query) - but what I need to be able to do is
> to work out how many blocks of results there are for the results of a query
> (but without iterating the entire results), in order to display a
> paging-thing to the end user (a la Google).
>
> I'm sure this is a common requirement; I can think of other places where
> count(n) would be useful, as well as max(), min() etc.
>
> I can also see that putting functions into the OQL parser/query code could
> be quite tricky, too.

Yes, the parser is one of OJB's dark spots... It has bugs even for the most simple tasks, such as parsing Integers properly...

> Is there any workaround anyone can think of ? I'm sort of toying with the
> idea of creating a class that would create the
> select count(*) from x where xxx statement from the OQL query....

Have a look at the Distribution page. There is a new preliminary release that supports "report queries".
((http://prdownloads.sourceforge.net/objectbridge/ojb-0.7.207-src.tgz))

You can perform any SQL select, and a Collection of Object[] is returned, representing the ResultSet rows.

To build a paging solution I think the OQL mechanism is not so good: it is based on the PersistenceBroker.getCollectionByQuery(...) method. Thus ALL objects (or their respective proxies) matching the query (maybe millions) are returned. This may become really dangerous if you allow user-defined queries...

I would suggest using PersistenceBroker.getIteratorByQuery(...). This will give you full control over when and how many objects are instantiated. It will also reduce memory load, as there are only references to the objects in the current page (say 20) and not to all objects in the collection (say 10000). This will help the Java garbage collector to do a better job.

You can use the PersistenceBroker Query API safely within ODMG transactions. You only have to ensure that objects that are to be managed by ODMG transactions are properly locked to the transactions. Only disadvantage: your app is not fully ODMG compliant...

As paging is a common problem, your solution could be useful as sample code for others!

HTH,
Thomas
|
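The paging advice above can be sketched like this. PagerSketch is not OJB's API: the iterator stands in for a getIteratorByQuery result, and materialize() stands in for the object instantiation a real iterator performs in next() — so this shows the idea (skip `offset` entries, build at most `pageSize` objects), not OJB's exact behaviour.

```java
// Walk an iterator of row keys, skip the first `offset` entries, and
// materialize only the `pageSize` objects of the requested page.
class PagerSketch {
    static int materialized = 0;   // counts instantiations, for illustration

    static String materialize(Integer key) { materialized++; return "object" + key; }

    static java.util.List<String> page(java.util.Iterator<Integer> rows,
                                       int offset, int pageSize) {
        java.util.List<String> result = new java.util.ArrayList<>();
        int position = 0;
        while (rows.hasNext() && result.size() < pageSize) {
            Integer key = rows.next();
            if (position++ < offset) continue;   // skipped rows never materialized
            result.add(materialize(key));
        }
        return result;
    }
}
```

Even with 10000 matching rows, only one page's worth of objects is ever built, which is what keeps the memory load low and the garbage collector happy.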
From: Thomas M. <tho...@ho...> - 2001-11-02 12:23:30
|
Hi again Charles,

Charles Anthony wrote:
> Hi all,
>
> Whilst using OJB in a prototype, I found that using the standard
> ojb.odmg.oql.OQLQueryImpl class to do an OQL query seemed a little
> inefficient.
>
> It turns out that this is because OQLQueryImpl uses a DList to return the
> results as a collection. The DList implementation accesses the OJB_SEQ
> table to gain an ID for itself, and one for each entry in the list.

You are right. Using a DList may be inefficient in certain circumstances. Using a DList for a collection result is also not specified by ODMG. I chose to use DList for the following reasons:
1. It supports a lot of useful interfaces: Collection, List, DCollection, DList.
2. It is persistence capable. That is, you can store the result of a query for subsequent operations.
3. It integrates into the ODMG transaction mechanism.

Of course it is not always necessary to have all these features. In certain scenarios, where I did not need any of the above features, I used the simple getCollectionByQuery(query) method that just returns a java.util.Vector to improve performance...

> I have attached a quick-n-dirty attempt to improve on this; basically, I've
> implemented a ManageableArrayList collection, which extends ArrayList and
> implements ManageableCollection. I've also implemented a SpeedyQuery class,
> which extends OQLQueryImpl and overrides execute to return a
> ManageableArrayList instead of a DList. I've also attached a simple
> CompareQueries class that tests the SpeedyQuery class.
>
> In this test (against a table with 1889 rows on SQL Server), the
> OQLQueryImpl took on average 37 secs. The SpeedyQuery took 1.5 secs.

A side note: could you send me the SQL Server settings for sql1.txt and the JDBCConnectionDescriptor? Did you have any problems with OJB and SQL Server? Was it necessary to make changes to SQL1.txt or the repository.xml apart from the JDBC connections?
> I'm sure there are drawbacks to this approach, and I'm sure that I've
> overlooked something terribly important. The only things that I note at the
> moment are:
>
> * I'm not currently registering the resulting items with the current
> transaction; this would be trivial to achieve.

This is important to provide transactional integrity. Each object returned by a query should be read-locked immediately.

> * If you remove or add an item to the ManageableArrayList, the changes would
> not be propagated to the database.

If we look at a result collection as a snapshot, updates are not necessary... One might add: thread safety.

> Still, even with those caveats (which I'm sure are possible to deal with), I
> still think this speed improvement is not to be sniffed at.

Right: 1.5 versus 37 is quite a difference.

> Comments are welcome; just be gentle with me - this is the first code I have
> sent to an open-source mailing list....

Thanks for your contribution! I will try to integrate it into the main distribution if you don't mind. My idea is to have a special configuration entry in OJB.properties that determines which Collection type is to be used for OQL queries. Maybe it's also a good idea to make the classes that are used to implement certain ODMG interfaces configurable. This way it will be a lot easier to allow experimentation with user-defined stuff like your SpeedyQuery class...

Thanks a lot again for your contribution,

--Thomas
|
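The ManageableArrayList idea discussed in this thread can be sketched as follows. The interface here is a simplified stand-in (OJB's actual ManageableCollection contract differs in detail): a plain ArrayList satisfying it needs no DList and therefore no per-element sequence-number lookups against OJB_SEQ.

```java
// Simplified stand-in for a ManageableCollection-style contract
// (method names follow OJB's ojbAdd/ojbIterator naming convention).
interface ManageableCollectionSketch {
    void ojbAdd(Object anObject);
    java.util.Iterator<Object> ojbIterator();
}

// A plain ArrayList that satisfies the contract -- no DB access at all
// when the query result is assembled.
class ManageableArrayListSketch extends java.util.ArrayList<Object>
        implements ManageableCollectionSketch {
    public void ojbAdd(Object anObject) { add(anObject); }
    public java.util.Iterator<Object> ojbIterator() { return iterator(); }
}
```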