objectbridge-developers Mailing List for ObJectRelationalBridge (Page 3)
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | | | | | | | | | | | 14 | 20 |
| 2001 | 33 | 8 | 3 | 1 | 18 | 6 | 15 | 71 | 29 | 43 | 77 | 54 |
| 2002 | 54 | 147 | 144 | 163 | 307 | 240 | | | 1 | | | |
From: Bradley A. S. <br...@ba...> - 2002-06-13 16:13:16
|
I believe the calls to getSubString and getBytes are incorrect.
case Types.CLOB :
{
    java.sql.Clob aClob = rs.getClob(columnId);
    result = aClob.getSubString(0L, (int) aClob.length());
    break;
}
case Types.BLOB :
{
    java.sql.Blob aBlob = rs.getBlob(columnId);
    result = aBlob.getBytes(0L, (int) aBlob.length());
    break;
}
According to the Javadoc for java.sql.Blob, the first parameter of getBytes
or getSubString is the position, and the first byte/character is at position
1, not position 0.
Thanks,
Bradley
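For reference, the 1-based positions Bradley cites can be checked against the JDK's own Clob/Blob implementations in javax.sql.rowset.serial. This is a runnable sketch; the readClob/readBlob helper names are illustrative, not OJB code:

```java
import javax.sql.rowset.serial.SerialBlob;
import javax.sql.rowset.serial.SerialClob;

public class LobOffsets {
    // Positions in java.sql.Clob/Blob are 1-based, so a full read starts at 1.
    static String readClob(java.sql.Clob clob) throws Exception {
        return clob.getSubString(1L, (int) clob.length());
    }

    static byte[] readBlob(java.sql.Blob blob) throws Exception {
        return blob.getBytes(1L, (int) blob.length());
    }

    public static void main(String[] args) throws Exception {
        java.sql.Clob clob = new SerialClob("hello".toCharArray());
        java.sql.Blob blob = new SerialBlob(new byte[]{1, 2, 3});
        System.out.println(readClob(clob));        // prints "hello"
        System.out.println(readBlob(blob).length); // prints 3
        // Starting at position 0, as in the reported code, throws here.
    }
}
```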
|
|
From: Mahler T. <tho...@it...> - 2002-06-13 15:42:12
|
-----Original Message-----
From: Srividya Vullanki [mailto:p_s...@ya...]
Sent: Thursday, June 13, 2002 17:38
To: Mahler Thomas
Subject: Re: AW: AW: [Fwd: Thank you]

Hi Thomas,
The OJB tool worked great on Win2K using MySQL as the database. But when we tried to implement on a Tandem box using NonStop SQL as the database, we had a couple of problems. We are not sure what the problem is, though. Probably you could help us out, so that we could successfully implement on Tandem.
Below is the attached error message. If you could explain to us what SQL statements are passed to JDBC, it could help us to track down whether it's a problem with the JDBC driver on Tandem.
Once again, thanks a bunch.
FYI, we added some print statements for debugging purposes.
Sri

/usr/tandem/objectpersistence>java TandemTester
Persisting an ATRDataContainer Object
[BOOT] INFO: OJB.properties: file:/usr/tandem/objectpersistence/OJB.properties
[DEFAULT] INFO: OJB Descriptor Repository: file:/usr/tandem/objectpersistence/repository.xml
[BOOT] INFO: loading XML took 1080 msecs
[DEFAULT] INFO: ...Finished parsing
Created an instance of ATREStaticData
Got the Static Data
[ojb.broker.singlevm.PersistenceBrokerImpl] DEBUG: store com.co.atre.engine.container.ATRDataContainer@90132bee
[ojb.broker.singlevm.PersistenceBrokerImpl] DEBUG: getObjectByIdentity ojb.broker.util.sequence.HighLowSequence{com.co.atre.engine.container.ATRDataContainer,m_intPnrID}
[ojb.broker.accesslayer.SqlGenerator] DEBUG: SQL: SELECT A0.CLASSNAME,A0.FIELDNAME,A0.MAX_KEY,A0.GRAB_SIZE FROM =OJB_HL_SEQ A0 WHERE (A0.CLASSNAME = ?) AND (A0.FIELDNAME = ?)
[ojb.broker.accesslayer.StatementsForClass] DEBUG: prepareStatement: SELECT A0.CLASSNAME,A0.FIELDNAME,A0.MAX_KEY,A0.GRAB_SIZE FROM =OJB_HL_SEQ A0 WHERE (A0.CLASSNAME = ?) AND (A0.FIELDNAME = ?)
[ojb.broker.accesslayer.JdbcAccess] DEBUG: before bind select....
Number of primary Keys...2
First Value is com.co.atre.engine.container.ATRDataContainer
Value is not null.. and setting the object with values
Arg 1 01
arg 2 com.co.atre.engine.container.ATRDataContainer
arg 3 12
First Value is m_intPnrID
Value is not null.. and setting the object with values
Arg 1 11
arg 2 m_intPnrID
arg 3 12
[ojb.broker.accesslayer.JdbcAccess] DEBUG: After bind select....
[ojb.broker.accesslayer.JdbcAccess] DEBUG: After EXCUTING QUERYD***************
[ojb.broker.singlevm.PersistenceBrokerImpl] DEBUG: store ojb.broker.util.sequence.HighLowSequence@e9e32beb
[ojb.broker.accesslayer.JdbcAccess] DEBUG: before bind select....
Number of primary Keys...2
First Value is com.co.atre.engine.container.ATRDataContainer
Value is not null.. and setting the object with values
Arg 1 01
arg 2 com.co.atre.engine.container.ATRDataContainer
arg 3 12
[DEFAULT] ERROR: bindSelect failed for: ojb.broker.util.sequence.HighLowSequence{com.co.atre.engine.container.ATRDataContainer,m_intPnrID}, PK: 0, value: com.co.atre.engine.container.ATRDataContainer
[ojb.broker.accesslayer.JdbcAccess] ERROR: SQLException during the execution of materializeObject: SQLMP: Cannot set a value in a statement with an open cursor
SQLMP: Cannot set a value in a statement with an open cursor
java.sql.SQLException: SQLMP: Cannot set a value in a statement with an open cursor
    at com.tandem.sqlmp.SQLMPPreparedStatement.setObject(SQLMPPreparedStatement.java, Compiled Code)
    at com.tandem.sqlmp.SQLMPPreparedStatement.setObject(SQLMPPreparedStatement.java, Compiled Code)
    at ojb.broker.accesslayer.StatementManager.bindSelect(Unknown Source)
    at ojb.broker.accesslayer.JdbcAccess.materializeObject(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at ojb.broker.util.sequence.SequenceManagerHighLowImpl.getUniqueId(Unknown Source)
    at ojb.broker.util.sequence.SequenceManagerDefaultImpl.getUniqueLong(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueLong(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at ojb.broker.util.sequence.SequenceManagerHighLowImpl.getUniqueId(Unknown Source)
    at ojb.broker.util.sequence.SequenceManagerDefaultImpl.getUniqueLong(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueLong(Unknown Source)
    at ojb.broker.metadata.ClassDescriptor.getAutoIncrementValue(Unknown Source)
    at ojb.broker.metadata.ClassDescriptor.getKeyValues(Unknown Source)
    at ojb.broker.Identity.<init>(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at objectpersistence.ATRObjectPersistor.writeATREObject(Unknown Source)
    at TandemTester.main(TandemTester.java, Compiled Code)
[DEFAULT] ERROR: OJB ERROR: Dont know how to autoincrement field class com.co.atre.engine.container.ATRDataContainer.m_intPnrID
java.lang.RuntimeException: OJB ERROR: Dont know how to autoincrement field class com.co.atre.engine.container.ATRDataContainer.m_intPnrID
    at ojb.broker.metadata.ClassDescriptor.getAutoIncrementValue(Unknown Source)
    at ojb.broker.metadata.ClassDescriptor.getKeyValues(Unknown Source)
    at ojb.broker.Identity.<init>(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at objectpersistence.ATRObjectPersistor.writeATREObject(Unknown Source)
    at TandemTester.main(TandemTester.java, Compiled Code)
[DEFAULT] ERROR: null
ojb.broker.PersistenceBrokerException
    at ojb.broker.metadata.ClassDescriptor.getKeyValues(Unknown Source)
    at ojb.broker.Identity.<init>(Unknown Source)
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at objectpersistence.ATRObjectPersistor.writeATREObject(Unknown Source)
    at TandemTester.main(TandemTester.java, Compiled Code)
[ojb.broker.singlevm.PersistenceBrokerImpl] ERROR: Error in Transaction abort: SQLMP: Cannot rollback in auto commit mode
null
ojb.broker.metadata.ClassNotPersistenceCapableException
    at ojb.broker.singlevm.PersistenceBrokerImpl.store(Unknown Source)
    at objectpersistence.ATRObjectPersistor.writeATREObject(Unknown Source)
    at TandemTester.main(TandemTester.java, Compiled Code)
Writing into the database!!!!
|
|
From: Matthew B. <ma...@so...> - 2002-06-13 15:22:01
|
I'll write a test case for this tonight and fix if it is a bug. Thanks for reporting!

-----Original Message-----
From: Stephan Merker [mailto:Ste...@ce...]
Sent: Thursday, June 13, 2002 6:12 AM
To: obj...@li...
Subject: [OJB-developers] OQLQuery Bug?

Hello,
according to JavaDoc, the parameter list should be reset after OqlQuery.execute(), so that I can reuse the query with a different parameter set. However, if I call OqlQuery.bind() after the query was executed once, I get an org.odmg.QueryParameterCountInvalidException. In OqlQueryImpl.execute(), I can't see any code that would reset the parameter list. Did I miss something?
Stephan

_______________________________________________________________
Don't miss the 2002 Sprint PCS Application Developer's Conference
August 25-28 in Las Vegas - http://devcon.sprintpcs.com/adp/index.cfm?source=osdntextlink
_______________________________________________
Objectbridge-developers mailing list
Obj...@li...
https://lists.sourceforge.net/lists/listinfo/objectbridge-developers |
|
From: Stephan M. <Ste...@ce...> - 2002-06-13 13:09:47
|
Hello,
according to JavaDoc, the parameter list should be reset after OqlQuery.execute(), so that I can reuse the query with a different parameter set. However, if I call OqlQuery.bind() after the query was executed once, I get an org.odmg.QueryParameterCountInvalidException. In OqlQueryImpl.execute(), I can't see any code that would reset the parameter list. Did I miss something?
Stephan |
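The reset-on-execute contract Stephan describes can be shown with a minimal stand-in class (not OJB's OqlQueryImpl; names are illustrative): execute() snapshots the bound parameters and clears the list, so a later bind() starts a fresh set instead of overflowing the expected count.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the expected OQLQuery behavior, not OJB code.
class ResettableQuery {
    private final int expectedParams;
    private final List<Object> bound = new ArrayList<>();

    ResettableQuery(int expectedParams) {
        this.expectedParams = expectedParams;
    }

    void bind(Object param) {
        // Mirrors QueryParameterCountInvalidException when too many are bound.
        if (bound.size() >= expectedParams) {
            throw new IllegalStateException("parameter count invalid");
        }
        bound.add(param);
    }

    List<Object> execute() {
        List<Object> snapshot = new ArrayList<>(bound);
        bound.clear(); // the reset the JavaDoc promises: query is reusable
        return snapshot;
    }
}
```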
|
From: Chris G. <CGr...@de...> - 2002-06-13 12:28:31
|
+1 for ABC, ABCImpl.
C.

> hi,
>
> i have a question about naming interfaces and implementations:
>
> mostly the pattern interface: ABC implementation ABCImpl is used.
> but there are some classes that use another pattern ABCIF for the
> interface and ABC for the implementation (StatementManager implements
> StatementManagerIF).
>
> configuration classes have their own impl package although they follow ABC,
> ABCImpl.
>
> another pattern i haven't found in ojb could be IABC, ABC (like in
> eclipse).
>
> imho we should all use the same naming pattern here. because of the
> number of classes (and history too) i propose to use ABC, ABCImpl.
>
> jakob |
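In code, the proposed convention is simply this (hypothetical ReportWriter names, for illustration only):

```java
// Interface named ABC, implementation named ABCImpl - the pattern proposed
// for OJB-wide use (so e.g. StatementManagerIF/StatementManager would become
// StatementManager/StatementManagerImpl).
interface ReportWriter {
    String write(String content);
}

class ReportWriterImpl implements ReportWriter {
    public String write(String content) {
        return "report: " + content;
    }
}
```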
|
From: Charles A. <cha...@hp...> - 2002-06-13 09:46:07
|
Hi,

> If you enable proxies on reference- and collection-descriptors you can setup
> a lazy load scheme for large object nets.

Doh! Silly me - I'd forgotten about them. I've never investigated OJB proxies for reasons listed at the end...

[...]

> IMHO the proxy mechanism for references
> (http://objectbridge.sourceforge.net/tutorial3.html#proxies-reference) and
> collection attributes
> (http://objectbridge.sourceforge.net/tutorial3.html#proxies-collection) is
> quite elegant.

Indeed they are; well, for collections. For references, I'm not sure it's quite so good; that would mean creating an interface for every single class that is referenced as an attribute. I understand the reasons why, having recently dabbled in the world of Dynamic Proxies, but I can also see it being a big overhead in the modelling of those classes. Essentially, we would have to create an interface for every modelled class, since I would estimate that around 98% of them would be referenced in the '1' end of a 1-n link.

> given the proxy concept mentioned above, do you still think, those methods
> are required on the PersistenceBroker interface.

I'm not sure; does anyone else share my concerns? Or am I barking up the wrong tree?

Cheers,
Charles.

This email and any attachments are strictly confidential and are intended solely for the addressee. If you are not the intended recipient you must not disclose, forward, copy or take any action in reliance on this message or its attachments. If you have received this email in error please notify the sender as soon as possible and delete it from your computer systems. Any views or opinions presented are solely those of the author and do not necessarily reflect those of HPD Software Limited or its affiliates. At present the integrity of email across the internet cannot be guaranteed and messages sent via this medium are potentially at risk. All liability is excluded to the extent permitted by law for any claims arising as a result of the use of this medium to transmit information by or to HPD Software Limited or its affiliates. |
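Charles's point about needing an interface per referenced class follows directly from the JDK: java.lang.reflect.Proxy can only implement interfaces, so a lazily loaded reference must be typed by one. A sketch with a hypothetical Article interface (illustrative names, not OJB's proxy code):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

// Hypothetical domain interface; a concrete ArticleImpl would be the mapped class.
interface Article {
    String getName();
}

class LazyReference {
    // Returns a proxy that defers the (possibly expensive) load until first use.
    static Article lazyArticle(Supplier<Article> loader) {
        InvocationHandler handler = new InvocationHandler() {
            private Article real; // materialized on first method call

            public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                if (real == null) {
                    real = loader.get(); // in OJB this would be a DB fetch
                }
                return m.invoke(real, args);
            }
        };
        return (Article) Proxy.newProxyInstance(
                Article.class.getClassLoader(), new Class<?>[]{Article.class}, handler);
    }
}
```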
|
From: Mahler T. <tho...@it...> - 2002-06-13 09:17:56
|
Hi Charles,

> Hi all,
>
> Is this the only approach to loading references ?
>
> In our (large-and-soon-to-be-huge) object model, virtually everything is
> referenced by something else. By loading one object, if auto.retrieve is
> turned on for all classes, most of the database would be loaded !

If you enable proxies on reference- and collection-descriptors you can set up a lazy load scheme for large object nets.

> Essentially, we have not created any
> ReferenceDescriptors/CollectionDescriptors in our repository, purely because
> I cannot see a way to use them without dynamically changing the repository
> as Matt describes (as well as the bug I reported in April 'ODMG Cascading
> Updates : possible/probable bug'). Even without the ramifications discussed,
> the approach seems a bit clunky.

IMHO the proxy mechanism for references (http://objectbridge.sourceforge.net/tutorial3.html#proxies-reference) and collection attributes (http://objectbridge.sourceforge.net/tutorial3.html#proxies-collection) is quite elegant.

> A suggestion : how about a loadReferences(Object, nameOfReferenceField) on
> the PersistenceBroker. I'm only concerned with loading references at the
> moment, but deleting/updating references could be similarly approached.

Given the proxy concept mentioned above, do you still think those methods are required on the PersistenceBroker interface? Of course such methods can be added, but we have to be careful not to expose too much in the PB API.

ciao,
Thomas |
|
From: Charles A. <cha...@hp...> - 2002-06-13 08:51:59
|
Hi all,
Is this the only approach to loading references ?
In our (large-and-soon-to-be-huge) object model, virtually everything is
referenced by something else. By loading one object, if auto.retrieve is
turned on for all classes, most of the database would be loaded !
Essentially, we have not created any
ReferenceDescriptors/CollectionDescriptors in our repository, purely because
I cannot see a way to use them without dynamically changing the repository
as Matt describes (as well as the bug I reported in April 'ODMG Cascading
Updates : possible/probable bug'). Even without the ramifications discussed,
the approach seems a bit clunky.
A suggestion: how about a loadReferences(Object, nameOfReferenceField) on
the PersistenceBroker. I'm only concerned with loading references at the
moment, but deleting/updating references could be similarly approached.
Cheers,
Charles.
> -----Original Message-----
> From: Mahler Thomas [mailto:tho...@it...]
> Sent: 13 June 2002 08:44
> To: 'Matthew Baird'; ojb
> Subject: AW: [OJB-developers] Changing Metadata at runtime
>
>
> Hi Matthew,
>
>
> > If we change metadata at runtime, is the change global, or
> > limited to the
> > persistencebroker in which the change was made. Since the descriptor
> > repository is a singleton, I'm going to assume the changes
> > are global.
>
> Right, One Repository instance == global changes!
>
> > That
> > has some interesting ramifications.
>
> I know :-(
>
> >
> > If we change data at runtime, like this:
> >
> > if (r != null)
> > {
> > // temporarily disable cascading to avoid recursions
> > rds.setCascadeDelete(false);
> > delete(r);
> > rds.setCascadeDelete(true);
> > }
> >
> > and we are running in an environment where operations might
> > be parallel, we
> > could end up with some non-deterministic behaviour based on
> > transaction
> > timing.
> >
>
> Right !
>
> > Ie.
> > PB one starts to delete references on object of type foo
> > PB two starts to delete references on object of type foo
> > PB gets value of cascadedelete=true and sets it to
> > false and starts
> > delete
> > PB gets value of cascadedelete=false and doesn't delete
> >
> > Whoops!
> >
> > Is my analysis correct?
>
> Absolutely correct :-(
>
>
> > I the current recursion avoidance schema needs to be
> > rewritten to be correct
> > anyway.
> >
>
> agreed!
>
> 1. We need one Repository per PersistenceBroker! This is
> prepared in the
> PersistenceBrokerFactory already, where broker are constructed with a
> repository instance as parameter.
> But there are several places in the metadata stuff still rely on the
> singleton instance approach.
>
> 2. The current recursion avoiance scheme does not work correctly. Even
> without threading conflicts it is buggy.
> recursions during store() must not be avoided by manipulating
> the metadata,
> but by maintaining a list of objects already "touched" by the store
> operation. before calling store recursively on an object we
> have to check
> whether this object has already been stored during the
> current user call to
> store.
>
> This can be done similar to the recursion avoiding mechanism in
> ojb.odmg.TransactionImpl.lock(...)
>
> (This issue has been on my todo list for a long time, I'd
> like to see this
> fixed !)
>
> cheers,
> Thomas
>
>
>
> > Regards,
> > Matthew
> >
> > _______________________________________________________________
> >
> > Sponsored by:
> > ThinkGeek at http://www.ThinkGeek.com/
> > _______________________________________________
> > Objectbridge-developers mailing list
> > Obj...@li...
> > https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
> >
>
> _______________________________________________________________
>
> Don't miss the 2002 Sprint PCS Application Developer's Conference
> August 25-28 in Las Vegas -
http://devcon.sprintpcs.com/adp/index.cfm?source=osdntextlink
_______________________________________________
Objectbridge-developers mailing list
Obj...@li...
https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
|
|
From: Mahler T. <tho...@it...> - 2002-06-13 07:44:06
|
Hi Matthew,
> If we change metadata at runtime, is the change global, or
> limited to the
> persistencebroker in which the change was made. Since the descriptor
> repository is a singleton, I'm going to assume the changes
> are global.
Right, One Repository instance == global changes!
> That
> has some interesting ramifications.
I know :-(
>
> If we change data at runtime, like this:
>
> if (r != null)
> {
> // temporarily disable cascading to avoid recursions
> rds.setCascadeDelete(false);
> delete(r);
> rds.setCascadeDelete(true);
> }
>
> and we are running in an environment where operations might
> be parallel, we
> could end up with some non-deterministic behaviour based on
> transaction
> timing.
>
Right !
> Ie.
> PB one starts to delete references on object of type foo
> PB two starts to delete references on object of type foo
> PB gets value of cascadedelete=true and sets it to
> false and starts
> delete
> PB gets value of cascadedelete=false and doesn't delete
>
> Whoops!
>
> Is my analysis correct?
Absolutely correct :-(
> I the current recursion avoidance schema needs to be
> rewritten to be correct
> anyway.
>
agreed!
1. We need one Repository per PersistenceBroker! This is prepared in the
PersistenceBrokerFactory already, where brokers are constructed with a
repository instance as parameter.
But there are several places in the metadata code that still rely on the
singleton instance approach.
2. The current recursion avoidance scheme does not work correctly. Even
without threading conflicts it is buggy.
Recursions during store() must not be avoided by manipulating the metadata,
but by maintaining a list of objects already "touched" by the store
operation. Before calling store recursively on an object we have to check
whether this object has already been stored during the current user call to
store.
This can be done similar to the recursion avoiding mechanism in
ojb.odmg.TransactionImpl.lock(...)
(This issue has been on my todo list for a long time, I'd like to see this
fixed !)
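The touched-list idea above can be sketched like this (hypothetical Node/SafeStore names, not OJB code): an identity-based set recorded per store() call breaks cycles without ever toggling shared metadata, so concurrent brokers cannot race on it.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

// A persistent object with references that could form cycles.
class Node {
    final List<Node> refs = new ArrayList<>();
}

class SafeStore {
    int writes = 0; // stands in for the actual INSERT/UPDATE work
    private final Set<Node> touched =
            Collections.newSetFromMap(new IdentityHashMap<Node, Boolean>());

    void store(Node n) {
        if (!touched.add(n)) {
            return; // already stored during this call: recursion stops here
        }
        writes++;
        for (Node ref : n.refs) {
            store(ref); // cascade without mutating cascade-delete metadata
        }
    }
}
```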
cheers,
Thomas
> Regards,
> Matthew
>
> _______________________________________________________________
>
> Sponsored by:
> ThinkGeek at http://www.ThinkGeek.com/
> _______________________________________________
> Objectbridge-developers mailing list
> Obj...@li...
> https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
>
|
|
From: Mahler T. <tho...@it...> - 2002-06-13 07:21:37
|
Hi all,
OJB has made a lot of progress in the last months. We have tons of new
features and a lot of very good refactorings of the internal mechanisms.
So far so good!
We did not succeed in improving our regression testsuite accordingly. Of
course there are a lot of new TestCases. But there are still many things not
covered thoroughly.
Matthew prepared a list of missing testcases, enumerating some of the most
urgent tests:
1. One-To-One test (for both ODMG and PB)
Create/Read/Update/Delete
Create/Read/Update/Delete with Proxies
Create/Read/Update/Delete with class that belongs to set of classes
mapped to same table.
2. One-To-Many test (for both ODMG and PB)
Create/Read/Update/Delete
Create/Read/Update/Delete with Proxies
Create/Read/Update/Delete with class that belongs to set of classes
mapped to same table.
3. Many-To-Many test (for both ODMG and PB) (both types of composition)
Create/Read/Update/Delete
Create/Read/Update/Delete with Proxies
Create/Read/Update/Delete with class that belongs to set of classes
mapped to same table.
4. ODMGCollections test
Create/Read/Update/Delete
Create/Read/Update/Delete with proxies
PerformanceTest
Empty Collections Test
Create/Read/Update/Delete with class that belongs to set of classes
mapped to same table.
5. Conversions Tests
6. LockMap performance tests
7. OQL ORDER BY tests
8. Concurrent Update Test
9. Deadlock test
10. Key generation tests
11. Duplicate key insertion test
12. multi-pk's test
13. callbacks test
14. Race Condition tests
15. Lock Modes test
16. CLOB/BLOB/LOB tests.
17. Delete by Query Tests
18. Query By Example tests (with multiple classes mapped to same table)
Each of us could easily add at least 5 additional urgent testcases to this
list!
So I'm requesting all to join the OJB quality improvement initiative and
contribute TestCases.
Feel free to implement some of those 18 tests mentioned above or to
contribute your own TestCases.
As I mentioned in the http://objectbridge.sourceforge.net/team.html document,
it's best practice to add new TestCases with all new features or refactorings!
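For contributors new to the suite, the Create/Read/Update/Delete round trip in items 1-4 has this basic shape (sketched here against a stand-in in-memory map so it is self-contained; a real TestCase would run against a PersistenceBroker or ODMG transaction and mapped classes):

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a broker-backed store; illustrates the round trip only.
class CrudRoundTrip {
    static final Map<Integer, String> store = new HashMap<>();

    static void run() {
        store.put(1, "article");                // create
        check("article".equals(store.get(1)));  // read
        store.put(1, "renamed");                // update
        check("renamed".equals(store.get(1)));
        store.remove(1);                        // delete
        check(store.get(1) == null);
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("round trip failed");
    }
}
```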
Thanks for helping to improve code quality,
cheers,
Thomas
|
|
From: Jason v. Z. <jv...@ze...> - 2002-06-13 04:24:19
|
Hi,

Ok, the tentative plan is to move the code on Saturday. I will:

1) Update the license references
2) Move the code over
3) Setup the Maven generated website
4) Turn on CVS access for those who have submitted their agreements

So plan on leaving the repository alone all of Saturday. But I will let you know as soon as I'm done so that you can get back to coding :-)

-- jvz.

Jason van Zyl
jv...@ap...
http://tambora.zenplex.org |
|
From: Matthew B. <ma...@so...> - 2002-06-13 03:59:30
|
If we change metadata at runtime, is the change global, or limited to the
persistencebroker in which the change was made. Since the descriptor
repository is a singleton, I'm going to assume the changes are global. That
has some interesting ramifications.
If we change data at runtime, like this:
if (r != null)
{
// temporarily disable cascading to avoid recursions
rds.setCascadeDelete(false);
delete(r);
rds.setCascadeDelete(true);
}
and we are running in an environment where operations might be parallel, we
could end up with some non-deterministic behaviour based on transaction
timing.
Ie.
PB one starts to delete references on object of type foo
PB two starts to delete references on object of type foo
PB gets value of cascadedelete=true and sets it to false and starts
delete
PB gets value of cascadedelete=false and doesn't delete
Whoops!
Is my analysis correct?
I think the current recursion avoidance scheme needs to be rewritten to be
correct anyway.
Regards,
Matthew
|
|
From: Thomas M. <tho...@ho...> - 2002-06-12 20:49:33
|
I just checked in a patch to avoid materialization due to logging.
Of course testcases are still most welcome...
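For reference, the usual shape of such a fix (a sketch of the guard pattern with stand-in classes, not the actual commit): build the log message only when DEBUG is enabled, so obj.toString(), and with it proxy materialization, is skipped otherwise.

```java
// Stand-ins for the logger and a stored object; not OJB's logging API.
class DemoLogger {
    boolean debugEnabled = false;

    boolean isDebugEnabled() {
        return debugEnabled;
    }

    void debug(String msg) {
        if (debugEnabled) System.out.println(msg);
    }
}

class Stored {
    int toStringCalls = 0; // counts materialization-triggering calls

    @Override
    public String toString() {
        toStringCalls++;
        return "Stored";
    }
}

class GuardedBroker {
    final DemoLogger logger = new DemoLogger();

    void store(Stored obj) {
        // Before the patch: logger.debug("store " + obj);
        // the concatenation calls obj.toString() even with DEBUG off.
        if (logger.isDebugEnabled()) {
            logger.debug("store " + obj);
        }
    }
}
```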
Matthew Baird wrote:
> Great find!
>
> Post the test case to the list and I'll see what I can do, and put the test
> case in the suite to make sure we don't regress. There are not near enough
> proxy tests in there yet.
>
> Thanks again for the contribution.
>
> regards,
> Matthew
>
> -----Original Message-----
> From: Morgan Troke
> To: obj...@li...
> Sent: 6/12/02 11:03 AM
> Subject: [OJB-developers] Clarification of Persistence Broker Storage
>
> Hi,
>
> I have a few questions about the way in which the Persistence Broker
> stores objects that I was hoping could be cleared up.
>
> First, when storing an object which contains a collection (a 1:N
> mapping), it appears that even when proxies are used for the elements in
> the
> collection, when the object is stored each of the elements in the
> collection
> are materialized from the database. For example, within
> test.ojb.broker.PersistenceBrokerTest I have added the following test
> which
> retrieves a test.ojb.broker.ProductGroup and stores it again (all XML
> and
> .properties files have been left unmodified from the 0.9 release):
>
> public void testRetrieval()
> {
>     try
>     {
>         ProductGroup pg = new ProductGroup();
>         pg.setId(8);
>         Identity pgOID = new Identity(pg);
>
>         pg = (ProductGroup) broker.getObjectByIdentity(pgOID);
>         broker.store(pg);
>     }
>     catch (Throwable t)
>     {
>         fail(t.getMessage());
>     }
> }
>
> When analyzing the SQL statements which are generated, it appears that
> all
> of the elements in the ProductGroup's collection are materialized when
> the
> broker.store(pg) line is executed. I traced through the code and tracked
> it
> to the calls to the logger, such as the one in store(Object obj) within
> ojb.broker.singlevm.PersistenceBrokerImpl.java:
>
> logger.debug("store " + obj);
>
> which calls the toString() method on the object being stored, in this
> case
> the ProductGroup, which returns the String values of all its fields. But
> since allArticlesInGroup is a Vector, this in turn calls toString() on
> each
> of the proxy elements in the collection, which then materializes all of
> the
> elements. This means that a potentially large number of calls to the DB
> are
> being made when objects with a collection are being stored. Is this
> behaviour intended, and if so is there anyway to modify it (perhaps in
> the
> XML files?) such that the calls to the DB are minimized?
>
>
> Second, when storing objects (again with PersistenceBroker), it
> appears that two calls to the DB are made - one to retrieve the object,
> and one to update it - regardless of whether or not the object has been
> modified. This has particular implications again with objects that
> contain collections. Using ProductGroup again as an example, when the
> elements of the collection have already been materialized (e.g. by
> iterating over and calling a method on each of them), and the
> ProductGroup is stored, each of the elements in the collection is
> materialized (even though they already have been), and then updated.
> This results in a large number of calls to the DB being made when the
> ProductGroup is stored, even if nothing has been changed.
>
>
> In both of the above cases, have I used PersistenceBroker to
> retrieve and store objects correctly, as well as interpreted the calls
> made to the DB correctly? If so, is this behaviour intended and is there
> any way to change it to minimize the number of calls made?
>
> Thanks in advance for any help,
>
> Morgan
>
>
>
>
>
>
> _______________________________________________________________
>
> Sponsored by:
> ThinkGeek at http://www.ThinkGeek.com/
> _______________________________________________
> Objectbridge-developers mailing list
> Obj...@li...
> https://lists.sourceforge.net/lists/listinfo/objectbridge-developers
>
>
>
>
|
|
From: Thomas M. <tho...@ho...> - 2002-06-12 19:28:29
|
Hi Morgan,
Morgan Troke wrote:
> Hi,
>
> I have a few questions about the way in which the Persistence Broker
> stores objects that I was hoping could be cleared up.
>
> First, when storing an object which contains a collection (a 1:N
> mapping), it appears that even when proxies are used for the elements in the
> collection, when the object is stored each of the elements in the collection
> are materialized from the database. For example, within
> test.ojb.broker.PersistenceBrokerTest I have added the following test which
> retrieves a test.ojb.broker.ProductGroup and stores it again (all XML and
> .properties files have been left unmodified from the 0.9 release):
>
> public void testRetrieval()
> {
> try
> {
> ProductGroup pg = new ProductGroup();
> pg.setId(8);
> Identity pgOID = new Identity(pg);
>
> pg = (ProductGroup) broker.getObjectByIdentity(pgOID);
> broker.store(pg);
>     }
>     catch (Throwable t)
> {
> fail(t.getMessage());
> }
> }
>
> When analyzing the SQL statements which are generated, it appears that all
> of the elements in the ProductGroup's collection are materialized when the
> broker.store(pg) line is executed. I traced through the code and tracked it
> to the calls to the logger, such as the one in store(Object obj) within
> ojb.broker.singlevm.PersistenceBrokerImpl.java:
>
> logger.debug("store " + obj);
>
> which calls the toString() method on the object being stored, in this case
> the ProductGroup, which returns the String values of all its fields. But
> since allArticlesInGroup is a Vector, this in turn calls toString() on each
> of the proxy elements in the collection, which then materializes all of the
> elements. This means that a potentially large number of calls to the DB are
> being made when objects with a collection are being stored. Is this
> behaviour intended, and if so is there any way to modify it (perhaps in the
> XML files?) such that the calls to the DB are minimized?
This is definitely not wanted. I think it can well be regarded as a bug!
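Morgan's diagnosis can be reproduced without OJB at all. The sketch below (plain JDK dynamic proxies, not OJB's proxy classes) shows that building the string "store " + obj forces toString() on every element of a Vector, which is exactly the kind of call a lazy proxy intercepts; a real OJB collection proxy would hit the database at that point.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.Vector;

// Self-contained stand-in for a lazy element proxy: the handler only
// "materializes" (here: counts) when some method is actually invoked.
public class ProxyToStringDemo {
    static int materializations = 0;

    interface Article { String getName(); }

    static Article lazyArticle(final String name) {
        return (Article) Proxy.newProxyInstance(
            Article.class.getClassLoader(),
            new Class[] { Article.class },
            new InvocationHandler() {
                public Object invoke(Object p, Method m, Object[] args) {
                    materializations++;  // a real proxy would hit the DB here
                    return "Article(" + name + ")";
                }
            });
    }

    public static void main(String[] args) {
        List<Article> group = new Vector<Article>();
        for (int i = 0; i < 3; i++) group.add(lazyArticle("a" + i));

        // Nothing has been touched yet:
        System.out.println(materializations);  // 0

        // The equivalent of logger.debug("store " + obj): Vector.toString()
        // calls toString() on every element, triggering each proxy.
        String msg = "store " + group;
        System.out.println(materializations);  // 3
    }
}
```

Note that wrapping the call in `if (logger.isDebugEnabled())` only avoids the cost while debug logging is off; when it is on, logging the object's primary key (its Identity) instead of the whole object would avoid the materialization entirely.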
>
>
> Second, when storing objects (again with PersistenceBroker), it appears
> that two calls to the DB are made - one to retrieve the object, and one to
> update it - regardless of whether or not the object has been modified.
If you just call PB.store(instance), OJB must first determine whether an
INSERT or an UPDATE statement is required. That's why OJB performs a
primary-key lookup to see if the object already exists.
You might consider using another signature of store:
store(Object, ObjectModification). The ObjectModification object tells
the broker whether an INSERT or an UPDATE is needed, so no additional
lookup is performed.
The OJB PB performs an update even if an object was not modified! The PB
does not track object state during transactions. It is not an Object
Transaction Manager. If you tell it to perform an update, it simply
performs the update.
The PB is a stupid persistence kernel!
If you want additional "intelligence" you must use one of the
higher-level APIs, ODMG or JDO. Those two provide full state tracking
of registered objects.
(Or you can implement your own object transaction manager.)
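The cost difference between the two store() signatures can be modeled without OJB. In the toy sketch below, the method names and the boolean hint are stand-ins for OJB's store(Object, ObjectModification), not the real API; it only counts the extra primary-key lookup that the one-argument store must perform.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the two store() signatures; not real OJB classes.
public class StoreHintDemo {
    static int pkLookups = 0;
    static Set<Integer> table = new HashSet<Integer>();

    // store(obj): must SELECT by primary key to decide INSERT vs UPDATE
    static void store(int id) {
        pkLookups++;
        if (!table.contains(id)) table.add(id);  // else: UPDATE
    }

    // store(obj, modification): the caller supplies the INSERT/UPDATE
    // decision, so no existence lookup is needed
    static void store(int id, boolean needsInsert) {
        if (needsInsert) table.add(id);  // else: UPDATE
    }

    public static void main(String[] args) {
        store(1);             // costs one primary-key lookup
        store(2, true);       // costs none
        System.out.println(pkLookups);  // prints 1
    }
}
```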
> This
> has particular implications again with objects that contain collections.
> Using ProductGroup again as an example, when the elements of the collection
> have already been materialized (eg. by iterating over and calling a method
> on each of them), and the ProductGroup is stored, each of the elements in
> the collection is materialized (even though they already have been), and
> then updated. This results in a large number of calls to the DB being made
> when the ProductGroup is stored, even if nothing has been changed.
>
This only occurs if you set auto-update="true"! If you were using
the ODMG implementation, only modified objects would be stored!
>
> In both of the above cases, have I used PersistenceBroker to retrieve
> and store objects correctly, as well as interpreted the calls made to the DB
> correctly?
I think so.
> If so, is this behaviour intended and is there any way to change
> it to minimize the number of calls made?
>
As I mentioned before, this behaviour is intended. The broker is our
inner kernel. It does not know anything about your objects' state.
This may sound strange, but it is this microkernel approach that makes
OJB so flexible.
You can use several tricks to reduce overhead:
- use ObjectModification objects (have a look at
test.ojb.broker.PerformanceTest to see how this is done efficiently)
- set auto-update to false or change it at runtime if needed
- use the ODMG implementation that provides mature object-level
transaction management
- implement your own transaction manager
cheers,
Thomas
> Thanks in advance for any help,
>
> Morgan
>
>
>
>
>
>
>
>
>
|
|
From: Matthew B. <ma...@so...> - 2002-06-12 18:09:37
|
Great find!
Post the test case to the list and I'll see what I can do, and put the test
case in the suite to make sure we don't regress. There are nowhere near
enough proxy tests in there yet.
Thanks again for the contribution.
regards,
Matthew
-----Original Message-----
From: Morgan Troke
To: obj...@li...
Sent: 6/12/02 11:03 AM
Subject: [OJB-developers] Clarification of Persistence Broker Storage
Hi,
I have a few questions about the way in which the Persistence Broker
stores objects that I was hoping could be cleared up.
First, when storing an object which contains a collection (a 1:N
mapping), it appears that even when proxies are used for the elements in
the collection, when the object is stored each of the elements in the
collection is materialized from the database. For example, within
test.ojb.broker.PersistenceBrokerTest I have added the following test
which retrieves a test.ojb.broker.ProductGroup and stores it again (all
XML and .properties files have been left unmodified from the 0.9 release):
public void testRetrieval()
{
try
{
ProductGroup pg = new ProductGroup();
pg.setId(8);
Identity pgOID = new Identity(pg);
pg = (ProductGroup) broker.getObjectByIdentity(pgOID);
broker.store(pg);
}
catch (Throwable t)
{
fail(t.getMessage());
}
}
When analyzing the SQL statements which are generated, it appears that
all of the elements in the ProductGroup's collection are materialized
when the broker.store(pg) line is executed. I traced through the code
and tracked it to the calls to the logger, such as the one in
store(Object obj) within ojb.broker.singlevm.PersistenceBrokerImpl.java:
logger.debug("store " + obj);
which calls the toString() method on the object being stored, in this
case the ProductGroup, which returns the String values of all its
fields. But since allArticlesInGroup is a Vector, this in turn calls
toString() on each of the proxy elements in the collection, which then
materializes all of the elements. This means that a potentially large
number of calls to the DB are being made when objects with a collection
are being stored. Is this behaviour intended, and if so is there any way
to modify it (perhaps in the XML files?) such that the calls to the DB
are minimized?
Second, when storing objects (again with PersistenceBroker), it
appears that two calls to the DB are made - one to retrieve the object,
and one to update it - regardless of whether or not the object has been
modified. This has particular implications again with objects that
contain collections. Using ProductGroup again as an example, when the
elements of the collection have already been materialized (e.g. by
iterating over and calling a method on each of them), and the
ProductGroup is stored, each of the elements in the collection is
materialized (even though they already have been), and then updated.
This results in a large number of calls to the DB being made when the
ProductGroup is stored, even if nothing has been changed.
In both of the above cases, have I used PersistenceBroker to
retrieve and store objects correctly, as well as interpreted the calls
made to the DB correctly? If so, is this behaviour intended and is there
any way to change it to minimize the number of calls made?
Thanks in advance for any help,
Morgan
|
|
From: Morgan T. <mor...@ne...> - 2002-06-12 18:03:44
|
Hi,
I have a few questions about the way in which the Persistence Broker
stores objects that I was hoping could be cleared up.
First, when storing an object which contains a collection (a 1:N
mapping), it appears that even when proxies are used for the elements in the
collection, when the object is stored each of the elements in the collection
are materialized from the database. For example, within
test.ojb.broker.PersistenceBrokerTest I have added the following test which
retrieves a test.ojb.broker.ProductGroup and stores it again (all XML and
.properties files have been left unmodified from the 0.9 release):
public void testRetrieval()
{
try
{
ProductGroup pg = new ProductGroup();
pg.setId(8);
Identity pgOID = new Identity(pg);
pg = (ProductGroup) broker.getObjectByIdentity(pgOID);
broker.store(pg);
}
catch (Throwable t)
{
fail(t.getMessage());
}
}
When analyzing the SQL statements which are generated, it appears that all
of the elements in the ProductGroup's collection are materialized when the
broker.store(pg) line is executed. I traced through the code and tracked it
to the calls to the logger, such as the one in store(Object obj) within
ojb.broker.singlevm.PersistenceBrokerImpl.java:
logger.debug("store " + obj);
which calls the toString() method on the object being stored, in this case
the ProductGroup, which returns the String values of all its fields. But
since allArticlesInGroup is a Vector, this in turn calls toString() on each
of the proxy elements in the collection, which then materializes all of the
elements. This means that a potentially large number of calls to the DB are
being made when objects with a collection are being stored. Is this
behaviour intended, and if so is there any way to modify it (perhaps in the
XML files?) such that the calls to the DB are minimized?
Second, when storing objects (again with PersistenceBroker), it appears
that two calls to the DB are made - one to retrieve the object, and one to
update it - regardless of whether or not the object has been modified. This
has particular implications again with objects that contain collections.
Using ProductGroup again as an example, when the elements of the collection
have already been materialized (eg. by iterating over and calling a method
on each of them), and the ProductGroup is stored, each of the elements in
the collection is materialized (even though they already have been), and
then updated. This results in a large number of calls to the DB being made
when the ProductGroup is stored, even if nothing has been changed.
In both of the above cases, have I used PersistenceBroker to retrieve
and store objects correctly, as well as interpreted the calls made to the DB
correctly? If so, is this behaviour intended and is there any way to change
it to minimize the number of calls made?
Thanks in advance for any help,
Morgan
|
|
From: Matthew B. <ma...@so...> - 2002-06-12 16:35:31
|
-----Original Message-----
From: Brian Devries [mailto:bde...@in...]
Sent: Wednesday, June 12, 2002 9:16 AM
To: ObjectBridge Developers
Subject: [OJB-developers] Problems with multiple connections in J2EE environment

Hi all,
I'm having problems when trying to use the JTA aware implementation of the ODMG API, and I think the same would happen with the Broker API. My guess is that the problem is because a new database connection is used for every SQL statement.
The problem is that I'm creating a new object, A, which has a reference descriptor to the new object, B. All things happen as expected: B is created first and inserted into the database, then the primary key of B is set in the reference field of A, and then A is inserted. However, the database schema that I am working with has a referential constraint on that reference field to ensure that it has a matching record in table B.

> Please send me a test case.

I *think* what is happening is that because the insert of B and the insert of A happen on different connections, B is not visible to the database transaction where the A insert occurs, and I get an SQL exception complaining that the key I'm inserting in A doesn't exist in table B.

> Default transaction isolation is read_uncommitted, so this shouldn't be the reason.

Out of curiosity, why does the J2EE implementation not reuse statements and connections the same way as the non-J2EE implementation?

> I wrote about this a while ago, so check the archives. Essentially it's wrong to hold on to a managed connection in a J2EE environment.

On a side note, I also noticed that when a transaction is rolled back through JTA (as opposed to calling abort on the ODMG transaction), the synchronization on the J2EETransactionImpl clears the broker cache, but doesn't call abort on the ODMG transaction, so no locks are released.

> Good find, I'll write a test case for this and fix it.
|
|
From: Brian D. <bde...@in...> - 2002-06-12 16:27:46
|
Hi all,
I'm having problems when trying to use the JTA aware implementation of the ODMG API, and I think the same would happen with the Broker API. My guess is that the problem is because a new database connection is used for every SQL statement.

The problem is that I'm creating a new object, A, which has a reference descriptor to the new object, B. All things happen as expected: B is created first and inserted into the database, then the primary key of B is set in the reference field of A, and then A is inserted. However, the database schema that I am working with has a referential constraint on that reference field to ensure that it has a matching record in table B.

I *think* what is happening is that because the insert of B and the insert of A happen on different connections, B is not visible to the database transaction where the A insert occurs, and I get an SQL exception complaining that the key I'm inserting in A doesn't exist in table B.

Out of curiosity, why does the J2EE implementation not reuse statements and connections the same way as the non-J2EE implementation?

On a side note, I also noticed that when a transaction is rolled back through JTA (as opposed to calling abort on the ODMG transaction), the synchronization on the J2EETransactionImpl clears the broker cache, but doesn't call abort on the ODMG transaction, so no locks are released.

-Brian

--
Brian DeVries
Sr. Software Engineer
mailto:bde...@in...
http://www.intraware.com
Voice: 925.253.6516 Fax: 925.253.6785
--------------------------------------------------------
Intraware... The leading provider of Electronic Software Delivery and Management (ESDM) Solutions
|
|
From: David F. <dw...@la...> - 2002-06-12 15:17:24
|
I'm having a problem with a prepared statement using MySQL. It works fine
with HSQLDB.
The error message I'm getting is an SQLException:
No value specified for parameter 2
java.sql.SQLException: No value specified for parameter 2
at org.gjt.mm.mysql.PreparedStatement.executeQuery(Unknown Source)
at ojb.broker.accesslayer.JdbcAccess.materializeObject(Unknown Source)
at ojb.broker.singlevm.PersistenceBrokerImpl.getDBObject(Unknown
Source)
This is occurring in materializeObject. It appears to happen because of
an empty value in an element I'm querying. Apparently MySQL doesn't like
this. We want to use a "wild card" for this value, but it looks like we
need to be explicit about this.
thanks,
Dave
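As a general JDBC point (a sketch of the usual workaround, not OJB's actual code): every `?` placeholder must be bound before executeQuery(), and binding SQL NULL to `col = ?` never matches anyway, so a null criterion is normally rewritten to `IS NULL`. A hypothetical helper illustrating the rewrite:

```java
// Sketch: when a criterion value is null, emit "IS NULL" instead of
// binding a null to a "?" placeholder (some drivers, like the early
// MM.MySQL driver here, reject unset parameters outright).
public class NullCriteriaDemo {
    static String criterion(String column, Object value) {
        return value == null ? column + " IS NULL" : column + " = ?";
    }

    public static void main(String[] args) {
        System.out.println(criterion("name", "x"));   // name = ?
        System.out.println(criterion("name", null));  // name IS NULL
    }
}
```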
|
|
From: Georg S. <ge...@me...> - 2002-06-12 14:25:58
|
Hi,
I am trying to adapt the torque based stuff to work with postgres. Since I
don't have any experience with it, I started out copying the mysql.profile
file to postgres.profile and setting corresponding values to my settings.
When I then do
./build.sh junit
I get the following error:
[sql] Executing file:
/people/georg/work/ojb/cvs/ojb-1-0/target/src/sql/create-db.sql
[sql] Failed to execute: drop database if exists @DATABASE_DEFAULT@
[sql] java.sql.SQLException: ERROR: parser: parse error at or near
"exists"
[sql] Failed to execute: create database @DATABASE_DEFAULT@
[sql] java.sql.SQLException: ERROR: parser: parse error at or near
"@"
[sql] 0 of 2 SQL statements executed successfully
Obviously @DATABASE_DEFAULT@ should have been substituted with the name of
the database, but wasn't. What else do I have to change to make it work
for a different database?
Any help is appreciated
Cheers
Georg
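Two separate things seem to be going wrong here: the @DATABASE_DEFAULT@ token was never substituted (so the token filtering likely isn't picking up the new postgres.profile), and PostgreSQL of this vintage has no `DROP DATABASE IF EXISTS` clause at all, so even a substituted script would fail on the first statement. A Postgres-compatible create-db.sql would need plain statements roughly like this (the database name `ojb` is only a placeholder):

```sql
-- PostgreSQL (7.x) does not accept "DROP DATABASE IF EXISTS";
-- drop unconditionally (the error can be ignored if the database
-- does not exist yet) and recreate. "ojb" is an example name.
DROP DATABASE ojb;
CREATE DATABASE ojb;
```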
|
|
From: Mahler T. <tho...@it...> - 2002-06-12 13:16:41
|
Hi Jakob,

I used the Abc, AbcImpl scheme throughout my code. I'd prefer to maintain this style, as you propose.

-----Original Message-----
From: Jakob Braeuchi [mailto:jbr...@ho...]
Sent: Wednesday, June 12, 2002 08:52
To: Objectbridge Developers List (E-mail)
Subject: [OJB-developers] Coding conventions

> hi,
> i have a question about naming interfaces and implementations:
> mostly the pattern interface: ABC, implementation: ABCImpl is used. but
> there are some classes that use another pattern, ABCIF for the interface
> and ABC for the implementation (StatementManager implements
> StatementManagerIF).
> configurations have their own impl package although they follow ABC,
> ABCImpl.

Right, I did this for dependency reasons. The configuration package is completely abstract and has no dependencies on other packages. The Impl package has a lot of dependencies on all parts of OJB! Using this scheme will help to reuse the OJB config stuff in other frameworks. Maybe it could also be integrated into jakarta-commons...

cheers,

Thomas

> another pattern i haven't found in ojb could be IABC, ABC (like in
> eclipse).
> imho we should all use the same naming pattern here. because of the
> number of classes (and history too) i propose to use ABC, ABCImpl.
> jakob
|
|
From: Leandro R. S. C. <le...@ib...> - 2002-06-12 12:56:42
|
+1 for me.

On Wed, 2002-06-12 at 03:52, Jakob Braeuchi wrote:
> hi,
> i have a question about naming interfaces and implementations:
> mostly the pattern interface: ABC, implementation: ABCImpl is used.
> but there are some classes that use another pattern, ABCIF for the
> interface and ABC for the implementation (StatementManager implements
> StatementManagerIF).
> configurations have their own impl package although they follow ABC,
> ABCImpl.
> another pattern i haven't found in ojb could be IABC, ABC (like in
> eclipse).
> imho we should all use the same naming pattern here. because of the
> number of classes (and history too) i propose to use ABC, ABCImpl.
> jakob

--
Leandro Rodrigo Saad Cruz
IT - Inter Business Tecnologia e Servicos (IB)
http://www.ibnetwork.com.br
|
|
From: Jakob B. <jbr...@ho...> - 2002-06-12 06:52:10
|
hi,

i have a question about naming interfaces and implementations:

mostly the pattern interface: ABC, implementation: ABCImpl is used.
but there are some classes that use another pattern, ABCIF for the
interface and ABC for the implementation (StatementManager implements
StatementManagerIF).

configurations have their own impl package although they follow ABC,
ABCImpl.

another pattern i haven't found in ojb could be IABC, ABC (like in
eclipse).

imho we should all use the same naming pattern here. because of the
number of classes (and history too) i propose to use ABC, ABCImpl.

jakob
|
|
From: Thomas M. <tho...@ho...> - 2002-06-11 18:44:23
|
Hi Joachim,

Joa...@tp... wrote:
> [snip updating lists/collections]
>
>> tricky question.
>> If you were using an ODMG collection like DList instead of a normal
>> Vector or ArrayList, OJB would behave as expected and store the new
>> entry.
>> The ODMG collections inform the current ODMG transaction about adding
>> and removing of elements and do the locking for you.
>
> So I only have to create the array with "implementation.newDList()"
> instead of a "new ArrayList()", or do I have to change anything in the
> mapping?

The collection attribute must be typed as DList. So OJB will know that
your collection attribute must be filled with a DList on reloading from
the database.
If you don't want to change the collection attribute type to DList, you
can set the attribute collection-class="ojb.odmg.collections.DListImpl"
of the class-descriptor element describing the collection.

>> As far as I remember this is compliant with the ODMG spec. It is not
>> required to provide transparent persistence for any Collection class
>> but only for the ODMG collections. It's sometimes difficult to decide
>> how certain features should be implemented as the spec for the Java
>> language binding is not always precise (the JDO spec is much cleaner).
>>
>> Even commercial OODBMS ODMG Java bindings differ quite a lot!
>>
>> Of course it would be better to let our ODMG implementation handle
>> this automatically for all kinds of collections!
>>
>> This would require some kind of byte-code enhancement. Maybe we can
>> reuse parts of the JDO byte-code enhancer to implement this feature.
>
> I massively dislike any kind of byte-code enhancement, as it looks like
> evil, black magic to me and I can't imagine that it can be debugged in a
> sane manner.

+1

ciao,
Thomas
|
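The notification mechanism Thomas describes can be sketched without OJB: a collection subclass that tells the transaction about every add is, in spirit, what DListImpl does for the ODMG layer. The class below is a self-contained stand-in, not the real ojb.odmg.collections.DListImpl.

```java
import java.util.ArrayList;
import java.util.List;

// Model of why an ODMG collection (DList) can track adds while a plain
// ArrayList cannot: the collection itself notifies the transaction.
public class TrackingListDemo {
    static List<String> txLog = new ArrayList<String>();

    // Stand-in for a transaction-aware collection like DListImpl:
    static class TrackingList<E> extends ArrayList<E> {
        public boolean add(E e) {
            txLog.add("lock-for-write: " + e);  // a real DList would lock via the ODMG tx
            return super.add(e);
        }
    }

    public static void main(String[] args) {
        List<String> plain = new ArrayList<String>();
        plain.add("article-1");                 // invisible to the transaction

        List<String> tracked = new TrackingList<String>();
        tracked.add("article-2");               // transaction notified

        System.out.println(txLog);              // [lock-for-write: article-2]
    }
}
```

In OJB terms, this is why typing the attribute as DList (or configuring collection-class in the mapping) lets the ODMG transaction see adds and removes, while a plain ArrayList leaves them invisible.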