Thread: Re: [Modeling-users] ec.insert() timing
From: Yannick G. <yan...@sa...> - 2003-09-11 13:44:46
On September 11, 2003 07:08 am, you wrote:
> Maybe sys.stderr is buffered, which may explain why you do not see the
> "c\n" after insert, and why you do not notice any pause for
> saveChanges() (which is very strange). Do you experience the same thing
> with sys.stderr.flush() after write()?
Unlike stdout, stderr should not be buffered, but it's easy to try, and
most bugs lie in false assumptions:
for i in range(100):
    sys.stderr.write("%d\n" % i)
    newRec = self._dbClass()
    sys.stderr.write("a\n")
    sys.stderr.flush()
    for key, value in self._defaults.items():
        newRec.takeValueForKey(value, key)
    sys.stderr.write("b\n")
    sys.stderr.flush()
    self._ec.insert(newRec)
    sys.stderr.write("c\n")
    sys.stderr.flush()
    self._ec.saveChanges()
    sys.stderr.write("d\n")
    sys.stderr.flush()
Same result : \
> Last, you probably know this already, but just in case: if you insert()
> a bunch of objects in a batch, you'd probably want to saveChanges()
> after having inserted all the objects.
Yes, I just made the loop for benchmarking purposes. I usually insert
items one by one.
> Reassure me: when you say that <<The 100 record insert takes a few
> minutes>>, that's under profiling, isn't it?
Yes, the time I gave is under profiling:
>>> beg=time(); resp = insert(); end=time()
>>> print "%.2f s." % (end-beg)
230.03 s.
But the non-profiled time is still a few minutes:
>>> beg=time(); resp = insert(); end=time()
>>> print "%.2f s." % (end-beg)
212.44 s.
Without profiling, CPU usage stays under 20%; that's why I suspect a
lock synchronization problem. I can't recall when this started because
I usually insert one item at a time, but I'm pretty sure it has not
always been like this. I'll try downgrading my Modeling version and
loading an old model with fewer entities.
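(A quick sanity check, not part of the original message: compare wall-clock
time against CPU time around the same insert() call. If the wall clock grows
while CPU time barely moves, the process is blocked waiting on something
external rather than computing. This assumes Python 2.x, where time.clock()
reports CPU time on Unix.)

import time

wall0, cpu0 = time.time(), time.clock()
resp = insert()            # the same 100-record insert as in the session above
wall1, cpu1 = time.time(), time.clock()
# A large wall/CPU gap points at I/O, DNS or lock waits rather than computation.
print "wall: %.2f s, cpu: %.2f s" % (wall1 - wall0, cpu1 - cpu0)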
Stay tuned !
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
http://www.savoirfairelinux.com/
From: Yannick G. <yan...@sa...> - 2003-09-11 14:22:17
On September 11, 2003 09:55 am, you wrote:
> and report t_kvc, t_insert and t_save ?
Ok, I did:

pprint([t_kvc, t_insert, t_save])

and the result (with *NO* profiling going on) is (sorry for the
formatting):
[[0.0012559890747070312,
0.0015519857406616211,
0.0014079809188842773,
0.0012360811233520508,
0.0012440681457519531,
0.001253962516784668,
0.0012469291687011719,
0.001255035400390625,
0.0012500286102294922,
0.0014050006866455078,
0.0016050338745117188,
0.0012500286102294922,
0.0012619495391845703,
0.0012490749359130859,
0.0015799999237060547,
0.0012569427490234375,
0.0012700557708740234,
0.0012619495391845703,
0.0012520551681518555,
0.0014119148254394531,
0.0012480020523071289,
0.0012559890747070312,
0.0014290809631347656,
0.0012629032135009766,
0.0012450218200683594,
0.0012600421905517578,
0.0012469291687011719,
0.0012499094009399414,
0.0012609958648681641,
0.0012609958648681641,
0.0012499094009399414,
0.0014309883117675781,
0.0012490749359130859,
0.0012580156326293945,
0.001256108283996582,
0.0012530088424682617,
0.0015770196914672852,
0.0012530088424682617,
0.0012511014938354492,
0.0012699365615844727,
0.0012640953063964844,
0.0014120340347290039,
0.0012450218200683594,
0.0015490055084228516,
0.0012459754943847656,
0.0012530088424682617,
0.00156402587890625,
0.0014159679412841797,
0.0012350082397460938,
0.0012639760971069336,
0.0012480020523071289,
0.0015670061111450195,
0.0012700557708740234,
0.0012509822845458984,
0.0012459754943847656,
0.0014040470123291016,
0.0012429952621459961,
0.0012470483779907227,
0.0012669563293457031,
0.0012220144271850586,
0.0014249086380004883,
0.001273036003112793,
0.00156402587890625,
0.0012530088424682617,
0.001259922981262207,
0.0015740394592285156,
0.0012480020523071289,
0.0012640953063964844,
0.0012450218200683594,
0.0014100074768066406,
0.0015610456466674805,
0.0012609958648681641,
0.0012549161911010742,
0.0012720823287963867,
0.0012459754943847656,
0.0014209747314453125,
0.0012519359588623047,
0.0012470483779907227,
0.0012470483779907227,
0.0012509822845458984,
0.0015619993209838867,
0.0012420415878295898,
0.0012559890747070312,
0.0012569427490234375,
0.0012480020523071289,
0.001562952995300293,
0.0015299320220947266,
0.001256108283996582,
0.0012440681457519531,
0.0015490055084228516,
0.0021089315414428711,
0.0015860795974731445,
0.0012379884719848633,
0.0012689828872680664,
0.0015679597854614258,
0.0024440288543701172,
0.001255035400390625,
0.0012450218200683594,
0.0014050006866455078,
0.0015419721603393555],
[1.9966299533843994,
1.9969379901885986,
1.9991899728775024,
1.9998400211334229,
1.9998140335083008,
1.9989409446716309,
1.5233700275421143,
2.0757629871368408,
2.0056970119476318,
2.0016980171203613,
2.0033340454101562,
1.9996179342269897,
1.9986929893493652,
1.9990460872650146,
1.9989680051803589,
1.9983110427856445,
2.0008178949356079,
1.9969730377197266,
1.9981610774993896,
1.9982980489730835,
1.9970279932022095,
2.000012993812561,
1.9982099533081055,
1.9992450475692749,
1.9985290765762329,
2.0004779100418091,
1.9987590312957764,
2.0098719596862793,
1.9988129138946533,
1.9985009431838989,
1.9981800317764282,
1.9987269639968872,
1.9986039400100708,
1.9995909929275513,
1.9970769882202148,
1.9977390766143799,
1.998181939125061,
1.9989529848098755,
1.9993369579315186,
1.9985369443893433,
1.9971719980239868,
1.998412013053894,
1.9972670078277588,
2.0035970211029053,
1.9960908889770508,
1.9974290132522583,
1.9964050054550171,
1.9983960390090942,
1.9978470802307129,
1.9963589906692505,
1.9999489784240723,
2.030364990234375,
1.99782395362854,
2.0071769952774048,
1.9968540668487549,
2.0052520036697388,
1.9974780082702637,
2.0052690505981445,
1.997094988822937,
1.9970070123672485,
1.9967869520187378,
1.9971680641174316,
1.9959059953689575,
1.9986159801483154,
2.0014950037002563,
2.0054119825363159,
2.0044909715652466,
1.9959390163421631,
1.99754798412323,
2.0053850412368774,
1.9957480430603027,
1.9961370229721069,
1.9969979524612427,
1.9969429969787598,
1.9963600635528564,
1.9959028959274292,
1.9961450099945068,
1.9974430799484253,
1.995980978012085,
1.9960340261459351,
1.995959997177124,
2.0054160356521606,
1.9965320825576782,
1.9980199337005615,
1.9968359470367432,
1.9994961023330688,
1.9959698915481567,
1.9970650672912598,
2.0057209730148315,
2.0049780607223511,
2.003493070602417,
1.9966839551925659,
2.0044209957122803,
1.9966210126876831,
1.9961740970611572,
1.998405933380127,
1.9961420297622681,
2.0062381029129028,
2.0116540193557739,
2.0041409730911255],
[0.039458036422729492,
0.037539958953857422,
0.036861062049865723,
0.037307977676391602,
0.037559032440185547,
0.03775489330291748,
0.047219991683959961,
0.051325082778930664,
0.036828041076660156,
0.040776968002319336,
0.037160038948059082,
0.038051009178161621,
0.037733078002929688,
0.038002967834472656,
0.03803706169128418,
0.037999987602233887,
0.047974109649658203,
0.038440942764282227,
0.039523005485534668,
0.039058923721313477,
0.036335945129394531,
0.0384979248046875,
0.037342071533203125,
0.038197994232177734,
0.044772028923034668,
0.038097977638244629,
0.037073969841003418,
0.038112998008728027,
0.038234949111938477,
0.03847503662109375,
0.038445949554443359,
0.03744196891784668,
0.037266016006469727,
0.039924025535583496,
0.038985013961791992,
0.038390040397644043,
0.037743091583251953,
0.037485957145690918,
0.038331985473632812,
0.039402008056640625,
0.03902900218963623,
0.039088010787963867,
0.042624950408935547,
0.040650010108947754,
0.039441943168640137,
0.039898037910461426,
0.038405895233154297,
0.039148092269897461,
0.040221929550170898,
0.039301037788391113,
0.04309391975402832,
0.039062976837158203,
0.03976595401763916,
0.039913058280944824,
0.061332941055297852,
0.039245009422302246,
0.041972041130065918,
0.039283037185668945,
0.03953397274017334,
0.039782047271728516,
0.039796948432922363,
0.040288090705871582,
0.039728999137878418,
0.04366600513458252,
0.040779948234558105,
0.042225003242492676,
0.040544033050537109,
0.040629029273986816,
0.039927005767822266,
0.040782928466796875,
0.040778994560241699,
0.039677023887634277,
0.039734005928039551,
0.040410041809082031,
0.040774941444396973,
0.040843963623046875,
0.03924095630645752,
0.040840983390808105,
0.040881037712097168,
0.040665984153747559,
0.041599035263061523,
0.040225028991699219,
0.039021968841552734,
0.039937973022460938,
0.047130942344665527,
0.040129899978637695,
0.040014982223510742,
0.041290044784545898,
0.041398048400878906,
0.042516946792602539,
0.039626955986022949,
0.042227029800415039,
0.040439963340759277,
0.04065096378326416,
0.044528961181640625,
0.040642976760864258,
0.040132045745849609,
0.075582981109619141,
0.042561054229736328,
0.05450904369354248]]
> > Without profiling, CPU usage stays under 20%; that's why I suspect a
> > lock synchronization problem. I can't recall when this started because
> > I usually insert one item at a time, but I'm pretty sure it has not
> > always been like this. I'll try downgrading my Modeling version and
> > loading an old model with fewer entities.
Exact same timing with 0.9-pre-10
> As I said, I do not understand. insertObject() does not lock anything
> on its own except the NotificationCenter (addObserver() is called in
> recordObject() called by insertObjectWithGlobalID() called by
> insertObject()), but I can't see any reason why it would hang there.
>
> I'm not surprised that it takes such a long time *when profiling*, but
> having about the same time when *not* profiling is astonishing --never
> seen this before.
I have the .profile if you think you can get anything out of it, but
the profiler only reports CPU time, not wall-clock time, so this
problem is invisible to it...
--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
http://www.savoirfairelinux.com/
From: Sebastien B. <sbi...@us...> - 2003-09-11 15:27:43
> On September 11, 2003 09:55 am, you wrote:
> > and report t_kvc, t_insert and t_save ?
>
> Ok, I did:
>
> pprint([t_kvc, t_insert, t_save])
>
> and the result (with *NO* profiling going on) is (sorry for the
> formatting):
[...]
So we have:
>>> reduce(operator.__add__, t_kvc)
0.13458442687988281
>>> reduce(operator.__add__, t_insert)
199.5601282119751
>>> reduce(operator.__add__, t_save)
4.0703457593917847
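(A side note, not part of the original exchange: the same totals broken down
per call, using Python 2.3's builtin sum(). With the 100 samples above,
insert() accounts for roughly 2 s of each iteration, while the KVC calls take
about a millisecond and saveChanges() about 40 ms.)

# Per-phase totals and means; sum() needs Python 2.3+, otherwise keep the
# reduce(operator.__add__, ...) form used above.
for name, samples in [("kvc", t_kvc), ("insert", t_insert), ("save", t_save)]:
    print "%-6s total: %8.3f s   mean: %.4f s over %d calls" % (
        name, sum(samples), sum(samples) / len(samples), len(samples))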
I keep thinking about this but I can't see any reason why it
happens. BTW how many objects do you have in your EC?
(e.g. len(self.ec._uniquingTable._table))
> > > Without profiling, CPU usage stays under 20%; that's why I suspect a
> > > lock synchronization problem. I can't recall when this started because
> > > I usually insert one item at a time, but I'm pretty sure it has not
> > > always been like this. I'll try downgrading my Modeling version and
> > > loading an old model with fewer entities.
>
> Exact same timing with 0.9-pre-10
>
> I have the .profile if you think you can get anything out of it, but
> the profiler only reports CPU time, not wall-clock time, so this
> problem is invisible to it...
You're right, I'm not sure it will be useful in this situation. Could
you consider reducing this example to the strict minimum and sending me
the model + the test module, so that I can see here what happens?
(Possibly privately, if for any reason you don't want to expose the
model publicly.)
BTW, did you try it on another machine?
Did you override awakeFromInsertion() in your objects?
I'm just wild-guessing here, and I do not like that. You can send me
the profile.out generated by the profiler privately; OK, there's no
real time in it, but maybe I can get some hints out of it. I'll be
able to examine it in, say, two hours, and will report back then.
Till then,
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-09-11 16:10:56
On September 11, 2003 11:27 am, Sebastien Bigaret wrote:
> >>> reduce(operator.__add__, t_kvc)
> 0.13458442687988281
> >>> reduce(operator.__add__, t_insert)
> 199.5601282119751
> >>> reduce(operator.__add__, t_save)
> 4.0703457593917847
>
> I keep thinking about this but I can't see any reason why it
> happens. BTW how many objects do you have in your EC?
> (e.g. len(self.ec._uniquingTable._table))

2: I did all the benchmarks from scratch.

> BTW, did you try it on another machine?
> Did you override awakeFromInsertion() in your objects?

I did, it was a Mandrake just like mine, with the same result. I just
tried on a Red Hat box and the problem vanished. I can't see what is
wrong with Mandrake, but as soon as I find out I'll let you know.
Thanks for reminding me of the obvious first test I should have done!

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
http://www.savoirfairelinux.com/
From: Sebastien B. <sbi...@us...> - 2003-09-11 17:35:07
Yannick Gingras <yan...@sa...> wrote:
> > BTW, did you try it on another machine?
> > Did you override awakeFromInsertion() in your objects?
>
> I did, it was a Mandrake just like mine, with the same result. I just
> tried on a Red Hat box and the problem vanished. I can't see what is
> wrong with Mandrake, but as soon as I find out I'll let you know.
> Thanks for reminding me of the obvious first test I should have done!
No problem, at least you know where to search, hm, or maybe where not
to search :/ I'll be interested in knowing what happened there,
obviously.
Good luck in hunting the problem!
-- Sébastien.
From: Yannick G. <yan...@sa...> - 2003-09-11 20:37:05
On September 11, 2003 01:33 pm, Sebastien Bigaret wrote:
> No problem, at least you know where to search, hm, or maybe where not
> to search :/ I'll be interested in knowing what happened there,
> obviously.
>
> Good luck in hunting the problem!

I've been lucky! :D  Thanks to your advice, I quickly pinned the lag
down to a call to gethostbyaddr(). I fixed it by installing a caching
name server on my box. A further improvement can be achieved by putting
the hostname of the machine in /etc/hosts, if you don't expect it to
change between reboots. Maybe the FQDN should be cached by the framework
to prevent this weird slowdown from happening to other people. Mandrake
does install a local name server by default (tmdns), but its default
configuration doesn't do any caching.

--
Yannick Gingras
Byte Gardener, Savoir-faire Linux inc.
http://www.savoirfairelinux.com/
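(For anyone hitting the same symptom, a minimal way to confirm that reverse
DNS is the culprit is to time the lookup directly. This is an illustrative
sketch, not something from the original thread; it assumes Python 2.x and
uses only the standard socket and time modules.)

import socket, time

ip = socket.gethostbyname(socket.gethostname())   # this machine's address
t = time.time()
try:
    socket.gethostbyaddr(ip)                      # the reverse lookup that turned out to be the bottleneck
except socket.error:
    pass                                          # a failed lookup still shows how long it blocked
print "gethostbyaddr(%s) took %.2f s" % (ip, time.time() - t)

With a misconfigured or non-caching local resolver this call can block for a
second or two on every invocation, which matches the roughly 2 s measured per
insert() earlier in the thread; a caching name server or an /etc/hosts entry
for the machine's hostname brings it down to a negligible time.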
From: Sebastien B. <sbi...@us...> - 2003-09-11 13:56:02
Yannick Gingras <yan...@sa...> wrote:
> On September 11, 2003 07:08 am, you wrote:
> > Maybe sys.stderr is buffered, which may explain why you do not see the
> > "c\n" after insert, and why you do not notice any pause for
> > saveChanges() (which is very strange). Do you experience the same thing
> > with sys.stderr.flush() after write()?
>
> Unlike stdout, stderr should not be buffered, but it's easy to try, and
> most bugs lie in false assumptions:
[snip]
> Same result : \
I do not understand what is happening on your side here. Could you try
something like this:
t_kvc = []
t_insert = []
t_save = []
for i in range(100):
    sys.stderr.write("%d\n" % i)
    newRec = self._dbClass()
    t = time()
    for key, value in self._defaults.items():
        newRec.takeValueForKey(value, key)
    t_kvc.append(time() - t)
    t = time()
    self._ec.insert(newRec)
    t_insert.append(time() - t)
    t = time()
    self._ec.saveChanges()
    t_save.append(time() - t)
and report t_kvc, t_insert and t_save ?
> > Reassure me: when you say that <<The 100 record insert takes a few
> > minutes>>, that's under profiling, isn't it?
>
> Yes, the time I gave is under profiling:
>
> >>> beg=time(); resp = insert(); end=time()
> >>> print "%.2f s." % (end-beg)
> 230.03 s.
>
> But the non-profiled time is still a few minutes:
>
> >>> beg=time(); resp = insert(); end=time()
> >>> print "%.2f s." % (end-beg)
> 212.44 s.
>
> Without profiling, CPU usage stays under 20%; that's why I suspect a
> lock synchronization problem. I can't recall when this started because
> I usually insert one item at a time, but I'm pretty sure it has not
> always been like this. I'll try downgrading my Modeling version and
> loading an old model with fewer entities.
As I said, I do not understand. insertObject() does not lock anything
on its own except the NotificationCenter (addObserver() is called in
recordObject() called by insertObjectWithGlobalID() called by
insertObject()), but I can't see any reason why it would hang there.
I'm not surprised that it takes such a long time *when profiling*, but
having about the same time when *not* profiling is astonishing --never
seen this before.
> Stay tuned !
Ok!
-- Sébastien.