From: jamal <ha...@cy...> - 2008-01-11 14:43:07
On Fri, 2008-01-11 at 14:50 +0200, Timo Teräs wrote:
> jamal wrote:
> Right. This means we should probably fix racoon code too. Currently
> when an SA dump is required, racoon first reads all the responses and
> puts them in a big array. So it might require quite a bit of memory
> temporarily too.

Ok, so you would check them as they come in then?

> Though the kernel memory is more precious.

True.

> > if racoon dies while the refcount is incremented then you may get
> > into strange states - for that reason I don't think this is a
> > practically good idea. You could solve it by keeping track of who
> > incremented and auto-decrementing - but you add complexity.
>
> Not that complicated. The netlink dumping code has "done" callbacks
> where cleanup can be done. Not too hard to add something similar in
> af_key. And many other things you can dump using netlink do it
> similarly (e.g. in netfilter).

I don't recall any of them using a refcount on a per-table entry (I
could be wrong); you could find out when racoon dies and clean up the
socket if you do it the way netlink does with cb; the challenge is to
remember which table entry had its refcnt incremented because of a dump
to a racoon that is now dead (see the first sketch below).

> > Did you mean this without the snapshotting? If it is related to what
> > you said earlier on extra tags on a per-table entry, then that is
> > nice.
>
> Yes, no snapshotting. Dumping the live database and keeping the
> iterator to that in a way that the locks can be released when we are
> waiting for user-space.

Very nice.

> Yes. If you lost events, it'd be better to do a full dump. Though in
> some cases an incremental dump might be useful (though you'd miss the
> deleted entries).

At least with incremental dumps, hopefully the need for full dumps is
reduced (setup dependent), i.e. it becomes the exception rather than
the rule. One approach may be to store some info about the deleted
entries for a period of time (age them out after a while) so I get just
enough detail when I come back (see the last sketch below). This of
course will require some memory, but if there aren't that many deletes,
it will use less memory than I would have needed to dump, say, 50K
entries.

> I started working on the Linux kernel patch for fixing these problems.
> I even got it working at some level already; I'm able to dump large
> SPDs/SADBs via af_key. Though, it still needs some bug-fixing related
> to subpolicies etc. If you are interested in giving it a test drive,
> email me privately. I might be able to get it into a usable state some
> time next week.

Fantastic. I won't have time to test it in the next week (I have a very
poor machine setup these days) - but what I can do is stare at the patch
and give you some feedback. For testing, can you suggest a setup that
will provoke racoon into misbehaving if the old kernel/racoon combo is
used vs. a new version? That way we have a good metric.

cheers,
jamal
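
To make the discussion above concrete, here is a rough sketch of the
netlink dump/"done" pattern Timo refers to, combined with the refcnt
cleanup jamal worries about, against the 2.6.2x-era API where
netlink_dump_start() took the dump and done callbacks directly. The
foo_entry type and the foo_hold/foo_put/foo_next/foo_fill_msg helpers
are hypothetical stand-ins for the real table accessors, not anything
in af_key or xfrm:

#include <linux/netlink.h>
#include <linux/skbuff.h>

/* Sketch only: a netlink-style dump that pins its resume point with a
 * refcount and releases it in the "done" callback.  The netlink core
 * also invokes "done" when the dumping socket is closed, so a racoon
 * that dies mid-dump cannot leak the reference - which is exactly the
 * cleanup problem discussed above.
 */
static int foo_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
	struct foo_entry *e = (struct foo_entry *)cb->args[0];

	/* Resume from the pinned entry; emit until the skb fills up. */
	while (e && foo_fill_msg(skb, e) >= 0)
		e = foo_next(e);

	if (cb->args[0])
		foo_put((struct foo_entry *)cb->args[0]);
	if (e)
		foo_hold(e);		/* pin the new resume point */
	cb->args[0] = (long)e;

	return e ? skb->len : 0;	/* 0 means the dump is finished */
}

static int foo_done(struct netlink_callback *cb)
{
	/* Runs on normal completion and on socket close alike. */
	if (cb->args[0])
		foo_put((struct foo_entry *)cb->args[0]);
	return 0;
}

/* 2.6.2x-era call site, from the netlink message handler:
 *
 *	err = netlink_dump_start(nlsk, skb, nlh, foo_dump, foo_done);
 */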
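The no-snapshot iterator Timo describes - dumping the live database
while keeping an iterator such that locks can be released when waiting
for user-space - can be pictured roughly as below: the walker is a
dummy node spliced into the live list, so neighbouring entries may be
added or deleted while the dump is parked. sadb_walk, sad_entry,
sad_list and sad_lock are illustrative names only, not the actual
af_key/xfrm symbols:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Sketch only.  Real list readers would have to recognise and skip
 * walker nodes; that marker is omitted here for brevity.
 */
struct sad_entry {
	struct list_head all;
	/* ... SA fields ... */
};

struct sadb_walk {
	struct list_head all;		/* dummy node in the SAD list */
};

static LIST_HEAD(sad_list);
static DEFINE_SPINLOCK(sad_lock);

static void sadb_walk_start(struct sadb_walk *w)
{
	spin_lock_bh(&sad_lock);
	list_add(&w->all, &sad_list);	/* park just before entry #1 */
	spin_unlock_bh(&sad_lock);
}

static int sadb_walk_resume(struct sadb_walk *w,
			    int (*fn)(struct sad_entry *e, void *arg),
			    void *arg)
{
	struct list_head *pos;
	int err = 0;

	spin_lock_bh(&sad_lock);
	pos = w->all.next;
	list_del(&w->all);		/* unhook while we advance */

	while (pos != &sad_list) {
		struct sad_entry *e = list_entry(pos, struct sad_entry, all);

		err = fn(e, arg);	/* e.g. queue a pfkey message */
		if (err) {
			/* Receiver full: park just before the entry we
			 * failed on, then drop the lock until user-space
			 * reads more.  Deleting entries around the
			 * walker node in the meantime is safe. */
			list_add_tail(&w->all, pos);
			break;
		}
		pos = pos->next;
	}
	spin_unlock_bh(&sad_lock);
	return err;			/* 0: walk complete, walker unlinked */
}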
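Finally, the idea of remembering deleted entries for a while so an
incremental dump can still report deletions might look something like
the following. The sad_tombstone structure, the helpers, and the
30-second window are all invented for illustration, and locking around
the list is omitted:

#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/xfrm.h>

/* Sketch only: keep a small, aged-out record per deleted SA so a
 * listener that missed events can catch up without a full dump.
 */
struct sad_tombstone {
	struct list_head list;
	unsigned long	 when;		/* jiffies at deletion */
	__be32		 spi;		/* enough to identify the SA */
	xfrm_address_t	 daddr;
};

static LIST_HEAD(tombstones);
#define TOMBSTONE_AGE	(30 * HZ)	/* made-up retention window */

static void sad_note_delete(__be32 spi, const xfrm_address_t *daddr)
{
	struct sad_tombstone *t = kmalloc(sizeof(*t), GFP_ATOMIC);

	if (!t)
		return;		/* record lost: fall back to a full dump */
	t->when  = jiffies;
	t->spi   = spi;
	t->daddr = *daddr;
	list_add_tail(&t->list, &tombstones);
}

static void sad_reap_tombstones(void)
{
	struct sad_tombstone *t, *n;

	list_for_each_entry_safe(t, n, &tombstones, list) {
		if (time_before(jiffies, t->when + TOMBSTONE_AGE))
			break;	/* list is in age order, rest are newer */
		list_del(&t->list);
		kfree(t);
	}
}

As noted above, the memory cost stays proportional to the delete rate
within the window, which for a mostly stable SAD should be far less
than re-dumping tens of thousands of live entries.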