Re: [lc-devel] hello
From: Rodrigo S. de C. <rc...@im...> - 2003-05-19 00:49:19
Hey Bob,

On Mon, May 19, 2003 at 02:08:53AM +0200, Bob Arctor wrote:
> > > i patched 2.4.21rc2-ac2 , there were few trivial rejects, i didn't
> > >
> > > i'll install this kernel on my firewall too, which is only
> > > non-smp machine left :>
>
> it works too.

It may work, but I don't know how stable it is, since I have never
tested it on such a system. You are much likelier to hit a race
condition on an SMP system than on a system with a preemptible kernel,
because the former has more intense concurrency.

> there is swap over NFS on this firewall , which benefited a _lot_
> from compression and grouping of requests as it swaps over 10Mbit
> link.

Are you using compressed cache and compressed swap together with swap
over NFS? This is the first time I have heard of someone using comp
cache with those setups (although I heard of some development on swap
over NFS a while ago). That is great.

> _here_ i have problems with enabling swap via an file, it is
> possible _only_ via /dev/loop!

Oh, yeah, I remember the swapon man page warning about that. Actually,
the problem I noticed in the compressed cache code is not related to
the loop device, but to swap files, so as soon as it works stably with
swap files, I guess you will be able to use swap over NFS directly.

> swapping isn't very intensive, it swaps out some inactive data when
> i log in there and swaps out a bit more when i perform
> administrative tasks, but during normal operation it just swaps out
> inactive shells, or inactive services. i felt difference
> though. compression ratio is around 41%!

Wow. Since you rely on network transfers, compressing the data
probably helps a lot. That is a scenario I had not initially taken
much into account. In your case the transfer time is what really
matters (for ordinary systems, where swap sits on a local disk, the
reduction in disk accesses, rather than in transfer time, is the more
important effect).

> btw. i lurked into code, and it seems compression algorithm is now
> determined automatically? or it can be changed somehow?
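As a back-of-the-envelope sketch (not from the thread itself): assuming
"41% compression ratio" means pages shrink to 41% of their original size,
and that the 10 Mbit/s link is the bottleneck, the transfer-time saving
looks roughly like this. Page size, page count, and the zero-overhead
link model are all illustrative assumptions.

```python
# Rough estimate of swap-over-NFS transfer time with and without
# compressed swap. Assumes compressed pages occupy 41% of their
# original size and ignores NFS/protocol overhead.

PAGE_SIZE = 4096          # bytes; typical x86 page (assumption)
LINK_BPS = 10_000_000     # 10 Mbit/s link from the message
RATIO = 0.41              # compressed size / original size

def transfer_seconds(pages, ratio=1.0):
    """Time to push `pages` swap pages over the link."""
    bits = pages * PAGE_SIZE * 8 * ratio
    return bits / LINK_BPS

pages = 2560              # e.g. 10 MiB of swapped-out data (assumption)
plain = transfer_seconds(pages)
compressed = transfer_seconds(pages, RATIO)
print(f"uncompressed: {plain:.2f}s, compressed: {compressed:.2f}s")
```

Under those assumptions the compressed case finishes in well under half
the time, which matches the "benefited a _lot_" observation.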
The default algorithm, which has achieved the best results, is LZO.
However, you can enable WKdm or WK4x4 by changing the "compalg="
kernel parameter:

  compalg=0 (WKdm)
  compalg=1 (WK4x4)
  compalg=2 (LZO, default)

There is no automatic detection that picks the compression algorithm
best suited to a given amount of memory.

> on lowmem systems implementing more complex algorithms would be obsolete,
> but when there is plenty of memory algorithm which would store large book
> of possible words could greatly improve caching.

Agreed, that may be an interesting idea. From the experiments I ran,
the compression ratio is more important than the compression speed. Of
course, speed matters, but if we got a better ratio at a not much
longer compression time, that would be the better option. It is
something we should consider when thinking about different compression
algorithms for each system.

> review www.autosophy.com
> it is explained well there, and the idea is old and very well known.
> if you're worried about legal problems -author (klaus holtz) answers to
> emails and will not have anything against using it, as it isn't really hard
> to deduct how to make it by your own, and he just got the idea long time ago,
> before computing was so popular.

Thanks, I am going to read it carefully later.

Best regards,

--
Rodrigo
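For illustration only (this helper is not part of the patch): the
"compalg=" value would be appended to the kernel command line, e.g. via
the bootloader's append option, and the numbering above maps to names
like this. The function and file paths here are hypothetical.

```shell
#!/bin/sh
# Map a "compalg=" value to the algorithm name, using the numbering
# given above (0=WKdm, 1=WK4x4, 2=LZO). The helper is illustrative.
# At boot you would pass the parameter itself, e.g. in lilo.conf:
#   append = "compalg=0"
# or at the boot prompt:  linux compalg=1
compalg_name() {
    case "$1" in
        0) echo WKdm ;;
        1) echo WK4x4 ;;
        2) echo LZO ;;
        *) echo "unknown (LZO default)" ;;
    esac
}

compalg_name 0
compalg_name 2
```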