linuxcompressed-devel Mailing List for Linux Compressed Cache (Page 14)
Status: Beta
Brought to you by: nitin_sf
Messages archived per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2001 |     |     |     |     |     |     |     |     |     | 6   | 1   | 11  |
| 2002 | 22  | 11  | 31  | 19  | 17  | 9   | 13  | 1   | 10  | 4   | 10  | 4   |
| 2003 |     | 8   |     | 5   | 39  | 10  | 2   | 1   | 1   | 27  | 1   | 2   |
| 2004 |     | 3   | 1   |     |     | 3   | 1   |     |     |     | 3   |     |
| 2005 | 1   |     |     |     |     |     |     |     |     |     | 9   | 2   |
| 2006 | 7   | 4   | 12  | 16  | 11  | 48  | 19  | 16  | 13  |     | 8   | 1   |
| 2007 | 4   |     |     | 3   | 26  |     | 1   |     |     |     |     |     |
| 2008 |     | 7   | 5   |     |     |     |     |     |     |     |     |     |
From: Con K. <co...@ko...> - 2002-10-14 13:32:19

I've run some benchmarks with contest (http://contest.kolivas.net) and my latest patchset (http://kernel.kolivas.net) for public consumption. These are to explain why I've included compressed caching and removed the added vm work.

CAVEAT: ck9 will only be advantageous over ck7 on uniprocessor machines with a normal or small amount of memory. If you have heaps of memory and almost never get into swap then the compressed caching will offer you no advantage. Worse yet, SMP machines no longer benefit from the added vm changes in ck7 - these were not compatible with compressed caching.

noload:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [3]     67.7   98    0      0      1.01
2.4.19-ck7  [3]     73.8   96    0      0      1.10
2.4.19-ck9  [2]     68.8   97    0      0      1.02

You can see a slight difference here. ck7 and the 2.5 kernels show this unusual feature of taking longer to get started on a noload kernel compile after the memory and swap is all flushed. ck9 seems to have tamed this problem.

process_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [3]     106.5  59    112    43     1.59
2.4.19-ck7  [3]     93.4   76    68     27     1.39
2.4.19-ck9  [2]     94.3   70    83     32     1.40

Minimal difference from ck7 in time taken, but during that time the background process has accomplished more work.

ctar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [2]     106.5  70    1      8      1.59
2.4.19-ck7  [3]     142.3  57    2      10     2.12
2.4.19-ck9  [2]     110.5  71    1      9      1.65

ck7 exhibited quite aggressive work in the background load at the expense of the foreground process in tar creation. ck9 tames this a lot.

xtar_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [1]     132.4  55    2      9      1.97
2.4.19-ck7  [3]     238.3  33    5      11     3.55
2.4.19-ck9  [2]     138.6  58    2      11     2.06

Even more background work done by ck7 here; tamed by ck9.

io_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [3]     492.6  14    38     10     7.33
2.4.19-ck7  [2]     174.6  41    8      8      2.60
2.4.19-ck9  [2]     140.6  49    5      5      2.09

Now under heavy file writing ck9 has relaxed even more than ck7 has.

read_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [2]     134.1  54    14     5      2.00
2.4.19-ck7  [2]     119.4  66    12     5      1.78
2.4.19-ck9  [2]     77.4   85    11     9      1.15

This overstates the advantage of ck9 over ck7 because the file read would be easy to compress. Nonetheless it is faster.

list_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [1]     89.8   77    1      20     1.34
2.4.19-ck7  [2]     104.0  70    1      21     1.55
2.4.19-ck9  [2]     85.2   79    1      22     1.27

Not sure why ck7 was slower than vanilla here, but ck9 improves on it. The resolution of loads performed cannot show less than 1, so it cannot show whether ck7 or ck9 did more work.

mem_load:
Kernel      [runs]  Time   CPU%  Loads  LCPU%  Ratio
2.4.19      [3]     100.0  72    33     3      1.49
2.4.19-cc   [3]     92.7   76    146    21     1.38
2.4.19-ck7  [2]     116.0  69    35     3      1.73
2.4.19-ck9  [2]     78.3   88    31     8      1.17

This is the most interesting. I've included 2.4.19 with just cc added to show the difference. With cc heaps more work is done by the background load and only a modest improvement occurs in the kernel compilation time. ck9 on the other hand does about the same amount of background work as vanilla, but is significantly faster on kernel compilation time - a preferable balance I believe. Once again the advantage is overstated because the data would compress well, but it is present.

Cheers,
Con
From: Rodrigo S. de C. <rc...@im...> - 2002-10-01 12:49:02

On Sun, Sep 29, 2002 at 09:11:10PM +1000, Con Kolivas wrote:
> Rodrigo. Contest has changed markedly. The constant limitation is
> the effect of previous loads on each load test because of caching
> etc. I believe I have eliminated it in the current version (0.41)
> which basically empties the memory completely and swaps off and
> on. Then it runs the next load on basically a fresh system isolating
> just that load's effect on kernel compilation time. Anyway here's a
> quick set of results I put together for you (only a single run).:

This memory flush is surely much more efficient than the priming compile you were performing. Now I think the benchmark is fair enough between the tests that are run in a row.

However, only flushing the memory between the tests doesn't solve the mem_load problem I told you about. And I am actually concerned about this particular load, since a kernel that is biased toward the kernel compilation will be taken as a better kernel (since it has better completion time and CPU usage). This bias can be an MM or a scheduling matter, for example. I guess the main problem is that we are not taking into account whether the performance of the background processes is being killed.

Some time ago, I had vanilla kernels on my box and I couldn't run a somewhat intensive test under User Mode Linux (UML), because my whole system's performance would be terrible. It's unbelievable, but I couldn't even read email. That's when I decided to switch to rmap, which gave me a much more responsive desktop, even if the performance of my tests under UML is a little worse (it wouldn't matter anyway). Given that, I am afraid a responsiveness benchmark like contest would tell me that the vanilla kernel I had on my system is much more responsive, by only measuring the "foreground" process and forgetting about the damage this process may cause to the whole system.

> mem_load:
> Kernel      Time    CPU   Ratio
> 2.4.19      105.40  70%   1.56
> 2.4.19-cc   94.86   75%   1.40
>
> Seem good enough to me.
>
> Maybe you should download the latest version and see if you can
> repeat these results yourself.

I only ran contest with mem_load and we still lose, so I still can't repeat the results (with mem_load). Anyway, I think you understand that I am not very concerned about having results better than vanilla, but about whether the benchmark can be improved.

Best regards,
-- Rodrigo
From: Con K. <co...@ko...> - 2002-09-29 12:16:11

[BIG SNIP removing all...]

Rodrigo. Contest has changed markedly. The constant limitation is the effect of previous loads on each load test because of caching etc. I believe I have eliminated it in the current version (0.41) which basically empties the memory completely and swaps off and on. Then it runs the next load on basically a fresh system, isolating just that load's effect on kernel compilation time. Anyway here's a quick set of results I put together for you (only a single run):

noload:
Kernel      Time    CPU   Ratio
2.4.19      67.71   98%   1.00
2.4.19-cc   70.87   95%   1.05

process_load:
Kernel      Time    CPU   Ratio
2.4.19      110.75  57%   1.64
2.4.19-cc   106.46  59%   1.57

io_load:
Kernel      Time    CPU   Ratio
2.4.19      216.05  33%   3.19
2.4.19-cc   124.98  59%   1.85

mem_load:
Kernel      Time    CPU   Ratio
2.4.19      105.40  70%   1.56
2.4.19-cc   94.86   75%   1.40

Seem good enough to me.

Maybe you should download the latest version and see if you can repeat these results yourself.

Regards,
Con.
From: Rodrigo S. de C. <rc...@im...> - 2002-09-28 19:06:55

On Tue, Sep 24, 2002 at 07:20:35PM +0800, Patrick wrote:
> For clarification. Can you explain each of the algorithm that are
> supported by Compressed Cache ( wkm, lzo ...)

From our statistics page:

- WK4x4: a variant of the WK compression family developed by Paul Wilson and Scott F. Kaplan. It achieves the tightest compression of the family by using a 4x4 set-associative dictionary of recently seen words.

- WKdm: compresses nearly as tightly as WK4x4, but is much faster because of its simple, direct-mapped dictionary of recently seen words.

- miniLZO: a very fast Lempel-Ziv implementation by Markus Oberhumer. Note: support was added back in 0.23pre4.

WK4x4 and WKdm were developed for in-memory data, while LZO is a general-purpose compression algorithm. Feel free to ask more if those data aren't enough.

> And thanks for the statistics. But it have LZO, what are significant
> of LZO compare to others?

In our recent benchmarks, LZO shows better performance than WKdm (we don't test WK4x4 very often). In spite of spending more CPU cycles to compress, it compresses more tightly than WKdm, even for in-memory data. The setup tested had page cache support enabled, which I encourage you to enable as well, since it allows compressed cache to achieve better performance.

Regards,
-- Rodrigo
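To make the dictionary idea concrete, here is a minimal, illustrative C sketch of the direct-mapped "recently seen words" matching that WKdm-style compressors rely on. It is not the project's actual WKdm code (a real compressor also bit-packs the tags and partial-match low bits); the hash, dictionary size and sample data below are made up for the example.

/*
 * Sketch only: classify each 32-bit word of a page against a small
 * direct-mapped dictionary of recently seen words.  Zero words,
 * exact matches and high-bits (partial) matches are the cases a
 * WKdm-style compressor encodes compactly; misses are stored whole.
 */
#include <stdint.h>
#include <stdio.h>

#define DICT_SIZE 16                       /* one slot per hash bucket */

enum tag { TAG_ZERO, TAG_EXACT, TAG_PARTIAL, TAG_MISS };

static enum tag classify(uint32_t word, uint32_t dict[DICT_SIZE])
{
    unsigned slot = (word >> 10) % DICT_SIZE;   /* trivial example hash */
    enum tag t;

    if (word == 0)
        t = TAG_ZERO;
    else if (dict[slot] == word)
        t = TAG_EXACT;
    else if ((dict[slot] & 0xfffffc00u) == (word & 0xfffffc00u))
        t = TAG_PARTIAL;                    /* high 22 bits match */
    else
        t = TAG_MISS;

    dict[slot] = word;                      /* always remember the word */
    return t;
}

int main(void)
{
    uint32_t dict[DICT_SIZE] = { 0 };
    uint32_t page[8] = { 0, 0x1000, 0x1004, 0x1000, 0xdeadbeef, 0, 0x1008, 0x2000 };
    int counts[4] = { 0 };

    for (unsigned i = 0; i < 8; i++)
        counts[classify(page[i], dict)]++;

    printf("zero=%d exact=%d partial=%d miss=%d\n",
           counts[TAG_ZERO], counts[TAG_EXACT],
           counts[TAG_PARTIAL], counts[TAG_MISS]);
    return 0;
}

The fewer TAG_MISS cases a page produces, the better it compresses; in-memory data tends to be full of zeros and near-duplicate words, which is why this family works well for compressed caching.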
From: Rodrigo S. de C. <rc...@im...> - 2002-09-28 18:54:16

[Con and Rik, it's a long email, but I'd like you to know some of my conclusions about contest and mem_load. Comments are welcome.]

Contest [1] is a new benchmark by Con Kolivas that aims to test system responsiveness. It was announced on the Linux kernel mailing list and has since attracted much attention from the kernel community.

Two weeks ago, Con kindly sent me some results from running his benchmark on a kernel with compressed cache. The first results were very good, so I published them right away with the other statistics. However, after some bug fixes in contest, we noticed that compressed cache wasn't improving system performance under the memory load test. As a matter of fact, the performance was worse than with a vanilla kernel. Under IO load it improves, and for the other loads it makes no difference whether compressed cache is used or not.

First, let me briefly describe how contest works. It runs a kernel compilation (with -j4 concurrency level) under different load conditions. For example, memory load, which is the load I focused on, uses 110% of the system memory, allocating, touching and moving memory. There are other loads, like process load and IO load, and contest also benchmarks the compilation on a system without any load.

Given that the idea behind contest is very interesting, I think it would be nice to have compressed cache improve system performance when running this benchmark under memory load, or at least to understand what is going on. Therefore, I focused on this problem and I think I've come to interesting conclusions.

Running 2.4.18 vanilla and 2.4.18-0.24pre5 (not released yet), the results I got were:

2.4.18:     95.03s completion time, 76% CPU usage
2.4.18-cc:  99.84s completion time, 72% CPU usage

First of all, I thought that our problem was the high number of compressions and decompressions, but in that case we would see higher CPU usage. Checking the /proc/stat output, I could verify that compressed cache significantly reduces the amount of IO performed by the system, which should make the kernel compilation run by contest complete faster. From some profiling data, I also noticed that compressed cache reduces the time the CPU spends in the idle state very significantly: from 20.14 to 1.62 seconds.

The reason we have a worse completion time is that a kernel with compressed cache may have, and probably does have, a different scheduling pattern compared to a vanilla kernel. That's because we reduce (very much, depending on the case) the IO performed by the system. Notably, to service a page fault, IO forces the current process to relinquish the CPU and the scheduler tries to execute another task. Given that we reduce the total IO, we have much less of this compulsory scheduling due to page faults. We also spend some system time compressing and decompressing memory pages, which adds to the current task's system time (another reason for slightly different scheduling), but that ends up being less time than would otherwise be spent on IO operations.

Running contest with mem_load, a kernel with compressed cache doesn't perform any swap at all, very far from the over 60 thousand swapins and over 70 thousand swapouts performed by vanilla. Concerning mem_load and the kernel compilation, the former has all its IO saved (its IO consists only of swapin/swapout operations), so it doesn't relinquish the CPU to perform IO as it does on a vanilla kernel. On the other hand, the kernel compilation still has to perform some operations that cannot be saved (like writing .o files or reading source files), and even though it relinquishes the CPU less than on vanilla, because we compress pages from the page cache, it still does so more than mem_load does.

To be brief: in the vanilla case, mem_load is scheduled out much more due to IO (think of the swapins/swapouts mentioned above), giving the kernel compilation more control of the CPU than on a kernel with compressed cache. In the compressed cache case, mem_load uses most of its CPU time slice because it doesn't have to perform IO, so the kernel compilation gets less of the CPU (that's why its CPU usage is smaller) and takes a little longer to finish.

In spite of the worse compilation time, the system, generally speaking, runs more smoothly. The mem_load loop runs much more than on vanilla; if you run mem_load with the debug printf()s, the difference is quite striking. Under memory load, contest only measures the time spent compiling the kernel, but it doesn't take into account how the other background processes are affected by the compilation. With compressed cache, this particular background process (mem_load) and the overall system have better performance. Note that other background processes might show different results with compressed cache, depending on what they do.

I don't think the current contest, for the memory load situation, is suitable for benchmarking a system with compressed cache. It doesn't check the improvement to the whole system, only to one process, which may be influenced by scheduling issues.

[1] http://contest.kolivas.net

Regards,
-- Rodrigo
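For readers unfamiliar with contest, the following small C program is an illustrative sketch of the kind of memory load described above: allocate more than physical RAM, then keep touching and moving it so the VM must swap (or, with compressed cache, compress) pages. It is not contest's actual mem_load source; the sizing and loop structure are assumptions made only to show the pattern.

/*
 * Illustrative memory stressor, runs until killed.  Targets roughly
 * 110% of physical RAM, as the memory load described above does.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    long page  = sysconf(_SC_PAGESIZE);
    long pages = sysconf(_SC_PHYS_PAGES);
    size_t len = (size_t)page * pages * 11 / 10;   /* ~110% of RAM */
    char *buf  = malloc(len);

    if (!buf) {
        perror("malloc");
        return 1;
    }

    for (;;) {
        /* touch every page to force it resident ... */
        for (size_t off = 0; off < len; off += (size_t)page)
            buf[off]++;
        /* ... and move a chunk around to keep pages dirty */
        memmove(buf, buf + len / 2, len / 4);
        fputc('.', stderr);                 /* crude progress marker */
    }
}

A load like this generates almost nothing but anonymous-page traffic, which is exactly the case where compressed cache can absorb the pressure without touching the disk, and also exactly the case where measuring only the foreground compile hides the benefit.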
From: Patrick <pa...@al...> - 2002-09-24 11:20:38

Hi,

Thanks for the great patch. For clarification, can you explain each of the algorithms that are supported by Compressed Cache (wkm, lzo, ...)?

And thanks for the statistics. But it has LZO: what is significant about LZO compared to the others?

Hopefully we can hear from you guys soon.

Best regards,
Patrick
From: Marc-Christian P. <m....@gm...> - 2002-09-19 10:35:27

On Sunday 15 September 2002 02:56, Marc-Christian Petersen wrote:

Why the hell did I write this?! Fully uninterested ;)

--
Kind regards
Marc-Christian Petersen

http://sourceforge.net/projects/wolk

PGP/GnuPG Key: 1024D/569DE2E3DB441A16
Fingerprint: 3469 0CF8 CA7E 0042 7824 080A 569D E2E3 DB44 1A16
Key available at www.keyserver.net. Encrypted e-mail preferred.
From: Marc-Christian P. <m....@gm...> - 2002-09-15 00:56:49

On Saturday 14 September 2002 22:30, Rodrigo Souza de Castro wrote:

Hi Rodrigo,

> He first included the 0.24pre1 version for 2.4.18 Linux kernel. After
> releasing the patch for 2.4.19, he had some issues (hangs) when
> running with the Andrea Arcangeli's latest VM patches. I helped him to
> merge the compressed cache code and now it works stably, but an issue
> has been noticed by him: some pauses (about 1-2 seconds) after using
> the machine for an hour and loading and unloading many programs. If
> somebody also has this problem, please let me know.

Anyway, as you know Rodrigo, I also benchmark a lot of kernels, kernel patches and kernel patchsets, including 2.4.19 and the -ckX tree. The stops cannot be caused by Compressed Cache, since all 2.4.19 kernels (including vanilla) and patchsets (-aa, -rmap, -ck, -jp, -mjc etc.) cause those stops of some seconds under heavy load. I posted about this issue 1-2 days after 2.4.19 final was released, and also _BEFORE_ 2.4.19 went final: I reported this behaviour when 2.4.19-pre5 came out (-pre4 was ok). Anyway, I know some users who are experiencing the same behaviour, but as far as I can see on the kernel mailing list, it either isn't experienced by everyone or they just don't notice it.

Rodrigo, if this is real and appears for Con Kolivas, you hit a bug in 2.4.19, O(1), Lowlat or the -aa VM, not in your code (I bet 99.9%, no, almost 100%) ;-)

Also note: this behaviour does _not_ occur with 2.4.18 + compcache nor with 2.4.18-wolk3.6 + compcache :) I've done a LOT of benchmarking (since no one else does this with WOLK ;(

Rodrigo, I compiled 0.24pre4 right after we talked on IRC: no slowdowns or anything!!

Also, what I am really wondering about is why all the world uses (and also wants) the -rmap stuff. My experience with -rmap is not good. In every situation I've tested, -rmap enabled kernels are a lot slower than vanilla, -aa or similar. For sure I've also tested 2.4.19-ck7 with -aa (a lot faster than with -rmap).

Con's purpose is desktop environments, if I read it correctly. Just do something like this (normal desktop PC doings; ok, almost ;):

1. start Unreal Tournament _OR_ Quake2/3 or any other heavily loaded game
2. compile 2 (two) kernel trees at once
3. do "updatedb"
4. do "find /"

Try to play the game without noticing you were doing 2., 3. and 4. (close your ears if you have loud harddisks ;-)))

Only two kernels perform well, and with a little difference between them:

1. 2.4.19-rc5-aaX, 2.4.20-pre5-aaX: you will notice that you're doing 2-4, but gameplay is ok.
2. 2.4.18-wolk3.6rc1: you won't notice anything.

Don't ask me why 2. is as it is :) wolk3.6-rc1 with standard config, Preempt/Lockbreak/Lowlat-Mini, Mempools. All other kernels I've tested are really horrible in that circumstance. For sure this is definitely NOT a server workload, but desktop users might do this (even compiling only one kernel tree at once). Unfortunately all other kernels slow down a lot. 2.4.19-ck7 unfortunately stops every 15-20 seconds for 2-5 seconds under heavy load.

Test machine:
-------------
- Celeron 800MHz (128k cache)
- 256MB RAM
- 60GB IDE disk (UDMA5 mode)
- 32MB ATI All-In-Wonder 128PRO

Env:
----
FrameBuffer (1024x768-8@75)
ext3 fs (data=journal)
XFree 4.2.0
Debian SID (updated yesterday)

sysctl.conf values:
-------------------
kernel.random.poolsize = 8192
vm.comp_cache.size = 8192
vm.overcommit_memory = 1
vm.bdflush = 50 0 0 0 60 300 60 0 0
fs.file-max = 15872
fs.inode-max = 16384
kernel.threads-max = 65535
vm.max-readahead = 512
vm.min-readahead = 64
vm.pagetable_cache = 75 90
vm.freepages = 300 400 500
and a lot of tcp/ip tuning, but irrelevant for these tests :)

root@codeman:[/] # mount
/dev/hda3 on / type ext3,ext2 (rw,errors=remount-ro)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/hda2 on /boot type ext3 (rw,noexec,nosuid,nodev,noatime,data=journal)
/dev/hda5 on /home type ext3 (rw,data=journal)
/dev/hda6 on /opt type ext3 (rw,data=writeback)
/dev/hda7 on /opt/squid/cache type ext3 (rw,noexec,nosuid,nodev,noatime,data=writeback)
/dev/hda9 on /usr/src type ext3 (rw,data=ordered)
/dev/hda10 on /var/log type ext3 (rw,noexec,nosuid,nodev,noatime,data=ordered)
/dev/hda11 on /var/mail type ext3 (rw,noexec,nosuid,nodev,noatime,data=ordered)
/dev/hda12 on /var/spool type ext3 (rw,noexec,nodev,noatime,data=writeback)
tmpfs on /dev/shm type tmpfs (rw,size=200M,mode=777)
tmpfs on /tmp type tmpfs (rw)
none on /cdrom type supermount (ro,dev=/dev/cdrom,fs=iso9660)

root@codeman:[/] # cat /proc/swaps
Filename        Type        Size     Used  Priority
/dev/hda1       partition   1028120  0     1

root@codeman:[/] # cat /proc/meminfo |egrep "Mem|kB"|grep -v "Mem:"
MemTotal:   255420 kB
MemFree:      7440 kB

root@codeman:[/] # dmesg|grep "Kernel command line"
Kernel command line: BOOT_IMAGE=WOLK-3.6 ro root=303 rootflags=data=journal compsize=32M gracl=off video=aty128fb:accel,1024x768-8@75 panic=60 kbd-reset ether=0,0,eth1 debug devfs=nomount idebus=33 hda=autotune hdd=ide-scsi ide0=ata66 max_scsi_luns=1 maxcpus=0

> There is also a version of his patchset including rmap VM by Rik van
> Riel. In this case, compressed cache cannot be integrated given that
> we don't have a port to this VM currently.

Hehe, if you do the merge for -rmap too, you'll probably have to fix the performance issues of -rmap with your compcache ;)

> Recall that compressed cache was first included into another patchset,
> the Wolk project [2] by Marc-Christian Petersen.

Hehe :-) Thanks for mentioning it. Anyway, 0.24pre4 is included right now and works ok, but I am waiting for grsecurity v1.9.7 final before putting out WOLK v3.6-final. (There must be a reason why I don't switch over to 2.4.19 as the base for WOLK ;)

P.S.: Anyway, all these kernels perform a lot _better_ than the current 2.5.34 :) Also, I just started up Unreal Tournament (my girlfriend wants to play a bit; yes, a woman is playing UT :) woohoo) and I connected to that machine via ssh, just did a "mailq" (sendmail stuff), and she cried: "Hey, the game just stops, ah, goes ahead again!" ... hmm, I thought, strange. Did a "while true; do mailq; done" and the game froze until I pressed Ctrl-C to stop the loop ... and my girlfriend started crying again :) Funny, eh? I laughed out loud :)

Hopefully all those bugs in 2.4.19 are gone with 2.4.20 final! Nice weekend! (Sunday)

--
Kind regards
Marc-Christian Petersen

http://sourceforge.net/projects/wolk

PGP/GnuPG Key: 1024D/569DE2E3DB441A16
Fingerprint: 3469 0CF8 CA7E 0042 7824 080A 569D E2E3 DB44 1A16
Key available at www.keyserver.net. Encrypted e-mail preferred.
From: Rodrigo S. de C. <rc...@im...> - 2002-09-14 20:30:15

Compressed cache was recently included into the ck kernel patchset [1] by Con Kolivas. He intends to improve system responsiveness, with emphasis on desktop PCs. His ck7 version for the 2.4.19 kernel is composed of:

- O(1) + Batch scheduler
- Preemptible kernel
- Low latency
- Andrea Arcangeli's VM patches
- Supermount
- Rik van Riel's rmap (optional)

He first included the 0.24pre1 version for the 2.4.18 Linux kernel. After releasing the patch for 2.4.19, he had some issues (hangs) when running with Andrea Arcangeli's latest VM patches. I helped him merge the compressed cache code and now it works stably, but he has noticed an issue: some pauses (about 1-2 seconds) after using the machine for an hour and loading and unloading many programs. If somebody else also has this problem, please let me know. This problem is keeping Con from releasing the ck8 patch with compressed cache at the moment; compressed cache is marked as testing on his home page.

There is also a version of his patchset including the rmap VM by Rik van Riel. In that case, compressed cache cannot be integrated, given that we currently don't have a port to this VM.

Recall that compressed cache was first included into another patchset, the Wolk project [2] by Marc-Christian Petersen.

[1] http://kernel.kolivas.net
[2] http://wolk.sourceforge.net

-- Rodrigo
From: Rodrigo S. de C. <rc...@im...> - 2002-09-14 19:57:45

A set of performance statistics for the 0.24pre4 version is available. It includes:

- DBench
- Fillmem (memtest)
- Linux Kernel Compilation
- MUMmer Scientific Program
- Open Source Database Benchmark

We have very good results with those tests; check them out.

http://linuxcompressed.sourceforge.net/statistics/

Ack: thanks to Paolo Ciarrocchi, who ran dbench and fillmem on a kernel with compressed cache and on vanilla and sent me his data.

-- Rodrigo
From: Rodrigo S. de C. <rc...@im...> - 2002-09-14 18:55:17

The same version as 2.4.18-0.24pre4, now for the 2.4.19 Linux kernel.

Details:
http://sourceforge.net/project/shownotes.php?release_id=48824

Download (indirect link):
http://prdownloads.sourceforge.net/linuxcompressed/patch-comp-cache-2.4.19-0.24pre4.bz2?download

The news about this version, which was published on the project page, was chosen to be displayed on the Sourceforge.net home page. It is the third news item (from top to bottom) as of Sep 14th 2002, 06:54 PM GMT.

-- Rodrigo
From: Rodrigo S. de C. <rc...@im...> - 2002-09-14 18:45:14

After a long development period, a new patch is out with some great new features. This version includes the adaptivity implementation, which allows compressed cache to try to resize itself to the size at which it provides the most benefit. You will also find compressed swap, where compressed cache swaps out fragments in compressed format in order to delay decompression (and even avoid it, if the fragment is not needed again). Along with compressed swap there is compacted swap, a feature that writes out many fragments to a single disk block, reducing the number of write submissions.

Many other changes, including features, cleanups and bug fixes, are in this patch. Give it a try.

Details:
http://sourceforge.net/project/shownotes.php?release_id=45561

Download (indirect link):
http://prdownloads.sourceforge.net/linuxcompressed/patch-comp-cache-2.4.18-0.24pre4.bz2?download

-- Rodrigo
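To illustrate the compacted swap idea in isolation, here is a small C sketch of packing several compressed fragments into one page-sized block before a single writeout. It is not the patch's code: submit_block() is a hypothetical stand-in for the real block-I/O submission, and the fragment sizes are invented for the example.

/*
 * Sketch of compacted swap: compressed fragments are appended into a
 * page-sized buffer, and the buffer is only submitted for writeout
 * when the next fragment no longer fits, so several fragments share
 * a single disk block instead of one block each.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

struct swap_block {
    char   data[BLOCK_SIZE];
    size_t used;
    int    nr_fragments;
};

/* hypothetical hook standing in for the real block-I/O submission */
static void submit_block(struct swap_block *b)
{
    printf("writing block: %d fragments, %zu bytes used\n",
           b->nr_fragments, b->used);
    b->used = 0;
    b->nr_fragments = 0;
}

static void add_fragment(struct swap_block *b, const char *frag, size_t len)
{
    if (len > BLOCK_SIZE - b->used)
        submit_block(b);                    /* flush before it overflows */
    memcpy(b->data + b->used, frag, len);
    b->used += len;
    b->nr_fragments++;
}

int main(void)
{
    struct swap_block blk = { .used = 0, .nr_fragments = 0 };
    char frag[1500] = { 0 };

    /* five 1500-byte fragments need three blocks instead of five */
    for (int i = 0; i < 5; i++)
        add_fragment(&blk, frag, sizeof(frag));
    if (blk.used)
        submit_block(&blk);                 /* flush the partial tail */
    return 0;
}

Fewer write submissions is the whole point: each saved submission is one less disk operation on the swapout path, which matters most exactly when the system is under memory pressure.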
From: Rodrigo S. de C. <rc...@im...> - 2002-07-31 21:03:42

A new patch is out. Besides many bug fixes, this version features full support for preemptible kernels. The latest tests run on a preemptible kernel (i.e. comp cache + page cache support + double page size + preempt + lockbreak) have run for a long time on my machine and appear to be reasonably stable. It has not been tested on SMP systems yet.

Release notes:
http://sourceforge.net/project/shownotes.php?release_id=43957

Download:
http://prdownloads.sourceforge.net/linuxcompressed/patch-comp-cache-2.4.18-0.24pre2.bz2?download

Direct link:
http://osdn.dl.sourceforge.net/sourceforge/linuxcompressed/patch-comp-cache-2.4.18-0.24pre2.bz2

PS: The patch was uploaded again a few minutes ago due to a bug which caused a compilation error. Download it again if you fetched it earlier.

-- Rodrigo
From: Rodrigo S. de C. <rc...@im...> - 2002-07-15 11:36:25

Paolo Ciarrocchi reported a compilation error with 0.23 that happens when this version is compiled with the following configuration:

CONFIG_COMP_CACHE=y
CONFIG_COMP_PAGE_CACHE=y
# CONFIG_COMP_DEMAND_RESIZE is not set

Below is a patch that fixes it. I also replaced (as of Jul 15th, 11h33 GMT) the 0.23 patch on the web site with a new version that has this bug fixed.

--- cvs/linux/mm/comp_cache/main.c	Thu Jul 11 16:05:14 2002
+++ linuxcompressed/mm/comp_cache/main.c	Mon Jul 15 08:20:26 2002
@@ -1,7 +1,7 @@
 /*
  * linux/mm/comp_cache/main.c
  *
- * Time-stamp: <2002-07-11 09:32:13 rcastro>
+ * Time-stamp: <2002-07-15 08:20:26 rcastro>
  *
  * Linux Virtual Memory Compressed Cache
  *
@@ -215,7 +215,7 @@
 {
 	struct comp_cache_fragment * fragment;
 	struct comp_cache_page * comp_page;
-	unsigned short comp_size, dirty;
+	unsigned short comp_size;
 	struct page * old_page;
 	int ret = 0;
 
@@ -246,9 +246,7 @@
 	/* it's not mapped by any process, therefore we can trade this
 	 * page with a page reserved for compressed cache use */
 	comp_size = PAGE_SIZE;
-	dirty = 0;
-
-	comp_page = get_comp_cache_page(*page, comp_size, &fragment, dirty, 0, gfp_mask, priority);
+	comp_page = get_comp_cache_page(*page, comp_size, &fragment, 0, gfp_mask, priority);
 	if (!comp_page)
 		return ret;
 
-- Rodrigo S. de Castro <rc...@im...>
From: Rodrigo S. de C. <rc...@im...> - 2002-07-12 15:43:24

This major version features a much more stable compressed cache code, with many bugs fixed over the last 9 pre versions.

New online data about compressed cache is provided by two new /proc entries: comp_cache_hist and comp_cache_frag. The former shows how many fragments are stored in compressed cache, so you can check how it is distributed throughout the several pages of compressed cache. The latter shows the fragmentation within the compressed cache: the free space (space that can be used right away) and the fragmented space (space that is free but isn't contiguous, so the fragments need to be compacted before it can be reused). The already existing /proc/comp_cache_stat entry now shows more info about compressed cache, and its output has also been redesigned.

The most important improvements in this new version concern performance. That matter has been heavily researched and we tried to improve as much as we could (we believe there are still some improvements to go). Mostly conceptual bugs were found and fixed with the aid of the complex Linux kernel compilation test. Regarding performance, VM watermarks are changed when compressed cache is enabled to decrease pressure on the uncompressed cache. Also, readaheads (for swapins and file reads) are much more intelligent and neither mess up page ordering nor force a larger number of disk reads.

A new feature in this version is support for pages with buffers when page cache support is enabled. That helps decrease the IO performed to clean buffers in the VM system. Another feature is the ability to resize compressed cache on demand: the compressed cache starts with a minimum set of reserved pages and grows up to a maximum size which is set at boot time. As soon as the compressed cache is not needed by the system, it shrinks.

The LZO compression algorithm is back, and the compression algorithm can now be selected not only through the sysctl entry but also by a kernel parameter (compalg=). Thus, "compalg=0" selects WKdm, "compalg=1" picks WK4x4 and "compalg=2" chooses LZO.

Compressed cache memory is correctly accounted in /proc/meminfo, so the "free" program shows the correct amount of memory for each cache (notably when page cache support is enabled). The meminfo proc entry also displays how much memory is reserved for compressed cache and how much memory is effectively used.

Some work on adaptivity has been done, but it was discarded since it resulted in non-functional code. More details can be checked below in the logs for every pre version.

Note: there are some remaining bugs which affect stability on some systems. We will try to fix them as soon as possible.

Notes
-----
http://sourceforge.net/project/shownotes.php?release_id=99282

Download
--------
http://prdownloads.sourceforge.net/linuxcompressed/patch-comp-cache-2.4.18-0.23.bz2?download

Direct Link
http://west.dl.sourceforge.net/sourceforge/linuxcompressed/patch-comp-cache-2.4.18-0.23.bz2

-- Rodrigo S. de Castro <rc...@im...>
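As a rough illustration of the free-versus-fragmented distinction that comp_cache_frag reports, the small C program below tallies the two quantities for a few made-up compressed-cache pages. It is only a model under assumed per-page bookkeeping (used bytes plus a contiguous free tail); the real patch tracks fragments differently.

/*
 * Model only: "free space" is the contiguous tail of each
 * compressed-cache page that can hold a new fragment right away,
 * while "fragmented space" is free but scattered between fragments
 * and would require compaction before reuse.
 */
#include <stdio.h>

#define COMP_PAGE_SIZE 4096

struct comp_page {
    unsigned int used;        /* bytes occupied by fragments      */
    unsigned int tail_free;   /* contiguous free bytes at the end */
};

int main(void)
{
    struct comp_page pages[] = {
        { .used = 3000, .tail_free = 1096 },   /* no holes            */
        { .used = 2500, .tail_free =  500 },   /* 1096 bytes in holes */
        { .used =  800, .tail_free = 3296 },   /* no holes            */
    };
    unsigned long free_space = 0, fragmented = 0;

    for (unsigned i = 0; i < sizeof(pages) / sizeof(pages[0]); i++) {
        free_space += pages[i].tail_free;
        fragmented += COMP_PAGE_SIZE - pages[i].used - pages[i].tail_free;
    }
    printf("free: %lu bytes, fragmented: %lu bytes\n", free_space, fragmented);
    return 0;
}

A cache with a lot of "fragmented" space is not actually short of room; it just needs compaction before new fragments can be stored, which is the situation the new /proc entry makes visible.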
From: Rodrigo S. de C. <rc...@im...> - 2002-07-11 20:09:15

Along with the #kernelnewbies channel, we decided to move our IRC channel (#linux-mm-cc) to the irc.oftc.net server too.

Regards,
-- Rodrigo S. de Castro <rc...@im...>
From: Rodrigo S. de C. <rc...@im...> - 2002-07-10 19:37:09

Hi Marc-Christian,

On Sat, Jul 06, 2002 at 04:18:27PM +0200, Marc-Christian Petersen wrote:
> again, pre9 is a part of WOLK (local tree 3.5rc4) and i must say,
> it's doing fine. I've stress tested it for ~24h of heavy disk I/O
> without any freeze, oops, panic or anything like that, AND WITH
> Preempt AND LockBreak! Great!! :)

Very nice to hear that! Thank you very much for running those tests!

> The above test was without on-demand resizing and without page
> compression.

Nevertheless, today (Jul 10th) an anonymous user posted a bug report to the Sourceforge BTS about a hang that happens when opening a huge JPEG file or when the swap space gets full. His compressed cache setup used resizing on demand only (no page cache support). That looks like the hang reported by Paolo Ciarrocchi when running mmap001, which I haven't been able to reproduce so far.

http://sourceforge.net/tracker/index.php?func=detail&aid=579690&group_id=13472&atid=113472

> With page compression and Lockbreak/Preempt the machine freezes.

The same behaviour as before.

> P.S.: Anyone want to donate me an SMP system? Compiling takes hours
> due to testing different configs.

:-)

Best regards,
-- Rodrigo S. de Castro <rc...@im...>
From: Marc-Christian P. <m....@gm...> - 2002-07-06 14:18:52

Hi Rodrigo, hi all others :)

Again, pre9 is a part of WOLK (local tree 3.5rc4) and I must say it's doing fine. I've stress tested it for ~24h of heavy disk I/O without any freeze, oops, panic or anything like that, AND WITH Preempt AND LockBreak! Great!! :)

The above test was without on-demand resizing and without page compression. With page compression and Lockbreak/Preempt the machine freezes.

compcache=32M

P.S.: Anyone want to donate me an SMP system? Compiling takes hours due to testing different configs. ;-)))

--
Kind regards
Marc-Christian Petersen

http://sourceforge.net/projects/wolk

PGP/GnuPG Key: 1024D/569DE2E3DB441A16
Fingerprint: 3469 0CF8 CA7E 0042 7824 080A 569D E2E3 DB44 1A16
Key available at www.keyserver.net. Encrypted e-mail preferred.
From: Rodrigo S. de C. <rc...@im...> - 2002-07-05 12:59:54

On Fri, Jul 05, 2002 at 01:13:03PM +0200, Michal Plachta wrote:
> i found that /proc/sys/vm/comp_cache/size is changing itself as it
> wants...

Since all options for compressed cache are on, you have enabled the "resize compressed cache on demand" option. Therefore, you cannot resize compressed cache via the sysctl entry (this is explained in the configuration help). Compressed cache resizes by itself, using up to the size defined by the "compsize=" kernel parameter. Recall that "compsize=" accepts input like the "mem=" parameter (compsize=32M, for example).

I will change the permissions of /proc/sys/vm/comp_cache/size when the resize on demand option is enabled, so you won't be able to echo any value into it.

> when i try to "echo 4096 > /proc/sys/vm/comp_cache/size" it changes
> for a few seconds (probably until the first next compress/decompress)
> and then it goes back to the last value. (i'm trying to lower from
> ~11000 to 4096)

That's right. Even if you set the variable to a new value, the resize on demand code uses the current size to grow or shrink the cache.

> All options for cache compression in kernel: ON

Regards,
-- Rodrigo S. de Castro <rc...@im...>
From: Rodrigo S. de C. <rc...@im...> - 2002-07-05 12:26:33

I performed some tests with the Linux kernel compilation (2.4.18) and a scientific program named MUMmer to check the performance of our latest version, 0.23pre8. The latter test is very interesting because it makes very intensive use of CPU and memory to do its computations.

Both sets of tests were run with the aid of a script which automates the process. Every run is preceded by a cold reboot; the script links itself into the rcS.d directory so it runs before the main services come up in the boot process.

The compressed cache configuration used for each test:

- Compressed Cache version 0.23pre8
- Support for Page Cache compression
- Resize Compressed Cache on Demand
- WKdm compression algorithm

Linux Kernel Compilation
------------------------
http://linuxcompressed.sourceforge.net/statistics/0.23pre8_kernel/

Very nice results for the -j2 and -j4 concurrency levels, but not yet good results for the -j1 test. Still, for the first time there is a gain in this case (-j1), even if a tiny one, as you will be able to check in the complete data available on the page. For the higher concurrency levels (-j2 and -j4), the compressed cache completion time for the compilation is up to 36% smaller than vanilla's time.

MUMmer
------
http://linuxcompressed.sourceforge.net/statistics/0.23pre8_mummer/

This program runs very well on a system with compressed cache, running up to 45% faster than on a system with a vanilla kernel. There are gains only for small caches, since larger compressed caches end up causing so many page faults between the uncompressed and compressed caches that the system becomes unusable.

-- Rodrigo S. de Castro <rc...@im...>
From: Michal P. <gl...@ir...> - 2002-07-05 11:12:36

Hello,

I found that /proc/sys/vm/comp_cache/size is changing itself as it wants... When I try to "echo 4096 > /proc/sys/vm/comp_cache/size" it changes for a few seconds (probably until the first next compress/decompress) and then it goes back to the last value. (I'm trying to lower it from ~11000 to 4096.) *Lowering isn't possible for now?*

Algorithm changing is OK.

Conf: latest version of your patch (downloaded yesterday), all options for cache compression in the kernel: ON.
Kernel 2.4.18, RH 7.1, 92MB SDRAM, 366MHz Celeron (66MHz bus), 6.4GB ATA33 Seagate disk, DMA on.

=====================
MemTotal:        92668 kB
MemFree:          5732 kB
MemShared:           0 kB
Buffers:           984 kB
Cached:          19096 kB
CCacheAlloc:     43572 kB
CCacheUsed:      34382 kB
SwapCached:       7880 kB
Active:          21596 kB
Inactive:         9920 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:        92668 kB
LowFree:          5732 kB
SwapTotal:      385520 kB
SwapFree:       345268 kB
======================

compressed cache - statistics
algorithm WKdm*

Compressed Pages:   242208
  Swap Cache:        69268
  Page Cache:       172940
  Dirty:             39359
  Clean:            202849
Decompressed Pages:  86229
  Swap Cache:        57916
  Page Cache:        28313
Written Out:         10379
  Swap Cache:        10379
  Page Cache:            0
Faulted In:          88679
  Swap Cache:        48722
  Page Cache:        39957

Compression
  MinSize: 272  MaxSize: 4372  AvgSize: 2866  Ratio: 69%
  MinCycles: 24941  MaxCycles: 6049369  AvgCycles: 79958
Decompression
  MinCycles: 9794  MaxCycles: 2154162  AvgCycles: 29394

============================
compressed cache - free space histogram
               total    0f    1f    2f    3f  more  buffers  nopg
          0:    1852     0  1844     8     0     0        0     0
    1 - 200:    5421     0  1802  3225   306    88        0     0
  200 - 400:    1296     0  1024   182    70    20        0     0
  400 - 600:     292     0   279    12     0     1        0     0
  600 - 800:     318     0   315     2     1     0        0     0
 800 - 1000:     203     0   200     3     0     0        0     0
1000 - 1200:     157     0   156     1     0     0        0     0
1200 - 1400:     456     0   456     0     0     0        0     0
1400 - 1600:     204     0   204     0     0     0        0     0
1600 - 1800:     203     0   203     0     0     0        0     0
1800 - 2000:     222     0   222     0     0     0        0     0
2000 - 2200:      23     0    23     0     0     0        0     0
2200 - 2400:       0     0     0     0     0     0        0     0
2400 - 2600:       0     0     0     0     0     0        0     0
2600 - 2800:       0     0     0     0     0     0        0     0
2800 - 3000:       0     0     0     0     0     0        0     0
3000 - 3200:       0     0     0     0     0     0        0     0
3200 - 3400:       0     0     0     0     0     0        0     0
3400 - 3600:       0     0     0     0     0     0        0     0
3600 - 3800:       0     0     0     0     0     0        0     0
3800 - 4000:       0     0     0     0     0     0        0     0
4001 - 4096:      20    20     0     0     0     0        0     0
==============================

--
Michal 'Glaeken' Plachta
gl...@ir...
Friday, July 05, 2002 12:58
From: Marc-Christian P. <m....@gm...> - 2002-07-04 11:35:43

On Thursday 04 July 2002 11:15, Patrick Lim wrote:

Hi Patrick,

> I've tried to patch the pre8 into 2.4.18 kernel but fail. Some of the
> files cannot be saved.

I don't have any problems so far with my patchset WOLK, which is based on 2.4.18 vanilla. Maybe you want to have a look at http://sf.net/projects/wolk; pre8 is included in 3.5rc3.

> Do you have any idea when is this going to fixes?

Hmm? :)

> Btw, I've look at the stats and it really amazed me. I can't wait to get
> the pre8 to work.
> Keep up the good job.

100% agreed!! :)

> By this way, it will convience many more people to try and to give feedback
> such as bugs and motivation. It will speed things up a little bit. I've
> asked many people around. Most of them heard the word 'Alpha', there're
> disappointed no matter how good the software is.

Many users of WOLK also use compressed cache and almost all have no problems with it :)

--
Kind regards
Marc-Christian Petersen

http://sourceforge.net/projects/wolk

PGP/GnuPG Key: 1024D/569DE2E3DB441A16
Fingerprint: 3469 0CF8 CA7E 0042 7824 080A 569D E2E3 DB44 1A16
Key available at www.keyserver.net. Encrypted e-mail preferred.