linuxcompressed-devel Mailing List for Linux Compressed Cache (Page 9)
Status: Beta
Brought to you by:
nitin_sf
From: Rodrigo S de C. <ro...@te...> - 2005-11-26 20:48:29
Hi Nitin,

I am very sorry for the delay. I had restricted internet access this week and couldn't answer your email earlier.

If we are going to do a new implementation, I agree with the static compressed cache approach. This was the first step of the current 2.4 implementation and I think it was very useful. Actually, even if we were to port the current code, I think that disabling adaptivity would be very useful to make it work stably initially, and then we add and improve the adaptivity heuristic.

Concerning the kernel version, I would go with the latest vanilla version and update for every release, since I don't think there would be many differences in the -ac/-mm branches that could justify the effort of keeping up to date with them. In the past, during the 2.4 implementation, we reached a point where some people asked us to keep up to date with the -ac branch, but the code was already somewhat stable and people were using it. Therefore, I think your approach would work.

This week I discussed compressed cache in Linux 2.6 a little, and there is something we must think about when implementing this new version. In 2.6 there is support for huge pages (4 MB pages instead of 4 KB, at least on i386). We didn't have that in 2.4 and we must devise a way to handle them.

Finally, great to hear about you taking compressed cache as your final university project. I really hope we can work together on this.

PS: Tell me your SourceForge user so I can add you as a developer in the project if you are about to start coding. Let's manage the CVS repository with directories or branches for each implementation (2.4/2.6).

Best regards,

Rodrigo

-----Original Message-----
From: Nitin Gupta [mailto:nit...@gm...]
Sent: terça-feira, 22 de novembro de 2005 08:36
To: Rodrigo S de Castro
Cc: lin...@li...
Subject: Re: Compressed cache status

[...]
From: Nitin G. <nit...@gm...> - 2005-11-22 10:36:27
Hi Rodrigo,

I think the coding can now start from next week.

For the compressed structure, I think the 2-page cell structure would be best, as you have done a lot of research here and my guesswork will not help.

For the implementation, I think first getting a static compressed cache implemented for the 2.6 kernel would be very useful. Actually working on the VM code will give us a clear understanding of all the relevant recent kernel changes. Although you have a lot of experience working on VM code, for me it would be very difficult to come up with a good heuristic right now and go straight on to implementing a dynamic compressed cache. Also, this will allow us to parallelize the work on the heuristic without delaying the rest of the work.

I also think that working on a new implementation, instead of just trying to port your previous work, would be better, since older bugs may become very hard to trace down, and a fresh start gives more flexibility in the implementation.

To start with the implementation, should we begin with the latest vanilla kernel or -ac/-mm? In my previous work I went with the latest vanilla and updated it about every 3 weeks to keep the work current. Do you think this approach will work here?

I have also taken this up as my final-year university project and as an entry for a competition sponsored by Red Hat (http://ekalavya.iitb.ac.in/rhs/ - Group RHS051032). So, it will be great to have you as mentor!

Best Regards,

Nitin

On 11/20/05, Rodrigo S de Castro <ro...@te...> wrote:
> [...]
From: Rodrigo S de C. <ro...@te...> - 2005-11-20 15:32:23
Hi Nitin,

As I mentioned, there is a possibility that I return to compressed cache development. In the next two or three weeks I will have a decision about this.

Concerning the compressed cache structure, the fragmentation is in the cells (i.e. contiguous pages), and some metadata keeps track of the fragmented space in the cell. Of course there is a small memory space overhead to keep this data, and a performance penalty, but it is negligible given the benefit we get (some benchmark results would back this statement). What is interesting in this metadata is that we don't have much overhead from the fragmentation and the compaction. If it is worth compacting a cell rather than allocating a new one or evicting a compressed page, we do it after a quick hash table lookup. Moving data is expensive, but it is less expensive than eviction or a new page allocation (remember we are always under memory pressure).

In case of too much fragmentation in the compressed cache (when we don't have any single cell whose compaction would result in enough free space), a scheme that performs compaction by moving compressed pages from one cell to another could improve the allocated memory space a lot, but I am not sure the cost would be affordable. Moreover, I am not sure this is a common case anyway, but it is very simple to keep track of the overall free space to get a feeling for it after some tests.

Your ideas for the use of compressed cache are very interesting. I would say that its usage in PDAs/cell phones is also something that may be interesting (even though, in this case, performance improvement may not be the primary goal).

I went through the 2.6 memory code recently, but very briefly. I had the impression that it does not have substantial changes in the eviction path, except for the inclusion of rmap. We could figure out how knowing the processes mapping a page could help us. Another research topic would be checking kVMTrace, developed by Scott Kaplan. Maybe it could help us improve the heuristic to avoid maladaptivity, but I still don't know enough about it to advance our discussion at this moment.

As a general guideline, I would suggest that you take into consideration the design decisions I made in this work. They are very important and are all lessons I learned during the implementation and my research:

- Cells composed of 2 contiguous pages
- Page cache (not only swap cache)
- Heuristic to try to disable compressed cache when it is not useful
- Virtual swap address
- Page order
- Possibility of swap compression
- Idea behind profit and cost lists in the heuristic
- Scheduling impact
- Buffer/IO operations impact
- Adjustment of Linux watermarks

Having all these points in mind will let us start a port or a new implementation far ahead. In summary, the main lesson is that the system is very sensitive to any changes in it (mainly reduction of memory for other caches).

Let's keep in touch. As soon as I have a decision about the possibility of working with you on this, I will let you know.

Best regards,

Rodrigo

-----Original Message-----
From: Nitin Gupta [mailto:nit...@gm...]
Sent: quarta-feira, 16 de novembro de 2005 12:47
To: ro...@te...
Cc: lin...@li...
Subject: Compressed cache status

[...]
From: Nitin G. <nit...@gm...> - 2005-11-17 05:22:25
The e-mail id shown next to my name in the post 'Compressed cache status' is wrong. It shows nit...@gm... but my id is actually nit...@gm... Why is it showing an incorrect id?

Thanks,

Nitin
From: Nitin G. <nit...@gm...> - 2005-11-16 14:47:12
Hi Rodrigo,

It was great to see your reply!

I've been working on the compressed cache port to 2.6 for about 2 months, whenever I can take out the time. Currently not a single line of code has been written. Since I was new to the VMM subsystem, the initial study took a lot of time. The details seemed overwhelming, but after a lot of work the picture is much clearer now.

So, the current status is that I have read a lot on the VMM and your paper on adaptive compressed caching. Now I'm looking for the most recent changes in the kernel that can help the compressed cache design. Since I've done some kernel patches before (for the CIFS vfs) which involved a lot of concurrency issues, I have a general feel for kernel programming. So I think once a complete "roadmap" is prepared, work can really speed up.

So, before I go hunting for the right kernel hooks, I wanted to have the following on paper:

- Compressed cache structure - currently the cell-based design (2-4 pages taken as a unit) has the problem of fragmentation and relies on compaction as a last resort to free up memory before starting to swap out to disk. Fragmentation cannot be eliminated, but I think a different design (which I am looking for) can surely reduce it. Also, it needs to be addressable storage (the kernel asks for the page in swap at xxxxx; where is this in the compressed cache?). Although you have implemented this, I couldn't get it.

- Adaptivity heuristic - I've read your current heuristic method, but as you mentioned, rmap and other recent developments in 2.6 can be really helpful here. So I'm now reading more on recent developments. Heuristic quality will be very significant and can determine the success of this work. As you have shown, setting just the right static cache size for a particular workload is very good for performance, while the wrong choice produces a significant slowdown.

- Compression algorithm - these can simply be reused from your implementation.

With these in hand, I can then proceed with the implementation: find the right kernel hooks, implement a "dummy" compressed cache (not really doing anything, just to make sure I've got the right entry points), and then gradually implement all the parts (from static to dynamic cache).

Also, I'm particularly looking forward to its use in:

- LiveCD environments - swap is not available and CD-ROM read speed is exceptionally slow.
- Virtualized environments - the hard disk is a file backed on the host OS, so the RAM/disk speed gap is even greater here (many more positive factors here).

In general I'm not very busy and can spare a lot of time for this work. But unfortunately my semester exams are due to start in early December and last for about 3 weeks. However, I'm extremely interested in this project and it would be great if I can get your help.

Best regards,

Nitin
From: void <chi...@ya...> - 2005-01-05 09:22:15
Dear Sirs,

I am studying in the 3rd year of Engineering in Information Technology in India. I have been studying the Linux OS for some time, and also the Linux kernel, and I have started programming in the Linux kernel. I have found your project interesting and would like to know more and, if possible, work on it.

Looking forward to your cooperation,

Chirag Jog

Inspired by the Devil, driven by the passion
From: Olivier K. <ka...@ka...> - 2004-11-25 21:22:33
> > I tried to apply the latest patch (for 2.4.18) to 2.4.27, but some
> > work needs to be done, as little things seem to have changed in
> > the code. (about 7 rejects, not obvious to patch by hand)
>
> I am going to check what these rejects are. If they are simple, I will
> post a patch for you soon. :-)

Oh, many thanks Rodrigo. So I'll be able to start working again on this really soon. Great!
From: Rodrigo S de C. <ro...@te...> - 2004-11-25 20:30:55
Hi Olivier,

[Sorry for the delay]

On Sun, Nov 07, 2004 at 02:42:08AM +0100, Olivier Kaloudoff wrote:
> I'm really interested in linux compressed, as my current
> system is really slow when it comes to I/O access;
>
> 3x180 GB IDE disks; unfortunately they stand on two IDE channels.
>
> Using "du -sk *" on a 110 GB partition is slow and takes no more
> than 20% CPU. Launching the same command a second time is not faster,
> which makes me think that some caching or cache compression technique
> would be able to improve things.
>
> I have 386 MB RAM, 1 GB swap, no special tuning as of now.
> Unfortunately, running 2.4.27, I'm unable to find the
> /proc/sys/vm/buffermem file which would have allowed me to tune I/O
> buffers.
>
> Do you think that compressed linux could improve performance in my
> case? (I think that low RAM is one of my problems, even if I don't
> have more than X running)

Depending on your workload, compressed cache may help you. It will use memory for compressing pages from the page cache that are less recently used. Given that this memory is used, only a small portion of your disk data will be cached in it. It is very useful for "enlarging" your page cache through compression, reducing the anonymous memory to be written to the disk or the file pages that would be discarded and reread from the disk. It is unlikely to be the solution for your "du", but it may help you when using your system. I'd like to know what your experience with it would be.

> I tried to apply the latest patch (for 2.4.18) to 2.4.27, but
> some work needs to be done, as little things seem to have changed in
> the code. (about 7 rejects, not obvious to patch by hand)

I am going to check what these rejects are. If they are simple, I will post a patch for you soon. :-)

Regards,

--
Rodrigo
From: Olivier K. <ka...@ka...> - 2004-11-07 01:42:18
Hi,

I'm really interested in linux compressed, as my current system is really slow when it comes to I/O access: 3x180 GB IDE disks; unfortunately they stand on two IDE channels.

Using "du -sk *" on a 110 GB partition is slow and takes no more than 20% CPU. Launching the same command a second time is not faster, which makes me think that some caching or cache compression technique would be able to improve things.

I have 386 MB RAM, 1 GB swap, no special tuning as of now. Unfortunately, running 2.4.27, I'm unable to find the /proc/sys/vm/buffermem file which would have allowed me to tune I/O buffers.

Do you think that compressed linux could improve performance in my case? (I think that low RAM is one of my problems, even if I don't have more than X running)

I tried to apply the latest patch (for 2.4.18) to 2.4.27, but some work needs to be done, as little things seem to have changed in the code (about 7 rejects, not obvious to patch by hand).

Any comments very welcome!

Regards,

Olivier Kaloudoff
LUG Linux Azur
http://www.linux-azur.org
From: Vuong N. V. <vu...@Cy...> - 2004-07-20 03:40:53
Dear all,

Please help me with a way to increase free memory and decrease cached memory! Thank you very much!

Vuong

Best regards!
From: Bob A. <cu...@gr...> - 2004-06-27 16:27:04
|
Interesting approach. The project seems to be quite dead and obsolete, but there are a few buggy releases. On Sun, 6 Jun 2004 20:18:18 +0900 Youtae Park <le...@ye...> wrote: > Hello!!! This is Youtae Park, in Korea. I'm a Univ. student. And now > I'll try to compress cache pages in the linux kernel for my term > project, like the paper on compressed caching. I understand that paper. But, > I couldn't make a compression algorithm, especially > |
From: <le...@ye...> - 2004-06-06 11:20:27
|
Hello!!! This is Youtae Park, in Korea. I'm a Univ. student. And now I'll try to compress cache pages in the linux kernel for my term project, like the paper on compressed caching. I understand that paper. But, I couldn't make a compression algorithm, especially |
From: <le...@ye...> - 2004-06-06 11:18:04
|
Hello!!! This is Youtae Park, in Korea. I'm a Univ. student. And now I'll try to compress cache pages in the linux kernel for my term project, like the paper on compressed caching. I understand that paper. But, I couldn't make a compression algorithm, especially one working in the linux kernel, so could I get compression source code that can compress cache pages? Please, help me. Have a nice day. I'm looking forward to the answer, as soon as possible…^^ |
From: Rodrigo S. de C. <rc...@im...> - 2004-03-03 21:09:20
|
On Sat, Feb 21, 2004 at 02:31:01PM +0000, Peter Ruskin wrote: > I'm interested in trying compressed caching but I'm using linux > kernel 2.6.3. > > Are there any patches yet? Unfortunately I am unable to work on the compressed caching project at the moment, in spite of the many interesting ideas that could be implemented and researched. In order to port to 2.6.3, one would have to know all the VM changes involved in this kernel version, which will take quite a while, besides requiring preliminary knowledge of the 2.4 implementation of CC. Maybe in the near future I will have more spare time to work on the project. Best regards, -- Rodrigo |
From: Bob A. <cu...@gr...> - 2004-02-29 21:01:41
|
> 1. Is my idea useful? To some extent. In the current stage of development you could use compressed swap (even if you don't have any swap, you can dedicate a bit of memory (see below) for it); as data in swap are compressed, you'll benefit from having some more swap space. Why won't you benefit from compressing the cache? Because your data are already compressed by cloop, unless the cloop code stores its buffers in uncompressed form. Besides this, the compressed cache code isn't fully functional (it contains bugs); of course someone will maybe fix them someday :) > 2. You know any other solution? Using video RAM. It is quite easy to check, and works after the X windows are started _with the nozap option_. Even if you gain as little as 4M of RAM, you can use the compressed swap feature from the linuxcompressed patch; this will make a bit more room for your data. Try searching google; there were a few people describing how to set it up. > 3. You know anybody working on same thing for one CD Linux? I use compressed cache on, e.g., a 486 laptop with 8M of RAM and a wifi x86-based router (so it switches its disk off). Having swap compressed is generally a good idea (e.g. it is harder to spoof it, it is harder to decrypt it too, and anyway disk seek time is larger than compression time). |
From: Digital I. Inc. <ok...@di...> - 2004-02-29 19:56:13
|
Hello developers. I am a member of coLinux (a kind of Linux on Windows) and am also working on KNOPPIX, a one-CD Linux distribution. Both projects struggle with the same problem: shortage of memory. KNOPPIX must run without a HDD, so it needs more cache buffer than usual but cannot even use swap. coLinux has the same problem. It lives in a special area of Windows NT kernel memory, and that area has a size limitation. I think one of the candidate solutions is your compressed caching. I want to ask you three questions: 1. Is my idea useful? 2. Do you know any other solution? 3. Do you know anybody working on the same thing for a one-CD Linux? Sorry for my funny English. I hope you understand it. Okajima, Jun. President, Digitalinfra, Inc. Tokyo, Japan. http://www.digitalinfra.co.jp/ Member of coLinux Development Team. http://www.colinux.org/ |
From: Peter R. <pet...@ds...> - 2004-02-21 14:37:47
|
Hi, I'm interested in trying compressed caching but I'm using linux kernel 2.6.3. Are there any patches yet? |
From: Bob A. <cu...@pb...> - 2003-12-23 17:55:53
|
Nope, false alarm; apologies to all who believed in this and did the investigation (the pppd part). My ISP is down. Also my cu...@pb... is down, as something serious is f*ed up at the ISP side. Warm thanks to all lc-devel for their work :D. I am writing from an lc-enabled laptop (using a loopback file in RAM as a compressed swap, with success) :) and I just noticed I have used two such machines without serious problems :) On Mon, 22 Dec 2003 19:57:34 +0100 Bob Arctor <cu...@pb...> wrote: > <curious> bluefoxicy: btw. i have also suspection that compcache patch breaks ppp code (or some other code, like buffers so corrupted that it appears as this) . i have patched kernel on firewall, and it have uptime of 8 days... but now it behaves strange, some programs segfault > <-- nevdull has quit (Remote closed the connection) > <curious> bluefoxicy: and ppp0 connection is dropped very often now (not recieving packets, while everything seems to work ok > <curious> i'll switch to vanilla, because i am not sure that it isnt my modem (it is 2 years old device) or my isp (operating also on aged equip) . i'll probably know today for sure > <curious> such behaviour (of ppp) happened before, but very rarely, once a month or so > <curious> on vanilla > <curious> notihing in dmesg though > <bluefoxicy> curious: tell castro o.o > so i do :) |
From: Bob A. <cu...@pb...> - 2003-12-22 19:13:27
|
<curious> bluefoxicy: btw. i have also suspection that compcache patch breaks ppp code (or some other code, like buffers so corrupted that it appears as this) . i have patched kernel on firewall, and it have uptime of 8 days... but now it behaves strange, some programs segfault <-- nevdull has quit (Remote closed the connection) <curious> bluefoxicy: and ppp0 connection is dropped very often now (not recieving packets, while everything seems to work ok <curious> i'll switch to vanilla, because i am not sure that it isnt my modem (it is 2 years old device) or my isp (operating also on aged equip) . i'll probably know today for sure <curious> such behaviour (of ppp) happened before, but very rarely, once a month or so <curious> on vanilla <curious> notihing in dmesg though <bluefoxicy> curious: tell castro o.o so i do :) |
From: Gavin H. <gd...@ac...> - 2003-11-24 13:30:55
|
Hullo :) As the Subject suggests, can someone tell me what the status of these patches is with kernel 2.4.22? I tried applying both the 'stable' 2.4.18 one and the development 2.4.19 patch from the sourceforge Files section, but got errors with both. Is 2.4.x still supported, or should I migrate to 2.6? Cheers, Gavin. |
From: Bob A. <cu...@pb...> - 2003-10-22 01:15:54
|
OK, it works fine since I compiled and installed it on my firewall. It is an nfsroot machine; I prepared a version without sanity checks for the next reboot there. The compression ratio is ~60% there, and the machine is a lot more responsive. On the laptop it isn't working so well :/ It is a machine with a mere HDD, plus the squashfs patch, because part of the fs is compressed, with a plan to move it to a read-only flash medium when I am able to buy such a device (they still lack ~512M of DDRAM for r/w things; I am waiting for flash+dram 'compact flash' cards). The machine boots OK, but the first sign that something is wrong is that /var/pcmcia.log can't be changed; an i/o error is reported. Then I start X and ... the machine stops reading from disk just after detection of the vidcards. Even after ctrl-c, no other program can read from the disk. If I disable 'page cache compression' the machine works fine :) I get a 36% compression ratio then, and a lot of speedup. I have two X servers running, and two vncviewers on each of them: one is mine, the other is my gf's Xvnc. When I switch from one X to the other (by changing VT) it almost doesn't swap out, and it is quite quick. Before the patch it took 5-6 seconds (vs 1 sec with the patch) and I had a 'swap storm' during switching. On Tue, 21 Oct 2003 02:42:53 +0200 Bob Arctor <cu...@pb...> wrote: > > John R Moser sent me fixed WKdm.c , it should compile on older gcc and is C compliant :) > > > > On Fri, 17 Oct 2003 19:39:34 -0400 (EDT) > John R Moser <jm...@st...> wrote: > > > bob, I've tried oM, it's neat but not something i'm interested in ATM > > since it was a little unstable to me. I'd wait until 2.6 or 2.8 and just > > merge them there. I'm actually hoping linus will merge grsecurity and > > ccache into 2.6 by the end of its testing phase, but we'd need ccache on > > rmap and a 2.6 grsec first. oM is something i'm not too worried about > > right now. > > > > One thing I'm worried about though is that the ccache patched kernel > > doesn't seem to like my Nvidia card. 
Since Nvidia cards are common, this > > needs to be fixed. I'm not sure if it's a ccache bug or an nvdriver bug, > > or just a kernel bug that ccache brings out. I think ccache will work > > better in 2.4.22 and .23 for these though (it worked in .21 with my > > nvdriver) so it may just be kernel. > > > > There's also the 2.4 ccache (and oM, and many other patches) issue that > > `iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE` doesn't seem to > > work due to 'target problem'. Yes, the masq module is compiled. Try > > it. Try to fix this, since it's kind of important, NAT being popular and > > all. > > > > Anyway, here's my test results for a kernel compile. I think the > > measurement for using ccache was a bit high time-wise because I DID swap > > some, and so I am going to redo the test later with no swap enabled. > > Eventually I need to do some 20 tests under each condition. I'm still > > shakey about lzo so I didn't compile its init instruction into my kernel; > > however, i've discovered that compalg=WK4x4 and compalg=WKdm work > > properly! o.o! > > > > The trivial bug on line 309 I think of proc.c that spews so much about > > algorithm like 13000 or something is from being called by swapout.c and > > trying to update the stats for the algorithm that an UNCOMPRESSED page is > > using when it swaps out. Oops. Need to make a NO_ALGORITHM_IDX entry > > load at boot and use that as a dump for uncompressed stats. Also need to > > rewrite the page allocation code and so on. > > > > I'm not going to be writing anymore code in this for now. Go ahead and > > write some without me. I don't know how to work with modules, so none of > > that from me. > > > > Move the LZO, WKdm, and WK4x4 algs into modules and allow them to > > optionally be compiled in (but not default alg if loaded as modules). > > Give them the interface of the core central compression library (same as > > zlib). 
> > > > If there is no standard interface for the compression library functions, > > write one on top of the kernel's zlib and also place it on top of WKdm, > > WK4x4, and LZO when you dump them into the library core. Also, try to > > insert a registration system init function on top of the kernel's zlib > > interface, and call its init before initing compressed page cache if it > > is compiled in to the kernel for both compression and decompression. > > > > Someone needs to check WKdm and WK4x4 to see if they are really workable > > for >4k pages. > > > > --Bluefox Icy > |
From: Bob A. <cu...@pb...> - 2003-10-21 01:52:20
|
John R Moser sent me fixed WKdm.c , it should compile on older gcc and is C compliant :) On Fri, 17 Oct 2003 19:39:34 -0400 (EDT) John R Moser <jm...@st...> wrote: > bob, I've tried oM, it's neat but not something i'm interested in ATM > since it was a little unstable to me. I'd wait until 2.6 or 2.8 and just > merge them there. I'm actually hoping linus will merge grsecurity and > ccache into 2.6 by the end of its testing phase, but we'd need ccache on > rmap and a 2.6 grsec first. oM is something i'm not too worried about > right now. > > One thing I'm worried about though is that the ccache patched kernel > doesn't seem to like my Nvidia card. Since Nvidia cards are common, this > needs to be fixed. I'm not sure if it's a ccache bug or an nvdriver bug, > or just a kernel bug that ccache brings out. I think ccache will work > better in 2.4.22 and .23 for these though (it worked in .21 with my > nvdriver) so it may just be kernel. > > There's also the 2.4 ccache (and oM, and many other patches) issue that > `iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE` doesn't seem to > work due to 'target problem'. Yes, the masq module is compiled. Try > it. Try to fix this, since it's kind of important, NAT being popular and > all. > > Anyway, here's my test results for a kernel compile. I think the > measurement for using ccache was a bit high time-wise because I DID swap > some, and so I am going to redo the test later with no swap enabled. > Eventually I need to do some 20 tests under each condition. I'm still > shakey about lzo so I didn't compile its init instruction into my kernel; > however, i've discovered that compalg=WK4x4 and compalg=WKdm work > properly! o.o! > > The trivial bug on line 309 I think of proc.c that spews so much about > algorithm like 13000 or something is from being called by swapout.c and > trying to update the stats for the algorithm that an UNCOMPRESSED page is > using when it swaps out. Oops. 
Need to make a NO_ALGORITHM_IDX entry > load at boot and use that as a dump for uncompressed stats. Also need to > rewrite the page allocation code and so on. > > I'm not going to be writing anymore code in this for now. Go ahead and > write some without me. I don't know how to work with modules, so none of > that from me. > > Move the LZO, WKdm, and WK4x4 algs into modules and allow them to > optionally be compiled in (but not default alg if loaded as modules). > Give them the interface of the core central compression library (same as > zlib). > > If there is no standard interface for the compression library functions, > write one on top of the kernel's zlib and also place it on top of WKdm, > WK4x4, and LZO when you dump them into the library core. Also, try to > insert a registration system init function on top of the kernel's zlib > interface, and call its init before initing compressed page cache if it > is compiled in to the kernel for both compression and decompression. > > Someone needs to check WKdm and WK4x4 to see if they are really workable > for >4k pages. > > --Bluefox Icy |
From: John R M. <jm...@st...> - 2003-10-17 23:51:05
|
bob, I've tried oM, it's neat but not something i'm interested in ATM since it was a little unstable for me. I'd wait until 2.6 or 2.8 and just merge them there. I'm actually hoping linus will merge grsecurity and ccache into 2.6 by the end of its testing phase, but we'd need ccache on rmap and a 2.6 grsec first. oM is something i'm not too worried about right now. One thing I'm worried about though is that the ccache patched kernel doesn't seem to like my Nvidia card. Since Nvidia cards are common, this needs to be fixed. I'm not sure if it's a ccache bug or an nvdriver bug, or just a kernel bug that ccache brings out. I think ccache will work better in 2.4.22 and .23 for these though (it worked in .21 with my nvdriver) so it may just be the kernel. There's also the 2.4 ccache (and oM, and many other patches) issue that `iptables -t nat -I POSTROUTING -o eth0 -j MASQUERADE` doesn't seem to work due to a 'target problem'. Yes, the masq module is compiled. Try it. Try to fix this, since it's kind of important, NAT being popular and all. Anyway, here are my test results for a kernel compile. I think the measurement for using ccache was a bit high time-wise because I DID swap some, and so I am going to redo the test later with no swap enabled. Eventually I need to do some 20 tests under each condition. I'm still shaky about lzo so I didn't compile its init instruction into my kernel; however, i've discovered that compalg=WK4x4 and compalg=WKdm work properly! o.o! The trivial bug on line 309 (I think) of proc.c that spews so much about an algorithm like 13000 or something is from being called by swapout.c and trying to update the stats for the algorithm that an UNCOMPRESSED page is using when it swaps out. Oops. Need to make a NO_ALGORITHM_IDX entry load at boot and use that as a dump for uncompressed stats. Also need to rewrite the page allocation code and so on. I'm not going to be writing any more code in this for now. Go ahead and write some without me. 
I don't know how to work with modules, so none of that from me. Move the LZO, WKdm, and WK4x4 algs into modules and allow them to optionally be compiled in (but not default alg if loaded as modules). Give them the interface of the core central compression library (same as zlib). If there is no standard interface for the compression library functions, write one on top of the kernel's zlib and also place it on top of WKdm, WK4x4, and LZO when you dump them into the library core. Also, try to insert a registration system init function on top of the kernel's zlib interface, and call its init before initing compressed page cache if it is compiled in to the kernel for both compression and decompression. Someone needs to check WKdm and WK4x4 to see if they are really workable for >4k pages. --Bluefox Icy |
From: Bob A. <cu...@pb...> - 2003-10-17 21:32:34
|
Ah, I almost forgot... there is the openMosix kernel patch, which is very likely to be compatible with ccache, though I doubt that in SMP mode. I think it could be a good start to try to merge the oM patch with your vanilla one, and then try to make them both live together. Just a thought, because compressed cache data migrating over a cluster would _greatly_ improve performance (as bandwidth is the main issue then); also, a shared compressed cache over the cluster would be a good thing to consider... Anyway, even if you do not plan any development toward this, I would recommend you at least lurk at the oM code, and point out possible problems with the ccache patch to the oM developers. On Thu, 16 Oct 2003 16:37:34 -0400 (EDT) John R Moser <jm...@st...> wrote: > bluefox@Icechip bluefox $ cat /proc/comp_cache_stat > compressed cache - statistics > general > - (S) swap cache support enabled > - (P) page cache support enabled > - compressed swap support enabled > - maximum used size: 1088 KiB > - comp page size: 32 KiB > - failed allocations: 5381 > - Default algorithm: 1 (WKdm) > > algorithm WK4x4 > - (C) compressed pages: 0 (S: 0% P: 0%) > - (D) decompressed pages: 0 (S: 0% P: 0%) D/C: 0% > - (R) read pages: 0 (S: 0% P: 0%) R/C: 0% > - (W) written pages: 0 (S: 0% P: 0%) W/C: 0% > compression ratio: 0% (S: 0% P: 0%) > > algorithm WKdm > - (C) compressed pages: 159233 (S: 76% P: 23%) > - (D) decompressed pages: 94276 (S: 99% P: 0%) D/C: 59% > - (R) read pages: 5042 (S: 88% P: 11%) R/C: 3% > - (W) written pages: 57464 (S: 100% P: 0%) W/C: 36% > compression ratio: 53% (S: 44% P: 81%) > bluefox@Icechip bluefox $ > > A few bugs. It loses some pages once in a while and apps segfault once > in a while. We'll say that by Microsoft standards, it's stable. Quite a > long way to go before it's safe for production systems. > > I'd recommend single or double page size with WKdm (btw I hacked at that > a little. . . looked like you weren't allocating enough ram. 
I MAY have > overdone some of it though, because i couldn't tell from the code what 2 > of the arrays were supposed to be size-wise). Also, try out the no > spin_lock() code (under Verbose Options) and the boot-up sanity check > (look at it with dmesg). That sanity check has caught a few typos for me > that probably would have destroyed my system (fortunately it's just my > laptop I'm testing on). > > > > HACKISH CODE > > Most of my code is VERY hackish. I don't know how to make SURE that the > algorithm_idx is maintained for each piece of compressed data. I don't > know how to alter the meta data in the swap pages. Thus, I just > constantly set the page->algorithm_idx and fragment->algorithm_idx to > default_algorithm_idx whenever they're looked at. This means that the > algorithms can't be changed or anything! Until these hacks are removed, > there will be no altering of algorithms while running. I don't > understand the code enough to make it safe to remove all my hacks. > > First thing's first, go into comp_cache.h and find the > comp_cache_page_metadata structure on lines 275-280. When reading meta > data, instead of doing crap like: > > fragment_index = meta_data_offset + 4; > size = meta_data_offset; > offset = meta_data_offset + 2; > > You should have crap like the following: > > meta_data = meta_data_offset; > fragment_index = meta_data->fragment; > size = meta_data->size; > offset = meta_data->offset; > > Take a look at proc.c lines 563 to 593 and think about this. Instead of > incrimenting by 8 you could incriment by sizeof(struct > comp_cache_page_metadata). Thus it would be easy to modify the meta data > in the page. It'd also be easier to understand/grok the code. > > I'm not fixing the meta data thing. Castro can, or someone else can fix > it for him. I'm done with this at least until that gets taken care of. 
> Either way though, don't rely on me to clean up after myself; while > you're in there fixing that you should also clean up my bad/dirty hacks. > Most of them are marked. Just read through proc.c from top to bottom. > > > > UNHANDLED BUSY SIGNALS > > Two or three of the functions use the busy signal system I outlined > earlier to be SMP/Pre-empt safe. When they're busy, they by some method > return a busy signal. One of them does it differently and that should be > rewritten to not be dumb. Other than that, none of the calls really > handle the busy signal. They should wait 200 mS, then try again. After > a few tries (maybe 5? 1 whole second is long enough!) they should just > exit with a failed status. > > > > SMP > > Try this code out on SMP if you've got a spare system. It's buggy, so > YES it will cause quite a few hickups from lost pages, so DO NOT USE ON > SYSTEM WITH CRITICAL DATA!!! If you enable SMP it will force-enable the > spin_lock() free code. I'm interested in knowing if this works on SMP > now. The following changes have been made: > > - No spin_lock() calls. Functions are safe to call multiple times. > - Separate data for all buffers and algorithms. Any algorithm can be run > multiple times in parallel to compress and decompress, only limited by > the amount of physical ram. > > I haven't checked for static variables in all functions so be wary of > that! It may not yet be SMP safe but I think it is. > > --Bluefox Icy |
From: John R M. <jm...@st...> - 2003-10-16 20:42:50
|
bluefox@Icechip bluefox $ cat /proc/comp_cache_stat compressed cache - statistics general - (S) swap cache support enabled - (P) page cache support enabled - compressed swap support enabled - maximum used size: 1088 KiB - comp page size: 32 KiB - failed allocations: 5381 - Default algorithm: 1 (WKdm) algorithm WK4x4 - (C) compressed pages: 0 (S: 0% P: 0%) - (D) decompressed pages: 0 (S: 0% P: 0%) D/C: 0% - (R) read pages: 0 (S: 0% P: 0%) R/C: 0% - (W) written pages: 0 (S: 0% P: 0%) W/C: 0% compression ratio: 0% (S: 0% P: 0%) algorithm WKdm - (C) compressed pages: 159233 (S: 76% P: 23%) - (D) decompressed pages: 94276 (S: 99% P: 0%) D/C: 59% - (R) read pages: 5042 (S: 88% P: 11%) R/C: 3% - (W) written pages: 57464 (S: 100% P: 0%) W/C: 36% compression ratio: 53% (S: 44% P: 81%) bluefox@Icechip bluefox $ A few bugs. It loses some pages once in a while and apps segfault once in a while. We'll say that by Microsoft standards, it's stable. Quite a long way to go before it's safe for production systems. I'd recommend single or double page size with WKdm (btw I hacked at that a little. . . looked like you weren't allocating enough ram. I MAY have overdone some of it though, because i couldn't tell from the code what 2 of the arrays were supposed to be size-wise). Also, try out the no spin_lock() code (under Verbose Options) and the boot-up sanity check (look at it with dmesg). That sanity check has caught a few typos for me that probably would have destroyed my system (fortunately it's just my laptop I'm testing on). HACKISH CODE Most of my code is VERY hackish. I don't know how to make SURE that the algorithm_idx is maintained for each piece of compressed data. I don't know how to alter the meta data in the swap pages. Thus, I just constantly set the page->algorithm_idx and fragment->algorithm_idx to default_algorithm_idx whenever they're looked at. This means that the algorithms can't be changed or anything! 
Until these hacks are removed, there will be no altering of algorithms while running. I don't understand the code enough to make it safe to remove all my hacks. First things first, go into comp_cache.h and find the comp_cache_page_metadata structure on lines 275-280. When reading meta data, instead of doing crap like: fragment_index = meta_data_offset + 4; size = meta_data_offset; offset = meta_data_offset + 2; You should have crap like the following: meta_data = meta_data_offset; fragment_index = meta_data->fragment; size = meta_data->size; offset = meta_data->offset; Take a look at proc.c lines 563 to 593 and think about this. Instead of incrementing by 8 you could increment by sizeof(struct comp_cache_page_metadata). Thus it would be easy to modify the meta data in the page. It'd also be easier to understand/grok the code. I'm not fixing the meta data thing. Castro can, or someone else can fix it for him. I'm done with this at least until that gets taken care of. Either way though, don't rely on me to clean up after myself; while you're in there fixing that you should also clean up my bad/dirty hacks. Most of them are marked. Just read through proc.c from top to bottom. UNHANDLED BUSY SIGNALS Two or three of the functions use the busy signal system I outlined earlier to be SMP/Pre-empt safe. When they're busy, they by some method return a busy signal. One of them does it differently and that should be rewritten to not be dumb. Other than that, none of the calls really handle the busy signal. They should wait 200 mS, then try again. After a few tries (maybe 5? 1 whole second is long enough!) they should just exit with a failed status. SMP Try this code out on SMP if you've got a spare system. It's buggy, so YES it will cause quite a few hiccups from lost pages, so DO NOT USE ON A SYSTEM WITH CRITICAL DATA!!! If you enable SMP it will force-enable the spin_lock() free code. I'm interested in knowing if this works on SMP now. 
The following changes have been made: - No spin_lock() calls. Functions are safe to call multiple times. - Separate data for all buffers and algorithms. Any algorithm can be run multiple times in parallel to compress and decompress, only limited by the amount of physical ram. I haven't checked for static variables in all functions so be wary of that! It may not yet be SMP safe but I think it is. --Bluefox Icy |