Hi sir, I have been using the 7z source code in an embedded system without any problem to decompress files of up to 1 MB. Recently I have had the requirement of decompressing larger files, and I have run into the system's memory limitations. Because of them, the system restarts, so I wonder if there is any way to limit 7z's dynamic memory allocation to avoid this problem, even if it makes the decompression process slower. Excuse me if this is not the first time you have answered this question, but I have tried to find it on this website without success. Thanks in advance from Spain.
Write more details:
archive size, number of files, file sizes, lzma dictionary size, RAM size, and so on.
* Number of files: 1
* File size: The limit at which the system decompresses correctly is around 1 MB. Above this size, problems appear.
* File data format: Text.
* Archive size: Around 4 Kbytes.
* RAM size: 0x120000 (1,179,648 bytes), but it is not possible to know how many bytes are actually available to the application: this OS will grant whatever memory the application asks for, even if some of those bytes are already in use by the OS itself, which leads the system to crash.
* LZMA dictionary size: I believe (but I'm not sure) the LZMA dictionary size is set during compression and not at decompression time, so I suppose it is the default value (we compress files with the right mouse button: 7-Zip -> Add file to "name.7z").
What code do you use? C or C++ version?
Your RAM size is just 1 MB, and you want to extract a 2 MB file?
I'm using C version.
The RAM issue is a complex subject with this kind of system (it is hard even for the provider to explain how it is managed by the OS and how much is available for applications...). Physical RAM is 4 MB, although this memory is shared between the OS and the applications. The amount of RAM the OS itself requires is not known, but it is guaranteed that around 0x120000 (1,179,648 bytes) must be free for applications. On the other hand, when the application asks for dynamic memory the system grants it, even if that memory is already in use by the system itself; in that case the system crashes. For this reason, the objective should be to use as little RAM as possible in order to decompress a maximum file size of 2 MB (around a 50-100 KB 7-Zip archive).
What decompression do you need?
RAM to RAM
file to file?
Excuse me, my knowledge about compression/decompression is close to zero, and from your question I now realize it is possible to decompress file to file. Given my bad experience with the RAM-to-RAM method (due to this crazy system...), which is what I am currently using, it would be very interesting to try the file-to-file option: although it would run on flash memory (much slower than RAM), it could still meet our timing requirements while avoiding the dangerous system RAM. So, how could I try this?
If you decompress a 2 MB file with the C code, you must have more than 2 MB free in RAM.
For file to file decompression you can use C++ code with callbacks.
It can reduce RAM consumption in some cases.
I have asked the manufacturer to confirm whether their compiler (and libraries) support C++ code, and they do. I'm going to try to adapt the current C project to C++. For this task, it would be very helpful to have an example of that callback usage with the 7-Zip functions. Best regards.
But maybe C++ is not a good idea for you. The C++ version uses multithreading.
I have another suggestion.
1) raw LZMA,
2) xz format (if you need CRC check)
The code for these formats allows using a small amount of RAM with a small dictionary:
512 KB - LZMA dictionary
32 KB - input buffer
32 KB - output buffer
Thanks Igor for the information.
Following the line you proposed, I have tried the following compression command:
LZMA e input output.lzma -d19 -lc0
in order to obtain a 512 KB LZMA dictionary (-d19) and reduce the memory required for decompression (-lc0).
It compresses a ~1 MB file to a ~4 KB LZMA archive (a very good ratio, similar to 7-Zip).
Now I just have to adapt the application to LZMA (it was written only for 7-Zip), which will keep me entertained for a while. After that, I'll try to exceed the current limit of ~1 MB files.
I'll keep you informed.
-lc0 saves only an additional 10 KB or so.
So you can use LZMA without -lc0.
After two weeks working as a firefighter on other projects, I have come back to the LZMA issue, and I can tell you my conclusions:
- As you told me, LZMA uses a limited amount of RAM, which is very useful for embedded systems, with a compression ratio similar to 7-Zip's, at very high speed, and it has been possible to surpass our objective of decompressing 1.2 MB files (I have tried up to 6 MB with no problems). Now the limit is not the RAM but the flash disk; this is a very good improvement.
- The adaptation to LZMA had only one problem: the Decode2 function has two big buffers (inBuf and outBuf) which overflowed my system's stack. To avoid this, both buffers were declared 'static' so that they are placed in static storage (the BSS segment) instead of on the stack.
Please, could you send me a link to download the UNIX/Linux LZMA compressor?