Group,
I am getting the SZ_ERROR_OUTPUT_EOF error from the LzmaEncode API. Further debugging shows that LzmaEnc_Encode returns SZ_ERROR_WRITE.
Does anyone have an idea how to debug this, or what could be going wrong?
I'm using the LzmaLib.h interface to encode an incoming file buffer. The code snippet is here:
Byte outProps[LZMA_PROPS_SIZE];
size_t outPropsSize = LZMA_PROPS_SIZE;
UInt32 level = CONFIG_COMPRESSION_LEVEL;
UInt32 dictionary = CONFIG_DICTIONARY_SIZE;
UInt32 litContextBits = CONFIG_LITERAL_CONTEXT_BITS; // for normal files
UInt32 litPosBits = CONFIG_LITERAL_POSITION_BITS;
UInt32 posStateBits = CONFIG_POSITION_BITS;
UInt32 numFastBytes = CONFIG_FASTBYTES;
UInt32 numThreads = CONFIG_NUM_THREADS;
SRes result;
result = LzmaCompress (DstBuffer, pDstSize, SrcBuffer, SrcSize, outProps, &outPropsSize,
level, dictionary, litContextBits, litPosBits, posStateBits, numFastBytes, numThreads);
Result is coming out as SZ_ERROR_OUTPUT_EOF.
Thank you.
Rajeev
A related question:
In our code base, we don't know the size of the compressed image in advance. Is there a way we can find out the size by calling the LzmaEncode API?
In the 7-Zip version, we used to do the following to get the size of the compressed image:
// Get actual required size
RequiredSize = outStream->Write(NULL, 0, NULL);
*pDstSize = RequiredSize;
if (RequiredSize > DstSize) {
    // Buffer too small; return RequiredSize so we can malloc a buffer of RequiredSize.
} else {
    // Buffer is large enough; compress the data.
}
Thank you.
Rajeev
If you compress some data from RAM to RAM with LZMA, you have two ways:
1) You always use LZMA. Then the output buffer must be about 1.1 * uncompressSize + 16 KB.
2) You provide only uncompressSize for the output buffer, and if you get SZ_ERROR_OUTPUT_EOF, just write the data as uncompressed, with some other header.
Or you can use LZMA2. It detects data that cannot be compressed, and you must provide only about (1.001 * uncompressSize + 32) for the output buffer.
Thank you, that helped. I tried your first suggestion and was able to compress a bunch of images.
Now, I am getting SZ_ERROR_DATA when calling LzmaUncompress API (LzmaLib.h interface).
Below is the code snippet.
UInt32 PropsSize = LZMA_PROPS_SIZE;
Byte Props[LZMA_PROPS_SIZE];
SRes result;
result = LzmaUncompress (Destination, &DstSize, Source, &SrcSize, Props, PropsSize);
At the input SrcSize is 3295 (Decimal), DstSize is 4740 (Decimal)
At the failure SrcSize is 5 and DstSize is 0.
On further debugging, I found that the LzmaDec_DecodeToDic API is returning SZ_ERROR_DATA.
Let me know if you have any further suggestions.
Thanks again.
Rajeev
You must set Props to the value that you get from LzmaCompress.
Thanks again. I was able to compress the data without any issues now.
I'm trying to understand how the size of the uncompressed buffer gets written in the LZMA header. I understood that the first 5 bytes of the header are the LZMA properties and the uncompressed size is the next 8 bytes. If I'm calling the LzmaCompress API (LzmaLib.h interface), I'm not sure at what point the uncompressed size is written to the header. Can you please point me to that?
If I read bytes 5 to 12 then I get following:
Byte: 5, EncodedData: 46, DecodedSize: 46
Byte: 6, EncodedData: 25, DecodedSize: 2546
Byte: 7, EncodedData: 37, DecodedSize: 372546
Byte: 8, EncodedData: 39, DecodedSize: 39372546
Byte: 9, EncodedData: 1c, DecodedSize: 39372562
Byte: 10, EncodedData: 20, DecodedSize: 39374562
Byte: 11, EncodedData: 2d, DecodedSize: 39644562
Byte: 12, EncodedData: 3e, DecodedSize: 77644562
GetInfoFunction: DstSize: 0x77644562, UncompressedLength: 0x1284
Below is the code snippet for Getting information of the encoded data.
UInt32
GetDecodedSizeOfBuf (
  Byte *EncodedData
  )
{
  UInt32 DecodedSize;
  Int32 Index;

  /* Parse header */
  DecodedSize = 0;
  for (Index = 0; Index < 8; Index++) {
    DecodedSize += EncodedData[Index] << (Index * 8);
  }
  return DecodedSize;
}
Thanks again.
Rajeev
I also have a few questions on your post above…
*If you compress some data from RAM to RAM with LZMA, you have two ways:
1) You always use LZMA. Then output buffer must be about of 1.1 * uncompressSize + 16K.
-> Isn't the output buffer (compressed buffer) supposed to be smaller than the input buffer (uncompressed buffer)? Also, is it 16K or just 16?
2) You provide only uncompressSize for output buffer. and if you have SZ_ERROR_OUTPUT_EOF, just write data as uncompressed with some another header.
-> How does this work? I'm not sure what you mean here. Can you give an example?
Thanks.
Rajeev
1)
LzmaEncode doesn't write a header.
You can create the header like this:
propsEncoded - 5 bytes, and then 8 bytes from *destLen.
2) If the data can't be compressed, it will consume more space than the original data.
3) 16 KB.
4) You allocate an output buffer of exactly the same size as the original data. If LzmaEncode returns the SZ_ERROR_OUTPUT_EOF error, you don't use the LZMA format; just write the data without compression. For example, you can use one additional byte in the header: 0 - no compression, 1 - LZMA compression.
Thank you very much for your help. I will look into it and get back to you if I have any questions.
Rajeev
The LzmaDecode API takes a pointer to the allocation APIs (ISzAlloc). In our use case, we are allocating the buffer and then managing that buffer internally in the ISzAlloc->Alloc API. How can I find out how large a buffer we should allocate? Currently I am using a 64K buffer, but I'm not sure if that is safe to assume.
Thanks.
Rajeev.
Sorry, but I didn't get an answer to my question above. Can someone please respond?
Thank you.
Rajeev
If you compress with default properties, you need 16 KB.
But some .lzma archives (compressed with some extra switches) can require up to 6.1 MB.
Hi,
I was trying to compress some data using LZMA, but I end up with a buffer overflow error.
Am I doing something wrong here?
/Thanks
1) Set outSize to:
kOutSize = 1.1 * 2097152 + 16384;
out_buff = (char *)malloc(kOutSize);
outSize = kOutSize;
2) Send inSize instead of blockSize.
3) Set outPropsSize = 5 and send it.
4) Allocate props and send the pointer.
Thanks a lot, I got it working. But I still get 50% CPU with this too, and the compression is slow. I can compromise on compression ratio, but I want good speed and low CPU usage. What parameter should I change to achieve that?
/Thanks
You can use the GUI or console version of 7-Zip to select good parameters for your case.
When the results with 7-Zip are OK for you, use the same parameters in LzmaCompress.
I tried using the 7-Zip FM GUI to check the parameters. I tried fast, fastest, normal, etc. All of those give me a good compression ratio, but the CPU is at 50% and more. The compression ratio is perfect, but I need to bring down the CPU usage. Help me with this.
/Thanks
Why do you want to reduce CPU usage?
You can stop the CPU after each block. For example, call
Sleep(1000);
after each 2 MB block.
Also, to reduce CPU usage you can upgrade your computer with a new CPU.
For example, CPU usage will be only 12% with an Intel i7 CPU.
I use a client-server model to transfer data. I read a 2 MB block of data, compress it with LZMA, and stream it to the server for an efficient transfer rate. So I need it to be fast with low CPU usage, since the data to be transferred is large, and the 2 MB block is compressed before every packet is sent.
/Thanks