Welcome to Open Discussion
Will you add support for .ace files?
plz
grtz blabal
Currently, the sources of ACE are not available.
There is a freely available unace.dll that allows ACE decompression. The ACE format is proprietary, though, and compressing files requires licensing.
7-Zip uses special interfaces for its plugins. A plugin in 7-Zip has no direct access to files; it uses an interface that reads from an arbitrary data stream. So I can't use unace.dll.
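To illustrate the point, here is a minimal sketch of what such a stream-based plugin interface looks like. The names below are illustrative, not 7-Zip's actual COM interfaces: a decoder written against a stream abstraction never sees a file name or handle, which is why a DLL whose API only accepts file paths (like unace.dll) cannot be plugged in behind it.

```cpp
#include <cstddef>
#include <cstring>
#include <algorithm>

// Hypothetical stream interface in the spirit of 7-Zip's plugin model.
// A decoder reads from this; it has no idea whether the bytes come
// from a file, a pipe, or a member of another archive.
struct ISequentialInStream {
    // Reads up to `size` bytes into `data`; returns bytes actually read
    // (0 means end of stream).
    virtual size_t Read(void *data, size_t size) = 0;
    virtual ~ISequentialInStream() {}
};

// One possible source: an in-memory buffer. A file-backed or
// network-backed implementation would expose the exact same interface.
class BufferInStream : public ISequentialInStream {
    const char *m_data;
    size_t m_size, m_pos;
public:
    BufferInStream(const char *data, size_t size)
        : m_data(data), m_size(size), m_pos(0) {}
    size_t Read(void *data, size_t size) override {
        size_t n = std::min(size, m_size - m_pos);
        std::memcpy(data, m_data + m_pos, n);
        m_pos += n;
        return n;
    }
};
```

A codec that only offers `ExtractFile(const char *path)` simply has no place to accept such a stream, which is the incompatibility described above.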
Any chance of password protection in the future?
Yes, I plan to add an encryption algorithm in the future (maybe AES).
Compiling PropVariant.cpp gives 6 errors related to cVal being undeclared, in function CPropVariant::Compare. Any suggestions?
Thanks in advance for your help.
Hi!
Do you plan to add a recovery record to 7z archives (as in RAR archives)? This is a great feature of RAR for small, reliable backups.
Thanks!
Milaa_
Yes, I have plans for a recovery feature in 7z.
If you really want to add a recovery record feature, why not make it more "internet"-friendly?
Most downloaders have a "zip check/recovery" function, and you could make this easier by adding (optional) block CRCs, so a downloader can read and use this information.
After that you could contact some downloader developers (e.g. the ReGet team) and ask them to help realize a "7z check/recovery" feature.
BTW: I think 7-Zip currently has the most powerful compression method for practical use, LZMA. But the archiver itself doesn't have very powerful features (an "incrementable archive" a la UC2/JAR, a "setup SFX" a la WinZip/RAR, "recovery records" a la RAR). So why not start working in this direction? And last, but not least: IMHO it would be better to divide development into a few parts - "compression engine", "archive file engine", "archiver" (all of which should be really portable), "SFX modules", "Far plugin", and "GUI shell" (IMHO the best development tool for that shell is Delphi, and it could be started as an independent SourceForge project).
Thanks for your patience... Dixi.
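The block-CRC suggestion can be sketched concretely: split the archive into fixed-size blocks and publish one CRC-32 per block, so a downloader can verify each block of a partial download and re-fetch only the damaged ones. This is an illustration of the idea, not an existing 7z feature; the function names are made up.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Standard CRC-32 (reflected polynomial 0xEDB88320), computed bitwise.
uint32_t Crc32(const unsigned char *data, size_t size) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < size; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

// One CRC per fixed-size block. A downloader holding this table can
// tell exactly which blocks of a corrupt download need re-fetching,
// instead of discarding the whole file.
std::vector<uint32_t> BlockCrcs(const unsigned char *data, size_t size,
                                size_t blockSize) {
    std::vector<uint32_t> crcs;
    for (size_t pos = 0; pos < size; pos += blockSize) {
        size_t n = (size - pos < blockSize) ? size - pos : blockSize;
        crcs.push_back(Crc32(data + pos, n));
    }
    return crcs;
}
```

The table itself is small (4 bytes per block), so storing it optionally, as the poster suggests, would cost almost nothing.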
Development in 7-Zip is already divided into several parts, but this division is internal. I don't think it is a good idea to split the project into subprojects at the current stage.
Is it possible to read this forum with a newsreader? I.e., is there a news server installed that carries a feed of these forums?
Why did you change from 3.xx.xx to 3.xx (in the version name)?
3.11:
3 - year
11 - version number in that year.
On your page about the 7z format it says that LZMA can have a dictionary size of up to 4 GB, but from what I know it's 256 MB.
Windows doesn't allow one program to use more than 2 GB.
So we can't use more than about a 200 MB dictionary with the bt4 match finder.
In the future (64-bit Windows + big RAM), 7-Zip will probably support bigger dictionaries.
But I doubt that it will give a better compression ratio for most data.
I don't even like the *current* increase of memory consumption imposed on plain -mx (no "=9") added somewhere between 2.30 and 3.11. Talking about ">2GB memory used" in this context seems like plain insanity to me, unless one plans to buy a 64-bit system with *loads* of RAM for the sole purpose of trying to compress data as much as possible using 7zip. This seems to me to be a completely unrealistic assumption of both the foreseeable future and the kind of people using the archiver. Perhaps it could be of interest in some research situations where money is not a problem, but for (what I hope is) the target audience of 7zip it's plain lunacy.
At the very least I think 7zip (incl. 7za, I only use the command line) should check the system memory size (GlobalMemoryStatus in Win32), reduce it by something reasonable (e.g. RAM_size - 32MB - 10%*RAM_size, or something) and at least warn: "If you use this setting you may experience excessive swapping during compression."
As it stands now, using 3.11 with the ol' trusty "-mx" setting sends my machine off on a swapping frenzy the likes of which I haven't experienced since I last wanted to disprove Microsoft's claim that "Windows 95 will work with the same amount of memory as Windows 3.11". I for one do not welcome this change in "-mx" behaviour. I don't want, nor need, 7zip to allocate 384MB of memory.
Another thing to perhaps do would be to actually check the combined size of the files specified for inclusion. I mean, if I use -mx on a set of files <4MB, it makes no sense whatsoever to have a dictionary larger than 4MB.
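The sanity check suggested above could look like the sketch below. On Win32 the total RAM size would come from GlobalMemoryStatus(); here it is passed in as a parameter so the arithmetic stands alone, and the 32 MB and 10% margins are simply the poster's example numbers, not a recommended policy.

```cpp
#include <cstdint>

// Compute a "safe" compression memory budget from the physical RAM
// size, using the margins suggested in the post above: leave 32 MB
// plus 10% of RAM free for the OS and other programs. On Win32 the
// ramBytes value would come from GlobalMemoryStatus(); it is a
// parameter here so the function is easy to test.
uint64_t SafeMemoryBudget(uint64_t ramBytes) {
    const uint64_t kReserve = 32ull << 20;       // fixed 32 MB reserve
    uint64_t margin = kReserve + ramBytes / 10;  // plus 10% of RAM
    return ramBytes > margin ? ramBytes - margin : 0;
}
```

An archiver could then warn (or shrink its dictionary) whenever the memory a given setting requires exceeds this budget, instead of silently swapping.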
7-zip doesn't use more than 350 MB in -mx.
I suppose that 512 MB (about $60-$100) of RAM will be the mainstream RAM size in the near future.
> I mean, if I use -mx on a set of files <4MB, it makes no sense whatsoever to have a dictionary larger than 4MB.
Maybe. I'll think about it.
Igor wrote:
>7-zip doesn't use more that 350 MB in -mx.
Please do compare that memory consumption with what was previously used for "-mx" (assuming "-mx" is intended to be somewhat stable over time, and e.g. "-mx=9" is the one intended to use this very large amount of memory - which is what my complaint in this area was about).
The current version (3.11) is 1) using close to 384MB, and 2) using waaay more than the previous one (2.30, as noted) did for the same "-mx" setting.
Igor wrote:
"I suppose that 512 MB (about $60-$100) of RAM will be mainstream RAM size in nearest future."
<sarcasm, no offense>
I'm sorry for living in the present - perhaps even in the past. I've only got a quarter of a gigabyte of RAM. Mea culpa... (I can't imagine trying to compress 400KB of source code using -mx on a 64MB system now.)
If I lived in the future and came back to tell you "Hey, I've got 64TB of memory - and that's just about enough to start Windows", would you make "-mx" use at least 16TB of memory?
</sarcasm>
Whatever will be mainstream in the future, is that really of interest for a compression algorithm that (I hope) is supposed to work today? What if I'm a loser running a hopelessly ancient, really low-end machine (of the kind many do), with a sub-GHz CPU and a sub-gigabyte of RAM? Are you (who displayed quite a bit of insight by using the LGPL) telling me the same as many commercial companies: "buy more memory or piss off"? I fail to see the reasoning, or even the logic, behind such a standpoint.
I stand behind my statement that I believe plain -mx should *not* use these insane amounts of memory. If people are willing to pay the (memory) price of -mx=9, I accept *their* decision. But by making plain -mx use this much memory, *you* are making that decision for me - a decision I, and I believe many others, don't agree with.
As for "collecting the total file size before starting compression to see if the dictionary is oversized" - isn't that a piece of cake? Something like (pseudo-code-ish):
    if ("-mf=e" option not given)
        total_size = sum_file_sizes_recursively(files_to_add)
        dict_size = min(requested_dict_size, total_size)
    endif
(not really meant to be used as a design, just to show the idea)
If the results of this initial directory traversal were cached (which seems reasonable), the set of files to add would already be known, so no additional traversal (wasting time doing it again) would be needed either.
Just my 0.02
In the future 7-Zip should adjust the dictionary size to the data being compressed.
For example, if I set "ultra" and the data to compress is less than 4MB, 7-Zip would then adjust the dictionary size to 4MB (as I mentioned to Igor last year).
This will be implemented as Igor stated, and I trust he will.
MAAD
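The idea from the last two posts - never allocate a dictionary larger than the data being compressed - can be sketched as follows. The function names and the round-to-power-of-two policy are illustrative assumptions, not 7-Zip internals.

```cpp
#include <cstdint>
#include <vector>

// Round up to the next power of two, since dictionary sizes are
// conventionally powers of two (an assumed policy for this sketch).
uint64_t NextPow2(uint64_t x) {
    uint64_t p = 1;
    while (p < x) p <<= 1;
    return p;
}

// Clamp the requested dictionary to the total size of the input
// files: a 32 MB dictionary buys nothing for 3 MB of data, it only
// wastes memory.
uint64_t ChooseDictSize(uint64_t requested,
                        const std::vector<uint64_t> &fileSizes) {
    uint64_t total = 0;
    for (uint64_t s : fileSizes) total += s;
    uint64_t needed = NextPow2(total);
    return needed < requested ? needed : requested;
}
```

The file sizes are already known from the directory traversal the archiver performs anyway, so, as noted above, the check costs essentially nothing.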
When using the bcj or bcj2 filters, are the streams compressed simultaneously or individually?
I have some questions especially for Igor:
1) Will a switch be made for bzip2 to increase the dictionary size (7z mode only)?
2) Can't you make the compression filter more advanced, so that exe, sfx and others use bcj2, and add a piece in the options where people can freely change the settings for it (and save them), like
*.TXT *.DOC *.RTF | 0=PPMD:o=27
*.EXE *.SFX | 0=BCJ2 1=LZMA:FB=255 and so on...
The settings should be saved under a short name, which you could then select from the drop-down box that contains Ultra/Maximum in the compression dialog.
You should probably also add a forum or something where people can post their files containing those settings.
Detecting the best compression method and parameters can wait till later.
3) Can you add your version of the PPMD2 algorithm to 7-Zip (if you make one)?
4) When will you add more algorithms (yours and others')?
5) Why doesn't the language show up when I try to improve the Danish language file? (I'm not the one who posted an update, but I've tried to make one and it didn't work.)
And some for anyone (including Igor):
6) Does pat2h (or pat3h or pat4h) give a better compression ratio than pat2?
7) Is pat2r lossy, or does it give a worse compression ratio than pat2?
8) Do the people (or person) who made the P4-optimized version of 7-Zip intend to make more of those?
I also have a comment for the one who wrote "Whatever will be mainstream in the future, is that really of interest for a compression algorithm that (I hope) is supposed to work today? What if I'm a loser running a hopelessly ancient, really low-end machine (of the kind many do), with a sub-GHz CPU and a sub-gigabyte of RAM? Are you (who displayed quite a bit of insight by using the LGPL) telling me the same as many commercial companies: "buy more memory or piss off"? I fail to see the reasoning, or even the logic, behind such a standpoint." Igor never said you should piss off, but I'd be glad to point out that the help file (under command-line version\switches, -m, I think) explains how to make less memory-consuming parameter sets with compression ratios near Ultra, or you could download the patch called something like "7-Zip parameter generator" from the patches section.
That's all, thanks for taking the time to read this.
> 1)Will there be made switch for bzip2 to increase dictionary size
> (7z mode-only)
No. bzip2 is standard. I don't want to change it.
> 3)Can you add your version of the PPMD2 algorithm to 7-zip (if you make one).
I don't plan it.
> 4)When will you add more algorithm's (yours and others)?
The next algorithm will be for audio. I plan for it to be ready before January.
> 5)Why doesn't the language show when I try and improve the dansih language file
> (I'm not the one who posted an update but i've tried to make one and it didn't
> work)
Use UTF-8 encoding.
> 6)Does pat2h (or pat3h or pat4h) give better compression ratio than pat2.
> 7)Is pat2r lossy or does it provide worse compression ratio than pat2.
All pat* methods give the same compression ratio; the difference is only in speed.
I'd like a plain-text config file too.
And about memory: my favorite obsolete archiver, JAR32 (7-Zip could still learn something from it), can optionally detect the available RAM - maybe by measuring RAM allocation speed - and use only as much RAM as it can without swapping.