From: Michael N. <mic...@gm...> - 2010-06-14 20:08:36
On Mon, Jun 14, 2010 at 12:48:21PM -0400, Michael Nahas wrote:
> PAR + TAR/PkZip: I agree with P. Cordes, tools for using TAR on Windows
> exist, as do tools for using PkZip on Unixes (Unices?). If someone
> needs more than 2^16 files, they can use one of those. We don't need
> to say which.

I think I misunderstood the suggestion. The way I understood it was that
we could support just one file in PAR3 and use tar for multiple-file
support (as, for example, gzip and bzip2 do). In that case, using PkZip
on Windows and tar on Linux would have meant an annoying
incompatibility. If it is only for the rare case of more than 65535
files, this is no longer a real issue.

[...]

> UTF-8 Support: I wish it was as easy as saying "Unicode has been
> around for ages" to say that support exists. Obviously, Java supports
> it. It looks like GNU has a C library.
> (http://www.gnu.org/software/libidn/manual/libidn.html#Utility-Functions)
> However, since we already have Unicode's 16-bit filenames as an
> optional feature, do we need UTF-8, or should we just make the Unicode
> packets mandatory?

If the PAR3 spec is compatible with existing PAR2 clients, then this is
an option. Otherwise, I think supporting only UTF-8 is the better
choice.

> BSD vs. LGPL/GPL licence for the reference implementation: First, this
> is the reference implementation. It should be clean and clear, not
> necessarily the high-performance library used by everyone. Second, as
> much as I try to work on the spec and not the code, I think the
> license on the reference implementation should change from the GPL to
> the LGPL or BSD. This will allow people to use the code in a dynamic
> library (*.DLL/*.so) and not have to make public the source of their
> entire application. (For the LGPL, they would have to make public
> their changes to the library.) Changing the license will require
> either getting the permission of the authors of the current reference
> code or starting a new reference implementation from scratch.
> As it is a reference implementation, and not necessarily a
> high-performance implementation, I'd suggest the LGPL. But I'm willing
> to leave it up to the person who invests the time to write it.
>
> [Side note: http://news.slashdot.org/article.pl?sid=10/06/04/1953232
> Looks like Google released VP8 with a BSD license plus a separate
> patent license. The patent license is voided if a company brings a
> patent suit against Google. Thus, if someone sues Google, Google is
> free to sue them back using the VP8 patents.]

Right, I've just remembered that they changed it from their
GPL-incompatible license, not what they changed it to. Their bitstream
spec license, by the way, seems to be CC.

> The important question, in my opinion, is what use cases should we be
> aiming to support?
> * Usenet transmission of large files
> * Multi-disk backup redundancy (e.g., burn 4 CDs, burn a 5th with
>   redundant data).
> * ? redundancy for remote backups
> * ? redundancy on single-disk backup (Already done by DVDisaster; we'd
>   like to support it, but there isn't an easy implementation yet. Is
>   this better done by a filesystem?)

PAR* can certainly be used for this, but when done in a filesystem or at
a lower level, it is possible to do things that a file-level PAR* tool
cannot do. Examples are:
* random read & write to the disk (a filesystem can just update the
  parity sectors in the background)
* disks use their own FEC

With a PAR*-like tool, one would thus be writing several layers of ECC
information onto a disk without being able to pass any information
between them. That is, if too many errors exist in a sector, it would
not be visible to PAR at all. This is quite inefficient: for example,
90% of a sector might be undamaged and could significantly help a
Reed-Solomon error-and-erasure decoder. Furthermore, the hardware
generally knows which parts are damaged. For CDs, see 'readcd
-edc-corr', which bypasses the last stage of error correction done by
CD drives and does it in software.
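To illustrate the erasure-vs-error distinction: here is a toy
single-parity sketch (not anything from the PAR spec, just a minimal
illustration in Python). With one XOR parity block, a block whose
location is KNOWN to be bad (an erasure, e.g. reported unreadable by the
drive's own FEC) can be recovered exactly, while the same damage at an
UNKNOWN location can only be detected, not located. Reed-Solomon codes
generalize this: roughly twice as many erasures as errors can be
corrected with the same redundancy, which is why passing the lower
layer's "this part is damaged" information upward helps so much.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    """XOR of all data blocks; one parity block for the set."""
    parity = bytes(len(blocks[0]))   # all-zero block
    for blk in blocks:
        parity = xor_blocks(parity, blk)
    return parity

def recover_erasure(blocks, parity, lost_index):
    """Recover one block whose index is KNOWN (an erasure):
    XOR the parity with every surviving block."""
    acc = parity
    for i, blk in enumerate(blocks):
        if i != lost_index:
            acc = xor_blocks(acc, blk)
    return acc

data = [b"sector-0", b"sector-1", b"sector-2"]
p = make_parity(data)

# Erasure: location known -> full recovery from one parity block.
assert recover_erasure(data, p, 1) == b"sector-1"

# Error: same damage, location unknown -> the parity mismatch only
# detects that SOMETHING is wrong; it cannot say which block.
corrupted = [data[0], b"sectorX1", data[2]]
assert make_parity(corrupted) != p
```

This is essentially the RAID-4/5 parity trick; a real PAR-like tool uses
Reed-Solomon over many recovery blocks, but the benefit of knowing the
damage locations is the same.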
A hypothetical tool could use the information from sectors that fail
this last stage of correction on CDs, instead of treating them as
black-box uncorrectable and unavailable.

> * ? redundancy for streaming (e.g., NASA transmissions)

Deep-space communication is affected by bit flips and burst errors; the
packet erasure correction aimed at by PAR* is not suitable for this.

> * ? verification/redundancy for file distribution (stronger than just
>   an MD5 check?)

I am not sure I understand this use case; could you elaborate?

> Any others? Any arguments in favor/against one of these? Any ideas on
> how to implement redundancy for a single-disk backup? Should we have a
> sister project for a user-level file system? (Sounds expensive with RS
> codes.)
>
> Also, the Par2 spec included optional packets for containing input
> file slices. So, people would not have to use a file-splitter, like
> RAR, for Usenet or multi-disk backups. Do we want to push to make
> those packets mandatory?

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Breaking DRM is a little like attempting to break through a door even
though the window is wide open and the only thing in the house is a
bunch of things you don't want and which you would get tomorrow for
free anyway