From: Sven Forstmann <udu7@rz...>  2002-12-22 22:35:35

Here is a page describing animated volume data that is wavelet-compressed, with an accompanying paper: http://www.gris.uni-tuebingen.de/areas/volren/graphics/volanim/ But does anybody know about the algorithm patented by NVIDIA? Is it also a kind of wavelet compression?

Regards,
Sven
From: Tim Little <tim_little@ea...>  2002-12-22 21:29:46

My comments interspersed.

Tim Little

-----Original Message-----
From: Mark Duchaineau
Sent: Sunday, December 22, 2002 2:38 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Lossless volume texture compression

> I actually gave a "linear lifted update once" wavelet. As I recall, it
> has three vanishing moments for those who care, and is continuous. It is
> not strictly orthogonal, but is so-called biorthogonal. The upshot is
> that this is actually one of the better wavelets around in experimental
> tests on real data (natural imagery, for example). From what I
> understand, it is one of the common flavors of JPEG 2000 transforms used
> under the hood.

[Tim Little] My mistake; it seemed similar at a superficial glance. Perhaps I'll take a closer look at it. The Haar, IIRC, simply averages the two inputs for the low band and saves the difference between the two points divided by two for the high band. This can be done as separate passes for each dimension, or the dimensions can be computed concurrently.

> > My experience with the Haar wavelet is that compression ratios are not
> > stellar without quantization. That of course depends on the image, but
> > generally I remember it being in the 4:1 to 6:1 compression range for
> > most images. With a little quantization you can usually get a lot
> > more. But then you end up needing to use a deblocking filter on
> > reconstruction with Haar. I also found that arithmetic coders worked
> > better than Huffman or LZ coders for images with large areas of solid
> > color or smooth gradient.
>
> This is not Haar, and you don't have these problems to deal with (again,
> we are assuming Pierre has pre-quantized data). Of course, if someone
> insists on lossless compression, you don't get spectacular reductions in
> data size, even if you quantize up front as is assumed here.

[Tim Little] After looking at your approach, it seems to use three input samples rather than the two that Haar uses, so the blocking wouldn't be as pronounced, but it should still show similar artifacts, though to a lesser degree. The quantization I was referring to was that of the coefficients, i.e. the high-band data (something you would only do for lossy compression, but it lets you control the amount and type of loss quite explicitly). If you set sufficiently small coefficients to zero, you substantially increase how compressible the resulting data is. I have applied that to good effect in the past, and usually you can apply it just to the coefficients from the first pass or two so that you don't end up accumulating errors. The first pass represents the largest single field of coefficients anyway (it could be quite large for a volume texture).

> > One thing to consider is that as you create each level of low-pass
> > data, it might be good enough for your mip levels, at which point you
> > get a fully mipmapped version stored for free. If the levels are good
> > enough, then you will want to iterate down as far as you need mip
> > levels.
>
> Good point about MIP levels (for those who happen to want the
> convenience of using the wavelet low-pass values for this purpose). The
> fact that this transform does a lifting update helps that a lot. The
> more lifting steps and updates, the better the quality of the MIP
> levels. One update step is often sufficient.
>
> Cheers,
>
> Mark D.
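Tim's coefficient-thresholding idea (zero out sufficiently small high-band coefficients so the entropy coder sees long runs of zeros) takes only a few lines. This is an editorial sketch, not code from the thread; the byte packing and the use of zlib as the entropy coder are illustrative choices, and the threshold value is arbitrary:

```python
import zlib

def threshold(coeffs, eps):
    """Lossy step: zero every coefficient with magnitude below eps.
    The reconstruction error is bounded by eps per coefficient, and the
    zero runs make the data far more compressible."""
    return [0 if abs(c) < eps else c for c in coeffs]

def packed_size(coeffs):
    """Illustrative: pack small signed coefficients as bytes and deflate,
    returning the compressed size in bytes."""
    return len(zlib.compress(bytes((c + 256) % 256 for c in coeffs)))
```

For example, `threshold([5, 1, -1, 8, 0, 2, -7, 1, 0, -2], 3)` keeps only the three large coefficients and zeros the rest, which is exactly the effect Tim describes before handing the data to an entropy coder.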
From: Mark Duchaineau <duchaine@ll...>  2002-12-22 20:38:35

Hi Tim,

Tim Little wrote:

> They become lossy when you start quantizing the coefficients to get more
> zeros, or to reduce the number of bits in the coefficients to get better
> compression. The wavelet transform itself doesn't cause a loss in
> theory, but floating-point precision problems can cause very small
> differences to occur, so using a purely integer version eliminates that
> problem.

You will get loss from rounding if you have floating-point input no matter what you do (unless you internally go to double precision and carefully use guard bits, etc.). My assumption was that Pierre had textures that were already quantized to some number of bits, such as 8 per channel.

> I think Mark gives you the Haar wavelet in his description, which is
> certainly the easiest one to implement and optimize, but others yield
> more compressible coefficients and/or allow you to do quantization
> without as many noticeable artifacts, like the Daubechies wavelets. Any
> wavelet with an orthogonal basis should extend into additional
> dimensions in a fairly straightforward manner.

I actually gave a "linear lifted update once" wavelet. As I recall, it has three vanishing moments for those who care, and is continuous. It is not strictly orthogonal, but is so-called biorthogonal. The upshot is that this is actually one of the better wavelets around in experimental tests on real data (natural imagery, for example). From what I understand, it is one of the common flavors of JPEG 2000 transforms used under the hood.

> My experience with the Haar wavelet is that compression ratios are not
> stellar without quantization. That of course depends on the image, but
> generally I remember it being in the 4:1 to 6:1 compression range for
> most images. With a little quantization you can usually get a lot more.
> But then you end up needing to use a deblocking filter on reconstruction
> with Haar. I also found that arithmetic coders worked better than
> Huffman or LZ coders for images with large areas of solid color or
> smooth gradient.

This is not Haar, and you don't have these problems to deal with (again, we are assuming Pierre has pre-quantized data). Of course, if someone insists on lossless compression, you don't get spectacular reductions in data size, even if you quantize up front as is assumed here.

> One thing to consider is that as you create each level of low-pass data,
> it might be good enough for your mip levels, at which point you get a
> fully mipmapped version stored for free. If the levels are good enough,
> then you will want to iterate down as far as you need mip levels.

Good point about MIP levels (for those who happen to want the convenience of using the wavelet low-pass values for this purpose). The fact that this transform does a lifting update helps that a lot. The more lifting steps and updates, the better the quality of the MIP levels. One update step is often sufficient.

Cheers,

Mark D.
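The mip-level point Mark and Tim discuss can be sketched in a few lines: each low-pass level halves the resolution with a rounded average, which is essentially a box-filtered mip chain obtained for free. This 1-D editorial sketch uses a plain pairwise average for simplicity; per Mark's comment, a lifted low pass gives higher-quality levels:

```python
def mip_chain(samples, levels):
    """Successive integer low-pass halvings double as a mip chain.
    Uses a plain pairwise rounded average; a lifted low pass would
    give better levels, at the cost of a little more arithmetic."""
    chain = [samples]
    for _ in range(levels):
        samples = [(a + b) // 2 for a, b in zip(samples[0::2], samples[1::2])]
        chain.append(samples)
    return chain
```

For example, `mip_chain([1, 3, 5, 7], 2)` yields the full-resolution row plus two coarser levels, `[2, 6]` and `[4]`; in a wavelet codec these are exactly the low-pass values you already computed.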
From: Tim Little <tim_little@ea...>  2002-12-22 17:36:45

Sorry, this ended up longer than I expected. I haven't worked on wavelets for a few years, but I enjoyed them quite a lot.

They become lossy when you start quantizing the coefficients to get more zeros, or to reduce the number of bits in the coefficients to get better compression. The wavelet transform itself doesn't cause a loss in theory, but floating-point precision problems can cause very small differences to occur, so using a purely integer version eliminates that problem.

I think Mark gives you the Haar wavelet in his description, which is certainly the easiest one to implement and optimize, but others yield more compressible coefficients and/or allow you to do quantization without as many noticeable artifacts, like the Daubechies wavelets. Any wavelet with an orthogonal basis should extend into additional dimensions in a fairly straightforward manner.

My experience with the Haar wavelet is that compression ratios are not stellar without quantization. That of course depends on the image, but generally I remember it being in the 4:1 to 6:1 compression range for most images. With a little quantization you can usually get a lot more, but then you end up needing to use a deblocking filter on reconstruction with Haar. I also found that arithmetic coders worked better than Huffman or LZ coders for images with large areas of solid color or smooth gradient.

One thing to consider is that as you create each level of low-pass data, it might be good enough for your mip levels, at which point you get a fully mipmapped version stored for free. If the levels are good enough, then you will want to iterate down as far as you need mip levels.

My favorite book for getting to know wavelets was "The World According to Wavelets" by Barbara Burke Hubbard (published by A K Peters). It gives a good background first and then goes into the math. It is an entertaining read as well.

Tim

-----Original Message-----
From: Pierre Terdiman
Sent: Sunday, December 22, 2002 1:44 AM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Lossless volume texture compression

> > There are wavelet schemes for 3d textures. In general, separable
> > transform schemes work for any number of dimensions. To get lossless
> > coding, just use a lossless wavelet transform, e.g. one made with
> > integer "lifting".
>
> Interesting; for some reason I only associated "wavelets" with "lossy"
> (that's certainly how it's presented in various papers about volume data
> compression). I'll dig a bit further then.
>
> Pierre
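Tim's Haar description (average for the low band, halved difference for the high band) is lossy as stated, because difference/2 drops a bit. Keeping the full integer difference instead, sometimes called the S-transform, makes the step exactly invertible; that variant choice is this editor's, not Tim's exact wording. A minimal Python sketch:

```python
def haar_forward(data):
    """Pairwise integer Haar step on an even-length list.
    Keeping the full difference (not difference/2) makes it lossless."""
    pairs = list(zip(data[0::2], data[1::2]))
    low = [(a + b) // 2 for a, b in pairs]   # rounded averages
    high = [a - b for a, b in pairs]         # full differences
    return low, high

def haar_inverse(low, high):
    """Exact inverse: ceil(h/2) added to the rounded average recovers
    the first sample of each pair, and subtracting h recovers the second."""
    out = []
    for l, h in zip(low, high):
        a = l + (h + 1) // 2
        out += [a, a - h]
    return out
```

The identity used is that `a - (a + b) // 2` equals `ceil((a - b) / 2)` for integers, so no information is lost even for odd sums or negative differences.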
From: Mark Duchaineau <duchaine@ll...>  2002-12-22 09:26:35

Pierre,

The lifting literature is pretty hard to penetrate, but the idea is really simple. Here's a good example of a transform for many kinds of data:

data: a0 a1 a2 a3 a4 ...

After one level of transform:

high pass: H0 = a1 - AVE1(a0, a2),  H1 = a3 - AVE1(a2, a4), ...
low pass:  L0 = a0 + AVE2(0, H0),   L1 = a2 + AVE2(H0, H1), ...

where AVE1(a, b) is floor((a+b)/2) and AVE2(a, b) is floor((a+b)/4).

Send the low-pass values again through this procedure and repeat a few times (you don't need to go all the way down to a single low-pass value; five levels should be plenty). After this is all done you get a handful of coarse L values and a bunch of H values. You get the same number of values out as you put in, but whereas you had, say, 8-bit values coming in, you get a sign bit and hence 9-bit values coming out. Send these values through your favorite entropy coder (zlib is slow but good; special entropy coders can be made that work nicely on lossless wavelets and are much faster, e.g. Huffman using table lookups and RLE).

Reversing the above steps gets you exactly the 8-bit data values you started with:

low pass:  a0 = L0 - AVE2(0, H0),   a2 = L1 - AVE2(H0, H1), ...
high pass: a1 = H0 + AVE1(a0, a2),  a3 = H1 + AVE1(a2, a4), ...

To transform a 2D or 3D array of data, do this along one axis at a time, say X Y Z X Y Z ... Of course there are clever ways to speed this up by using shifts instead of divide-and-floor, and by reordering the loops for the Y and Z axis directions to make things cache coherent (make the inner loop go in the X direction).

Cheers,

Mark D.

Pierre Terdiman wrote:

> > There are wavelet schemes for 3d textures. In general, separable
> > transform schemes work for any number of dimensions. To get lossless
> > coding, just use a lossless wavelet transform, e.g. one made with
> > integer "lifting".
>
> Interesting; for some reason I only associated "wavelets" with "lossy"
> (that's certainly how it's presented in various papers about volume data
> compression). I'll dig a bit further then.
>
> Pierre
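Mark's formulas translate directly to code. The following editorial sketch implements exactly the AVE1/AVE2 predict-then-update steps he gives, for one level in 1-D; treating the missing neighbour H values at the ends as zero is an assumption of this sketch, since the post leaves boundary handling unspecified:

```python
def ave1(a, b):
    return (a + b) // 2          # floor((a+b)/2)

def ave2(a, b):
    return (a + b) // 4          # floor((a+b)/4)

def forward(data):
    """One level of the lifted linear transform from Mark's post.
    Assumes odd length, so every odd-indexed sample has two even
    neighbours; boundary H values are taken as 0 (sketch assumption)."""
    even, odd = data[0::2], data[1::2]
    high = [odd[i] - ave1(even[i], even[i + 1]) for i in range(len(odd))]
    low = [even[i] + ave2(high[i - 1] if i > 0 else 0,
                          high[i] if i < len(high) else 0)
           for i in range(len(even))]
    return low, high

def inverse(low, high):
    """Exactly reverses forward(): subtract the update, then undo the
    predict, reproducing the same floors so the round trip is lossless."""
    even = [low[i] - ave2(high[i - 1] if i > 0 else 0,
                          high[i] if i < len(high) else 0)
            for i in range(len(low))]
    odd = [high[i] + ave1(even[i], even[i + 1]) for i in range(len(high))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out + even[len(odd):]
```

For example, `forward([3, 7, 2, 9, 4, 1, 8])` gives `low == [4, 4, 4, 6]` and `high == [5, 6, -5]`, and `inverse` recovers the original integers exactly, which is what makes the scheme lossless.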
From: Dorie, Jason <jdorie@ea...>  2002-12-22 09:16:32

Wavelets are simply a transform which decorrelates data; the lossy part is usually from quantization, though some loss can come from rounding or numerical imprecision in floating-point operations. Try searching for papers on reversible integer wavelets.

You could also try a standard lossless image compression scheme, like CALIC or FELICS. The mechanisms they use to decorrelate image data should extend into 3D fairly easily.

Jason Dorie
EA/BlackBox

-----Original Message-----
From: Pierre Terdiman [mailto:p.terdiman@...]
Sent: December 21, 2002 11:44 PM
To: gdalgorithms-list@...
Subject: Re: [Algorithms] Lossless volume texture compression

> > There are wavelet schemes for 3d textures. In general, separable
> > transform schemes work for any number of dimensions. To get lossless
> > coding, just use a lossless wavelet transform, e.g. one made with
> > integer "lifting".
>
> Interesting; for some reason I only associated "wavelets" with "lossy"
> (that's certainly how it's presented in various papers about volume data
> compression). I'll dig a bit further then.
>
> Pierre
From: Onur "Xtro" ER <xtro_discuss@gm...>  2002-12-22 08:06:27

Hi! Thanks, all! How can I unsubscribe from the algorithms mailing list?
From: Pierre Terdiman <p.terdiman@wa...>  2002-12-22 07:46:44

> There are wavelet schemes for 3d textures. In general, separable
> transform schemes work for any number of dimensions. To get lossless
> coding, just use a lossless wavelet transform, e.g. one made with
> integer "lifting".

Interesting; for some reason I only associated "wavelets" with "lossy" (that's certainly how it's presented in various papers about volume data compression). I'll dig a bit further then.

Pierre
From: Charles Bloom <cbloom@cb...>  2002-12-22 04:29:18

There are wavelet schemes for 3d textures. In general, separable transform schemes work for any number of dimensions. To get lossless coding, just use a lossless wavelet transform, e.g. one made with integer "lifting".

At 04:41 AM 12/22/2002 +0100, Pierre Terdiman wrote:

> Hi,
>
> Are there known algorithms to compress volume textures efficiently? I'm
> looking for a lossless compression, so VQ is not an option. (Or maybe I
> could encode the error volume independently, but if there's a better
> direct way I'll probably go for it.)
>
> Pierre

--
Charles Bloom   cb@...   http://www.cbloom.com
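Charles's point that separable schemes extend to any number of dimensions works like this: apply a 1-D lossless step along each axis in turn, and invert by undoing the axes in reverse order (the integer floors make the per-axis steps order-sensitive, so the inverse must unwind them last-in, first-out). An editorial sketch using an integer Haar step as the 1-D transform; the nested-list `vol[z][y][x]` layout is an illustration choice, not from the thread:

```python
def step_fwd(row):
    """1-D lossless step: rounded-average low band then full differences."""
    pairs = list(zip(row[0::2], row[1::2]))
    return [(a + b) // 2 for a, b in pairs] + [a - b for a, b in pairs]

def step_inv(row):
    """Exact inverse of step_fwd for an even-length row."""
    half = len(row) // 2
    out = []
    for l, h in zip(row[:half], row[half:]):
        a = l + (h + 1) // 2   # ceil(h/2) recovers the first sample
        out += [a, a - h]
    return out

def along_x(vol, f):            # apply f to every run along the x axis
    return [[f(row) for row in plane] for plane in vol]

def swap_xy(vol):               # transpose x and y within each z-plane
    return [[list(col) for col in zip(*plane)] for plane in vol]

def swap_xz(vol):               # transpose x and z (involution)
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    return [[[vol[z][y][x] for z in range(Z)] for y in range(Y)]
            for x in range(X)]

def separable_fwd(vol):
    vol = along_x(vol, step_fwd)                        # X axis
    vol = swap_xy(along_x(swap_xy(vol), step_fwd))      # Y axis
    vol = swap_xz(along_x(swap_xz(vol), step_fwd))      # Z axis
    return vol

def separable_inv(vol):
    vol = swap_xz(along_x(swap_xz(vol), step_inv))      # undo Z
    vol = swap_xy(along_x(swap_xy(vol), step_inv))      # undo Y
    vol = along_x(vol, step_inv)                        # undo X
    return vol
```

The transposes are just a convenient way to reuse the x-axis loop for the other two axes; a production version would instead stride through memory with the inner loop along X for cache coherence, as Mark notes elsewhere in the thread.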
From: Pierre Terdiman <p.terdiman@wa...>  2002-12-22 03:43:15

Hi,

Are there known algorithms to compress volume textures efficiently? I'm looking for a lossless compression, so VQ is not an option. (Or maybe I could encode the error volume independently, but if there's a better direct way I'll probably go for it.)

Pierre