Thread: [Algorithms] Image compression
From: Richard F. <ra...@gm...> - 2009-08-07 10:41:57
Not sure if I'm reinventing the wheel here, but I've been working on an in-house image compressor for our DS games, and had an idea that I thought was a novel approach to the lossy compression problem.

I started with DCTs (which were terribly slow on the DS, so were soon discouraging), then tried wavelets, which did provide a great speed increase and good image quality. But after working with them for a while I had the idea that maybe what we need to encode is the terrain of the image, not two distinct 1D signals.

So, with the idea of "doing it in 2D" in mind, I tried to develop an algorithm that, instead of taking the differences between pixels, assumes the pixels are part of a larger image and starts from the assumption that anything other than the predicted value of a pixel is notable. I took the vertical gradient, the horizontal gradient, and the _saddle_ (diagonal difference / determinant), and these were the values recorded for later use in the compression. This 2D approach made a small but significant difference in the compressibility of the image over wavelets. I'm not brilliant at compression, just not incompetent, so I requested the right to start a SourceForge project based on the library so that all the other games developers could both benefit and help me turn this fast and quite good compressor into something really cool for us all to use.

https://sourceforge.net/projects/mergelet/

Sorry that my code is a bit wandery in places, and I don't use comments much; I tend to comment with variable names and the structure of the code instead. Also, I'm new to SourceForge, so I don't know how everything works yet.

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right.
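The gradient/saddle decomposition described above can be sketched as a per-2x2-block transform. This is a hypothetical illustration of the idea, not the actual mergelet code; the function names and the block layout (pixels `a b / c d`) are my own assumptions.

```python
def forward(a, b, c, d):
    """Encode a 2x2 block (a b / c d) as base + gradients + saddle."""
    h = b - a          # horizontal gradient
    v = c - a          # vertical gradient
    s = d - c - b + a  # saddle: how far d deviates from the planar prediction
    return a, h, v, s

def inverse(a, h, v, s):
    """Exactly invert the forward transform."""
    b = a + h
    c = a + v
    d = a + h + v + s  # planar prediction a + h + v, plus the recorded saddle
    return a, b, c, d
```

For smooth regions the saddle term is zero (a linear ramp is predicted perfectly), so most of the energy concentrates in a few values, which is what makes the residuals compressible.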
From: metanet s. <met...@ya...> - 2009-08-07 13:47:08
Some of CBloom's blog notes might be of interest to you; here are some recent ones that are relevant:

http://cbloomrants.blogspot.com/2009/02/02-19-09-png-sucks.html
http://cbloomrants.blogspot.com/2009/05/05-13-09-image-compression-rambling.html
http://cbloomrants.blogspot.com/2009/05/05-14-09-image-compression-rambling.html
http://cbloomrants.blogspot.com/2009/07/07-06-09-small-image-compression-notes.html
http://cbloomrants.blogspot.com/2009/07/07-07-09-small-image-compression-notes.html
From: Richard F. <ra...@gm...> - 2009-08-10 08:54:44
Nice links, thanks. They make me think that approaching image compression by blocks is a coder's dream but not good for quality. I'm rethinking the approach for non-power-of-two images and trying to do it without using blocks at all.

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right.
From: Jon W. <jw...@gm...> - 2009-08-07 16:35:45
Richard Fabian wrote:
> I took the vertical gradient, horizontal gradient, and the
> _saddle_ (diagonal difference / Determinant), and these were then the
> values recorded for later use in the compression. This 2D approach
> made a small but significant difference in the compressability of the
> image over wavelets

Isn't that basically a lifting process, based on a 2D predicted value? For previous approaches, I believe the good old lossless JPEG method used almost exactly that pre-process to determine deltas, and then used an entropy coder to code the deltas (instead of treating the deltas as something to separately compress using wavelets or whatever).

What kind of compression ratios are you seeing now, btw? And have you tried the smaller/embedded profiles of JPEG 2000 for comparison?

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
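The lossless-JPEG-style pre-process mentioned above can be sketched roughly as follows, using the planar predictor (left + above - upper-left), which is the member of the lossless JPEG predictor set closest to the 2D gradient idea in this thread. This is an illustrative sketch under those assumptions, not code from any of the projects discussed; the residuals would then be fed to an entropy coder.

```python
def residuals(img):
    """Compute prediction residuals for a grayscale image (list of rows).

    Interior pixels use the planar predictor left + above - upleft;
    the first row and column fall back to simpler 1D predictors.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if y == 0 and x == 0:
                pred = 0                      # no context for the corner pixel
            elif y == 0:
                pred = img[y][x - 1]          # first row: predict from left
            elif x == 0:
                pred = img[y - 1][x]          # first column: predict from above
            else:
                pred = img[y][x - 1] + img[y - 1][x] - img[y - 1][x - 1]
            out[y][x] = img[y][x] - pred
    return out
```

On a smooth gradient image the interior residuals are all zero, which is exactly why this pre-process helps the entropy coder.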
From: Richard F. <ra...@gm...> - 2009-08-10 08:53:27
Oddly, I'm seeing better-than-DCT ratios. However, there is something wrong with my code, as I'm not getting the same ratios as Photoshop's JPEG export of the same image, and I can't be sure where I'm going wrong. I thought it was the Huffman encoding and tried to implement the JPEG standard encoder, but found that once I had it in, it made no difference; funnily enough, my (supposedly) naive approach to Huffman encoding the quantised DCT coefficients was bit-wise identical to the JPEG standard approach.

I recently found that some improvement in image quality can be obtained by bleeding the colour data across any black (or very dark) colours before compressing, thus improving the colour values of the chroma sections. I wonder if this would work even better if I included white-out too?

--
fabs();
Just because the world is full of people that think just like you, doesn't mean the other ones can't be right.
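The colour-bleeding trick in the last message might look something like this one-row sketch: dark pixels inherit the chroma of the nearest preceding bright pixel, so the chroma channel has no spurious edges where the image is black (where chroma is invisible anyway). The function name, the fill strategy, and the threshold are my own assumptions, not the actual implementation being described.

```python
def bleed_chroma_row(luma, chroma, threshold=8):
    """Replace the chroma of near-black pixels with nearby visible chroma."""
    out = list(chroma)
    last = None
    for i in range(len(luma)):
        if luma[i] >= threshold:
            last = out[i]        # remember the last visible chroma value
        elif last is not None:
            out[i] = last        # dark pixel: bleed chroma forward into it
    # Leading dark pixels have no predecessor; use the first bright chroma.
    first = next((chroma[i] for i in range(len(luma)) if luma[i] >= threshold), None)
    if first is not None:
        for i in range(len(luma)):
            if luma[i] >= threshold:
                break
            out[i] = first
    return out
```

Since the bled chroma sits under pixels the viewer sees as black, the pre-process is visually lossless while making the chroma plane much smoother for the compressor.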