From: John B. <jb...@ac...> - 2007-05-03 03:25:38

From: Adeluc
>Thanks for the clues. I have no idea how to configure the Istream *.

There should be a constructor for an IStream which takes an array of bytes - pass it the whole anIM PNG (my wonderful email editor corrects my intentional capitalisation of the "S"; look for IStream::IStream methods).

Easier but less general (I missed this first time round):

    Bitmap( const WCHAR *filename, BOOL useIcm );

The data (the PNG image) has to be in a file.

>(See Using a Color Matrix to Set Alpha Values in Images in GDI+)

Yes, and the code you quote *should* compose the alpha into the image as it is drawn, but I'm betting that it does the calculation in the encoded colour space (i.e. without linearising it first). I might be wrong; the implementors were fully aware of how bad non-linear calculations are (at the time both Alvy Ray Smith and Jim Blinn worked for Microsoft). However I know that the base bicubic and bilinear scaling operations work in non-linear space (and are therefore incorrect) - I'm hoping that my memory is correct and that, despite the documentation, the "HighQuality" variants work in linear space (and, I hope, compose the alpha in linear space.)

In SVG the approach is more clear for alpha composition (and less so for image resampling, but that's a different battle). Selecting 'color-interpolation: linearRGB' forces the correct alpha composition. color-rendering should always be optimizeQuality - the speed option overrides the interpolation option. It's somewhat obfuscated in SVG how to render an image; I think it is via the feImage stuff, but I'm not sure, I might be missing something.

John Bowler <jb...@ac...>
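[Editor's note] A quick way to see why the colour space matters for John's point: composite 50% white over black once on the encoded values and once after linearising. The helper names and the gamma-2.2 power curve below are mine, not anything from GDI+ or SVG -- a minimal sketch, not a definitive implementation.

```cpp
#include <cassert>
#include <cmath>

// Gamma 2.2 stands in (approximately) for the sRGB transfer function.
double decode(double v) { return std::pow(v, 2.2); }       // encoded -> linear light
double encode(double v) { return std::pow(v, 1.0 / 2.2); } // linear light -> encoded

// Plain "over" of one source sample at coverage a onto an opaque destination
// sample, in whatever space the caller supplies.
double over(double src, double a, double dst) { return src * a + dst * (1.0 - a); }

// 50% white over black:
//   encoded-space blend : over(1.0, 0.5, 0.0)  -> 0.5
//   linear-space blend  : encode(over(decode(1.0), 0.5, decode(0.0)))
//                         = 0.5^(1/2.2), roughly 0.73 -- visibly lighter.
```

The two answers differ by roughly a quarter of the full scale, which is why blending in the encoded space is "incorrect" in the sense used in this thread.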
From: Adeluc <pn...@ad...> - 2007-05-03 01:45:58

>> However, until you can point me a function that do alpha
>> blend with a not alpha pre-multiplied tile image into the composed frame
>> buffer
>
> SVG - you need to set up a clipping mask (rectangle) to define the
> source tile, but a PNG image is a valid SVG bitmap.
>
> GDI+ - Graphics::DrawImage, set InterpolationModeHighQualityBilinear on
> the Graphics and construct the Image using Bitmap::Bitmap(Istream*,
> true/*useICM*/) where the Istream contains the data of a PNG. The
> Graphics may be a bitmap with an alpha channel - just select the
> relevant pixel format (or you could use PNG I think) or you can just
> render to the device.
>
> I think that covers all the operating systems, doesn't it?
>
> John Bowler <jb...@ac...>

Thanks for the clues. I have no idea how to configure the Istream *. I am a little tired tonight but I will make some tests tomorrow to apply tile_alpha in one shot.

(See Using a Color Matrix to Set Alpha Values in Images in GDI+)

    // Create a Bitmap object and load it with the texture image.
    Bitmap bitmap(L"Texture1.jpg");
    Pen pen(Color(255, 0, 0, 0), 25);

    // Initialize the color matrix.
    // Notice the value 0.8 in row 4, column 4.
    ColorMatrix colorMatrix = {1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
                               0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
                               0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
                               0.0f, 0.0f, 0.0f, 0.8f, 0.0f,
                               0.0f, 0.0f, 0.0f, 0.0f, 1.0f};

    // Create an ImageAttributes object and set its color matrix.
    ImageAttributes imageAtt;
    imageAtt.SetColorMatrix(&colorMatrix, ColorMatrixFlagsDefault,
                            ColorAdjustTypeBitmap);

    // First draw a wide black line.
    graphics.DrawLine(&pen, Point(10, 35), Point(200, 35));

    // Now draw the semitransparent bitmap image.
    INT iWidth = bitmap.GetWidth();
    INT iHeight = bitmap.GetHeight();
    graphics.DrawImage(
        &bitmap,
        Rect(30, 0, iWidth, iHeight),  // Destination rectangle
        0,                             // Source rectangle X
        0,                             // Source rectangle Y
        iWidth,                        // Source rectangle width
        iHeight,                       // Source rectangle height
        UnitPixel,
        &imageAtt);

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
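[Editor's note] For reference, the arithmetic behind SetColorMatrix in the sample above: GDI+ treats each pixel as the row vector [R G B A 1] (samples scaled to [0,1]) and multiplies it by the 5x5 matrix, so the 0.8 at row 4, column 4 scales every alpha sample to 80%. A standalone sketch of that multiplication (type and function names are mine, not GDI+ API):

```cpp
#include <array>
#include <cassert>

using Vec5 = std::array<double, 5>;
using Mat5 = std::array<Vec5, 5>;

// Apply a GDI+-style 5x5 colour matrix: the pixel is treated as the row
// vector [R G B A 1] (components normalised to [0,1]) and multiplied by m.
Vec5 applyColorMatrix(const Vec5& px, const Mat5& m) {
    Vec5 out{};
    for (int j = 0; j < 5; ++j)
        for (int i = 0; i < 5; ++i)
            out[j] += px[i] * m[i][j];
    return out;
}

// The matrix from the quoted sample: identity except 0.8 at row 4, column 4,
// which scales every alpha sample to 80% and leaves R, G, B untouched.
const Mat5 kAlpha80 = {{
    {1, 0, 0, 0, 0},
    {0, 1, 0, 0, 0},
    {0, 0, 1, 0, 0},
    {0, 0, 0, 0.8, 0},
    {0, 0, 0, 0, 1},
}};
```

The fifth component (the constant 1) is what lets the last row of the matrix add a per-channel offset, which the identity-like matrix above does not use.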
From: Adeluc <pn...@ad...> - 2007-05-03 01:01:57

Oops! I was wrong in the previous e-mail when I wrote:

>From here the tile is now pre-multiplied so you can use typical
>AlphaBlend() function to place the tile over the composed frame buffer.

The tile image is still not pre-multiplied, and AlphaBlend() cannot be used to blend the temporary tile buffer over the frame buffer. That brings me again to my first point: find me a blending function that works with a not alpha pre-multiplied tile image and I will be happy to forget the extra oh-so-nice feature tile_alpha.

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
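[Editor's note] For what it's worth, the blending function being asked for -- source *not* premultiplied, destination possibly non-opaque -- is straightforward to write in software, even though AlphaBlend() will not do it. A sketch, ignoring the gamma/linearisation issue John raises elsewhere in the thread (struct and function names are mine):

```cpp
#include <cassert>

struct Pixel { double r, g, b, a; };  // straight (non-premultiplied), in [0,1]

// "Over" composite of a straight-alpha source onto a straight-alpha
// destination, per pixel in software -- the operation AlphaBlend() cannot
// perform, since it expects a premultiplied source.
Pixel overStraight(const Pixel& s, const Pixel& d) {
    double outA = s.a + d.a * (1.0 - s.a);
    if (outA == 0.0) return {0, 0, 0, 0};  // fully transparent result
    auto c = [&](double sc, double dc) {
        // weight each colour by its own alpha, then un-premultiply
        return (sc * s.a + dc * d.a * (1.0 - s.a)) / outA;
    };
    return {c(s.r, d.r), c(s.g, d.g), c(s.b, d.b), outA};
}
```

When the destination is opaque (d.a == 1) this reduces to the familiar src*a + dst*(1-a), which is why the general case is easy to overlook.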
From: John B. <jb...@ac...> - 2007-05-03 00:26:16

From: Adeluc
>However, until you can point me a function that do alpha
>blend with a not alpha pre-multiplied tile image into the composed frame
>buffer

SVG - you need to set up a clipping mask (rectangle) to define the source tile, but a PNG image is a valid SVG bitmap.

GDI+ - Graphics::DrawImage, set InterpolationModeHighQualityBilinear on the Graphics and construct the Image using Bitmap::Bitmap(Istream*, true/*useICM*/) where the Istream contains the data of a PNG. The Graphics may be a bitmap with an alpha channel - just select the relevant pixel format (or you could use PNG I think) or you can just render to the device.

I think that covers all the operating systems, doesn't it?

John Bowler <jb...@ac...>
From: John B. <jb...@ac...> - 2007-05-03 00:16:33

From: Glenn Randers-Pehrson
>Gerard, maybe your chart should have a line "realizable with BLIT".

Perhaps someone knowledgeable can say what the current Linux graphics libraries can do. Here is a list of things which, while not absolute requirements for any of the current proposals, are things which MNG, anIM and APNG can use directly (plus [5] which none of the base proposals would use). I assume Mac has at least the Linux capabilities these days:

1) Create a surface (a writeable bitmap) with an alpha channel.
   Windows: supported in GDI+, not supported in any useful way in GDI.
   Linux etc: I don't know, wasn't possible last time I looked but is pretty much a requirement for any graphics IMO.

2) Render directly from PNG or JPEG data (i.e. blit from a PNG or JPEG without prior conversion to another format via user code).
   Windows: supported in GDI+, theoretically possible in GDI but no real support.
   Linux: Imlib supports this, I'm sure other libraries do.

3) Support for rendering bitmaps with an alpha channel (read support).
   Windows: fully supported in GDI+ (I believe one of the 'xxxHighQuality' interpolation modes has to be chosen to get linearisation of the components - I think the online documentation is wrong!), broken in GDI (non-linear).
   Linux: must be supported for SVG...

4) Support for rendering part of the source image.
   Windows: supported everywhere.
   Linux: I think Imlib requires a 'crop_and_clone_image' call, fully supported in the base (X11) APIs and should be fully supported for SVG.

5) Support for ICC, cHRM, sRGB and gAMA color correction.
   Windows: fully supported.
   Linux: ICC support is in an optional library which, IIRC, requires unpacking of the bitmap first. Should be fully supported for SVG.

6) Support for arbitrary colour channel modification [5x5 matrix].
   Windows: GDI: no. GDI+: a matrix is supported but I believe it does not linearise the colours, so it is effectively broken (I might be wrong).
   Linux: Might be in SVG - I can no longer remember.

GDI+ is just a layer on top of GDI which removes the need to deal with all the device dependence in GDI, so it works anywhere where GDI works. I don't consider APIs which only work on certain output devices to be a supported part of an Operating System, at least not in any useful sense. SVG is effectively an API equivalent to GDI+ or PostScript; in principle an SVG stream can be rendered to any output device.

John Bowler <jb...@ac...>
From: Adeluc <pn...@ad...> - 2007-05-02 23:49:15

> I didn't appreciate the fact that anIM/MPNG as presently proposed
> (20070426) can be realized entirely with commonly available blit-type
> operations, until your recent explanation.
>
> It's not a stated goal, but maybe should be. Therefore, unless
> someone can come up with a channel-modification method that can be
> done with a blit-type operation, we should forget the alpha-multiplier
> idea, no matter how "nice-to-have" it is. My "deck" demo shows that
> the effect can be crudely achieved without it.
>
> Gerard, maybe your chart should have a line "realizable with BLIT".

Me also, I did not like the fact that the tile_alpha cannot be applied with a blit operation. However, until you can point me to a function that does alpha blend with a not alpha pre-multiplied tile image into the composed frame buffer, we will need to use a custom function anyway, and then the support of the tile_alpha will add no overhead.

A possible solution I found is to use a black rectangle of 1x1, since the AlphaBlend() function can stretch it: you set the black rectangle with an alpha channel set to (255 - tile_alpha), then alpha blend it over a copy of the tile image. You will get the correct RGB, but you will need to fix the resulting alpha channel by subtracting (255 - tile_alpha).

After the RGBA( 0, 0, 0, (255 - tile_alpha) ) rectangle is blended over the tile:

    Tile.Red   = 0 + Tile.Red   * (255 - (255 - tile_alpha)) / 255;
    Tile.Green = 0 + Tile.Green * (255 - (255 - tile_alpha)) / 255;
    Tile.Blue  = 0 + Tile.Blue  * (255 - (255 - tile_alpha)) / 255;
    Tile.Alpha = (255 - tile_alpha) + Tile.Alpha * (255 - (255 - tile_alpha)) / 255;

After subtracting (255 - tile_alpha) from the resulting alpha channel:

    Tile.Red   = Tile.Red   * tile_alpha / 255;
    Tile.Green = Tile.Green * tile_alpha / 255;
    Tile.Blue  = Tile.Blue  * tile_alpha / 255;
    Tile.Alpha = Tile.Alpha * tile_alpha / 255;

From here the tile is now pre-multiplied, so you can use the typical AlphaBlend() function to place the tile over the composed frame buffer:

    Dst.Red   = Tile.Red   + (1 - Tile.Alpha) * Dst.Red
    Dst.Green = Tile.Green + (1 - Tile.Alpha) * Dst.Green
    Dst.Blue  = Tile.Blue  + (1 - Tile.Alpha) * Dst.Blue
    Dst.Alpha = Tile.Alpha + (1 - Tile.Alpha) * Dst.Alpha

A possible optimization in Windows is to use DirectX and program a pixel shader to do all those calculations at hardware level, but I do not have the knowledge for that.

Resume:
1. Copy the tile into a temporary buffer.
2. Set the black pixel rectangle alpha to (255 - tile_alpha).
3. AlphaBlend() the black pixel rectangle over the temporary buffer.
4. Fix the alpha channel of the temporary buffer by subtracting (255 - tile_alpha).
5. AlphaBlend() the temporary buffer over the frame buffer.

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
From: Glenn Randers-P. <gl...@co...> - 2007-05-02 21:57:25

At 02:25 PM 5/2/2007 -0700, John Bowler wrote:
>From: Glenn Randers-Pehrson
>>Your proposal still isn't a MNG replacement without some way
>>of changing the amount of offset each time around the loop.
>
>Indeed, and the save/restore interacts in potentially bad ways with the
>loop, and there is no support for delta PNG. I don't see any value in
>reconstructing the MNG design, it took a long time and, so far as I am
>concerned, has no major deficiencies. I don't see any value in
>constructing anything which is not minimal - either the minimal
>enhancement to GIF to meet stated requirements (PiG) or a minimal
>compact animation format with no extra oh-so-nice features (MPNG).

I agree. I didn't appreciate the fact that anIM/MPNG as presently proposed (20070426) can be realized entirely with commonly available blit-type operations, until your recent explanation.

It's not a stated goal, but maybe should be. Therefore, unless someone can come up with a channel-modification method that can be done with a blit-type operation, we should forget the alpha-multiplier idea, no matter how "nice-to-have" it is. My "deck" demo shows that the effect can be crudely achieved without it.

Gerard, maybe your chart should have a line "realizable with BLIT".

Glenn
From: John B. <jb...@ac...> - 2007-05-02 21:28:48

From: Glenn Randers-Pehrson
>Your proposal still isn't a MNG replacement without some way
>of changing the amount of offset each time around the loop.

Indeed, and the save/restore interacts in potentially bad ways with the loop, and there is no support for delta PNG. I don't see any value in reconstructing the MNG design; it took a long time and, so far as I am concerned, has no major deficiencies. I don't see any value in constructing anything which is not minimal - either the minimal enhancement to GIF to meet stated requirements (PiG) or a minimal compact animation format with no extra oh-so-nice features (MPNG).

John Bowler <jb...@ac...>
From: Adam M. C. <png...@ni...> - 2007-05-02 20:52:53

I wrote:
> ANG-7 is more like a midpoint between APNG and anIM. I've tried to
> keep the most important advantages of both: automatic fallback to a
> single frame in browsers, and a clear distinction between still images
> and animations as indicated by filename extensions and media types.

I should have said "a clear distinction between single-image and multi-image formats". That's the distinction that ANG and anIM both maintain. As for the distinction between still images and animations, anIM would actually blur it by making PNG an animated-single-image format, which is clever but comes at the cost of single-frame automatic fallback.

Glenn Randers-Pehrson <gl...@co...> wrote:
> The main shortcoming that I see in ANG-7 is that the montage is all in
> one chunk.
>
> Since ANG is not PNG, you don't have the sequencing issue. You could
> just declare that a sequence of montage chunks must be kept in order.

I could consider that. The concern is that people might want to bypass the media-type-checking to use PNG tools on ANG streams in cases where it ought to be safe (to add ancillary chunks, for example). I suppose we could advise users to run their ANG files through an adAT-combiner tool before letting any PNG editor touch it. And maybe advise encoders to output a single adAT whenever it's not a burden to do so. Any other thoughts on this issue?

AMC
From: Glenn Randers-P. <gl...@co...> - 2007-05-02 20:46:12

At 09:07 AM 4/26/2007 -0400, Glenn Randers-Pehrson wrote:
>I updated the aNIM chunk proposal to version 20070426.
>Removed the empty section on recommendations for decoders.

I think it's still worth saying something here about displaying an interlaced image while it's still being decoded. If the offsets aren't evenly divisible by 8, and either the "sparkle" or "block" method of constructing the PNG image is used, the tiles are likely to look a bit strange. I would recommend only using "interpolation" of the PNG image. If that is beyond the viewer's capability, then I would recommend waiting until the PNG image is fully decoded before starting the animation.

This observation applies to APNG, ANG, and MNG as well.

Glenn
From: Glenn Randers-P. <gl...@co...> - 2007-05-02 20:30:52

Your proposal still isn't a MNG replacement without some way of changing the amount of offset each time around the loop.

Glenn
From: John B. <jb...@ac...> - 2007-05-02 16:41:08

From: Glenn Randers-Pehrson
>Can loops be nested?

Yes. I would disallow arbitrary 'goto'.

John Bowler <jb...@ac...>
From: Glenn Randers-P. <gl...@co...> - 2007-05-02 13:53:44

At 08:50 AM 5/1/2007 -0700, John Bowler wrote:
>6) loop(frame, times)
>
>'loop' replaces the overall loop count, it causes the animation to loop
>back to the given frame the given number of times.

Can loops be nested?

Glenn
From: Glenn Randers-P. <gl...@co...> - 2007-05-02 13:14:39

The main shortcoming that I see in ANG-7 is that the montage is all in one chunk. In anIM and APNG they can be split, like the IDAT chunks.

Since ANG is not PNG, you don't have the sequencing issue. You could just declare that a sequence of montage chunks must be kept in order.

Glenn
From: Adam M. C. <png...@ni...> - 2007-05-02 06:36:52

Since both APNG and anIM allow frames to be constructed by compositing multiple images, and since none of the implementors here seem the slightest bit put off by this, but instead seem enthusiastic about the potential compression gains, I've added this capability to ANG. I've expressed it in a slightly different way, however, distinguishing between layers and frames, for reasons given in the Rationale section.

ANG-7 is more like a midpoint between APNG and anIM. I've tried to keep the most important advantages of both: automatic fallback to a single frame in browsers, and a clear distinction between still images and animations as indicated by filename extensions and media types.

I am still hopeful that the PNG folks and the Mozilla folks can meet each other halfway, rather than go off in divergent directions.

AMC

Animated Network Graphics (ANG), draft 7 (2007-May-01-Tue)
Adam M. Costello
http://www.nicemice.net/amc/

Changes from draft 6

    Introduced the concepts of layer and substrate, to allow frames to be constructed by compositing multiple images, like in APNG and anIM, with the goal of improving compression. This required the addition of a section explaining alpha-over-alpha compositing, expansion of the remarks about frame numbering, and the addition of remarks about the lossyness of frames contrasted with the losslessness of layers.

    Moved the discussion of the rejected media type out of the Rationale section and into an editorial comment that would not be included in a final draft.

Acknowledgements

    Several good ideas have been taken from the PNG mailing list (currently png...@li...).

Contents

    Goals
    Relationship to PNG
    Datastream tagging
    Conceptual model
    Datastream format
    Rationale

Goals

    1) Capabilities comparable to animated GIF, plus the added features of PNG (like 24-bit color and alpha).

    2) Automatic fallback to PNG in existing web browsers, using the <img> tag, showing an author-selected single frame instead of the animation.
    3) Respect for the PNG specification and existing PNG applications and users, to the extent possible given goal 2.

    4) Simplicity.

    5) Compression at least as good as in animated GIF, or even better if possible in a simple format. The compression need not rival that of a complex format like MNG.

Relationship to PNG

    ANG is not PNG, but it is deliberately very similar. PNG contains a single still image, whereas ANG contains both a still image and an animation. The ANG datastream format is identical to the PNG datastream format (including the signature) except that an ANG datastream must contain an ahDR chunk before IDAT and must contain an adAT chunk after IDAT, whereas a PNG datastream must not contain these chunks, because the PNG specification prohibits multiple images in a PNG datastream.

    The still image in an ANG serves two purposes:

    1) A fallback for applications or display technologies (like paper) that do not support animation.

    2) A source image to be used (optionally) in the animation, in addition to the montage in adAT.

    Unlike PNG and GIF, ANG is not a fully streamable format. ANG encoders cannot produce ANG in a streaming fashion because the frame data is contained in a single chunk. ANG decoders can or cannot consume ANG in a streaming fashion depending on how encoders choose to lay out the data. There is typically a trade-off between streamability and compression.

Datastream tagging

    Because both PNG and ANG use the same signature, it is important that ANG be tagged correctly. Its media type is "video/x-ang". The media type "image/png" must not be used for ANG, because ANG is not PNG, and because a video is not an image.

    [[ If/when this media type is registered, the "x-" prefix will be removed. ]]

    The recommended file extension for ANG is ".ang".
    The extension ".png" should never be used for ANG, because it is important that users be able to easily distinguish PNG and ANG, so that they are not surprised if a PNG viewer does not show the animation in an ANG, or if a PNG editor drops the animation from an ANG.

    The deliberate similarity between the ANG and PNG formats facilitates incremental deployment of ANG, with automatic fallback to PNG. When ANG-unaware PNG-aware applications are fed an ANG datastream, they will misinterpret the ANG as a PNG, ignore the unrecognized ahDR and adAT chunks, and display the still image. This is potentially confusing to users, but hopefully the media type and the filename will mitigate that hazard. For example, this HTML inline image will display as a still-image PNG in most ANG-unaware web browsers, even though the URL ends in ".ang" and the HTTP server tags it as "video/x-ang":

        <img src="http://example/foo.ang">

    [[ I have verified this for Firefox 1.5 & 2.0, IE 6 & 7, Safari, Konqueror 3.1, and Opera Mini. To help people test other browsers, I have created http://www.nicemice.net/amc/test/ang.html, which contains three instances of the same inline PNG image, one served as "image/png; x-anim=1", one served as "video/x-ang", and one served as "application/octet-stream". The first of those types might, in theory, be more likely than "video/x-ang" to facilitate automatic fallback, because RFCs 2045 and 2046 require that unrecognized media type parameters be ignored. However, "video/x-ang" works just fine in practice, so the param-hackery is not needed. ]]

    The distinct media types for ANG and PNG allow greater control over the fallback, if desired.
    For example, if you wanted ANG-unaware web browsers to fall back to animated GIF rather than still PNG, you could do something like this:

        <object data="foo.ang" type="video/x-ang">
        <img src="foo.gif">
        </object>

    ANG-aware decoders should use the following logic to determine whether a datastream beginning with the PNG signature is PNG or ANG:

    1) If a media type is available, trust it.

    2) Otherwise, if a filename is available, and it ends with ".png" or ".ang" (or any capitalization thereof), trust it.

    3) Otherwise, if ahDR is present, assume ANG.

    4) Otherwise, assume PNG.

    Since ahDR and adAT are invalid in PNG, they are errors if encountered in a PNG datastream. Decoders that recognize them should treat them like any erroneous ancillary chunks: ignore them, and notify the user if appropriate. For this particular error, the notification could perhaps suggest that the user rename or re-tag the file if possible. Of course decoders that do not recognize these chunks will just ignore them.

Conceptual model

    An ANG datastream encodes a still image, just like PNG, and also encodes an animation, which is a sequence of images, called frames, all the same width and height as the still image, which are to be displayed consecutively in the same place, each for a nonzero duration indicated in the datastream (but interactive applications should allow the user to pause or jump to the next frame at any time).

    Each frame of the animation is the result of stacking zero or more constituent images, called layers, in front of a default image, called the substrate. The substrate has the same width, height, and position as the frames, and is uniformly filled with a single pixel value. The layers can be smaller than a frame, and they always lie completely within the frame boundary. Each layer is a positioned and clipped copy of either the still image or a second image called the montage.
    The layers and the substrate do not necessarily hide what lies behind them, because pixels can be transparent or partially transparent.

    The pixel value used to fill the substrate is different for different color types: For color type 3 (indexed-color), it is palette index 0. For color types 0 and 2 (non-indexed without alpha), it is the value of the tRNS chunk if present, otherwise all zeros (black). For color types 4 and 6 (non-indexed with alpha), it is all zeros (fully transparent black).

    All meta-data (chunk fields) that apply to the still image in IDAT also apply to the montage in adAT, with only three exceptions: The width, height, and interlace method in IHDR do not apply to the montage. The montage has its own width, height, and interlace method given in ahDR. All meta-data that apply to each pixel of the still image and the montage (like color type, bit depth, significant bits, palette, color space, physical size) also apply to each pixel of the layers and the substrate.

    The still image and montage are represented losslessly in an ANG datastream. Since the layers are simply positioned and clipped copies of those images, they are also represented losslessly. The frames, however, are represented as compositions of images, which can be lossy. For datastreams with alpha channels (color types 4 and 6), the composition involves gamma decoding and alpha blending (and perhaps gamma re-encoding), which are subject to floating-point round-off errors and slight differences in implementation. For datastreams without alpha channels (color types 0, 2, and 3), the composition involves only simple pixel replacement, and the frames are lossless.

Frame and layer numbering

    Frame and layer numbering is specified here for consistency among applications that allow users to refer to particular frames and layers. The frame and layer numbers are not used inside ANG datastreams.

    The frames of the animation are numbered starting with 1.
    Frame 0 refers to the still image, which unlike the animation frames does not have the substrate underlying it. An animation can request to be played more than once, but this does not affect the frame count.

    The layers of the animation are numbered starting with 3. Layer 0 refers to the still image. Layer 1 refers to the montage. Layer 2 refers to the substrate. A frame can inherit the layers of the previous frame and add new layers in front, but in this case only the new layers get new layer numbers; the inherited layers keep their original layer numbers.

    Layers within a frame can also be numbered relative to the frame, starting with 0 for the back-most layer. For example, if frame 2 is composed of layers 5 and 6, and frame 3 inherits the layers from frame 2, then frame-2-layer-0 equals frame-3-layer-0 equals layer 5.

Datastream format

    See the PNG specification for all aspects of the ANG datastream format except the ahDR and adAT chunks, which are specified here.

    [[ If/when these chunks are registered, the second letter of each will be capitalized. ]]

    ahDR must appear exactly once, before IDAT. It contains:

        num_frames (4 bytes, unsigned)
            The number of frame specifiers in adAT.

        ticks_per_second (4 bytes, unsigned)
            Defines the time unit for frame durations. If this is zero, all frame durations are infinite.

        num_plays (4 bytes, unsigned)
            Number of times to play the animation. Zero means infinity.

        montage_width (4 bytes, unsigned)
            Width of the montage in adAT, in pixels. Not zero.

        montage_height (4 bytes, unsigned)
            Height of the montage in adAT, in pixels. Not zero.

        montage_interlace_method (1 byte)
            Interlace method used by the montage in adAT.

        still_image_used (1 byte, boolean)
            Must be 0 or 1. If 0, the still image is not used in the animation, and need not be decoded in order to display the animation, and the layer specifiers in adAT do not include from_still_image fields.
            If 1, the still image may be used in the animation, and each layer specifier in adAT includes a from_still_image field.

    adAT must appear exactly once, after IDAT. It contains a compressed stream (using the compression method indicated in IHDR), which contains a sequence of num_frames frame specifiers, immediately followed by a montage. Each frame specifier contains:

        frame_duration (4 bytes, unsigned)
            Duration of the frame, in ticks. Zero means infinity.

        keep_prior_layers (1 byte, boolean)
            Must be 0 or 1. If 0, this frame has layers indicated by its own layer specifiers and no others. If 1, the layers indicated by this frame's layer specifiers are added (in front) to the stack of layers inherited from the previous frame. For the first frame (frame 1), the inherited stack is empty (has zero layers). Even if the animation loops, the first frame does not inherit layers from the last frame of the previous loop.

        num_layers (1 byte, unsigned)
            The number of layer specifiers for this frame.

        layer_specifiers (num_layers * (24 + still_image_used) bytes)
            A sequence of num_layers layer specifiers, in order from back to front.

    Each layer specifier contains:

        from_still_image (0 or 1 byte, boolean)
            This field appears if and only if the still_image_used field of ahDR is 1. If from_still_image is 0 or absent, the montage is the source image for this layer. If from_still_image is present and 1, the still image is the source image for this layer. No other values are allowed.

        shift_left (4 bytes, signed)
        shift_up (4 bytes, signed)
        clip_left (4 bytes, signed)
        clip_top (4 bytes, signed)
        clip_width (4 bytes, unsigned)
        clip_height (4 bytes, unsigned)
            The layer is derived from the source image as follows.
            Starting with the source image positioned with its upper-left corner aligned with the upper-left corner of the frame, the source image is shifted shift_left pixels to the left and shift_up pixels upward, then it is clipped to both the clip boundaries and the frame boundaries, where clip_left and clip_top are the offsets (in pixels) from the upper-left corner of the frame to the upper-left corner of the clip rectangle, and clip_width and clip_height are the dimensions (in pixels) of the clip rectangle. Note that some of these fields are signed and can be negative.

            The width and height of the layer are the width and height of the overlap of the frame rectangle and the clip rectangle. If the two do not overlap, then the width and height of the layer are zero, which is not a problem for displaying the animation, but is an error when extracting layers as PNG datastreams, because a PNG image is required to have nonzero width and height. Therefore encoders should not specify zero-area layers (which are pointless anyway).

    Immediately following the sequence of frame specifiers is:

        montage (bytes)
            Filtered scanlines. The montage is formatted exactly like the uncompressed contents of IDAT chunk data, except that it uses the width, height, and interlace method indicated in ahDR rather than the ones in IHDR.

    Typically the best compression is obtained when the montage is very wide and not very tall, with similar layers adjacent; however, this layout makes it necessary for the decoder to decode the entire montage before it can display even the first frame. If the montage is taller and less wide, and earlier layers appear closer to the top, it becomes more possible for the decoder to display as it decodes, but the compression is likely to suffer.

Notes on layer composition

    To display a substrate and layers in front of a background, there are two approaches.
One way is to composite the substrate over the background, then
composite the back-most layer over the result, and so on, performing
the compositing as described in the PNG specification. Another way is
to composite the substrate and the layers first (from back to front) to
yield the frame image, then composite the frame image over the
background. The second approach allows the frame image to be exported,
or cached for re-use in case the background changes.

The second approach can involve compositing over a not-fully-opaque
image, but the PNG specification does not say how to do that. For
images without alpha channels, it is trivial: just keep the front pixel
or the back pixel, depending on whether the front pixel is transparent.
For images with an alpha channel, it can be done as follows.

1. Normalize all the alpha samples to the range [0,1].

2. Gamma-decode all the non-alpha samples (or undo the more
   sophisticated transfer function indicated by sRGB or iCCP) to yield
   samples that are proportional to light intensity.

3. Multiply every non-alpha sample by the alpha sample from the same
   pixel (that is, convert to premultiplied form).

4. Store the substrate in an output buffer.

5. For each layer (in order from back to front), for each pixel in the
   output buffer:

   5a. Let A be the alpha sample of the layer pixel.

   5b. For each channel (including the alpha channel), let
       output_sample = output_sample * (1 - A) + layer_sample

At this point the output buffer contains a non-gamma-encoded
premultiplied frame image ready to be composited over a background. If
the frame image is to be exported, it may be desirable to perform
additional steps:

6. Divide each non-alpha sample by the alpha sample of the same pixel
   (that is, convert to non-premultiplied form).

7. Gamma-encode all the non-alpha samples (or encode them using a more
   sophisticated transfer function).

Gamma encoding and premultiplication are not commutative--the order of
steps 2, 3, 6, and 7 is significant.
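[Editor's sketch, not part of the draft: the per-pixel work of steps
3-5 above, written out in Python. Samples are assumed to be floats
already normalized and gamma-decoded as in steps 1-2; the function
names are invented for illustration.]

```python
def premultiply(rgba):
    """Step 3: scale the colour samples by the pixel's alpha.
    Samples are floats already normalized to [0, 1] (steps 1-2)."""
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)

def over(back, front):
    """Steps 5a-5b: composite one premultiplied pixel over another.
    Every channel, including alpha, uses the same formula."""
    a = front[3]
    return tuple(bs * (1.0 - a) + fs for bs, fs in zip(back, front))

def compose_frame(substrate, layers):
    """Steps 4-5 for a single pixel position: start from the substrate
    pixel and fold the layer pixels in, back to front."""
    out = substrate
    for layer in layers:
        out = over(out, layer)
    return out
```

Because everything is premultiplied, an opaque front pixel simply
replaces the back pixel and a fully transparent one leaves it
unchanged, with no special cases.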
Rationale

Putting all the frame data in one chunk avoids the complication of how
to deal with reordering by ANG-unaware PNG editors. The cost is encoder
streamability, but that capability of animated GIF is not used in
practice. If encoder streamability is needed, MNG is available.

Separating the concepts of layer and frame, rather than speaking of
"zero-duration frames", is more consistent with existing animation
terminology. It also helps clarify what is and is not lossless, and
avoids tempting decoder implementors to momentarily display
partially-constructed "frames" that were never meant to be displayed.

The use of alpha composition within the animation (between layers)
rather than just between the animation and the external background adds
complication that is not strictly necessary, because the encoder could
precompute the composed frames and include them in the montage. On the
other hand, it can improve compression for animations that can be
modeled as sprites moving over each other (possibly with
semi-transparent regions and anti-aliased edges), and the general
alpha-over-alpha compositing is not much different from the
alpha-over-opaque-background compositing that PNG decoders already
know.

The substrate concept avoids awkward specifications of how to composite
an image that lies partly over something and partly over nothing. It
avoids questions of what the frame dimensions are if the bounding box
of all the layers is smaller than the frame. It ensures that every
frame has the same dimensions, which is what people expect. Finally, it
lets ANG preserve a property of PNG that the background can show
through only if an alpha channel or tRNS is present.

The interlace method of the montage is independent of the interlace
method of the still image because interlacing is less useful for
animations.

End of draft.
From: John B. <jb...@ac...> - 2007-05-01 22:42:12
From: Adeluc

>Because the goal was to make anIM to be extremely simple to support and
>attractive?

Well, it was; now you've added a nightmarish requirement for every
decoder to be able to perform manipulations on the RGBA values of
individual pixels in the tiles.

This is no longer KISS.

John Bowler <jb...@ac...>
From: Adeluc <pn...@ad...> - 2007-05-01 22:34:19
> Like I said some time before about APNG, if we're going to invent a new
> animation format with extra capabilities this is not the way I'd want to
> see it done. As a minimum I would also add a 'clear' operation to set a
> rectangular region in the frame to RGBA(0,0,0,0) and 'save' and
> 'restore' operations.

The beauty of the current anIM specification is that it is extremely
simple. It is so simple that we are creating our animation samples with
a text script. Each frame is independent of the others, which removes
all the complexity from creation.

Adding the tile_alpha simply adds more creative flexibility and makes
anIM more attractive for widespread adoption, without adding any extra
decoding work or anything extra to understand. Adding the tile_alpha
was simply a wise choice, since the current anIM implementation must
support alpha-channel blending anyway.

> These can simply be added by changing each tile element to be variable
> length and adding an 'opcode'. Current values I can see to be a
> complete and more powerful superset of PiG, anIM and APNG are:
>
> 1) blit(rectangle, dest) - the current operation
> 2) display(delay) - marks the end of the frame
> 3) save - saves the current frame buffer in current state
> 4) restore - restores same
> 5) clear(rectangle colour) - sets rectangle to colour
> 6) loop(frame, times)
>
> 'loop' replaces the overall loop count, it causes the animation to loop
> back to the given frame the given number of times.

What you are suggesting is, compared to the current simple anIM
specification, a nightmare. I am already able to do everything you
suggested above at the cost of only a few extra tiles, which compress
very well. On the other hand, if we remove the tile_alpha, then to
accomplish what the tile_alpha can accomplish we need to add more data
to IDAT, and that data does not compress well and is significant in
size.

> These operations are all really simple to implement - save and restore
> just require a single copy of the current frame, clear is easy.
>
> *All* these things are easy - so why not do them?

Because the goal was to make anIM extremely simple to support and
attractive?

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
From: John B. <jb...@ac...> - 2007-05-01 16:05:02
From: Adeluc

>In other words, what you are saying is to make the anIM specification
>OS dependent, to accommodate existing OSes that have some poorly
>implemented alphablend() functions.

Eh, that is what *you* (Adeluc) are saying. You are saying (incorrectly,
BTW, as you are using Windows) that, because your OS doesn't support
PNG properly (it does) by handling the alpha correctly using the native
support, *you* have to write AlphaBlend. Therefore you are prepared to
add functionality to anIM which will require everyone to write
AlphaBlend.

>As I already wrote in a previous e-mail, a lot of those OS-dependent
>alphablend() functions require a pre-multiplied source, and they will
>simply not work anyway with the tiles in IDAT to compose the buffer.

Eh... if you have an operating system function which reads a PNG file
and produces a native object (e.g. an in-memory bitmap) which then
can't be used in the operating system's alpha blend, you have a broken
operating system. Using broken systems as justification for doing
things which are only easy if you are writing the pixel-level bit
twiddling yourself is simply wrong.

I seem to remember having this argument before about the buffers in
MNG; that's why we ended up with 'abstract' and 'concrete' buffers -
the concrete ones support bit twiddling, but can't necessarily be
implemented using native OS facilities.

John Bowler <jb...@ac...>
From: John B. <jb...@ac...> - 2007-05-01 15:54:10
From: Glenn Randers-Pehrson

>Addition to the proposal:

Well, as of this moment I'll vote against anIM, after the two-week
discussion period is up.

Like I said some time before about APNG, if we're going to invent a new
animation format with extra capabilities this is not the way I'd want
to see it done. As a minimum I would also add a 'clear' operation to
set a rectangular region in the frame to RGBA(0,0,0,0) and 'save' and
'restore' operations.

These can simply be added by changing each tile element to be variable
length and adding an 'opcode'. Current values I can see to be a
complete and more powerful superset of PiG, anIM and APNG are:

1) blit(rectangle, dest) - the current operation
2) display(delay) - marks the end of the frame
3) save - saves the current frame buffer in its current state
4) restore - restores same
5) clear(rectangle, colour) - sets rectangle to colour
6) loop(frame, times)

'loop' replaces the overall loop count; it causes the animation to loop
back to the given frame the given number of times.

These operations are all really simple to implement - save and restore
just require a single copy of the current frame; clear is easy. *All*
these things are easy - so why not do them?

Still, I think I've just re-invented MNG in a new format, so I'd
probably vote against this too.

John Bowler <jb...@ac...>
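[Editor's sketch, not part of the thread: the six opcodes proposed
above driven by a toy interpreter. All names are invented, the frame
buffer is a plain dict of pixels, and 'loop' and the per-pixel alpha
math are omitted; this only illustrates how little state the operations
need.]

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    """Toy interpreter for the proposed opcodes. 'frame' maps (x, y)
    to an RGBA tuple; 'saved' backs the save/restore pair."""
    frame: dict = field(default_factory=dict)
    saved: dict = field(default_factory=dict)
    displayed: list = field(default_factory=list)

    def blit(self, pixels, dest):
        # Copy a rectangle of source pixels to the destination offset
        # (no alpha composition in this toy version).
        dx, dy = dest
        for (x, y), rgba in pixels.items():
            self.frame[(x + dx, y + dy)] = rgba

    def display(self, delay):
        # Mark the end of a frame: snapshot the buffer with its delay.
        self.displayed.append((dict(self.frame), delay))

    def save(self):
        self.saved = dict(self.frame)

    def restore(self):
        self.frame = dict(self.saved)

    def clear(self, rect, colour=(0, 0, 0, 0)):
        # Set a rectangular region to a colour (default RGBA(0,0,0,0)).
        x0, y0, w, h = rect
        for x in range(x0, x0 + w):
            for y in range(y0, y0 + h):
                self.frame[(x, y)] = colour
```

As the email argues, save and restore are a single buffer copy and
clear is a fill; the interpreter's state is one frame buffer plus one
saved copy.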
From: Adeluc <pn...@ad...> - 2007-05-01 15:53:32
> The work to blit an image is already supported in any reasonable
> graphics library, the functionality is required by any SVG decoder and
> many display devices have device-level alpha composition support
> (possibly broken) which can do the blit independently of the main CPU.
>
> Manipulating the information in the PNG image on a per-pixel basis
> requires, at minimum, setting up non-default parameters for the blit and
> may, in fact, require getting at the individual pixels and doing it
> yourself.

In other words, what you are saying is to make the anIM specification
OS dependent, to accommodate existing OSes that have some poorly
implemented alphablend() functions.

As I already wrote in a previous e-mail, a lot of those OS-dependent
alphablend() functions require a pre-multiplied source, and they will
simply not work anyway with the tiles in IDAT to compose the buffer.

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
From: Adeluc <pn...@ad...> - 2007-05-01 15:47:24
> I guess that should be
>
>  combined_pixel_alpha(x,y) = pixel_alpha(x,y) * (tile_alpha / 255.0)
>
> so the multiplier doesn't come out zero unexpectedly.

To speed things up I always try to avoid floats when possible. So, to
avoid rounding to zero, perform all the multiplications first and keep
the division for the end:

 combined_pixel_alpha(x,y) = ((pixel_alpha(x,y) * tile_alpha) + 127) / 255;

> Is the situation (tile_alpha == 0) really useful? It's basically a no-op,
> right? So why not make the calculation:

A value of 0 is useful if you want to fade an image out until only the
canvas remains, while keeping the generation of tile descriptions
simple. That lets you set a delay for a period where the animation is
invisible. I used it with the spinning stereo pipe cube animation
sample. I really do not like the idea of not supporting tile_alpha == 0
simply to write the formula differently.

In my implementation of a custom AlphaBlend() function I split it into
3 cases for fast calculation loops:

 tile_alpha == 0: nothing to do;
 tile_alpha == 255: no pre-calculation;
 tile_alpha == [1,254]: do the pre-calculations before blending.

 _________
/ Adeluc /
¯¯¯¯¯¯¯¯¯
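[Editor's check, not from the thread: the integer formula above
(multiply first, add 127 for rounding, divide by 255 last) produces
exactly the rounded value of the floating-point version quoted from
Glenn. Function names are invented.]

```python
def combined_alpha_int(pixel_alpha, tile_alpha):
    # Integer-only: multiply first, +127 rounds to nearest, /255 last.
    return ((pixel_alpha * tile_alpha) + 127) // 255

def combined_alpha_float(pixel_alpha, tile_alpha):
    # Reference version with a floating-point multiplier.
    return round(pixel_alpha * (tile_alpha / 255.0))
```

The product of two 8-bit values is never exactly halfway between
multiples of 255, so the +127 bias rounds to the same nearest integer
as the float version for every pair of 8-bit inputs.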
From: John B. <jb...@ac...> - 2007-05-01 15:43:54
From: Glenn Randers-Pehrson [mailto:gl...@co...]

>I don't think it adds *any* complexity other than a little bit of code
>to read the extra byte and apply the alpha to the tile. Once the tile
>has been recovered from the PNG image and the constant alpha applied,
>from there on it can be handled exactly as if the tile had had its own
>alpha per pixel.

It is not necessary to "recover the tile from the PNG image". These are
basic operations of the bitmap-supporting libraries I know about, and
are basic operations in SVG:

1) Load a PNG dataset (from a file or stream).
2) Blit a rectangular area of the loaded dataset either onto a device
   or into an intermediate bitmap.

(2) includes alpha composition - a library which loads PNG (like, for
example, GDI+ on Windows) has to support alpha. In other words most of
the work of implementing anIM is in decoding the playlist; apart from
that it comes down to:

   png_image = load_png_data(png_file);
   frame = NULL;
   frame_list = NULL;

   for (tile = tile_info; tile != NULL; tile = tile->next)
   {
      if (frame == NULL)
         frame = create_bitmap(frame_width, frame_height);

      blit(png_image, tile->x, tile->y, tile->width, tile->height,
           frame, tile->dest_x, tile->dest_y);

      if (tile->delay != 0 || tile->next == NULL)
      {
         new_frame = malloc(sizeof *new_frame);
         new_frame->delay = tile->delay;
         new_frame->bitmap = frame;
         new_frame->prev = frame_list;
         frame_list = new_frame;
         frame = NULL;   /* the next tile starts a new frame */
      }
   }

Take a look at that! Most of the code is in handling the frame list
(which the above code produces in reverse order...) and, of course, the
error handling, which I left out, and the tile_info reading (anIM
chunk), which I also left out! The work to blit an image is already
supported in any reasonable graphics library, the functionality is
required by any SVG decoder and many display devices have device-level
alpha composition support (possibly broken) which can do the blit
independently of the main CPU.
Manipulating the information in the PNG image on a per-pixel basis
requires, at minimum, setting up non-default parameters for the blit
and may, in fact, require getting at the individual pixels and doing it
yourself.

That is not KISS.

John Bowler <jb...@ac...>
From: Gerard J. <gj...@xs...> - 2007-05-01 15:27:54
> I guess that should be
>
>  combined_pixel_alpha(x,y) = pixel_alpha(x,y) * (tile_alpha / 255.0)
>
> so the multiplier doesn't come out zero unexpectedly.

Is the situation (tile_alpha == 0) really useful? It's basically a
no-op, right? So why not make the calculation:

 new_pixel_alpha(x,y) = ((pixel_alpha(x,y) * (tile_alpha + 1)) + 128) >> 8;

Should we mention that the PNG image itself may not have any alpha
information defined, which means the tile_alpha simply creates a new
alpha layer for the image?

I don't think this really complicates the proposal. It's an
easy-to-implement feature with little overhead. Also, in today's world
it is quite common to apply a single opaqueness value to a blob of
pixels. There are plenty of examples where this is used to partially
hide application windows, blocks of HTML, etc, etc... It can even be
applied to a still PNG image as well as an animation!

Gerard
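[Editor's check, not from the thread: the shift-based formula above
trades the division by 255 for a shift by 8, using (tile_alpha + 1)/256
as an approximation of tile_alpha/255. Compared with exact rounded
scaling it never undershoots and overshoots by at most one count, with
the largest deviations at tile_alpha == 0 - consistent with Gerard
treating that value as not useful. Function names are invented.]

```python
def alpha_shift(pixel_alpha, tile_alpha):
    # The shift-based form proposed above: multiply, add 128, shift.
    return ((pixel_alpha * (tile_alpha + 1)) + 128) >> 8

def alpha_exact(pixel_alpha, tile_alpha):
    # Exact rounded scaling by tile_alpha / 255.
    return ((pixel_alpha * tile_alpha) + 127) // 255
```

For example, an opaque pixel at tile_alpha == 0 comes out as 1 from the
shift version but 0 from exact scaling; at tile_alpha == 255 both
return the pixel's alpha unchanged.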
From: Glenn Randers-P. <gl...@co...> - 2007-05-01 14:07:01
At 09:15 AM 5/1/2007 -0400, I wrote:

>The tile_alpha is a multiplier that the decoder applies to the alpha
>value for each pixel of the tile before compositing it:
>
> combined_pixel_alpha(x,y) = pixel_alpha(x,y) * (tile_alpha / 255)

I guess that should be

 combined_pixel_alpha(x,y) = pixel_alpha(x,y) * (tile_alpha / 255.0)

so the multiplier doesn't come out zero unexpectedly.

Glenn