#28 time taken by ImageToBlob

Milestone: v1.0_(example)
Status: closed
Owner: None
Priority: 5
Updated: 2016-04-23
Created: 2015-11-24
Creator: nikhil
Private: No

I have a YUV buffer that I want to composite with a BMP file. Currently I manually create a .yuv file from the buffer, then call CompositeImage, and after that call ImageToBlob to get the output buffer. But I found that ImageToBlob itself takes a long time to return the buffer. Below is the sequence I am following:

1. Create a .yuv file from the input buffer.
2. Call CompositeImage to composite a BMP input file with the .yuv file.
3. Call ImageToBlob to get the resultant buffer.

So, my questions are:
1. Can I composite a YUV buffer (not a .yuv file) with a BMP file using CompositeImage?
2. Is there any way to read the buffer produced by CompositeImage other than the ImageToBlob API?

Discussion

  • Bob Friesenhahn

    Bob Friesenhahn - 2015-11-24

    GraphicsMagick does have a way to represent pixels in YUV space (YUVColorspace). The Image colorspace member indicates the current colorspace. Before any composition can occur, the two images need to be converted to the same colorspace (using RGBTransformImage or TransformRGBImage). The YUV image could be constructed in memory from scratch using AllocateImage() and then accessed using the low-level pixel cache functions described at http://www.graphicsmagick.org/api/pixel_cache.html. GraphicsMagick is implemented based on these pixel cache functions.
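[Editor's sketch: to make the in-memory approach concrete, here is a small plain-C sketch (not GraphicsMagick code) of the plane geometry a copy loop between a raw YUV buffer and the pixel cache would need to compute. It assumes the common I420/YUV 4:2:0 planar layout with even dimensions, which is an assumption about the poster's buffer format.]

```c
#include <stddef.h>

/* Sketch of YUV 4:2:0 planar (I420) buffer geometry: a full-resolution
 * Y plane followed by quarter-resolution U and V planes.  Width and
 * height are assumed to be even.  These are the offsets a loop copying
 * a raw in-memory YUV buffer into (or out of) an image would use. */
typedef struct {
    size_t y_offset, u_offset, v_offset, total_size;
} I420Layout;

static I420Layout i420_layout(size_t width, size_t height)
{
    I420Layout l;
    size_t y_size = width * height;                 /* one Y per pixel      */
    size_t chroma_size = (width / 2) * (height / 2); /* one U,V per 2x2 block */
    l.y_offset = 0;
    l.u_offset = y_size;
    l.v_offset = y_size + chroma_size;
    l.total_size = y_size + 2 * chroma_size;        /* = width*height*3/2   */
    return l;
}
```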

    The output of CompositeImage is an Image structure which refers to a pixel cache which can be read/written using the pixel cache functions. The image in the pixel cache is represented as an array of PixelPacket or IndexPacket, depending on the indicated Image storage_class.

    Lastly, there are the Export (http://www.graphicsmagick.org/api/export.html) and Import (http://www.graphicsmagick.org/api/import.html) functions which are highly optimized interfaces to export and import pixel regions from some common representations from/to the pixel cache.

     
  • Bob Friesenhahn

    Bob Friesenhahn - 2015-11-24

    The best function to use for converting colorspaces is TransformColorspace(). Due to an oversight, this function is not currently documented on the web site.

     

    Last edit: Bob Friesenhahn 2015-11-24
  • nikhil

    nikhil - 2015-11-26

    1. Earlier I was using ImageToBlob to get the YUV420 output data buffer, as below:
    compBuf = ImageToBlob(bGImageInfo, bGImage, &bGImageInfo->length, &bGImage->exception);

    2. Now I am trying to make use of the pixel cache APIs to get the same output, but it looks like I am missing something needed to get the final YUV420 output. Below is the code snippet:

    MagickPassFail enStatus = MagickPass;
    ExportPixelAreaOptions export_options;
    ExportPixelAreaInfo export_info;

    ExportPixelAreaOptionsInit(&export_options);
    export_options.endian = LSBEndian;

    /* Three 8-bit samples per pixel. */
    bGImageInfo->length = bGImage->rows * bGImage->columns * 3;
    compBuf = (unsigned char *) malloc(bGImageInfo->length);

    long y;
    register const PixelPacket *p;

    (void) TransformColorspace(bGImage, RGBColorspace);

    printf("====> bGImage details after transform\n"
           "compression[%d]\nendian[%d]\ncolorspace[%d]\nmagick[%s]\nlength[%lu]\n\n",
           (int) bGImage->compression,
           (int) bGImage->endian,
           (int) bGImage->colorspace,
           bGImage->magick,
           (unsigned long) bGImageInfo->length);

    for (y = 0; y < (long) bGImage->rows; y++)
    {
        /* Make row y the current default pixel cache view. */
        p = AcquireImagePixels(bGImage, 0, y, bGImage->columns, 1,
                               &bGImage->exception);
        if (p == (const PixelPacket *) NULL)
        {
            enStatus = MagickFail;
            break;
        }
        /* Export the row as packed 8-bit RGB samples into the output buffer.
           MagickPass is 1 and MagickFail is 0, so accumulate status with &=. */
        enStatus &= ExportImagePixelArea(bGImage, RGBQuantum, 8,
                                         compBuf + (y * bGImage->columns * 3),
                                         &export_options, &export_info);
    }
    

    Requesting your help with this.
    3. I also tried to print the colorspaces of the input YUV and BMP files, but I found there is no separate YUV colorspace; both files report the RGB colorspace. So I think the colorspace for both files should be RGB only. Is that understanding correct?

     
  • nikhil

    nikhil - 2015-11-26

    Hi Bob,

    1. Earlier I was using ImageToBlob to get the output YUV420 data buffer.

    2. Now I am using the image pixel cache functions; the code snippet is attached in the file.
      I am still not able to read the YUV420 data that I was able to read via ImageToBlob.

    3. I also checked the colorspace of the YUV and BMP files and found that both are already in the RGB colorspace; there is no separate YUV colorspace for the YUV input file.

    Requesting your inputs on this.
    Also, please let me know if there is any sample code showing how the YUV data of an image is read from the Image structure.

     
  • nikhil

    nikhil - 2015-11-26

    I forgot to attach the file in my last post. Please find it attached.

     
  • Bob Friesenhahn

    Bob Friesenhahn - 2015-11-26

    GraphicsMagick is free software and you can look at the source code to see what it is doing.

    The YUV format reading/writing is in coders/yuv.c. YUV is mapped to the red, green, and blue members of PixelPacket. This is a subsampled format (half as many UV samples as Y samples in each dimension), so the chroma samples are copied into a separate (smaller) image and then scaled up using ResizeImage() and the triangle filter. Images within GraphicsMagick are not subsampled, so the re-scaled chroma is copied into the green and blue members of PixelPacket. Lastly, the image colorspace is marked as YCbCrColorspace and TransformColorspace() is used to convert the image data to RGB colorspace. As a result, you would never see direct YUV (YCbCr) samples from the YUV reader without modifying the GraphicsMagick source code (this feels like a weakness to me since it is no longer necessary).
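[Editor's sketch: the chroma up-scaling step described above can be illustrated in plain C. Note this is not GraphicsMagick's actual code, which uses ResizeImage() with a triangle filter; this sketch uses simple pixel replication so the resampling idea is visible.]

```c
#include <stddef.h>

/* Illustrative 4:2:0 chroma upsampling by replication: expand a
 * quarter-resolution chroma plane back to full resolution so that
 * every full-resolution pixel gets its own U (or V) sample, as the
 * non-subsampled PixelPacket representation requires.  Width and
 * height are the full-resolution dimensions, assumed even. */
static void upsample_chroma_2x(const unsigned char *src,  /* (w/2) x (h/2) */
                               unsigned char *dst,        /* w x h         */
                               size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++)
        for (size_t x = 0; x < w; x++)
            dst[y * w + x] = src[(y / 2) * (w / 2) + (x / 2)];
}
```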

    Likewise the YUV writer is converting from RGB space to YCbCr, and creating a down-sampled chroma image at half resolution (also using ResizeImage()) before outputting any YUV samples. This explains why you are not happy with ImageToBlob() performance.

    The considerable processing when reading/writing YUV likely explains why you are finding it to be slow. Software optimized for video (e.g. ffmpeg) surely includes many optimizations, even including assembly code in order to improve performance. Such software likely avoids any resampling unless absolutely required.

     

    Last edit: Bob Friesenhahn 2015-11-26
  • Bob Friesenhahn

    Bob Friesenhahn - 2015-11-26

    Regarding your question about reading the buffer from CompositeImage(), the pixels will be an array of PixelPacket in RGB format. These pixels may be read using the pixel cache APIs (as described previously). However, the pixels will not be in subsampled YUV format. After requesting to convert the colorspace to YCbCrColorspace (or Rec601YCbCrColorspace or Rec709YCbCrColorspace), the "YUV" values will be available via the PixelPacket red, green, and blue members but they will still not be the subsampled chroma format used in your YUV files. A special filter needs to be used to sample the chroma and produce the resampled equivalent. GraphicsMagick is using ResizeImage() with the triangle filter rather than using the specific filter specified by the Rec.601 or Rec.709 specifications. There is no "export" function which produces downsampled values.
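[Editor's sketch: for reference, the simplest possible chroma downsampling filter is a 2x2 box average, shown below in plain C. This is neither GraphicsMagick's triangle filter nor the filters specified by Rec.601/Rec.709, and it implicitly assumes the chroma sample is sited at the center of each 2x2 block, which is exactly the kind of siting choice discussed in the next comment.]

```c
#include <stddef.h>

/* Illustrative 4:2:0 chroma downsampling: average each 2x2 block of a
 * full-resolution chroma plane into one sample of a quarter-resolution
 * plane.  Width and height are the full-resolution dimensions, even. */
static void downsample_chroma_2x(const unsigned char *src, /* w x h         */
                                 unsigned char *dst,       /* (w/2) x (h/2) */
                                 size_t w, size_t h)
{
    for (size_t y = 0; y < h; y += 2)
        for (size_t x = 0; x < w; x += 2) {
            unsigned sum = src[y * w + x] + src[y * w + x + 1]
                         + src[(y + 1) * w + x] + src[(y + 1) * w + x + 1];
            /* +2 rounds to nearest when dividing by 4. */
            dst[(y / 2) * (w / 2) + x / 2] = (unsigned char)((sum + 2) / 4);
        }
}
```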

    I definitely recommend purchasing Charles Poynton's "Digital Video And HD, Algorithms And Interfaces" book since it discusses all of the details necessary to deal with video formats (including when converting to/from "computer" formats as used by GraphicsMagick) correctly. In particular, you will notice that subsampling can be very complicated and the results depend on when/where the chroma samples are taken. For example, the results will be very different if the chroma samples occur at the same time Y is sampled (i.e. co-sited), or if they are taken in between the Y samples. There are a great many YUV type formats and GraphicsMagick is only supporting one of them as a distinct format.

     
  • nikhil

    nikhil - 2015-12-04
    Post awaiting moderation.
  • nikhil

    nikhil - 2015-12-07

    Using the pixel cache I'm able to get YUV interleaved data, and I'm sub-sampling it into the YUV420 planar format manually. This reduced the processing time, but I need to reduce it further.

    I have two input files: one YUV file, and the other can be any image format (BMP, JPG, PNG, TIFF, ...). I composite the two images and then read the final YUV values after composition. Below are my findings:

    1. After reading both files, the colorspace remains RGB only.
    2. After reading, the magick member of the Image structure is "YUV" for the YUV file and "BMP" for the BMP file.

    So I believe there is no need to call TransformColorspace, as both images are already in the RGB colorspace. But I think that if we initially convert both images into a raw RGB pattern, it may further reduce the processing time. Is my understanding correct? If yes, please suggest a way to achieve this.

     
  • Bob Friesenhahn

    Bob Friesenhahn - 2015-12-07

    The current Image colorspace is indicated by the 'colorspace' member of the Image structure (not the 'magick' member). GraphicsMagick only has one simple representation of the image, which is via the PixelPacket structure (which seems like "raw RGB pattern" to me). The PixelPacket values need to be interpreted based on the current value of colorspace. For composition the colorspace between the two images needs to be the same. If you call TransformColorspace() and the image is already in the target colorspace it will simply return immediately.

    After doing the composition you will need to use TransformColorspace() to convert to your desired colorspace. The TransformColorspace() function computes matrices and uses lookup tables comprised of 32-bit 'float' values to convert from the input colorspace to the output colorspace. For some colorspaces it is likely possible to implement a faster conversion than TransformColorspace(); YUV is one of those colorspaces. When converting from RGB to YUV, the most important thing is accurate computation of luma (Y), since the other two channels are based on it. The luma computation differs between Rec.601 and Rec.709.

    As an example, the macro PixelIntensityRec601() from magick/colorspace.h computes intensity using Rec.601 math, and U and V can be computed based on that.
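[Editor's sketch: a minimal full-range ("JPEG-style") Rec.601 RGB to YCbCr conversion for 8-bit samples, using the same 0.299/0.587/0.114 luma weights that the PixelIntensityRec601() macro uses. This is an illustrative function, not GraphicsMagick code; studio-range ("video levels") YCbCr would additionally scale and offset the results.]

```c
/* Full-range Rec.601 RGB -> YCbCr for 8-bit samples.  Y is the luma
 * weighted sum; Cb and Cr are scaled color differences centered on 128.
 * The 0.564 and 0.713 factors are 0.5/(1-0.114) and 0.5/(1-0.299). */
static void rgb_to_ycbcr601(unsigned char r, unsigned char g, unsigned char b,
                            unsigned char *y, unsigned char *cb,
                            unsigned char *cr)
{
    double yd  = 0.299 * r + 0.587 * g + 0.114 * b;
    double cbd = 128.0 + (b - yd) * 0.564;
    double crd = 128.0 + (r - yd) * 0.713;
    *y  = (unsigned char)(yd + 0.5);
    *cb = (unsigned char)(cbd < 0 ? 0 : cbd > 255 ? 255 : cbd + 0.5);
    *cr = (unsigned char)(crd < 0 ? 0 : crd > 255 ? 255 : crd + 0.5);
}
```

For neutral grays the color differences vanish, so Cb and Cr both come out at the 128 midpoint.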

     
  • Bob Friesenhahn

    Bob Friesenhahn - 2016-04-23
    • status: open --> closed
     
