
Quad Bayer support?

  • Anonymous

    Anonymous - 2021-04-23

    Hello

    Is there any support for functionality using quad Bayer sensors? What I'm seeing is that my main camera, which has a quad Bayer sensor, is seen as two separate cameras (API 2). It would be cool if this functionality could somehow be used.

     
  • Anonymous

    Anonymous - 2021-04-24

    Quad Bayer sensors should support three modes: pixel binning, full resolution, and HDR. In HDR mode the sensor is split into two interleaved halves and each half is exposed differently. This is the main benefit of the quad Bayer structure, as it can capture a higher dynamic range in a single shot (not stacked HDR).

    So even just an option to manually set exposure values for the two split halves and take the pictures at the same time would help. These images could then be used to generate more controlled HDR images later in some proper software.

    I think this dual-sensor structure can also be used in video to capture a higher dynamic range. So any sort of manual control over this dual-sensor process would be nice.

    It seems quad Bayer is becoming the standard in high-resolution sensors, so implementing support for it would be essential.

     
  • Anonymous

    Anonymous - 2021-04-24

    Also pixel binning on/off modes, clearly differentiated so it's obvious when binning is used and when it's not.

     
  • Mark

    Mark - 2021-04-26

    What device is this, and what are the two cameras? As in, the two sets of interleaved sensors are exposed as two separate cameras?

    For the HDR, in theory it may be possible to open both cameras and take photos at different exposures. The main problem is that it's a lot of work for something that for now seems device-specific (I'm not sure many devices expose quad Bayer cameras like this, and it's difficult to know what the particular configuration of cameras is). And although it eliminates any risk of ghosting effects, it means there are only two exposures instead of the usual three that Open Camera takes.

     
  • Anonymous

    Anonymous - 2021-04-27

    It's a Huawei P20 Pro. I'm not 100% sure it's because of the quad Bayer sensor; it just seemed like a logical explanation.

    I also noticed that Android 12 will bring quad Bayer support. So this might be something worth exploring.

    "Quad bayer camera sensor support - Many Android devices today ship with ultra high-resolution camera sensors, typically with Quad / Nona Bayer patterns, and these offer great flexibility in terms of image quality and low-light performance. In Android 12, we’re introducing new platform APIs that let third-party apps take full advantage of these versatile sensors. "

     
  • Anonymous

    Anonymous - 2021-04-27

    If it allows more (manual) control over these sensors, that would be quite an exciting thing to explore, as it's tech that's currently only available in smartphones.

     
  • Anonymous

    Anonymous - 2021-08-09

    Hi, the imaging chip in question is the Samsung ISOCELL HM2. It has 108 million color-filtered photosites. A photosite is a tiny surface area that is sensitive to all colors of light. A color filter is a foil that passes a certain color of light and blocks other colors.

    For all imaging chips that rely on color filtering: each color-filtered photosite provides 33.3% of the tri-chromatic information that constitutes the established concept of "a pixel", which is more accurately "an RGB pixel". So, from here on I'll use CFP as short for Color Filtered Photosite.

    For every photo (or frame), this Samsung ISOCELL HM2 chip does either 4-CFP binning or 9-CFP binning. CFP binning means that the captured information in each group of physical CFPs is combined by some rule, resulting in a virtual CFP. So, CFP binning can be thought of as a change of the structure of the chip.

    When this chip, which has 108 MCFPs, does 4-CFP binning, the resulting RGB image has 27 MP (million RGB pixels). And when it does 9-CFP binning, the resulting RGB image has 12 MP.

    So, this chip is not able to output 108 MP image data (un-binned data).

    So, when the 108 million CFPs are binned by 4, the resulting CFP count is 108 MCFP / 4 = 27 MCFP, the resulting digital resolution is divided by 2 (in both X and Y directions), and the area of the virtual CFP surface is 4 times larger than the area of the physical CFP surface. And when the CFPs are binned by 9, the resulting CFP count is 108 MCFP / 9 = 12 MCFP, the resulting digital resolution is divided by 3 (in both X and Y directions), and the area of the virtual CFP surface is 9 times larger than the area of the physical CFP surface.

    Binning improves the image quality by reducing noise; there are many sources of noise. A simplified example: if we take a perfect picture of a perfect surface that reflects exactly 50% green, then all the CFPs that have a green color filter would have the same output level, say the value 127, and such image data has a low (zero) noise level. But in reality a 2 by 2 block of CFPs would have values something like (126, 120, 130, 132) because of the various noise sources; such an image is noisy. However, when those CFPs are binned, the result is (126 + 120 + 130 + 132) / 4 = 127. So the noise is averaged out (at the expense of digital resolution). And when binned by 9 (3 x 3), the noise reduction is much better, which is good for low-light situations where the absolute noise level of the system causes a relatively higher noise level in the signal. An example: a combined, more or less random, system noise level (all hardware and the light itself) that ranges from -5 units to +5 units (from CFP to CFP) is severe when the captured signal from the CFPs has a value of 10, but much less severe when the captured signal has a value of 100.
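
    To make the averaging concrete, here is a tiny Java sketch of 2x2 same-color binning on one color plane, using the example values above; the array layout is illustrative, not the chip's actual readout.

        public class BinningDemo {
            // Average each 2x2 block of an even-sized single-channel plane.
            static int[][] bin2x2(int[][] plane) {
                int h = plane.length / 2, w = plane[0].length / 2;
                int[][] out = new int[h][w];
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        out[y][x] = (plane[2 * y][2 * x] + plane[2 * y][2 * x + 1]
                                + plane[2 * y + 1][2 * x] + plane[2 * y + 1][2 * x + 1]) / 4;
                    }
                }
                return out;
            }

            public static void main(String[] args) {
                int[][] noisyGreen = { { 126, 120 }, { 130, 132 } };
                // Prints 127: the noise is averaged out, at the cost of resolution.
                System.out.println(bin2x2(noisyGreen)[0][0]);
            }
        }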

    So, CFP binning does not provide a capture of two shots simultaneously.

    Why such a huge number of extremely tiny CFPs? Because it lowers the production cost, or in other words it increases the yield in chip manufacturing. Because the CFPs are so tiny, the dead CFPs (those that are always ON or always OFF) can be filtered out in post-processing by interpolation, and this does not affect the image quality as much as it would with larger surface-area CFPs; the effect of a dead CFP also gets reduced by the binning. Note that a dead CFP affects the final RGB image data further away from its location (in the CFP matrix) because of the color interpolation that has to be applied in order to produce the final RGB pixels that make up the final image data.

     
    • Anonymous

      Anonymous - 2021-08-09

      Ah, I forgot to answer the question: Open Camera already supports, to the full extent, sensors that use the so-called quad Bayer color filter.

      The Bayer color filter (also incorrectly referred to as the Bayer color mask) is a rather old invention.

      The "quad" is a fairly new invention, it means that the imaging chip does not output the captured data from all of it's CFMs, but does a 4 CFP binning (and this chip can also do also 9 CFP binning).

       
      • Anonymous

        Anonymous - 2021-08-31

        If it did, it would support the HDR mode that quad Bayer sensors enable.

        Quad Bayer sensors have three modes; pixel binning is just one of them. Then you have the HDR mode, where you have short- and long-exposure pixels, and the full resolution mode.

        The HDR mode is probably the most interesting aspect of what quad Bayer provides, as you can take two exposures simultaneously.

         
        • Anonymous

          Anonymous - 2021-09-01

          Since the photon-capturing surface area per "pixel" is divided by two in the "HDR" mode, more than one f-stop is lost due to that. Exactly one f-stop is lost in a simple/ideal system; the 'more' is due to the various noises in a real system. Noise is considerably heavy with such an extremely small "pixel" surface area.

          In order to have WDR (Wider Dynamic Range) compared to the binned "pixel" mode, one of the frames must be overexposed so that it can capture the very dark shadows. Overexposure means wasting some data, or more accurately wasting the ability to capture as much data as the camera is technically able to capture. The other frame, optimally, is "properly" exposed so that practically nothing is overexposed. So, image-quality wise, both of the frames have less than half of the quality of the binned "pixel" mode, and some capture performance is lost in overexposing one of the frames. Still more capture performance is lost when the two frames are merged, due to the fact that they will partly overlap on the luminance scale and due to the process of merging. So, IMO there is no benefit in this respect, though I'd surely love to see such a benefit from my Xiaomi Redmi Note 9.
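
          For illustration, here is a minimal Java sketch of the kind of two-exposure merge being discussed (my assumption of a typical approach, not Open Camera's actual pipeline); the exposure ratio and saturation threshold are made-up values, and everything is assumed to be in linear sensor units.

              public class ExposureMergeDemo {
                  static final double EXPOSURE_RATIO = 4.0; // assumed: long exposure gets 4x the light
                  static final int SATURATION = 1000;       // assumed clip level on a 10-bit scale

                  // Merge one pixel pair into an estimate on the short exposure's scale.
                  static double merge(int shortVal, int longVal) {
                      if (longVal >= SATURATION) {
                          // Long exposure clipped in the highlights: trust the short frame.
                          return shortVal;
                      }
                      // Both valid: scale the long exposure down and average,
                      // which reduces noise where the two ranges overlap.
                      return (shortVal + longVal / EXPOSURE_RATIO) / 2.0;
                  }

                  public static void main(String[] args) {
                      System.out.println(merge(900, 1023)); // highlight: 900.0 (short frame only)
                      System.out.println(merge(20, 84));    // shadow: 20.5 (less noisy than 20 alone)
                  }
              }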

          To clarify the difference between HDR and WDR: "HDR" was coined by marketing people, from "High(er) Dynamic Range". But truthfully it is a question of WDR, or "Wider Dynamic Range", and the widening of the range happens at the dark end of the scale, since we always try to avoid overexposing.

           
          • Anonymous

            Anonymous - 2021-09-01

            You use the HDR mode when you have enough light. It's not in opposition to binned pixels; the idea behind quad-pixel sensors is to be able to use them efficiently in different lighting scenarios. And the HDR mode can also be used in video.

            You use binned pixels in low-light scenarios.

            WDR is just another marketing term; whether you call it high or wide doesn't really matter. I assume it was invented because HDR is used for displays. You still have to combine the two exposures, so the output is not exactly just a higher or wider dynamic range.

            But there is also a fourth mode that would be interesting on these tiny-pixel sensors, something that I think is called "superpixels". Instead of binning same-color pixels, you would collapse each RGGB quad into one pixel, thus getting true RGB values for each pixel.

            This would require you to first produce a raw file and then de-Bayer it in this way (see the sketch at the end of this comment), as it's not directly supported by these sensors.

            As the pixels are now even under 0.65 microns, which is impossible for any optics to resolve, this approach would make perfect sense, and would produce low noise, good color accuracy and higher detail.

            I'm not sure why it hasn't been implemented on these sensors. Maybe it's not that marketable.

            Some raw conversion software does support this mode, like dcraw (half-size mode). It would make the most sense on sensors whose pixels are too small for the optics to resolve.
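
            Here is a minimal Java sketch of that superpixel idea, similar in spirit to dcraw's half-size mode: each RGGB quad collapses into one RGB pixel with no interpolation. The RGGB layout (R at the top-left of each quad) is an assumption; real sensors vary.

                public class SuperpixelDemo {
                    // raw: Bayer mosaic with R at (0,0), G at (0,1) and (1,0), B at (1,1)
                    // in each 2x2 quad; returns an RGB image at half resolution.
                    static int[][][] toSuperpixels(int[][] raw) {
                        int h = raw.length / 2, w = raw[0].length / 2;
                        int[][][] rgb = new int[h][w][3];
                        for (int y = 0; y < h; y++) {
                            for (int x = 0; x < w; x++) {
                                rgb[y][x][0] = raw[2 * y][2 * x];                                   // R
                                rgb[y][x][1] = (raw[2 * y][2 * x + 1] + raw[2 * y + 1][2 * x]) / 2; // average the two G
                                rgb[y][x][2] = raw[2 * y + 1][2 * x + 1];                           // B
                            }
                        }
                        return rgb;
                    }
                }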

             
      • Anonymous

        Anonymous - 2021-08-31

        Here is how the HDR mode works:

         
  • Anonymous

    Anonymous - 2021-08-31

     
  • Anonymous

    Anonymous - 2023-08-16

    Out of curiosity, is there any interest in making this a feature? It could potentially help with noise reduction as mentioned.

     
