I'm trying to add JPEG XL encoding to OpenCamera. From what I understand, the output of the camera, when not in RAW mode, is compressed as JPEG at the quality specified by the API user. When the saveSingleImageNow method compresses the bitmap via bitmap.compress, is the resulting image actually second generation?
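To make the "second generation" concern concrete, here is a desktop-Java sketch of the same decode/re-encode round trip, using javax.imageio in place of Android's BitmapFactory/Bitmap.compress (which aren't available off-device). The gradient scene and quality value are made up for illustration:

```java
import javax.imageio.IIOImage;
import javax.imageio.ImageIO;
import javax.imageio.ImageWriteParam;
import javax.imageio.ImageWriter;
import javax.imageio.stream.MemoryCacheImageOutputStream;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

public class GenerationLoss {
    // Encode a BufferedImage to JPEG bytes at the given quality (0..1).
    static byte[] encodeJpeg(BufferedImage img, float quality) throws Exception {
        ImageWriter writer = ImageIO.getImageWritersByFormatName("jpeg").next();
        ImageWriteParam param = writer.getDefaultWriteParam();
        param.setCompressionMode(ImageWriteParam.MODE_EXPLICIT);
        param.setCompressionQuality(quality);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writer.setOutput(new MemoryCacheImageOutputStream(out));
        writer.write(null, new IIOImage(img, null, null), param);
        writer.dispose();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // A synthetic gradient stands in for the camera output.
        BufferedImage scene = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 64; y++)
            for (int x = 0; x < 64; x++)
                scene.setRGB(x, y, (x * 4) << 16 | (y * 4) << 8 | ((x + y) * 2));

        // First generation: analogous to the JPEG the camera HAL hands back.
        byte[] gen1 = encodeJpeg(scene, 0.9f);
        // Decode and re-encode: analogous to bitmap.compress on the saved image.
        BufferedImage decoded = ImageIO.read(new ByteArrayInputStream(gen1));
        byte[] gen2 = encodeJpeg(decoded, 0.9f);

        System.out.println("gen1 bytes: " + gen1.length);
        System.out.println("gen2 bytes: " + gen2.length);
        System.out.println("byte-identical: " + Arrays.equals(gen1, gen2));
    }
}
```

The second encode operates on already-quantised pixel data, so any loss it introduces compounds with the first generation's.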
Indeed there is a double save/compression. MyApplicationInterface.getImageQualityPref() will set the first request to 100% when a non-JPEG format (WebP, PNG) is used (which I know is still lossy, but it tries to minimise the first-generation loss).
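The behaviour described can be sketched as follows; the helper name and signature are illustrative, not Open Camera's actual code:

```java
public class QualityPref {
    // Hypothetical helper: request the first (in-camera) JPEG at 100%
    // whenever the final save format is not JPEG, so the unavoidable
    // first-generation loss is as small as possible.
    static int firstGenerationQuality(String saveFormat, int userQuality) {
        return "jpeg".equals(saveFormat) ? userQuality : 100;
    }

    public static void main(String[] args) {
        System.out.println(firstGenerationQuality("jpeg", 90)); // 90
        System.out.println(firstGenerationQuality("webp", 90)); // 100
    }
}
```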
Although JPEG XL could be supported similarly to WebP/PNG, I'm a bit wary of adding it at the moment: firstly because it won't come direct from the camera, and secondly because we'd lose support for UltraHDR.
Thanks for the reply; I'm unfamiliar with the Android Camera(2) API. When it's not RAW mode, is the data returned from the camera a JPEG codestream or RGB? Maybe I can avoid a second colour transform by re-using JPEG's YCbCr data to feed the encoder of some format, idk.
For non-RAW mode I get the data from a JPEG stream. Android does support returning other formats (e.g. Camera2 supports YUV_420_888); I have thought about converting from that instead, but have not yet implemented this.
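For anyone exploring the YUV_420_888 route: the usual first step is packing the three planes into NV21, which android.graphics.YuvImage can then compress. The sketch below shows only the plane-interleaving logic as plain Java (so it can run off-device); it assumes tight row strides, whereas real Camera2 Images expose rowStride/pixelStride values that must be honoured:

```java
public class Yuv {
    // Pack separate Y, U, V planes into NV21 (full-res Y, then interleaved
    // V,U pairs at quarter resolution). chromaPixelStride is 1 for planar
    // chroma, 2 for semi-planar as many devices report via Camera2.
    static byte[] toNv21(byte[] y, byte[] u, byte[] v,
                         int width, int height, int chromaPixelStride) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        // Luma plane copies over unchanged.
        System.arraycopy(y, 0, nv21, 0, width * height);
        int chromaPixels = (width / 2) * (height / 2);
        int base = width * height;
        for (int i = 0; i < chromaPixels; i++) {
            nv21[base + 2 * i]     = v[i * chromaPixelStride]; // V first in NV21
            nv21[base + 2 * i + 1] = u[i * chromaPixelStride];
        }
        return nv21;
    }

    public static void main(String[] args) {
        // Tiny synthetic 4x4 frame: Y=1, U=2, V=3.
        byte[] y = new byte[16], u = new byte[4], v = new byte[4];
        java.util.Arrays.fill(y, (byte) 1);
        java.util.Arrays.fill(u, (byte) 2);
        java.util.Arrays.fill(v, (byte) 3);
        byte[] nv21 = Yuv.toNv21(y, u, v, 4, 4, 1);
        System.out.println(nv21.length); // 24 = 16 luma + 8 interleaved chroma
    }
}
```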
Oh really? I would be interested in seeing an untampered bitmap. Have you found any sensor you can get an uncompressed bitmap from?
Are you trying to highlight that the dev does not know what he is doing? Because it is obvious that it would be.
Please keep things friendly :)
Nah, it's just a normal question like any other in the open-source world; if I'm trying to highlight something, it would be my limited knowledge of Android :P