Hello Mark, hello OpenCamera community!
I'm an amateur photographer and I recently decided to swap the Mi A2 stock camera - which isn't exactly famous for its features - for OpenCamera. As I worked my way through the various settings and options, I noticed that the app does a much better job of focusing on distant objects when "Face Detection" mode is active than when using the built-in touch-to-focus mode.
I've attached a couple of photos to demonstrate what I mean. The first was taken with touch-to-focus (locked focus - which should mean the camera focuses once and then doesn't bother refocusing before snapping - the area I touched is the big antenna in the background); the second (the highly detailed one) was snapped by simply turning on face detection and centering the same antenna on the phone's screen. The distance between the lens and the subject is virtually the same, and the pictures were captured only moments apart.
While I must say I love the second picture (I wasn't aware a $200 phone could manage such a thing) and I'm not complaining about the quality at all (which is why this isn't a bug report), I am curious about the logic behind it. More specifically: how does the app behave when face detection is on and no face is detected? I tried to replicate the results by playing around with the focus options in OC, but there was no other way I could get such a crisp image: not by setting focus to infinity, nor by playing with the focus distance slider in manual mode.
So, to sum things up, how is this possible?
Thanks in advance for the response!