-
A couple of Android phones have experimented with this, although I'm not sure any have been commercially released yet.
https://www.theverge.com/2019/6/26/18759380/under-display-selfie-camera-first-oppo-announcement
-
I think it will be done differently than this.
The smaller the optics, the less light they gather... hence the drive towards having multiple optics of differing capabilities (iPhone, Pixel 4, etc.) and blending their output in software to create a single image.
Once you've got the software to blend information from multiple sensors, you can iterate on that towards a greater number of smaller sensors... i.e. imagine a grid of a few hundred micro-sensors that eventually become small enough to be invisible.
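A minimal sketch of that blending step, assuming nothing about any vendor's actual pipeline: if a few hundred co-registered micro-sensors each capture a small, noisy view of the same scene, even a plain weighted average recovers most of the lost signal-to-noise (a real pipeline would also need alignment, parallax handling and demosaicing):

    import numpy as np

    def merge_tiles(tiles, weights=None):
        # tiles: list of HxW float arrays from co-registered micro-sensors
        # weights: optional per-tile confidence (e.g. exposure or occlusion)
        stack = np.stack(tiles, axis=0)                  # (N, H, W)
        if weights is None:
            weights = np.ones(len(tiles))
        w = np.asarray(weights, dtype=float)[:, None, None]
        # Weighted mean: shot noise falls roughly as 1/sqrt(N) for equal weights.
        return (stack * w).sum(axis=0) / w.sum()

    # Toy usage: 100 micro-sensors, each a noisy view of the same scene.
    rng = np.random.default_rng(0)
    scene = rng.random((480, 640))
    tiles = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(100)]
    merged = merge_tiles(tiles)
    print(np.abs(tiles[0] - scene).mean(), np.abs(merged - scene).mean())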
Apple are ahead here, as their XDR displays are lit by individual LEDs and have consistent lighting across the panel, so sensors placed behind it would all have to compensate for light in the same way, which makes the software easier. Edge-lit panels (almost every other screen in the world) have uneven light, so accounting for the differences at each point in the panel is really hard.
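Roughly what "compensate in the same way" vs "account for differences at each point" means in code; the gain values and falloff curve below are invented purely for illustration:

    import numpy as np

    def compensate(raw, gain_map):
        # raw: HxW reading (0..1) from a behind-the-panel sensor
        # gain_map: per-pixel correction for light lost through the display
        return np.clip(raw * gain_map, 0.0, 1.0)

    # Uniformly lit panel: one constant works everywhere.
    uniform_gain = np.full((480, 640), 1.8)

    # Edge-lit panel: transmission varies across the panel, so the map has
    # to be calibrated per position (made-up horizontal falloff here).
    falloff = 0.6 + 0.4 * np.linspace(0.0, 1.0, 640)
    edge_lit_gain = np.broadcast_to(1.8 / falloff, (480, 640))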
We're still at least 5 years away from losing mobile-phone-sized optical elements... but Apple are on the right path to achieve it, and thus to make a major hardware leap that others will find difficult or impossible to follow.
ah, that is an excellent idea.
I wonder if/when they'll actually place a camera behind the display itself, looking through the display. This seems impossible now with a single optical element, but if the camera were an array of sensors with micro-optical elements, it could be distributed over the surface of the display and software would produce the image... looking at the screen would be looking at the subject.