I’ve been reading about the Deep Fusion feature for the iPhone 11 models, which is about to graduate from public beta, and it sounds like it has great potential for stop-motion stills, since it combines multiple exposures and then sharpens the result (I’ve sketched the rough idea in code after the excerpt):
"How Deep Fusion technology works
Both Smart HDR and Deep Fusion capture multiple images, and the iPhone picks a reference photo chosen to minimise motion blur as much as possible. It then combines three standard exposures and one long exposure into a single “synthetic long” photo. This is where Deep Fusion comes into play: it breaks down the reference image and the synthetic long photo into multiple regions, identifying skies, walls, textures and fine details (like hair).
After this quick breakdown, the software does a pixel-by-pixel analysis of the two photos – that’s 24 million pixels in total. The results of that analysis determine which pixels to use, and how to optimise them, in building the final image. All of this is captured and processed in the background once the iPhone’s A13 processor has a chance.
Sounds complex? Well, Apple says the entire process takes a second or so, so you don’t have to wait before taking the next shot.
There are some gotchas. You can’t use this with your phone’s ultra-wide angle lens, as hinted earlier, and bright telephoto shots will revert to Smart HDR to maintain better exposure. The capture process is quick, but it’ll take a second for your iPhone to process the image at full quality. And yes, you absolutely need a 2019 iPhone for this to work – it’s dependent on the A13 chip."
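Out of curiosity, here’s a loose conceptual sketch in Swift of the fusion idea as the article describes it. To be clear, Apple’s actual pipeline is proprietary and runs on the A13, so this is just my reading of the steps: every type and function name here (Frame, sharpness, pickReference, syntheticLong, fuse) is hypothetical, and the sharpness and detail heuristics are crude stand-ins for whatever Apple really does.

```swift
// Purely conceptual sketch of the fusion idea described above.
// Nothing here reflects Apple's real (proprietary) Deep Fusion code;
// all names and heuristics are hypothetical.

// A frame is just a flat array of grayscale values for simplicity.
struct Frame {
    let pixels: [Double]   // luminance values in 0...1
    let width: Int
    let height: Int
}

// Rough sharpness score: mean absolute difference between horizontal
// neighbours. A motion-blurred frame has smaller local differences.
func sharpness(of frame: Frame) -> Double {
    var total = 0.0
    for y in 0..<frame.height {
        for x in 1..<frame.width {
            let i = y * frame.width + x
            total += abs(frame.pixels[i] - frame.pixels[i - 1])
        }
    }
    return total / Double(frame.pixels.count)
}

// Step 1: pick the sharpest frame as the reference (the "stop motion
// blur" selection the article describes). Assumes a non-empty array.
func pickReference(from frames: [Frame]) -> Frame {
    frames.max(by: { sharpness(of: $0) < sharpness(of: $1) })!
}

// Step 2: build a "synthetic long" by averaging several exposures,
// trading fine detail for lower noise.
func syntheticLong(from frames: [Frame]) -> Frame {
    var sum = [Double](repeating: 0, count: frames[0].pixels.count)
    for frame in frames {
        for i in sum.indices { sum[i] += frame.pixels[i] }
    }
    let count = Double(frames.count)
    return Frame(pixels: sum.map { $0 / count },
                 width: frames[0].width, height: frames[0].height)
}

// Step 3: per-pixel blend. Where the reference shows strong local
// detail (hair, texture), favour it; in flat regions (sky, walls),
// favour the cleaner synthetic long.
func fuse(reference: Frame, long: Frame) -> Frame {
    var out = reference.pixels
    for y in 0..<reference.height {
        for x in 1..<reference.width {
            let i = y * reference.width + x
            // Local contrast as a crude stand-in for the region and
            // detail classification the article mentions.
            let detail = min(1.0, abs(reference.pixels[i] - reference.pixels[i - 1]) * 10)
            out[i] = detail * reference.pixels[i] + (1 - detail) * long.pixels[i]
        }
    }
    return Frame(pixels: out, width: reference.width, height: reference.height)
}
```

The interesting trade-off the article hints at is in that last step: detailed regions are pulled from the sharp reference frame, while flat regions come from the lower-noise synthetic long, which is presumably why it’s so good at things like hair and fabric.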
There has been quite a lot of discussion about Apple choosing to develop its own chips, and the A13 is a good example of why: it powers photo enhancements like Deep Fusion and Smart HDR that are keeping iPhone cameras well ahead of the competition.
Though I’m not going to upgrade my 8+ until 5G is well established, I am quite tempted by stuff like this.