At its iPhone 11 launch event, Apple showcased a new camera technology called Deep Fusion. Powered by the Neural Engine of the A13 Bionic chip, Deep Fusion leverages the latest machine learning algorithms to deliver high-quality images. The technology now appears ready for a wide rollout, as it has made its way to the latest iOS 13 developer beta.
Apple’s new iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max rely on Smart HDR to combine multiple images and deliver a photo that’s rich in detail and dynamic range. The technology also allows the camera to automatically detect when it’s too dark and switch on Night Mode. Deep Fusion is said to take this a step further: Apple says the feature processes photos pixel by pixel, optimising texture and enhancing detail in each image.
The Verge further explains how the new Deep Fusion camera technology works. The first step involves capturing four frames at a fast shutter speed to freeze motion in the picture; this happens before you even tap the shutter button. Once you’ve tapped the shutter button, the camera takes a long-exposure photo to capture additional detail.
Next, unlike Smart HDR, Apple creates a “synthetic long” by combining three regular frames with the long exposure. Deep Fusion then kicks in to merge the short-exposure image with this synthetic long shot, processing the result pixel by pixel to increase detail before producing the final image.
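Apple’s actual Deep Fusion pipeline is proprietary, but the steps described above can be sketched in a toy form. The following Python snippet is purely illustrative: the frame data is random stand-in imagery, and the blending weights and contrast-based “detail” measure are assumptions, not Apple’s algorithm.

```python
import numpy as np

def capture_frames(rng, shape=(4, 4), n_short=4):
    """Simulate the buffered short-exposure frames plus one long exposure.
    (Random stand-ins for sensor data; real capture is proprietary.)"""
    short = [rng.random(shape) for _ in range(n_short)]
    long_exposure = rng.random(shape)
    return short, long_exposure

def synthetic_long(short_frames, long_exposure):
    """Blend three regular frames with the long exposure into a
    'synthetic long', mirroring the step described in the article."""
    stack = np.stack(short_frames[:3] + [long_exposure])
    return stack.mean(axis=0)

def fuse(short_frame, synth_long, detail_weight=0.6):
    """Per-pixel merge: favour the sharp short exposure where local
    contrast (a crude stand-in for 'detail') is high, and the
    synthetic long elsewhere. Weights are illustrative guesses."""
    detail = np.abs(short_frame - short_frame.mean())
    w = detail_weight * detail / (detail.max() + 1e-9)
    return w * short_frame + (1 - w) * synth_long

# Run the toy pipeline end to end.
rng = np.random.default_rng(0)
short, long_exp = capture_frames(rng)
result = fuse(short[0], synthetic_long(short, long_exp))
```

The key design point the article hints at is that the merge is spatially varying: each pixel gets its own blend of the sharp and detailed exposures, rather than a single global mix as in simpler HDR stacking.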
According to CNET, Deep Fusion processing takes roughly a second to complete. The technology can also identify in-photo content such as skies and textures, and even finer details like hair.
While Deep Fusion won’t work in burst mode, it is likely to come in handy in low-light and medium-light settings. In brighter settings, the iPhone 11 and iPhone 11 Pro will continue to use Smart HDR. The telephoto lens will rely mostly on the new technology, whereas the ultrawide camera will stick with Smart HDR.
Source: Hindustan Times