Seeing double: why Apple would want dual cameras on the iPhone 7

Dual cameras

Phones with more than one camera have been around for years, performing a wide variety of functions, a lot of them useless.

However, with the iPhone 7 expected to have a rear dual-camera array (well, the larger iPhone 7 Plus anyway), the practice has suddenly been legitimised in the eyes of many smartphone fans – and for good reason. We're about to see a whole new era of mobile photography pick up speed.

But let's look more closely. While Apple might popularize this feature, it's hardly new, and others have been using dual sensors for years. So what, exactly, are dual-camera setups for? And are they worth putting on your list of most-wanted phone features?

The bad old days of 3D cameras

Before we look at what the iPhone 7 is up to, let's see what dual-camera phones have managed so far.

Pore back through the annals of smartphone history and you'll find four main reasons to bung multiple cameras on the back of a phone. The first is to take "3D photos".

The HTC Evo 3D took this idea to its logical conclusion all the way back in 2011, with both a 3D camera setup and a 3D screen. While a neat concept, it was quite rightly regarded as a gimmick whose appeal was short-lived.

Several similar LG Optimus 3D camera phones were also sprinkled across the world in 2012 to mimic the trick (as well as a tablet from LG), but manufacturers soon defaulted to simply increasing megapixel counts each year instead. Multi-camera phones went back into hibernation.

A remarkably similar dual-sensor concept was behind the HTC One M8, which became the next dual-camera sensation in 2014. It inspired a few imitators including the Honor 6 Plus (2014) and much more recent Xiaomi Redmi Pro (2016).

These dual-lens cameras see depth in much the same way as our eyes do: the differences between what the two camera 'eyes' see are used to calculate a depth map of the scene.

Instead of using this information to produce a 3D effect, phones like the HTC One M8 use it to simulate variable aperture, which in a dedicated camera means altering the size of the hole in the lens that lets light through. Aside from a couple of old Nokia phones, every mobile camera has a completely fixed aperture, so we have to fake it to make it.

With a very wide aperture you can make an image's background appear dramatically blurred, leaving only your subject in focus. It's a striking effect, and one that becomes more pronounced as the camera sensor gets larger.

An APS-C sensor has about 20x the surface area of an average mobile phone sensor. A full-frame sensor is about 50x larger. With software a phone can attempt to simulate the sort of blurring you'll get with one of these large-sensor cameras, rather than that of a small-sensor phone.
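To make the idea concrete, here's a minimal sketch of the general technique using OpenCV's off-the-shelf block matcher: compute a disparity map from the two views, treat low-disparity pixels as background, and blur only those. The filenames, threshold and blur size are purely illustrative assumptions, and a real phone pipeline is far more sophisticated than this.

```python
# A minimal sketch of depth-from-stereo plus fake background blur.
# Filenames, disparity threshold and blur size are illustrative assumptions;
# a real dual-camera pipeline does far more work than this.
import cv2
import numpy as np

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # view from camera one
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)  # view from camera two
colour = cv2.imread("left.jpg")                        # full-colour frame to process

# Compare the two views: the bigger the disparity, the closer the subject
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Treat low-disparity (distant) pixels as background and blur only those
background = disparity < 10.0
blurred = cv2.GaussianBlur(colour, (31, 31), 0)
fake_bokeh = np.where(background[..., None], blurred, colour)

cv2.imwrite("fake_bokeh.jpg", fake_bokeh)
```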

HTC Evo 3D

There's still something to love about this phone

It gives photos a professional, arty look.

While the marketing and application of the feature were different from what we saw in the HTC Evo 3D, the theory behind both is comparable.

The big issue with early depth cameras was that their processing was spotty. Try to shoot an irregularly shaped object and the shot would end up looking like a ragged mess. That the second, depth-calculating sensor was generally of much lower quality didn't help either. Like 3D cameras, they came to be seen as a gimmick.

Modern dual-camera solutions

Huawei currently offers a dual-lens camera in phones like the Huawei P9, though, and has significantly improved the quality of the processing. However, depth photography is no longer the focus.

Huawei uses a more interesting application of a multi-sensor camera, and it's one that Apple's iPhone 7 camera will likely hinge on.

Huawei P9

The Huawei P9 packs dual cameras engineered in association with Leica

We're talking about 'computational photography': using multiple tiny sensors combined with super-smart processing to overcome the image quality limitations of a small phone camera sensor. It's one of the crucial technologies needed to keep smartphone image quality progressing.

The phones with the best low-light performance at the moment use optical image stabilisation to improve the quality of their night shots. This involves using a little motor that ever-so-slightly moves the camera to compensate for the natural movements of your hand as you shoot, letting the shutter stay open for longer without making shots blurry.

More light on the sensor is good news. The issue with pushing for longer exposure times, however, is that they're no good for shooting moving objects. OIS can compensate for your movements, not those of your subject, so the technique can only be pushed so far, even with the best stabilisation in the world.

We need another solution, and this dual camera setup may be it.

Meet LinX

We don't have to look far to see how Apple may use dual cameras: in April 2015 it bought LinX, a company working in exactly this area.

Before the buyout, LinX detailed its plans for multi-sensor modules, four designs in total. The first is what we saw in the HTC One M8: two colour camera sensors used to create a depth map. Yawn.

Next up is the one most likely to form the basis of the iPhone 7 Plus dual camera: a pair of cameras where one sensor is mono (capturing only brightness, in black and white) and the other colour. It's exactly what Huawei uses in its P9.

LinX claimed this setup "has improved low light performance and general image quality" and comes at "lower cost than single-aperture cameras of the same resolution".

The Bayer filter effect

The image quality benefits rely on merging information from the colour and mono cameras. This in turn hinges on the way a colour camera splits light into different shades before it reaches the sensor, using something called a Bayer filter.

A standard camera sensor is made up of millions of little 'photosite' pixels, points on the camera sensor that are stimulated by light.

One M8

The HTC One M8 had one of the most innovative cameras of the year

Just as your LCD TV is made up of red, green and blue sub-pixels that fire at different intensities to deliver any colour of the rainbow, a camera sensor has red, green and blue-sensing sub-pixels used to determine the colour of each pixel. The Bayer filter makes sure only light of the right colour hits each of these sub-pixels.

The issue comes when these shades are split up: green and blue light heading for the 'red' sub-pixel is rejected, green and red light is rejected from the 'blue' sub-pixel, and so on.

This means a lot of light is wasted by the Bayer filter, and it's exactly this waste that the monochrome sensor avoids. It doesn't have a Bayer filter, because it doesn't need to determine colour. With more light hitting the sensor, its low-light image quality improves.
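As a back-of-the-envelope illustration, under the simplifying assumption that each photosite passes only its own colour band and rejects the other two, roughly two thirds of the incoming light never makes it past the filter:

```python
# Rough illustration of Bayer filter light loss, under the naive assumption that
# each photosite passes only its own colour band and rejects the other two.
import numpy as np

incident = np.ones((2, 2, 3))        # one RGGB tile, lit with equal red, green and blue
bayer_mask = np.zeros((2, 2, 3))
bayer_mask[0, 0, 0] = 1              # red photosite
bayer_mask[0, 1, 1] = 1              # green photosite
bayer_mask[1, 0, 1] = 1              # green photosite
bayer_mask[1, 1, 2] = 1              # blue photosite

captured = (incident * bayer_mask).sum()
print(captured / incident.sum())     # ~0.33: roughly two thirds of the light is rejected
```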

So in a dual-camera setup the monochrome sensor increases dynamic range and detail and reduces noise, while the other sensor fills in the colour.

After that the camera needs software smarts to mesh the two sensors' information together, because their views of the world are ever-so-slightly different. The closer the subject, the more pronounced that difference becomes.
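One simple way to picture the fusion step, assuming the two frames have already been aligned (which is the hard part in practice), is to keep the colour camera's chroma and swap in the cleaner luminance from the mono camera. This is only a toy sketch, not Huawei's or Apple's actual pipeline:

```python
# Toy sketch: keep the colour frame's chroma, borrow luminance from the mono frame.
# Assumes both frames are the same size and already aligned; real pipelines must solve that first.
import cv2

colour = cv2.imread("colour_frame.jpg")                    # shot from the Bayer sensor
mono = cv2.imread("mono_frame.jpg", cv2.IMREAD_GRAYSCALE)  # shot from the filter-free sensor

ycrcb = cv2.cvtColor(colour, cv2.COLOR_BGR2YCrCb)
_, cr, cb = cv2.split(ycrcb)                               # discard the colour camera's luminance

fused = cv2.cvtColor(cv2.merge([mono, cr, cb]), cv2.COLOR_YCrCb2BGR)
cv2.imwrite("fused.jpg", fused)
```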

Huawei vs Apple

You may be left wondering: if this tech is so great then why isn't everyone saying the Huawei P9 has the best camera in the world?

It's because having decent tech doesn't instantly guarantee a great picture: Huawei's handling of the design produces slightly worse low-light shots than something like the Samsung Galaxy S7.

It comes down to a combination of factors: a slower lens, smaller photosites and the faster shutter speeds demanded by the lack of optical image stabilisation together outweigh the benefits of the mono sensor's extra sensitivity. The Huawei P9 has a great, but not world-beating, camera.

LinX to the future

There's more to come from multi-sensor arrays too, as LinX documented a couple of 'next generation' multi-camera designs.

Perhaps the most interesting is a 2x2 array of cameras, using similar ideas to the single strip of two cameras, but with more eyes. This could then be used to increase dynamic range further, or simply to reduce the sensor errors that create image noise in the first place.
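As a rough illustration of the noise argument: averaging N aligned frames cuts random sensor noise by about the square root of N, so four cameras firing at once could, in principle, halve it. The snippet below is just that back-of-the-envelope idea, with placeholder filenames:

```python
# Averaging four aligned frames reduces random noise by roughly a factor of two.
# Filenames are placeholders; real camera arrays also have to align and deghost the frames.
import cv2
import numpy as np

frames = [cv2.imread(f"cam{i}.jpg").astype(np.float32) for i in range(4)]
stacked = np.mean(frames, axis=0)
cv2.imwrite("stacked.jpg", stacked.astype(np.uint8))
```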

The final LinX camera design sees two small sensors acting as a rangefinder for a larger image sensor. Because the paired sensors can work out how far away a subject is, the autofocus can head straight to the right point rather than hunting for it, a good alternative to the 'phase detection' focusing the iPhone currently uses.
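The geometry behind that rangefinder trick is the standard stereo relation: subject distance equals focal length times the baseline between the lenses, divided by the measured disparity. The numbers below are invented purely for illustration:

```python
# Standard stereo rangefinder relation: Z = f * B / d.
# All values below are invented for illustration, not taken from any real phone.
def subject_distance_mm(focal_length_px: float, baseline_mm: float, disparity_px: float) -> float:
    return focal_length_px * baseline_mm / disparity_px

# e.g. a 1,500-pixel focal length, lenses 10mm apart, 30 pixels of measured disparity
print(subject_distance_mm(1500, 10, 30))  # -> 500.0, so drive the focus motor to ~0.5m
```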

There's another important multi-camera use out there too: cameras with different focal lengths can emulate an optical zoom and use software smarts to stitch their output together.

The LG G5 has something like this, but instead of zooming in you zoom out, switching to the secondary wide-angle camera while shooting stills.

Seeing the Light

There's already a device that demonstrates the kind of progress we're likely to see in mobile phones over the next five years.

Light L16

The Light L16 could be the future of smartphones - loads of sensors packed on the rear

It's called the Light L16, an Android-based standalone camera with 16 lenses of three different focal lengths, combining to give a 35-150mm-equivalent zoom and image quality that Light says can rival a full-frame DSLR, all in a box only an inch thick.

This is a new device and we're waiting to try it out to see if it can really fulfil those huge claims. Whether it impresses the camera crowd will be a good indication of how effective computational photography may be for phones in the short to mid-term.

However, with no recent word of the L16's promised 2016 release other than it now being "sold out" until 2017, the iPhone 7 is where we're going to look to see if multiple sensors can really lay claim to creating some of the best photos in the world from a pocketable device.

Andrew Williams

Andrew is a freelance journalist and has been writing and editing for some of the UK's top tech and lifestyle publications including TrustedReviews, Stuff, T3, TechRadar, Lifehacker and others.