SCIENCE BEHIND SUCCESS OF SMARTPHONE CAMERAS


Since the advent of digital photography, digital cameras, especially DSLRs, have delivered exceptional image quality. However, in the past decade, worldwide camera shipments fell by 84 percent, from 121 million units in 2010 to just 19 million in 2018, as per a Japanese study. The single most important factor behind the slump is the rise of the smartphone camera.

Original equipment manufacturers (OEMs) have made some notable advances in camera tech, fitting incredibly innovative features into a form factor that one would normally not consider capable. They have introduced features like 50 MP sensors and 10x optical zoom (hitting a physics-defying 100x digital zoom in some cases). With smartphones loaded with multiple cameras, sensors, and advanced software technologies, the science behind how these cameras work is exciting.

Challenges and working of smartphone cameras

Smartphone cameras essentially work like any other camera. The photographer focuses the lens on the object. The light reflected from the object enters the lens, the quantity of which is regulated by the aperture, the size of the opening that allows light in. As the light moves towards the sensor, the shutter regulates how long the sensor is exposed to it. The light finally reaches the sensor, which captures the image, and the hardware then processes and records it. However, smartphone cameras deal with severe space limitations: fitting large sensors and complicated lens arrangements into a sleek smartphone is the biggest challenge.

So, how do these cameras deliver high-quality images?

Focusing right

To click a quality image, the focus has to be right. The lens of a smartphone camera needs to move to the correct position for that perfect shot. Various technologies have evolved to help cameras autofocus better. Contrast detect is the oldest autofocus technique. It relies on the fact that the intensity difference between adjacent pixels rises as the object comes into focus, so the lens is moved until the contrast amongst pixels reaches its maximum.
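
As an illustration, here is a minimal Python sketch of the contrast-detect idea, assuming a hypothetical `capture_at` callback that returns a grayscale frame for a given lens position:

```python
import numpy as np

def contrast_score(image: np.ndarray) -> float:
    """Score sharpness as the mean squared intensity difference
    between horizontally and vertically adjacent pixels."""
    img = image.astype(float)
    dx = np.diff(img, axis=1)  # differences between horizontal neighbours
    dy = np.diff(img, axis=0)  # differences between vertical neighbours
    return float((dx ** 2).mean() + (dy ** 2).mean())

def contrast_detect_af(capture_at, lens_positions):
    """Sweep the lens positions, score the frame captured at each,
    and return the position with the highest contrast."""
    scores = {pos: contrast_score(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get)
```

In practice the sweep is a hill-climbing search rather than an exhaustive one, which is why contrast-detect systems visibly "hunt" back and forth before locking focus.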

The popular phase-detect autofocus feature, on the other hand, has dedicated photodiodes in the sensor. The method involves splitting the incoming light from the object and comparing the pairs of images this generates. Earlier, only a handful of dedicated phase-detect pixels could be used. In recent launches, the technology has improved further, utilising all sub-pixels for dual phase detection and therefore employing every pixel for autofocusing.

Dual-pixel autofocus uses a higher number of focus points across the sensor. Each pixel of the sensor comprises two photodiodes, which can operate together or separately. As light rays pass through the lens and hit the diodes, the processor analyses the signal from each diode and calculates the phase differences to achieve focus in just milliseconds.
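
A rough sketch of that measurement, assuming two one-dimensional intensity arrays read from the left and right photodiodes of one scan line (a real implementation would use a windowed correlation rather than the wrap-around `np.roll` shortcut used here):

```python
import numpy as np

def phase_difference(left: np.ndarray, right: np.ndarray, max_shift: int = 16) -> int:
    """Estimate the shift between the left- and right-photodiode signals
    by maximising their correlation. A shift of zero means the two views
    align, i.e. the subject is in focus; the sign and size of the shift
    tell the lens which way, and how far, to move."""
    left = left.astype(float) - left.mean()
    right = right.astype(float) - right.mean()
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        score = float(np.dot(left, np.roll(right, shift)))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```

Because the shift directly encodes both the direction and the magnitude of the defocus, the lens can jump straight to the right position instead of hunting, which is what makes phase detection so fast.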

Multi-lens ecosystem

Camera basics: Focal length, aperture, and shutter speeds

Before understanding the importance of a multiple-lens ecosystem in smartphone cameras, it is essential to understand three important factors: focal length, aperture size, and shutter speed. To capture a variety of photos, from close-ups of distant objects to wide-angle shots, zoom is one of the most important features. A longer focal length provides a larger zoom factor, while a shorter focal length provides a wider, “zoomed-out” view.
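
For illustration, the zoom factor is simply a ratio of focal lengths; the 24 mm reference below is an assumed, typical main-camera equivalent, not a figure from any particular phone:

```python
def zoom_factor(focal_length_mm: float, reference_mm: float = 24.0) -> float:
    """Zoom factor of a lens relative to the main (reference) camera."""
    return focal_length_mm / reference_mm

print(zoom_factor(240.0))  # a 240 mm-equivalent periscope lens -> 10.0x zoom
```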

Aperture plays a critical role: it regulates the amount of light that enters the camera and controls how much of the view appears in focus, also known as the depth of field. Wide apertures give you bokeh, that dreamy, out-of-focus background blur that highlights the subject, while narrower apertures assist scenic photography or larger shots by keeping more of the background in focus. The f-stop ratio, the focal length of the lens divided by the aperture diameter, is an indicator of the amount of light hitting the camera sensor.
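
A small sketch of that relationship: the light reaching the sensor scales with the aperture area, i.e. inversely with the square of the f-number.

```python
def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-stop = focal length divided by aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def relative_light(f_stop: float) -> float:
    """Light reaching the sensor is inversely proportional
    to the square of the f-number."""
    return 1.0 / f_stop ** 2

# An f/1.8 lens admits roughly 2.4x the light of an f/2.8 lens.
print(relative_light(1.8) / relative_light(2.8))  # ~2.42
```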

The right shutter speed is important as it determines the right exposure. Too slow a speed results in blurry images, while too fast a speed results in darker photographs. Smartphones are equipped with e-shutters, which instruct the sensor to record the object for a particular duration.

The duration of the shutter determines the exposure time, which matters in low-light scenarios and when recording moving subjects. A long exposure means more light, but it also means that moving subjects come out blurry. Another issue with long exposures is camera shake, where the movement of your hand can result in a blurry shot. This can be countered by optical image stabilisation (OIS) mechanisms, which move the sensor or lens elements to compensate for small movements.
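
A worked example of the aperture/shutter trade-off, using the standard exposure-value formula EV = log2(N^2 / t), where N is the f-number and t the shutter time in seconds:

```python
import math

def exposure_value(f_stop: float, shutter_seconds: float) -> float:
    """EV = log2(N^2 / t): the same EV can be reached with many
    aperture/shutter pairs, trading depth of field for motion blur."""
    return math.log2(f_stop ** 2 / shutter_seconds)

# f/1.8 at 1/100 s and f/2.5 at 1/50 s give almost the same exposure:
print(exposure_value(1.8, 1 / 100))  # ~8.34
print(exposure_value(2.5, 1 / 50))   # ~8.29
```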

Multiple lenses and sensor size

Challenges call for innovation, and multiple cameras do just that in space-constrained smartphones. In a triple, quad, or penta camera set-up, each lens plays a specific role. The primary camera often features the largest sensor.

Sensor size plays an important role: the bigger the sensor, the more light it captures, which results in better dynamic range and sharper images even in low-light conditions.
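
As a rough illustration, at the same scene brightness and f-stop the light a sensor gathers scales with its area; the dimensions below are approximate, typical values, not figures from any specific phone:

```python
def relative_light_capture(width_mm: float, height_mm: float,
                           ref_width_mm: float, ref_height_mm: float) -> float:
    """Light gathered scales with sensor area, other things being equal."""
    return (width_mm * height_mm) / (ref_width_mm * ref_height_mm)

# A 1-inch-type sensor (~13.2 x 8.8 mm) vs a common 1/2.55" sensor (~5.6 x 4.2 mm):
print(relative_light_capture(13.2, 8.8, 5.6, 4.2))  # ~4.9x the light
```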

Apart from the primary camera, the lens assembly usually includes an ultra-wide angle camera, a telephoto camera, a time-of-flight (ToF) depth sensor, and in some cases, a dedicated super-zoom camera.

The depth-sensing camera generally assists in background separation, generating better bokeh. It also helps in augmented reality (AR) applications.

The ultra-wide angle camera usually has a larger aperture and shorter focal length, enabling users to fit more in the frame to capture zoomed-out, wide-angle photos.

Telephoto camera and zoom features

With the combination of lenses, smartphone cameras deliver digital, optical, and hybrid zoom features. Digital zoom cuts off the areas around a scene (crops in, to use the technical term) to give the sense that the object is closer. Some information is lost in this process, as pixel data is discarded and the remaining pixels are interpolated.
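
A minimal sketch of digital zoom: crop the central region and scale it back up (nearest-neighbour repetition here; real pipelines interpolate more cleverly, but the discarded detail cannot be recovered either way).

```python
import numpy as np

def digital_zoom(image: np.ndarray, factor: int) -> np.ndarray:
    """Crop the central 1/factor of the frame and blow it back up
    to the original size by repeating pixels."""
    h, w = image.shape[:2]
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = image[top:top + ch, left:left + cw]
    return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
zoomed = digital_zoom(frame, 2)
print(zoomed.shape)  # (480, 640), but built from only a quarter of the pixels
```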

Optical zoom delivers the truest form of magnification by manipulating light to enlarge a subject. This gives the best result, and is lossless.

Hybrid zoom utilises optical and digital zooming capabilities along with software to offer zoom well beyond a camera’s native capabilities. The idea is that while digital zoom is lossy, it is possible to zoom in to some extent without a perceivable loss in quality. Paired with large sensors, long optical zoom, and the right software, some incredible results are indeed possible.
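
In other words, the total magnification is the product of the optical and digital stages; the figures below are illustrative, not taken from any specific device:

```python
def hybrid_zoom(optical_x: float, digital_x: float) -> float:
    """Total magnification is the lossless optical stage multiplied
    by the lossy digital crop applied on top of it."""
    return optical_x * digital_x

print(hybrid_zoom(10.0, 10.0))  # a 10x periscope plus a 10x crop -> "100x"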

To fit long lenses, some smartphones are equipped with a telephoto camera featuring an innovative periscope design that uses a prism element to bend light at a 90-degree angle. Bending the light eliminates the need for a long lens sticking out of the back of the phone.

The periscopic structure and prism lenses reflect light multiple times in order to achieve a larger focal length. Additional, smaller telephoto lenses are used for mid-range zooming to achieve a seamless transition from mid-range to long-range zoom. Smartphones are also pushing digital zooming capabilities to 100x using pixel binning techniques.
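
A minimal sketch of 2x2 pixel binning, which trades resolution for lower noise by averaging each block of four photosites:

```python
import numpy as np

def bin_pixels(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Average each factor x factor block into one pixel; averaging
    four samples roughly halves the random noise."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor  # trim to a multiple of factor
    blocks = image[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

raw = np.random.poisson(20, (8, 8)).astype(float)  # a noisy sensor read-out
print(bin_pixels(raw).shape)  # (4, 4): quarter the resolution, cleaner pixels
```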

For video zoom, smartphones employ directional audio zoom, in which omnidirectional microphones at the top, bottom, and back accurately collect the surrounding sound. As the camera zooms in, sound from further away is amplified via software, while other background noise is filtered out.

AI to the rescue

Various digital technologies and software processing techniques are employed to ensure that multiple lenses work seamlessly in various lighting conditions, and that the zoom feature delivers the desired results.

Advanced signal processors and neural processing units, now standard fare on most smartphone chips, deliver impressive results in real time. Supporting hardware like the time-of-flight (ToF) sensor emits infrared light to enhance the depth of images, creating a multilevel bokeh effect. It is also a very accurate distance-mapping and 3D-imaging technology.
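
A back-of-the-envelope version of the ToF principle: the sensor times the round trip of its infrared pulse, and the distance is half of what light travels in that time.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """The pulse covers the camera-to-subject distance twice,
    hence the halving."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~13.3 nanoseconds puts the subject about 2 m away.
print(round(tof_distance(13.3e-9), 2))  # ~1.99
```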

Zooming takes a toll on image clarity, which is being addressed by employing techniques like block-matching and 3D filtering (BM3D) algorithms as blur- and noise-reduction solutions. The use of artificial intelligence in mobile cameras is on the rise: it is employed for combining frames from multiple cameras, identifying landmarks and backgrounds, extracting details, de-noising images, and enhancing image processing across multiple photography scenarios.
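
A minimal sketch of the multi-frame idea, assuming the burst frames are already aligned: averaging them cancels random sensor noise while reinforcing shared detail. (This is an illustrative stand-in for the principle, not the BM3D algorithm itself.)

```python
import numpy as np

def merge_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of aligned frames; random noise drops roughly
    with the square root of the number of frames."""
    return np.mean(np.stack(frames), axis=0)

scene = np.full((64, 64), 100.0)                      # the "true" image
burst = [scene + np.random.normal(0, 10, scene.shape)  # 8 noisy captures
         for _ in range(8)]
merged = merge_frames(burst)
print(np.std(burst[0] - scene), np.std(merged - scene))  # ~10 vs ~3.5
```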

The combination of better lens ecosystems and advancing digital technologies is opening up a whole new realm for the smartphone camera, which holds immense potential in the near future.

