
Photography and Algorithms


[ CHRONICLE ]

Over the last twenty years, digital photography has undergone a technical upheaval that has led to an explosion in the number of photos: websites and social media receive more than one billion per day. Taken by sophisticated cameras or by smartphones, now everywhere, photos cost almost nothing. How has this happened, along with a simultaneous, constant improvement in quality? It is due to a good balance between the equipment and the data processing. It is little known that, while image sensors have made significant progress in resolution and signal-to-noise ratio, a modern camera is above all a computer. The principal innovations have come from algorithms, which can be classified into three categories: picture-framing assistance, digital development and image enhancement, and finally printing and distribution (which I won't go into here).

Progress in picture framing has been considerable. Some advances are cyber-physical, such as the silent electronic shutter with ultrafast firing, or image-sensor and lens stabilisation that offsets the photographer's shaking to gain up to five stops of shutter speed before motion blur becomes evident. Other advances are purely in processing, such as new electronic viewfinders that enrich the view with useful information: sharp contours shimmer to help refine focus; histograms or colouring of the lightest and darkest areas, computed by algorithmic analysis of the scene, show where exposure needs correcting; spirit levels check horizontality and verticality; and so on. Finally, unlike film photography, where sensitivity was limited and constant for a given roll of film, sensitivity can now be selected automatically or manually for each photo. Furthermore, performance at sensitivities of ISO 1600 or even 3200 remains excellent, eliminating almost any need for the flash, and the best image sensors and algorithms make it possible to go even further.
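To make two of these viewfinder aids concrete, here is a minimal sketch in Python with NumPy of the shimmering of sharp contours (often called focus peaking) and of exposure warnings derived from a luminance histogram. It is an illustration under simple assumptions (an 8-bit grayscale frame, arbitrary thresholds), not the code of any actual camera.

```python
# Hypothetical sketch of two electronic-viewfinder aids: focus peaking and
# exposure warnings. Thresholds and array shapes are illustrative only.
import numpy as np

def focus_peaking_mask(gray, threshold=30):
    """Mark pixels with high local contrast (gradient magnitude),
    i.e. the sharp contours that 'shimmer' in the viewfinder."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold  # boolean mask of in-focus contours

def exposure_warnings(gray, low=5, high=250):
    """Return the luminance histogram the viewfinder can display, plus
    masks of clipped shadows and highlights to be coloured on screen."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    under = gray <= low    # areas a camera might colour blue
    over = gray >= high    # areas a camera might colour red ("zebras")
    return hist, under, over

# Example on a synthetic frame:
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)
peaks = focus_peaking_mask(frame)
hist, under, over = exposure_warnings(frame)
print(peaks.mean(), under.mean(), over.mean())
```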

To develop the sensor’s raw data, algorithms have improved immensely: balancing of light and shade, subtle suppression of digital noise, correction of the distortions and optical aberrations of lenses, selective brightening of areas that are too dark. Above all, we can now merge several images shot in rapid succession to reduce noise, increase the light energy captured by combining images taken at different exposures, and extend the depth of field by merging images whose focus point is gradually shifted. New statistical learning techniques, trained on huge databases, are starting to enhance the visual appearance of natural images.
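As an illustration of these multi-image techniques, the sketch below (Python with NumPy, deliberately simplified and hypothetical) averages a burst of frames to reduce noise and performs a rudimentary exposure fusion that favours well-exposed pixels in each frame. Real cameras first align the frames and use far more sophisticated weighting; this only shows the principle.

```python
# Simplified sketch of two multi-frame ideas: burst averaging for noise
# reduction and naive exposure fusion. Frames are float arrays in [0, 1].
import numpy as np

def merge_burst(frames):
    """Average N frames of the same scene: uncorrelated sensor noise
    drops roughly as 1/sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

def exposure_fusion(frames, sigma=0.2):
    """Blend differently exposed frames, weighting each pixel by how
    close it is to mid-grey (i.e. how 'well exposed' it is)."""
    stack = np.stack(frames)                      # shape (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0) + 1e-8         # normalise per pixel
    return (weights * stack).sum(axis=0)

# Example: a noisy burst and a bracketed under/over-exposed pair.
rng = np.random.default_rng(0)
scene = rng.random((100, 100))
burst = [np.clip(scene + rng.normal(0, 0.05, scene.shape), 0, 1) for _ in range(8)]
denoised = merge_burst(burst)
fused = exposure_fusion([np.clip(scene * 0.4, 0, 1), np.clip(scene * 1.6, 0, 1)])
```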

One consequence of all this for manufacturers is the need to associate the equipment closely with the algorithms from the camera-design stage onwards. For example, in new mirrorless cameras the lens sits far closer to the image sensor, which reduces its size and weight while improving its optical qualities. Reducing the size of image sensors also reduces the size and weight of cameras and lenses. The 24 × 36 mm format of film is no longer necessary for the enlightened amateur: APS-C (half that size) and Micro Four Thirds (one quarter that size) are excellent and handle much better, with differences visible only on very large enlargements. In telephones, the tiny image sensors, roughly one thirtieth the size of 24 × 36 mm, are separated from the lens by only a thin film of air! It is above all advances in algorithms and the merging of multiple images that make for great photos, as calculation can make up for the limitations of the physical equipment. The era of digital photography is only beginning. It should also be noted that digital photography algorithms are not very different from those used in astronomy or medical imaging, which once again illustrates the universal nature of computing.

> AUTHOR

Gérard Berry

Computer Scientist

Gérard Berry is a professor at the Collège de France, a member of the French Academy of Sciences, and was awarded the CNRS Gold Medal in 2014.
