Photos of the moon taken with a smartphone, thanks in part to digital zooms that now exceed 100x magnification, are a topic that resurfaces with a certain regularity, and the conclusion always seems the same. On one side are those who call all the super-detailed photos fake, the product of a sort of editing with real textures; on the other, the manufacturers, who promptly deny it and attribute everything to algorithms and machine learning.
HOW THE TEST WAS DONE
In recent days the controversy has resurfaced thanks to a post on Reddit in which a user "demonstrated" that the super zoom function of the Galaxy S20 Ultra, also found on the Galaxy S23 Ultra, does nothing more than generate "fake" photos of the moon. To prove the point, iBreakphotos downloaded a photo of the moon from the web, resized it to 170×170 pixels, applied a blur to hide all detail in an "unrecoverable" way, displayed the result on a monitor, and photographed it with a Galaxy S20 Ultra positioned at a considerable distance.
The image before and after shooting
In the resulting shot, all the craters of the moon were visible again, in considerable detail. In short, according to iBreakphotos, the smartphone added details where none could possibly exist, using a neural network trained on hundreds of images of the Moon. So rather than the multi-frame or multi-exposure techniques that Samsung says it uses to make an image more detailed, the heavy lifting would be done by artificial intelligence, tasked with recreating all the detail the optics were unable to capture.
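The image-degradation step iBreakphotos describes (downscale to 170×170, then blur away the detail) is easy to reproduce. Below is a minimal pure-Python sketch of that preparation; the `downscale` and `box_blur` helpers are hypothetical illustrations of the procedure, not the tools the Reddit user actually used.

```python
import random

def downscale(img, factor):
    """Downscale a 2D grayscale image by averaging factor x factor blocks."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - factor + 1, factor):
        row = []
        for x in range(0, w - factor + 1, factor):
            block = [img[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def box_blur(img, passes=3):
    """Repeated 3x3 box blur; several passes approximate a strong Gaussian."""
    h, w = len(img), len(img[0])
    for _ in range(passes):
        blurred = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                blurred[y][x] = sum(vals) / 9.0
        img = blurred
    return img

# A synthetic 340x340 "moon" with random crater-like texture.
random.seed(0)
src = [[random.random() for _ in range(340)] for _ in range(340)]
small = downscale(src, 2)              # 170x170, as in the Reddit test
degraded = box_blur(small, passes=5)   # fine detail is now gone
```

The point of the test is that any detail the phone "recovers" from `degraded` cannot come from the scene itself, since the blur has destroyed it.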
HOW THE SAMSUNG ALGORITHM WORKS
Samsung has not responded to this demonstration and is unlikely to do so, because it has not stated falsehoods. How the moon shot function works has already been documented extensively by the Korean company in a post published on CamCyclopedia (link in VIA). This function, called Scene Optimizer, allows the AI to recognize the subject being photographed and obtain an optimal result.
To take a photo of the moon, a Galaxy relies on two technologies: Super Resolution, which synthesizes more than 10 frames to recreate a high-resolution image from a low-resolution one, mitigating the quality degradation caused by digital zoom, removing noise, and enhancing detail; and "Optimum Shot" technology, which recognizes the moon and completes the Super Resolution image with details supplied by a deep-learning-based enhancement engine.
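The first of those two technologies, multi-frame synthesis, can be illustrated with a toy sketch: averaging many aligned noisy exposures makes noise fall away roughly as 1/sqrt(N), leaving genuine scene detail behind. This is a simplified stand-in for Samsung's Super Resolution (which also upscales and aligns frames), and the `capture_frame` / `multi_frame_merge` helpers are hypothetical names.

```python
import random

def capture_frame(scene, noise=0.2, rng=random):
    """Simulate one noisy exposure of a 1-D 'scene' (list of pixel values)."""
    return [v + rng.gauss(0, noise) for v in scene]

def multi_frame_merge(frames):
    """Average aligned frames pixel-wise; noise shrinks roughly as 1/sqrt(N)."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

rng = random.Random(42)
scene = [(i % 7) / 7.0 for i in range(64)]                    # ground-truth detail
frames = [capture_frame(scene, rng=rng) for _ in range(12)]   # >10 captures
merged = multi_frame_merge(frames)

def rmse(a, b):
    """Root-mean-square error between two equal-length signals."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
```

Comparing `rmse(merged, scene)` with `rmse(frames[0], scene)` shows the merged result sits much closer to the true scene than any single frame.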
The AI recreates the details
The image, therefore, is not replaced but completed by the AI, thanks to a model built by learning the moon's various shapes, from full moon to crescent, based on images of what we actually see from Earth, i.e. always the same face of the moon regardless of position, day, time, month, or season. In practice, this is a technique at the heart of so-called computational photography, which all smartphones now rely on.
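Samsung's enhancement engine is proprietary, but the "completed, not replaced" idea can be shown with a toy blend: the captured pixels remain the backbone of the result, and prior-derived detail is only mixed in on top. The `enhance` function and the `alpha` weight below are hypothetical illustrations, not Samsung's actual pipeline.

```python
def enhance(base, reference_detail, alpha=0.5):
    """Blend captured pixels with detail from a learned prior.

    alpha=0 returns the capture untouched; alpha=1 would fully
    replace it, which is what Samsung says it does NOT do.
    """
    return [b + alpha * (r - b) for b, r in zip(base, reference_detail)]

# A captured pixel row and the detail the model "expects" to see there.
captured = [0.0, 1.0, 0.4]
prior    = [1.0, 1.0, 0.8]
result = enhance(captured, prior, alpha=0.5)
```

With `alpha=0.5`, each output pixel lands halfway between what the sensor saw and what the model expects, so the capture still constrains the result.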