Many of us have seen the stunning moon photos taken with the latest Samsung smartphones. However, based on testing by a Reddit user, Samsung seems to add details where there are none. The user downloaded a high-resolution image of the Moon from the internet, reduced it to 170×170 pixels, and applied a Gaussian blur so that all fine detail was gone. That detail is not recoverable: the information is simply no longer there; it has been digitally blurred away.
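The preparation step the user describes can be sketched in a few lines of Python, assuming the Pillow library. The exact blur radius he used is not stated, so the value below is a guess; a synthetic gradient stands in for the downloaded moon photo so the snippet is self-contained.

```python
from PIL import Image, ImageFilter

# In practice this would be Image.open("moon.jpg") on a downloaded
# high-resolution moon photo; a synthetic radial gradient stands in here.
original = Image.radial_gradient("L").resize((1024, 1024))

# Step 1: downscale to 170x170 pixels, discarding most of the fine detail.
small = original.resize((170, 170))

# Step 2: apply a Gaussian blur (radius is an assumption) so the little
# detail that remains is smeared out and cannot be recovered from the pixels.
blurred = small.filter(ImageFilter.GaussianBlur(radius=4))

print(blurred.size)
```

The blurred 170×170 result is what the user then displayed full screen on his monitor.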
He then displayed the blurry 170×170 image full screen on his monitor, moved to the other end of the room, and turned off all the lights. He zoomed in on the monitor with the phone and, voilà, he got this photo.
To put it in perspective, here’s the photo that was displayed on the monitor (left) and the photo that was captured by the phone (right).
In the comparison, you can see that Samsung is using an AI model that adds features such as craters in places that were just a blur. That detail could not have been obtained by combining several frames to improve the image, since it is simply not present in the original. Rather, Samsung appears to use an AI model trained on a set of moon images to recognize the Moon and paint texture onto it. What do you think? Is Samsung right to add detail that doesn't exist?