The researchers started out by training a deep learning system on photos of the same scene taken with a phone and with a DSLR. That approach is effective, but it can only improve quality for the specific smartphone it was trained on. It led to a more sophisticated system, however: the new network only needs to see two sets of images from different cameras to understand how to apply the image quality of one to the other. In other words, you can feed it any photo and expect results closer to those of a target camera. You can try it yourself.
The results aren’t always ideal, as you can see in the sample above. While the colors and exposure in the “after” shot (left) are noticeably better than in the dull reference image, there’s also a greenish tint. Other samples occasionally lose a bit of detail, even if they’re more vibrant overall. The tool nonetheless appears to achieve its broader goal, especially with older or low-end phones that routinely take lifeless shots. About the only thing it can’t do is add details that weren’t already there. If your phone is terrible at low-light shots, you’re not going to recover the missing info.
And importantly, this isn’t the end. The scientists hope to put the neural network to work ‘correcting’ the shooting conditions themselves. If it’s a rainy day, for example, the AI could make the scene seem bright and sunny. That’s perilously close to creating non-existent shots, but it could be helpful if your vacation was spoiled by lousy weather and you’d like something nice to show friends back home. As it stands, the current technology could improve the baseline image quality of phone cameras if it’s incorporated into future devices and software. You’ll still get better shots with higher-end sensors and lenses, but the gap between the best and worst phone cams might not be quite so pronounced.