This researcher created an algorithm that removes the water from underwater images


It would be interesting to compare her output with what we can do in Lightroom/Photoshop; IMHO the results would be close, though maybe not identical. Dehaze, Clarity, Contrast, local and global color correction, and the other filters already get you a long way. Our existing tools are also just "math": mathematical functions and transformations applied to arrays of pixel values, which Adobe and others have been doing for roughly two decades now.

Also, it seems to me her images lose some sharpness, and there are halo artifacts visible around some of the edges, though that could just be the video compression. I don't mean to rain on the parade or over-simplify, but I would love to see SA do a comparison. Perhaps, because of the ML component, Derya is achieving the same thing as with LR/PS in a different way, one that is more scalable and produces more useful metadata? At the end of the day, I don't see her results as much different from either shooting with different lens filters or using the latest digital darkroom techniques. I should read the paper. 🙂
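To make the "it's just math on arrays" point concrete, here is a minimal NumPy sketch of a gray-world white balance plus a global contrast stretch, roughly the kind of global color/contrast transformation a digital darkroom applies. This is an illustration only, not Sea-thru and not Adobe's actual pipeline; the function names and parameters are my own.

```python
# Minimal sketch: simple global corrections as plain array math.
# NOT Sea-thru and NOT Adobe's implementation -- just an illustration.
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so its mean matches the overall mean.

    img: float array in [0, 1] with shape (H, W, 3).
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
    gains = channel_means.mean() / (channel_means + 1e-8)  # gray-world gains
    return np.clip(img * gains, 0.0, 1.0)

def contrast_stretch(img: np.ndarray, low_pct: float = 1, high_pct: float = 99) -> np.ndarray:
    """Linearly remap the low..high percentile range to [0, 1]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo + 1e-8), 0.0, 1.0)

# Hypothetical usage on an underwater photo loaded as a float RGB array:
# corrected = contrast_stretch(gray_world_white_balance(underwater_img))
```

Sea-thru goes further by estimating a physical model of how water attenuates and scatters light at each distance, but the point stands that much of what we see in the demo can also be approximated with transformations like these.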


