Robust recovery of lost colors in underwater images remains a challenging problem.
We recently showed that this was partly due to the prevalent use of an atmospheric image formation model for underwater images and proposed a physically accurate model.
The revised model showed:
1) the attenuation coefficient of the signal is not uniform across the scene but depends on object range and reflectance.
2) the coefficient governing the increase in backscatter with distance differs from the signal attenuation coefficient.
Here, we present the first method that recovers color with our revised model, using RGBD images. The Sea-thru method estimates backscatter using the dark pixels and their known range information. Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient. Using more than 1,100 images from two optically different water bodies, which we make available, we show that our method with the revised model outperforms those using the atmospheric model. Consistent removal of water will open up large underwater datasets to powerful computer vision and machine learning algorithms, creating exciting opportunities for the future of underwater exploration and conservation.
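The two-step structure described above can be illustrated with a minimal NumPy sketch of the backscatter step. This is an illustrative simplification, not the paper's implementation: the function name, the depth-binning scheme, and the 1% "darkest pixels" threshold are assumptions, and Sea-thru additionally fits a physical backscatter model to these dark-pixel estimates rather than using them directly.

```python
import numpy as np

def estimate_backscatter(img, depth, n_bins=10, frac=0.01):
    """Illustrative backscatter estimate: in each depth bin, average the
    darkest fraction of pixels per channel. (Sea-thru itself then fits a
    physical model to such estimates; this sketch stops at the averages.)

    img:   (N, 3) array of linear RGB pixel values
    depth: (N,) array of per-pixel range in meters
    Returns bin-center depths and per-channel backscatter estimates.
    """
    edges = np.linspace(depth.min(), depth.max(), n_bins + 1)
    z, bs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        if mask.sum() < 10:          # skip nearly empty bins
            continue
        pix = img[mask]
        k = max(1, int(frac * pix.shape[0]))
        darkest = np.argsort(pix.sum(axis=1))[:k]   # darkest frac of the bin
        z.append(0.5 * (lo + hi))
        bs.append(pix[darkest].mean(axis=0))        # per-channel estimate
    return np.array(z), np.array(bs)
```

On a scene with shadowed or very dark objects at each range, these dark pixels carry almost no reflected signal, so their values approach pure backscatter, which grows with distance.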
The paper is available here.
The dataset includes RAW images (.ARW or .DEF files) and corresponding depth maps (.tif files).
The dataset is divided into 5 subsets, as described in the paper (Table 1):
For convenient downloading, folders D1, D2, and D4 are divided into subfolders.
Several examples are displayed above the download links.
Dataset structure is explained in the README file.
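Once downloaded, each RAW image needs to be matched to its depth map. A small helper like the one below pairs them by shared filename stem; the directory layout and naming convention here are assumptions for illustration only, so consult the README for the actual structure.

```python
from pathlib import Path

def pair_images_with_depth(image_dir, depth_dir):
    """Pair RAW images (.ARW/.DEF) with depth maps (.tif) that share a
    filename stem. The flat-directory layout assumed here is
    illustrative; the dataset README documents the real structure."""
    depth = {p.stem: p for p in Path(depth_dir).glob("*.tif")}
    pairs = []
    for p in sorted(Path(image_dir).iterdir()):
        if p.suffix.lower() in {".arw", ".def"} and p.stem in depth:
            pairs.append((p, depth[p.stem]))
    return pairs
```

Loading the pairs is then straightforward with any RAW decoder and TIFF reader of your choice.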
If you use this data, please cite the paper.