Abstract

Robust recovery of lost colors in underwater images remains a challenging problem.
We recently showed that this was partly due to the prevalent use of an atmospheric image formation model for underwater images and proposed a physically accurate model.
The revised model showed:
1) the attenuation coefficient of the signal is not uniform across the scene but depends on object range and reflectance.
2) the coefficient governing the increase in backscatter with distance differs from the signal attenuation coefficient.
Here, we present the first method that recovers color with our revised model, using RGBD images. The Sea-thru method estimates backscatter using the dark pixels and their known range information. Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient. Using more than 1,100 images from two optically different water bodies, which we make available, we show that our method with the revised model outperforms those using the atmospheric model. Consistent removal of water will open up large underwater datasets to powerful computer vision and machine learning algorithms, creating exciting opportunities for the future of underwater exploration and conservation.
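To make the two steps concrete, below is a minimal Python sketch of the backscatter-removal idea under simplifying assumptions: it takes a linear RGB image and a co-registered range map, fits the backscatter saturation curve B_inf * (1 - exp(-beta_B * z)) per channel to the darkest pixels in each range bin, subtracts it, and then inverts attenuation with a constant per-channel coefficient (the paper instead derives a range-dependent attenuation coefficient from an estimate of the spatially varying illuminant). The function names and the SciPy-based fit are illustrative and are not the released Sea-thru implementation.

```python
import numpy as np
from scipy.optimize import curve_fit  # assumption: SciPy is available


def estimate_backscatter(image, depth, num_bins=10, dark_frac=0.01):
    """Rough backscatter estimate: in each range bin, take the darkest pixels
    (assumed to carry almost no direct signal) and fit the saturation curve
    B_inf * (1 - exp(-beta_B * z)) per channel. Simplified sketch only."""
    z_pts, rgb_pts = [], []
    edges = np.linspace(depth.min(), depth.max(), num_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth < hi)
        if mask.sum() < 10:
            continue
        vals = image[mask]                      # (N, 3) linear RGB values
        k = max(1, int(dark_frac * len(vals)))
        idx = np.argsort(vals.sum(axis=1))[:k]  # darkest pixels in this bin
        z_pts.append(np.full(k, 0.5 * (lo + hi)))
        rgb_pts.append(vals[idx])
    z_pts = np.concatenate(z_pts)
    rgb_pts = np.concatenate(rgb_pts)

    def model(z, B_inf, beta_B):
        return B_inf * (1.0 - np.exp(-beta_B * z))

    params = []
    for c in range(3):
        p, _ = curve_fit(model, z_pts, rgb_pts[:, c],
                         p0=[0.5, 0.5], bounds=([0, 0], [2, 5]))
        params.append(p)
    return np.array(params)                     # per-channel (B_inf, beta_B)


def remove_backscatter(image, depth, params):
    """Subtract the fitted backscatter at every pixel's range."""
    z = depth[..., None]
    B = params[:, 0] * (1.0 - np.exp(-params[:, 1] * z))
    return np.clip(image - B, 0.0, None)


def recover_colors(direct, depth, beta_D):
    """Invert attenuation: J = D * exp(beta_D * z). Here beta_D is a constant
    per-channel vector for simplicity; the paper models it as range-dependent."""
    z = depth[..., None]
    return direct * np.exp(beta_D * z)
```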


Publication

The paper is available here.


Dataset

The dataset includes RAW images (.ARW or .NEF files) and corresponding depth maps (.tif files).
The dataset is divided into five subsets, as described in the paper (Table 1):

[Table 1 from the paper: overview of the dataset subsets]

For convenient downloading, folders D1, D2, and D4 are divided into subfolders.
Several examples are displayed above the download links.

Dataset structure is explained in the README file.
If you use this data, please cite the paper.
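As a starting point, an image and its depth map can be loaded into Python roughly as follows. This is a hedged sketch: rawpy and tifffile are assumed to be installed, and the file names below are hypothetical, so check the README for the actual folder layout and pairing convention.

```python
import numpy as np
import rawpy        # assumption: rawpy installed for reading .ARW/.NEF files
import tifffile     # assumption: tifffile installed for the .tif depth maps

# Hypothetical file names; see the README for the actual naming convention.
raw_path = "D3/Raw/T_S04856.ARW"
depth_path = "D3/depthMaps/depthT_S04856.tif"

with rawpy.imread(raw_path) as raw:
    # Linear output (gamma 1.0), no auto-brightening, 16 bits per sample keeps
    # the image roughly radiometrically linear for color correction.
    rgb = raw.postprocess(gamma=(1, 1), no_auto_bright=True, output_bps=16)
image = rgb.astype(np.float64) / 65535.0        # (H, W, 3) in [0, 1]

depth = tifffile.imread(depth_path).astype(np.float64)  # range map
# If the depth map resolution differs from the demosaiced image, resize it
# to the image dimensions before any per-pixel processing.

print(image.shape, depth.shape, depth.min(), depth.max())
```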

D1 - Reef Scene

D1 Part 1 (8.04GB)
D1 Part 2 (8.04GB)
D1 Part 3 (7.98GB)
D1 Part 4 (7.67GB)
D1 Part 5 (9.58GB)

D2 - Reef Scene

D2 Part 1 (7.99GB)
D2 Part 2 (8GB)
D2 Part 3 (8.97GB)

D3 - Reef Scene

D4 - Canyon Scene

D4 Part 1 (18.26GB)
D4 Part 2 (9.32GB)