SOCD: Synthesized Object-Level Change Detection Dataset

Descriptions

The SOCD dataset is the first dataset designed for evaluating object-level change detection. It comprises 15,000 perspective image pairs with object-level change labels, synthesized with the CARLA simulator [1].

Images

The SOCD dataset contains 15,000 perspective image pairs rendered by cameras placed in the city-like environments of the CARLA simulator. Each image has a 90-degree field of view and a resolution of 1080 × 1080 pixels.

Labels

In addition to pixel-level change labels (semantic change masks) and object-level change labels (instance masks), the dataset provides semantic masks for entire scenes, depth images, and correspondences between objects in each image pair.

There are four object categories:

  • Buildings
  • Cars
  • Poles
  • Traffic signs and lights

The figure below shows examples of instance masks and bounding boxes for the objects/changes. Light blue denotes buildings, pink denotes cars, and green denotes poles. Please see this repository if you would like to know more about the label format.
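As a minimal sketch of how the color convention above could be applied, the snippet below maps an instance mask to the per-category overlay colors. The category names, the instance-to-category mapping, and the color for traffic signs/lights (not stated above) are assumptions; the actual IDs and palette are defined in the repository linked above.

```python
# Hypothetical category names and overlay colors. Light blue = buildings,
# pink = cars, green = poles (as in the figure); the traffic sign/light
# color is an assumption, since the text does not specify it.
CATEGORY_COLORS = {
    "building": (173, 216, 230),           # light blue
    "car": (255, 192, 203),                # pink
    "pole": (0, 128, 0),                   # green
    "traffic_sign_light": (255, 165, 0),   # assumed color
}

def colorize_instance_mask(mask, instance_to_category):
    """Map a 2-D instance mask (instance IDs, 0 = background) to RGB
    colors using the per-category palette above."""
    h = len(mask)
    w = len(mask[0]) if h else 0
    out = [[(0, 0, 0)] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inst = mask[y][x]
            if inst:  # leave background pixels black
                out[y][x] = CATEGORY_COLORS[instance_to_category[inst]]
    return out
```

The same palette can be reused when drawing bounding boxes, so masks and boxes for one category share a color.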

Viewpoint differences

We rendered the image pairs with varying viewpoint differences to investigate the robustness of change detection methods to viewpoint change. Specifically, the SOCD dataset has four categories {S1, S2, S3, S4} with different yaw-angle differences. In S1, there is no difference in yaw angle. In S2, S3, and S4, the yaw-angle difference is uniformly sampled within the ranges [0°, 10°], [10°, 20°], and [20°, 30°], respectively.
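The sampling scheme above can be sketched as follows; this is an illustrative reconstruction of the described procedure, not the actual rendering code.

```python
import random

# Yaw-angle difference ranges (degrees) for each split, as described above.
# S1 has no yaw difference; S2-S4 sample uniformly from their range.
YAW_RANGES = {
    "S1": (0.0, 0.0),
    "S2": (0.0, 10.0),
    "S3": (10.0, 20.0),
    "S4": (20.0, 30.0),
}

def sample_yaw_difference(split, rng=random):
    """Uniformly sample a yaw-angle difference (degrees) for a split."""
    lo, hi = YAW_RANGES[split]
    return rng.uniform(lo, hi)
```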

The figure below shows image pairs from the dataset with viewpoint differences. For more details, please see our paper.

Directory Structure

│
├── Town01/         # RGB images
├── Town02/         # RGB images
├── Town03/         # RGB images
├── chmasks/        # binary change masks
│   ├── Town01/
│   ├── Town02/
│   └── Town03/
├── semmask/        # semantic masks
│   ├── Town01/
│   ├── Town02/
│   └── Town03/
└── labels/         # label files
    ├── train[1-4].json
    ├── val[1-4].json
    └── test[1-4].json
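Given the layout above, the paths for one sample could be resolved as in the hypothetical helper below. The file naming (`<pair_id>.jpg` / `<pair_id>.png`) is an assumption for illustration; the actual file names and pairings are recorded in the label files.

```python
from pathlib import Path

def sample_paths(root, town, pair_id):
    """Resolve RGB, change-mask, and semantic-mask paths for one sample,
    following the directory structure above. File name patterns are
    assumptions; consult the label JSON files for the real convention."""
    root = Path(root)
    return {
        "rgb": root / town / f"{pair_id}.jpg",                    # RGB image
        "change_mask": root / "chmasks" / town / f"{pair_id}.png",  # binary change mask
        "semantic_mask": root / "semmask" / town / f"{pair_id}.png",  # semantic mask
    }
```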

Copyright & License

The SOCD dataset and sample code on this page are copyrighted by the National Institute of Advanced Industrial Science and Technology (AIST) and published under the CC BY 4.0 license.

Download

Dataset

If you use this dataset, please cite our paper:

@article{objcd,
  author    = {Doi, Kento and Hamaguchi, Ryuhei and Iwasawa, Yusuke and Onishi, Masaki and Matsuo, Yutaka and Sakurada, Ken},
  title     = {Detecting Object-Level Scene Changes in Images with Viewpoint Differences Using Graph Matching},
  journal   = {Remote Sensing},
  volume    = {14},
  number    = {17},
  year      = {2022},
}

The terms of use are displayed after you click the following button. Once you agree to the terms of use, you can download the SOCD dataset.

Download

To reduce the size of the distributed data, the RGB images were converted from PNG to JPG format. Since the experimental results published in the paper are based on the PNG images, we will soon publish results obtained with the JPG images on this page.

Codes

Code for visualization, data loading, and evaluation can be found here.

Acknowledgment

We wish to thank A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun for developing an excellent simulator.

This work was partially supported by JSPS KAKENHI (grant number 20H04217).

References

  1. A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun (2017). "CARLA: An Open Urban Driving Simulator." In Proceedings of the 1st Annual Conference on Robot Learning, 78, 1–16. (webpage: https://carla.org/, license: https://github.com/carla-simulator/carla#licenses)