Cross-Reference Stitching Quality Assessment for 360° Omnidirectional Images

Jia Li*, Kaiwen Yu, Yifan Zhao, Yu Zhang

State Key Laboratory of Virtual Reality Technology and Systems, Beihang University

Long Xu

National Astronomical Observatories, Chinese Academy of Sciences

Along with the development of virtual reality (VR), omnidirectional images play an important role in producing media content with an immersive experience. However, despite various existing approaches to omnidirectional image stitching, how to assess the quality of stitched images remains insufficiently explored. To address this problem, we first establish a novel omnidirectional image dataset containing stitched images as well as dual-fisheye images captured at the standard quarter angles of 0, 90, 180 and 270 degrees. In this manner, when evaluating the quality of a stitched image, there always exist corresponding fisheye images from at least two angles (called cross-reference images) that provide ground-truth observations of the stitching region. Based on this dataset, we propose a novel Omnidirectional Stitching Image Quality Assessment (OS-IQA) algorithm, in which we design histogram-, perceptual-hashing- and sparse-reconstruction-based quality measurements of the local stitching region by exploring the relationships between the stitched image and its cross-references. We further propose two global quality indicators that assess the visual color difference and the fitness of blind zones. To the best of our knowledge, this is the first attempt that mainly focuses on assessing the stitching quality of omnidirectional images. Qualitative and quantitative experiments show that our method outperforms the state-of-the-art methods and is highly consistent with human subjective evaluation.
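To give a concrete flavor of the local measurements, below is a minimal sketch of a histogram-based comparison between a stitched region and its cross-reference patch, using per-channel histogram intersection. The function name and interface are our own illustration, not the paper's implementation.

    import numpy as np

    def histogram_similarity(stitch_patch, reference_patch, bins=32):
        # Compare the color statistics of a stitched region against the
        # cross-reference ground-truth patch via histogram intersection.
        # Inputs are H x W x 3 uint8 arrays; output lies in [0, 1].
        score = 0.0
        for c in range(3):  # per RGB channel
            h1, _ = np.histogram(stitch_patch[..., c], bins=bins, range=(0, 255))
            h2, _ = np.histogram(reference_patch[..., c], bins=bins, range=(0, 255))
            h1 = h1 / max(h1.sum(), 1)  # normalize to a probability mass
            h2 = h2 / max(h2.sum(), 1)
            score += np.minimum(h1, h2).sum()
        return score / 3.0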

 Dataset Corrigendum of ACM MM 2019


In our ACM MM 2019 paper "Cross-Reference Stitching Quality Assessment for 360° Omnidirectional Images", the phrase "292 quaternions of fisheye images" should be revised to "292 images as quaternions" in Section 1, Section 3.1 and Section 5.1. We apologize for this negligence in the dataset description. In addition, we have enlarged the dataset to more than 10,000 stitched results and now release them. If you encounter any problems, please feel free to contact us.

 Dataset Visualization


Fisheye and cross-reference ground-truth images in the proposed CROSS dataset. The collected original fisheye images are shown in the first and third rows, while the ground-truth omnidirectional stitching images are shown in the second and fourth rows.

 Approach


Overview of the OS-IQA framework. Stitched images are evaluated with three local metrics that focus on the quality of the stitching region, and two global metrics that evaluate environmental immersion. These metrics are fused by a learned, guided linear classifier to match human subjective evaluations.
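As a rough illustration of the fusion step, here is a minimal least-squares sketch that combines the five metric scores into one quality score. The data below are placeholders; the actual guided classifier and training protocol are described in the paper.

    import numpy as np

    # Placeholder: rows are training images, columns are the five OS-IQA
    # metrics (histogram, perceptual hashing, sparse reconstruction,
    # color difference, blind-zone fitness).
    X = np.random.rand(100, 5)   # metric scores (illustrative only)
    y = np.random.rand(100)      # human subjective ratings (illustrative only)

    # Fit linear fusion weights with a bias term by least squares.
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)

    def osiqa_score(metrics):
        # Fuse one image's five metric scores into a single quality score.
        return float(np.dot(np.append(metrics, 1.0), w))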

 Visualization Demos


Software for our OS-IQA evaluation tools. The rating scores are normalized to the range 0 to 1 for better visualization. For usage problems, contact kevinyu@buaa.edu.cn for details.
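A minimal sketch of the normalization we assume the tool applies for display (simple min-max scaling; the actual software may differ):

    import numpy as np

    def normalize_scores(scores):
        # Rescale raw quality scores to [0, 1] for display.
        s = np.asarray(scores, dtype=float)
        lo, hi = s.min(), s.max()
        return np.zeros_like(s) if hi == lo else (s - lo) / (hi - lo)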

 CROSS Dataset-v1


Fisheye and cross-reference ground-truth images in the proposed CROSS dataset (v1).

 CROSS Dataset-v2


Fisheye and cross-reference ground-truth images in the proposed CROSS dataset (v2).

 Update Logs


2019/12: We have updated the CROSS-V2 dataset.

2020/01: The OS-IQA software is publicly available.

 Citation