OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction


Zheng Dang1   Jialu Huang2   Fei Wang2   Mathieu Salzmann1


Distinct materials. From left to right: conductor, dielectric, plastic, and diffuse. The top portion of the vase shows rough surface finishes, while the bottom portion shows smooth ones.

Abstract

Recent advances in deep learning, such as neural radiance fields and implicit neural representations, have significantly propelled the field of 3D reconstruction. However, accurately reconstructing objects with complex optical properties, such as metals and glass, remains a formidable challenge due to their specular and light-transmission characteristics. To facilitate the development of solutions to these challenges, we introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials (conductors, dielectrics, plastics, and their roughened variants), captured under 723 diverse lighting conditions.

To create it, we used physics-based rendering with laboratory-measured indices of refraction (IOR) to generate high-fidelity multi-view images that closely replicate real-world objects. OpenMaterial provides comprehensive annotations, including 3D shapes, material types, camera poses, depth maps, and object masks.
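To make the rendering setup concrete, the sketch below shows how one such scene could be described in a physics-based renderer. It uses the Mitsuba 3 Python API purely as an illustration; the renderer choice, file names, and material parameters (the gold conductor and the roughness value) are assumptions for this example, not specifics of the dataset pipeline.

```python
# Minimal sketch of a physics-based rendering setup (Mitsuba 3 assumed).
# Mesh/HDRI file names and material parameters are illustrative only.
import mitsuba as mi

mi.set_variant("scalar_rgb")

scene = mi.load_dict({
    "type": "scene",
    "integrator": {"type": "path", "max_depth": 16},
    # Object with a rough-conductor BSDF; in practice, eta/k would come
    # from laboratory-measured IOR spectra rather than a preset.
    "object": {
        "type": "obj",
        "filename": "vase.obj",          # hypothetical mesh file
        "bsdf": {
            "type": "roughconductor",
            "material": "Au",            # gold, from Mitsuba's built-in IOR tables
            "alpha": 0.1,                # microfacet roughness
        },
    },
    # One lighting condition, given as an environment map.
    "emitter": {"type": "envmap", "filename": "studio.exr"},  # hypothetical HDRI
    "sensor": {
        "type": "perspective",
        "fov": 45,
        "film": {"type": "hdrfilm", "width": 800, "height": 800},
    },
})

image = mi.render(scene, spp=256)
mi.util.write_bitmap("view_000.png", image)
```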

It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials, thereby paving the way for the development of 3D reconstruction algorithms capable of handling complex material properties.
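Since each view comes with a camera pose, depth map, and object mask, one natural use of the annotations is to back-project depth into a 3D point cloud for evaluating reconstructions. The snippet below is a minimal sketch of standard pinhole back-projection; the file names and the camera-to-world pose convention are assumptions about an otherwise unspecified file layout.

```python
# Minimal sketch: back-project a depth map into a world-space point cloud.
# File names and the camera-to-world pose convention are assumptions.
import numpy as np

def backproject(depth, K, cam2world, mask=None):
    """Lift per-pixel depth to 3D points in world coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    if mask is None:
        mask = depth > 0
    u, v, z = u[mask], v[mask], depth[mask]
    # Invert the pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)  # homogeneous coords
    return (pts_cam @ cam2world.T)[:, :3]

# Example usage with hypothetical annotation files.
depth = np.load("depth_000.npy")             # H x W depth map
mask = np.load("mask_000.npy").astype(bool)  # H x W object mask
K = np.load("intrinsics.npy")                # 3 x 3 intrinsic matrix
cam2world = np.load("pose_000.npy")          # 4 x 4 camera-to-world pose
points = backproject(depth, K, cam2world, mask)
```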
