Learning Shape-Independent Transformation via Spherical Representations for Category-Level Object Pose Estimation

ICLR 2025

1University of Science and Technology of China, 2National Key Laboratory of Deep Space Exploration, Deep Space Exploration Laboratory, 3Jianghuai Advance Technology Center, 4Dongguan University of Technology, 5Sangfor Technologies
[Figure: Motivation]

Our method employs spherical representations to learn a shape-independent transformation, yielding smaller NOCS angle errors than DPDN, which adopts point-based representations and suffers from shape dependence.

Abstract

Category-level object pose estimation aims to determine the pose and size of novel objects in specific categories. Existing correspondence-based approaches typically adopt point-based representations to establish the correspondences between primitive observed points and normalized object coordinates. However, due to the inherent shape-dependence of canonical coordinates, these methods suffer from semantic incoherence across diverse object shapes. To resolve this issue, we innovatively leverage the sphere as a shared proxy shape of objects to learn shape-independent transformation via spherical representations. Based on this insight, we introduce a novel architecture called SpherePose, which yields precise correspondence prediction through three core designs. Firstly, we endow the point-wise feature extraction with SO(3)-invariance, which facilitates robust mapping between camera coordinate space and object coordinate space regardless of rotation transformation. Secondly, the spherical attention mechanism is designed to propagate and integrate features among spherical anchors from a comprehensive perspective, thus mitigating the interference of noise and incomplete point clouds. Lastly, a hyperbolic correspondence loss function is designed to distinguish subtle distinctions, which can promote the precision of correspondence prediction. Experimental results on the CAMERA25, REAL275 and HouseCat6D benchmarks demonstrate the superior performance of our method, verifying the effectiveness of spherical representations and architectural innovations.
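
As an illustration of the loss design idea (a minimal sketch, not the paper's exact formulation): a hyperbolic penalty whose gradient is largest for small coordinate errors keeps pushing nearly-correct correspondences toward the ground truth, in contrast to L1/L2 losses whose gradients are constant or vanish near zero. The functional form and the curvature parameter a below are assumptions for illustration only.

import torch

def hyperbolic_correspondence_loss(pred, gt, a=0.1):
    """Illustrative hyperbolic-style penalty on per-point NOCS coordinate errors.

    pred, gt: (..., 3) predicted and ground-truth NOCS coordinates.
    a: assumed curvature parameter; f(d) = 1/a - 1/(d + a) has gradient
       1/(d + a)^2, which is largest for small errors d.
    """
    d = torch.norm(pred - gt, dim=-1)          # per-point coordinate error
    return (1.0 / a - 1.0 / (d + a)).mean()

# Small errors are still separated by noticeably different loss values:
pred = torch.tensor([[0.010, 0.0, 0.0], [0.001, 0.0, 0.0]])
gt = torch.zeros_like(pred)
print(hyperbolic_correspondence_loss(pred, gt))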

Method

The proposed SpherePose consists of two main components. First, in the spherical representation projection phase, point-wise features are extracted and assigned to spherical anchors via HEALPix spherical projection, yielding the spherical representations. Subsequently, in the spherical rotation estimation phase, the spherical anchors exchange features with each other through the attention mechanism and are then mapped to spherical NOCS coordinates for rotation estimation.
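
Below is a minimal sketch of this two-phase pipeline, assuming HEALPix binning via healpy, mean pooling of point-wise features per anchor, and a standard multi-head self-attention layer; the actual backbone, anchor aggregation and rotation head of SpherePose may differ. Given the predicted anchor-to-NOCS correspondences, the rotation could then be recovered with, e.g., a Procrustes/SVD step or a regression head (omitted here).

import healpy as hp
import numpy as np
import torch
import torch.nn as nn

NSIDE = 4                           # assumed HEALPix resolution: 12 * NSIDE**2 anchors
N_ANCHOR = hp.nside2npix(NSIDE)     # 192 spherical anchors for NSIDE = 4
FEAT_DIM = 128                      # assumed point-wise feature dimension

def project_to_sphere(points, feats):
    """Assign point-wise features to HEALPix spherical anchors by direction.

    points: (N, 3) numpy array, point cloud centered at the object center.
    feats:  (N, FEAT_DIM) torch tensor of point-wise features.
    Returns (N_ANCHOR, FEAT_DIM) anchor features (mean-pooled per anchor).
    """
    dirs = points / (np.linalg.norm(points, axis=1, keepdims=True) + 1e-8)
    pix = torch.from_numpy(hp.vec2pix(NSIDE, dirs[:, 0], dirs[:, 1], dirs[:, 2])).long()
    anchor_feats = torch.zeros(N_ANCHOR, feats.shape[1])
    counts = torch.zeros(N_ANCHOR, 1)
    anchor_feats.index_add_(0, pix, feats)
    counts.index_add_(0, pix, torch.ones(len(pix), 1))
    return anchor_feats / counts.clamp(min=1.0)     # empty anchors remain zero

class SphericalAttention(nn.Module):
    """Self-attention among spherical anchors followed by per-anchor NOCS regression."""

    def __init__(self, dim=FEAT_DIM, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 3)               # spherical NOCS coordinate per anchor

    def forward(self, anchor_feats):                # (N_ANCHOR, FEAT_DIM)
        x = anchor_feats.unsqueeze(0)               # add batch dimension
        x = self.norm(x + self.attn(x, x, x)[0])    # propagate features among anchors
        coords = self.head(x).squeeze(0)            # (N_ANCHOR, 3)
        return nn.functional.normalize(coords, dim=-1)   # unit vectors on the NOCS sphere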

[Figure: Method overview]

Experiment

[Figures: quantitative results on the NOCS benchmarks (CAMERA25/REAL275), results on HouseCat6D, and qualitative visualizations]

BibTeX

@inproceedings{iclr2025spherepose,
    title={Learning Shape-Independent Transformation via Spherical Representations for Category-Level Object Pose Estimation},
    author={Ren, Huan and Yang, Wenfei and Liu, Xiang and Zhang, Shifeng and Zhang, Tianzhu},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025}
}