Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization

CVPR 2023


Huan Ren1, Wenfei Yang1, Tianzhu Zhang1,2, Yongdong Zhang1

1University of Science and Technology of China, 2Deep Space Exploration Lab

Abstract


Weakly-supervised temporal action localization aims to localize and recognize actions in untrimmed videos with only video-level category labels during training. Without instance-level annotations, most existing methods follow the Segment-based Multiple Instance Learning (S-MIL) framework, where the predictions of segments are supervised by the labels of videos. However, the objective for acquiring segment-level scores during training is not consistent with the target for acquiring proposal-level scores during testing, leading to suboptimal results. To deal with this problem, we propose a novel Proposal-based Multiple Instance Learning (P-MIL) framework that directly classifies the candidate proposals in both the training and testing stages, which includes three key designs: 1) a surrounding contrastive feature extraction module to suppress the discriminative short proposals by considering the surrounding contrastive information, 2) a proposal completeness evaluation module to inhibit the low-quality proposals with the guidance of the completeness pseudo labels, and 3) an instance-level rank consistency loss to achieve robust detection by leveraging the complementarity of RGB and FLOW modalities. Extensive experimental results on two challenging benchmarks including THUMOS14 and ActivityNet demonstrate the superior performance of our method.
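The third design above, the instance-level rank consistency loss, encourages the RGB and FLOW streams to agree on how candidate proposals rank against each other. The paper's exact formulation is not reproduced here; as a hedged sketch, one plausible reading is to normalize each modality's proposal scores for a class into a distribution and penalize disagreement with a symmetric KL divergence (the function names and the symmetric-KL choice are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the N candidate proposals.
    e = np.exp(x - x.max())
    return e / e.sum()

def irc_loss(rgb_scores, flow_scores):
    """Hypothetical rank-consistency penalty for one action class.

    rgb_scores, flow_scores: shape (N,) proposal scores from the two modalities.
    Each is normalized over proposals; a symmetric KL divergence (an assumed
    stand-in for the paper's loss) measures how much the two rankings disagree.
    """
    p = softmax(rgb_scores)
    q = softmax(flow_scores)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * (kl(p, q) + kl(q, p))
```

When both streams score the proposals identically the loss is zero, and it grows as their relative rankings diverge, which is how the complementarity of the two modalities can be exploited as a self-supervisory signal.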


Motivation


The S-MIL framework has two drawbacks.
(1) The objectives of the training and testing stages are inconsistent. As shown in Figure (a), the goal at testing time is to score each action proposal as a whole, yet the classifier is trained to score individual segments.
(2) In many cases it is difficult to classify a segment in isolation. As shown in Figure (b), from a single running segment alone, one cannot tell whether it belongs to a high jump, a long jump, or a triple jump.
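The train/test mismatch can be made concrete with a toy sketch (all shapes and the linear classifier are illustrative assumptions, not the paper's architecture): S-MIL classifies every segment and pools the scores to match the video label, whereas P-MIL first pools the segments inside a candidate proposal and classifies the proposal as a whole, so the same objective is used at training and testing time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: T segment features of dimension D, C action classes.
T, D, C = 20, 8, 4
feats = rng.standard_normal((T, D))
W = rng.standard_normal((D, C))            # stand-in linear classifier

def s_mil_scores(feats, W):
    """S-MIL: score every segment, then top-k pool to a video-level score."""
    seg_scores = feats @ W                 # (T, C) per-segment logits
    k = max(1, T // 8)
    topk = np.sort(seg_scores, axis=0)[-k:]
    return topk.mean(axis=0)               # (C,) video-level logits

def p_mil_score(feats, W, start, end):
    """P-MIL: pool the segments inside a proposal, then classify it as a whole."""
    prop_feat = feats[start:end].mean(axis=0)   # (D,) proposal-level feature
    return prop_feat @ W                        # (C,) proposal-level logits

video_logits = s_mil_scores(feats, W)
proposal_logits = p_mil_score(feats, W, start=5, end=12)
print(video_logits.shape, proposal_logits.shape)   # (4,) (4,)
```

Note how the S-MIL classifier never sees a proposal during training: its proposal scores at test time must be derived indirectly from segment scores, which is the inconsistency the P-MIL framework removes.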


Framework



Experiment



Visualizations



Citation


@InProceedings{Ren_2023_CVPR,
    author    = {Ren, Huan and Yang, Wenfei and Zhang, Tianzhu and Zhang, Yongdong},
    title     = {Proposal-Based Multiple Instance Learning for Weakly-Supervised Temporal Action Localization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {2394-2404}
}