Rethinking temporal self-similarity for repetitive action counting

DOI:
10.60864/ctb9-6p39
Citation Author(s):
Yanan Luo, Jinhui Yi, Yazan Abu Farha, Moritz Wolter, Juergen Gall
Submitted by:
Yanan Luo
Last updated:
17 June 2024 - 3:13pm
Document Type:
supplementary
Document Year:
2024
Presenters:
Yanan Luo
Paper Code:
2400

Counting repetitive actions in long untrimmed videos is a challenging task with many applications, such as rehabilitation.
State-of-the-art methods predict action counts by first generating a temporal self-similarity matrix (TSM) from the sampled frames and then feeding the matrix to a predictor network. The self-similarity matrix, however, is not an optimal input to a network, since it discards too much information from the frame-wise embeddings. We therefore rethink how a TSM can be utilized for counting repetitive actions and propose a framework that learns embeddings and predicts action start probabilities at full temporal resolution. The number of repeated actions is then inferred from the action start probabilities. In contrast to current approaches that use the TSM as an intermediate representation, we propose a novel loss based on a generated reference TSM, which enforces that the self-similarity of the learned frame-wise embeddings is consistent with the self-similarity of repeated actions. The proposed framework achieves state-of-the-art results on three datasets: RepCount, UCFRep, and Countix.
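The abstract describes three mechanisms that can be made concrete in code: building a TSM from frame-wise embeddings, penalizing its deviation from a reference TSM, and inferring the count from per-frame action start probabilities. Below is a minimal PyTorch sketch of these ideas for illustration only; it is not the authors' implementation, and the helper names (self_similarity, reference_tsm, tsm_consistency_loss, count_from_starts), the toy reference-TSM construction, and the threshold-based peak picking are all assumptions.

import torch
import torch.nn.functional as F


def self_similarity(embeddings: torch.Tensor) -> torch.Tensor:
    """Cosine self-similarity matrix (T x T) from frame-wise embeddings (T x D)."""
    e = F.normalize(embeddings, dim=-1)
    return e @ e.t()


def reference_tsm(starts: torch.Tensor, num_frames: int) -> torch.Tensor:
    """Toy reference TSM: frames at the same relative position within a
    repetition are marked similar (1), all other pairs dissimilar (0).
    Assumes the repetitions tile the whole clip, i.e. starts[0] == 0."""
    phase = torch.zeros(num_frames, dtype=torch.long)
    bounds = torch.cat([starts, torch.tensor([num_frames])])
    for s, e in zip(bounds[:-1].tolist(), bounds[1:].tolist()):
        phase[s:e] = torch.arange(e - s)  # position of each frame inside its repetition
    return (phase[:, None] == phase[None, :]).float()


def tsm_consistency_loss(embeddings: torch.Tensor, starts: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of the learned self-similarity from the reference TSM,
    so that the frame-wise embeddings reflect the repetitive structure."""
    s_learned = self_similarity(embeddings)
    s_ref = reference_tsm(starts, embeddings.shape[0])
    return F.mse_loss(s_learned, s_ref)


def count_from_starts(start_probs: torch.Tensor, threshold: float = 0.5) -> int:
    """Infer the action count as the number of local maxima of the per-frame
    start probabilities that exceed a threshold (simple peak picking)."""
    p = start_probs
    left = torch.cat([p.new_full((1,), -1.0), p[:-1]])
    right = torch.cat([p[1:], p.new_full((1,), -1.0)])
    peaks = (p > threshold) & (p >= left) & (p >= right)
    return int(peaks.sum())


# Usage on toy data: 30 frames, three repetitions of 10 frames each.
emb = torch.randn(30, 128, requires_grad=True)
loss = tsm_consistency_loss(emb, torch.tensor([0, 10, 20]))
count = count_from_starts(torch.sigmoid(torch.randn(30)))

In this sketch the loss supervises the embeddings directly, while the count is decoded from the start probabilities at full temporal resolution, mirroring the two roles the abstract assigns to the TSM and to the start predictor.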
