A survey on video moment localization
Video moment localization, also known as video moment retrieval, aims to search a target
segment within a video described by a given natural language query. Beyond the task of …
Balanced contrastive learning for long-tailed visual recognition
Real-world data typically follow a long-tailed distribution, where a few majority categories
occupy most of the data while most minority categories contain a limited number of samples …
Temporal sentence grounding in videos: A survey and future directions
Temporal sentence grounding in videos (TSGV), aka, natural language video localization
(NLVL) or video moment retrieval (VMR), aims to retrieve a temporal moment that …
Weakly supervised temporal sentence grounding with gaussian-based contrastive proposal learning
Temporal sentence grounding aims to detect the most salient moment corresponding to the
natural language query from untrimmed videos. As labeling the temporal boundaries is labor …
You can ground earlier than see: An effective and efficient pipeline for temporal sentence grounding in compressed videos
Given an untrimmed video, temporal sentence grounding (TSG) aims to locate a target
moment semantically according to a sentence query. Although previous respectable works …
Rethinking weakly-supervised video temporal grounding from a game perspective
This paper addresses the challenging task of weakly-supervised video temporal grounding.
Existing approaches are generally based on the moment proposal selection framework that …
Weakly supervised temporal sentence grounding with uncertainty-guided self-training
The task of weakly supervised temporal sentence grounding aims at finding the
corresponding temporal moments of a language description in the video, given video …
Partially relevant video retrieval
Current methods for text-to-video retrieval (T2VR) are trained and tested on video-captioning
oriented datasets such as MSVD, MSR-VTT and VATEX. A key property of these datasets is …
Zero-shot video grounding with pseudo query lookup and verification
Video grounding, the process of identifying a specific moment in an untrimmed video based
on a natural language query, has become a popular topic in video understanding. However …
A survey on temporal sentence grounding in videos
Temporal sentence grounding in videos (TSGV), which aims at localizing one target
segment from an untrimmed video with respect to a given sentence query, has drawn …