Video Understanding

Videos capture the appearance of objects and people and how they evolve over time. The rich interactions among these objects pose significant challenges for automated video content understanding. We explore visual, audio, and textual information, mining cross-modal relationships for representation learning, and we build lightweight, efficient recognition models by pruning redundant information. The resulting models achieve top performance on a variety of video understanding benchmarks. To further video recognition research, we have also released several large-scale video classification benchmarks that are widely used in both academia and industry.

Featured Projects

BEVT: BERT Pretraining of Video Transformers

We study BERT pretraining of video transformers and propose BEVT, which decouples video representation learning into spatial representation learning and temporal dynamics learning. BEVT first performs masked image modeling on image data, and then conducts masked image modeling jointly with masked video modeling on video data.
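
To make the masked-modeling objective concrete, below is a minimal PyTorch sketch of masked prediction over video patch tokens. The class, tensor shapes, and random targets are illustrative assumptions rather than the released BEVT code; the discrete-token targets merely stand in for the visual tokenizer outputs the paper uses.

```python
import torch
import torch.nn as nn

class MaskedVideoModel(nn.Module):
    def __init__(self, dim=768, num_patches=1568, vocab_size=8192, depth=4, heads=8):
        super().__init__()
        # Learnable embedding substituted for masked patch tokens.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Predict a discrete visual-token id for each patch position.
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, patch_embeds, mask):
        # patch_embeds: (B, N, D) embeddings of 3D video patches
        # mask: (B, N) boolean, True where a patch is masked out
        x = torch.where(mask.unsqueeze(-1),
                        self.mask_token.expand_as(patch_embeds),
                        patch_embeds)
        x = self.encoder(x + self.pos_embed)
        return self.head(x)

# One training step: cross-entropy is computed only on masked positions.
model = MaskedVideoModel()
B, N, D = 2, 1568, 768
patches = torch.randn(B, N, D)            # stand-in for video patch embeddings
targets = torch.randint(0, 8192, (B, N))  # stand-in for tokenizer token ids
mask = torch.rand(B, N) < 0.5             # mask half of the patches at random
logits = model(patches, mask)
loss = nn.functional.cross_entropy(logits[mask], targets[mask])
```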

View Project


VideoLT: Large-scale Long-tailed Video Recognition

Label distributions in the real world are often long-tailed and imbalanced, resulting in models biased toward dominant labels. While long-tailed recognition has been extensively studied for image classification, little effort has been devoted to the video domain. We introduce VideoLT, a large-scale long-tailed video recognition dataset, as a step toward real-world video recognition.
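
As a concrete illustration of the imbalance problem, the sketch below shows class-balanced resampling in PyTorch, a standard baseline for long-tailed training. It is not the method proposed with VideoLT; the function and variable names are assumptions for illustration only.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    # labels: one integer class id per training clip
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    # Inverse-frequency weights: rare (tail) classes are sampled more
    # often, so each class contributes roughly equally per epoch.
    weights = 1.0 / class_counts[labels]
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# Usage with a hypothetical video `dataset` aligned with `labels`:
# loader = DataLoader(dataset, batch_size=32, sampler=balanced_sampler(labels))
```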

View Project


A Coarse-to-Fine Framework for Resource Efficient Video Recognition

Recent video recognition models achieve excellent accuracy, yet their computational cost limits their use in many real-world applications. We present LiteEval, a coarse-to-fine framework that adaptively selects the appropriate resolution on a per-input basis for fast video recognition.
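
The PyTorch sketch below illustrates the coarse-to-fine idea: a cheap gate computed from low-resolution features decides, per input, whether the expensive high-resolution branch runs at all. The module name and the hard 0.5 threshold are illustrative assumptions, and the actual LiteEval design (e.g., its recurrent gating trained end-to-end) differs.

```python
import torch
import torch.nn as nn

class CoarseToFineStep(nn.Module):
    """Always compute cheap low-resolution features; run the expensive
    high-resolution branch only when a learned gate asks for it."""
    def __init__(self, dim=512, num_classes=400):
        super().__init__()
        self.gate = nn.Linear(dim, 1)            # binary decision from coarse features
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, coarse_feat, fine_fn):
        # coarse_feat: (B, dim) features from a lightweight low-res network
        # fine_fn: callable returning (B, dim) features from the heavy branch
        need_fine = torch.sigmoid(self.gate(coarse_feat)) > 0.5  # (B, 1) bool
        feat = coarse_feat
        if need_fine.any():
            # A real implementation would run the heavy branch only on the
            # selected examples; running it on all keeps the sketch short.
            feat = torch.where(need_fine, fine_fn(), coarse_feat)
        return self.classifier(feat)

# The heavy branch is passed in lazily, so it is skipped entirely
# when no example in the batch requires high-resolution features.
step = CoarseToFineStep()
coarse = torch.randn(8, 512)
logits = step(coarse, fine_fn=lambda: torch.randn(8, 512))
```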

View Project