
Tiny Video Networks

lib:962bbd344b68592b (v1.0.0)

Authors: AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo
ArXiv: 1910.06961
Abstract URL: https://arxiv.org/abs/1910.06961v1


Video understanding is a challenging problem with great impact on the abilities of autonomous agents working in the real world. Yet, solutions so far have been computationally intensive, with the fastest algorithms taking more than half a second per video snippet on powerful GPUs. We propose a new approach to video architecture learning - Tiny Video Networks - which automatically designs highly efficient models for video understanding. The tiny video models run with competitive performance in as little as 37 milliseconds per video on a CPU and 10 milliseconds on a standard GPU.
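The abstract reports per-video wall-clock latencies (37 ms CPU, 10 ms GPU). A minimal sketch of how such a latency measurement is typically taken, using a hypothetical placeholder model in place of an actual Tiny Video Network (the pooling-plus-linear model below is illustrative only, not the paper's architecture):

```python
import time
import numpy as np

def dummy_video_model(clip):
    # Hypothetical stand-in for a Tiny Video Network: global average
    # pooling over (frames, height, width), then a linear classifier.
    feats = clip.mean(axis=(0, 1, 2))           # (channels,) pooled features
    weights = np.ones((clip.shape[-1], 10))     # 10 output classes (arbitrary)
    return feats @ weights

def measure_latency_ms(model, clip, warmup=3, runs=10):
    # Warm-up iterations avoid counting one-time setup costs.
    for _ in range(warmup):
        model(clip)
    start = time.perf_counter()
    for _ in range(runs):
        model(clip)
    # Average wall-clock time per forward pass, in milliseconds.
    return (time.perf_counter() - start) / runs * 1000.0

# A short video snippet: (frames, height, width, channels).
clip = np.random.rand(16, 112, 112, 3)
latency = measure_latency_ms(dummy_video_model, clip)
print(f"{latency:.2f} ms per clip")
```

Averaging over several timed runs after a warm-up phase is the standard way to get stable per-clip numbers; reported figures also depend on batch size, input resolution, and hardware.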

Relevant initiatives

Related knowledge about this paper:
- Reproduced results (crowd-benchmarking and competitions)
- Artifact and reproducibility checklists
- Common formats for research projects and shared artifacts
- Reproducibility initiatives
