Abstract
Large media repositories frequently face large influxes of new material with no prior knowledge about its content. Screening, assessing, and filtering potentially interesting sequences is currently a tedious process that can only be carried out manually. Despite recent advances in computer vision, most existing media retrieval systems support only straightforward retrieval based on available but limited metadata, such as title and performer, or on user recommendations. Interesting and fresh material easily gets lost in the flood of data. This project counteracts these limitations by providing efficient access and indexing methods for large video repositories that additionally account for similarity with respect to film-grammatical and semantic concepts. As a result, the project introduces a radically different approach to context-based retrieval in very large video collections by targeting unusual and, thus, potentially interesting material only.
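The abstract does not prescribe a concrete algorithm, but the core idea of surfacing unusual material can be illustrated with a minimal outlier-scoring sketch over per-clip feature vectors: clips that lie far from everything else in feature space are ranked as potentially interesting. The feature representation, the `novelty_scores` helper, and the parameter `k` below are illustrative assumptions, not the project's actual pipeline.

```python
# Illustrative sketch only: score each clip by its mean distance to its
# k nearest neighbours; clips far from all others rank as "unusual".
import numpy as np

def novelty_scores(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Mean distance to the k nearest neighbours for each row of `features`."""
    # Pairwise Euclidean distances (fine for a small demo; a real index over a
    # large repository would use an approximate nearest-neighbour structure).
    diff = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)      # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]  # k closest other clips
    return knn.mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical per-clip feature vectors, e.g. pooled visual descriptors.
    clips = rng.normal(size=(200, 64))
    clips[0] += 6.0                      # one deliberately unusual clip
    scores = novelty_scores(clips, k=5)
    print("most unusual clip index:", int(scores.argmax()))
```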
Project partners
- Universität Wien, Fakultät für Informatik, Multimedia Information Systems Research Group (Austria)
Funding provided by
- WWTF – Wiener Wissenschafts-, Forschungs- und Technologiefonds (Vienna Science and Technology Fund)