Abstract
In the past, vision concentrated on the basic entities of a vision space: pixels, key points, ... Local features gave rise to selected points in the image or the feature space where, again, only individual feature vectors were considered for classifying or discriminating the basic entities. Every point in the underlying Euclidean space is in principle valid. However, context and relations among two or more basic entities establish constraints that exclude certain constellations and give rise to highly complex algorithms for checking consistency, finding correspondences, or matching. Furthermore, noise and inaccurate measurements enforce strategies to cope with instability in order to produce robust results.

More holistic approaches consider the spaces in which the basic elements are embedded. These spaces need not be homogeneous and may contain empty subspaces ("holes"); many combinatorial constellations are excluded by real-world constraints. A few examples of structural representations of such spaces will be discussed: graphs describing spatio-temporal partitions at multiple levels of abstraction, and topological and homological representations with strong invariance under continuous deformations. The concept of topological persistence will be introduced as an example of how to overcome the widespread myth that "topology is not robust".
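To illustrate why persistence makes topological descriptors robust to noise, the following is a minimal sketch (not part of the original talk) that computes the 0-dimensional persistence pairs of the sublevel-set filtration of a sampled 1-D function using a union-find structure; the function name and the example signal are illustrative assumptions.

def persistence_pairs_1d(values):
    """Return (birth, death) pairs of connected components in the
    sublevel-set filtration of the 1-D sequence `values`."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        # union-find root lookup with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}                        # root index -> birth value of its component
    pairs = []
    active = [False] * n
    for i in sorted(range(n), key=lambda k: values[k]):   # sweep values bottom-up
        active[i] = True
        birth[i] = values[i]          # a new component is born at a local minimum
        for j in (i - 1, i + 1):      # merge with already-active neighbours
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the component with the later (higher) birth dies
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < values[i]:              # skip zero-persistence pairs
                    pairs.append((birth[young], values[i]))
                parent[young] = old
    pairs.append((min(values), float("inf")))             # the oldest component never dies
    return pairs

# illustrative signal: three local minima with values 0, 1 and 2
print(persistence_pairs_1d([5, 1, 4, 0, 3, 2, 6]))
# -> [(2, 3), (1, 4), (0, inf)]

Small perturbations of the input values can only create or shift pairs of low persistence (death minus birth), while the dominant minima keep long lifetimes; this stability is the sense in which persistent topological descriptors are robust.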
Reference
Kropatsch, W. (2012). Representing Vision Embedding Spaces. GS Workshop on Computer Vision and Perception, Prag, EU. http://hdl.handle.net/20.500.12708/85406