String Matching for Visual Retrieval and Classification


We present an approach to measuring similarity between visual data based on approximate string matching. An image is first represented as an ordered list of feature descriptors, and two images are compared through this representation. In this framework, similarity is measured by 1) solving a correspondence problem between the two sets of features that preserves their ordering, and 2) combining the similarities between matched features with the dissimilarities of unmatched features. Our experimental study shows that such a globally ordered, locally unordered representation is more discriminative than a bag-of-features representation because it takes the features' order into account. We demonstrate our method on contour matching and scene recognition tasks and achieve state-of-the-art performance.
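The core idea of ordering-preserving correspondence with penalties for unmatched features can be sketched as an edit-distance-style dynamic program. The sketch below is illustrative only: the function names, the Euclidean substitution cost, and the fixed gap penalty are assumptions, and it simplifies the paper's full formulation (which also allows locally unordered matching within the globally ordered sequence).

```python
import math

def feature_distance(a, b):
    # Dissimilarity between two feature descriptors; Euclidean distance
    # is an illustrative choice, not necessarily the one used in the paper.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def string_match_cost(seq1, seq2, gap_penalty=1.0):
    """Align two ordered feature sequences, edit-distance style.

    Matched features contribute their pairwise dissimilarity;
    unmatched features (insertions/deletions) contribute a fixed
    gap penalty. Lower cost means more similar images.
    """
    m, n = len(seq1), len(seq2)
    # dp[i][j] = minimal cost of aligning seq1[:i] with seq2[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap_penalty
    for j in range(1, n + 1):
        dp[0][j] = j * gap_penalty
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + feature_distance(seq1[i - 1], seq2[j - 1]),  # match
                dp[i - 1][j] + gap_penalty,   # leave a feature of seq1 unmatched
                dp[i][j - 1] + gap_penalty,   # leave a feature of seq2 unmatched
            )
    return dp[m][n]
```

Because the recurrence only allows matches that respect sequence order, crossing correspondences are ruled out, which is what distinguishes this measure from an order-agnostic bag-of-features comparison.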

Approximate String Matching

Mei-Chen Yeh and Tim Cheng, "A String Matching Approach for Visual Retrieval and Classification," ACM International Conference on Multimedia Information Retrieval, Vancouver, Canada, 2008.

(presentation slides)
