Attribute Prototype Network for Zero-Shot Learning


Wenjia Xu, Yongqin Xian, Jiuniu Wang, Zeynep Akata, Bernt Schiele

Paper | Code

Introduction


From the beginning of zero-shot learning research, visual attributes have been shown to play an important role. In order to better transfer attribute-based knowledge from known to unknown classes, we argue that an image representation with integrated attribute localization ability would be beneficial for zero-shot learning. To this end, we propose a novel zero-shot representation learning framework that jointly learns discriminative global and local features using only class-level attributes. While a visual-semantic embedding layer learns global features, local features are learned through an attribute prototype network that simultaneously regresses and decorrelates attributes from intermediate features. We show that our locality augmented image representations achieve a new state-of-the-art on three zero-shot learning benchmarks. As an additional benefit, our model points to the visual evidence of the attributes in an image, e.g. for the CUB dataset, confirming the improved attribute localization ability of our image representation.
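To make the locality branch more concrete, below is a minimal PyTorch sketch of an attribute prototype module: one learnable prototype per attribute is matched against intermediate CNN features, and the resulting per-location similarity maps are max-pooled to regress class-level attributes. All names, shapes, and the choice of an MSE regression loss are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of an attribute prototype module (hypothetical names and shapes).
import torch
import torch.nn as nn


class AttributePrototypeNetwork(nn.Module):
    def __init__(self, num_attributes: int, feat_dim: int = 2048):
        super().__init__()
        # One learnable prototype per attribute, in the space of local CNN features.
        self.prototypes = nn.Parameter(torch.randn(num_attributes, feat_dim) * 0.02)

    def forward(self, feature_map: torch.Tensor):
        # feature_map: (B, C, H, W) intermediate features from a CNN backbone.
        B, C, H, W = feature_map.shape
        local_feats = feature_map.flatten(2)  # (B, C, H*W)
        # Similarity between every prototype and every spatial location.
        sim_maps = torch.einsum('ac,bcl->bal', self.prototypes, local_feats)  # (B, A, H*W)
        # Max-pool over locations: each attribute score comes from its best-matching region.
        attr_scores, _ = sim_maps.max(dim=-1)  # (B, A)
        return attr_scores, sim_maps.view(B, -1, H, W)


def attribute_regression_loss(attr_scores, class_attributes):
    # Regress predicted attribute scores toward the class-level attribute vector
    # of each image's class (MSE chosen here purely as an example).
    return nn.functional.mse_loss(attr_scores, class_attributes)
```

The max-pooling step is what ties attribute prediction to a specific image region, which is also what makes the similarity maps usable as localization evidence.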

Quantitative Results


We next report quantitative results on the three zero-shot learning benchmarks.
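For context on how such numbers are typically obtained with a visual-semantic embedding: the global image feature is mapped into attribute space and assigned to the unseen class whose attribute vector is most compatible with it. The sketch below assumes a simple dot-product compatibility and hypothetical names; it is not the evaluation code behind the reported results.

```python
# Hedged sketch of zero-shot inference with a visual-semantic embedding.
import torch


def zero_shot_predict(global_feats, embedding_layer, class_attributes):
    """
    global_feats:     (B, D) global image features from the backbone
    embedding_layer:  module mapping (B, D) -> (B, A), the attribute space
    class_attributes: (K, A) class-level attribute vectors of the unseen classes
    Returns predicted class indices of shape (B,).
    """
    embedded = embedding_layer(global_feats)          # (B, A)
    # Dot-product compatibility between image embeddings and class attributes.
    compatibility = embedded @ class_attributes.t()   # (B, K)
    return compatibility.argmax(dim=1)
```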

Qualitative Results


We next show qualitative results for attribute and body-part localization.
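One way such localization figures can be produced is to upsample an attribute's similarity map to image resolution and overlay it on the input as a heatmap. The helper below is an illustrative sketch with assumed shapes; the exact visualization pipeline used for the figures is not specified on this page.

```python
# Hedged sketch: turn an attribute's similarity map into an image-sized heatmap.
import torch
import torch.nn.functional as F


def attribute_heatmap(sim_maps: torch.Tensor, attr_idx: int, image_size=(224, 224)):
    # sim_maps: (B, A, H, W) similarity maps from the attribute prototype module.
    heat = sim_maps[:, attr_idx:attr_idx + 1]  # (B, 1, H, W)
    heat = F.interpolate(heat, size=image_size, mode='bilinear', align_corners=False)
    # Normalize each image's map to [0, 1] so it can be overlaid on the input.
    flat = heat.flatten(1)
    lo = flat.min(1)[0].view(-1, 1, 1, 1)
    hi = flat.max(1)[0].view(-1, 1, 1, 1)
    heat = (heat - lo) / (hi - lo + 1e-6)
    return heat.squeeze(1)  # (B, H_img, W_img)
```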