Authors: Yong Liu, Lin Shang, Andy Song
ArXiv: 1811.08561
Abstract URL: http://arxiv.org/abs/1811.08561v1
Typical person re-identification (re-ID) methods train a deep CNN to extract
deep features and combine them with a distance metric for the final evaluation.
In this work, we focus on exploiting the full information encoded in deep
features to boost re-ID performance. First, we propose a Deep Feature Fusion
(DFF) method to exploit the diverse information embedded in a deep feature. DFF
treats each sub-feature as an information carrier and employs a diffusion
process to exchange their information. Second, we propose an Adaptive
Re-Ranking (ARR) method to exploit the contextual information encoded in the
features of neighbors. ARR utilizes the contextual information to re-rank the
retrieval results in an iterative manner. In particular, it automatically adds
more contextual information after each iteration to consider more matches. Third,
we propose a strategy that combines DFF and ARR to enhance performance.
Extensive comparative evaluations demonstrate the superiority of the proposed
methods on three large benchmarks.
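
The sketch below is only an illustration of the two ideas described in the abstract, not the paper's actual DFF or ARR formulation: it fuses per-sub-feature similarity matrices through a simple iterative information-exchange (diffusion-like) step, then re-ranks by blending each query's similarities with the gallery context of its current neighbours while enlarging the neighbourhood each round. All function names, parameters (n_parts, alpha, k0, step, beta), and the specific update rules are assumptions made for this example.

```python
import numpy as np

def subfeature_similarities(query, gallery, n_parts=4):
    """Split each feature vector into n_parts sub-features and compute one
    cosine-similarity matrix (queries x gallery) per sub-feature."""
    sims = []
    for q_part, g_part in zip(np.array_split(query, n_parts, axis=1),
                              np.array_split(gallery, n_parts, axis=1)):
        q_norm = q_part / (np.linalg.norm(q_part, axis=1, keepdims=True) + 1e-12)
        g_norm = g_part / (np.linalg.norm(g_part, axis=1, keepdims=True) + 1e-12)
        sims.append(q_norm @ g_norm.T)
    return sims

def diffuse_and_fuse(sims, alpha=0.5, n_iters=10):
    """Let the per-sub-feature similarity matrices exchange information:
    each matrix is repeatedly blended with the average of the other matrices,
    then all of them are averaged into a single fused similarity."""
    sims = [s.copy() for s in sims]
    for _ in range(n_iters):
        total = sum(sims)
        sims = [alpha * s + (1 - alpha) * (total - s) / (len(sims) - 1)
                for s in sims]
    return sum(sims) / len(sims)

def adaptive_rerank(qg_sim, gg_sim, k0=5, step=5, n_rounds=3, beta=0.5):
    """Iteratively refine query-gallery similarities with neighbourhood context:
    each query row is blended with the mean gallery-gallery similarity row of
    its current top-k neighbours, and k grows every round so that later rounds
    consider more matches."""
    sim, k = qg_sim.copy(), k0
    for _ in range(n_rounds):
        top_k = np.argsort(-sim, axis=1)[:, :k]              # current neighbours
        context = np.stack([gg_sim[idx].mean(axis=0) for idx in top_k])
        sim = beta * sim + (1 - beta) * context              # inject neighbour context
        k += step                                            # consider more matches
    return sim

# Toy usage with random 512-D "deep features": 3 queries, 100 gallery images.
rng = np.random.default_rng(0)
q_feat = rng.standard_normal((3, 512))
g_feat = rng.standard_normal((100, 512))
fused_qg = diffuse_and_fuse(subfeature_similarities(q_feat, g_feat))
fused_gg = diffuse_and_fuse(subfeature_similarities(g_feat, g_feat))
ranking = np.argsort(-adaptive_rerank(fused_qg, fused_gg), axis=1)  # per-query order
```

In this simplified form, the fusion step plays the role the abstract assigns to DFF (sub-features exchanging information before a single similarity is formed), and the re-ranking loop mirrors ARR's behaviour of adding more contextual information at each iteration; the paper's actual diffusion process and re-ranking rule should be taken from the full text.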