Workshop on Cross Modal Person Reidentification
(CM-PRID'21)


About the CM-PRID'21 Workshop

Person re-identification has received considerable attention in recent years due to its potential in visual surveillance. Most existing works focus on the visible spectrum and use a single modality. With widespread surveillance and more stringent operational constraints, the current need is for techniques that can handle the cross-modal nature of the captured data. In addition to the challenges of visible-spectrum re-identification, such as pose, illumination, and scale variations, and occlusion, cross-modal datasets also pose the challenge of domain or spectrum variation. Thus, cross-modal re-identification is very challenging in practice. This motivates the two goals that are the focus of the workshop: first, to generate cross-modal datasets, such as text-image, RGB-IR, image-video, and RGB-Depth datasets; second, to develop novel techniques that can bridge the domain gap between the two modalities. Although some preliminary datasets and techniques exist, there is considerable scope for contributions toward both goals.

We invite novel, high-quality papers addressing issues related to cross-modal person re-identification, including but not limited to:

  • Models for cross-modal re-id.
  • Adversarial attacks.
  • Image-to-video or video-to-image re-id.
  • Text-image, RGB-IR, sketch-image, and RGB-Depth re-id.
  • Active learning for data collection.
  • Long-term re-id.
  • Privacy concerns.
  • Large-scale deployments.
  • Industrial applications.
  • Re-id in wireless networks.
  • Cross-modal human image synthesis.
  • Domain adaptation techniques.

Important Dates

Submission: November 6, 2020
Notification of Paper Acceptance: December 25, 2020
Camera Ready: January 8, 2021

For submission instructions, please refer to the main conference site.

Organizing Team

Program Committee
