GenComm

Pragmatic Heterogeneous Collaborative Perception via Generative Communication Mechanism

NeurIPS 2025 Poster

1Southwest Jiaotong University  2Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China  3Wuhan University of Technology  4City University of Hong Kong

Key Idea

  • Generate features consistent with ego features.
  • Preserve collaborators' spatial information.
Key idea illustration

Comparison with Baselines

GenComm simultaneously satisfies all of the following:

  • Non-intrusive
  • Scalable
  • Plug & Play
  • Efficient Communication
  • Private & Secure

GenComm Framework


Application Rationale

  • Stage 1: GenComm base training under homogeneous collaboration.
  • Stage 2: training of agent-specific Deformable Message Extractors (DMEs) under heterogeneous collaboration.
  • New agents join the collaboration by reaching a consensus with the existing vendors and training their specific DMEs.
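The staged scheme above can be sketched in a few lines. This is a toy illustration only, not the authors' API: the module names are assumed from the abstract, and the point is simply which parameters train at each stage (the shared base trains once; onboarding a new agent touches only its own DME).

```python
# Toy sketch of GenComm's staged training/onboarding (names assumed).
base_modules = ["encoder", "spatial_aware_feature_generator",
                "channel_enhancer", "fusion_head"]
dmes = {}  # one Deformable Message Extractor (DME) per agent/vendor type

def stage1_trainable():
    """Stage 1 (homogeneous collaboration): the GenComm base trains end to end."""
    return list(base_modules)

def stage2_trainable(agent_type):
    """Stage 2 / onboarding (heterogeneous): register a DME for the new agent
    type; only that DME is trainable, the base modules stay frozen."""
    dmes[agent_type] = f"dme_{agent_type}"
    return [dmes[agent_type]]

print(stage1_trainable())
print(stage2_trainable("vendor_b"))  # only the new DME trains
```

Because Stage 2 never touches the base modules, the established semantic consistency among existing agents is left intact, which is the "non-intrusive, plug-and-play" property listed above.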

Abstract

Multi-agent collaboration enhances the perception capabilities of individual agents through information sharing. However, in real-world applications, differences in sensors and models across heterogeneous agents inevitably lead to domain gaps during collaboration. Existing approaches based on adaptation and reconstruction fail to support pragmatic heterogeneous collaboration due to two key limitations: (1) intrusive retraining of the encoder or core modules disrupts the established semantic consistency among agents; and (2) accommodating new agents incurs high computational costs, limiting scalability. To address these challenges, we present a novel Generative Communication mechanism (GenComm) that facilitates seamless perception across heterogeneous multi-agent systems through feature generation, without altering the original network, and employs lightweight numerical alignment of spatial information to efficiently integrate new agents at minimal cost. Specifically, a tailored Deformable Message Extractor is designed to extract spatial information for each collaborator, which is then transmitted in place of intermediate features. The Spatial-Aware Feature Generator, utilizing a conditional diffusion model, generates features aligned with the ego agent's semantic space while preserving the spatial information of the collaborators. These generated features are further refined by a Channel Enhancer before fusion. Experiments conducted on the OPV2V-H, DAIR-V2X and V2X-Real datasets demonstrate that GenComm outperforms existing state-of-the-art methods, achieving an 81% reduction in both computational cost and parameter count when incorporating new agents.
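The dataflow described in the abstract can be sketched as a minimal NumPy toy. Everything here is assumed for illustration: the shapes, the linear stand-ins for the Deformable Message Extractor and the diffusion-based Spatial-Aware Feature Generator, and the sigmoid gate for the Channel Enhancer are not the paper's actual modules; the sketch only shows the shape of the pipeline and why transmitting the compact spatial message is cheaper than transmitting raw intermediate features.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 8, 4, 4   # toy feature-map shape (channels, height, width)
K = 2               # spatial-message channels per collaborator (assumed, K << C)

def deformable_message_extractor(collab_feat, proj):
    """Toy stand-in for the DME: project the collaborator's feature map
    down to a compact spatial message (K*H*W values instead of C*H*W)."""
    return proj @ collab_feat.reshape(C, -1)          # (K, C) @ (C, H*W) -> (K, H*W)

def spatial_aware_feature_generator(message, lift):
    """Toy stand-in for the conditional-diffusion generator: lift the received
    spatial message into the ego agent's C-channel semantic space."""
    return (lift @ message).reshape(C, H, W)          # (C, K) @ (K, H*W)

def channel_enhancer(feat, gate):
    """Per-channel sigmoid gating of the generated features before fusion."""
    return feat * (1.0 / (1.0 + np.exp(-gate)))[:, None, None]

# Toy parameters (learned in the real system)
proj = rng.standard_normal((K, C))
lift = rng.standard_normal((C, K))
gate = rng.standard_normal(C)

ego_feat    = rng.standard_normal((C, H, W))
collab_feat = rng.standard_normal((C, H, W))

message   = deformable_message_extractor(collab_feat, proj)   # transmitted
generated = spatial_aware_feature_generator(message, lift)    # ego-side
enhanced  = channel_enhancer(generated, gate)
fused     = np.maximum(ego_feat, enhanced)                    # simple max fusion

print(message.size, collab_feat.size)  # → 32 128
```

With these toy shapes the transmitted message holds 32 values against 128 for the raw feature map, which is the kind of bandwidth saving "Efficient Communication" refers to; the real generator is a conditional diffusion model rather than the single linear lift used here.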

BibTeX

@article{zhou2025pragmatic,
  title={Pragmatic Heterogeneous Collaborative Perception via Generative Communication Mechanism},
  author={Zhou, Junfei and Dai, Penglin and Wei, Quanmin and Liu, Bingyi and Wu, Xiao and Wang, Jianping},
  journal={arXiv preprint arXiv:2510.19618},
  year={2025}
}