Layout Sequence Prediction From Noisy Mobile Modality

Haichao Zhang1
Yi Xu1
Hongsheng Lu2
Takayuki Shimizu2
Yun Fu1

1Northeastern University
2Toyota Motor North America

In ACM MM 2023

[arXiv]
[GitHub] (coming soon)
[Video]
[Poster]
[Paper]


Real-World Scenario with Obstructed Cameras and Missing Objects

Abstract

Trajectory prediction plays a vital role in understanding pedestrian movement for applications such as autonomous driving and robotics. Current trajectory prediction models depend on long, complete, and accurately observed sequences from visual modalities. Nevertheless, real-world situations often involve obstructed cameras, missed objects, or objects out of sight due to environmental factors, leading to incomplete or noisy trajectories. To overcome these limitations, we propose LTrajDiff, a novel approach that treats objects obstructed or out of sight as equally important as those with fully visible trajectories. LTrajDiff utilizes sensor data from mobile phones to overcome out-of-sight constraints, albeit at the cost of new challenges such as modality fusion, noisy data, and the absence of spatial layout and object size information. We employ a denoising diffusion model to predict precise layout sequences from noisy mobile data using a coarse-to-fine diffusion strategy, incorporating a Random Mask Strategy, a Siamese Masked Encoding Module, and a Modality Fusion Module. Our model predicts layout sequences by implicitly inferring object size and projection status from a single reference timestamp or severely obstructed sequences. It achieves state-of-the-art results in randomly obstructed experiments and outperforms all baselines in extremely short input experiments, illustrating the effectiveness of leveraging noisy mobile data for layout sequence prediction. In summary, our approach offers a promising solution to the challenges faced by layout sequence and trajectory prediction models in real-world settings, paving the way for using mobile-phone sensor data to accurately predict pedestrian bounding-box trajectories. To the best of our knowledge, this is the first work to address severely obstructed and extremely short layout sequences by combining vision with a noisy mobile modality.
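
As a rough illustration of the pipeline described in the abstract, the sketch below shows how a random mask over observed layout steps, a shared-weight (Siamese) masked encoder, and cross-modal fusion with noisy mobile-sensor readings could fit together in PyTorch. All module names, tensor shapes, the IMU feature dimension, and the single simplified denoising step are our own assumptions for illustration; this is not the authors' released implementation.

# Minimal sketch (not the authors' code) of the three ideas named in the
# abstract: Random Mask Strategy, Siamese Masked Encoding, Modality Fusion.
import torch
import torch.nn as nn

def random_mask(seq, keep_prob=0.5):
    """Randomly drop timesteps to mimic obstructed observations.

    seq: (B, T, 4) bounding-box sequence [x, y, w, h].
    Returns the masked sequence and the boolean keep mask.
    """
    B, T, _ = seq.shape
    keep = torch.rand(B, T, device=seq.device) < keep_prob
    return seq * keep.unsqueeze(-1), keep

class SiameseMaskedEncoder(nn.Module):
    """Shared-weight encoder applied to both masked and full views."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Linear(4, dim)
        self.enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, boxes):            # (B, T, 4) -> (B, T, dim)
        return self.enc(self.proj(boxes))

class ModalityFusion(nn.Module):
    """Cross-attention from layout tokens to mobile-sensor tokens."""
    def __init__(self, dim=64, sensor_dim=6):
        super().__init__()
        self.sensor_proj = nn.Linear(sensor_dim, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, layout_tok, sensor):   # sensor: (B, T, 6), e.g. IMU
        s = self.sensor_proj(sensor)
        fused, _ = self.attn(layout_tok, s, s)
        return layout_tok + fused             # residual fusion

# One (simplified) reverse-diffusion step: denoise a noisy future layout
# conditioned on the fused context. A real model iterates over many steps.
if __name__ == "__main__":
    B, T = 2, 12
    boxes = torch.rand(B, T, 4)
    imu = torch.randn(B, T, 6)

    masked, keep = random_mask(boxes)
    ctx = ModalityFusion()(SiameseMaskedEncoder()(masked), imu)

    noisy_future = torch.randn(B, T, 64)
    denoiser = nn.Linear(64, 64)              # stand-in for the real denoiser
    print(denoiser(noisy_future + ctx).shape)  # torch.Size([2, 12, 64])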


Code coming soon!


Paper

Haichao Zhang, Yi Xu, Hongsheng Lu, Takayuki Shimizu, Yun Fu
Layout Sequence Prediction From Noisy Mobile Modality
ACM MM, 2023 (Paper)

[BibTex]


@inproceedings{zhang2023layout,
  title={Layout Sequence Prediction From Noisy Mobile Modality},
  author={Zhang, Haichao and Xu, Yi and Lu, Hongsheng and Shimizu, Takayuki and Fu, Yun},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  pages={3965--3974},
  year={2023}
}



Expository Videos


Two-minute papers