Towards Autonomous Disaster Assessment: A Cross-View Multi-Agent Pipeline for Zero-Shot Damage Diagnosis
Agent4Disaster is an autonomous GeoAI multi-agent framework designed for
hyperlocal, interpretable, and near–real-time disaster assessment.
The framework integrates multimodal geospatial observations, including:
- Satellite imagery (RSI)
- Street-view imagery (SVI)
- Textual and contextual cues
- Temporal change information
- Disaster mapping
It performs an end-to-end disaster intelligence pipeline including:
- Disaster perception
- Image restoration
- Damage recognition
- Disaster reasoning and recovery recommendation
By leveraging vision–language foundation models and agent-based orchestration,
Agent4Disaster enables cross-view disaster understanding and zero/few-shot damage diagnosis without task-specific retraining.
This repository hosts project materials associated with our research paper.
Note: The full source code is not yet publicly available because the paper has not been released on a public platform.
Key features:
- Autonomous multi-agent disaster intelligence pipeline
- Cross-view disaster understanding (Satellite ↔ Street-view)
- Multimodal disaster interpretation (RSI + SVI + text)
- Zero-shot damage assessment using foundation models
- Structured JSON outputs for downstream analytics
- Automated disaster situation reports for the “golden 36 hours”
- Evaluation across cross-view, bi-temporal, and multi-hazard datasets
The Agent4Disaster pipeline consists of four core agents.

**Disaster Perception Agent.** Identifies:
- disaster type
- image modality
- structural context

and plans the downstream analysis workflow.

**Image Restoration Agent.** Enhances degraded imagery to improve structural visibility for downstream reasoning. Supported inputs include:
- Street-view imagery (SVI)
- Remote sensing imagery (RSI)

**Damage Recognition Agent.** Performs:
- Object-level damage detection
- Damage severity classification
- Change-aware reasoning (pre- vs. post-disaster)

This step combines vision–language models with structured reasoning to produce interpretable outputs.

**Disaster Reasoning Agent.** Synthesizes all intermediate outputs and produces:
- Structured disaster interpretation
- Causal explanations
- Recovery recommendations
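The four-stage flow described above can be sketched as a simple orchestration loop over agents that share a common context object. All class, function, and field names below are illustrative assumptions, not the released implementation; each agent is stubbed with placeholder logic.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Shared state passed between agents (hypothetical schema)."""
    image: str                      # path or identifier of the input image
    modality: str = ""              # "SVI" or "RSI", set by the perception agent
    disaster_type: str = ""
    detections: list = field(default_factory=list)
    report: dict = field(default_factory=dict)

def perception_agent(ctx: Context) -> Context:
    # Identify disaster type and image modality; hard-coded for illustration.
    ctx.disaster_type = "hurricane"
    ctx.modality = "SVI"
    return ctx

def restoration_agent(ctx: Context) -> Context:
    # Enhance degraded imagery for downstream reasoning (no-op placeholder).
    return ctx

def recognition_agent(ctx: Context) -> Context:
    # Object-level damage detection with severity labels (stubbed).
    ctx.detections = [{"object": "roof", "severity": "major"}]
    return ctx

def reasoning_agent(ctx: Context) -> Context:
    # Synthesize a structured report with an explanation and recommendation.
    ctx.report = {
        "disaster_type": ctx.disaster_type,
        "detections": ctx.detections,
        "explanation": "Roof damage consistent with high wind loads.",
        "recommendation": "Prioritize temporary roof coverings.",
    }
    return ctx

def run_pipeline(image: str) -> dict:
    """Run the four agents in order and return the final report."""
    ctx = Context(image=image)
    for agent in (perception_agent, restoration_agent,
                  recognition_agent, reasoning_agent):
        ctx = agent(ctx)
    return ctx.report

report = run_pipeline("examples/post_event_street_view.jpg")
```

The sequential hand-off shown here is the simplest orchestration pattern; the actual framework may branch or iterate between agents depending on the perception agent's plan.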
Figure 2. Example of object-level damage detection using vision–language models.
Figure 3. Final disaster intelligence output generated by the agent pipeline.
Outputs include:
- Detected objects
- Damage severity
- Structured reasoning
- Recovery suggestions
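A structured JSON record covering these outputs might look like the following sketch; the field names and values are illustrative assumptions, not the schema used in the paper.

```python
import json

# Illustrative structured-output record; field names are assumptions,
# not the schema released with the paper.
record = {
    "detected_objects": [
        {"label": "residential_building", "bbox": [120, 48, 384, 300]},
        {"label": "utility_pole", "bbox": [410, 10, 440, 290]},
    ],
    "damage_severity": "severe",
    "reasoning": "Collapsed facade and downed pole indicate structural failure.",
    "recovery_suggestions": [
        "Cordon off the collapsed structure",
        "Restore power lines along the affected street",
    ],
}

# Serialize for downstream analytics and confirm the record round-trips.
payload = json.dumps(record, indent=2)
parsed = json.loads(payload)
```

Keeping the output machine-readable in this way is what allows the reports to feed downstream analytics without manual post-processing.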
Agent4Disaster supports multiple multimodal disaster datasets, including:
- Cross-view hurricane imagery (paired SVI + RSI)
- Bi-temporal street-view imagery (pre- vs. post-disaster)
- Multi-hazard street-view datasets (wildfire, flooding, earthquake)
Dataset details and preprocessing scripts will be released with the paper.
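One simple way to represent the bi-temporal pairing described above is a flat manifest linking pre- and post-disaster frames per location. The column names, paths, and CSV format below are purely illustrative, not the released preprocessing format.

```python
import csv
import io

# Hypothetical manifest pairing pre- and post-disaster street-view frames;
# columns and paths are illustrative, not the dataset's released layout.
manifest = """location_id,pre_image,post_image,hazard
loc_001,svi/pre/loc_001.jpg,svi/post/loc_001.jpg,hurricane
loc_002,svi/pre/loc_002.jpg,svi/post/loc_002.jpg,flooding
"""

# Parse the manifest into one dict per image pair.
pairs = list(csv.DictReader(io.StringIO(manifest)))
```

Each row then gives a change-aware reasoning unit: the recognition agent can compare `pre_image` against `post_image` for the same `location_id`.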
If you find this work useful, please consider citing our paper (coming soon).
@article{yang2026agent4disaster,
  title={Towards Autonomous Disaster Assessment: A Cross-View Multi-Agent Pipeline for Zero-Shot Damage Diagnosis},
  author={Yang, Yifan and others},
  year={2026}
}
This project is released for academic research purposes.
Code will be released after paper publication.
This research is conducted in the GEAR Lab, Department of Geography, Texas A&M University.






