
Agent4Disaster

Towards Autonomous Disaster Assessment: A Cross-View Multi-Agent Pipeline for Zero-Shot Damage Diagnosis


Overview

Agent4Disaster is an autonomous GeoAI multi-agent framework designed for
hyperlocal, interpretable, and near–real-time disaster assessment.

The framework integrates multimodal geospatial observations, including:

  • Satellite imagery (RSI)
  • Street-view imagery (SVI)
  • Textual and contextual cues
  • Temporal change information
  • Disaster mapping

It performs an end-to-end disaster intelligence pipeline including:

  • Disaster perception
  • Image restoration
  • Damage recognition
  • Disaster reasoning and recovery recommendation

By leveraging vision–language foundation models and agent-based orchestration,
Agent4Disaster enables cross-view disaster understanding and zero/few-shot damage diagnosis without task-specific retraining.

This repository hosts project materials associated with our research paper.

Note
The full source code is not yet publicly available, as the paper has not yet been publicly released.


Key Features

  • Autonomous multi-agent disaster intelligence pipeline
  • Cross-view disaster understanding (Satellite ↔ Street-view)
  • Multimodal disaster interpretation (RSI + SVI + text)
  • Zero-shot damage assessment using foundation models
  • Structured JSON outputs for downstream analytics
  • Automated disaster situation reports for the “golden 36 hours”
  • Evaluation across cross-view, bi-temporal, and multi-hazard datasets

Project Architecture

The Agent4Disaster pipeline consists of four core agents.

1. Disaster Perception Agent

Identifies:

  • disaster type
  • image modality
  • structural context

and plans the downstream analysis workflow.


2. Image Restoration Agent

Enhances degraded imagery to improve structural visibility for downstream reasoning.

Supported inputs include:

  • Street-view imagery (SVI)
  • Remote sensing imagery (RSI)


3. Damage Recognition Agent

Performs:

  • Object-level damage detection
  • Damage severity classification
  • Change-aware reasoning (pre vs. post disaster)

This step combines vision-language models and structured reasoning to produce interpretable outputs.
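The change-aware, zero-shot step can be pictured as prompting a vision–language model with paired pre/post images and a structured-output instruction. The sketch below only assembles such a prompt; the message format mirrors common VLM chat APIs, and the wording, roles, and field names are illustrative assumptions, not the paper's actual prompts.

```python
def build_damage_prompt(disaster_type: str) -> list:
    """Assemble a chat-style prompt for zero-shot, change-aware damage
    recognition. Illustrative only: the format follows common VLM chat
    APIs and is not taken from the Agent4Disaster implementation."""
    instruction = (
        f"You are assessing {disaster_type} damage. Compare the PRE and "
        "POST images, list damaged objects with a severity in "
        "{minor, moderate, major, destroyed}, and answer in JSON with "
        "keys 'objects' and 'reasoning'."
    )
    return [
        {"role": "system",
         "content": "You are a disaster-assessment assistant."},
        {"role": "user", "content": [
            {"type": "text", "text": instruction},
            {"type": "image", "label": "PRE"},   # pre-disaster frame
            {"type": "image", "label": "POST"},  # post-disaster frame
        ]},
    ]

messages = build_damage_prompt("hurricane")
```

Asking for JSON with fixed keys is what makes the model's answer machine-checkable downstream, which is why structured-output instructions pair naturally with the pipeline's structured reasoning.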


4. Disaster Reasoning Agent

Synthesizes all intermediate outputs and produces:

  • Structured disaster interpretation
  • Causal explanations
  • Recovery recommendations
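The four-agent flow described above can be sketched as a simple sequential orchestration over a shared context. Everything in this sketch is an assumption made for illustration: the class names, the `run` interface, and the placeholder findings are not the released Agent4Disaster implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the real Agent4Disaster code is not yet
# released, so every class, method, and field below is an assumption.

@dataclass
class Context:
    """Shared state handed from one agent to the next."""
    imagery: dict                           # e.g. {"svi": ..., "rsi": ...}
    findings: dict = field(default_factory=dict)

class PerceptionAgent:
    def run(self, ctx: Context) -> Context:
        # Identify disaster type, image modality, structural context,
        # and plan the downstream workflow.
        ctx.findings["perception"] = {
            "disaster_type": "hurricane",
            "modalities": sorted(ctx.imagery),
            "plan": ["restore", "recognize", "reason"],
        }
        return ctx

class RestorationAgent:
    def run(self, ctx: Context) -> Context:
        # Enhance degraded SVI/RSI to improve structural visibility.
        ctx.findings["restoration"] = {"enhanced": sorted(ctx.imagery)}
        return ctx

class RecognitionAgent:
    def run(self, ctx: Context) -> Context:
        # Object-level detection, severity classification,
        # and change-aware (pre vs. post) reasoning.
        ctx.findings["recognition"] = {
            "objects": [{"label": "roof", "severity": "major"}],
        }
        return ctx

class ReasoningAgent:
    def run(self, ctx: Context) -> Context:
        # Synthesize intermediate outputs into a final report.
        ctx.findings["report"] = {
            "summary": "major roof damage detected",
            "recommendations": ["prioritize structural inspection"],
        }
        return ctx

def run_pipeline(imagery: dict) -> dict:
    """Run the four agents in sequence and return accumulated findings."""
    ctx = Context(imagery=imagery)
    for agent in (PerceptionAgent(), RestorationAgent(),
                  RecognitionAgent(), ReasoningAgent()):
        ctx = agent.run(ctx)
    return ctx.findings
```

The sequential hand-off mirrors the paper's description: perception plans the workflow, and each later agent builds on the findings accumulated so far.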


Example Outputs

LLM-Based Object Detection

Figure 2. Example of object-level damage detection using vision–language models.


Final Output (Structured JSON + Explanation)

Figure 3. Final disaster intelligence output generated by the agent pipeline.

Outputs include:

  • detected objects
  • damage severity
  • structured reasoning
  • recovery suggestions
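A structured JSON report like the one listed above might be parsed as follows. The schema here is hypothetical: the actual field names have not been published, so `detected_objects`, `overall_severity`, and the rest are illustrative placeholders.

```python
import json

# Hypothetical report; the real Agent4Disaster output schema is not yet
# published, so every field name below is an illustrative assumption.
raw = """
{
  "detected_objects": [
    {"label": "residential_roof", "damage_severity": "major",
     "evidence": "missing shingles visible in post-event SVI"}
  ],
  "overall_severity": "major",
  "reasoning": "Cross-view comparison shows roof loss absent pre-event.",
  "recovery_suggestions": ["tarp installation", "structural inspection"]
}
"""

report = json.loads(raw)
labels = [obj["label"] for obj in report["detected_objects"]]
print(labels)                       # → ['residential_roof']
print(report["overall_severity"])   # → major
```

Because the output is plain JSON, downstream analytics (dashboards, GIS layers, situation reports) can consume it without touching the vision–language models themselves.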

Datasets

Agent4Disaster supports multiple multimodal disaster datasets, including:

  • Cross-view hurricane imagery (paired SVI + RSI)
  • Bi-temporal street-view imagery (pre vs. post disaster)
  • Multi-hazard street-view datasets
    • wildfire
    • flooding
    • earthquake

Dataset details and preprocessing scripts will be released with the paper.


Citation

If you find this work useful, please consider citing our paper (coming soon).

@article{yang2026agent4disaster,
  title={Towards Autonomous Disaster Assessment: A Cross-View Multi-Agent Pipeline for Zero-Shot Damage Diagnosis},
  author={Yang, Yifan and others},
  year={2026}
}

License

This project is released for academic research purposes.
Code will be released after paper publication.


Acknowledgements

This research is conducted in the GEAR Lab, Department of Geography, Texas A&M University.
