---
license: mit
tags:
- remote-sensing
- computer-vision
- diffusion-models
- controlnet
- generative-model
- earth-observation
- open-vocabulary
- image-dataset
---
<p align="center">
    <img src="assets/EarthSy.png" alt="Image" width="120">
</p>
<div align="center">
<h1 align="center">EarthSynth: Generating Informative Earth Observation with Diffusion Models</h1>

<h4 align="center"><em>Jiancheng Pan*, &nbsp; Shiye Lei*, &nbsp; Yuqian Fu✉, &nbsp; Jiahao Li, &nbsp; Yanxing Liu</em></h4>

<h4 align="center"><em>Xiao He, &nbsp; Yuze Sun, &nbsp; Long Peng, &nbsp; Xiaomeng Huang✉, &nbsp; Bo Zhao✉</em></h4>
<p align="center">
    <img src="assets/inst.png" alt="Image" width="400">
</p>

\* *Equal Contribution* &nbsp; &nbsp; ✉ *Corresponding Author*

</div>

<p align="center">
    <a href="https://arxiv.org/abs/2505.12108"><img src="https://img.shields.io/badge/Arxiv-2505.12108-b31b1b.svg?logo=arXiv"></a>
    <a href="https://jianchengpan.space/EarthSynth-website/index.html"><img src="https://img.shields.io/badge/EarthSynth-Project_Page-blue"></a>
    <a href="https://huggingface.co/datasets/jaychempan/EarthSynth-180K"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-HuggingFace-yellow?style=flat&logo=hug"></a>
    <a href="https://huggingface.co/jaychempan/EarthSynth"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-HuggingFace-yellow?style=flat&logo=hug"></a>
    <a href="https://github.com/jaychempan/EarthSynth/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-orange"></a>
</p>

<p align="center">
    <a href="#news">News</a> |
    <a href="#abstract">Abstract</a> |
    <a href="#dataset">Dataset</a> |
    <a href="#model">Model</a> |
    <a href="#statement">Statement</a>
</p>

## TODO

- [ ] Release EarthSynth Models to 🤗 HuggingFace
- [x] Release EarthSynth-180K Dataset to 🤗 HuggingFace

## News
- [2025/8/7] The EarthSynth-180K dataset is now available on 🤗 [HuggingFace](https://huggingface.co/datasets/jaychempan/EarthSynth-180K).
- [2025/5/20] Our paper "EarthSynth: Generating Informative Earth Observation with Diffusion Models" is up on [arXiv](https://arxiv.org/abs/2505.12108).

## Abstract

Remote sensing image (RSI) interpretation typically suffers from the scarcity of labeled data, which limits the performance of RSI interpretation tasks. To tackle this challenge, we propose **EarthSynth**, a diffusion-based generative foundation model that synthesizes multi-category, cross-satellite labeled Earth observation data for downstream RSI interpretation tasks. To the best of our knowledge, EarthSynth is the first to explore multi-task generation for remote sensing, tackling the challenge of limited generalization in task-oriented synthesis for RSI interpretation. Trained on the EarthSynth-180K dataset, EarthSynth employs the Counterfactual Composition (CF-Comp) training strategy with a three-dimensional batch-sample selection mechanism to improve training data diversity and enhance category control. Furthermore, we propose R-Filter, a rule-based method that filters the more informative synthetic data for downstream tasks. We evaluate EarthSynth on scene classification, object detection, and semantic segmentation in open-world scenarios, where it yields significant improvements on open-vocabulary understanding tasks, offering a practical solution for advancing RSI interpretation.

<p align="center">
    <img src="assets/EarthSynth-FM.png" alt="Image" width="500">
</p>

## Dataset
EarthSynth-180K is derived from the OEM, LoveDA, DeepGlobe, SAMRS, and LAE-1M datasets and is further enhanced with mask and text-prompt conditions, making it suitable for training a foundation diffusion-based generative model. The dataset is constructed using the Random Cropping and Category Augmentation strategies.

<p align="center">
    <img src="assets/EarthSynth-180K-Map.png" alt="Image" width="400">
</p>

<p align="center">
    <img src="assets/EarthSynth-180K.png" alt="Image" width="400">
</p>

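To obtain the data, you can download the archives from the Hub. A minimal sketch using `huggingface_hub` (the `local_dir` value is our assumption, chosen to match the layout used in Data Preparation below):

```python
from huggingface_hub import snapshot_download

# Download all dataset files (including the split zip parts) from the Hub.
snapshot_download(
    repo_id="jaychempan/EarthSynth-180K",
    repo_type="dataset",
    local_dir="./data/EarthSynth-180K",
)
```
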
### Data Preparation
We apply category augmentation to each image so that the model better understands each category and allows finer control over specific categories during generation. This also improves how samples are combined in the batch-based CF-Comp strategy. If you want to train your own remote sensing foundation generative model, this step is not necessary. The steps below show how to use the category-augmentation method.

- Merge the split zip files and extract them:
```bash
cat train.zip_part_* > train.zip
unzip train.zip
```
- Store the dataset in the following directory structure: `./data/EarthSynth-180K`
```
.(./data/EarthSynth-180K)
└── train
    ├── images
    └── masks
```
- Run the category augmentation script:
```bash
python category-augmentation.py
```
After running, the directory will look like this:
```
.(./data/EarthSynth-180K)
└── train
    ├── category_images    # Augmented single-category images
    ├── category_masks     # Augmented single-category masks
    ├── images
    ├── masks
    └── train.jsonl        # JSONL file for training
```
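As a quick sanity check that the augmentation produced a consistent dataset, here is a small sketch (it assumes the directory layout above; since the `train.jsonl` field names are not documented here, it only counts records):

```python
import json
from pathlib import Path

root = Path("./data/EarthSynth-180K/train")

# Every mask should pair with an image, and train.jsonl should parse as JSONL.
print("images:", len(list((root / "images").iterdir())))
print("masks:", len(list((root / "masks").iterdir())))
with open(root / "train.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]
print("training records:", len(records))
```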

## Model
### Environment Setup
The experimental environment is based on [`diffusers==0.30.3`](https://huggingface.co/docs/diffusers/v0.30.3/en/installation), and the setup follows mmdetection's installation guide. If you run into problems, refer to our `requirements.txt`.
```bash
conda create -n earthsy python=3.8 -y
conda activate earthsy
git clone https://github.com/jaychempan/EarthSynth.git
cd EarthSynth
pip install -r requirements.txt
cd diffusers
pip install -e ".[torch]"
```
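Before training, it may help to confirm the pinned `diffusers` version is active and a GPU is visible (a trivial check we add here, not part of the original repo):

```python
import diffusers
import torch

# The setup above pins diffusers 0.30.3; training assumes a CUDA GPU.
print("diffusers:", diffusers.__version__)      # expect 0.30.3
print("cuda available:", torch.cuda.is_available())
```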
### EarthSynth with CF-Comp
EarthSynth is trained with the CF-Comp training strategy on a mixed distribution of real and counterfactual (logically unrealistic) data; it learns pixel-level remote sensing properties along multiple dimensions and provides a unified process for conditional diffusion training and synthesis.

<p align="center">
    <img src="assets/EarthSynth-Framwork.png" alt="Image" width="700">
</p>

### Train EarthSynth
This project builds on the ControlNet structure from diffusers, making it easy for the community to use and extend. Modify the config in `train.sh` under `./diffusers/train/`, then run:

```bash
cd diffusers/
bash train/train.sh
```

### Inference
Example inference using the 🤗 HuggingFace pipeline:
```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Mask condition image for the ControlNet.
img = Image.open("./demo/control/mask.png")

controlnet = ControlNetModel.from_pretrained("jaychempan/EarthSynth")

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet
)
pipe = pipe.to("cuda:0")

# Generate an image conditioned on the mask and the text prompt.
generator = torch.manual_seed(10345340)
image = pipe(
    "A satellite image of a storage tank",
    generator=generator,
    image=img,
).images[0]

image.save("generated_storage_tank.png")
```
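If GPU memory is tight, the same pipeline can be loaded in half precision. This is standard `diffusers` usage rather than anything EarthSynth-specific:

```python
# Optional: load in float16 to roughly halve GPU memory (CUDA only).
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=ControlNetModel.from_pretrained(
        "jaychempan/EarthSynth", torch_dtype=torch.float16
    ),
    torch_dtype=torch.float16,
).to("cuda:0")
```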
Or you can run inference locally:
```bash
python test.py --base_model path/to/stable-diffusion/ --controlnet_path path/to/earthsynth [--control_image_dir] [--output_dir] [--category_txt_path] [--num_images]
```
### Training Data Generation

[TODO]

<p align="center">
    <img src="assets/Vis.png" alt="Image" width="500">
</p>

### Acknowledgement

This project references and uses the following open-source models and datasets.

#### Related Open Source Models

- [Diffusers](https://github.com/huggingface/diffusers)
- [ControlNet](https://github.com/lllyasviel/ControlNet)
- [MM-Grounding-DINO](https://github.com/open-mmlab/mmdetection/blob/main/configs/mm_grounding_dino/README.md)
- [CLIP](https://github.com/openai/CLIP)
- [GSNet](https://github.com/yecy749/GSNet)

#### Related Open Source Datasets

- [OpenEarthMap](https://open-earth-map.org/overview_oem.html)
- [LoveDA](https://github.com/Junjue-Wang/LoveDA?tab=readme-ov-file)
- [DeepGlobe](http://deepglobe.org/)
- [SAMRS](https://github.com/ViTAE-Transformer/SAMRS)
- [LAE-1M](https://github.com/jaychempan/LAE-DINO)

### Citation

If you are interested in this work or want to use our dataset, please cite the following paper.

```bibtex
@misc{pan2025earthsynthgeneratinginformativeearth,
      title={EarthSynth: Generating Informative Earth Observation with Diffusion Models},
      author={Jiancheng Pan and Shiye Lei and Yuqian Fu and Jiahao Li and Yanxing Liu and Yuze Sun and Xiao He and Long Peng and Xiaomeng Huang and Bo Zhao},
      year={2025},
      eprint={2505.12108},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.12108},
}
```