- Environment Setup
- Performance Results
- Pretrained Models
- Dataset Preparation
- Training
- Inference
- Citation
## Environment Setup

| Component | Version |
|---|---|
| OS | Ubuntu 22.04 LTS |
| GPU | RTX 4090 (24GB) |
| GPU Driver | 550.54.14 |
| CUDA | 12.4 |
| PyTorch | 2.4.1 |
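Before training, it can help to confirm the local environment roughly matches the table above. A minimal sketch using plain version-string comparison (the `parse_version` and `meets_requirement` helpers below are illustrative, not part of this repository, and the table values are treated as assumed lower bounds):

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.4.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

# Minimum versions taken from the environment table (assumed lower bounds).
REQUIREMENTS = {"CUDA": "12.4", "PyTorch": "2.4.1"}

def meets_requirement(installed: str, required: str) -> bool:
    """True if the installed version is at least the required one."""
    return parse_version(installed) >= parse_version(required)

print(meets_requirement("2.4.1", REQUIREMENTS["PyTorch"]))  # True
print(meets_requirement("12.1", REQUIREMENTS["CUDA"]))      # False
```

In practice you would feed in `torch.__version__` and `torch.version.cuda`; minor version drift may still work, but has not been verified here.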
## Performance Results

Comprehensive evaluation across multiple retinal vessel segmentation datasets (all metrics in %):
| Dataset | mIoU | F1 Score | Accuracy | AUC | Sensitivity | MCC |
|---|---|---|---|---|---|---|
| DRIVE | 84.068 | 83.229 | 97.042 | 98.235 | 84.207 | 81.731 |
| STARE | 86.118 | 85.100 | 97.746 | 98.967 | 86.608 | 83.958 |
| CHASE_DB1 | 82.680 | 81.019 | 97.515 | 99.378 | 85.995 | 79.889 |
| HRF | 83.088 | 81.567 | 97.106 | 98.744 | 83.616 | 80.121 |
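For reference, most of the reported metrics can be derived from per-pixel confusion-matrix counts (TP, FP, TN, FN); AUC additionally needs probability scores rather than hard predictions, so it is omitted here. A minimal sketch of the standard definitions (not the repository's own evaluation code; the table's mIoU averages IoU over the vessel and background classes, while this sketch returns the foreground IoU only):

```python
import math

def vessel_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary segmentation metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)                      # a.k.a. recall
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    iou = tp / (tp + fp + fn)                         # foreground Jaccard index
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"Accuracy": accuracy, "Sensitivity": sensitivity,
            "F1": f1, "IoU": iou, "MCC": mcc}

m = vessel_metrics(tp=80, fp=10, tn=900, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```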
## Pretrained Models

Pre-trained models for each dataset are available on the releases page.
Download and extract the model corresponding to your target dataset before inference.
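The download-and-extract step can be scripted with the standard library; a minimal sketch (the archive filename in the usage comment is hypothetical, so substitute whatever the release actually ships):

```python
import zipfile
from pathlib import Path

def extract_pretrained(archive: str, dest: str = "pretrained") -> list:
    """Extract a release archive into `dest` and return the extracted paths."""
    out = Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out)
        return [out / name for name in zf.namelist()]

# e.g. extract_pretrained("DRIVE_pretrained.zip")  # filename is an assumption
```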
## Dataset Preparation

Edit the dataset paths in `configs/train.yml`:
- Update `train_x_path` with your input data directory
- Update the corresponding label paths
- ✋ Input and label files must be sorted by name
- ✋ Ensure one-to-one correspondence between inputs and labels
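The two cautions above can be checked programmatically before training; a minimal sketch, assuming inputs and labels share matching basenames (that stem-matching rule is an assumption, so adapt it to your filenames):

```python
from pathlib import Path

def check_pairing(x_dir: str, y_dir: str) -> list:
    """Return name-sorted (input, label) pairs; raise if they don't line up."""
    xs = sorted(Path(x_dir).iterdir(), key=lambda p: p.name)
    ys = sorted(Path(y_dir).iterdir(), key=lambda p: p.name)
    if len(xs) != len(ys):
        raise ValueError(f"{len(xs)} inputs vs {len(ys)} labels")
    for x, y in zip(xs, ys):
        if x.stem != y.stem:  # assumes matching basenames
            raise ValueError(f"mismatched pair: {x.name} / {y.name}")
    return list(zip(xs, ys))
```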
Datasets can be obtained from:
- Public repositories
- GitHub Releases
## Training

Edit `configs/train.yml` with your:
- Dataset paths
- Hyperparameters
- Training settings
With WandB:

```shell
wandb login  # Login with your credentials
```

Without WandB: set `wandb: false` in `configs/train.yml`.

Run training:

```shell
bash bash_train.sh
```

## Inference

Edit `configs/inference.yml`:
- Set `model_path` to your pretrained model location
- Specify the input image directory
- Configure output paths
Run inference:

```shell
bash bash_inference.sh
```

When using the official pretrained models, results should closely match the performance metrics above.
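"Closely match" can be made concrete with a per-metric tolerance check; a minimal sketch (the 0.5-point tolerance is an assumption for illustration, not a guarantee from the authors):

```python
def matches_reported(reported: dict, reproduced: dict, tol: float = 0.5) -> bool:
    """True if every reproduced metric is within `tol` points of the report."""
    return all(abs(reported[k] - reproduced[k]) <= tol for k in reported)

# DRIVE values from the results table above, compared against a hypothetical run.
drive_reported = {"mIoU": 84.068, "F1": 83.229, "Accuracy": 97.042}
print(matches_reported(drive_reported,
                       {"mIoU": 84.1, "F1": 83.2, "Accuracy": 97.0}))  # True
```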
## Citation

If you find this work helpful, please cite:

```bibtex
@article{seo2025fullscalerepresentationguidednetwork,
  title   = {Full-scale Representation Guided Network for Retinal Vessel Segmentation},
  author  = {Sunyong Seo and Sangwook Yoo and Huisu Yoon},
  journal = {arXiv preprint arXiv:2501.18921},
  year    = {2025}
}
```

⭐ If you find this repository useful, please consider giving it a star!
