BEARD
BEnchmarking the Adversarial Robustness for Dataset Distillation
Zheng Zhou1  Wenquan Feng1  Shuchang Lyu1  Guangliang Cheng2  Xiaowei Huang2  Qi Zhao1
1 Beihang University   2 University of Liverpool

The goal of BEARD is to systematically explore adversarial robustness in Dataset Distillation (DD). A critical gap exists in the field: the adversarial robustness of models trained on distilled datasets remains underexplored. Despite advancements in DD, deep neural networks continue to be vulnerable to adversarial attacks, posing significant security risks across various applications. Current research shows that while DD can partially improve adversarial robustness, it falls short of fully addressing these vulnerabilities. Additionally, the lack of a unified, open-source benchmark for evaluating adversarial robustness in DD has slowed progress in the field.

In response, we introduce BEARD, an open benchmark designed to systematically evaluate the adversarial robustness of DD methods. BEARD provides a comprehensive framework for testing DD methods across various datasets using different adversarial attacks, introduces new evaluation metrics, and offers open-source tools to support ongoing research and development.

To-Do List:

  • Supplement More DD Methods: Our benchmark evaluates the following distillation methods: DC, DSA, DM, MTT, IDM, BACON, TESLA, and RDED. We will periodically update this list as new methods are developed and evaluated.

    ✓ Completed methods
    ✗ Not yet completed methods


  • Improve the Code: We welcome contributions and participation from the community to help enhance our benchmark. If you have suggestions or want to contribute, please feel free to get involved and make a difference!

Pipeline overview: a training phase covering 6+ distillation methods and 3+ datasets, followed by an evaluation phase applying 5+ adversarial attacks to assess model robustness.

Training Configuration


Check out the available models and datasets. Below is the JSON file used for the training stage:
                  {
                      "dataset": "CIFAR10",
                      "model": "ConvNet",
                      "method": "XX",
                      "ipc": "50",
                      "dsa_strategy": "color_crop_cutout_flip_scale_rotate",
                      "syn_ce": true,
                      "ce_weight": 1,
                      "aug": false,
                      "data_file": "./data_pool/XX/XXX.pt",
                      "save_path": "./model_pool/XX/",
                      "train_attack": "PGD",
                      "target_attack": false,
                      "test_attack": "None",
                      "src_dataset": false
                  }
              
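As a minimal sketch, the training configuration above can be parsed with Python's standard `json` module. The embedded string mirrors a subset of the fields shown (in practice the configuration would live in a file passed to the training script); note that JSON booleans map to Python `bool`, while `"None"` here is a plain string, not `null`.

```python
import json

# A subset of the training configuration above, embedded as a JSON string.
config_json = """
{
    "dataset": "CIFAR10",
    "model": "ConvNet",
    "ipc": "50",
    "train_attack": "PGD",
    "target_attack": false,
    "test_attack": "None"
}
"""
config = json.loads(config_json)

# "target_attack" becomes a real Python bool; "test_attack" stays a string.
assert config["target_attack"] is False
print(config["dataset"], config["model"], "IPC =", config["ipc"])
```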

Evaluating Configuration


Check out the evaluation results. Below is the JSON file used for the evaluation stage:
                {
                    "dataset": "CIFAR10",
                    "model": "ConvNet",
                    "method": "XX",
                    "ipc": "50",
                    "dsa_strategy": "color_crop_cutout_flip_scale_rotate",
                    "syn_ce": true,
                    "ce_weight": 1,
                    "aug": false,
                    "load_file": "./model_pool/XX/XXX.pth",
                    "save_path": "./model_pool/XX/",
                    "train_attack": "None",
                    "test_attack": ["Clean", "FGSM", "PGD", "XXX"],
                    "target_attack": [true, false],
                    "src_dataset": true,
                    "pgd_eva": false
                }
            
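One way to read the evaluation configuration is that each entry of `test_attack` is crossed with each targeting mode in `target_attack`. The sketch below is an assumption about how such runs might be enumerated, not BEARD's actual code; the special-casing of "Clean" (which needs no targeting mode) is likewise an illustrative choice.

```python
# Evaluation settings mirroring the JSON above (attack list abbreviated).
eval_config = {
    "test_attack": ["Clean", "FGSM", "PGD"],
    "target_attack": [True, False],
}

# Enumerate one evaluation run per (attack, targeted?) pair;
# the clean evaluation has no notion of a targeted vs. untargeted attack.
runs = []
for attack in eval_config["test_attack"]:
    if attack == "Clean":
        runs.append((attack, None))
        continue
    for targeted in eval_config["target_attack"]:
        runs.append((attack, targeted))

print(runs)  # 1 clean run + 2 attacks x 2 targeting modes = 5 runs
```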
Available Leaderboards
CIFAR-10 (Unified) CIFAR-100 (Unified) TinyImageNet (Unified) CIFAR-10 (IPC-1) CIFAR-10 (IPC-10) CIFAR-10 (IPC-50) CIFAR-100 (IPC-1) CIFAR-100 (IPC-10) CIFAR-100 (IPC-50) TinyImageNet (IPC-1) TinyImageNet (IPC-10) TinyImageNet (IPC-50)

Leaderboard: CIFAR-10 (Unified), untargeted attack

Rank | Method | Paper | CREI | RRM | AEM | Code | Distilled Data | Author | Venue | Update Date
1 | IDM | Improved Distribution Matching for Dataset Condensation | 28.46% | 33.03% | 23.89% | ✓ | ✗ | Ganlong Zhao | CVPR 2023 | 2024/08/14
2 | DM | Dataset Condensation with Distribution Matching | 28.32% | 34.50% | 22.13% | ✓ | ✓ | Bo Zhao | WACV 2023 | 2024/08/14
3 | DSA | Dataset Condensation with Differentiable Siamese Augmentation | 27.75% | 36.53% | 18.97% | ✓ | ✓ | Bo Zhao | ICML 2021 | 2024/08/14
4 | BACON | BACON: Bayesian Optimal Condensation Framework for Dataset Distillation | 27.20% | 32.87% | 21.53% | ✓ | ✓ | Zheng Zhou | arXiv 2024 | 2024/08/14
5 | DC | Dataset Condensation with Gradient Matching | 26.70% | 31.87% | 21.53% | ✓ | ✓ | Bo Zhao | ICLR 2021 | 2024/08/14
6 | MTT | Dataset Distillation by Matching Training Trajectories | 26.26% | 33.30% | 19.21% | ✓ | ✓ | George Cazenavette | CVPR 2022 | 2024/08/14
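Across the leaderboard entries, each CREI value matches the mean of the corresponding RRM and AEM (e.g. IDM: (33.03% + 23.89%) / 2 = 28.46%). The sketch below assumes that relation; it is an inference from the tables, not a definition taken from the paper.

```python
def crei(rrm: float, aem: float) -> float:
    """Comprehensive robustness score, assumed to be the mean of
    RRM (robustness ratio) and AEM (attack efficiency), in percent."""
    return (rrm + aem) / 2

# IDM on CIFAR-10 (Unified): RRM 33.03%, AEM 23.89%
print(round(crei(33.03, 23.89), 2))  # 28.46, matching the leaderboard
```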

Leaderboard: CIFAR-100 (Unified), untargeted attack

Rank | Method | Paper | CREI | RRM | AEM | Code | Distilled Data | Author | Venue | Update Date
1 | DC | Dataset Condensation with Gradient Matching | 22.40% | 28.74% | 16.06% | ✓ | ✓ | Bo Zhao | ICLR 2021 | 2024/08/14
2 | DSA | Dataset Condensation with Differentiable Siamese Augmentation | 20.40% | 28.53% | 12.26% | ✓ | ✓ | Bo Zhao | ICML 2021 | 2024/08/14
3 | IDM | Improved Distribution Matching for Dataset Condensation | 20.36% | 26.28% | 14.44% | ✓ | ✗ | Ganlong Zhao | CVPR 2023 | 2024/08/14
4 | DM | Dataset Condensation with Distribution Matching | 19.78% | 26.72% | 12.83% | ✓ | ✓ | Bo Zhao | WACV 2023 | 2024/08/14
5 | MTT | Dataset Distillation by Matching Training Trajectories | 19.65% | 26.07% | 13.23% | ✓ | ✓ | George Cazenavette | CVPR 2022 | 2024/08/14
6 | BACON | BACON: Bayesian Optimal Condensation Framework for Dataset Distillation | 19.30% | 25.26% | 13.34% | ✓ | ✓ | Zheng Zhou | arXiv 2024 | 2024/08/14

Leaderboard: TinyImageNet (Unified), untargeted attack

Leaderboard: CIFAR-10 (IPC-1), untargeted attack

Leaderboard: CIFAR-10 (IPC-10), untargeted attack

Leaderboard: CIFAR-10 (IPC-50), untargeted attack

Leaderboard: CIFAR-100 (IPC-1), untargeted attack

Leaderboard: CIFAR-100 (IPC-10), untargeted attack

Leaderboard: CIFAR-100 (IPC-50), untargeted attack

Rank | Method | Paper | CREI | RRM | AEM | Code | Distilled Data | Author | Venue | Update Date
1 | IDM | Improved Distribution Matching for Dataset Condensation | 21.34% | 24.68% | 18.00% | ✓ | ✗ | Ganlong Zhao | CVPR 2023 | 2024/08/15
2 | DSA | Dataset Condensation with Differentiable Siamese Augmentation | 20.67% | 24.01% | 17.34% | ✓ | ✓ | Bo Zhao | ICML 2021 | 2024/08/15
3 | MTT | Dataset Distillation by Matching Training Trajectories | 20.54% | 23.76% | 17.31% | ✓ | ✓ | George Cazenavette | CVPR 2022 | 2024/08/15
4 | BACON | BACON: Bayesian Optimal Condensation Framework for Dataset Distillation | 20.11% | 22.82% | 17.40% | ✓ | ✓ | Zheng Zhou | arXiv 2024 | 2024/08/15
5 | DM | Dataset Condensation with Distribution Matching | 20.10% | 22.44% | 17.76% | ✓ | ✓ | Bo Zhao | WACV 2023 | 2024/08/15
6 | DC | Dataset Condensation with Gradient Matching | 19.46% | 20.92% | 18.00% | ✓ | ✓ | Bo Zhao | ICLR 2021 | 2024/08/15

Leaderboard: TinyImageNet (IPC-1), untargeted attack

Leaderboard: TinyImageNet (IPC-10), untargeted attack

Leaderboard: TinyImageNet (IPC-50), untargeted attack

FAQ

➤ How does the BEARD leaderboard differ from the DC-Bench leaderboard? 🤔
The DC-Bench leaderboard evaluates the distillation performance of DD methods, whereas BEARD evaluates their adversarial robustness.

➤ How does the BEARD leaderboard differ from RobustBench? 🤔
RobustBench focuses on adversarial robustness evaluations of general training methods, whereas BEARD provides adversarial robustness evaluations specifically for DD methods.

Citation

Please consider citing our paper if you reference our leaderboard or use models and datasets from the model and dataset pools:
@inproceedings{zhou2025beard,
    title={BEARD: Benchmarking the Adversarial Robustness for Dataset Distillation},
    author={Zhou, Zheng and Feng, Wenquan and Lyu, Shuchang and Cheng, Guangliang and Huang, Xiaowei and Zhao, Qi},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2025}
}

Contribute to BEARD!


We welcome contributions of both new robust models and new evaluations to BEARD. Feel free to contact us at zhengzhou@buaa.edu.cn.

Maintainers