---
title: ModelAudit - Static Security Scanner for ML Models
description: Scan AI/ML models for security vulnerabilities, malicious code, and backdoors. Supports PyTorch, TensorFlow, ONNX, Keras, and 15+ model formats.
keywords:
  [
    model security,
    AI security,
    ML security,
    scanning,
    static analysis,
    malicious model detection,
    pytorch security,
    tensorflow security,
    model vulnerability scanner,
  ]
sidebar_label: Overview
sidebar_position: 1
---
ModelAudit is a lightweight static security scanner for machine learning models integrated into Promptfoo. It allows you to quickly scan your AI/ML models for potential security risks before deploying them in production environments.
By invoking `promptfoo scan-model`, you can use ModelAudit's static security scanning capabilities.
Promptfoo also includes a UI that allows you to set up a scan and displays the results.
AI/ML models can introduce security risks such as embedded malicious code, suspicious deserialization behavior, and backdoors. ModelAudit helps identify these risks before models are deployed to production environments, ensuring a more secure AI pipeline.
The easiest way to use ModelAudit is through Promptfoo:
```bash
# Install Promptfoo globally
npm install -g promptfoo

# Install modelaudit dependency
pip install modelaudit
```
You can also install ModelAudit directly:
```bash
# Basic installation
pip install modelaudit

# With optional dependencies for specific model formats
pip install modelaudit[tensorflow,h5,pytorch]

# For all dependencies
pip install modelaudit[all]

# Or install specific components:
pip install modelaudit[tensorflow,h5,pytorch]  # Core ML frameworks
pip install modelaudit[cloud,mlflow]           # Remote model access
pip install modelaudit[numpy1]                 # NumPy 1.x compatibility
```
```bash
# Pull from GitHub Container Registry
docker pull ghcr.io/promptfoo/modelaudit:latest

# Use specific variants
docker pull ghcr.io/promptfoo/modelaudit:latest-full        # All ML frameworks
docker pull ghcr.io/promptfoo/modelaudit:latest-tensorflow  # TensorFlow only

# Run with Docker
docker run --rm -v $(pwd):/data ghcr.io/promptfoo/modelaudit:latest scan /data/model.pkl
```
```bash
promptfoo scan-model [OPTIONS] PATH...
```
```bash
# Scan a single model file
promptfoo scan-model model.pkl

# Scan a model directly from HuggingFace without downloading
promptfoo scan-model https://huggingface.co/bert-base-uncased
promptfoo scan-model hf://microsoft/resnet-50

# Scan from cloud storage
promptfoo scan-model s3://my-bucket/model.pt
promptfoo scan-model gs://my-bucket/model.h5

# Scan from MLflow registry
promptfoo scan-model models:/MyModel/1

# Scan multiple models and directories
promptfoo scan-model model.pkl model2.h5 models_directory

# Export results to JSON
promptfoo scan-model model.pkl --format json --output results.json

# Add custom blacklist patterns
promptfoo scan-model model.pkl --blacklist "unsafe_model" --blacklist "malicious_net"

# Enable verbose output
promptfoo scan-model model.pkl --verbose

# Set file size limits
promptfoo scan-model models/ --max-file-size 1073741824 --max-total-size 5368709120

# Generate Software Bill of Materials
promptfoo scan-model model.pkl --sbom sbom.json
```
See the Advanced Usage guide for detailed authentication setup for cloud storage, JFrog, and other remote sources.
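When you export results with `--format json`, you can post-process the report in a script. Below is a minimal sketch that tallies issues by severity; it assumes a report shape with an `issues` list whose entries carry a `severity` field (the actual JSON schema may differ, so adjust the keys to match real scanner output):

```python
import json

def count_by_severity(report: dict) -> dict:
    """Tally issues in a scan report by severity.

    Assumes the report contains an "issues" list whose entries have a
    "severity" key -- an assumption about the output schema, not a
    documented contract.
    """
    counts = {}
    for issue in report.get("issues", []):
        sev = issue.get("severity", "unknown").lower()
        counts[sev] = counts.get(sev, 0) + 1
    return counts

# Example with a hand-written report (not real scanner output):
sample = {
    "issues": [
        {"severity": "CRITICAL", "message": "Suspicious module reference: posix.system"},
        {"severity": "WARNING", "message": "Large opaque binary blob"},
    ]
}
print(count_by_severity(sample))
```

A tally like this can feed a custom policy, for example failing a pipeline only when critical findings are present.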
:::info Alternative Installation and Usage

You can also `pip install modelaudit` and run `modelaudit scan`, which behaves the same as `promptfoo scan-model`. Alternatively, run `promptfoo view` and navigate to `/model-audit` for visual scanning and configuration.

:::

| Option                    | Description                                                         |
| ------------------------- | ------------------------------------------------------------------- |
| `--blacklist`, `-b`       | Additional blacklist patterns to check against model names          |
| `--format`, `-f`          | Output format (`text` or `json`) [default: `text`]                  |
| `--output`, `-o`          | Output file path (prints to stdout if not specified)                |
| `--timeout`, `-t`         | Scan timeout in seconds [default: 3600]                             |
| `--verbose`, `-v`         | Enable verbose output                                               |
| `--max-file-size`         | Maximum file size to scan in bytes [default: unlimited]             |
| `--max-total-size`        | Maximum total bytes to scan before stopping [default: unlimited]    |
| `--sbom`                  | Generate CycloneDX Software Bill of Materials with license info     |
| `--registry-uri`          | MLflow registry URI (only used for MLflow model URIs)               |
| `--jfrog-api-token`       | JFrog API token for authentication                                  |
| `--jfrog-access-token`    | JFrog access token for authentication                               |
| `--strict-license`        | Fail scan when incompatible or deprecated licenses are detected     |
| `--max-download-size`     | Maximum download size for cloud storage (e.g., 500MB, 2GB)          |
| `--preview`               | Preview what would be downloaded without actually downloading       |
| `--cache/--no-cache`      | Use cache for downloaded cloud storage files [default: cache]       |
| `--cache-dir`             | Directory for caching downloaded files                              |
| `--no-skip-files`         | Don't skip non-model file types during directory scans              |
| `--selective/--all-files` | Download only scannable files from directories [default: selective] |
Promptfoo includes a web interface for ModelAudit at `/model-audit` with visual path selection, real-time progress tracking, and detailed results visualization.

Access: Run `promptfoo view` and navigate to `http://localhost:15500/model-audit`.
ModelAudit supports scanning 15+ model formats across major ML frameworks:
| Format                | Extensions                                           | Description                                             |
| --------------------- | ---------------------------------------------------- | ------------------------------------------------------- |
| PyTorch               | `.pt`, `.pth`, `.bin`                                | PyTorch model files and checkpoints                     |
| TensorFlow SavedModel | `.pb`, directories                                   | TensorFlow's standard model format                      |
| TensorFlow Lite       | `.tflite`                                            | Mobile-optimized TensorFlow models                      |
| TensorRT              | `.engine`, `.plan`                                   | NVIDIA GPU-optimized inference engines                  |
| Keras                 | `.h5`, `.keras`, `.hdf5`                             | Keras/TensorFlow models in HDF5 format                  |
| ONNX                  | `.onnx`                                              | Open Neural Network Exchange format                     |
| SafeTensors           | `.safetensors`                                       | Hugging Face's secure tensor format                     |
| GGUF/GGML             | `.gguf`, `.ggml`, `.ggmf`, `.ggjt`, `.ggla`, `.ggsa` | Quantized models (LLaMA, Mistral, etc.)                 |
| Flax/JAX              | `.msgpack`, `.flax`, `.orbax`, `.jax`                | JAX-based model formats                                 |
| JAX Checkpoints       | `.ckpt`, `.checkpoint`, `.orbax-checkpoint`          | JAX training checkpoints                                |
| Pickle                | `.pkl`, `.pickle`, `.dill`                           | Python serialization (includes Dill)                    |
| Joblib                | `.joblib`                                            | Scikit-learn and general ML serialization               |
| NumPy                 | `.npy`, `.npz`                                       | NumPy array storage formats                             |
| PMML                  | `.pmml`                                              | Predictive Model Markup Language (XML)                  |
| ZIP Archives          | `.zip`                                               | Compressed model archives with recursive scanning       |
| Container Manifests   | `.manifest`                                          | OCI/Docker layer scanning                               |
| Binary Files          | `.bin`                                               | Auto-detected format (PyTorch, ONNX, SafeTensors, etc.) |
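As a quick illustration of the table above, a file's likely format can often be guessed from its extension. The helper below is hypothetical and is not the scanner's actual detection logic, which also inspects file contents (necessary for ambiguous extensions such as `.bin`):

```python
from pathlib import Path

# Subset of the extension table above; ambiguous extensions such as
# ".bin" and ".ckpt" really require content sniffing, so they are
# left out of this illustrative mapping.
EXTENSION_TO_FORMAT = {
    ".pt": "PyTorch", ".pth": "PyTorch",
    ".tflite": "TensorFlow Lite",
    ".h5": "Keras", ".keras": "Keras", ".hdf5": "Keras",
    ".onnx": "ONNX",
    ".safetensors": "SafeTensors",
    ".gguf": "GGUF/GGML", ".ggml": "GGUF/GGML",
    ".pkl": "Pickle", ".pickle": "Pickle", ".dill": "Pickle",
    ".joblib": "Joblib",
    ".npy": "NumPy", ".npz": "NumPy",
    ".pmml": "PMML",
    ".zip": "ZIP Archives",
}

def guess_format(path: str) -> str:
    """Guess a model format from the file extension alone."""
    return EXTENSION_TO_FORMAT.get(Path(path).suffix.lower(), "unknown")

print(guess_format("model.safetensors"))  # SafeTensors
```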
| Source               | URL Format                                           | Example                                                |
| -------------------- | ---------------------------------------------------- | ------------------------------------------------------ |
| HuggingFace Hub      | `https://huggingface.co/`, `https://hf.co/`, `hf://` | `hf://microsoft/resnet-50`                             |
| Amazon S3            | `s3://`                                              | `s3://my-bucket/model.pt`                              |
| Google Cloud Storage | `gs://`                                              | `gs://my-bucket/model.h5`                              |
| Cloudflare R2        | `r2://`                                              | `r2://my-bucket/model.safetensors`                     |
| MLflow Registry      | `models:/`                                           | `models:/MyModel/1`                                    |
| JFrog Artifactory    | `https://*.jfrog.io/`                                | `https://company.jfrog.io/artifactory/models/model.pkl` |
| DVC                  | `.dvc` files                                         | `model.pkl.dvc`                                        |
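The URL formats above can be told apart by their URI scheme. The following sketch mirrors the table with a hypothetical `detect_source` helper (not part of the modelaudit API), just to show how the schemes map to sources:

```python
from urllib.parse import urlparse

# Illustrative mapping of URI schemes to the remote sources listed in
# the table above.
SCHEME_TO_SOURCE = {
    "hf": "HuggingFace Hub",
    "s3": "Amazon S3",
    "gs": "Google Cloud Storage",
    "r2": "Cloudflare R2",
    "models": "MLflow Registry",
}

def detect_source(uri: str) -> str:
    """Classify a model URI by scheme (hypothetical helper)."""
    parsed = urlparse(uri)
    if parsed.scheme in ("http", "https"):
        if parsed.netloc.endswith(("huggingface.co", "hf.co")):
            return "HuggingFace Hub"
        if ".jfrog.io" in parsed.netloc:
            return "JFrog Artifactory"
        return "HTTP(S)"
    return SCHEME_TO_SOURCE.get(parsed.scheme, "local file")

print(detect_source("hf://microsoft/resnet-50"))  # HuggingFace Hub
print(detect_source("s3://my-bucket/model.pt"))   # Amazon S3
```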
The scanner looks for various security issues, including suspicious module references, embedded malicious code, and backdoors.
The scan results are classified by severity, from critical findings down to debug-level details (shown with `--verbose`). Some issues include a "Why" explanation to help you understand the security risk:
```
1. suspicious_model.pkl (pos 28): [CRITICAL] Suspicious module reference found: posix.system
   Why: The 'os' module provides direct access to operating system functions.
```
ModelAudit is particularly useful in CI/CD pipelines when integrated with Promptfoo:

```bash
# Example CI/CD script segment
npm install -g promptfoo
pip install modelaudit
promptfoo scan-model --format json --output scan-results.json ./models/
if [ $? -ne 0 ]; then
  echo "Security issues found in models! Check scan-results.json"
  exit 1
fi
```
ModelAudit returns specific exit codes for automation: `0` when the scan completes with no findings, `1` when security findings are reported, and `2` when the scan itself fails.
:::tip CI/CD Best Practice

In CI/CD pipelines, exit code 1 indicates findings that should be reviewed but don't necessarily block deployment. Only exit code 2 represents actual scan failures.

:::
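This exit-code convention can be wired into a deployment gate. Below is a minimal sketch; the `gate` helper is illustrative, not part of modelaudit, and your pipeline may well choose to block on findings too:

```python
def gate(exit_code: int) -> str:
    """Map a scanner exit code to a CI decision.

    0 -> no findings, proceed; 1 -> findings to review (treated as
    non-blocking here); 2 -> the scan itself failed, so block.
    """
    if exit_code == 0:
        return "deploy"
    if exit_code == 1:
        return "review"
    return "block"

# In CI you would run the scanner and feed its return code to gate():
#   result = subprocess.run(["promptfoo", "scan-model", "models/"])
#   decision = gate(result.returncode)
print(gate(0), gate(1), gate(2))
```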
ModelAudit includes a `doctor` command to diagnose scanner compatibility and system status:
```bash
# Check system diagnostics and scanner status
promptfoo scan-model doctor

# Show details about failed scanners
promptfoo scan-model doctor --show-failed
```
The `doctor` command reports system diagnostics, the status of each scanner, and details about any scanners that failed to load.
ModelAudit is included with Promptfoo, but specific model formats may require additional dependencies:
```bash
# For TensorFlow models
pip install tensorflow

# For PyTorch models
pip install torch

# For Keras models with HDF5
pip install h5py

# For YAML configuration scanning
pip install pyyaml

# For SafeTensors support
pip install safetensors

# For HuggingFace URL scanning
pip install huggingface-hub

# For cloud storage scanning
pip install boto3 google-cloud-storage

# For MLflow registry scanning
pip install mlflow
```
ModelAudit supports both NumPy 1.x and 2.x. If you encounter NumPy compatibility issues:
```bash
# Force NumPy 1.x if needed for full compatibility
pip install modelaudit[numpy1]
```
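To decide whether the `numpy1` extra is relevant, you can check which NumPy major version is installed. A small sketch (the helper name is illustrative):

```python
def needs_numpy1_extra(version: str) -> bool:
    """Return True if the installed NumPy is 2.x or newer, in which
    case falling back with `pip install modelaudit[numpy1]` may help
    if you hit compatibility errors."""
    major = int(version.split(".")[0])
    return major >= 2

# Example: check the running interpreter's NumPy, if installed.
try:
    import numpy
    print(numpy.__version__, needs_numpy1_extra(numpy.__version__))
except ImportError:
    print("numpy not installed")
```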