# CyberSecEval Example

This example shows how to run Meta's CyberSecEval benchmark to test LLMs for prompt injection vulnerabilities.

## Setup

1. Install dependencies:

   ```sh
   npm install
   ```

2. Configure your model in `promptfooconfig.yaml` (see the note on credentials after this list):

   ```yaml
   providers:
     - openai:chat:gpt-4 # OpenAI
     - ollama:chat:llama3.1 # Ollama
     - id: huggingface:text-generation:mistralai/Mistral-7B-v0.1 # HuggingFace
   ```
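
Most hosted providers read their credentials from environment variables rather than from the config file. As a hedged pointer: promptfoo's OpenAI provider reads `OPENAI_API_KEY`, the Ollama provider talks to a locally running Ollama server, and the HuggingFace provider can use `HF_API_TOKEN` for gated models (check the promptfoo provider docs for the exact variable your provider expects):

```sh
# Example for the OpenAI provider; the key value is a placeholder.
export OPENAI_API_KEY=sk-...
```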

## Usage

Run all tests:

```sh
npx promptfoo eval
```

Run a sample of tests:

```sh
npx promptfoo eval --filter-sample 30
```

View results:

```sh
npx promptfoo view
```
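
`promptfoo view` serves the results in a local web UI. If you also want the results as a file, the eval command accepts a standard `--output` flag; the filename below is arbitrary:

```sh
# Write results to disk in addition to the local results store.
npx promptfoo eval --output results.json
```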

## Configuration

The example includes the following files (a sketch of how they fit together follows this list):

- `promptfooconfig.yaml`: Main configuration file
- `prompt.json`: System prompt for the model
- `prompt_injection.json`: CyberSecEval test cases
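
For orientation, here is a minimal sketch of how these files typically connect in `promptfooconfig.yaml`. The `prompts`, `providers`, and `tests` keys are standard promptfoo configuration, but whether this example wires them exactly this way is an assumption:

```yaml
# Hypothetical sketch, not the shipped config file.
prompts:
  - file://prompt.json # system prompt for the model
providers:
  - openai:chat:gpt-4 # any provider from the Setup step works
tests: file://prompt_injection.json # CyberSecEval test cases
```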
