Promptfoo: LLM evals & red teaming


promptfoo is a developer-friendly local tool for testing LLM applications. Stop the trial-and-error approach - start shipping secure, reliable AI apps.

Website · Getting Started · Red Teaming · Documentation · Discord

Quick Start

```sh
# Install and initialize project
npx promptfoo@latest init

# Run your first evaluation
npx promptfoo eval
```

See Getting Started (evals) or Red Teaming (vulnerability scanning) for more.
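After `init`, evals are driven by a `promptfooconfig.yaml` in the project root. The sketch below is illustrative only — the prompt, model name, and assertion are placeholders; see the configuration docs for the full set of fields:

```yaml
# promptfooconfig.yaml - illustrative sketch; swap in your own
# prompts, provider, and assertions
prompts:
  - "Summarize in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini  # placeholder; any configured provider works

tests:
  - vars:
      text: "Promptfoo is a local tool for testing LLM applications."
    assert:
      - type: contains
        value: "Promptfoo"
```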

What can you do with Promptfoo?

  • Test your prompts and models with automated evaluations
  • Secure your LLM apps with red teaming and vulnerability scanning
  • Compare models side-by-side (OpenAI, Anthropic, Azure, Bedrock, Ollama, and more)
  • Automate checks in CI/CD
  • Share results with your team
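For the CI/CD bullet, the simplest setup just runs the eval on pull requests. A hedged GitHub Actions sketch — the workflow name, Node version, and secret name are assumptions; adapt them to your pipeline:

```yaml
# .github/workflows/llm-evals.yml - illustrative sketch
name: llm-evals
on: [pull_request]

jobs:
  eval:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      # Evaluates against promptfooconfig.yaml in the repo root
      - run: npx promptfoo@latest eval
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```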

Here's what it looks like in action:

[Screenshot: prompt evaluation matrix in the web viewer]

It works on the command line too:

[Screenshot: prompt evaluation matrix on the command line]

It can also generate security vulnerability reports:

[Screenshot: gen AI red team report]

Why promptfoo?

  • 🚀 Developer-first: Fast, with features like live reload and caching
  • 🔒 Private: Runs 100% locally - your prompts never leave your machine
  • 🔧 Flexible: Works with any LLM API or programming language
  • 💪 Battle-tested: Powers LLM apps serving 10M+ users in production
  • 📊 Data-driven: Make decisions based on metrics, not gut feel
  • 🤝 Open source: MIT licensed, with an active community

Star the Project ⭐

If you find promptfoo useful, please star it on GitHub! Stars help the project grow and ensure you stay updated on new releases and features.

Star us on GitHub!

Learn More

LLMs.txt Support

Our docs include llms.txt and llms-full.txt files following the llms.txt specification. These files make our documentation more accessible to AI-powered development tools and assistants.

Contributing

We welcome contributions! Check out our contributing guide to get started.

Join our Discord community for help and discussion.
