Are you sure you want to delete this access key?
title | description | image | keywords | date | authors | tags |
---|---|---|---|---|---|---|
Celebrating 100,000 Users: Promptfoo's Journey, Red Teaming, and the Future of AI Security | Promptfoo reaches 100K users! Learn about our journey from prompt evaluation to AI red teaming and what's next for AI security. | /img/blog/100k-users-milestone.jpg | [promptfoo AI security red teaming LLM evaluation prompt engineering AI agents] | 2025-06-10 | [michael] | [company-update] |
We're thrilled to announce that Promptfoo now has over 100,000 users! This milestone reflects our incredible community of developers, enterprises, and AI enthusiasts who trust us to help build reliable and secure AI applications.
Promptfoo was born out of a simple idea: prompt engineering should be systematic and measurable. Since our first open-source release in 2023, we've focused on giving developers the tools to evaluate models and prompts with confidence. Along the way, we discovered that happy path testing wasn't enough—teams also needed a way to proactively identify vulnerabilities. That realization sparked our deep investment in AI red teaming and GenAI Application Security.
Promptfoo founder Ian Webster began Promptfoo while leading AI engineering at Discord, where ensuring reliable model behavior was a daily challenge. Co-founder Michael D'Angelo joined shortly after, bringing experience in scaling machine learning products. Together, they set out to create a simple, declarative open-source toolkit that removes guesswork from prompt engineering.
As GenAI applications matured, we realized that traditional testing wasn't enough. Teams needed to understand how their prompts would hold up against adversarial attacks—a critical barrier to shipping applications safely in production. This insight led us to pioneer AI red teaming for application prompts, simulating malicious inputs like prompt injections and toxic content so teams can fix weaknesses before they ship.
We were the first to adapt AI-specific penetration testing techniques to application prompts, allowing teams to simulate real-world attacks before deployment. This approach has proven essential as AI systems face increasingly sophisticated adversarial attempts.
The AI landscape continues to evolve rapidly, with new paradigms emerging. Recently, we've witnessed a significant shift toward AI agents and multi-agent systems—autonomous models capable of performing complex, multi-step tasks. Recognizing this shift, promptfoo expanded its capabilities to support agentic systems, empowering developers to evaluate and secure these advanced applications effectively.
Our framework now provides specialized tools to simulate and test multi-agent interactions, ensuring coherent and safe agent behavior across various tasks and environments. Through red teaming, developers can verify that agents perform intended actions without unintended consequences, maintaining safety, reliability, and consistency.
Our approach has gained recognition across the AI community. Leading platforms have integrated promptfoo into their official learning materials:
These partnerships have helped educate thousands of developers while validating our approach to systematic AI testing.
Promptfoo is now trusted by individual developers, startups, and dozens of Fortune 500 companies. Our users rely on Promptfoo to:
This growth has been fueled by an active open-source community, regular releases, and educational collaborations with leading AI organizations.
Our framework supports a wide range of models and APIs—from OpenAI and Anthropic to local LLMs—with flexible evaluation plugins that teams can tailor to their specific domains. Whether it's preventing a fintech assistant from divulging private data or ensuring a game's AI character adheres to content guidelines, promptfoo provides the safety net teams need.
Reaching 100,000 users is just the beginning. We are rapidly scaling our team and product to meet the needs of our growing community and of the thousands of developers who are using promptfoo to ship safer AI applications.
We're committed to being the best open-source LLM red teaming and eval tool available. We invite you to:
To our users, contributors, and partners—thank you for helping us reach this milestone. We couldn't have done it without your enthusiasm and feedback. Here's to the next chapter of building safer AI together!
— The Promptfoo Team
Press p or to see the previous file or, n or to see the next file
Browsing data directories saved to S3 is possible with DAGsHub. Let's configure your repository to easily display your data in the context of any commit!
promptfoo is now integrated with AWS S3!
Are you sure you want to delete this access key?
Browsing data directories saved to Google Cloud Storage is possible with DAGsHub. Let's configure your repository to easily display your data in the context of any commit!
promptfoo is now integrated with Google Cloud Storage!
Are you sure you want to delete this access key?
Browsing data directories saved to Azure Cloud Storage is possible with DAGsHub. Let's configure your repository to easily display your data in the context of any commit!
promptfoo is now integrated with Azure Cloud Storage!
Are you sure you want to delete this access key?
Browsing data directories saved to S3 compatible storage is possible with DAGsHub. Let's configure your repository to easily display your data in the context of any commit!
promptfoo is now integrated with your S3 compatible storage!
Are you sure you want to delete this access key?