pydantic-ai

This example demonstrates how to evaluate PydanticAI agents using promptfoo. PydanticAI is a Python agent framework that provides structured outputs and type safety for AI applications.

You can run this example with:

npx promptfoo@latest init --example pydantic-ai

Quick Start

cd pydantic-ai
pip install -r requirements.txt
export OPENAI_API_KEY=your_openai_api_key_here
npx promptfoo@latest eval
npx promptfoo@latest view

What This Shows

  • Creating a PydanticAI agent with structured outputs (see the agent sketch after this list)
  • Using promptfoo's Python provider to evaluate agents
  • JSON schema validation with is-json assertions
  • Multiple assertion types: JavaScript, Python, and LLM-rubric evaluations
  • Evaluating agent tool usage
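
For orientation, here is a minimal sketch of what such an agent might look like. The model name, field names, and the output_type keyword are illustrative assumptions rather than this repository's exact code (PydanticAI has renamed that parameter across releases, e.g. result_type vs output_type), so treat agent.py in the example as the source of truth:

# Illustrative agent sketch; see agent.py in this example for the real code.
from pydantic import BaseModel
from pydantic_ai import Agent

class WeatherReport(BaseModel):
    """Structured output the agent is required to return."""
    city: str
    temperature_c: float
    conditions: str

# Model string and output_type keyword are assumptions for illustration.
agent = Agent(
    "openai:gpt-4o-mini",
    output_type=WeatherReport,
    system_prompt="Answer weather questions with a structured report.",
)

if __name__ == "__main__":
    result = agent.run_sync("What's the weather like in Paris?")
    print(result.output)  # a validated WeatherReport instance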

Example Structure

  • agent.py - Simple PydanticAI weather agent with structured output
  • provider.py - promptfoo Python provider that runs the agent (see the sketch after this list)
  • promptfooconfig.yaml - Evaluation configuration with diverse assertion types
  • requirements.txt - Python dependencies
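
A promptfoo Python provider exposes a call_api(prompt, options, context) function and returns a dict with an output key. The sketch below is an illustrative version of what provider.py might do, reusing the hypothetical agent from the sketch above rather than the repository's exact code:

# Illustrative provider sketch; assumes the agent sketch above.
from agent import agent

def call_api(prompt, options, context):
    """Entry point promptfoo invokes for a Python provider."""
    try:
        result = agent.run_sync(prompt)
        # Serialize the structured output so is-json and schema assertions can parse it.
        # (result.output may be result.data in older PydanticAI releases.)
        return {"output": result.output.model_dump_json()}
    except Exception as exc:
        # Returning an "error" key surfaces the failure in the eval report.
        return {"error": str(exc)}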