
This example shows how to have an LLM grade its own output against predefined expectations (model-graded evaluation).

Identical configurations are provided in promptfooconfig.js and promptfooconfig.yaml.
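A minimal sketch of what such a configuration can look like in `promptfooconfig.yaml`, using promptfoo's `llm-rubric` assertion type for model-graded checks. The prompt, provider, variable name, and rubric text below are illustrative placeholders, not the contents of this repository's config:

```yaml
# Sketch of a self-grading promptfoo config (values are illustrative)
prompts:
  - 'Write a short tweet about {{topic}}'

providers:
  - openai:gpt-4o-mini

tests:
  - vars:
      topic: bananas
    assert:
      # An LLM grades the output against this rubric
      - type: llm-rubric
        value: Is humorous and mentions bananas
```

The same structure can be expressed in `promptfooconfig.js` by exporting an equivalent JavaScript object.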

Run:

promptfoo eval

You can also define the tests in a CSV file and point the eval at it:

promptfoo eval --tests tests.csv
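As a rough sketch, a CSV test file pairs columns named after prompt variables with a `__expected` column holding the assertion; promptfoo supports an `llm-rubric:` prefix there for model-graded checks. The column name `topic` and the rubric text are assumptions for illustration:

```csv
topic,__expected
bananas,"llm-rubric: The tweet is humorous and mentions bananas"
sloths,"llm-rubric: The tweet is humorous and mentions sloths"
```

Each row becomes one test case, with the rubric graded by the LLM just as in the YAML configuration.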