gpt-4o-vs-4o-mini (Comparing GPT-4o and GPT-4o-Mini)

Quick Start

  1. Initialize this example by running:

    npx promptfoo@latest init --example gpt-4o-vs-4o-mini
    
  2. Navigate to the newly created gpt-4o-vs-4o-mini directory:

    cd gpt-4o-vs-4o-mini
    
  3. Set an OpenAI API key directly in your environment:

    export OPENAI_API_KEY="your_openai_api_key"
    

    Alternatively, you can set the API key in a .env file:

    OPENAI_API_KEY=your_openai_api_key
    
  4. Run the evaluation with:

    npx promptfoo@latest eval --no-cache
    

    Note: the --no-cache flag is required because the example uses a latency assertion, which does not support cached responses.

  5. View the results:

    npx promptfoo@latest view
    

    The output will include each model's responses to the provided riddles, letting you compare their quality and latency side by side.
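The evaluation above is driven by the example's promptfooconfig.yaml. As a rough sketch of what such a config can look like, the fragment below compares the two models with a latency assertion; the prompt, riddle, and threshold values here are illustrative assumptions, not the actual file shipped with the example:

```yaml
# Hypothetical sketch of a promptfoo config comparing the two models.
# The real example's promptfooconfig.yaml may differ in prompts and tests.
providers:
  - openai:gpt-4o
  - openai:gpt-4o-mini

prompts:
  - "Solve this riddle and briefly explain your reasoning: {{riddle}}"

tests:
  - vars:
      riddle: "What has keys but can't open locks?"
    assert:
      # Latency assertions measure real response time,
      # which is why the eval must run with --no-cache.
      - type: latency
        threshold: 5000  # milliseconds (assumed value)
```

Each provider is run against every prompt/test combination, so a single config like this yields the side-by-side comparison shown in the web viewer.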
