---
sidebar_label: OpenLLM
---

# OpenLLM

To use OpenLLM with promptfoo, we take advantage of OpenLLM's OpenAI-compatible endpoints.

1. Start the server using the `openllm start` command.

2. Set environment variables:

   - Set `OPENAI_BASE_URL` to `http://localhost:8001/v1`
   - Set `OPENAI_API_KEY` to a dummy value such as `foo`.

3. Depending on your use case, use the chat or completion model types.

   **Chat format example**: To run a Llama-2 eval using chat-formatted prompts, first start the model:

   ```sh
   openllm start llama --model-id meta-llama/Llama-2-7b-chat-hf
   ```

   Then set the promptfoo configuration:

   ```yaml
   providers:
     - openai:chat:llama2
   ```

   **Completion format example**: To run a Flan eval using completion-formatted prompts, first start the model:

   ```sh
   openllm start flan-t5 --model-id google/flan-t5-large
   ```

   Then set the promptfoo configuration:

   ```yaml
   providers:
     - openai:completion:flan-t5
   ```
  4. See OpenAI provider documentation for more details.
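Putting the steps together, a minimal `promptfooconfig.yaml` for the chat-formatted Llama-2 setup might look like the sketch below. The prompt text and test case are illustrative assumptions, not part of the OpenLLM or promptfoo defaults:

```yaml
# Sketch of a promptfoo config for the chat example above.
# Assumes the OpenLLM server from step 1 is running on localhost:8001
# and the environment variables from step 2 are exported.
prompts:
  - 'Answer concisely: {{question}}'

providers:
  - openai:chat:llama2

tests:
  - vars:
      question: 'What is the capital of France?'
```

With the environment variables from step 2 set, running `promptfoo eval` in the same directory evaluates the prompt against the local model.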
