Getting Started with Promptfoo and llama.cpp

Install llama.cpp

To begin, install llama.cpp by following the instructions on their GitHub page.
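If you are building from source, one common path (based on the llama.cpp README at the time of writing; exact steps and prebuilt-binary options vary by platform) looks roughly like this:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# the server binary typically ends up under build/bin/llama-server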

Starting the Server

To start the server, use the following command:

./llama-server -m your_model.gguf --port 8080

You can check if it's running by visiting http://localhost:8080.
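You can also check from the command line. The /health endpoint here is an assumption based on recent llama.cpp server builds; if your build does not expose it, requesting the root URL works just as well:

curl http://localhost:8080/health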

Configuring Promptfoo

  1. Edit the prompts in promptfooconfig.yaml (a sample configuration is sketched after this list).

  2. Run the evaluation:

    npx promptfoo@latest eval
    
  3. View the results:

    npx promptfoo@latest view
    

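As a rough illustration of what promptfooconfig.yaml can contain, here is a minimal sketch. The provider id, prompt template, and test variables are placeholders for illustration, not the exact contents of this example; check the promptfoo documentation for the precise llama.cpp provider syntax:

# Illustrative sketch only; names and values below are placeholders.
prompts:
  - "Answer concisely: {{question}}"

providers:
  - llama:gguf   # hypothetical provider id; promptfoo's llama.cpp provider talks to the local server (default http://localhost:8080)

tests:
  - vars:
      question: What is the capital of France?
  - vars:
      question: Summarize the plot of Hamlet in one sentence.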
Note on Supported Models

llama.cpp supports many models that can be converted to the GGUF format. We recommend downloading models from Hugging Face. You may need to authenticate with your Hugging Face account using their CLI to download models.
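For example, downloading a GGUF file with the Hugging Face CLI might look like the following; the repository and file names are placeholders, so substitute the model you actually want:

pip install -U "huggingface_hub[cli]"
huggingface-cli login                    # only needed for gated or private models
huggingface-cli download TheOrg/SomeModel-GGUF some-model.Q4_K_M.gguf --local-dir .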

Note on Prompt Formatting

We do not reformat prompts for llama.cpp; they are passed to the server as-is. Refer to the documentation or model card for the specific model you are using to make sure your prompts match the format it expects. We provide several formatting examples to illustrate different approaches.
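For instance, an instruction-tuned model in the Llama-2-chat style expects [INST] ... [/INST] tags around the user message, while a base model can take raw text. The variants below are illustrative, not the exact prompts shipped with this example:

# Two illustrative prompt variants for promptfooconfig.yaml
prompts:
  - "[INST] {{question}} [/INST]"   # chat/instruct-style formatting
  - "{{question}}"                  # raw prompt for a base model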

Note on Caching

Since promptfoo is unaware of the underlying model being run in llama.cpp, it will not invalidate the cache when the model is updated. This means you may see stale results from the cache if you change the model. Run npx promptfoo@latest eval --no-cache to perform the evaluation without using the cache.
