# llamafile

Llamafile has an OpenAI-compatible HTTP endpoint, so you can override the OpenAI provider to talk to your llamafile server.

To use llamafile in your eval, set the apiBaseUrl config property to http://localhost:8080/v1 (or wherever you're hosting llamafile).
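
If you don't already have a server running, note that executing a llamafile starts one on localhost:8080 by default. A minimal sketch (the model filename is illustrative; exact behavior and flags vary by llamafile release):

```sh
# Illustrative model name; running a llamafile starts an HTTP server
# with an OpenAI-compatible endpoint on http://localhost:8080 by default.
chmod +x llava-v1.5-7b-q4.llamafile
./llava-v1.5-7b-q4.llamafile
```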

Here's an example config that uses LLaMA_CPP for chat completions:

```yaml
providers:
  - id: openai:chat:LLaMA_CPP
    config:
      apiBaseUrl: http://localhost:8080/v1
```
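
For context, here's a sketch of a fuller eval config built around this provider. The prompts and tests sections and their values are illustrative, assuming the same schema the providers block above follows:

```yaml
prompts:
  - 'Write a short poem about {{topic}}'

providers:
  - id: openai:chat:LLaMA_CPP
    config:
      apiBaseUrl: http://localhost:8080/v1

tests:
  - vars:
      topic: llamas
```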

If desired, you can use the OPENAI_BASE_URL environment variable instead of the apiBaseUrl config.
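
As a sketch, the environment-variable route might look like this (the eval command assumes the promptfoo CLI, which this config format matches):

```sh
# Point the OpenAI-compatible provider at the local llamafile server
# instead of setting apiBaseUrl in the config file.
export OPENAI_BASE_URL=http://localhost:8080/v1
promptfoo eval
```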
