onboarding.ts

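// Default starter prompts: prompt blocks are separated by `---`, rendered as nunjucks
// templates, and may also be written as a JSON chat-message array.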
export const DEFAULT_PROMPTS = `Your first prompt goes here
---
Next prompt goes here. You can substitute variables like this: {{var1}} {{var2}} {{var3}}
---
This is the next prompt.
These prompts are nunjucks templates, so you can use logic like this:
{% if var1 %}
{{ var1 }}
{% endif %}
---
[
  {"role": "system", "content": "This is another prompt. JSON is supported."},
  {"role": "user", "content": "Using this format, you may construct multi-shot OpenAI prompts"},
  {"role": "user", "content": "Variable substitution still works: {{ var3 }}"}
]
---
If you prefer, you can break prompts into multiple files (make sure to edit promptfooconfig.yaml accordingly)
`;
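// Default promptfooconfig.yaml contents for a first eval (see the inline comments for docs links).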
export const DEFAULT_YAML_CONFIG = `# This configuration compares LLM output of 2 prompts x 2 GPT models across 3 test cases.
# Learn more: https://promptfoo.dev/docs/configuration/guide
description: 'My first eval'
prompts:
  - "Write a tweet about {{topic}}"
  - "Write a very concise, funny tweet about {{topic}}"
providers:
  - openai:gpt-3.5-turbo-0613
  - openai:gpt-4
tests:
  - vars:
      topic: bananas
  - vars:
      topic: avocado toast
    assert:
      # For more information on assertions, see https://promptfoo.dev/docs/configuration/expected-outputs
      - type: icontains
        value: avocado
      - type: javascript
        value: 1 / (output.length + 1) # prefer shorter outputs
  - vars:
      topic: new york city
    assert:
      # For more information on model-graded evals, see https://promptfoo.dev/docs/configuration/expected-outputs/model-graded
      - type: llm-rubric
        value: ensure that the output is funny
`;
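// Starter README contents explaining how to run the first eval and view results.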
export const DEFAULT_README = `To get started, set your OPENAI_API_KEY environment variable.
Next, edit promptfooconfig.yaml.
Then run:
\`\`\`
promptfoo eval
\`\`\`
Afterwards, you can view the results by running \`promptfoo view\`
`;
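
For context, here is a minimal sketch of how an onboarding routine might write these constants into a fresh project directory. The function name writeStarterFiles and the file names prompts.txt and README.md are illustrative assumptions (only promptfooconfig.yaml is named by the templates above); the actual promptfoo initialization code is not shown here.

// NOTE: illustrative sketch only; file names other than promptfooconfig.yaml are assumptions.
import * as fs from 'fs';
import * as path from 'path';

import { DEFAULT_PROMPTS, DEFAULT_YAML_CONFIG, DEFAULT_README } from './onboarding';

export function writeStarterFiles(targetDir: string): void {
  // Create the project directory if it does not exist yet.
  fs.mkdirSync(targetDir, { recursive: true });
  // Write the starter prompts, eval config, and README.
  fs.writeFileSync(path.join(targetDir, 'prompts.txt'), DEFAULT_PROMPTS);
  fs.writeFileSync(path.join(targetDir, 'promptfooconfig.yaml'), DEFAULT_YAML_CONFIG);
  fs.writeFileSync(path.join(targetDir, 'README.md'), DEFAULT_README);
}

// Usage: writeStarterFiles('.') leaves a directory ready for `promptfoo eval`.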