synthesis.test.ts
// Unit tests for the assertion-synthesis prompts and the synthesize() helper.
import dedent from 'dedent';
import {
  convertQuestionToPythonPrompt,
  generateNewQuestionsPrompt,
  synthesize,
} from '../../src/assertions/synthesis';
import { loadApiProvider } from '../../src/providers';
import type { TestCase } from '../../src/types';

jest.mock('../../src/providers', () => ({
  loadApiProvider: jest.fn(),
}));

describe('synthesize', () => {
  it('should generate assertions based on config prompts and existing assertions', async () => {
    let i = 0;
    // Mock provider: the first call returns a generated question payload,
    // every subsequent call returns 'None'.
    const mockProvider = {
      id: () => 'mock-provider',
      callApi: jest.fn(() => {
        if (i === 0) {
          i++;
          return Promise.resolve({
            output:
              '{"questions": [{"label": "metric1", "question" : "test question", "question_source": "IMPLIED_IN_INSTRUCTIONS", "question_type": "CORE_FOR_APPLICATION" }]}',
          });
        }
        return Promise.resolve({ output: 'None' });
      }),
    };
    jest.mocked(loadApiProvider).mockResolvedValue(mockProvider);

    const result = await synthesize({
      provider: 'mock-provider',
      prompts: ['Test prompt'],
      tests: [],
      numQuestions: 1,
      type: 'pi',
    });

    expect(result).toHaveLength(1);
    expect(result).toEqual([{ metric: 'metric1', value: 'test question', type: 'pi' }]);
  });
});
describe('generateNewQuestionsPrompt', () => {
  it('should generate a prompt that uses multiple system prompts and all assertions', () => {
    const prompts = ['What is the capital of France?', 'What is the capital of Germany?'];
    const testCases: TestCase[] = [
      {
        assert: [
          {
            type: 'llm-rubric',
            value: 'test question',
          },
        ],
      },
    ];
    const result = generateNewQuestionsPrompt(prompts, testCases, 1);
    expect(result).toBe(dedent`
      Role: You are a senior data scientist specializing in metric design for stochastic AI systems. You will be given
      an series of system prompts and existing assertions being tested in an evaluation, your task is to create objective evaluation questions that assess
      individual AI responses—not the application holistically—based on input-output pairs.
      Make sure to generate questions that are different from ones that already exist.
      Clarification: Some applications (like scam detection, content moderation, or classification tasks) ask the AI to evaluate an input artifact.
      Your task is **NOT** to evaluate the artifact (input) directly, but to assess the AI's response — i.e., how well the assistant performed the requested evaluation.
      For example, don’t ask: “Does the message contain suspicious links?”
      Instead, ask: “Did the response correctly identify suspicious links in the message?” or “Are the ratings in the output aligned with the rubric?”
      Core Requirements
      1. Question Types:
      Questions may use one of the following scoring formats: binary (Yes/No), 5-point Likert scale, or 0–1 continuous scale.
      Design each question to naturally align with its scale—for example, use binary for clear-cut presence/absence traits, Likert for subjective gradations, and continuous for measurable properties.
      Binary questions can still be scored on a Likert scale by mapping “Yes = 5” and “No = 1” if needed.
      IMPORTANT: Questions should be phrased so that a 'Yes' answer or higher score **always** indicates compliance with the desired metric or requirement.
      2. Focus:
      Questions can evaluate:
      i. Input-output relationships (e.g., "Does the output address all parts of the input query?").
      ii. Response attributes (e.g., structure, clarity, safety).
      Avoid holistic/system-level judgments (e.g., "Is the AI helpful?").
      3. Objectivity:
      Be as objective as possible. Replace ambiguous terms (e.g., "inspiring," "too long") with quantifiable criteria (e.g., "Is the output > 100 words?").
      Allowed subjectivity: Verbs/adjectives are fine if they describe inherent properties of language (e.g., "Does the response contain abusive language?").
      Rationale: "Abusive" is a property of language, even if borderline cases exist.
      Avoid unbounded subjectivity (e.g., "Is the output extremely concise?" → replace with "Is the output ≤ 50 words?").
      In general, think of ways to replace subjective ideas with objective ones.
      4. Atomicity:
      Each question should test one attribute or relationship (e.g., split "Is the response clear and concise?" into two questions).
      5. Independence:
      Questions should avoid overlap to prevent double-counting issues in evaluation. They should not overlap with any assertions either.
      6. Self-Containment:
      Permitted: Derive answers from the input/output text (e.g., "Does the output cite a verbatim quote from the input?").
      Forbidden: Reliance on external knowledge (e.g., "Is the cited source reputable?" → replace with "Does the citation include a DOI?").
      7. Special Cases:
      For creative tasks: Focus on technical execution (e.g., "Does each stanza have 4 lines?").
      For list outputs: Evaluate per item (e.g., "Does each bullet point contain a complete sentence?").
      Each question must be preceded by a label in Title Case, no longer than three words, that serves as a concise and descriptive title for the question.
      After writing each question, **always** set 'is_lower_score_desirable' to false because if the answer to the question is “Yes” (or higher score in case of likert/0-1 scales),
      it always indicates a good response. You are only generating such type of questions.
      Each question should have a question_source. If the question is implied in the input application_description, use
      IMPLIED_IN_INSTRUCTIONS; otherwise if you are generating it from scratch, use FULLY_NEWLY_GENERATED.
      Each question should have a question_type. If the question is core for this specific application, use
      CORE_FOR_APPLICATION. If the question is a generic check which applies to many other applications like check for
      abusive content or toxic language, use HORIZONTAL. If the question is regarding output format or some structure
      in the response of the application, use FORMAT_CHECK.
      Anti-Patterns to Avoid
      1. Reasoning Dependencies:
      Bad: "Is the argument persuasive?"
      Fixed: "Does the response list at least 2 supporting facts?"
      2. World Knowledge:
      Bad: "Is the cited author an expert?"
      Fixed: "Does the citation include the author’s institutional affiliation?"
      3. Unbounded Subjectivity:
      Bad: "Is the output extremely concise?"
      Fixed: "Is the output ≤ 3 sentences?"
      Process
      1. Classify the Application:
      First classify the application into appropriate categories such as information extraction, information summarization, creative task, analysis task.
      Note that an application can belong to multiple categories.
      Define key attributes (e.g., accuracy, structure, safety).
      2. Extract Implied Questions (Mandatory):
      Scan the application_description for any *implied requirements*—expectations stated or suggested in the instructions.
      For each implied requirement, generate an evaluation question marked with:
      - 'question_source = implied_in_instructions'
      These must be generated **before** any newly inferred or generic questions.
      3. Generate Deep Criteria (for new questions):
      For each key attribute not already covered by an implied question:
      - Identify subtle failure modes
      - Design objectively measurable, atomic, and independent evaluation criteria
      - Use quantifiable standards and avoid vague constructs
      - Generate questions with 'question_source = fully_newly_generated'
      4. Generate Questions:
      Create total 1 questions with:
      Binary (if absolute criteria exist) or Likert/continuous scales.
      Concrete thresholds for quantifiable traits (e.g., word/line counts).
      **IMPORTANT**: You must prioritize and fully exhaust all questions implied by the application description before generating any new questions.
      Do not generate any 'fully_newly_generated' questions if the implied questions alone fulfill the requested 1.
      # OUTPUT FORMAT
      Only respond in JSON with no extra content.
      # EXAMPLES
      <application>
      Describe a recipe for an input dish in bulleted list format.
      </application>
      <existing_assertions>
      [
        {
          "type" : "llm-rubric",
          "value": "Does the output list all necessary ingredients for the dish?",
          "metric": "Ingredient Inclusion"
        },
        {
          "type" : "g-eval",
          "value": "Does each step in the recipe provide clear and complete instructions for preparation?"
        }
      ]
      </existing_assertions>
      \`\`\`json
      {
        "questions": [
          {
            "label": "Sequential Order",
            "question": "Are the preparation steps listed in a logical and sequential order?",
            "question_source": "implied_in_instructions",
            "question_type": "core_for_application"
          },
          {
            "label": "Bullet Format",
            "question": "Is each item in the recipe presented as a distinct bullet point?",
            "question_source": "implied_in_instructions",
            "question_type": "format_check"
          },
          {
            "label": "Cooking Times",
            "question": "Are the cooking and preparation times mentioned in the recipe?",
            "question_source": "fully_newly_generated",
            "question_type": "core_for_application"
          },
          {
            "label": "Ingredient Quantities",
            "question": "Are the quantities for each ingredient specified in the recipe?",
            "question_source": "fully_newly_generated",
            "question_type": "core_for_application"
          },
          {
            "label": "Serving Size",
            "question": "Does the recipe specify the number of servings it makes?",
            "question_source": "fully_newly_generated",
            "question_type": "core_for_application"
          },
          {
            "label": "Filler Words",
            "question": "Does the recipe avoid including unnecessary details?",
            "question_source": "fully_newly_generated",
            "question_type": "horizontal"
          }
        ]
      }
      Consider the following prompts and assertions for an LLM application:
      <Prompts>
      <Prompt>
      What is the capital of France?
      </Prompt>
      <Prompt>
      What is the capital of Germany?
      </Prompt>
      </Prompts>
      <existing_assertions>
      [
        {
          "type": "llm-rubric",
          "value": "test question"
        }
      ]
      </existing_assertions>
      `);
  });
});
describe('convertQuestionToPythonPrompt', () => {
  it('should generate a prompt that uses multiple system prompts and all assertions', () => {
    const result = convertQuestionToPythonPrompt(
      ['What is the capital of France?', 'What is the capital of Germany?'],
      'Is the response clear?',
    );
    expect(result).toBe(dedent`
      You are a specialized system that analyzes an LLM evaluation question and generates a Python function to automatically check LLM responses against the specific criterion.
      Your task is to determine if the given evaluation question can be reliably answered using a deterministic Python function.
      ## Input Format
      You will be provided with:
      1. A description of the LLM application (string)
      2. A single evaluation question used to assess LLM responses (string)
      ## Output Format
      For the evaluation question, you must:
      - Determine if the question can be reliably answered with a deterministic Python function using ONLY the LLM response
      - If YES: Return only the Python function body (without the function signature) that:
        - Assumes the LLM's response text is available as a string variable named \`output\`
        - Returns a dictionary with two keys:
          - \`'pass'\`: boolean value (True if criterion is met, False if not)
          - \`'score'\`: float value (1.0 if criterion is met, 0.0 if not)
        - The Answer "Yes" to the question should correspond to \`{'pass': True, 'score': 1.0}\`
        - The answer "No" to the question should correspond to \`{'pass': False, 'score': 0.0}\`
        - Includes clear comments
        - Handles edge cases gracefully (e.g., empty responses, invalid formats)
        - Performs any necessary parsing of the response string (JSON parsing, text extraction, etc.)
      - If NO: Return the string "None" (when the question requires semantic understanding, subjective judgment, domain expertise, or requires examining the original prompt/input)
      ## Critical Requirements
      - The function must evaluate ONLY the LLM response itself, which will always be provided as a string
      - The evaluation question might refer to the LLM output by domain-specific terms (e.g., "story", "recipe", "code", "answer") based on the application description, rather than generic terms like "response" or "output"
      - Regardless of terminology used in the question, the variable name in your code must be "output".
      - If evaluation requires comparing the response to the original prompt/input, return "None"
      - If evaluation requires external knowledge, context, or resources, return "None"
      - When in doubt, return "None" rather than an unreliable function
      - Any required parsing (JSON, XML, etc.) must be handled within the function
      ## IMPORTANT
      - Return "None" for any evaluation that requires semantic understanding or could have multiple valid expressions
      - For questions about greetings, politeness, tone, style, or other subjective language features, return "None"
      - Avoid creating functions that rely on hardcoded lists of phrases, expressions, or patterns when the concept being evaluated could be expressed in many different ways
      - Only create functions for criteria that can be evaluated through standardized, unambiguous patterns or clear structural properties
      ## Guidelines for Domain-Specific References
      - When the question refers to the output by a domain-specific term (e.g., "Is the story less than 2 lines long?", "Does the recipe include four or more spices?"), understand that it's referring to the same content that will be available as the \`output\` variable
      - The application description often provides context for what type of output to expect (story, recipe, etc.)
      ## Guidelines for Function Generation
      ### Questions Suitable for Functions (return a function):
      - Counting elements (words, sentences, lines, items)
      - Checking for presence of specific strings, patterns, or structures within the response
      - Validating formats (JSON, dates, emails, etc.)
      - Measuring response length in characters/bytes etc
      - Checking for code syntax, structure, or presence of specific elements
      - Verifying mathematical properties or numerical ranges
      ### Questions NOT Suitable for Functions (return "None"):
      - Any evaluation requiring comparison to the original prompt
      - Evaluating relevance, accuracy, or helpfulness
      - Assessing tone, intent, style, sentiment or semantics
      - Checking factual correctness
      - Determining completeness of explanations
      - Evaluating creativity or originality
      - Assessing logical coherence or reasoning quality
      - Any judgment requiring domain expertise
      - Any evaluation that would require an exhaustive list of possible expressions (like apologies, call-to-action etc.)
      Please provide only the Python function body without markdown formatting or function signature.
      The function body should assume the LLM's response is available as a variable named \`output\`.
      Also include the necessary import statements within the function body itself.
      ## Example Input/Output Pairs
      ### Example 1:
      **Application Description:** A JSON API documentation system
      **Evaluation Question:** "Does the response contain valid JSON?"
      **Output:**
      \`\`\`python
      import json
      import re
      # Try to find JSON blocks in the output
      # Look for content within code blocks with \`\`\`json
      json_block_pattern = r'\`\`\`(?:json)?\\s*([\\s\\S]*?)\\s*\`\`\`'
      json_blocks = re.findall(json_block_pattern, output)
      # Also look for content within curly braces that might be JSON
      potential_json = re.findall(r'(\\{[\\s\\S]*?\\})', output)
      # Combine all potential JSON content
      all_potential_json = json_blocks + potential_json
      # If we don't find any potential JSON patterns, return False
      if not all_potential_json:
          return {'pass': False, 'score': 0.0}
      # Try to parse each potential JSON block
      for json_str in all_potential_json:
          try:
              json.loads(json_str)
              return {'pass': True, 'score': 1.0} # Valid JSON found
          except json.JSONDecodeError:
              continue
      return {'pass': False, 'score': 0.0} # No valid JSON found
      \`\`\`
      ### Example 2:
      **Application Description:** A customer service chatbot
      **Evaluation Question:** "Does the response address the customer's initial query?"
      **Output:**
      None
      ### Example 3:
      **Application Description:** A code assistant that generates SQL queries.
      **Evaluation Question:** "Does the SQL query use a JOIN statement?"
      **Output:**
      \`\`\`python
      import re
      # Convert to lowercase for case-insensitive matching
      output_lower = output.lower()
      # Extract code blocks if present
      code_blocks = re.findall(r'\`\`\`(?:sql)?([^\`]+)\`\`\`', output_lower)
      # If code blocks are found, check them first
      if code_blocks:
          for block in code_blocks:
              # Check for JOIN keyword with word boundaries
              if re.search(r'\\b(join|inner\\s+join|left\\s+join|right\\s+join|full\\s+join|cross\\s+join)\\b', block):
                  return {'pass': True, 'score': 1.0}
      # If no code blocks or no JOIN found in code blocks, check the entire output
      join_patterns = [
          r'\\b(join)\\b',
          r'\\b(inner\\s+join)\\b',
          r'\\b(left\\s+join)\\b',
          r'\\b(right\\s+join)\\b',
          r'\\b(full\\s+join)\\b',
          r'\\b(cross\\s+join)\\b'
      ]
      for pattern in join_patterns:
          if re.search(pattern, output_lower):
              return {'pass': True, 'score': 1.0}
      return {'pass': False, 'score': 0.0}
      \`\`\`
      ### Example 4:
      **Application Description:** An eval agent that can plan weekend trips.
      **Evaluation Question:** "Does the response exceed 1500 words?"
      **Output:**
      \`\`\`python
      # Split the output into words
      words = output.split()
      # Count the number of words
      word_count = len(words)
      # Check if the word count exceeds 1500
      if word_count > 1500:
          return {'pass': True, 'score': 1.0}
      return {'pass': False, 'score': 0.0}
      \`\`\`
      ### Example 5:
      **Application Description:** A customer service chatbot
      **Evaluation Question:** "Does the response start with a greeting?"
      **Output:**
      None
      Remember: When in doubt, return "None". It's better to use some other evaluation mechanism than to generate an unreliable function.
      <application_description>
      <Prompts>
      <Prompt>
      What is the capital of France?
      </Prompt>
      <Prompt>
      What is the capital of Germany?
      </Prompt>
      </Prompts>
      </application_description>
      <question>
      Is the response clear?
      </question>
      `);
  });
});
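
For context, a minimal sketch of how the synthesize helper exercised above might be called outside of Jest. It assumes only the option and return shapes the test demonstrates (provider, prompts, tests, numQuestions, type: 'pi', returning objects with metric, value, and type); the provider id and prompt text below are illustrative placeholders, not taken from this file.

import { synthesize } from '../../src/assertions/synthesis';

async function main() {
  // Same option shape as the 'synthesize' test above; the provider id and
  // prompt are hypothetical placeholder values, not part of synthesis.test.ts.
  const assertions = await synthesize({
    provider: 'openai:gpt-4o-mini', // hypothetical provider id
    prompts: ['Summarize the input article in three bullet points.'],
    tests: [],
    numQuestions: 3,
    type: 'pi',
  });

  // Each generated assertion follows the shape asserted in the test:
  // { metric: string, value: string, type: 'pi' }.
  console.log(assertions);
}

main().catch(console.error);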