API reference #223
Replies: 4 comments 5 replies
- You da man!! Thank you so much for this. I feel like I've been reverse-engineering this from the example notebooks and source code. Extremely helpful. Hope this gets added to the Guidance documentation yesterday.
- Thank you for this! Please keep this updated. It really should be added to the documentation. It's so easy to generate and add, and it is extremely helpful for those using the module. Low cost, high value!
- SO much in here that is not in the examples, and it's so much clearer than trying to reverse-engineer the examples into an API or dig into source. If you want this library to catch on, please add API documentation!
- Hey, isn't this the same as https://guidance.readthedocs.io/en/latest/api.html#library ?


For convenience, I extracted the docstrings from `guidance/library/*.py` and formatted them as Markdown.

EDIT: may be a clone of https://guidance.readthedocs.io/en/latest/api.html#library

# Guidance API reference

`add`, `assistant`, `await`, `block`, `break`, `contains`, `each`, `equal`, `gen`, `geneach`, `greater`, `if`, `less`, `parse`, `role`, `select`, `set`, `shell`, `strip`, `subtract`, `system`, `user`

## add

Add the given variables together.
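A minimal sketch of inline use (assuming the handlebars-style call syntax used throughout this reference, with `a` and `b` supplied when the program is called):

```handlebars
{{add a b}}
```

Called with, e.g., `a=2` and `b=3`, this should render the sum in place.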
## assistant

A chat role block for the `'assistant'` role. This is just shorthand for `{{#role 'assistant'}}...{{/role}}`.

Parameters

- `hidden: bool`: Whether to include the assistant block in future LLM context.
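For orientation, the three chat role blocks (`system`, `user`, `assistant`) are typically combined like this; the `{{~ ... ~}}` whitespace trimming and the `query` variable are assumptions on my part, not part of the docstrings:

```handlebars
{{#system~}}
You are a helpful assistant.
{{~/system}}

{{#user~}}
{{query}}
{{~/user}}

{{#assistant~}}
{{gen 'answer'}}
{{~/assistant}}
```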
## await

Awaits a variable by returning its value and then deleting it.

Note that this is useful for repeatedly getting values, since programs will pause when they need a value that is not yet set. This means that putting `await` in a loop will create a stateful "agent" that can repeatedly await values when called multiple times.

Parameters

- `name: str`: The name of the variable to await.
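A sketch of the "agent" pattern described above, assuming chat role blocks and a caller that supplies a (hypothetical) `user_input` variable on each invocation:

```handlebars
{{#user~}}
{{await 'user_input'}}
{{~/user}}

{{#assistant~}}
{{gen 'response'}}
{{~/assistant}}
```

The program pauses at the `await` until `user_input` is provided, then resumes.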
## block

Generic block-level element. This is useful for naming or hiding blocks of content.

Parameters

- `name: str`: The name of the block. A variable with this name will be set with the generated block content.
- `hidden: bool`: Whether to include the generated block content in future LLM context.
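A sketch of hiding an intermediate reasoning block (the variable names are illustrative, not from the docstrings):

```handlebars
{{#block 'scratchpad' hidden=True}}
Let's think step by step: {{gen 'thoughts'}}
{{/block}}
Final answer: {{gen 'answer'}}
```

The generated scratchpad text is saved in the `scratchpad` variable but excluded from future LLM context.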
## break

Breaks out of the current loop.

This is useful for breaking out of a `geneach` loop early; typically it is used inside an `{{#if ...}}...{{/if}}` block.

## contains

Check if a string contains a substring.
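A sketch of breaking out of a `geneach` loop early with `contains`; whether helpers can be nested as handlebars-style subexpressions like this is an assumption on my part:

```handlebars
{{#geneach 'steps' max_iterations=10}}
Step: {{gen 'this'}}
{{#if (contains this "DONE")}}{{break}}{{/if}}
{{/geneach}}
```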
## each

Iterate over a list and execute a block for each item.

Parameters

- `list: iterable`: The list to iterate over. Inside the block, each element will be available as `this`.
- `hidden: bool`: Whether to include the generated item blocks in future LLM context.
- `parallel: bool`: If `True`, all the items in the list are generated in parallel. Note that this is only compatible with `hidden=True`. When `parallel=True` you can no longer raise a `StopIteration` exception to stop the loop at a specific step (since the steps can be run in parallel, in any order).
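A few-shot prompting sketch; `examples` (a list of dicts) and `question` are hypothetical variables passed in when the program is called:

```handlebars
{{#each examples}}
Q: {{this.question}}
A: {{this.answer}}
{{/each}}
Q: {{question}}
A: {{gen 'answer'}}
```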
## equal

Check that all arguments are equal.
## gen

Use the LLM to generate a completion.
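A minimal sketch:

```handlebars
The capital of France is {{gen 'capital' max_tokens=5 temperature=0}}.
```

After the program runs, the completion is available in its `capital` variable.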
Parameters

- `name: str or None`: The name of a variable to store the generated value in. If `None`, the value is just returned.
- `stop: str`: The stop string to use for stopping generation. If not provided, the next node's text will be used if that text matches a closing quote, XML tag, or role end. Note that the stop string is not included in the generated value.
- `stop_regex: str`: A regular expression to use for stopping generation. If not provided, the stop string will be used.
- `save_stop_text: str or bool`: If set to a string, the exact stop text used will be saved in a variable with the given name. If set to `True`, the stop text will be saved in a variable named `name+"_stop_text"`. If set to `False`, the stop text will not be saved.
- `max_tokens: int`: The maximum number of tokens to generate in this completion.
- `n: int`: The number of completions to generate. If you generate more than one completion, the variable will be set to a list of generated values. Only the first completion will be used for future context for the LLM, so you may often want to use `hidden=True` when using `n > 1`.
- `temperature: float`: The temperature to use for generation. A higher temperature will result in more random completions. Note that caching is always on for `temperature=0`, and is seed-based for other temperatures.
- `top_p: float`: The `top_p` value to use for generation. A higher `top_p` will result in more random completions.
- `logprobs: int or None`: If set to an integer, the LLM will return that number of top log probabilities for the generated tokens, stored in a variable named `name+"_logprobs"`. If set to `None`, the log probabilities will not be returned.
- `pattern: str or None`: A regular expression pattern guide to use for generation. If set, the LLM will be forced (through guided decoding) to only generate completions that match the regular expression.
- `hidden: bool`: Whether to hide the generated value from future LLM context. This is useful for generating completions that you just want to save in a variable and not use for future context.
- `list_append: bool`: Whether to append the generated value to a list stored in the variable. If set to `True`, the variable must be a list, and the generated value will be appended to it.
- `save_prompt: str or bool`: If set to a string, the exact prompt given to the LLM will be saved in a variable with the given name.
- `token_healing: bool or None`: If set to a bool, this overrides the token healing setting for the LLM.
- `**llm_kwargs`: Any other keyword arguments will be passed to the LLM call method. This can be useful for setting LLM-specific parameters like `repetition_penalty` for Transformers models or `suffix` for some OpenAI models.

## geneach

Generate a potentially variable-length list of items using the LLM.
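A sketch of generating an HTML list, mirroring the `stop="</ul>"` example from the parameter docstrings:

```handlebars
<ul>
{{#geneach 'ideas' stop="</ul>"}}
<li>{{gen 'this'}}</li>
{{/geneach}}
</ul>
```

The generated items end up in the program's `ideas` list variable.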
Parameters

- `list_name: str`: The name of the variable to save the generated list to.
- `stop: str or list of str`: A string or list of strings that will stop the generation of the list. For example, if `stop="</ul>"` the list will be generated until the first `"</ul>"` is generated.
- `max_iterations: int`: The maximum number of items to generate.
- `min_iterations: int`: The minimum number of items to generate.
- `num_iterations: int`: The exact number of items to generate (this overrides `max_iterations` and `min_iterations`).
- `hidden: bool`: If `True`, the generated list items will not be added to the LLM's input context. This means that each item will be generated independently of the others. Note that if you use `hidden=True` you must also set `num_iterations` to a fixed number (since without adding items to the context, there is no way for the LLM to know when to stop on its own).
- `join: str`: A string to join the generated items with.
- `single_call: bool`: An option designed to make list generation more convenient for LLMs that don't support guidance acceleration. If `True`, the LLM will be called once to generate the entire list. This only works if the LLM has already been prompted to generate content that matches the format of the list. After the single call, the generated list variables will be parsed out of the generated text using a regex. (Note that only basic template tags are supported in the list items when using `single_call=True`.)
- `single_call_temperature: float`: Only used with `single_call=True`. The temperature to use when generating the list items in a single call.
- `single_call_max_tokens: int`: Only used with `single_call=True`. The maximum number of tokens to generate when generating the list items.
- `single_call_top_p: float`: Only used with `single_call=True`. The `top_p` to use when generating the list items in a single call.

## greater

Check if `arg1` is greater than `arg2`. Note that this can also be called using `>` as well as `greater`.

## if

Standard if/else statement.
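A sketch combining `if` and `greater`; nesting `greater` as a subexpression is an assumption on my part (it may need to be computed into a variable first):

```handlebars
{{#if (greater score 7)}}
High score!
{{else}}
Keep trying.
{{/if}}
```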
Parameters

- `value: bool`: The value to check. If `True`, the first block will be executed; otherwise the second block (the one after the `{{else}}`) will be executed.
- `invert: bool`: If `True`, the value will be inverted before checking.

## less

Check if `arg1` is less than `arg2`. Note that this can also be called using `<` as well as `less`.

## parse

Parse a string as a guidance program.
This is useful for dynamically generating and then running guidance programs (or parts of programs).

Parameters

- `string: str`: The string to parse.
- `name: str`: The name of the variable to set with the generated content.
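A sketch of running a dynamically built sub-program; the `saved_template` variable and the keyword form of `name` are assumptions on my part:

```handlebars
{{set 'saved_template' "2+2={{gen 'answer' max_tokens=2}}"}}
{{parse saved_template name='result'}}
```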
## role

A chat role block.
## select

Select a value from a list of choices.
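A minimal sketch of the block form, assuming the `{{or}}` separator used in Guidance's example notebooks:

```handlebars
Is this email spam? {{#select 'judgement'}}Yes{{or}}No{{/select}}
```

The chosen option is stored in the `judgement` variable.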
Parameters

- `variable_name: str`: The name of the variable to set with the selected value.
- `options: list of str or None`: An optional list of options to select from. This argument is only used when `select` is used in non-block mode.
- `logprobs: str or None`: An optional variable name to set with the logprobs for each option. If this is set, the log probs of every option are fully evaluated. When this is `None` (the default) we use a greedy max approach to select the option (similar to how greedy decoding works in a language model). So in some cases the selected option can change when `logprobs` is set, since it will be more like an exhaustive beam search scoring than a greedy max scoring.
- `list_append: bool`: Whether to append the generated value to a list stored in the variable. If set to `True`, the variable must be a list, and the generated value will be appended to it.

## set

Set the value of a variable or set of variables.
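A minimal sketch of the single-variable form:

```handlebars
{{set 'name' 'Alice'}}
Hello {{name}}!
```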
Parameters

- `name: str or dict`: If a string, the name of the variable to set. If a dict, the keys are the variable names and the values are the values to set.
- `value: str, optional`: The value to set the variable to. Only used if `name` is a string.
- `hidden: bool, optional`: If `True`, the variable will be set but not printed in the output.

## shell

Send a command to the shell and return the output.
## strip

Strip whitespace from the beginning and end of the given string.

Parameters

- `string: str`: The string to strip.
## subtract

Subtract the second variable from the first.

Parameters

- `minuend: int or float`: The number to subtract from.
- `subtrahend: int or float`: The number to subtract.
## system

A chat role block for the `'system'` role. This is just shorthand for `{{#role 'system'}}...{{/role}}`.

Parameters

- `hidden: bool`: Whether to include the system block in future LLM context.

## user

A chat role block for the `'user'` role. This is just shorthand for `{{#role 'user'}}...{{/role}}`.

Parameters

- `hidden: bool`: Whether to include the user block in future LLM context.