Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.
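The default behavior described above can be sketched in plain TypeScript. This is an illustrative stand-in, not the LangChain source; the names `Invoke`, `defaultBatch`, and `double` are hypothetical:

```typescript
// Illustrative sketch of the default batch: call invoke once per input,
// in order, and collect the results. Names here are hypothetical.
type Invoke<I, O> = (input: I) => Promise<O>;

async function defaultBatch<I, O>(invoke: Invoke<I, O>, inputs: I[]): Promise<O[]> {
  // Promise.all preserves input order in its results.
  return Promise.all(inputs.map((input) => invoke(input)));
}

// Usage: a toy runnable that doubles a number.
const double: Invoke<number, number> = async (n) => n * 2;
```

A subclass that can issue one bulk request (for example, a single API call covering all inputs) would override this to avoid N separate round trips.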
Array of inputs to each batch call.
options (optional): Partial<BaseCallbackConfig> | Partial<BaseCallbackConfig>[]
Either a single call options object to apply to each batch call or an array for each call.
batchOptions (optional): RunnableBatchOptions
Returns an array of RunOutputs, or mixed RunOutputs and errors if batchOptions.returnExceptions is set.
Bind arguments to a Runnable, returning a new Runnable.
A new RunnableBinding that, when invoked, will apply the bound args.
Calls the parser with a given input and optional configuration options. If the input is a string, it creates a generation with the input as text and calls parseResult. If the input is a BaseMessage, it creates a generation with the input as a message and the content of the input as text, and then calls parseResult.
The input to the parser, which can be a string or a BaseMessage.
options (optional): BaseCallbackConfig
Optional configuration options.
A promise of the parsed output.
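The string-vs-BaseMessage dispatch described above can be sketched as follows. The types are simplified stand-ins, not the real LangChain Generation or BaseMessage, and `toGeneration` is a hypothetical name:

```typescript
// Hedged sketch of the invoke dispatch: a string becomes a text-only
// generation; a message keeps both the message and its content as text.
interface FakeMessage { content: string }
type ParserInput = string | FakeMessage;

function toGeneration(input: ParserInput): { text: string; message?: FakeMessage } {
  if (typeof input === "string") {
    return { text: input };                       // string input: text generation
  }
  return { text: input.content, message: input }; // message input: keep both
}
```

In the real class, the resulting generation is then handed to parseResult along with any configuration options.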
Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input.
Parses the given text into an AgentAction or AgentFinish object. If an output fixing parser is defined, uses it to parse the text.
Text to parse.
Promise that resolves to an AgentAction or AgentFinish object.
Parses the result of an LLM call. This method is meant to be implemented by subclasses to define how the output from the LLM should be parsed.
The generations from an LLM call.
callbacks (optional): Callbacks
Optional callbacks.
A promise of the parsed output.
Parses the result of an LLM call with a given prompt. By default, it simply calls parseResult.
The generations from an LLM call.
The prompt used in the LLM call.
callbacks (optional): Callbacks
Optional callbacks.
A promise of the parsed output.
Create a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.
A runnable, function, or object whose values are functions or runnables.
A new runnable sequence.
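The piping behavior can be sketched as simple async function composition. `Step` and `pipeSteps` are hypothetical names for illustration, not LangChain APIs:

```typescript
// Sketch of sequencing: each step's output is piped into the next step.
type Step<I, O> = (input: I) => Promise<O>;

function pipeSteps<A, B, C>(first: Step<A, B>, second: Step<B, C>): Step<A, C> {
  return async (input: A) => second(await first(input));
}

// Usage: number -> number -> string, composed into one sequence.
const addOne: Step<number, number> = async (n) => n + 1;
const toStr: Step<number, string> = async (n) => `value=${n}`;
const seq = pipeSteps(addOne, toStr);
```

The real pipe additionally coerces functions and plain objects into runnables before composing them.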
Stream output in chunks.
options (optional): Partial<BaseCallbackConfig>
Returns a readable stream that is also an iterable.
Stream all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.
options (optional): Partial<BaseCallbackConfig>
streamOptions (optional): Omit<LogStreamCallbackHandlerInput, "autoClose">
Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.
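The buffering default described above can be sketched with async iterables. `bufferedTransform` is a hypothetical name, and the string-based stream is a simplification of the real chunk types:

```typescript
// Sketch of the default transform: buffer the entire input stream first,
// then delegate to a stream() call over the concatenated input.
async function* bufferedTransform(
  inputs: AsyncIterable<string>,
  stream: (input: string) => AsyncIterable<string>
): AsyncIterable<string> {
  let buffer = "";
  for await (const chunk of inputs) buffer += chunk; // buffer everything first
  yield* stream(buffer);                             // then stream the output
}
```

An override that processes chunks incrementally could start yielding output before the input iterator is exhausted.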
Bind config to a Runnable, returning a new Runnable.
New configuration parameters to attach to the new runnable.
A new RunnableBinding with a config matching what's passed.
Create a new runnable from the current one that will try invoking other passed fallback runnables if the initial invocation fails.
Other runnables to call if the runnable errors.
A new RunnableWithFallbacks.
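The fallback semantics can be sketched as follows. `Run` and `runWithFallbacks` are hypothetical names, not the actual RunnableWithFallbacks implementation:

```typescript
// Sketch of fallback semantics: try the primary runnable, then each
// fallback in order; return the first success, rethrow the last error.
type Run<I, O> = (input: I) => Promise<O>;

function runWithFallbacks<I, O>(primary: Run<I, O>, fallbacks: Run<I, O>[]): Run<I, O> {
  return async (input: I) => {
    let lastError: unknown;
    for (const candidate of [primary, ...fallbacks]) {
      try {
        return await candidate(input);
      } catch (e) {
        lastError = e; // remember the failure and move to the next candidate
      }
    }
    throw lastError; // every candidate failed
  };
}
```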
Add retry logic to an existing runnable.
fields (optional): { on…; stop… }
Returns a new RunnableRetry that, when invoked, will retry according to the parameters.
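The retry behavior can be sketched like this. The `stopAfterAttempt` cap and the `onFailedAttempt`-style hook mentioned in the comment are assumptions modeled on common retry options; `Attempt` and `retrying` are hypothetical names:

```typescript
// Sketch of retry semantics: re-invoke on failure up to a fixed number of
// attempts, then rethrow the last error. Illustrative, not the real API.
type Attempt<I, O> = (input: I) => Promise<O>;

function retrying<I, O>(run: Attempt<I, O>, stopAfterAttempt = 3): Attempt<I, O> {
  return async (input: I) => {
    let lastError: unknown;
    for (let attempt = 1; attempt <= stopAfterAttempt; attempt++) {
      try {
        return await run(input);
      } catch (e) {
        lastError = e; // an onFailedAttempt-style hook could be called here
      }
    }
    throw lastError; // all attempts exhausted
  };
}
```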
Parses ReAct-style LLM calls that have a single tool input.
Expects output to be in one of two formats.
If the output signals that an action should be taken, it should be in the below format. This will result in an AgentAction being returned.
If the output signals that a final answer should be given, it should be in the below format. This will result in an AgentFinish being returned.
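The two shapes can be illustrated with a minimal parser sketch. The real parsing logic lives in LangChain; the `Thought:`/`Action:`/`Action Input:`/`Final Answer:` markers are the standard ReAct conventions, and the result types here are simplified stand-ins for AgentAction and AgentFinish:

```typescript
// Illustrative only: branch on "Final Answer:" vs "Action:"/"Action Input:".
type AgentAction = { tool: string; toolInput: string; log: string };
type AgentFinish = { returnValues: { output: string }; log: string };

function parseReAct(text: string): AgentAction | AgentFinish {
  const finalMatch = text.match(/Final Answer:\s*([\s\S]*)/);
  if (finalMatch) {
    // Final-answer format, e.g. "Thought: ...\nFinal Answer: ..."
    return { returnValues: { output: finalMatch[1].trim() }, log: text };
  }
  const actionMatch = text.match(/Action:\s*(.*)\nAction Input:\s*([\s\S]*)/);
  if (actionMatch) {
    // Action format, e.g. "Thought: ...\nAction: ...\nAction Input: ..."
    return { tool: actionMatch[1].trim(), toolInput: actionMatch[2].trim(), log: text };
  }
  throw new Error(`Could not parse LLM output: ${text}`);
}
```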