Bases: ActionNode
An action node that uses a causal language model to generate
some text based on a prompt contained in the node's blackboard.

This node is based on the HFLM library, and will download the model
that you specify by name. This can take a long time and/or use a lot
of storage, depending on the model you name.

There are enough configuration options for this type of node that
they have all been placed in a dataclass config object. See the
documentation for that object to learn about the many options
available to you.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_cfg` | `HFLMConfig` | The configuration object for the language model used by this node. | *required* |
| `node_cfg` | `LMActionConfig` | The configuration object for this node (name, blackboard keys, and generation settings). | *required* |
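For orientation, here is a minimal usage sketch. The constructor arguments, the config fields read in `__init__`, and the blackboard indexing follow the source below; the import paths and the `BehaviorTree` constructor and `tick_once()` call are assumptions, so check them against the rest of the Dendron documentation.

```python
# Minimal usage sketch. Field names (node_name, input_key, output_key,
# max_new_tokens, temperature, model_name) come from the source below;
# import paths and the BehaviorTree/tick_once() API are assumptions.
from dendron import BehaviorTree
from dendron.actions.generate_action import GenerateAction
from dendron.configs import HFLMConfig, LMActionConfig

model_cfg = HFLMConfig(model_name="gpt2")        # model downloaded by name
node_cfg = LMActionConfig(
    node_name="generate",
    input_key="prompt",        # blackboard slot read at tick()
    output_key="response",     # blackboard slot written at tick()
    max_new_tokens=64,
    temperature=0.7,           # 0.0 would disable sampling (greedy decoding)
)

node = GenerateAction(model_cfg, node_cfg)
tree = BehaviorTree("demo_tree", node)           # set_tree() registers the model
tree.blackboard["prompt"] = "Write a haiku about behavior trees."
tree.tick_once()
print(tree.blackboard["response"])
```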
Source code in src/dendron/actions/generate_action.py
```python
class GenerateAction(ActionNode):
    """
    An action node that uses a causal language model to generate
    some text based on a prompt contained in the node's blackboard.

    This node is based on the HFLM library, and will download the model
    that you specify by name. This can take a long time and/or use a lot
    of storage, depending on the model you name.

    There are enough configuration options for this type of node that
    they have all been placed in a dataclass config object. See the
    documentation for that object to learn about the many options
    available to you.

    Args:
        model_cfg (HFLMConfig):
            The configuration object for this node's language model.
        node_cfg (LMActionConfig):
            The configuration object for this node.
    """
    def __init__(self, model_cfg : HFLMConfig, node_cfg : LMActionConfig) -> None:
        super().__init__(node_cfg.node_name)

        self.input_key = node_cfg.input_key
        self.output_key = node_cfg.output_key
        self.max_new_tokens = node_cfg.max_new_tokens
        self.temperature = node_cfg.temperature

        if self.temperature == 0.0:
            self.do_sample = False
        else:
            self.do_sample = True

        self.node_config = node_cfg
        self.model_config = model_cfg

        self.input_processor = None
        self.output_processor = None

    def set_tree(self, tree : BehaviorTree) -> None:
        self.tree = tree
        self.set_blackboard(tree.blackboard)
        tree.add_model(self.model_config)

    def set_model(self, new_model) -> None:
        """
        TODO: This should take a model config or a name, and propagate
        that to the tree's model map.
        """
        self.model = new_model

    def set_input_processor(self, f : Callable) -> None:
        """
        Set the input processor to use during `tick()`s.

        An input processor is applied to the prompt text stored in the
        blackboard, and can be used to preprocess the prompt. The
        processor function should be a map from `str` to `str`. During a
        `tick()`, the output of this function will be what is tokenized
        and sent to the model for generation.

        Args:
            f (Callable):
                The input processor function to use. Should be a callable
                object that maps (self, Any) to str.
        """
        self.input_processor = types.MethodType(f, self)

    def set_output_processor(self, f : Callable) -> None:
        """
        Set the output processor to use during `tick()`s.

        An output processor is applied to the text generated by the model,
        before that text is written to the output slot of the blackboard.
        The function should be a map from `str` to `str`.

        A typical example of an output processor would be a function that
        removes the prompt from the text returned by a model, so that only
        the newly generated text is written to the blackboard.

        Args:
            f (Callable):
                The output processor function. Should be a callable object
                that maps from (self, str) to Any.
        """
        self.output_processor = types.MethodType(f, self)

    def tick(self) -> NodeStatus:
        """
        Execute a tick, consisting of the following steps:

        - Retrieve a prompt from the node's blackboard, using the input_key.
        - Apply the input processor, if one exists.
        - Tokenize the prompt text.
        - Generate new tokens based on the prompt.
        - Decode the model output into a text string.
        - Apply the output processor, if one exists.
        - Write the result back to the blackboard, using the output_key.

        If any of the above fail, the exception text is printed and the node
        returns a status of `FAILURE`. Otherwise the node returns `SUCCESS`. If
        you want to use a language model to make decisions, consider looking at
        the `CompletionConditionNode`.
        """
        try:
            input_text = self.blackboard[self.input_key]

            if self.input_processor:
                input_text = self.input_processor(input_text)

            output_text = self.tree.get_model(self.model_config.model_name).generate_until(
                [(input_text,
                  {'max_new_tokens': self.max_new_tokens,
                   "temperature": self.temperature,
                   "do_sample": self.do_sample})],
                disable_tqdm=True)[0]

            if self.output_processor:
                output_text = self.output_processor(output_text)

            self.blackboard[self.output_key] = output_text

            return NodeStatus.SUCCESS
        except Exception as ex:
            print(f"Exception in node {self.name}:")
            print(traceback.format_exc())
            return NodeStatus.FAILURE
```
set_tree(tree)
Source code in src/dendron/actions/generate_action.py
```python
def set_tree(self, tree : BehaviorTree) -> None:
    self.tree = tree
    self.set_blackboard(tree.blackboard)
    tree.add_model(self.model_config)
```
set_model(new_model)
TODO: This should take a model config or a name, and propagate
that to the tree's model map.
Source code in src/dendron/actions/generate_action.py
```python
def set_model(self, new_model) -> None:
    """
    TODO: This should take a model config or a name, and propagate
    that to the tree's model map.
    """
    self.model = new_model
```
set_input_processor(f)
Set the input processor to use during `tick()`s.
An input processor is applied to the prompt text stored in the
blackboard, and can be used to preprocess the prompt. The
processor function should be a map from `str` to `str`. During a
`tick()`, the output of this function will be what is tokenized
and sent to the model for generation.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `f` | `Callable` | The input processor function to use. Should be a callable object that maps (self, Any) to str. | *required* |
Source code in src/dendron/actions/generate_action.py
```python
def set_input_processor(self, f : Callable) -> None:
    """
    Set the input processor to use during `tick()`s.

    An input processor is applied to the prompt text stored in the
    blackboard, and can be used to preprocess the prompt. The
    processor function should be a map from `str` to `str`. During a
    `tick()`, the output of this function will be what is tokenized
    and sent to the model for generation.

    Args:
        f (Callable):
            The input processor function to use. Should be a callable
            object that maps (self, Any) to str.
    """
    self.input_processor = types.MethodType(f, self)
```
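As an illustration, here is a sketch of an input processor that wraps the raw prompt in a simple instruction template before generation. The `(self, text)` signature matches the way `set_input_processor` binds the callable with `types.MethodType`; the template itself is arbitrary and just an example.

```python
# Input processor sketch: bound as a method, so `self` is the GenerateAction
# node and the blackboard/config are available if needed.
def wrap_as_instruction(self, text):
    return f"### Instruction:\n{text}\n\n### Response:\n"

node.set_input_processor(wrap_as_instruction)
# During the next tick(), the wrapped string (not the raw blackboard value)
# is what gets tokenized and sent to the model.
```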
set_output_processor(f)
Set the output processor to use during `tick()`s.
An output processor is applied to the text generated by the model,
before that text is written to the output slot of the blackboard.
The function should be a map from `str` to `str`.
A typical example of an output processor would be a function that
removes the prompt from the text returned by a model, so that only
the newly generated text is written to the blackboard.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `f` | `Callable` | The output processor function. Should be a callable object that maps from (self, str) to Any. | *required* |
Source code in src/dendron/actions/generate_action.py
```python
def set_output_processor(self, f : Callable) -> None:
    """
    Set the output processor to use during `tick()`s.

    An output processor is applied to the text generated by the model,
    before that text is written to the output slot of the blackboard.
    The function should be a map from `str` to `str`.

    A typical example of an output processor would be a function that
    removes the prompt from the text returned by a model, so that only
    the newly generated text is written to the blackboard.

    Args:
        f (Callable):
            The output processor function. Should be a callable object
            that maps from (self, str) to Any.
    """
    self.output_processor = types.MethodType(f, self)
```
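Below is a sketch of the prompt-stripping processor described above. Because `set_output_processor` binds the function with `types.MethodType`, it can read the original prompt back off the node's blackboard; if an input processor rewrote the prompt, you would need to strip the processed version instead.

```python
# Output processor sketch: drop the echoed prompt so only newly generated
# text is written to the blackboard's output slot.
def strip_prompt(self, generated_text):
    prompt = self.blackboard[self.input_key]
    if generated_text.startswith(prompt):
        return generated_text[len(prompt):].lstrip()
    return generated_text

node.set_output_processor(strip_prompt)
```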
tick()
Execute a tick, consisting of the following steps:
- Retrieve a prompt from the node's blackboard, using the input_key.
- Apply the input processor, if one exists.
- Tokenize the prompt text.
- Generate new tokens based on the prompt.
- Decode the model output into a text string.
- Apply the output processor, if one exists.
- Write the result back to the blackboard, using the output_key.
If any of the above fail, the exception text is printed and the node
returns a status of `FAILURE`. Otherwise the node returns `SUCCESS`. If
you want to use a language model to make decisions, consider looking at
the `CompletionConditionNode`.
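For illustration, here is the blackboard contract during a single tick, assuming the `tree` and `node` from the earlier usage sketch (the `NodeStatus` import path is an assumption):

```python
from dendron import NodeStatus   # import path is an assumption

tree.blackboard["prompt"] = "Explain behavior trees in one sentence."
status = node.tick()

if status == NodeStatus.SUCCESS:
    print(tree.blackboard["response"])   # generated, post-processed text
else:
    # Any exception inside tick() (missing key, model error, ...) is caught,
    # its traceback printed, and reported here as FAILURE.
    print("generation failed")
```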
Source code in src/dendron/actions/generate_action.py
```python
def tick(self) -> NodeStatus:
    """
    Execute a tick, consisting of the following steps:

    - Retrieve a prompt from the node's blackboard, using the input_key.
    - Apply the input processor, if one exists.
    - Tokenize the prompt text.
    - Generate new tokens based on the prompt.
    - Decode the model output into a text string.
    - Apply the output processor, if one exists.
    - Write the result back to the blackboard, using the output_key.

    If any of the above fail, the exception text is printed and the node
    returns a status of `FAILURE`. Otherwise the node returns `SUCCESS`. If
    you want to use a language model to make decisions, consider looking at
    the `CompletionConditionNode`.
    """
    try:
        input_text = self.blackboard[self.input_key]

        if self.input_processor:
            input_text = self.input_processor(input_text)

        output_text = self.tree.get_model(self.model_config.model_name).generate_until(
            [(input_text,
              {'max_new_tokens': self.max_new_tokens,
               "temperature": self.temperature,
               "do_sample": self.do_sample})],
            disable_tqdm=True)[0]

        if self.output_processor:
            output_text = self.output_processor(output_text)

        self.blackboard[self.output_key] = output_text

        return NodeStatus.SUCCESS
    except Exception as ex:
        print(f"Exception in node {self.name}:")
        print(traceback.format_exc())
        return NodeStatus.FAILURE
```