diff --git a/docs/latest/pipeline_nodes/custom_nodes.mdx b/docs/latest/pipeline_nodes/custom_nodes.mdx
index 6bb192a57..518d2d2e6 100644
--- a/docs/latest/pipeline_nodes/custom_nodes.mdx
+++ b/docs/latest/pipeline_nodes/custom_nodes.mdx
@@ -6,8 +6,8 @@ In Haystack, you can create your own nodes and use them stand-alone or in Pipeli
 
 Here's how you do it:
 
-1. Create new class that inherits from `BaseComponent`.
-2. Define the number of `outgoing_edges` as a class attribute. Decision nodes have more than one outgoing edge.
+1. Create a new class that inherits from `BaseComponent`.
+2. If your node's output will be routed to a fixed number of nodes, set `outgoing_edges` as a class attribute. Most nodes have one outgoing edge. Decision nodes have more than one outgoing edge. If your node has a variable number of outgoing edges, define `CustomNode._calculate_outgoing_edges()` to return that number. See [`FileClassifier._calculate_outgoing_edges()`](https://github.com/deepset-ai/haystack/blob/862ac31b5c4349519cc2c472bca5bfcee1944c95/haystack/nodes/file_classifier/file_type.py#L54) for an example.
 3. Define a `run()` method that is executed when the Pipeline calls your node. The input arguments should consist of all configuration parameters that you want to allow and the data arguments that you expect as input from a previous node. For example, data parameters can be `documents`, `query`, `file_paths`, and so on.
 4. Set `run()` to return a tuple. The first element of this tuple is an output dictionary of the data you want to pass to the next node. The second element in the tuple is the name of the outgoing edge (usually `output_1`).
 5. Define a `run_batch()` method that makes it possible for query pipelines to accept more than one query as input. You can define the same input arguments for it as you did for the `run()` method.
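The five steps in the revised list can be sketched roughly as follows. This is a minimal illustration, not Haystack's actual implementation: in real code you would import `BaseComponent` from `haystack.nodes.base`, and a tiny stand-in class is defined here only so the sketch runs without Haystack installed. The node name `DocumentLengthFilter` and its `min_length` parameter are hypothetical.

```python
# Stand-in for Haystack's BaseComponent so this sketch is self-contained.
# In real code: from haystack.nodes.base import BaseComponent
class BaseComponent:
    outgoing_edges = 1


class DocumentLengthFilter(BaseComponent):
    # Step 2: a fixed number of outgoing edges, set as a class attribute.
    # One edge means all output goes to a single successor node.
    outgoing_edges = 1

    def __init__(self, min_length: int = 10):
        # Configuration parameter exposed to the pipeline user.
        self.min_length = min_length

    # Steps 3 and 4: run() takes the data arguments expected from the
    # previous node and returns (output_dict, edge_name).
    def run(self, documents: list):
        kept = [doc for doc in documents if len(doc) >= self.min_length]
        return {"documents": kept}, "output_1"

    # Step 5: run_batch() accepts several inputs at once by applying
    # run() to each batch in turn.
    def run_batch(self, documents: list):
        results = [self.run(documents=batch)[0] for batch in documents]
        return {"batches": results}, "output_1"
```

A pipeline would then call `run()` with the output of the preceding node, and route the returned dictionary along the `output_1` edge.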