`docs/docs/module_guides/workflow/index.md`
```python
handler = w.run(ctx=handler.ctx)
result = await handler
```
## Resources

Resources are external dependencies you can inject into the steps of a workflow.

As a simple example, look at `memory` in the following workflow:

```python
from llama_index.core.workflow.resource import Resource

# ... (other imports, the `get_memory` factory, and earlier steps
#      elided in this excerpt) ...


class WorkflowWithResource(Workflow):
    # ... (step definitions elided; the final step ends with) ...
        return StopEvent(result="Messages put into memory")
```

To inject a resource into a workflow step, add a parameter to the step signature and declare its type using `Annotated`, invoking the `Resource()` wrapper with a function or callable that returns the actual resource object. The return type of the wrapped function must match the declared type, ensuring consistency between what's expected and what's provided during execution. In the example above, `memory: Annotated[Memory, Resource(get_memory)]` defines a resource of type `Memory` that will be provided by the `get_memory()` function and passed to the step in the `memory` parameter when the workflow runs.
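To make the mechanism concrete, here is a self-contained, plain-Python sketch of how such injection can work. `Resource`, `Memory`, `get_memory`, and `inject_and_run` below are local stand-ins for illustration, not llama-index's actual implementation: the marker is discovered in the `Annotated` metadata of the step signature, the factory runs once, and both steps receive the same instance.

```python
import asyncio
import typing
from dataclasses import dataclass


@dataclass
class Resource:
    """Stand-in marker holding the factory that builds the resource."""

    factory: typing.Callable
    cache: bool = True


class Memory:
    """Stand-in for a memory component."""

    def __init__(self) -> None:
        self.messages: list = []


def get_memory() -> Memory:
    return Memory()


async def first_step(memory: typing.Annotated[Memory, Resource(get_memory)]):
    memory.messages.append("first")
    return memory


async def second_step(memory: typing.Annotated[Memory, Resource(get_memory)]):
    memory.messages.append("second")
    return memory


def inject_and_run(steps):
    """Find Resource markers in each step's hints and inject instances."""
    cache = {}
    results = []
    for step_fn in steps:
        hints = typing.get_type_hints(step_fn, include_extras=True)
        kwargs = {}
        for name, hint in hints.items():
            for meta in getattr(hint, "__metadata__", ()):
                if isinstance(meta, Resource):
                    if meta.cache:
                        if meta.factory not in cache:
                            cache[meta.factory] = meta.factory()
                        kwargs[name] = cache[meta.factory]
                    else:
                        kwargs[name] = meta.factory()
        results.append(asyncio.run(step_fn(**kwargs)))
    return results


m1, m2 = inject_and_run([first_step, second_step])
assert m1 is m2  # the factory ran once; both steps share the instance
assert m1.messages == ["first", "second"]
```

The real `Resource()` wrapper does considerably more, but the core idea is the same: the annotation carries both the expected type and the factory that produces it.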

Resources are shared among the steps of a workflow, and the `Resource()` wrapper will invoke the factory function only once. If this is not the desired behavior, passing `cache=False` to `Resource()` will inject a different resource object into each step, invoking the factory function once per step.
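A rough sketch of the difference this flag makes, again with local stand-in names (`ResourceSpec`, `make_list`, `run`) rather than the real API: with caching the factory runs once for two steps, while with `cache=False` it runs once per step.

```python
class ResourceSpec:
    """Stand-in for the Resource() wrapper, carrying the cache flag."""

    def __init__(self, factory, cache=True):
        self.factory = factory
        self.cache = cache


calls = {"n": 0}


def make_list() -> list:
    calls["n"] += 1
    return []


def run(steps, spec):
    """Inject the resource into each step, honouring spec.cache."""
    cached = None
    for step_fn in steps:
        if spec.cache:
            if cached is None:
                cached = spec.factory()
            step_fn(cached)
        else:
            step_fn(spec.factory())


def append_step(res):
    res.append("x")


calls["n"] = 0
run([append_step, append_step], ResourceSpec(make_list, cache=True))
shared_calls = calls["n"]  # factory invoked once for both steps

calls["n"] = 0
run([append_step, append_step], ResourceSpec(make_list, cache=False))
fresh_calls = calls["n"]  # factory invoked once per step

assert (shared_calls, fresh_calls) == (1, 2)
```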
`docs/docs/understanding/workflows/resources.md`
# Resources

Resources are external dependencies such as memory, LLMs, query engines or chat history instances that are injected into workflow steps at runtime.

Resources are a powerful way of binding workflow steps to Python objects that we would otherwise need to create by hand every time. For performance reasons, resources are cached by default within a workflow, meaning the same resource instance is passed to every step where it is injected. It is important to master this concept, because cached and non-cached resources can lead to unexpected behaviour. Let's see it in detail.

## Resources are cached by default

First of all, to use resources within our code, we need to import `Resource` from the `resource` submodule:

```python
from llama_index.core.workflow.resource import Resource
from llama_index.core.workflow import (
    # ... (imported names elided in this excerpt) ...
)
```

`Resource` wraps a function or callable that must return an object of the same type as the one declared in the resource annotation.
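As an illustrative sketch of that contract (the `Memory` and `Resource` classes below are local stand-ins, not the real llama-index ones): the factory's annotated return type must be the same type that appears first in the `Annotated` declaration.

```python
from typing import Annotated, get_args, get_type_hints


class Memory:
    """Stand-in for llama-index's memory component."""


class Resource:
    """Stand-in marker holding the factory callable."""

    def __init__(self, factory):
        self.factory = factory


def get_memory() -> Memory:
    return Memory()


declared = Annotated[Memory, Resource(get_memory)]

# The first argument of Annotated is the declared type; it must match
# the factory's return annotation.
declared_type = get_args(declared)[0]
factory_return = get_type_hints(get_memory)["return"]
assert declared_type is factory_return  # both are Memory
```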

In the example above, `Annotated[Memory, Resource(get_memory)]` defines a resource of type `Memory` that will be provided at runtime by the `get_memory()` function. A resource defined like this can be injected into a step by passing it as a method parameter:

```python
import random

# ... (events, the `get_memory` factory, and earlier steps
#      elided in this excerpt) ...


class WorkflowWithMemory(Workflow):
    # ... (step definitions elided; the final step ends with) ...
        return StopEvent(result=messages)
```

As you can see, each step has access to the `memory` resource and can write to it. It is important to note that `get_memory()` will be called only once, and the same memory instance will be injected into the different steps. We can see that this is the case by running the workflow:

```python
wf = WorkflowWithMemory(disable_validation=True)
# ... (running the workflow and printing the result elided in this excerpt) ...
```

```
Third step: Hello World!
```

This shows that each step added its message to a global memory, which is exactly what we were expecting!

Note that resources are preserved across steps of the same workflow instance, but not across different workflow instances. If we were to run two `WorkflowWithMemory` instances, `get_memory` would be called once for each workflow, and as a result their memories would be separate and independent:

```python
wf1 = WorkflowWithMemory(disable_validation=True)
# ... (creating and running both workflow instances elided in this excerpt) ...
```

```
First step: Happy New Year!
Second step: Python is awesome!
```

## Disable resource caching

If we pass `cache=False` to `Resource` when defining a resource, the wrapped function is called every time the resource is injected into a step. This behaviour can be desirable at times. Let's see a simple example using a custom `Counter` class:

```python
from pydantic import BaseModel, Field

# ... (the `Counter` model and workflow definition elided in this excerpt) ...
```

```
Counter at first step: 1
Counter at second step: 1
```
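The printed result above can be reproduced with a plain-Python stand-in (a stdlib `Counter` dataclass and a hand-rolled injector, not the real `Resource`): with caching disabled each step receives a freshly built counter, so each one increments its own copy starting from zero.

```python
from dataclasses import dataclass


@dataclass
class Counter:
    count: int = 0

    def increment(self) -> None:
        self.count += 1


def get_counter() -> Counter:
    return Counter()


def run_steps(steps, factory, cache=True):
    """Inject a counter into each step; rebuild it per step when cache=False."""
    shared = None
    observed = []
    for step_fn in steps:
        if cache:
            if shared is None:
                shared = factory()
            counter = shared
        else:
            counter = factory()
        step_fn(counter)
        observed.append(counter.count)
    return observed


def bump(counter: Counter) -> None:
    counter.increment()


# cache=False: each step gets its own Counter, so both observe 1
assert run_steps([bump, bump], get_counter, cache=False) == [1, 1]
# default caching: the same Counter accumulates, 1 then 2
assert run_steps([bump, bump], get_counter, cache=True) == [1, 2]
```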

## A note about stateful and stateless resources

As we have seen, cached resources are expected to be **stateful**, meaning that they can maintain their state across different workflow runs and different steps, unless otherwise specified. But this does not mean we can consider a resource **stateless** just because we disable caching. Let's see an example:
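The example itself is cut off in this excerpt. To illustrate the point with stand-in names (not the real API): if the factory returns a module-level object, then even with `cache=False` every "fresh" injection hands back the same underlying instance, and the resource remains stateful.

```python
from dataclasses import dataclass


@dataclass
class Counter:
    count: int = 0

    def increment(self) -> None:
        self.count += 1


# Module-level instance shared by every factory call
_global_counter = Counter()


def get_global_counter() -> Counter:
    # With cache=False this would run once per step, yet it always
    # returns the same object -- the resource is still stateful.
    return _global_counter


observed = []
for _ in range(2):  # simulate two steps injected with cache=False
    counter = get_global_counter()
    counter.increment()
    observed.append(counter.count)

assert observed == [1, 2]  # state accumulated across "fresh" injections
```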
0 commit comments