Commit ec3c15e

Merge branch 'main' into mm-ui
2 parents 7c3eb06 + 0edb31f commit ec3c15e

29 files changed, +833 -6996 lines changed

docs/contributing/LOCAL_DEVELOPMENT.md

Lines changed: 190 additions & 0 deletions
@@ -81,3 +81,193 @@ pytest --cov; open ./coverage/html/index.html
<!--#TODO: get input from blessedcoolant here, for the moment inserted the frontend README via snippets extension.-->

--8<-- "invokeai/frontend/web/README.md"

## Developing InvokeAI in VSCode

VSCode offers some nice tools:

- python debugger
- automatic `venv` activation
- remote dev (e.g. run InvokeAI on a beefy linux desktop while you type in
  comfort on your macbook)

### Setup

You'll need the
[Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
and
[Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
extensions installed first.

It's also really handy to install the `Jupyter` extensions:

- [Jupyter](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter)
- [Jupyter Cell Tags](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-jupyter-cell-tags)
- [Jupyter Notebook Renderers](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter-renderers)
- [Jupyter Slide Show](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.vscode-jupyter-slideshow)
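
The same extensions can also be installed from a terminal with VSCode's CLI. A minimal sketch, using the extension IDs from the marketplace links above:

```bash
# Install the recommended extensions via the `code` CLI
code --install-extension ms-python.python
code --install-extension ms-python.vscode-pylance
code --install-extension ms-toolsai.jupyter
code --install-extension ms-toolsai.vscode-jupyter-cell-tags
code --install-extension ms-toolsai.jupyter-renderers
code --install-extension ms-toolsai.vscode-jupyter-slideshow
```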

#### InvokeAI workspace

Creating a VSCode workspace for working on InvokeAI is highly recommended. It
can hold InvokeAI-specific settings and configs.

To make a workspace:

- Open the InvokeAI repo dir in VSCode
- `File` > `Save Workspace As` > save it _outside_ the repo

#### Default python interpreter (i.e. automatic virtual environment activation)

- Use the command palette to run
  `Preferences: Open Workspace Settings (JSON)`
- Add `python.defaultInterpreterPath` to `settings`, pointing to your `venv`'s
  python

Should look something like this:

```json
{
  // I like to have all InvokeAI-related folders in my workspace
  "folders": [
    {
      // repo root
      "path": "InvokeAI"
    },
    {
      // InvokeAI root dir, where `invokeai.yaml` lives
      "path": "/path/to/invokeai_root"
    }
  ],
  "settings": {
    // Where your InvokeAI `venv`'s python executable lives
    "python.defaultInterpreterPath": "/path/to/invokeai_root/.venv/bin/python"
  }
}
```

Now when you open the VSCode integrated terminal, or do anything that needs to
run python, it will automatically be in your InvokeAI virtual environment.
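
To double-check, run a quick sanity check in the integrated terminal; both commands should point at the python inside your InvokeAI `venv` (the path below is just an example):

```bash
# Expect something like /path/to/invokeai_root/.venv/bin/python
which python
python -c "import sys; print(sys.executable)"
```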

Bonus: when you create and run a Jupyter notebook, you'll be prompted for the
python interpreter to run in. This will default to your `venv` python, and so
you'll have access to the same python environment as the InvokeAI app.

This is _super_ handy.

#### Debugging configs with `launch.json`

Debugging configs are managed in a `launch.json` file. Like most VSCode configs,
these can be scoped to a workspace or folder.

Follow the [official guide](https://code.visualstudio.com/docs/python/debugging)
to set up your `launch.json` and try it out.

Now we can create the InvokeAI debugging configs:

```json
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      // Run the InvokeAI backend & serve the pre-built UI
      "name": "InvokeAI Web",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-web.py",
      "args": [
        // Your InvokeAI root dir (where `invokeai.yaml` lives)
        "--root",
        "/path/to/invokeai_root",
        // Access the app from anywhere on your local network
        "--host",
        "0.0.0.0"
      ],
      "justMyCode": true
    },
    {
      // Run the nodes-based CLI
      "name": "InvokeAI CLI",
      "type": "python",
      "request": "launch",
      "program": "scripts/invokeai-cli.py",
      "justMyCode": true
    },
    {
      // Run tests
      "name": "InvokeAI Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": ["--capture=no"],
      "justMyCode": true
    },
    {
      // Run a single test
      "name": "InvokeAI Single Test",
      "type": "python",
      "request": "launch",
      "module": "pytest",
      "args": [
        // Change this to point to the specific test you are working on
        "tests/nodes/test_invoker.py"
      ],
      "justMyCode": true
    },
    {
      // This is the default, useful to just run a single file
      "name": "Python: File",
      "type": "python",
      "request": "launch",
      "program": "${file}",
      "justMyCode": true
    }
  ]
}
```

You'll see these configs in the debugging configs drop down. Running them will
start InvokeAI with attached debugger, in the correct environment, and work just
like the normal app.
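
When you don't need the debugger, the two test configs above map to ordinary terminal commands (run from the repo root with your `venv` active; the test path is just the example from the config):

```bash
# Equivalent of the "InvokeAI Test" and "InvokeAI Single Test" configs
pytest --capture=no
pytest tests/nodes/test_invoker.py --capture=no
```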

Enjoy debugging InvokeAI with ease (not that we have any bugs of course).

#### Remote dev

This is very easy to set up and provides the same very smooth experience as
local development. Environments and debugging, as set up above, just work,
though you'd need to recreate the workspace and debugging configs on the remote.

Consult the
[official guide](https://code.visualstudio.com/docs/remote/remote-overview) to
get it set up.

Suggest using VSCode's included settings sync so that your remote dev host has
all the same app settings and extensions automagically.

##### One remote dev gotcha

I've found the automatic port forwarding to be very flaky. You can disable it
in `Preferences: Open Remote Settings (ssh: hostname)`. Search for
`remote.autoForwardPorts` and untick the box.

To forward ports very reliably, use SSH on the remote dev client (e.g. your
macbook). Here's how to forward both the backend API port (`9090`) and the
frontend live dev server port (`5173`):

```bash
ssh \
    -L 9090:localhost:9090 \
    -L 5173:localhost:5173 \
    user@remote-dev-host
```

The forwarding stops when you close the terminal window, so it's best to do this
_outside_ the VSCode integrated terminal, in case you need to restart VSCode for
an extension update or something.

Now, on your remote dev client, you can open `localhost:9090` and access the UI,
served from the remote dev host, just the same as if it were running on the
client.
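
A quick way to confirm the tunnel from the client side, assuming the backend is listening on its default port `9090`:

```bash
# Should return HTTP response headers from the remote backend via the forwarded port
curl -I http://localhost:9090
```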

invokeai/app/invocations/compel.py

Lines changed: 3 additions & 2 deletions

```diff
@@ -57,10 +57,10 @@ class Config(InvocationConfig):
     @torch.no_grad()
     def invoke(self, context: InvocationContext) -> CompelOutput:
         tokenizer_info = context.services.model_manager.get_model(
-            **self.clip.tokenizer.dict(),
+            **self.clip.tokenizer.dict(), context=context,
         )
         text_encoder_info = context.services.model_manager.get_model(
-            **self.clip.text_encoder.dict(),
+            **self.clip.text_encoder.dict(), context=context,
         )

         def _lora_loader():
@@ -82,6 +82,7 @@ def _lora_loader():
                         model_name=name,
                         base_model=self.clip.text_encoder.base_model,
                         model_type=ModelType.TextualInversion,
+                        context=context,
                     ).context.model
                 )
             except ModelNotFoundException:
```

invokeai/app/invocations/generate.py

Lines changed: 3 additions & 3 deletions

```diff
@@ -157,13 +157,13 @@ def load_model_old_way(self, context, scheduler):
         def _lora_loader():
             for lora in self.unet.loras:
                 lora_info = context.services.model_manager.get_model(
-                    **lora.dict(exclude={"weight"}))
+                    **lora.dict(exclude={"weight"}), context=context,)
                 yield (lora_info.context.model, lora.weight)
                 del lora_info
             return

-        unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
-        vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
+        unet_info = context.services.model_manager.get_model(**self.unet.unet.dict(), context=context,)
+        vae_info = context.services.model_manager.get_model(**self.vae.vae.dict(), context=context,)

         with vae_info as vae,\
             ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
```

invokeai/app/invocations/latent.py

Lines changed: 8 additions & 7 deletions

```diff
@@ -76,7 +76,7 @@ def get_scheduler(
         scheduler_name, SCHEDULER_MAP['ddim']
     )
     orig_scheduler_info = context.services.model_manager.get_model(
-        **scheduler_info.dict()
+        **scheduler_info.dict(), context=context,
     )
     with orig_scheduler_info as orig_scheduler:
         scheduler_config = orig_scheduler.config
@@ -262,6 +262,7 @@ def prep_control_data(
                     model_name=control_info.control_model.model_name,
                     model_type=ModelType.ControlNet,
                     base_model=control_info.control_model.base_model,
+                    context=context,
                 )
             )

@@ -313,14 +314,14 @@ def step_callback(state: PipelineIntermediateState):
         def _lora_loader():
             for lora in self.unet.loras:
                 lora_info = context.services.model_manager.get_model(
-                    **lora.dict(exclude={"weight"})
+                    **lora.dict(exclude={"weight"}), context=context,
                 )
                 yield (lora_info.context.model, lora.weight)
                 del lora_info
             return

         unet_info = context.services.model_manager.get_model(
-            **self.unet.unet.dict()
+            **self.unet.unet.dict(), context=context,
         )
         with ExitStack() as exit_stack,\
             ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
@@ -403,14 +404,14 @@ def step_callback(state: PipelineIntermediateState):
         def _lora_loader():
             for lora in self.unet.loras:
                 lora_info = context.services.model_manager.get_model(
-                    **lora.dict(exclude={"weight"})
+                    **lora.dict(exclude={"weight"}), context=context,
                 )
                 yield (lora_info.context.model, lora.weight)
                 del lora_info
             return

         unet_info = context.services.model_manager.get_model(
-            **self.unet.unet.dict()
+            **self.unet.unet.dict(), context=context,
         )
         with ExitStack() as exit_stack,\
             ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),\
@@ -491,7 +492,7 @@ def invoke(self, context: InvocationContext) -> ImageOutput:
         latents = context.services.latents.get(self.latents.latents_name)

         vae_info = context.services.model_manager.get_model(
-            **self.vae.vae.dict(),
+            **self.vae.vae.dict(), context=context,
         )

         with vae_info as vae:
@@ -636,7 +637,7 @@ def invoke(self, context: InvocationContext) -> LatentsOutput:

         #vae_info = context.services.model_manager.get_model(**self.vae.vae.dict())
         vae_info = context.services.model_manager.get_model(
-            **self.vae.vae.dict(),
+            **self.vae.vae.dict(), context=context,
         )

         image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
```

invokeai/app/services/events.py

Lines changed: 3 additions & 9 deletions

```diff
@@ -105,8 +105,6 @@ def emit_graph_execution_complete(self, graph_execution_state_id: str) -> None:
     def emit_model_load_started (
         self,
         graph_execution_state_id: str,
-        node: dict,
-        source_node_id: str,
         model_name: str,
         base_model: BaseModelType,
         model_type: ModelType,
@@ -117,8 +115,6 @@ def emit_model_load_started (
             event_name="model_load_started",
             payload=dict(
                 graph_execution_state_id=graph_execution_state_id,
-                node=node,
-                source_node_id=source_node_id,
                 model_name=model_name,
                 base_model=base_model,
                 model_type=model_type,
@@ -129,8 +125,6 @@ def emit_model_load_started (
     def emit_model_load_completed(
         self,
         graph_execution_state_id: str,
-        node: dict,
-        source_node_id: str,
         model_name: str,
         base_model: BaseModelType,
         model_type: ModelType,
@@ -142,12 +136,12 @@ def emit_model_load_completed(
             event_name="model_load_completed",
             payload=dict(
                 graph_execution_state_id=graph_execution_state_id,
-                node=node,
-                source_node_id=source_node_id,
                 model_name=model_name,
                 base_model=base_model,
                 model_type=model_type,
                 submodel=submodel,
-                model_info=model_info,
+                hash=model_info.hash,
+                location=model_info.location,
+                precision=str(model_info.precision),
             ),
         )
```
