
Commit eb4ca40

Merge branch 'main' into release/3-0-0
2 parents ce7fbdb + 594bf6f commit eb4ca40


59 files changed: +1533 −1050 lines

docs/index.md

Lines changed: 22 additions & 21 deletions

```diff
@@ -24,7 +24,7 @@ title: Home

 [![CI checks on main badge]][ci checks on main link]
 [![CI checks on dev badge]][ci checks on dev link]
-[![latest commit to dev badge]][latest commit to dev link]
+<!-- [![latest commit to dev badge]][latest commit to dev link] -->

 [![github open issues badge]][github open issues link]
 [![github open prs badge]][github open prs link]
@@ -54,10 +54,10 @@ title: Home
 [github stars badge]:
   https://flat.badgen.net/github/stars/invoke-ai/InvokeAI?icon=github
 [github stars link]: https://github.com/invoke-ai/InvokeAI/stargazers
-[latest commit to dev badge]:
+<!-- [latest commit to dev badge]:
   https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/development?icon=github&color=yellow&label=last%20dev%20commit&cache=900
 [latest commit to dev link]:
-  https://github.com/invoke-ai/InvokeAI/commits/development
+  https://github.com/invoke-ai/InvokeAI/commits/main -->
 [latest release badge]:
   https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
 [latest release link]: https://github.com/invoke-ai/InvokeAI/releases
@@ -82,6 +82,25 @@ Q&A</a>]

 This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help aid diagnose issues faster.

+## :octicons-package-dependencies-24: Installation
+
+This fork is supported across Linux, Windows and Macintosh. Linux users can use
+either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
+driver).
+
+### [Installation Getting Started Guide](installation)
+#### **[Automated Installer](installation/010_INSTALL_AUTOMATED.md)**
+✅ This is the recommended installation method for first-time users.
+#### [Manual Installation](installation/020_INSTALL_MANUAL.md)
+This method is recommended for experienced users and developers
+#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
+This method is recommended for those familiar with running Docker containers
+### Other Installation Guides
+- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
+- [XFormers](installation/070_INSTALL_XFORMERS.md)
+- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
+- [Installing New Models](installation/050_INSTALLING_MODELS.md)
+
 ## :fontawesome-solid-computer: Hardware Requirements

 ### :octicons-cpu-24: System
@@ -107,24 +126,6 @@ images in full-precision mode:
 - At least 18 GB of free disk space for the machine learning model, Python, and
   all its dependencies.

-## :octicons-package-dependencies-24: Installation
-
-This fork is supported across Linux, Windows and Macintosh. Linux users can use
-either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
-driver).
-
-### [Installation Getting Started Guide](installation)
-#### [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
-This method is recommended for 1st time users
-#### [Manual Installation](installation/020_INSTALL_MANUAL.md)
-This method is recommended for experienced users and developers
-#### [Docker Installation](installation/040_INSTALL_DOCKER.md)
-This method is recommended for those familiar with running Docker containers
-### Other Installation Guides
-- [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md)
-- [XFormers](installation/070_INSTALL_XFORMERS.md)
-- [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md)
-- [Installing New Models](installation/050_INSTALLING_MODELS.md)

 ## :octicons-gift-24: InvokeAI Features

```

docs/installation/010_INSTALL_AUTOMATED.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -124,9 +124,9 @@ experimental versions later.
    [latest release](https://github.com/invoke-ai/InvokeAI/releases/latest),
    and look for a file named:

-   - InvokeAI-installer-v2.X.X.zip
+   - InvokeAI-installer-v3.X.X.zip

-   where "2.X.X" is the latest released version. The file is located
+   where "3.X.X" is the latest released version. The file is located
    at the very bottom of the release page, under **Assets**.

 4. **Unpack the installer**: Unpack the zip file into a convenient directory. This will create a new
```

docs/installation/index.md

Lines changed: 4 additions & 1 deletion

```diff
@@ -15,7 +15,7 @@ See the [troubleshooting
 section](010_INSTALL_AUTOMATED.md#troubleshooting) of the automated
 install guide for frequently-encountered installation issues.

-## Main Application
+## Installation options

 1. [Automated Installer](010_INSTALL_AUTOMATED.md)

@@ -24,6 +24,9 @@ install guide for frequently-encountered installation issues.
    "developer console" which will help us debug problems with you and
    give you to access experimental features.

+
+   ✅ This is the recommended option for first time users.
+
 2. [Manual Installation](020_INSTALL_MANUAL.md)

    In this method you will manually run the commands needed to install
```

docs/nodes/communityNodes.md

Lines changed: 6 additions & 2 deletions

```diff
@@ -1,13 +1,17 @@
 # Community Nodes

-These are nodes that have been developed by the community for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).
+These are nodes that have been developed by the community, for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).

-If you'd like to submit a node for the community, please refer to the [node creation overview](overview.md).
+If you'd like to submit a node for the community, please refer to the [node creation overview](./overview.md#contributing-nodes).

 To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.

 To use a community node graph, download the the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.

+## Disclaimer
+
+The nodes linked below have been developed and contributed by members of the Invoke AI community. While we strive to ensure the quality and safety of these contributions, we do not guarantee the reliability or security of the nodes. If you have issues or concerns with any of the nodes below, please raise it on GitHub or in the Discord.
+
 ## List of Nodes

 --------------------------------
```
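
The download step described in this file amounts to dropping a single `.py` file into the invocations folder. A minimal sketch of doing that programmatically, assuming a hypothetical node URL and a typical install location (both are placeholders, not real project paths):

```python
# Sketch: fetch a community node file and place it where InvokeAI loads
# invocations from, per the docs above. URL and root are placeholders.
from pathlib import Path
from urllib.request import urlretrieve

NODE_URL = "https://example.com/my_community_node.py"  # hypothetical node link
INVOKEAI_ROOT = Path.home() / "InvokeAI"               # assumed install location

dest = INVOKEAI_ROOT / "invokeai" / "app" / "invocations" / "my_community_node.py"
dest.parent.mkdir(parents=True, exist_ok=True)
urlretrieve(NODE_URL, dest)
print(f"Installed node to {dest}; restart InvokeAI to load it.")
```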

docs/nodes/overview.md

Lines changed: 6 additions & 5 deletions

```diff
@@ -1,4 +1,5 @@
 # Nodes
+
 ## What are Nodes?
 An Node is simply a single operation that takes in some inputs and gives
 out some outputs. We can then chain multiple nodes together to create more
@@ -10,18 +11,18 @@ You can read more about nodes and the node editor [here](../features/NODES.md).


 ## Downloading Nodes
-To download a new node, visit our list of [Community Nodes](communityNodes.md). These are codes that have been created by the community, for the community.
+To download a new node, visit our list of [Community Nodes](communityNodes.md). These are nodes that have been created by the community, for the community.


 ## Contributing Nodes

 To learn about creating a new node, please visit our [Node creation documenation](../contributing/INVOCATIONS.md).

 Once you’ve created a node and confirmed that it behaves as expected locally, follow these steps:
-- Make sure the node is contained in a new Python (.py) file
-- Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](Community Nodes) list
-- Make sure you are following the template below and have provided all relevant details about the node and what it does.
-- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
+* Make sure the node is contained in a new Python (.py) file
+* Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](Community Nodes) list
+* Make sure you are following the template below and have provided all relevant details about the node and what it does.
+* A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.

 ### Community Node Template

```

invokeai/app/api/routers/images.py

Lines changed: 10 additions & 0 deletions

```diff
@@ -40,9 +40,15 @@ async def upload_image(
     response: Response,
     image_category: ImageCategory = Query(description="The category of the image"),
     is_intermediate: bool = Query(description="Whether this is an intermediate image"),
+    board_id: Optional[str] = Query(
+        default=None, description="The board to add this image to, if any"
+    ),
     session_id: Optional[str] = Query(
         default=None, description="The session ID associated with this upload, if any"
     ),
+    crop_visible: Optional[bool] = Query(
+        default=False, description="Whether to crop the image"
+    ),
 ) -> ImageDTO:
     """Uploads an image"""
     if not file.content_type.startswith("image"):
@@ -52,6 +58,9 @@ async def upload_image(

     try:
         pil_image = Image.open(io.BytesIO(contents))
+        if crop_visible:
+            bbox = pil_image.getbbox()
+            pil_image = pil_image.crop(bbox)
     except:
         # Error opening the image
         raise HTTPException(status_code=415, detail="Failed to read image")
@@ -62,6 +71,7 @@ async def upload_image(
         image_origin=ResourceOrigin.EXTERNAL,
         image_category=image_category,
         session_id=session_id,
+        board_id=board_id,
         is_intermediate=is_intermediate,
     )

```
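
The new `crop_visible` branch relies on Pillow's `Image.getbbox()`, which returns the bounding box of the non-zero (visible) region of an image. A standalone sketch of the same trimming logic, with a placeholder file path:

```python
# Trim an image to the bounding box of its visible pixels,
# mirroring the crop_visible branch in the diff above.
from PIL import Image

image = Image.open("sprite.png")  # placeholder path
bbox = image.getbbox()  # None if the image has no non-zero pixels
if bbox is not None:
    image = image.crop(bbox)
image.save("sprite_trimmed.png")
```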
invokeai/app/services/images.py

Lines changed: 7 additions & 0 deletions

```diff
@@ -52,6 +52,7 @@ def create(
         image_category: ImageCategory,
         node_id: Optional[str] = None,
         session_id: Optional[str] = None,
+        board_id: Optional[str] = None,
         is_intermediate: bool = False,
         metadata: Optional[dict] = None,
     ) -> ImageDTO:
@@ -174,6 +175,7 @@ def create(
         image_category: ImageCategory,
         node_id: Optional[str] = None,
         session_id: Optional[str] = None,
+        board_id: Optional[str] = None,
         is_intermediate: bool = False,
         metadata: Optional[dict] = None,
     ) -> ImageDTO:
@@ -215,6 +217,11 @@ def create(
             session_id=session_id,
         )

+        if board_id is not None:
+            self._services.board_image_records.add_image_to_board(
+                board_id=board_id, image_name=image_name
+            )
+
         self._services.image_files.save(
             image_name=image_name, image=image, metadata=metadata, graph=graph
         )
```

invokeai/backend/util/mps_fixes.py

Lines changed: 149 additions & 0 deletions

```diff
@@ -1,4 +1,6 @@
+import math
 import torch
+import diffusers


 if torch.backends.mps.is_available():
@@ -61,3 +63,150 @@ def new_torch_interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False):
         return _torch_interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias)

     torch.nn.functional.interpolate = new_torch_interpolate
+
+# TODO: refactor it
+_SlicedAttnProcessor = diffusers.models.attention_processor.SlicedAttnProcessor
+class ChunkedSlicedAttnProcessor:
+    r"""
+    Processor for implementing sliced attention.
+
+    Args:
+        slice_size (`int`, *optional*):
+            The number of steps to compute attention. Uses as many slices as `attention_head_dim // slice_size`, and
+            `attention_head_dim` must be a multiple of the `slice_size`.
+    """
+
+    def __init__(self, slice_size):
+        assert isinstance(slice_size, int)
+        slice_size = 1  # TODO: maybe implement chunking in batches too when enough memory
+        self.slice_size = slice_size
+        self._sliced_attn_processor = _SlicedAttnProcessor(slice_size)
+
+    def __call__(self, attn, hidden_states, encoder_hidden_states=None, attention_mask=None):
+        if self.slice_size != 1:
+            return self._sliced_attn_processor(attn, hidden_states, encoder_hidden_states, attention_mask)
+
+        residual = hidden_states
+
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
+
+        batch_size, sequence_length, _ = (
+            hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
+        )
+        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+
+        query = attn.to_q(hidden_states)
+        dim = query.shape[-1]
+        query = attn.head_to_batch_dim(query)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states
+        elif attn.norm_cross:
+            encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+        key = attn.head_to_batch_dim(key)
+        value = attn.head_to_batch_dim(value)
+
+        batch_size_attention, query_tokens, _ = query.shape
+        hidden_states = torch.zeros(
+            (batch_size_attention, query_tokens, dim // attn.heads), device=query.device, dtype=query.dtype
+        )
+
+        chunk_tmp_tensor = torch.empty(self.slice_size, query.shape[1], key.shape[1], dtype=query.dtype, device=query.device)
+
+        for i in range(batch_size_attention // self.slice_size):
+            start_idx = i * self.slice_size
+            end_idx = (i + 1) * self.slice_size
+
+            query_slice = query[start_idx:end_idx]
+            key_slice = key[start_idx:end_idx]
+            attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None
+
+            self.get_attention_scores_chunked(attn, query_slice, key_slice, attn_mask_slice, hidden_states[start_idx:end_idx], value[start_idx:end_idx], chunk_tmp_tensor)
+
+        hidden_states = attn.batch_to_head_dim(hidden_states)
+
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
+
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+
+        hidden_states = hidden_states / attn.rescale_output_factor
+
+        return hidden_states
+
+
+    def get_attention_scores_chunked(self, attn, query, key, attention_mask, hidden_states, value, chunk):
+        # batch size = 1
+        assert query.shape[0] == 1
+        assert key.shape[0] == 1
+        assert value.shape[0] == 1
+        assert hidden_states.shape[0] == 1
+
+        dtype = query.dtype
+        if attn.upcast_attention:
+            query = query.float()
+            key = key.float()
+
+        #out_item_size = query.dtype.itemsize
+        #if attn.upcast_attention:
+        #    out_item_size = torch.float32.itemsize
+        out_item_size = query.element_size()
+        if attn.upcast_attention:
+            out_item_size = 4
+
+        chunk_size = 2 ** 29
+
+        out_size = query.shape[1] * key.shape[1] * out_item_size
+        chunks_count = min(query.shape[1], math.ceil((out_size - 1) / chunk_size))
+        chunk_step = max(1, int(query.shape[1] / chunks_count))
+
+        key = key.transpose(-1, -2)
+
+        def _get_chunk_view(tensor, start, length):
+            if start + length > tensor.shape[1]:
+                length = tensor.shape[1] - start
+            #print(f"view: [{tensor.shape[0]},{tensor.shape[1]},{tensor.shape[2]}] - start: {start}, length: {length}")
+            return tensor[:,start:start+length]
+
+        for chunk_pos in range(0, query.shape[1], chunk_step):
+            if attention_mask is not None:
+                torch.baddbmm(
+                    _get_chunk_view(attention_mask, chunk_pos, chunk_step),
+                    _get_chunk_view(query, chunk_pos, chunk_step),
+                    key,
+                    beta=1,
+                    alpha=attn.scale,
+                    out=chunk,
+                )
+            else:
+                torch.baddbmm(
+                    torch.zeros((1,1,1), device=query.device, dtype=query.dtype),
+                    _get_chunk_view(query, chunk_pos, chunk_step),
+                    key,
+                    beta=0,
+                    alpha=attn.scale,
+                    out=chunk,
+                )
+            chunk = chunk.softmax(dim=-1)
+            torch.bmm(chunk, value, out=_get_chunk_view(hidden_states, chunk_pos, chunk_step))
+
+            #del chunk
+
+
+diffusers.models.attention_processor.SlicedAttnProcessor = ChunkedSlicedAttnProcessor
```
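
The sizing logic in `get_attention_scores_chunked` caps each attention-score buffer at `chunk_size = 2**29` bytes (512 MiB) and splits the query rows accordingly. A small worked example of that arithmetic, with illustrative (not measured) shapes:

```python
# Reproduce the chunk sizing from get_attention_scores_chunked above.
import math

query_tokens = 16384  # query.shape[1] for a large latent (illustrative)
key_tokens = 16384    # key.shape[1] in self-attention
out_item_size = 4     # fp32 scores when upcast_attention is set

chunk_size = 2 ** 29  # 512 MiB cap, as in the diff
out_size = query_tokens * key_tokens * out_item_size  # full score matrix in bytes
chunks_count = min(query_tokens, math.ceil((out_size - 1) / chunk_size))
chunk_step = max(1, int(query_tokens / chunks_count))

print(f"full score matrix: {out_size / 2**20:.0f} MiB")          # -> 1024 MiB
print(f"{chunks_count} chunks of {chunk_step} query rows each")  # -> 2 x 8192
```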

invokeai/frontend/web/src/app/components/ImageDnd/typesafeDnd.tsx

Lines changed: 1 addition & 3 deletions

```diff
@@ -175,9 +175,7 @@ export const isValidDrop = (
   const destinationBoard = overData.context.boardId;

   const isSameBoard = currentBoard === destinationBoard;
-  const isDestinationValid = !currentBoard
-    ? destinationBoard !== 'no_board'
-    : true;
+  const isDestinationValid = !currentBoard ? destinationBoard : true;

   return !isSameBoard && isDestinationValid;
 }
```

invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/addFirstListImagesListener.ts.ts

Lines changed: 2 additions & 2 deletions

```diff
@@ -19,10 +19,10 @@ export const addFirstListImagesListener = () => {
       action,
       { getState, dispatch, unsubscribe, cancelActiveListeners }
     ) => {
-      // Only run this listener on the first listImages request for `images` categories
+      // Only run this listener on the first listImages request for no-board images
       if (
         action.meta.arg.queryCacheKey !==
-        getListImagesUrl({ categories: IMAGE_CATEGORIES })
+        getListImagesUrl({ board_id: 'none', categories: IMAGE_CATEGORIES })
       ) {
         return;
       }
```
