Commit 7025c00
Add configuration system, remove legacy globals, args, generate and CLI (#3340)
# Application-wide configuration service

This PR creates a new `InvokeAIAppConfig` object that reads application-wide settings from an init file, the environment, and the command line. Arguments and fields are taken from the pydantic definition of the model.

Defaults can be set by creating a yaml configuration file that has a top-level key of "InvokeAI" and subheadings for each of the categories returned by `invokeai --help`. The file looks like this:

[file: invokeai.yaml]
```
InvokeAI:
  Paths:
    root: /home/lstein/invokeai-main
    conf_path: configs/models.yaml
    legacy_conf_dir: configs/stable-diffusion
    outdir: outputs
    embedding_dir: embeddings
    lora_dir: loras
    autoconvert_dir: null
    gfpgan_model_dir: models/gfpgan/GFPGANv1.4.pth
  Models:
    model: stable-diffusion-1.5
    embeddings: true
  Memory/Performance:
    xformers_enabled: false
    sequential_guidance: false
    precision: float16
    max_loaded_models: 4
    always_use_cpu: false
    free_gpu_mem: false
  Features:
    nsfw_checker: true
    restore: true
    esrgan: true
    patchmatch: true
    internet_available: true
    log_tokenization: false
  Cross-Origin Resource Sharing:
    allow_origins: []
    allow_credentials: true
    allow_methods:
    - '*'
    allow_headers:
    - '*'
  Web Server:
    host: 127.0.0.1
    port: 8081
```

The default name of the configuration file is `invokeai.yaml`, located in INVOKEAI_ROOT. You can supersede it by passing any OmegaConf dictionary to the config object at initialization time:

```
omegaconf = OmegaConf.load('/tmp/init.yaml')
conf = InvokeAIAppConfig(conf=omegaconf)
```

By default, InvokeAIAppConfig will parse the contents of `sys.argv` at initialization time. You may pass a list of strings in the optional `argv` argument to use instead of the system argv:

```
conf = InvokeAIAppConfig(argv=['--xformers_enabled'])
```

It is also possible to set a value at initialization time. This value has the highest priority:

```
conf = InvokeAIAppConfig(xformers_enabled=True)
```

Any setting can be overwritten by setting an environment variable of the form "INVOKEAI_<setting>", as in:

```
export INVOKEAI_port=8080
```

Order of precedence (from highest):

1. initialization options
2. command line options
3. environment variable options
4. config file options
5. pydantic defaults

Typical usage:

```
from invokeai.app.services.config import InvokeAIAppConfig

# get global configuration and print its nsfw_checker value
conf = InvokeAIAppConfig()
print(conf.nsfw_checker)
```

Finally, the configuration object is able to recreate its (modified) yaml file by calling its `to_yaml()` method:

```
conf = InvokeAIAppConfig(outdir='/tmp', port=8080)
print(conf.to_yaml())
```

# Legacy code removal and porting

This PR replaces Globals with the InvokeAIAppConfig system throughout, and therefore removes the `globals.py` and `args.py` modules. It also removes `generate` and the legacy CLI. ***The old CLI and web servers are now gone.***

I have ported the functionality of the configuration script, the model installer, and the merge and textual inversion scripts. The `invokeai` command will now launch `invokeai-node-cli`, and `invokeai-web` will launch the web server. I have changed the continuous integration tests to accommodate the new command syntax in `invokeai-node-cli`.

As a convenience, you can also pass invocations to `invokeai-node-cli` (or its alias `invokeai`) on the command line or as standard input:

```
invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"
invokeai < invocation_commands.txt
```
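The five-layer precedence order above can be sketched as a lookup that walks the layers from highest to lowest priority. This is an illustrative stand-alone approximation, not InvokeAI's actual implementation (which relies on pydantic's settings machinery); the `resolve_setting` helper and its layer arguments are hypothetical names.

```python
# Illustrative sketch of the precedence order described above (not the
# actual InvokeAIAppConfig code): init kwargs > command line > environment
# > config file > pydantic defaults.

def resolve_setting(name, default, init_kwargs=None, cli=None, env=None, config_file=None):
    """Return the value from the highest-priority layer that defines `name`."""
    for layer in (init_kwargs, cli, env, config_file):
        if layer and name in layer:
            return layer[name]
    return default

# The environment defines the port, but an init-time value wins:
port = resolve_setting(
    "port", 9090,
    init_kwargs={"port": 8080},
    env={"port": 8000},
)
print(port)  # 8080
```

The same walk explains why `export INVOKEAI_port=8080` is overridden by `InvokeAIAppConfig(port=...)` but still beats a value in `invokeai.yaml`.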
2 parents bd1b84f + 7ea9951 commit 7025c00


43 files changed: +1366 −4969 lines

.github/workflows/test-invoke-pip.yml

Lines changed: 7 additions & 13 deletions

```diff
@@ -80,12 +80,7 @@ jobs:
       uses: actions/checkout@v3

       - name: set test prompt to main branch validation
-        if: ${{ github.ref == 'refs/heads/main' }}
-        run: echo "TEST_PROMPTS=tests/preflight_prompts.txt" >> ${{ matrix.github-env }}
-
-      - name: set test prompt to Pull Request validation
-        if: ${{ github.ref != 'refs/heads/main' }}
-        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}
+        run: echo "TEST_PROMPTS=tests/validate_pr_prompt.txt" >> ${{ matrix.github-env }}

       - name: setup python
         uses: actions/setup-python@v4
@@ -105,12 +100,6 @@ jobs:
         id: run-pytest
         run: pytest

-      - name: set INVOKEAI_OUTDIR
-        run: >
-          python -c
-          "import os;from invokeai.backend.globals import Globals;OUTDIR=os.path.join(Globals.root,str('outputs'));print(f'INVOKEAI_OUTDIR={OUTDIR}')"
-          >> ${{ matrix.github-env }}
-
       - name: run invokeai-configure
         id: run-preload-models
         env:
@@ -129,15 +118,20 @@ jobs:
           HF_HUB_OFFLINE: 1
           HF_DATASETS_OFFLINE: 1
           TRANSFORMERS_OFFLINE: 1
+          INVOKEAI_OUTDIR: ${{ github.workspace }}/results
         run: >
           invokeai
           --no-patchmatch
           --no-nsfw_checker
-          --from_file ${{ env.TEST_PROMPTS }}
+          --precision=float32
+          --always_use_cpu
           --outdir ${{ env.INVOKEAI_OUTDIR }}/${{ matrix.python-version }}/${{ matrix.pytorch }}
+          --from_file ${{ env.TEST_PROMPTS }}

       - name: Archive results
         id: archive-results
+        env:
+          INVOKEAI_OUTDIR: ${{ github.workspace }}/results
         uses: actions/upload-artifact@v3
         with:
           name: results
```

.gitignore

Lines changed: 2 additions & 0 deletions

```diff
@@ -201,6 +201,8 @@ checkpoints
 # If it's a Mac
 .DS_Store

+invokeai/frontend/web/dist/*
+
 # Let the frontend manage its own gitignore
 !invokeai/frontend/web/*
```

invokeai/app/api/dependencies.py

Lines changed: 3 additions & 12 deletions

```diff
@@ -7,7 +7,6 @@

 from ..services.default_graphs import create_system_graphs
 from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
-from ...backend import Globals
 from ..services.model_manager_initializer import get_model_manager
 from ..services.restoration_services import RestorationServices
 from ..services.graph import GraphExecutionState, LibraryGraph
@@ -42,17 +41,8 @@ class ApiDependencies:

     invoker: Invoker = None

-    @staticmethod
     def initialize(config, event_handler_id: int, logger: types.ModuleType=logger):
-        Globals.try_patchmatch = config.patchmatch
-        Globals.always_use_cpu = config.always_use_cpu
-        Globals.internet_available = config.internet_available and check_internet()
-        Globals.disable_xformers = not config.xformers
-        Globals.ckpt_convert = config.ckpt_convert
-
-        # TO DO: Use the config to select the logger rather than use the default
-        # invokeai logging module
-        logger.info(f"Internet connectivity is {Globals.internet_available}")
+        logger.info(f"Internet connectivity is {config.internet_available}")

         events = FastAPIEventService(event_handler_id)

@@ -72,7 +62,6 @@ def initialize(config, event_handler_id: int, logger: types.ModuleType=logger):
         services = InvocationServices(
             model_manager=get_model_manager(config,logger),
             events=events,
-            logger=logger,
             latents=latents,
             images=images,
             metadata=metadata,
@@ -85,6 +74,8 @@ def initialize(config, event_handler_id: int, logger: types.ModuleType=logger):
             ),
             processor=DefaultInvocationProcessor(),
             restoration=RestorationServices(config,logger),
+            configuration=config,
+            logger=logger,
         )

         create_system_graphs(services.graph_library)
```

invokeai/app/api_app.py

Lines changed: 13 additions & 21 deletions

```diff
@@ -13,11 +13,11 @@
 from fastapi_events.middleware import EventHandlerASGIMiddleware
 from pydantic.schema import schema

-from ..backend import Args
 from .api.dependencies import ApiDependencies
 from .api.routers import images, sessions, models
 from .api.sockets import SocketIO
 from .invocations.baseinvocation import BaseInvocation
+from .services.config import InvokeAIAppConfig

 # Create the app
 # TODO: create this all in a method so configuration/etc. can be passed in?
@@ -33,30 +33,25 @@
     middleware_id=event_handler_id,
 )

-# Add CORS
-# TODO: use configuration for this
-origins = []
-app.add_middleware(
-    CORSMiddleware,
-    allow_origins=origins,
-    allow_credentials=True,
-    allow_methods=["*"],
-    allow_headers=["*"],
-)
-
 socket_io = SocketIO(app)

-config = {}
-
+# initialize config
+# this is a module global
+app_config = InvokeAIAppConfig()

 # Add startup event to load dependencies
 @app.on_event("startup")
 async def startup_event():
-    config = Args()
-    config.parse_args()
+    app.add_middleware(
+        CORSMiddleware,
+        allow_origins=app_config.allow_origins,
+        allow_credentials=app_config.allow_credentials,
+        allow_methods=app_config.allow_methods,
+        allow_headers=app_config.allow_headers,
+    )

     ApiDependencies.initialize(
-        config=config, event_handler_id=event_handler_id, logger=logger
+        config=app_config, event_handler_id=event_handler_id, logger=logger
     )


@@ -148,14 +143,11 @@ def overridden_redoc():

 def invoke_api():
     # Start our own event loop for eventing usage
-    # TODO: determine if there's a better way to do this
     loop = asyncio.new_event_loop()
-    config = uvicorn.Config(app=app, host="0.0.0.0", port=9090, loop=loop)
+    config = uvicorn.Config(app=app, host=app_config.host, port=app_config.port, loop=loop)
     # Use access_log to turn off logging
-
     server = uvicorn.Server(config)
     loop.run_until_complete(server.serve())

-
 if __name__ == "__main__":
     invoke_api()
```
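The CORS change above swaps hard-coded literals for configuration fields. A minimal stand-alone sketch of that pattern follows; the `CORSSettings` dataclass and `cors_kwargs` helper are illustrative stand-ins, not InvokeAI's classes.

```python
# Sketch: CORS options come from a config object instead of literals,
# so they can be driven by invokeai.yaml / env vars / CLI flags.
from dataclasses import dataclass, field

@dataclass
class CORSSettings:
    # Defaults mirror the values in the yaml example in the PR description.
    allow_origins: list = field(default_factory=list)
    allow_credentials: bool = True
    allow_methods: list = field(default_factory=lambda: ["*"])
    allow_headers: list = field(default_factory=lambda: ["*"])

def cors_kwargs(cfg: CORSSettings) -> dict:
    """Keyword arguments that would be forwarded to CORSMiddleware."""
    return {
        "allow_origins": cfg.allow_origins,
        "allow_credentials": cfg.allow_credentials,
        "allow_methods": cfg.allow_methods,
        "allow_headers": cfg.allow_headers,
    }

kw = cors_kwargs(CORSSettings(allow_origins=["http://localhost:9090"]))
print(kw["allow_origins"])  # ['http://localhost:9090']
```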

invokeai/app/cli/commands.py

Lines changed: 16 additions & 0 deletions

```diff
@@ -285,3 +285,19 @@ def run(self, context: CliContext) -> None:
         nx.draw_networkx_labels(nxgraph, pos, font_size=20, font_family="sans-serif")
         plt.axis("off")
         plt.show()
+
+class SortedHelpFormatter(argparse.HelpFormatter):
+    def _iter_indented_subactions(self, action):
+        try:
+            get_subactions = action._get_subactions
+        except AttributeError:
+            pass
+        else:
+            self._indent()
+            if isinstance(action, argparse._SubParsersAction):
+                for subaction in sorted(get_subactions(), key=lambda x: x.dest):
+                    yield subaction
+            else:
+                for subaction in get_subactions():
+                    yield subaction
+            self._dedent()
```
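The `SortedHelpFormatter` added above lists subcommands alphabetically in `--help` output. A small stand-alone demonstration of the same idea (the `demo` parser and its subcommands are invented for illustration; note that `_iter_indented_subactions` and `_SubParsersAction` are private argparse APIs, so this relies on implementation details that could change between Python versions):

```python
import argparse

class SortedHelpFormatter(argparse.HelpFormatter):
    """Sort subcommand entries alphabetically in help output."""
    def _iter_indented_subactions(self, action):
        try:
            get_subactions = action._get_subactions
        except AttributeError:
            pass
        else:
            self._indent()
            if isinstance(action, argparse._SubParsersAction):
                # Pseudo-actions for each subcommand, sorted by name.
                yield from sorted(get_subactions(), key=lambda x: x.dest)
            else:
                yield from get_subactions()
            self._dedent()

parser = argparse.ArgumentParser(prog="demo", formatter_class=SortedHelpFormatter)
sub = parser.add_subparsers()
sub.add_parser("zeta", help="z command")   # registered first...
sub.add_parser("alpha", help="a command")  # ...but listed second

help_text = parser.format_help()
# "alpha"'s help entry now precedes "zeta"'s despite registration order:
print(help_text.index("a command") < help_text.index("z command"))  # True
```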

invokeai/app/cli/completer.py

Lines changed: 5 additions & 4 deletions

```diff
@@ -11,9 +11,10 @@
 from typing import List, Dict, Literal, get_args, get_type_hints, get_origin

 import invokeai.backend.util.logging as logger
-from ...backend import ModelManager, Globals
+from ...backend import ModelManager
 from ..invocations.baseinvocation import BaseInvocation
 from .commands import BaseCommand
+from ..services.invocation_services import InvocationServices

 # singleton object, class variable
 completer = None
@@ -131,13 +132,13 @@ def _pre_input_hook(self):
         readline.redisplay()
         self.linebuffer = None

-def set_autocompleter(model_manager: ModelManager) -> Completer:
+def set_autocompleter(services: InvocationServices) -> Completer:
     global completer

     if completer:
         return completer

-    completer = Completer(model_manager)
+    completer = Completer(services.model_manager)

     readline.set_completer(completer.complete)
     # pyreadline3 does not have a set_auto_history() method
@@ -153,7 +154,7 @@ def set_autocompleter(model_manager: ModelManager) -> Completer:
     readline.parse_and_bind("set skip-completed-text on")
     readline.parse_and_bind("set show-all-if-ambiguous on")

-    histfile = Path(Globals.root, ".invoke_history")
+    histfile = Path(services.configuration.root_dir / ".invoke_history")
     try:
         readline.read_history_file(histfile)
         readline.set_history_length(1000)
```

invokeai/app/cli_app.py

Lines changed: 34 additions & 21 deletions

```diff
@@ -4,13 +4,14 @@
 import os
 import re
 import shlex
+import sys
 import time
 from typing import (
     Union,
     get_type_hints,
 )

-from pydantic import BaseModel
+from pydantic import BaseModel, ValidationError
 from pydantic.fields import Field


@@ -19,8 +20,7 @@
 from .services.default_graphs import create_system_graphs
 from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage

-from ..backend import Args
-from .cli.commands import BaseCommand, CliContext, ExitCli, add_graph_parsers, add_parsers
+from .cli.commands import BaseCommand, CliContext, ExitCli, add_graph_parsers, add_parsers, SortedHelpFormatter
 from .cli.completer import set_autocompleter
 from .invocations.baseinvocation import BaseInvocation
 from .services.events import EventServiceBase
@@ -34,7 +34,7 @@
 from .services.invoker import Invoker
 from .services.processor import DefaultInvocationProcessor
 from .services.sqlite import SqliteItemStorage
-
+from .services.config import get_invokeai_config

 class CliCommand(BaseModel):
     command: Union[BaseCommand.get_commands() + BaseInvocation.get_invocations()] = Field(discriminator="type")  # type: ignore
@@ -64,7 +64,7 @@ def add_invocation_args(command_parser):

 def get_command_parser(services: InvocationServices) -> argparse.ArgumentParser:
     # Create invocation parser
-    parser = argparse.ArgumentParser()
+    parser = argparse.ArgumentParser(formatter_class=SortedHelpFormatter)

     def exit(*args, **kwargs):
         raise InvalidArgs
@@ -189,24 +189,25 @@ def invoke_all(context: CliContext):


 def invoke_cli():
-    config = Args()
-    config.parse_args()
+    # this gets the basic configuration
+    config = get_invokeai_config()
+
+    # get the optional list of invocations to execute on the command line
+    parser = config.get_parser()
+    parser.add_argument('commands',nargs='*')
+    invocation_commands = parser.parse_args().commands
+
+    # get the optional file to read commands from.
+    # Simplest is to use it for STDIN
+    if infile := config.from_file:
+        sys.stdin = open(infile,"r")
+
     model_manager = get_model_manager(config,logger=logger)
-
-    # This initializes the autocompleter and returns it.
-    # Currently nothing is done with the returned Completer
-    # object, but the object can be used to change autocompletion
-    # behavior on the fly, if desired.
-    set_autocompleter(model_manager)
-
+
     events = EventServiceBase()
-
+    output_folder = config.output_path
     metadata = PngMetadataService()

-    output_folder = os.path.abspath(
-        os.path.join(os.path.dirname(__file__), "../../../outputs")
-    )
-
     # TODO: build a file/path manager?
     db_location = os.path.join(output_folder, "invokeai.db")

@@ -226,6 +227,7 @@ def invoke_cli():
         processor=DefaultInvocationProcessor(),
         restoration=RestorationServices(config,logger=logger),
         logger=logger,
+        configuration=config,
     )

     system_graphs = create_system_graphs(services.graph_library)
@@ -241,10 +243,18 @@ def invoke_cli():
     # print(services.session_manager.list())

     context = CliContext(invoker, session, parser)
+    set_autocompleter(services)

-    while True:
+    command_line_args_exist = len(invocation_commands) > 0
+    done = False
+
+    while not done:
         try:
-            cmd_input = input("invoke> ")
+            if command_line_args_exist:
+                cmd_input = invocation_commands.pop(0)
+                done = len(invocation_commands) == 0
+            else:
+                cmd_input = input("invoke> ")
         except (KeyboardInterrupt, EOFError):
             # Ctrl-c exits
             break
@@ -368,6 +378,9 @@ def invoke_cli():
             invoker.services.logger.warning('Invalid command, use "help" to list commands')
             continue

+        except ValidationError:
+            invoker.services.logger.warning('Invalid command arguments, run "<command> --help" for summary')
+
         except SessionError:
             # Start a new session
             invoker.services.logger.warning("Session error: creating a new session")
```
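The new input loop above drains any invocations supplied on the command line first, then falls back to the interactive `invoke>` prompt. Sketched stand-alone below; the `run_loop` helper and its arguments are illustrative, not InvokeAI's code.

```python
# Sketch of the command-queue-then-REPL pattern from the diff above:
# consume queued commands first, then read interactively until EOF/Ctrl-C.
def run_loop(invocation_commands, read_input, handle):
    command_line_args_exist = len(invocation_commands) > 0
    done = False
    while not done:
        if command_line_args_exist:
            cmd = invocation_commands.pop(0)
            # stop once the queue given on the command line is exhausted
            done = len(invocation_commands) == 0
        else:
            try:
                cmd = read_input()
            except (KeyboardInterrupt, EOFError):
                break
        handle(cmd)

seen = []
run_loop(["a", "b"], input, seen.append)  # input() is never reached here
print(seen)  # ['a', 'b']
```

This is what makes `invokeai-node-cli "t2i --positive_prompt 'banana sushi' --seed 42"` run one command and exit instead of dropping into the prompt.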

invokeai/app/invocations/compel.py

Lines changed: 1 addition & 3 deletions

```diff
@@ -16,8 +16,6 @@
     Fragment,
 )

-from invokeai.backend.globals import Globals
-

 class ConditioningField(BaseModel):
     conditioning_name: Optional[str] = Field(default=None, description="The name of conditioning data")
@@ -103,7 +101,7 @@ def load_huggingface_concepts(concepts: list[str]):
     conjunction = Compel.parse_prompt_string(prompt_str)
     prompt: Union[FlattenedPrompt, Blend] = conjunction.prompts[0]

-    if getattr(Globals, "log_tokenization", False):
+    if context.services.configuration.log_tokenization:
         log_tokenization_for_prompt_object(prompt, tokenizer)

     c, options = compel.build_conditioning_tensor_for_prompt_object(prompt)
```
