README.md: 55 additions & 69 deletions
````diff
@@ -54,7 +54,7 @@ Both of these potential goals could pose challenges to interoperability, so we w
 In this example, a single string is used to prompt the API, which is assumed to come from the user. The returned response is from the language model.
 
 ```js
-const session = await ai.languageModel.create();
+const session = await LanguageModel.create();
 
 // Prompt the model and wait for the whole result to come back.
 const result = await session.prompt("Write me a poem.");
````
````diff
@@ -72,7 +72,7 @@ for await (const chunk of stream) {
 The language model can be configured with a special "system prompt" which gives it the context for future interactions:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   systemPrompt: "Pretend to be an eloquent hamster."
 });
````
````diff
@@ -88,7 +88,7 @@ If the system prompt is too large, then the promise will be rejected with a `Quo
 If developers want to provide examples of the user/assistant interaction, they can use the `initialPrompts` array. This aligns with the common "chat completions API" format of `{ role, content }` pairs, including a `"system"` role which can be used instead of the `systemPrompt` option shown above.
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   initialPrompts: [
     { role: "system", content: "Predict up to 5 emojis as a response to a comment. Output emojis, comma-separated." },
     { role: "user", content: "This is amazing!" },
````
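As a side note on this hunk: the `{ role, content }` shape used by `initialPrompts` is plain data and can be illustrated without the Prompt API itself. The assistant turn below is an assumed example, not taken from the diff:

```js
// initialPrompts is an array of { role, content } messages, in the common
// "chat completions" shape. The assistant message here is illustrative.
const initialPrompts = [
  { role: "system", content: "Predict up to 5 emojis as a response to a comment. Output emojis, comma-separated." },
  { role: "user", content: "This is amazing!" },
  { role: "assistant", content: "❤️, ➕" },
];

// A session created with these prompts treats them as prior conversation turns.
const roles = initialPrompts.map((m) => m.role);
console.log(roles.join(","));
```

This is just the message-array convention; actually consuming it still requires `LanguageModel.create({ initialPrompts })` in a supporting browser.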
````diff
@@ -121,7 +121,7 @@ Some details on error cases:
 Our examples so far have provided `prompt()` and `promptStreaming()` with a single string. Such cases assume messages will come from the user role. These methods can also take in objects in the `{ role, content }` format, or arrays of such objects, in case you want to provide multiple user or assistant messages before getting another assistant message:
   systemPrompt: "You are a mediator in a discussion between two departments."
 });
````
````diff
@@ -141,7 +141,7 @@ Because of their special behavior of being preserved on context window overflow,
 A special case of the above is using the assistant role to emulate tool use or function-calling, by marking a response as coming from the assistant side of the conversation:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   systemPrompt: `
     You are a helpful assistant. You have access to the following tools:
     - calculator: A calculator. To use it, write "CALCULATOR: <expression>" where <expression> is a valid mathematical expression.
````
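To make the emulated tool-use pattern concrete: the page has to detect when a response follows the `CALCULATOR: <expression>` convention that the system prompt asks for. A minimal sketch, where the parsing helper and the sample reply are assumptions for illustration:

```js
// Hypothetical helper: detect the "CALCULATOR: <expression>" convention
// established by the system prompt in the snippet above.
function parseToolCall(response) {
  const match = /^CALCULATOR:\s*(.+)$/m.exec(response);
  return match ? { tool: "calculator", expression: match[1].trim() } : null;
}

// Simulated assistant output; a real page would get this from session.prompt().
const reply = "CALCULATOR: 2 + 2";
const call = parseToolCall(reply);

// After computing the result locally, the page would append it back to the
// conversation as an assistant-role message, per the pattern described above.
console.log(call.expression); // "2 + 2"
```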
````diff
@@ -186,7 +186,7 @@ Sessions that will include these inputs need to be created using the `expectedIn
 A sample of using these APIs:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   // { type: "text" } is not necessary to include explicitly, unless
   // you also want to include expected input languages for text.
   expectedInputs: [
````
````diff
@@ -237,9 +237,9 @@ Details:
 To help with programmatic processing of language model responses, the prompt API supports structured outputs defined by a JSON schema.
````
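For intuition about what a JSON-schema constraint buys the page, here is a toy sketch: a schema of the kind a structured-output option might accept, plus a naive after-the-fact conformance check. Both the schema and the checker are illustrative assumptions; real implementations enforce the constraint during decoding rather than validating afterwards:

```js
// An illustrative JSON-schema-style constraint (assumed example, not from the README).
const schema = {
  type: "object",
  required: ["rating"],
  properties: { rating: { type: "number" } },
};

// Toy conformance check covering only the shapes used in this example.
function matchesSchema(value, s) {
  if (s.type === "object") {
    return (
      typeof value === "object" &&
      value !== null &&
      s.required.every((k) => k in value) &&
      Object.entries(s.properties).every(
        ([k, sub]) => !(k in value) || matchesSchema(value[k], sub)
      )
    );
  }
  return typeof value === s.type;
}

console.log(matchesSchema({ rating: 4 }, schema)); // true
```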
````diff
@@ -427,7 +427,7 @@ The default behavior for a language model session assumes that the input languag
 It's better practice, if possible, to supply the `create()` method with information about the expected input languages. This allows the implementation to download any necessary supporting material, such as fine-tunings or safety-checking models, and to immediately reject the promise returned by `create()` if the web developer needs to use languages that the browser is not capable of supporting:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   systemPrompt: `
     You are a foreign-language tutor for Japanese. The user is Korean. If necessary, either you or
     the user might "break character" and ask for or give clarification in Korean. But by default,
````
````diff
 The expected input languages are supplied alongside the [expected input types](#multimodal-inputs), and can vary per type. Our above example assumes the default of `type: "text"`, but more complicated combinations are possible, e.g.:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   expectedInputs: [
     // Be sure to download any material necessary for English and Japanese text
     // prompts, or fail-fast if the model cannot support that.
````
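The per-type pairing of input type and languages is again plain data that can be examined without the API. The specific type/language combination below is an assumed example:

```js
// Each expectedInputs entry pairs an input type with the languages the page
// expects to send for that type. The exact entries here are illustrative.
const expectedInputs = [
  { type: "text", languages: ["en", "ja"] },
  { type: "audio", languages: ["en"] },
];

const textEntry = expectedInputs.find((e) => e.type === "text");
console.log(textEntry.languages.length); // 2
```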
````diff
@@ -465,9 +465,9 @@ Note that there is no way of specifying output languages, since these are govern
 
 ### Testing available options before creation
 
-In the simple case, web developers should call `ai.languageModel.create()`, and handle failures gracefully.
+In the simple case, web developers should call `LanguageModel.create()`, and handle failures gracefully.
 
-However, if the web developer wants to provide a differentiated user experience, which lets users know ahead of time that the feature will not be possible or might require a download, they can use the promise-returning `ai.languageModel.availability()` method. This method lets developers know, before calling `create()`, what is possible with the implementation.
+However, if the web developer wants to provide a differentiated user experience, which lets users know ahead of time that the feature will not be possible or might require a download, they can use the promise-returning `LanguageModel.availability()` method. This method lets developers know, before calling `create()`, what is possible with the implementation.
 
 The method will return a promise that fulfills with one of the following availability values:
 // Either the API overall, or the expected languages and temperature setting, is not available.
````
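A sketch of branching on the availability value before deciding whether to call `create()`. The helper name and the UI-decision strings are hypothetical; the four availability values assumed here (`"unavailable"`, `"downloadable"`, `"downloading"`, `"available"`) follow the shared availability vocabulary this API family uses:

```js
// Hypothetical mapping from an availability value (as resolved by
// LanguageModel.availability()) to a UI decision.
function planForAvailability(availability) {
  switch (availability) {
    case "unavailable":
      return "hide-feature";
    case "downloadable":
    case "downloading":
      return "show-progress-ui"; // create() will involve a download
    case "available":
      return "enable-feature";
    default:
      throw new Error(`Unexpected availability value: ${availability}`);
  }
}

console.log(planForAvailability("available")); // "enable-feature"
```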
````diff
@@ -507,7 +507,7 @@ if (availability !== "unavailable") {
 For cases where using the API is only possible after a download, you can monitor the download progress (e.g. in order to show your users a progress bar) using code such as the following:
 
 ```js
-const session = await ai.languageModel.create({
+const session = await LanguageModel.create({
   monitor(m) {
     m.addEventListener("downloadprogress", e => {
       console.log(`Downloaded ${e.loaded * 100}%`);
````
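The `downloadprogress` listener pattern can be exercised without the API by simulating the events. The mock below is an assumption for illustration, built on the premise that `e.loaded` is a fraction between 0 and 1 (which is why the snippet multiplies by 100):

```js
// Simulate the downloadprogress events a real monitor argument would receive.
// EventTarget and Event are standard globals in browsers and modern Node.js.
const monitorTarget = new EventTarget();
const reported = [];

monitorTarget.addEventListener("downloadprogress", (e) => {
  // Same computation as the README snippet: loaded is assumed to be 0..1.
  reported.push(`Downloaded ${e.loaded * 100}%`);
});

for (const loaded of [0, 0.5, 1]) {
  const e = new Event("downloadprogress");
  e.loaded = loaded; // mock the .loaded property a real progress event carries
  monitorTarget.dispatchEvent(e);
}

console.log(reported[1]); // "Downloaded 50%"
```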
````diff
@@ -539,39 +539,25 @@ Finally, note that there is a sort of precedent in the (never-shipped) [`FetchOb
 ### Full API surface in Web IDL
 
 ```webidl
-// Shared self.ai APIs:
-// See https://webmachinelearning.github.io/writing-assistance-apis/#shared-ai-api for most of them.
````
````diff
@@ -679,7 +665,7 @@ To actually get a response back from the model given a prompt, the following pos
 3. Add an initial prompt to establish context. (This will not generate a response.)
 4. Execute a prompt and receive a response.
 
-We've chosen to manifest these 3-4 stages into the API as two methods, `ai.languageModel.create()` and `session.prompt()`/`session.promptStreaming()`, with some additional facilities for dealing with the fact that `ai.languageModel.create()` can include a download step. Some APIs simplify this into a single method, and some split it up into three (usually not four).
+We've chosen to manifest these 3-4 stages into the API as two methods, `LanguageModel.create()` and `session.prompt()`/`session.promptStreaming()`, with some additional facilities for dealing with the fact that `LanguageModel.create()` can include a download step. Some APIs simplify this into a single method, and some split it up into three (usually not four).
````