From 1e42cde88b04d7df9b021d98a56d4aff54250721 Mon Sep 17 00:00:00 2001
From: Rahul Lashkari
Date: Thu, 30 Oct 2025 19:38:59 +0530
Subject: [PATCH 1/2] Added JS port for Safety notebook

---
 quickstarts-js/README.md | 1 +
 quickstarts-js/Safety.js | 138 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 139 insertions(+)
 create mode 100644 quickstarts-js/Safety.js

diff --git a/quickstarts-js/README.md b/quickstarts-js/README.md
index 23fd532f7..937ed0073 100644
--- a/quickstarts-js/README.md
+++ b/quickstarts-js/README.md
@@ -19,6 +19,7 @@ Stay tuned, more JavaScript notebooks are on the way!
 | Get Started | A comprehensive introduction to the Gemini JS/TS SDK, demonstrating features such as text and multimodal prompting, token counting, system instructions, safety filters, multi-turn chat, output control, function calling, content streaming, file uploads, and using URL or YouTube video context. | Explore core Gemini capabilities in JS/TS | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/get_started?showPreview=true) | JS [Get_Started.js](./Get_Started.js) |
 | Counting Tokens | Learn how tokens work in Gemini, how to count them, and how context windows affect requests. Includes text, image, and audio tokenization. | Token counting, context windows, multimodal tokens | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/counting_tokens?showPreview=true) | JS [Counting_Tokens.js](./Counting_Tokens.js) |
 | Image Output | Generate and iterate on images using Gemini’s multimodal capabilities. Learn to use text+image responses, edit images mid-conversation, and handle multiple image outputs with chat-style prompting. | Image generation, multimodal output, image editing, iterative refinement | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/get_started_image_out?showPreview=true) | JS [ImageOutput.js](./ImageOutput.js) |
+| Safety | Demonstrates how to use and adjust the API's safety settings to handle potentially harmful prompts and understand safety feedback. | Safety settings, content filtering, HarmBlockThreshold | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/safety?showPreview=true) | JS [Safety.js](./Safety.js) |
 | File API | Learn how to upload, use, retrieve, and delete files (text, image, audio, code) with the Gemini File API for multimodal prompts. | File upload, multimodal prompts, text/code/media files | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/file_api?showPreview=true) | JS [File_API.js](./File_API.js) |
 | Audio | Demonstrates how to use audio files with Gemini: upload, prompt, summarize, transcribe, and analyze audio and YouTube content. | Audio file upload, inline audio, transcription, YouTube analysis | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/audio?showPreview=true) | JS [Audio.js](./Audio.js) |
 | Get Started LearnLM | Explore LearnLM, an experimental model for AI tutoring, with examples of system instructions for test prep, concept teaching, learning activities, and homework help. | AI tutoring, system instructions, adaptive learning, education | [![Open in AI Studio](https://storage.googleapis.com/generativeai-downloads/images/Open_in_AIStudio.svg)](https://aistudio.google.com/apps/bundled/get_started_learnlm?showPreview=true) | JS [Get_started_LearnLM.js](./Get_started_LearnLM.js) |
diff --git a/quickstarts-js/Safety.js b/quickstarts-js/Safety.js
new file mode 100644
index 000000000..8cffa559b
--- /dev/null
+++ b/quickstarts-js/Safety.js
@@ -0,0 +1,138 @@
+/*
+ * Copyright 2025 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* Markdown (render)
+# Gemini API: Safety Quickstart
+
+The Gemini API has adjustable safety settings. This guide walks you through how to use them. You'll write a prompt that might be blocked, see the reason why, and then adjust the filters to unblock it.
+
+Safety is an important topic, and you can learn more with the links at the end of this guide. Here, you will focus on the code.
+*/
+
+/* Markdown (render)
+## Setup
+### Install SDK and set-up the client
+
+### API Key Configuration
+
+To ensure security, avoid hardcoding the API key in frontend code. Instead, set it as an environment variable on the server or local machine.
+
+When using the Gemini API client libraries, the key will be automatically detected if set as either `GEMINI_API_KEY` or `GOOGLE_API_KEY`. If both are set, `GOOGLE_API_KEY` takes precedence.
+
+For instructions on setting environment variables across different operating systems, refer to the official documentation: [Set API Key as Environment Variable](https://ai.google.dev/gemini-api/docs/api-key#set-api-env-var)
+
+In code, the key can then be accessed as:
+
+```js
+ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
+*/
+
+// [CODE STARTS]
+const module = await import("https://esm.sh/@google/genai@1.4.0");
+const { GoogleGenAI, HarmCategory, HarmBlockThreshold } = module;
+
+const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
+const MODEL_ID = "gemini-2.5-flash-lite";
+// [CODE ENDS]
+
+/* Markdown (render)
+Send your prompt request to Gemini
+
+Pick a prompt to test the safety filters. This example uses a prompt that could trigger the HARM_CATEGORY_HARASSMENT filter.
+*/
+
+// [CODE STARTS]
+const unsafePrompt =
+  "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them.";
+
+try {
+  const response = await ai.models.generateContent({
+    model: MODEL_ID,
+    contents: unsafePrompt,
+  });
+  console.log(response.text);
+} catch (e) {
+  console.log("Request was blocked.", e.message);
+  // If the prompt is blocked, the response will be empty.
+  // We can inspect the error to see the reason.
+  if (e.response) {
+  console.log("Finish Reason:", e.response.candidates[0].finishReason);
+  console.log("Safety Ratings:", e.response.candidates[0].safetyRatings);
+  }
+}
+// [CODE ENDS]
+
+/* Output Sample
+Request was blocked. [GoogleGenerativeAI Error]: Text generation failed.
+Finish Reason: SAFETY
+Safety Ratings: [
+  { category: 'HARM_CATEGORY_HATE_SPEECH', probability: 'NEGLIGIBLE' },
+  { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', probability: 'NEGLIGIBLE' },
+  { category: 'HARM_CATEGORY_HARASSMENT', probability: 'LOW' },
+  { category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT', probability: 'NEGLIGIBLE' }
+]
+*/
+
+/* Markdown (render)
+The finishReason is SAFETY, which means the request was blocked. You can inspect the safetyRatings to see which category was triggered. In this case, HARM_CATEGORY_HARASSMENT was rated as LOW.
+
+Because the request was blocked, the response text is empty.
+*/
+
+/* Markdown (render)
+Customizing safety settings
+
+Depending on your use case, you might need to adjust the safety filters. You can customize the safetySettings in your request. In this example, we'll set the harassment filter to BLOCK_LOW_AND_ABOVE.
+
+Important: Only adjust safety settings if you are sure it is necessary for your use case.
+*/
+
+// [CODE STARTS]
+const safetySettings = [
+  {
+    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
+    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
+  },
+];
+
+try {
+  const responseWithSettings = await ai.models.generateContent({
+    model: MODEL_ID,
+    contents: unsafePrompt,
+    config: {
+      safetySettings: safetySettings,
+    },
+  });
+  console.log("Finish Reason:", responseWithSettings.candidates[0].finishReason);
+  console.log(responseWithSettings.text);
+} catch (e) {
+  console.log("Request was blocked.", e.message);
+}
+// [CODE ENDS]
+
+/* Output Sample
+Request was blocked. [GoogleGenerativeAI Error]: Text generation failed.
+*/
+
+/* Markdown (render)
+Even with the adjusted settings, this prompt might still be blocked depending on the model's current safety calibration. If it is, the finishReason will still be SAFETY. If it succeeds, the finishReason will be STOP and you will see the generated text.
+*/
+
+/* Markdown (render)
+Learning more
+
+Learn more with these articles on safety guidance and safety settings.
+*/

From bc13fc44d0e4834ece5858e2d696d34ca22c218f Mon Sep 17 00:00:00 2001
From: Rahul Lashkari
Date: Fri, 31 Oct 2025 21:20:56 +0530
Subject: [PATCH 2/2] fixed js port

---
 quickstarts-js/Safety.js | 123 +++++++++++++++++++++++++++------------
 1 file changed, 87 insertions(+), 36 deletions(-)

diff --git a/quickstarts-js/Safety.js b/quickstarts-js/Safety.js
index 8cffa559b..4f0f39ac5 100644
--- a/quickstarts-js/Safety.js
+++ b/quickstarts-js/Safety.js
@@ -17,9 +17,9 @@
 /* Markdown (render)
 # Gemini API: Safety Quickstart
 
-The Gemini API has adjustable safety settings. This guide walks you through how to use them. You'll write a prompt that might be blocked, see the reason why, and then adjust the filters to unblock it.
+The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You'll write a prompt that's blocked, see the reason why, and then adjust the filters to unblock it.
 
-Safety is an important topic, and you can learn more with the links at the end of this guide. Here, you will focus on the code.
+Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code.
 */
 
 /* Markdown (render)
@@ -38,39 +38,42 @@ In code, the key can then be accessed as:
 
 ```js
 ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
+```
 */
-
+
 // [CODE STARTS]
-const module = await import("https://esm.sh/@google/genai@1.4.0");
-const { GoogleGenAI, HarmCategory, HarmBlockThreshold } = module;
+module = await import("https://esm.sh/@google/genai@1.4.0");
+GoogleGenAI = module.GoogleGenAI;
+HarmCategory = module.HarmCategory;
+HarmBlockThreshold = module.HarmBlockThreshold;
 
-const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
-const MODEL_ID = "gemini-2.5-flash-lite";
+ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
 // [CODE ENDS]
 
 /* Markdown (render)
-Send your prompt request to Gemini
+## Send your prompt request to Gemini
 
-Pick a prompt to test the safety filters. This example uses a prompt that could trigger the HARM_CATEGORY_HARASSMENT filter.
+Pick the prompt you want to use to test the safety filter settings. An example could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark`, which was previously tested and triggered the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.
 */
 
 // [CODE STARTS]
-const unsafePrompt =
-  "I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them.";
+MODEL_ID = "gemini-2.5-flash-lite"; // @param ["gemini-2.5-flash-lite", "gemini-2.5-flash", "gemini-2.5-pro"]
+
+const unsafePrompt = `I support Martians Soccer Club and I think Jupiterians Football Club sucks! Write an ironic phrase about them.`;
 
 try {
-  const response = await ai.models.generateContent({
+  response = await ai.models.generateContent({
     model: MODEL_ID,
     contents: unsafePrompt,
   });
+  console.log("Finish Reason:", response.candidates[0].finishReason);
   console.log(response.text);
 } catch (e) {
   console.log("Request was blocked.", e.message);
-  // If the prompt is blocked, the response will be empty.
-  // We can inspect the error to see the reason.
+  // If the prompt is blocked, inspect the error's response object.
   if (e.response) {
-  console.log("Finish Reason:", e.response.candidates[0].finishReason);
-  console.log("Safety Ratings:", e.response.candidates[0].safetyRatings);
+    console.log("Finish Reason:", e.response.candidates[0].finishReason);
+    console.log("Safety Ratings:", e.response.candidates[0].safetyRatings);
   }
 }
 // [CODE ENDS]
@@ -87,52 +90,100 @@
 */
 
 /* Markdown (render)
-The finishReason is SAFETY, which means the request was blocked. You can inspect the safetyRatings to see which category was triggered. In this case, HARM_CATEGORY_HARASSMENT was rated as LOW.
+The `finishReason` is `SAFETY`, which means the request was blocked. You can inspect the `safetyRatings` to see which category was triggered. In this case, `HARM_CATEGORY_HARASSMENT` was rated as `LOW`.
 
-Because the request was blocked, the response text is empty.
+Because the request was blocked, no text was generated.
 */
 
 /* Markdown (render)
-Customizing safety settings
+## Customizing safety settings
 
-Depending on your use case, you might need to adjust the safety filters. You can customize the safetySettings in your request. In this example, we'll set the harassment filter to BLOCK_LOW_AND_ABOVE.
+Depending on your use case, you might need to adjust the safety filters. You can customize the `safetySettings` in your request. In the example below, all the filters are set to `BLOCK_LOW_AND_ABOVE`.
 
-Important: Only adjust safety settings if you are sure it is necessary for your use case.
+**Important:** Only adjust safety settings if you are sure it is necessary for your use case. In keeping with Google's commitment to Responsible AI development and its AI Principles, some prompts will be blocked regardless of these settings.
 */
 
 // [CODE STARTS]
 const safetySettings = [
+  {
+    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
+    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
+  },
   {
     category: HarmCategory.HARM_CATEGORY_HARASSMENT,
     threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
   },
+  {
+    category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
+    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
+  },
+  {
+    category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
+    threshold: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
+  },
 ];
 
 try {
-  const responseWithSettings = await ai.models.generateContent({
-    model: MODEL_ID,
-    contents: unsafePrompt,
-    config: {
-      safetySettings: safetySettings,
-    },
-  });
-  console.log("Finish Reason:", responseWithSettings.candidates[0].finishReason);
-  console.log(responseWithSettings.text);
-} catch (e) {
-  console.log("Request was blocked.", e.message);
+  responseWithSettings = await ai.models.generateContent({
+    model: MODEL_ID,
+    contents: unsafePrompt,
+    config: {
+      safetySettings: safetySettings,
+    }
+  });
+  console.log("Finish Reason:", responseWithSettings.candidates[0].finishReason);
+  console.log(responseWithSettings.text);
+} catch(e) {
+  console.log("Request was blocked.", e.message);
+  if (e.response) {
+    console.log("Finish Reason:", e.response.candidates[0].finishReason);
+    console.log("Safety Ratings:", e.response.candidates[0].safetyRatings);
+  }
 }
 // [CODE ENDS]
 
 /* Output Sample
 Request was blocked. [GoogleGenerativeAI Error]: Text generation failed.
+Finish Reason: SAFETY
+Safety Ratings: [
+  { category: 'HARM_CATEGORY_HATE_SPEECH', probability: 'NEGLIGIBLE' },
+  {
+    category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
+    probability: 'NEGLIGIBLE'
+  },
+  { category: 'HARM_CATEGORY_HARASSMENT', probability: 'LOW' },
+  {
+    category: 'HARM_CATEGORY_SEXUALLY_EXPLICIT',
+    probability: 'NEGLIGIBLE'
+  }
+]
 */
 
 /* Markdown (render)
-Even with the adjusted settings, this prompt might still be blocked depending on the model's current safety calibration. If it is, the finishReason will still be SAFETY. If it succeeds, the finishReason will be STOP and you will see the generated text.
+Even with the adjusted settings, this prompt may still be blocked depending on the model's current safety calibration. If it is blocked, the `finishReason` will still be `SAFETY`. If it succeeds, the `finishReason` will be `STOP` and the generated text will be displayed.
 */
 
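+/* Markdown (render)
+A minimal sketch for illustration (added in this write-up, not code from the quickstart itself): you can branch on `finishReason` before reading the text. It relies only on the response fields shown in the output samples above.
+*/
+
+// [CODE STARTS]
+// Sketch: only read the generated text when the request finished normally.
+// The typeof guard covers the case where the call above threw before assignment.
+if (typeof responseWithSettings !== "undefined" &&
+    responseWithSettings?.candidates?.[0]?.finishReason === "STOP") {
+  console.log(responseWithSettings.text);
+} else {
+  console.log("No text to show: the request was blocked or failed.");
+}
+// [CODE ENDS]
+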
 /* Markdown (render)
-Learning more
+## Learning more
 
-Learn more with these articles on safety guidance and safety settings.
-*/
+Learn more with these articles on [safety guidance](https://ai.google.dev/docs/safety_guidance) and [safety settings](https://ai.google.dev/docs/safety_setting_gemini).
+
+## Useful API references:
+
+The JavaScript SDK provides enums for `HarmCategory` and `HarmBlockThreshold` to configure your safety settings.
+
+- `HarmCategory`:
+  - `HARM_CATEGORY_HARASSMENT`
+  - `HARM_CATEGORY_HATE_SPEECH`
+  - `HARM_CATEGORY_SEXUALLY_EXPLICIT`
+  - `HARM_CATEGORY_DANGEROUS_CONTENT`
+
+- `HarmBlockThreshold`:
+  - `HARM_BLOCK_THRESHOLD_UNSPECIFIED`
+  - `BLOCK_LOW_AND_ABOVE`
+  - `BLOCK_MEDIUM_AND_ABOVE`
+  - `BLOCK_ONLY_HIGH`
+  - `BLOCK_NONE`
+
+You can pass these settings in the `config` object on each `generateContent` request. The response, or the error object if the request is blocked, will contain `safetyRatings` for each category.
+*/
\ No newline at end of file
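
For quick reference, the settings and response fields covered above can be combined into one self-contained sketch. It targets the same `@google/genai` 1.x API surface used in the notebook; the helper name `generateWithSafety`, the chosen model, and the default `BLOCK_ONLY_HIGH` threshold are illustrative assumptions rather than anything defined in the patches.

```js
// Illustrative sketch only; not part of the patch series above.
import { GoogleGenAI, HarmCategory, HarmBlockThreshold } from "@google/genai";

// Apply one threshold to every harm category and return the full response object.
// The helper name and the default threshold are assumptions made for this example.
async function generateWithSafety(ai, model, prompt, threshold = HarmBlockThreshold.BLOCK_ONLY_HIGH) {
  const safetySettings = [
    HarmCategory.HARM_CATEGORY_HARASSMENT,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
  ].map((category) => ({ category, threshold }));

  return ai.models.generateContent({
    model,
    contents: prompt,
    config: { safetySettings },
  });
}

// Usage: inspect the finish reason and per-category ratings before using the text.
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });
const response = await generateWithSafety(
  ai,
  "gemini-2.5-flash-lite",
  "Write an ironic phrase about a rival football club."
);
const candidate = response.candidates?.[0];
console.log("Finish Reason:", candidate?.finishReason);
console.log("Safety Ratings:", candidate?.safetyRatings);
if (candidate?.finishReason === "STOP") {
  console.log(response.text);
}
```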