From c75c371a3ce1b25d248b78310c9397c47cf94974 Mon Sep 17 00:00:00 2001
From: Tejas Mahajan <141305477+mahajantejas@users.noreply.github.com>
Date: Thu, 27 Jun 2024 15:49:38 +0530
Subject: [PATCH 1/5] Delete docs/4. Integrations/Bhashini ASR
Deleted extra bhashini asr documentation
---
docs/4. Integrations/Bhashini ASR | 78 -------------------------------
1 file changed, 78 deletions(-)
delete mode 100644 docs/4. Integrations/Bhashini ASR
diff --git a/docs/4. Integrations/Bhashini ASR b/docs/4. Integrations/Bhashini ASR
deleted file mode 100644
index e8ef912a4..000000000
--- a/docs/4. Integrations/Bhashini ASR
+++ /dev/null
@@ -1,78 +0,0 @@
-> ### **4 minute read `Advanced`**
-
-**“Bhashini” is a project or initiative aimed at providing easy access to the Internet and digital services for all Indians in their native languages. The primary goal of the project is to increase the amount of content available in Indian languages, thereby promoting digital inclusivity and accessibility for a broader population**
-
----
-
-## Bhashini in Glific
-- Bhashini APIs are used in Glific for speech recognition
-- Whatsapp user can share an audio file, which could be converted to text (in the language chosen by the user)
-- The converted text could be stored in Google Sheets, used in flows/ used as per the use case
-
----
-
-## Sample Chat
-
-
----
-
-## Steps to integrate Bhashini ASR in Glific flows
-1. Create an account in [Bhashini website](https://bhashini.gov.in/ulca/user/register#) using name & email ID. Please verify the account using the email received.
-
-
-2. Go to ‘My Profile’ on the top-right corner.
-
-
-3. Generate ‘ULCA API Key’ (top-right corner). Also make note of the User ID on top of the button. This will be used inside the webhook call in Glific flow.
-
-
-4. Make note of the API Key generated. This is required inside the webhook call used in the Glific flow for calling the Bhashini API. (Starting with 01af.. here)
-
-
-5. Now login to your Glific instance and create a flow. In this example we’ll use a flow called Maven ASR Bhashini, used for recording incoming audio queries in a Google Sheet. The flow starts with asking the user to share the query as a voice note and it is saved as ‘speech’.
-
-The download link for a few sample flows are given below to test it for yourself.
-
-
-
-6. After the audio is captured, the Bhashini ASR API is called using a Webhook function called ‘speech_to_text_with_bhasini’ and the response is stored as ‘speech2text’.
-
-- The function name is pre-defined, you should always use the name ‘speech_to_text_with_bhasini’ to call the Bhashini API
-- The response stored (speech2text here) could be given any name of your liking, just like any other flow variable
-
-
-
-7. The webhook body is shown below. Please update the parameters as shown :
-
-- speech : It should be updated with the result name given for the audio file captured. Here, it is saved as ‘speech’ (Step 5), hence the value is @results.speech.input (If the audio note captured was saved as ‘query’, then the value will be @results.query.input)
-- userID : The userID is updated with the userID captured from Bhashini website. (Refer Step 3)
-- ulcaApiKey : The API Key needs to be updated with the ULCA API key captured from Bhashini website. (Refer Step 4)
-- pipelineId : Keep the value as given in the screenshot below - “64392f96daac500b55c543cd”
-- base_url : Keep the value as given in the screenshot below - “https://meity-auth.ulcacontrib.org/ulca/apis/v0/model/”
-- contact : Keep the value as given in the screenshot below - “@contact”
-You can read more about the variables used inside the webhook body/ Bhashini APIs [here](https://bhashini.gitbook.io/bhashini-apis/)
-
-
-
-8. Once the webhook is updated, you could always refer to the translated text as ‘@results.speech2text.asr_response_text’ to use it inside the flow.
-
-The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
-
-9. You could additionally link the webhook to a ‘Link Google Sheet’ node to record the translated text into a spreadsheet as shown in the flow below.
-
-
-
-10. The Google Sheet node is configured as shown below. The fourth variable being used is ‘@results.speech2text.asr_response_text’ which will capture the translated text in the fourth column.
-
-
-11. The incoming audio notes in this flow will be captured in Google Sheets as shown below
-
-
-## Sample Flow Links
-*Test some sample flows for yourself by importing these flows to your Glific instance.*
-
-- [Maven ASR Bhashini](https://drive.google.com/file/d/1qXxDIaP4MMDl18NsgVuhF1H8cIetJNiM/view?usp=sharing)
-- [Bhasini API](https://drive.google.com/file/d/1K-KcJ0NFDIcVN2w8E1sTnev36AEGdv5I/view?usp=sharing)
-
-## Blogs
-- [Getting started with Bhashini ASR](https://glific.org/getting-started-using-asr-with-bhashini-api/)
From 38487a5a011b322efd327ee469aef54ee82b2f91 Mon Sep 17 00:00:00 2001
From: Tejas Mahajan <141305477+mahajantejas@users.noreply.github.com>
Date: Thu, 27 Jun 2024 16:33:26 +0530
Subject: [PATCH 2/5] Update and rename Bhashini ASR.md to Bhashini
Integrations.md
Updated documentation for bhashini speech to text
Added documentation for bhashini text to speech
---
docs/4. Integrations/Bhashini ASR.md | 78 ----------------
docs/4. Integrations/Bhashini Integrations.md | 91 +++++++++++++++++++
2 files changed, 91 insertions(+), 78 deletions(-)
delete mode 100644 docs/4. Integrations/Bhashini ASR.md
create mode 100644 docs/4. Integrations/Bhashini Integrations.md
diff --git a/docs/4. Integrations/Bhashini ASR.md b/docs/4. Integrations/Bhashini ASR.md
deleted file mode 100644
index fb87c6b77..000000000
--- a/docs/4. Integrations/Bhashini ASR.md
+++ /dev/null
@@ -1,78 +0,0 @@
-> ### **4 minute read `Advanced`**
-
-**`Bhashini` is a project or initiative aimed at providing easy access to the Internet and digital services for all Indians in their native languages. The primary goal of the project is to increase the amount of content available in Indian languages, thereby promoting digital inclusivity and accessibility for a broader population**
-
----
-
-## Bhashini in Glific
-- Bhashini APIs are used in Glific for speech recognition
-- Whatsapp user can share an audio file, which could be converted to text (in the language chosen by the user)
-- The converted text could be stored in Google Sheets, used in flows/ used as per the use case
-
----
-
-## Sample Chat
-
-
----
-
-## Steps to integrate Bhashini ASR in Glific flows
-1. Create an account in [Bhashini website](https://bhashini.gov.in/ulca/user/register#) using name & email ID. Please verify the account using the email received.
-
-
-2. Go to ‘My Profile’ on the top-right corner.
-
-
-3. Generate ‘ULCA API Key’ (top-right corner). Also make note of the User ID on top of the button. This will be used inside the webhook call in Glific flow.
-
-
-4. Make note of the API Key generated. This is required inside the webhook call used in the Glific flow for calling the Bhashini API. (Starting with 01af.. here)
-
-
-5. Now login to your Glific instance and create a flow. In this example we’ll use a flow called Maven ASR Bhashini, used for recording incoming audio queries in a Google Sheet. The flow starts with asking the user to share the query as a voice note and it is saved as ‘speech’.
-
-The download link for a few sample flows are given below to test it for yourself.
-
-
-
-6. After the audio is captured, the Bhashini ASR API is called using a Webhook function called ‘speech_to_text_with_bhasini’ and the response is stored as ‘speech2text’.
-
-- The function name is pre-defined, you should always use the name ‘speech_to_text_with_bhasini’ to call the Bhashini API
-- The response stored (speech2text here) could be given any name of your liking, just like any other flow variable
-
-
-
-7. The webhook body is shown below. Please update the parameters as shown :
-
-- speech : It should be updated with the result name given for the audio file captured. Here, it is saved as ‘speech’ (Step 5), hence the value is @results.speech.input (If the audio note captured was saved as ‘query’, then the value will be @results.query.input)
-- userID : The userID is updated with the userID captured from Bhashini website. (Refer Step 3)
-- ulcaApiKey : The API Key needs to be updated with the ULCA API key captured from Bhashini website. (Refer Step 4)
-- pipelineId : Keep the value as given in the screenshot below - “64392f96daac500b55c543cd”
-- base_url : Keep the value as given in the screenshot below - “https://meity-auth.ulcacontrib.org/ulca/apis/v0/model/”
-- contact : Keep the value as given in the screenshot below - “@contact”
-You can read more about the variables used inside the webhook body/ Bhashini APIs [here](https://bhashini.gitbook.io/bhashini-apis/)
-
-
-
-8. Once the webhook is updated, you could always refer to the translated text as ‘@results.speech2text.asr_response_text’ to use it inside the flow.
-
-The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
-
-9. You could additionally link the webhook to a ‘Link Google Sheet’ node to record the translated text into a spreadsheet as shown in the flow below.
-
-
-
-10. The Google Sheet node is configured as shown below. The fourth variable being used is ‘@results.speech2text.asr_response_text’ which will capture the translated text in the fourth column.
-
-
-11. The incoming audio notes in this flow will be captured in Google Sheets as shown below
-
-
-## Sample Flow Links
-*Test some sample flows for yourself by importing these flows to your Glific instance.*
-
-- [Maven ASR Bhashini](https://drive.google.com/file/d/1qXxDIaP4MMDl18NsgVuhF1H8cIetJNiM/view?usp=sharing)
-- [Bhasini API](https://drive.google.com/file/d/1K-KcJ0NFDIcVN2w8E1sTnev36AEGdv5I/view?usp=sharing)
-
-## Blogs
-- [Getting started with Bhashini ASR](https://glific.org/getting-started-using-asr-with-bhashini-api/)
diff --git a/docs/4. Integrations/Bhashini Integrations.md b/docs/4. Integrations/Bhashini Integrations.md
new file mode 100644
index 000000000..8d435a3d9
--- /dev/null
+++ b/docs/4. Integrations/Bhashini Integrations.md
@@ -0,0 +1,91 @@
+> ### **4 minute read `Advanced`**
+
+**`Bhashini` is an initiative aimed at providing easy access to the Internet and digital services for all Indians in their native languages. The primary goal of the project is to increase the amount of content available in Indian languages, thereby promoting digital inclusivity and accessibility for a broader population**
+
+---
+
+## Bhashini in Glific
+- Bhashini APIs are used in Glific for converting **speech to text** and **text to speech**
+- By using speech to text, voice notes sent by users to chatbot can be converted into texts by NGOs
+- By using text to speech, text messages being sent via Glific platform to users can also be converted to voice notes in respective indic languages
+
+---
+
+## Steps to integrate Bhashini Speech To Text in Glific flows
+
+1. After the audio is captured, the Bhashini ASR API is called using a Webhook function called `speech_to_text_with_bhasini`. The response in the given example is stored as "bhashini_asr".
+
+- The function name is pre-defined, you should always use the name ‘speech_to_text_with_bhasini’ to call the Bhashini API
+- The webhook result name could be given any name of your liking, just like any other flow variable
+
+
+
+
+7. The webhook body is shown below. Please update the parameters as shown :
+
+
+
+
+- speech : It should be updated with the result name given for the audio file captured. Here, it is saved as ‘speech’ (Step 5), hence the value is @results.speech.input (If the audio note captured was saved as ‘query’, then the value will be @results.query.input)
+- contact : Keep the value as given in the screenshot below - “@contact”
+
+
+
+8. Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow.
+
+The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
+
+## Steps to integrate Bhashini Text To Speech in Glific flows
+
+
+
+1. This webhook can be used to generated a voice note for any given text message, it can be user generated or NGO staff written. (Above screen shot is of an example flow which takes in a user input text, and converts it inot a text message and voice note in a desired langauge)
+
+2. Call the webhook with function name as `nmt_tts_with_bhasini`. Give an appropriate name to the webhook result. In the shown example the result name is `bhashini_tts`
+
+4. Go to function body and pass the following params
+`{
+"text": "@results.result_3",
+"source_language": "english",
+"target_language": "hindi"
+}`
+here `text` is the text to be converted into voice note
+`source_language` is the langauge of the text
+`target_language` is the language the voice notes need to be in
+
+- Keep source language and target language same if translation is not needed.
+- Following are the possible values for `target_language` as covered by Bhashini
+ ` "tamil"
+ "kannada"
+ "malayalam"
+ "telugu"
+ "assamese"
+ "gujarati"
+ "bengali"
+ "punjabi"
+ "marathi"
+ "urdu"
+ "spanish"
+ "english"
+ "hindi"
+`
+
+
+4. To get the voice note output, create a send message node, go to attachments, select `Expression` from the dropdown of attachment types. Pass the variable `@results.bhasini_tts.media_url`. Here `bhashini_tts` is the name for the webhook result.
+
+
+**Please note**: In order to get the voice notes as outputs, the Glific instance must be linked to the Google Cloud Storage for your organization. This is to facilitate storage of the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage go [here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup)
+
+5. To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`. Here `bhashini_tts` is the name for the webhook result.
+
+
+
+
+## Sample Flow Links
+*Test some sample flows for yourself by importing these flows to your Glific instance.*
+
+- [Bhashini Speech to Text](https://drive.google.com/file/d/1qkNGzLCQacrlP96GCytCihRGM4LhfokL/view?usp=sharing)
+- [Bhashini Text to Speech](https://drive.google.com/file/d/1WCOLQMF-OgLVR7PNHXbggMSeDXMJbui7/view?usp=drive_link)
+
+## Blogs
+- [Getting started with Bhashini ASR](https://glific.org/getting-started-using-asr-with-bhashini-api/)
From 6a99b4d5e0217b2ac51e750cd83076899d0cf3af Mon Sep 17 00:00:00 2001
From: Tejas Mahajan <141305477+mahajantejas@users.noreply.github.com>
Date: Thu, 27 Jun 2024 17:18:20 +0530
Subject: [PATCH 3/5] Update Bhashini Integrations.md
end tag missing
---
docs/4. Integrations/Bhashini Integrations.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/4. Integrations/Bhashini Integrations.md b/docs/4. Integrations/Bhashini Integrations.md
index 8d435a3d9..29057614b 100644
--- a/docs/4. Integrations/Bhashini Integrations.md
+++ b/docs/4. Integrations/Bhashini Integrations.md
@@ -78,7 +78,7 @@ here `text` is the text to be converted into voice note
5. To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`. Here `bhashini_tts` is the name for the webhook result.
-
+
## Sample Flow Links
From e7241df4ad1b34ff3c10ff4452c9648a575e0e9b Mon Sep 17 00:00:00 2001
From: Fawas003
Date: Tue, 12 Aug 2025 12:06:33 +0530
Subject: [PATCH 4/5] Update Bhashini Integrations.md (#391)
---
docs/4. Integrations/Bhashini Integrations.md | 188 +++++++++++-------
1 file changed, 120 insertions(+), 68 deletions(-)
diff --git a/docs/4. Integrations/Bhashini Integrations.md b/docs/4. Integrations/Bhashini Integrations.md
index 29057614b..d99feabd7 100644
--- a/docs/4. Integrations/Bhashini Integrations.md
+++ b/docs/4. Integrations/Bhashini Integrations.md
@@ -1,91 +1,143 @@
-> ### **4 minute read `Advanced`**
-**`Bhashini` is an initiative aimed at providing easy access to the Internet and digital services for all Indians in their native languages. The primary goal of the project is to increase the amount of content available in Indian languages, thereby promoting digital inclusivity and accessibility for a broader population**
+
+
+
+
+6 minutes read
+
+Level: Advanced
+
+Last Updated: August 2025
+
+
+
----
+# Bhashini Integration in Glific: Speech-to-Text and Text-to-Speech
-## Bhashini in Glific
-- Bhashini APIs are used in Glific for converting **speech to text** and **text to speech**
-- By using speech to text, voice notes sent by users to chatbot can be converted into texts by NGOs
-- By using text to speech, text messages being sent via Glific platform to users can also be converted to voice notes in respective indic languages
+Bhashini is an initiative by the Government of India aimed at making digital services and the internet accessible in all Indian languages.
+The integration of Bhashini into Glific enables NGOs and organizations to offer real-time translation and transliteration capabilities in various Indian languages, ensuring effective communication with end users in their preferred languages.
---
-## Steps to integrate Bhashini Speech To Text in Glific flows
+### This integration can be especially useful in use cases such as:
+- Translating chatbot content for multilingual campaigns.
+- Enabling users to respond in regional languages.
+- Transliteration helps convert text from one script to another, for example: writing Hindi words using English letters.
-1. After the audio is captured, the Bhashini ASR API is called using a Webhook function called `speech_to_text_with_bhasini`. The response in the given example is stored as "bhashini_asr".
+Bhashini specializes in Indic language translation and transliteration, supporting a wide range of languages and dialects. You can learn more about the platform [here](https://bhashini.gov.in).
-- The function name is pre-defined, you should always use the name ‘speech_to_text_with_bhasini’ to call the Bhashini API
-- The webhook result name could be given any name of your liking, just like any other flow variable
-
-
+## Steps to Integrate Bhashini Speech to Text in Glific Flows
+The `Speech-to-Text (STT)` function in Glific converts user-recorded audio messages into text. This is especially helpful when users prefer speaking over typing, or when typing in local languages is difficult.
+
+#### Step 1: Create a `Send message` node directing users to send their responses as audio messages, based on their preference.
+
+#### Step 2: In the `Wait for response` node, select `has audio` as the message response type. Also, give a Result Name. In the screenshot below, `speech` is used as the result name.
+
-7. The webhook body is shown below. Please update the parameters as shown :
+#### Step 3: Add a `Call Webhook` node. This is where we integrate the Bhashini service.
+- Select the `Function` from the dropdown.
+- In the `Function` field, enter `speech_to_text_with_bhasini`. The function name is pre-defined; always use `speech_to_text_with_bhasini` to call the Bhashini API for converting audio to text.
+- Give the webhook result name - you can use any name. In the screenshot example, it’s named `bhashini_asr`.
-
+
-- speech : It should be updated with the result name given for the audio file captured. Here, it is saved as ‘speech’ (Step 5), hence the value is @results.speech.input (If the audio note captured was saved as ‘query’, then the value will be @results.query.input)
-- contact : Keep the value as given in the screenshot below - “@contact”
+#### Step 4: Click on `Function Body` (top right corner) and add the parameters as shown in the screenshot below
+
+
+- `speech` : It should be updated with the result name given for the audio file captured. In this example, the variable is named `speech` (Step 2), hence the value is `@results.speech.input` (If the audio note captured was saved as `query`, then the value will be `@results.query.input`)
+- `contact` : Keep the value as given in the screenshot below - `@contact`
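+
+Based on the two parameters listed above, the function body might look like this minimal sketch (assuming the captured audio result was named `speech`; substitute your own result name):
+
+```json
+{
+  "speech": "@results.speech.input",
+  "contact": "@contact"
+}
+```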
+#### Step 5: Once the webhook is updated, you can refer to the converted text as `@results.bhashini_asr.asr_response_text` anywhere inside the flow. Add a `Send Message` node and paste this variable to show the converted text to the user.
+
+
+
-8. Once the webhook is updated, you could always refer to the translated text as `@results.bhashini_asr.asr_response_text` to use it inside the flow.
The output of the text response from the Bhashini depends on the language preference of the user. For instance if a user has selected Hindi language, the response from Glific will be in Hindi script.
-## Steps to integrate Bhashini Text To Speech in Glific flows
+Click on the [Sample Flow](https://drive.google.com/file/d/1qkNGzLCQacrlP96GCytCihRGM4LhfokL/view) link to import it and explore how it works.
+
+---
+
+## Steps to Integrate Bhashini Text To Speech in Glific Flows
+
+The Text-to-Speech (TTS) function in Glific can be used to generate a voice note for any text message, whether it's typed by the end user or written by NGO staff. This allows organizations to make information more accessible, especially for end users who prefer audio over text.
+
+
+
+
+
+#### Step 1: Create a `Send Message` node asking users to reply in text if they prefer.
-
-1. This webhook can be used to generated a voice note for any given text message, it can be user generated or NGO staff written. (Above screen shot is of an example flow which takes in a user input text, and converts it inot a text message and voice note in a desired langauge)
+#### Step 2: In the `Wait for Response` node, select `has only the phrase` as the message response type. Also, give a Result Name. In the screenshot below, `result_3` is used as the result name.
+
+
+
+
+
+#### Step 3: Create a `Call Webhook` node.
+- Select the `Function` from the dropdown.
+- In the `Function` field, enter `nmt_tts_with_bhasini`. The function name is pre-defined; always use `nmt_tts_with_bhasini` to call the Bhashini API for converting text to audio.
+- Give the webhook result name - you can use any name. In the screenshot example, it’s named `bhashini_tts`.
-2. Call the webhook with function name as `nmt_tts_with_bhasini`. Give an appropriate name to the webhook result. In the shown example the result name is `bhashini_tts`
-4. Go to function body and pass the following params
-`{
-"text": "@results.result_3",
-"source_language": "english",
-"target_language": "hindi"
-}`
-here `text` is the text to be converted into voice note
-`source_language` is the langauge of the text
-`target_language` is the language the voice notes need to be in
-
-- Keep source language and target language same if translation is not needed.
-- Following are the possible values for `target_language` as covered by Bhashini
- ` "tamil"
- "kannada"
- "malayalam"
- "telugu"
- "assamese"
- "gujarati"
- "bengali"
- "punjabi"
- "marathi"
- "urdu"
- "spanish"
- "english"
- "hindi"
-`
-
-
-4. To get the voice note output, create a send message node, go to attachments, select `Expression` from the dropdown of attachment types. Pass the variable `@results.bhasini_tts.media_url`. Here `bhashini_tts` is the name for the webhook result.
-
-
-**Please note**: In order to get the voice notes as outputs, the Glific instance must be linked to the Google Cloud Storage for your organization. This is to facilitate storage of the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage go [here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup)
-
-5. To get the translated text out, create another send message node, and call the `@results.bhasini_tts.translated_text`. Here `bhashini_tts` is the name for the webhook result.
-
-
-
-
-## Sample Flow Links
-*Test some sample flows for yourself by importing these flows to your Glific instance.*
-
-- [Bhashini Speech to Text](https://drive.google.com/file/d/1qkNGzLCQacrlP96GCytCihRGM4LhfokL/view?usp=sharing)
-- [Bhashini Text to Speech](https://drive.google.com/file/d/1WCOLQMF-OgLVR7PNHXbggMSeDXMJbui7/view?usp=drive_link)
-
-## Blogs
-- [Getting started with Bhashini ASR](https://glific.org/getting-started-using-asr-with-bhashini-api/)
+
+
+
+#### Step 4: Click on `Function Body` (top right corner) and add the parameters as shown in the screenshot below.
+
+
+
+
+
+- `text` : It should be updated with the result name given for the response/query provided by the user.
+- `source_language` : The original language of the text
+- `target_language` : The language in which the voice note will be generated
+- If translation is not needed, keep both `source_language` and `target_language` the same.
+- Supported target languages: `"tamil" "kannada" "malayalam" "telugu" "assamese" "gujarati" "bengali" "punjabi" "marathi" "urdu" "spanish" "english" "hindi"`
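+
+The parameters above can be sketched as a minimal function body. Here `@results.result_3` assumes the user's text was saved under the result name `result_3` (Step 2); substitute your own result name and languages:
+
+```json
+{
+  "text": "@results.result_3",
+  "source_language": "english",
+  "target_language": "hindi"
+}
+```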
+
+
+#### Step 5: Create a `Send Message` node and attach the variable
+`@results.bhashini_tts.media_url` for the voice note output. `bhashini_tts` is the webhook result name used in the given example.
+
+- Go to `Attachments` in the `Send Message` node
+- Select `Expression` from the dropdown.
+- Use the following expression: `@results.bhashini_tts.media_url`
+
+
+
+
+
+Please note: In order to get voice notes as outputs, the Glific instance must be linked to Google Cloud Storage for your organization. This facilitates storage of the voice notes generated by Bhashini as a result of the webhook call. To set up Google Cloud Storage, [click here](https://glific.github.io/docs/docs/Onboarding/GCS%20Setup/Google%20Cloud%20Storage%20Setup/).
+
+
+#### Step 6: To get the translated text out, create another `Send Message` node and use the variable `@results.bhashini_tts.translated_text`.
+
+
+
+
+
+
+
+Click on the [Sample Flow](https://drive.google.com/file/d/1WCOLQMF-OgLVR7PNHXbggMSeDXMJbui7/view) link to import it and explore how it works.
+
+
+---
+
+### Blogs
+
+[Blog Link](https://glific.org/the-importance-of-mother-language-in-the-indian-development-sector/)
+
+---
+
+### Video of Showcase
+
+[Video Link](https://www.youtube.com/watch?v=zS83U9OJJzk)
+
+
+_Watch from the 25-minute mark for the Bhashini integration part_
+
+
+
+
From 0e0e3f3f1d92e3dfad048125c4f2209666cba7fc Mon Sep 17 00:00:00 2001
From: Akansha Sakhre
Date: Mon, 25 Aug 2025 12:46:20 +0530
Subject: [PATCH 5/5] Update Bhashini Integrations.md
---
docs/4. Integrations/Bhashini Integrations.md | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/4. Integrations/Bhashini Integrations.md b/docs/4. Integrations/Bhashini Integrations.md
index 25411d5cc..a83d9857d 100644
--- a/docs/4. Integrations/Bhashini Integrations.md
+++ b/docs/4. Integrations/Bhashini Integrations.md
@@ -2,8 +2,8 @@