Conversation

@abrham17 abrham17 commented Sep 6, 2025

The real-time emotion visualizer for the curious agent works as follows:

Call the function in the main loop of the curious agent:

        ($_ (pyModuleX visualizeEmotionValues $emotionVals))

In the code above, pyModuleX accepts a function and an argument for that function.

So every time the main loop runs, the emotion visualizer is updated with the current emotion values.

Current problem:

The result is non-interactive: each update overwrites the previously saved image. Apart from this, it works well.
I still think this approach is preferable, because it avoids opening an external browser or GUI for visualization, and the display only changes after the main loop has finished computing everything it needs (which takes a long time).

The second important feature I added is a speech-to-text pipeline, which integrates speech input by converting it to text and then running the main loop on that input.
With this capability, a user can choose between speech and text input at the entry point of the curious agent.
For more detail, read STT_IMPLEMENTATION_SUMMARY.md.
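The actual pipeline is documented in STT_IMPLEMENTATION_SUMMARY.md; as a rough illustration of the entry-point selection described above, here is a minimal sketch with a pluggable STT backend. All names and signatures here are assumptions for illustration (the real STT backend might wrap Whisper or similar):

```python
def get_user_input(mode, transcribe_audio, capture_audio=None, read_text=None):
    """Return the user's request as text, from either speech or typed input.

    mode: "speech" or "text", chosen at the agent's entry point.
    transcribe_audio: callable mapping raw audio to text (the STT backend;
        a hypothetical hook, not the PR's actual API).
    capture_audio / read_text: injectable sources for audio bytes / typed text.
    """
    if mode == "speech":
        audio = capture_audio() if capture_audio else b""
        return transcribe_audio(audio)  # STT: audio -> text
    if mode == "text":
        return (read_text or input)()   # fall back to stdin
    raise ValueError(f"unknown input mode: {mode!r}")
```

Keeping the transcriber as an injected callable makes the main loop independent of any particular STT engine and easy to test with a fake backend.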

@abrham17 abrham17 changed the title emotion visualizations real time speech to text + emotion visualizations Sep 22, 2025
@Nahom32 Nahom32 merged commit d0ffa57 into iCog-Labs-Dev:dev Oct 9, 2025
1 check passed