Real-time speech-to-text + emotion visualizations #99
The real-time emotion visualizer for the curious agent works as follows:
- the visualizer update function is called inside the main loop of the curious agent;
- as shown in the code above, pyModuleX accepts a function plus an argument for that function;
- so every time the main loop runs, the emotion visualizer is refreshed with the current emotion values (see the sketch after this list).
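A minimal sketch of what each per-iteration update might look like. This is an assumption, not the PR's actual code: the function name `update_emotion_visualization`, the `emotions` dict, and the stand-in loop are all hypothetical, and the exact pyModuleX hook that calls the function with its argument is not shown here.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render to a file, no GUI window
import matplotlib.pyplot as plt

def update_emotion_visualization(emotions, path="emotion_state.png"):
    """Redraw the emotion bar chart from the current emotion values."""
    plt.clf()
    plt.bar(list(emotions.keys()), list(emotions.values()))
    plt.ylim(0.0, 1.0)
    plt.title("Curious agent: current emotion state")
    plt.savefig(path)  # each call overwrites the previous frame

# Stand-in for the agent's main loop: in the PR, pyModuleX calls the
# registered function with its argument once per iteration.
for step in range(3):
    emotions = {"joy": 0.2 * step, "fear": 0.1, "curiosity": 0.9 - 0.1 * step}
    update_emotion_visualization(emotions)
```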
Current problem:
The result is non-interactive: each update overwrites the previously saved image. Aside from that, it works well. I still think this approach is preferable to opening an external browser or GUI for visualization, since the values only change after the main loop finishes its computation (which takes a long time).
The second important feature I added is a speech-to-text pipeline: it converts speech input to text and then runs the main loop on that input. With this capability, the user can choose between speech and text input at the entry point of the curious agent (a sketch follows).
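A minimal sketch of how the entry-point selection might look, assuming the `speech_recognition` package as the STT backend; the PR's actual pipeline may use a different backend, and `run_main_loop` is a hypothetical name for the agent's entry function:

```python
import speech_recognition as sr

def get_user_input():
    """Ask whether to use speech or text input, and return the input as text."""
    mode = input("Input mode ([s]peech / [t]ext): ").strip().lower()
    if mode == "s":
        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)
            print("Listening...")
            audio = recognizer.listen(source)
        # Google Web Speech API here; the PR's STT backend may differ.
        return recognizer.recognize_google(audio)
    return input("Type your input: ")

user_text = get_user_input()
# run_main_loop(user_text)  # hypothetical: hand the text to the agent's main loop
```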
For more detail, see STT_IMPLEMENTATION_SUMMARY.md.