
[bug] ToxicLanguage doesn't work with streaming and OpenAI  #619

@dawidm

Describe the bug
The first value returned when streaming with OpenAI is an empty string, which causes an error in the ToxicLanguage validator:

File ~/app/miniconda3/envs/pllum-guard/lib/python3.11/site-packages/guardrails/validators/toxic_language.py:192, in ToxicLanguage.validate(self, value, metadata)
189 metadata = self._metadata
191 if not value:
--> 192 raise ValueError("Value cannot be empty.")
194 if self._validation_method == "sentence":
195 return self.validate_each_sentence(value, metadata)

ValueError: Value cannot be empty.
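
For context, the empty value comes from the OpenAI stream itself, before guardrails touches it. A minimal, guard-free sketch of inspecting the raw stream (same openai 1.x client as below; exact delta contents may vary by model):

import openai

stream = openai.chat.completions.create(
    model='gpt-3.5-turbo',
    messages=[{'role': 'user', 'content': 'Hi!'}],
    stream=True,
)
for chunk in stream:
    # The first chunk typically only sets the assistant role; its content
    # is empty/None, which is the value ToxicLanguage then rejects.
    print(repr(chunk.choices[0].delta.content))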

To Reproduce

import guardrails as gd
import openai

answer_guard = gd.Guard.from_string([gd.validators.toxic_language.ToxicLanguage(),])
result = answer_guard(
    openai.chat.completions.create,
    model='gpt-3.5-turbo',
    prompt='Hi!',
    max_tokens=1024,
    temperature=0,
    stream=True,
)
for s in result:
    print(s)

Expected behavior
I think the validator should return a PassResult for an empty string (a sketch of that option is below), or maybe StreamRunner should skip this first empty response?
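
A sketch of the first option, written as a hypothetical subclass workaround rather than a change to the library itself (LenientToxicLanguage is my own name, and I'm assuming PassResult is importable from guardrails.validators in 0.4.0):

from guardrails.validators import PassResult
from guardrails.validators.toxic_language import ToxicLanguage

class LenientToxicLanguage(ToxicLanguage):
    # Hypothetical workaround: treat empty chunks as passing instead of raising.
    def validate(self, value, metadata):
        if not value:
            # The first streamed chunk arrives as an empty string;
            # pass it through instead of raising ValueError.
            return PassResult()
        return super().validate(value, metadata)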

Library version:
0.4.0

Additional context
...
