Update tiny model information and pipeline tests #26285
Conversation
The documentation is not available anymore as the PR was closed or merged.
I am not sure whether a fix is required; could you check? You can check out this branch and run:

```shell
python3 -m pytest -v tests/models/vits/test_modeling_vits.py::VitsModelTest::test_pipeline_text_to_audio
```

The full error log:

```
self = <tests.models.vits.test_modeling_vits.VitsModelTest testMethod=test_pipeline_text_to_audio>

    @is_pipeline_test
    @require_torch
    def test_pipeline_text_to_audio(self):
>       self.run_task_tests(task="text-to-audio")

tests/test_pipeline_mixin.py:413:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_pipeline_mixin.py:171: in run_task_tests
    self.run_model_pipeline_tests(
tests/test_pipeline_mixin.py:209: in run_model_pipeline_tests
    self.run_pipeline_test(task, repo_name, model_architecture, tokenizer_name, processor_name, commit)
tests/test_pipeline_mixin.py:298: in run_pipeline_test
    task_test.run_pipeline_test(pipeline, examples)
tests/pipelines/test_pipelines_text_to_audio.py:185: in run_pipeline_test
    outputs = speech_generator(["This is great !", "Something else"], forward_params=forward_params)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/text_to_audio.py:138: in __call__
    return super().__call__(text_inputs, **forward_params)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/base.py:1121: in __call__
    outputs = list(final_iterator)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:124: in __next__
    item = next(self.iterator)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:125: in __next__
    processed = self.infer(item, **self.params)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/base.py:1046: in forward
    model_outputs = self._forward(model_inputs, **forward_params)
../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/text_to_audio.py:114: in _forward
    output = self.model(**model_inputs, **kwargs)[0]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = VitsModel(
  (text_encoder): VitsTextEncoder(
    (embed_tokens): Embedding(38, 16)
    (encoder): VitsEncoder(
      ... (dropout): Dropout(p=0.0, inplace=False)
    )
    (conv_proj): Conv1d(16, 32, kernel_size=(1,), stride=(1,))
  )
)
args = ()
kwargs = {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
         1, 1, 1]]... 8, 0, 19, 0, 18, 0, 8, 0, 19, 0, 37,
         0, 25, 0, 7, 0, 26, 0, 33, 0]]), 'num_return_sequences': 2}
forward_call = <bound method VitsModel.forward of VitsModel(
  (text_encoder): VitsTextEncoder(
    (embed_tokens): Embedding(38, 16)... (dropout): Dropout(p=0.0, inplace=False)
  )
  (conv_proj): Conv1d(16, 32, kernel_size=(1,), stride=(1,))
)>

    def _call_impl(self, *args, **kwargs):
        forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)
        # If we don't have any hooks, we want to skip the rest of the logic in
        # this function, and just call forward.
        if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
                or _global_backward_pre_hooks or _global_backward_hooks
                or _global_forward_hooks or _global_forward_pre_hooks):
>           return forward_call(*args, **kwargs)
E           TypeError: forward() got an unexpected keyword argument 'num_return_sequences'

../.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/modules/module.py:1501: TypeError
```
Hey @ydshieh! The VITS model is registered under the correct mapping, and the pipeline class is correct in that it only calls the model's forward pass (see `src/transformers/pipelines/text_to_audio.py`, lines 111 to 114 at 9a30753). The problem is in the testing code, which passes generation-only `forward_params` to a model that cannot generate. We can set:

```python
forward_params = {"num_return_sequences": 2, "do_sample": True} if speech_generator.model.can_generate() else {}
```

cc @ylacombe as well for info
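The failure mode is easy to reproduce with a toy stand-in (the `ToyVits` class below is hypothetical, not the real `VitsModel`): a plain `forward` rejects generation kwargs, so the test has to guard on `can_generate()` before passing them.

```python
# Toy reproduction of the bug: a VITS-like model has a plain forward() and
# no generate(), so generation kwargs like num_return_sequences blow up.
# ToyVits is a hypothetical stand-in, not the real transformers class.

class ToyVits:
    def can_generate(self):
        # VITS is not an auto-regressive model, so it cannot generate.
        return False

    def forward(self, input_ids, attention_mask=None):
        # Dummy waveform output, one sample per input token.
        return {"waveform": [0.0] * len(input_ids)}


model = ToyVits()

# Unguarded: this mirrors the original test and raises TypeError.
try:
    model.forward(input_ids=[1, 2, 3], num_return_sequences=2)
except TypeError as e:
    print("unguarded call failed:", e)

# Guarded, as suggested in the review: only pass generation kwargs
# when the model can actually generate.
forward_params = (
    {"num_return_sequences": 2, "do_sample": True} if model.can_generate() else {}
)
out = model.forward(input_ids=[1, 2, 3], **forward_params)
print(len(out["waveform"]))  # 3
```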
@sanchit-gandhi Thank you for the information!
```python
forward_params = (
    {"num_return_sequences": 2, "do_sample": True} if speech_generator.model.can_generate() else {}
)
```
Suggested by @sanchit-gandhi in #26285 (comment).
```python
def is_pipeline_test_to_skip(
    self, pipeline_test_casse_name, config_class, model_architecture, tokenizer_name, processor_name
):
    return True
```
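For context, this hook follows the pattern used by the pipeline test mixin: each model test class can veto individual pipeline tests before they run. A simplified sketch of that dispatch (the class and method bodies below are illustrative; the real logic lives in `tests/test_pipeline_mixin.py`):

```python
# Simplified sketch of how a skip hook like is_pipeline_test_to_skip can be
# consulted before a pipeline test runs. Names and return values here are
# illustrative, not the real transformers test mixin.

class PipelineTesterMixinSketch:
    def is_pipeline_test_to_skip(
        self, pipeline_test_case_name, config_class, model_architecture, tokenizer_name, processor_name
    ):
        # Default: nothing is skipped. Model test classes override this.
        return False

    def run_pipeline_test(self, task):
        if self.is_pipeline_test_to_skip("TextToAudioPipelineTests", None, None, None, None):
            return "skipped"
        return f"ran {task}"


class VitsLikeTest(PipelineTesterMixinSketch):
    def is_pipeline_test_to_skip(self, *args):
        # Mirrors the override in the diff: skip every pipeline test.
        return True


print(PipelineTesterMixinSketch().run_pipeline_test("text-to-audio"))  # ran text-to-audio
print(VitsLikeTest().run_pipeline_test("text-to-audio"))               # skipped
```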
It's probably better not to add entries to `pipeline_model_mapping` here, and instead exclude this model in the corresponding pipeline (test) classes.
```python
TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict(
    [
        ("layoutlm", "TFLayoutLMForQuestionAnswering"),
        ("layoutlmv3", "TFLayoutLMv3ForQuestionAnswering"),
    ]
)
```
The torch part has this model in `MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES`.
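These `*_MAPPING_NAMES` tables map a config's `model_type` string to a class-name string, which the auto classes resolve lazily so that importing one framework does not pull in the others. A rough sketch of the lookup (`resolve_class_name` is a hypothetical helper; the real machinery lives in transformers' `models/auto` modules):

```python
from collections import OrderedDict

# Sketch of the name-table pattern: model_type -> class-name string.
# The table contents match the diff above; the resolver is hypothetical.
TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict(
    [
        ("layoutlm", "TFLayoutLMForQuestionAnswering"),
        ("layoutlmv3", "TFLayoutLMv3ForQuestionAnswering"),
    ]
)

def resolve_class_name(model_type, mapping_names):
    # Hypothetical helper: look up the class name for a config's model_type,
    # failing loudly when the architecture does not support the task.
    if model_type not in mapping_names:
        raise KeyError(f"{model_type} is not supported for this task")
    return mapping_names[model_type]

print(resolve_class_name("layoutlmv3", TF_MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES))
# TFLayoutLMv3ForQuestionAnswering
```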
Thanks for your work @ydshieh
```python
# Need this class in order to create tiny model for `bark`
# TODO (@Yoach) Implement actual test methods
@unittest.skip("So far all tests will fail.")
```
Should this be removed?
The original motivation for this `BarkModelTest` was so we could create the tiny model for `bark`. However, @sanchit-gandhi mentioned (see here):

> Bark is irregular in the sense that it's a concatenation of three auto-regressive models, meaning there's no notion of a forward pass

so this `BarkModelTest` doesn't make sense. I will remove this test class.
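To illustrate why a single forward pass is ill-defined for Bark: it chains three auto-regressive sub-models (semantic, coarse, fine), each consuming the previous stage's output. A toy sketch of that structure (the stage functions below are hypothetical placeholders, not the real Bark API):

```python
# Toy sketch of Bark's structure: three chained auto-regressive stages.
# Each stage only exposes generate-style sampling, so there is no single
# teacher-forced forward pass for the whole model. The arithmetic inside
# each stage is a placeholder, not real modeling code.

def semantic_generate(text_tokens):
    # Stage 1: text tokens -> semantic tokens.
    return [t * 2 for t in text_tokens]

def coarse_generate(semantic_tokens):
    # Stage 2: semantic tokens -> coarse acoustic codes.
    return [t + 1 for t in semantic_tokens]

def fine_generate(coarse_codes):
    # Stage 3: coarse codes -> fine acoustic codes (the audio representation).
    return [t * 10 for t in coarse_codes]

def bark_like_generate(text_tokens):
    # The only sensible entry point is generation: run the three stages
    # end to end, each feeding the next. There is no joint forward().
    return fine_generate(coarse_generate(semantic_generate(text_tokens)))

print(bark_like_generate([1, 2, 3]))  # [30, 50, 70]
```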
Audio-related changes LGTM, thanks @ydshieh!
* Update tiny model summary file
* add to pipeline tests
* revert
* fix import
* fix import
* fix
* fix
* update
* update
* update
* fix
* remove BarkModelTest
* fix

---------

Co-authored-by: ydshieh <[email protected]>
What does this PR do?
Update tiny model information and pipeline tests