Support Cache Class for New Versions of Transformers Library #1341
Conversation
This pull request was exported from Phabricator. Differential Revision: D62408520

Force-pushed through 9e412b8 → 3c1fc7d → 8b93cf0 → 8600914 → 189676a → 4897dad → ef91a25 → f52946e → 6bfe388 → bb7213c, re-exporting from Phabricator (Differential Revision: D62408520) after each push.

This pull request has been merged in 7b22550.
Summary:
Fixes D62210529 (now reverted by D62262760). The `transformers` library is now an optional dependency: we do not depend on it directly, but we keep some logic for `transformers` models here. The library is imported only when a model already has it installed in the corresponding environment. This TARGETS configuration prevents `transformers` version conflicts, such as the one that caused T200877742.
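A minimal sketch of the conditional-import pattern described above, assuming the check lives in a small helper (the name `_is_transformers_model` is illustrative, not the PR's actual function):

```python
def _is_transformers_model(model) -> bool:
    """Check whether `model` comes from the (optional) transformers library."""
    try:
        # transformers is an optional dependency: only import it if it is
        # already installed in the environment alongside the model.
        import transformers
    except ImportError:
        return False
    return isinstance(model, transformers.PreTrainedModel)
```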
This change adds support for the new `transformers` `Cache` objects. It may need revisiting in the future, since LLMs handle caching differently: some manage the cache themselves, some do not, and some do not support `Cache` objects yet. Llama models expose a `_supports_cache_class` flag that indicates whether the new `Cache` object is supported; if the flag is not set, we assume the model takes the legacy format (a tuple of past key/value tensors). Multiple checks were added to ensure compatibility.

(minor) Also changed the defaults for LLM generation to dismiss warnings (this does not change generation behavior).
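A minimal sketch of the cache-compatibility handling described above, assuming the `DynamicCache` class and its `from_legacy_cache`/`to_legacy_cache` helpers from `transformers.cache_utils`; the helper name and exact checks are assumptions, not the PR's actual implementation:

```python
from typing import Any


def _align_past_key_values(model: Any, past_key_values: Any) -> Any:
    """Return `past_key_values` in the format `model` expects."""
    try:
        # transformers stays optional: bail out if it is not installed.
        from transformers.cache_utils import DynamicCache
    except ImportError:
        return past_key_values

    supports_cache_class = getattr(model, "_supports_cache_class", False)

    if supports_cache_class and isinstance(past_key_values, tuple):
        # Legacy tuple of (key, value) pairs -> new Cache object.
        return DynamicCache.from_legacy_cache(past_key_values)

    if not supports_cache_class and isinstance(past_key_values, DynamicCache):
        # New Cache object -> legacy tuple format.
        return past_key_values.to_legacy_cache()

    return past_key_values
```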
Differential Revision: D62408520
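The "(minor)" generation-defaults change is not spelled out above; as an illustration only (assuming the common transformers warning about an unset `pad_token_id` during `generate`), passing an explicit value silences the warning without changing the generated text:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model name; any causal LM from transformers behaves the same way.
name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,                      # deterministic decoding
        pad_token_id=tokenizer.eos_token_id,  # explicit default silences the warning
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```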