Add codestral mamba2 #32080 (merged)
Commits (68 total; this view shows the changes from 32 commits):
100f054 add new model like (molbap)
4df8fd5 draft cuda forward - mismatched keys (sharding on conv1) (molbap)
eaf921f match keys successfully (molbap)
299071f fix split (molbap)
8c61fb2 get generation/forward running (wrong gens, norm?) (molbap)
2101c98 :update (ArthurZucker)
c1a4de7 some refactoring (ArthurZucker)
89c5422 fixes (ArthurZucker)
6570bed works up until copy to cache (ArthurZucker)
41eb3ed fix (ArthurZucker)
e330d94 update (ArthurZucker)
d60f1df NON WORKING VERSION (ArthurZucker)
cd28689 version that work? (ArthurZucker)
8c6794f nit (ArthurZucker)
c0b2f47 fix config (molbap)
80626b3 fix conversion script (molbap)
b2718c1 working cuda forward (molbap)
23db9b7 fix merge conflict (molbap)
13ab6fc nit (ArthurZucker)
fb2186e update (ArthurZucker)
22e9c5b Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap)
490e79e simplifcation (ArthurZucker)
cc90dba make mamba slow simple work (ArthurZucker)
48084e9 no einops (ArthurZucker)
be65a7c todo (ArthurZucker)
32b6017 fix style (molbap)
266a87d no einops (ArthurZucker)
0cd4ecb update fix no einsum (ArthurZucker)
ab4b7e5 nit (ArthurZucker)
bf5464f Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap)
951359c Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod… (molbap)
abd9c5f remove einops (molbap)
1befaa2 bug: scan_output differs strongly (molbap)
e60ea8c add rms norm option (molbap)
b7ce3b1 fix fast + slow generation with and w/o cache :heavy_check_mark: (molbap)
7e14814 draft integration tests (molbap)
43e6989 remove a big chunk of the einsum (molbap)
394ae99 fix slow, fast generations, without any einsum (molbap)
b18e28c fix copies (molbap)
0fce131 fix structure (molbap)
d80c2ce fix up modeling and tests (molbap)
7648852 fix tests (molbap)
d0550ab Merge branch 'main' into add_codestral_mamba2 (molbap)
7522ba9 clamping is indeed worse (molbap)
ed238b6 recover mamba2 cache test (molbap)
f75df9d fix copies (molbap)
ecbd2e6 no cache position (yet) (molbap)
bd07f46 fix tf tests (molbap)
d06ae45 fix matmul for generate (molbap)
f8fa2d4 fixup (molbap)
e580482 skip cache tests for now (molbap)
5311fc3 [run-slow]mamba2 (molbap)
ec56cbe tune out hidden states for padding (molbap)
803cbe7 test batched generation (molbap)
bcc76d3 propagate attention mask changes (molbap)
798ff1e fix past length (molbap)
b295112 fix integration test (molbap)
fccd533 style (molbap)
cbd1622 address comments (molbap)
af58188 update readme (molbap)
fce50da add mamba2 version check (molbap)
2dc979b fix tests (molbap)
ce9d8fe [run-slow]mamba2 (molbap)
c38647a skip edge tests (molbap)
e068ba6 [run-slow]mamba2 (molbap)
0fac4dc last fixup (molbap)
cce32fd [run-slow]mamba2 (molbap)
7052786 update README (molbap)
New file: `docs/source/en/model_doc/mamba2.md` (50 lines added; the path is inferred from the standard model-doc layout). The page is still mostly template placeholders at this point:

```markdown
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# mamba2

## Overview

The mamba2 model was proposed in [<INSERT PAPER NAME HERE>](<INSERT PAPER LINK HERE>) by <INSERT AUTHORS HERE>.
<INSERT SHORT SUMMARY HERE>

The abstract from the paper is the following:

*<INSERT PAPER ABSTRACT HERE>*

Tips:

<INSERT TIPS ABOUT MODEL HERE>

This model was contributed by [INSERT YOUR HF USERNAME HERE](https://huggingface.co/<INSERT YOUR HF USERNAME HERE>).
The original code can be found [here](<INSERT LINK TO GITHUB REPO HERE>).

## Mamba2Config

[[autodoc]] Mamba2Config

## Mamba2ForCausalLM

[[autodoc]] Mamba2ForCausalLM
    - forward

## Mamba2Model

[[autodoc]] Mamba2Model
    - forward
```
Registration of the new subpackage in `src/transformers/models/__init__.py` (filename inferred from context):

```diff
@@ -134,6 +134,7 @@
     lxmert,
     m2m_100,
     mamba,
+    mamba2,
     marian,
     markuplm,
     mask2former,
```
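With this line in place the subpackage becomes importable, and its classes resolve through the `_LazyModule` set up in the subpackage's own `__init__.py` (shown next). A minimal usage sketch, assuming this branch is installed:

```python
from transformers.models import mamba2

# Attribute access triggers the lazy import of configuration_mamba2.
print(mamba2.Mamba2Config)
```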
New file: `src/transformers/models/mamba2/__init__.py` (58 lines added). Note that it imports `Mamba2OnnxConfig`, which the configuration file below does not define:

```python
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import TYPE_CHECKING

from ...utils import (
    OptionalDependencyNotAvailable,
    _LazyModule,
    is_torch_available,
)


_import_structure = {
    "configuration_mamba2": ["Mamba2Config", "Mamba2OnnxConfig"],
}

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    pass
else:
    _import_structure["modeling_mamba2"] = [
        "Mamba2ForCausalLM",
        "Mamba2Model",
        "Mamba2PreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_mamba2 import Mamba2Config, Mamba2OnnxConfig

    try:
        if not is_torch_available():
            raise OptionalDependencyNotAvailable()
    except OptionalDependencyNotAvailable:
        pass
    else:
        from .modeling_mamba2 import (
            Mamba2ForCausalLM,
            Mamba2Model,
            Mamba2PreTrainedModel,
        )
else:
    import sys

    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
```
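The `_LazyModule` indirection defers importing `modeling_mamba2` (and its torch dependency) until an attribute is actually requested. A minimal sketch of that idea, not the actual `transformers._LazyModule` implementation:

```python
import importlib
import types


class LazySubmodules(types.ModuleType):
    """Toy lazy module: maps attribute names to the submodule defining them."""

    def __init__(self, name, import_structure):
        super().__init__(name)
        self._attr_to_module = {
            attr: module for module, attrs in import_structure.items() for attr in attrs
        }

    def __getattr__(self, attr):
        # Only called when the attribute is not yet set on the module.
        if attr not in self._attr_to_module:
            raise AttributeError(f"module {self.__name__!r} has no attribute {attr!r}")
        submodule = importlib.import_module("." + self._attr_to_module[attr], self.__name__)
        value = getattr(submodule, attr)
        setattr(self, attr, value)  # cache, so later lookups skip __getattr__
        return value
```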
New file: `src/transformers/models/mamba2/configuration_mamba2.py` (174 lines added):

````python
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MAMBA2 configuration"""

import math

from ...configuration_utils import PretrainedConfig
from ...utils import logging


logger = logging.get_logger(__name__)


class Mamba2Config(PretrainedConfig):
    """
    This is the configuration class to store the configuration of a [`Mamba2Model`]. It is used to instantiate a MAMBA2
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of the MAMBA2
    [state-spaces/mamba2-2.8b](https://huggingface.co/state-spaces/mamba2-2.8b) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.

    Args:
        vocab_size (`int`, *optional*, defaults to 32768):
            Vocabulary size of the MAMBA2 model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`Mamba2Model`].
        hidden_size (`int`, *optional*, defaults to 4096):
            Dimensionality of the embeddings and hidden states.
        state_size (`int`, *optional*, defaults to 128): shape of the state space latents.
        num_hidden_layers (`int`, *optional*, defaults to 64):
            Number of hidden layers in the model.
        layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
            The epsilon to use in the layer normalization layers.
        pad_token_id (`int`, *optional*, defaults to 1):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 0):
            The id of the beginning of sentence token in the vocabulary.
        eos_token_id (`int`, *optional*, defaults to 2):
            The id of the end of sentence token in the vocabulary.
        expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
        conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
        use_bias (`bool`, *optional*, defaults to `False`):
            Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
        use_conv_bias (`bool`, *optional*, defaults to `True`):
            Whether or not to use bias in the convolution layer of the mixer block.
        hidden_act (`str`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        initializer_range (`float`, *optional*, defaults to 0.1):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        residual_in_fp32 (`bool`, *optional*, defaults to `True`):
            Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model.
        time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
            Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
        time_step_scale (`float`, *optional*, defaults to 1.0):
            Scale used to scale `dt_proj.bias`.
        time_step_min (`float`, *optional*, defaults to 0.001):
            Minimum `time_step` used to bound `dt_proj.bias`.
        time_step_max (`float`, *optional*, defaults to 0.1):
            Maximum `time_step` used to bound `dt_proj.bias`.
        time_step_init_scheme (`str`, *optional*, defaults to `"random"`):
            Init scheme used for `dt_proj.weight`. Should be one of `["random", "uniform"]`.
        time_step_floor (`float`, *optional*, defaults to 0.0001):
            Minimum clamping value of the `dt_proj.bias` layer initialization.
        rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
            Whether or not to rescale `out_proj` weights when initializing.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the cache should be used.

    Example:

    ```python
    >>> from transformers import Mamba2Config, Mamba2Model

    >>> # Initializing a Mamba2 configuration
    >>> configuration = Mamba2Config()

    >>> # Initializing a model (with random weights) from the configuration
    >>> model = Mamba2Model(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "mamba2"

    def __init__(
        self,
        num_heads=128,
        head_dim=64,
        vocab_size=32768,
        hidden_size=4096,
        state_size=128,
        num_hidden_layers=64,
        layer_norm_epsilon=1e-5,
        pad_token_id=1,
        bos_token_id=0,
        eos_token_id=2,
        expand=2,
        conv_kernel=4,
        n_groups=8,
        use_bias=False,
        use_conv_bias=True,
        hidden_act="silu",
        initializer_range=0.1,
        residual_in_fp32=True,
        time_step_rank="auto",
        time_step_scale=1.0,
        time_step_min=0.001,
        time_step_max=0.1,
        time_step_init_scheme="random",
        time_step_floor=1e-4,
        time_step_limit=(0.0, float("inf")),
        rescale_prenorm_residual=False,
        use_cache=True,
        norm_before_gate=True,
        chunk_size=256,
        tie_word_embeddings=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.state_size = state_size
        self.num_hidden_layers = num_hidden_layers
        self.layer_norm_epsilon = layer_norm_epsilon
        self.conv_kernel = conv_kernel
        self.expand = expand
        self.intermediate_size = int(expand * self.hidden_size)
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        self.pad_token_id = pad_token_id
        self.use_bias = use_bias
        self.use_conv_bias = use_conv_bias
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
        self.time_step_scale = time_step_scale
        self.time_step_min = time_step_min
        self.time_step_max = time_step_max
        self.time_step_init_scheme = time_step_init_scheme
        self.time_step_floor = time_step_floor
        self.rescale_prenorm_residual = rescale_prenorm_residual
        self.residual_in_fp32 = residual_in_fp32
        self.use_cache = use_cache
        self.n_groups = n_groups
        self.num_heads = num_heads
        self.head_dim = head_dim
        self.norm_before_gate = norm_before_gate
        self.chunk_size = chunk_size
        self.time_step_limit = time_step_limit
        self.tie_word_embeddings = tie_word_embeddings

        super().__init__(
            bos_token_id=bos_token_id,
            eos_token_id=eos_token_id,
            pad_token_id=pad_token_id,
            tie_word_embeddings=tie_word_embeddings,
            **kwargs,
        )
````
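A quick hedged check of how the derived sizes in this config hang together under the defaults above (assuming this branch is installed):

```python
from transformers import Mamba2Config

config = Mamba2Config()
# intermediate_size is derived: expand * hidden_size = 2 * 4096 = 8192
assert config.intermediate_size == config.expand * config.hidden_size
# the heads tile the intermediate dimension: 128 heads * 64 head_dim = 8192
assert config.num_heads * config.head_dim == config.intermediate_size
# time_step_rank="auto" resolves to math.ceil(hidden_size / 16) = 256
print(config.time_step_rank)
```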