Merged
Commits
68 commits
100f054
add new model like
molbap Jul 16, 2024
4df8fd5
draft cuda forward - mismatched keys (sharding on conv1)
molbap Jul 16, 2024
eaf921f
match keys successfully
molbap Jul 17, 2024
299071f
fix split
molbap Jul 17, 2024
8c61fb2
get generation/forward running (wrong gens, norm?)
molbap Jul 17, 2024
2101c98
:update
ArthurZucker Jul 17, 2024
c1a4de7
some refactoring
ArthurZucker Jul 17, 2024
89c5422
fixes
ArthurZucker Jul 17, 2024
6570bed
works up until copy to cache
ArthurZucker Jul 17, 2024
41eb3ed
fix
ArthurZucker Jul 17, 2024
e330d94
update
ArthurZucker Jul 17, 2024
d60f1df
NON WORKING VERSION
ArthurZucker Jul 17, 2024
cd28689
version that work?
ArthurZucker Jul 18, 2024
8c6794f
nit
ArthurZucker Jul 18, 2024
c0b2f47
fix config
molbap Jul 18, 2024
80626b3
fix conversion script
molbap Jul 18, 2024
b2718c1
working cuda forward
molbap Jul 18, 2024
23db9b7
fix merge conflict
molbap Jul 18, 2024
13ab6fc
nit
ArthurZucker Jul 18, 2024
fb2186e
update
ArthurZucker Jul 18, 2024
22e9c5b
Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod…
molbap Jul 18, 2024
490e79e
simplifcation
ArthurZucker Jul 18, 2024
cc90dba
make mamba slow simple work
ArthurZucker Jul 18, 2024
48084e9
no einops
ArthurZucker Jul 18, 2024
be65a7c
todo
ArthurZucker Jul 18, 2024
32b6017
fix style
molbap Jul 18, 2024
266a87d
no einops
ArthurZucker Jul 18, 2024
0cd4ecb
update fix no einsum
ArthurZucker Jul 18, 2024
ab4b7e5
nit
ArthurZucker Jul 18, 2024
bf5464f
Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod…
molbap Jul 19, 2024
951359c
Merge branch 'add_codestral_mamba2' of github.com:huggingface/new-mod…
molbap Jul 19, 2024
abd9c5f
remove einops
molbap Jul 19, 2024
1befaa2
bug: scan_output differs strongly
molbap Jul 19, 2024
e60ea8c
add rms norm option
molbap Jul 25, 2024
b7ce3b1
fix fast + slow generation with and w/o cache :heavy_check_mark:
molbap Jul 25, 2024
7e14814
draft integration tests
molbap Jul 25, 2024
43e6989
remove a big chunk of the einsum
molbap Jul 27, 2024
394ae99
fix slow, fast generations, without any einsum
molbap Jul 30, 2024
b18e28c
fix copies
molbap Jul 30, 2024
0fce131
fix structure
molbap Jul 30, 2024
d80c2ce
fix up modeling and tests
molbap Jul 31, 2024
7648852
fix tests
molbap Aug 1, 2024
d0550ab
Merge branch 'main' into add_codestral_mamba2
molbap Aug 1, 2024
7522ba9
clamping is indeed worse
molbap Aug 1, 2024
ed238b6
recover mamba2 cache test
molbap Aug 1, 2024
f75df9d
fix copies
molbap Aug 1, 2024
ecbd2e6
no cache position (yet)
molbap Aug 1, 2024
bd07f46
fix tf tests
molbap Aug 1, 2024
d06ae45
fix matmul for generate
molbap Aug 2, 2024
f8fa2d4
fixup
molbap Aug 2, 2024
e580482
skip cache tests for now
molbap Aug 2, 2024
5311fc3
[run-slow]mamba2
molbap Aug 2, 2024
ec56cbe
tune out hidden states for padding
molbap Aug 2, 2024
803cbe7
test batched generation
molbap Aug 2, 2024
bcc76d3
propagate attention mask changes
molbap Aug 2, 2024
798ff1e
fix past length
molbap Aug 5, 2024
b295112
fix integration test
molbap Aug 5, 2024
fccd533
style
molbap Aug 5, 2024
cbd1622
address comments
molbap Aug 6, 2024
af58188
update readme
molbap Aug 6, 2024
fce50da
add mamba2 version check
molbap Aug 6, 2024
2dc979b
fix tests
molbap Aug 6, 2024
ce9d8fe
[run-slow]mamba2
molbap Aug 6, 2024
c38647a
skip edge tests
molbap Aug 6, 2024
e068ba6
[run-slow]mamba2
molbap Aug 6, 2024
0fac4dc
last fixup
molbap Aug 6, 2024
cce32fd
[run-slow]mamba2
molbap Aug 6, 2024
7052786
update README
molbap Aug 6, 2024
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -432,6 +432,8 @@
title: MADLAD-400
- local: model_doc/mamba
title: Mamba
- local: model_doc/mamba2
title: mamba2
- local: model_doc/marian
title: MarianMT
- local: model_doc/markuplm
50 changes: 50 additions & 0 deletions docs/source/en/model_doc/mamba2.md
@@ -0,0 +1,50 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# mamba2

## Overview

The mamba2 model was proposed in [<INSERT PAPER NAME HERE>](<INSERT PAPER LINK HERE>) by <INSERT AUTHORS HERE>.
<INSERT SHORT SUMMARY HERE>

The abstract from the paper is the following:

*<INSERT PAPER ABSTRACT HERE>*

Tips:

<INSERT TIPS ABOUT MODEL HERE>

This model was contributed by [INSERT YOUR HF USERNAME HERE](https://huggingface.co/<INSERT YOUR HF USERNAME HERE>).
The original code can be found [here](<INSERT LINK TO GITHUB REPO HERE>).


## Mamba2Config

[[autodoc]] Mamba2Config

## Mamba2Model

[[autodoc]] Mamba2Model
- forward

## Mamba2ForCausalLM

[[autodoc]] Mamba2ForCausalLM
- forward
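
While the sections above are still template placeholders, here is a minimal smoke-test sketch of the new classes. The tiny sizes are illustrative only (chosen so that `num_heads * head_dim == expand * hidden_size`); they do not correspond to any released checkpoint.

```python
import torch

from transformers import Mamba2Config, Mamba2ForCausalLM

# Tiny, randomly initialized model so no pretrained checkpoint is needed.
# intermediate_size = expand * hidden_size = 512 = num_heads * head_dim.
config = Mamba2Config(
    vocab_size=1024,
    hidden_size=256,
    num_hidden_layers=2,
    num_heads=8,
    head_dim=64,
    n_groups=1,
    state_size=32,
)
model = Mamba2ForCausalLM(config)

input_ids = torch.randint(0, config.vocab_size, (1, 8))
with torch.no_grad():
    logits = model(input_ids).logits
print(logits.shape)  # torch.Size([1, 8, 1024])
```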
14 changes: 14 additions & 0 deletions src/transformers/__init__.py
@@ -537,6 +537,7 @@
],
"models.m2m_100": ["M2M100Config"],
"models.mamba": ["MambaConfig"],
"models.mamba2": ["Mamba2Config"],
"models.marian": ["MarianConfig"],
"models.markuplm": [
"MarkupLMConfig",
@@ -2526,6 +2527,13 @@
"MambaPreTrainedModel",
]
)
_import_structure["models.mamba2"].extend(
[
"Mamba2ForCausalLM",
"Mamba2Model",
"Mamba2PreTrainedModel",
]
)
_import_structure["models.marian"].extend(["MarianForCausalLM", "MarianModel", "MarianMTModel"])
_import_structure["models.markuplm"].extend(
[
@@ -5199,6 +5207,7 @@
)
from .models.m2m_100 import M2M100Config
from .models.mamba import MambaConfig
from .models.mamba2 import Mamba2Config
from .models.marian import MarianConfig
from .models.markuplm import (
MarkupLMConfig,
@@ -6990,6 +6999,11 @@
MambaModel,
MambaPreTrainedModel,
)
from .models.mamba2 import (
Mamba2ForCausalLM,
Mamba2Model,
Mamba2PreTrainedModel,
)
from .models.marian import MarianForCausalLM, MarianModel, MarianMTModel
from .models.markuplm import (
MarkupLMForQuestionAnswering,
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -134,6 +134,7 @@
lxmert,
m2m_100,
mamba,
mamba2,
marian,
markuplm,
mask2former,
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -151,6 +151,7 @@
("lxmert", "LxmertConfig"),
("m2m_100", "M2M100Config"),
("mamba", "MambaConfig"),
("mamba2", "Mamba2Config"),
("marian", "MarianConfig"),
("markuplm", "MarkupLMConfig"),
("mask2former", "Mask2FormerConfig"),
@@ -437,6 +438,7 @@
("m2m_100", "M2M100"),
("madlad-400", "MADLAD-400"),
("mamba", "Mamba"),
("mamba2", "mamba2"),
("marian", "Marian"),
("markuplm", "MarkupLM"),
("mask2former", "Mask2Former"),
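
With the two `mamba2` entries registered above, `AutoConfig` can resolve the new model type by name. A minimal sketch of what the mapping enables:

```python
from transformers import AutoConfig, Mamba2Config

# "mamba2" is now a registered model_type, so AutoConfig can build its config by name.
config = AutoConfig.for_model("mamba2", num_hidden_layers=2)
assert isinstance(config, Mamba2Config)
assert config.model_type == "mamba2"
```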
4 changes: 4 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -143,6 +143,7 @@
("lxmert", "LxmertModel"),
("m2m_100", "M2M100Model"),
("mamba", "MambaModel"),
("mamba2", "Mamba2Model"),
("marian", "MarianModel"),
("markuplm", "MarkupLMModel"),
("mask2former", "Mask2FormerModel"),
@@ -308,6 +309,7 @@
("luke", "LukeForMaskedLM"),
("lxmert", "LxmertForPreTraining"),
("mamba", "MambaForCausalLM"),
("mamba2", "Mamba2ForCausalLM"),
("mega", "MegaForMaskedLM"),
("megatron-bert", "MegatronBertForPreTraining"),
("mobilebert", "MobileBertForPreTraining"),
@@ -392,6 +394,7 @@
("luke", "LukeForMaskedLM"),
("m2m_100", "M2M100ForConditionalGeneration"),
("mamba", "MambaForCausalLM"),
("mamba2", "Mamba2ForCausalLM"),
("marian", "MarianMTModel"),
("mega", "MegaForMaskedLM"),
("megatron-bert", "MegatronBertForCausalLM"),
@@ -470,6 +473,7 @@
("jetmoe", "JetMoeForCausalLM"),
("llama", "LlamaForCausalLM"),
("mamba", "MambaForCausalLM"),
("mamba2", "Mamba2ForCausalLM"),
("marian", "MarianForCausalLM"),
("mbart", "MBartForCausalLM"),
("mega", "MegaForCausalLM"),
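
The new MODEL_FOR_CAUSAL_LM entry is what routes the auto classes to the mamba2 head. A small sketch (the tiny config values are illustrative, not from this PR):

```python
from transformers import AutoModelForCausalLM, Mamba2Config, Mamba2ForCausalLM

# The ("mamba2", "Mamba2ForCausalLM") mapping lets the auto class pick the new model.
config = Mamba2Config(
    vocab_size=1024,
    hidden_size=256,
    num_hidden_layers=2,
    num_heads=8,
    head_dim=64,
    n_groups=1,
    state_size=32,
)
model = AutoModelForCausalLM.from_config(config)
assert isinstance(model, Mamba2ForCausalLM)
```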
1 change: 1 addition & 0 deletions src/transformers/models/auto/tokenization_auto.py
@@ -263,6 +263,7 @@
("lxmert", ("LxmertTokenizer", "LxmertTokenizerFast" if is_tokenizers_available() else None)),
("m2m_100", ("M2M100Tokenizer" if is_sentencepiece_available() else None, None)),
("mamba", (None, "GPTNeoXTokenizerFast" if is_tokenizers_available() else None)),
("mamba2", (None, "GPTNeoXTokenizerFast" if is_tokenizers_available() else None)),
("marian", ("MarianTokenizer" if is_sentencepiece_available() else None, None)),
(
"mbart",
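
mamba2 reuses the GPT-NeoX fast tokenizer, exactly like mamba. A sketch that inspects the (internal) auto-mapping instead of downloading a checkpoint, assuming the usual `_LazyAutoMapping` lookup behaviour:

```python
from transformers import Mamba2Config
from transformers.models.auto.tokenization_auto import TOKENIZER_MAPPING

# The entry above maps Mamba2Config to (None, "GPTNeoXTokenizerFast"):
# no slow tokenizer, and the GPT-NeoX fast tokenizer when `tokenizers` is installed.
slow_cls, fast_cls = TOKENIZER_MAPPING[Mamba2Config]
print(slow_cls, fast_cls.__name__)  # None GPTNeoXTokenizerFast
```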
58 changes: 58 additions & 0 deletions src/transformers/models/mamba2/__init__.py
@@ -0,0 +1,58 @@
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import TYPE_CHECKING

from ...utils import (
OptionalDependencyNotAvailable,
_LazyModule,
is_torch_available,
)


_import_structure = {
"configuration_mamba2": ["Mamba2Config", "Mamba2OnnxConfig"],
}

try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
_import_structure["modeling_mamba2"] = [
"Mamba2ForCausalLM",
"Mamba2Model",
"Mamba2PreTrainedModel",
]


if TYPE_CHECKING:
from .configuration_mamba2 import Mamba2Config

try:
if not is_torch_available():
raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
pass
else:
from .modeling_mamba2 import (
Mamba2ForCausalLM,
Mamba2Model,
Mamba2PreTrainedModel,
)
else:
import sys

sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__)
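
A short sketch of what the lazy-import scaffolding above provides: the configuration is always importable, while the modeling classes only resolve when torch is installed (this just mirrors the `_import_structure` branches, no extra behaviour):

```python
from transformers.models.mamba2 import Mamba2Config  # available without torch
from transformers.utils import is_torch_available

if is_torch_available():
    # Only registered in _import_structure when torch is present.
    from transformers.models.mamba2 import Mamba2ForCausalLM, Mamba2Model, Mamba2PreTrainedModel

    print(Mamba2Model.__name__)
```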
174 changes: 174 additions & 0 deletions src/transformers/models/mamba2/configuration_mamba2.py
@@ -0,0 +1,174 @@
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MAMBA2 configuration"""

import math

from ...configuration_utils import PretrainedConfig
from ...utils import logging


logger = logging.get_logger(__name__)


class Mamba2Config(PretrainedConfig):
"""
This is the configuration class to store the configuration of a [`Mamba2Model`]. It is used to instantiate a MAMBA2
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the MAMBA2
[state-spaces/mamba2-2.8b](https://huggingface.co/state-spaces/mamba2-2.8b) architecture.

Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.


Args:
num_heads (`int`, *optional*, defaults to 128):
Number of heads of the mamba2 mixer; with the defaults, `num_heads * head_dim` equals the intermediate size.
head_dim (`int`, *optional*, defaults to 64):
Dimension of each head of the mamba2 mixer.
vocab_size (`int`, *optional*, defaults to 32768):
Vocabulary size of the MAMBA2 model. Defines the number of different tokens that can be represented by the
`inputs_ids` passed when calling [`Mamba2Model`].
hidden_size (`int`, *optional*, defaults to 4096):
Dimensionality of the embeddings and hidden states.
state_size (`int`, *optional*, defaults to 128): shape of the state space latents.
num_hidden_layers (`int`, *optional*, defaults to 64):
Number of hidden layers in the model.
layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
The epsilon to use in the layer normalization layers.
pad_token_id (`int`, *optional*, defaults to 1):
Padding token id.
bos_token_id (`int`, *optional*, defaults to 0):
The id of the beginning of sentence token in the vocabulary.
eos_token_id (`int`, *optional*, defaults to 2):
The id of the end of sentence token in the vocabulary.
expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
n_groups (`int`, *optional*, defaults to 8):
Number of groups for the evolution matrices of the mamba2 mixer.
use_bias (`bool`, *optional*, defaults to `False`):
Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
use_conv_bias (`bool`, *optional*, defaults to `True`):
Whether or not to use bias in the convolution layer of the mixer block.
hidden_act (`str`, *optional*, defaults to `"silu"`):
The non-linear activation function (function or string) in the decoder.
initializer_range (`float`, *optional*, defaults to 0.1):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
residual_in_fp32 (`bool`, *optional*, defaults to `True`):
Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model.
time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
time_step_scale (`float`, *optional*, defaults to 1.0):
Scale used to scale `dt_proj.bias`.
time_step_min (`float`, *optional*, defaults to 0.001):
Minimum `time_step` used to bound `dt_proj.bias`.
time_step_max (`float`, *optional*, defaults to 0.1):
Maximum `time_step` used to bound `dt_proj.bias`.
time_step_init_scheme (`str`, *optional*, defaults to `"random"`):
Init scheme used for `dt_proj.weight`. Should be one of `["random", "uniform"]`.
time_step_floor (`float`, *optional*, defaults to 0.0001):
Minimum clamping value of the `dt_proj.bias` layer initialization.
time_step_limit (`tuple`, *optional*, defaults to `(0.0, inf)`):
Accepted range of the time step values.
rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
Whether or not to rescale `out_proj` weights when initializing.
use_cache (`bool`, *optional*, defaults to `True`):
Whether or not the cache should be used.
norm_before_gate (`bool`, *optional*, defaults to `True`):
Whether or not to normalize before the gate in the mixer block.
chunk_size (`int`, *optional*, defaults to 256):
Size of the chunks the sequence is split into for the chunked scan.
tie_word_embeddings (`bool`, *optional*, defaults to `False`):
Whether or not the model's input and output word embeddings should be tied.


Example:

```python
>>> from transformers import Mamba2Config, Mamba2Model

>>> # Initializing a Mamba2 configuration
>>> configuration = Mamba2Config()

>>> # Initializing a model (with random weights) from the configuration
>>> model = Mamba2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```"""

model_type = "mamba2"

def __init__(
self,
num_heads=128,
head_dim=64,
vocab_size=32768,
hidden_size=4096,
state_size=128,
num_hidden_layers=64,
layer_norm_epsilon=1e-5,
pad_token_id=1,
bos_token_id=0,
eos_token_id=2,
expand=2,
conv_kernel=4,
n_groups=8,
use_bias=False,
use_conv_bias=True,
hidden_act="silu",
initializer_range=0.1,
residual_in_fp32=True,
time_step_rank="auto",
time_step_scale=1.0,
time_step_min=0.001,
time_step_max=0.1,
time_step_init_scheme="random",
time_step_floor=1e-4,
time_step_limit=(0.0, float("inf")),
rescale_prenorm_residual=False,
use_cache=True,
norm_before_gate=True,
chunk_size=256,
tie_word_embeddings=False,
**kwargs,
):
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.state_size = state_size
self.num_hidden_layers = num_hidden_layers
self.layer_norm_epsilon = layer_norm_epsilon
self.conv_kernel = conv_kernel
self.expand = expand
self.intermediate_size = int(expand * self.hidden_size)
self.bos_token_id = bos_token_id
self.eos_token_id = eos_token_id
self.pad_token_id = pad_token_id
self.use_bias = use_bias
self.use_conv_bias = use_conv_bias
self.hidden_act = hidden_act
self.initializer_range = initializer_range
self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
self.time_step_scale = time_step_scale
self.time_step_min = time_step_min
self.time_step_max = time_step_max
self.time_step_init_scheme = time_step_init_scheme
self.time_step_floor = time_step_floor
self.rescale_prenorm_residual = rescale_prenorm_residual
self.residual_in_fp32 = residual_in_fp32
self.use_cache = use_cache
self.n_groups = n_groups
self.num_heads = num_heads
self.head_dim = head_dim
self.norm_before_gate = norm_before_gate
self.chunk_size = chunk_size
self.time_step_limit = time_step_limit
self.tie_word_embeddings = tie_word_embeddings

super().__init__(
bos_token_id=bos_token_id,
eos_token_id=eos_token_id,
pad_token_id=pad_token_id,
tie_word_embeddings=tie_word_embeddings,
**kwargs,
)
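
For reference, a small sketch of the values the constructor derives from its arguments (the numbers follow the defaults in the signature above):

```python
import math

from transformers import Mamba2Config

config = Mamba2Config()

# intermediate_size is computed as expand * hidden_size.
assert config.intermediate_size == config.expand * config.hidden_size  # 2 * 4096 = 8192

# "auto" resolves time_step_rank to ceil(hidden_size / 16).
assert config.time_step_rank == math.ceil(config.hidden_size / 16)  # 256

# With the defaults, the heads tile the intermediate dimension exactly.
assert config.num_heads * config.head_dim == config.intermediate_size  # 128 * 64 = 8192
```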