
Commit deb6be6

RyanMullins, SindhuRaghuram97, zucchini-nlp, MayankChaturvedi, and douglas-reid authored and committed
* Gemma 3n * initial commit of Gemma 3n scaffold * Fixing param pass through on Gemm3p5RMSNorm * Adds Einsum layer to Gemma 3n * Updating EinsumLayer API * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Adds AltUp to Gemma 3n * Adding Gemma3p5 overall and text config with vision and audio config placeholders (huggingface#3) * Adding gemma3p5 text configs * Adding audio config placeholders * Adding a placeholder for vision configs * Updating MobileNetVisionConfig, inheriting TimmWrapperConfig * Updating text configs * Update src/transformers/models/gemma3p5/modular_gemma3p5.py Co-authored-by: Ryan Mullins <[email protected]> * Removing altup configs to accept the suggested configs * Update src/transformers/models/gemma3p5/modular_gemma3p5.py Co-authored-by: Ryan Mullins <[email protected]> * Updating altup config * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Addressing review comments and updating text configs * Adding a config for activation sparsity * Updating configs to pass through options to super class init and adjust some name prefixes * Updating laurel and altup with corrected config values * Normalizing sub_config initializers --------- Co-authored-by: Ryan Mullins <[email protected]> * Updating MLP with activation sparsity (huggingface#2) * Updating DecoderBlock for Gemma 3n (huggingface#3) * Initial Gemm3nTextModel (huggingface#4) NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference. 
* Adding KV Cache Sharing * Adds Einsum layer to Gemma 3n * Updating EinsumLayer API * Refactored kv cache sharing in attention * Adding KVStore for cache sharing * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update src/transformers/cache_utils.py Co-authored-by: Ryan Mullins <[email protected]> * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Updating KV Cache Sharing implementation * Updating the q and k norm definitions in the attention module * Fixing name error for q,k,v RMS norm to use the right 3n module * Updating MLP with activation sparsity * Updating DecoderBlock for Gemma 3.5 * Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code * Isolating KV Cache logic to relevant components * Fixing logic error in Gemma3nAttention.forward * Refactoring caching contributions and fixing kv_store initialization * Simplifying Configs * Remove errant self from super init call * Bug fix in the Attention module - changing self.head_dim to config.head_dim * Bug fixes in the LaurelBlock and RMS Norm super init call * removing redundant code from a merge * Adding per_layer_inputs to TextModel * Adding preprocess embeddings with altup * Adds per-layer-to-single output and a host of TODOs * Integrating altup predict with the model workflow and other minor bug fixes * Using nn.Embedding temporarily for text model * It goes forward * Minor refactor of attention sparsity and RoPE initialization * Fixing duplicate rope_scaling param bug when loading from pretrained --------- Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: SindhuRaghuram97 <[email protected]> * Normalizing on altup_num_inputs config option * regenerating modeling file after syncing to HEAD * Use torch.std(..., unbiased=False) for activation sparsity (huggingface#8) * Refactoring to a single QVK Norm (huggingface#13) * AltUp: support scale_corrected_output (huggingface#14) * Converts einsums to nn.Linear (huggingface#7) * Converts einsums to nn.Linear * Removing unused variables * Aligning SharedKVCache with HybridCache (huggingface#11) * Alinging SharedKVStore with HybridCache * Remove KVStore. Refactor apply_rotary_pos_emb for sharing * Addressing review comments * Supporting split modality embeddings in Gemma3n (huggingface#10) * Adding the Embedder class * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Addressing review comments, adding audio embedding layers, integrating embedder with the remaining architecture, adding a forward method for conditional generation * Apply suggestions from code review Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Addressing review comments, prop drilling audio and vision configs to the text config * Removing TODO's that have been addressed * Simplify Embedder init and add audio embeddings * Embeddings refactor. 
Adds Gemma3nAudioEmbedder and Gemma3nVisionEmbedder * Refactoring vision and audio embeddings into ConditionalGeneration model --------- Co-authored-by: Ryan Mullins <[email protected]> Co-authored-by: Ryan Mullins <[email protected]> * Updating attention mask for Gemma 3.5 (huggingface#15) * xxx_token_index to xxx_token_id * remvoing deprecated last_cache_position * Removing references to SigLIP * Always init per-layer inputs * Using torch.finfo().min for epsilon_tensor * Gemma3nDecoderLayer inherits from Gemma3DecoderLayer. Remove gating lambdas * fix modular GEMMA3N_INPUTS_DOCSTRING * Gemma3nAttention inherits from Gemma3Attention * Modular inheritance fixes * CausalLM conversion script for 4B model (huggingface#16) * Add Gemma3n Audio Encoder (huggingface#6) * initial commit of Gemma 3.5 scaffold * Fixing param pass through on Gemm3nRMSNorm * Adds Einsum layer to Gemma 3.5 * Updating EinsumLayer API * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Adds AltUp to Gemma 3n * Adding Gemma3n overall and text config with vision and audio config placeholders (huggingface#3) * Adding gemma3n text configs * Adding audio config placeholders * Adding a placeholder for vision configs * Updating MobileNetVisionConfig, inheriting TimmWrapperConfig * Updating text configs * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Removing altup configs to accept the suggested configs * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Updating altup config * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Addressing review comments and updating text configs * Adding a config for activation sparsity * Updating configs to pass through options to super class init and adjust some name prefixes * Updating laurel and altup with corrected config values * Normalizing sub_config initializers --------- Co-authored-by: Ryan Mullins <[email protected]> * Updating MLP with activation sparsity (huggingface#2) * Updating DecoderBlock for Gemma 3.5 (huggingface#3) * Initial Gemm3nTextModel (huggingface#4) NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference. 
* Adding KV Cache Sharing * Adds Einsum layer to Gemma 3.5 * Updating EinsumLayer API * Refactored kv cache sharing in attention * Adding KVStore for cache sharing * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update src/transformers/cache_utils.py Co-authored-by: Ryan Mullins <[email protected]> * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Updating KV Cache Sharing implementation * Updating the q and k norm definitions in the attention module * Fixing name error for q,k,v RMS norm to use the right Gemma 3n module * Updating MLP with activation sparsity * Updating DecoderBlock for Gemma 3.5 * Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code * Isolating KV Cache logic to relevant components * Fixing logic error in Gemma3nAttention.forward * Refactoring caching contributions and fixing kv_store initialization * Simplifying Configs * Remove errant self from super init call * Bug fix in the Attention module - changing self.head_dim to config.head_dim * Bug fixes in the LaurelBlock and RMS Norm super init call * removing redundant code from a merge * Adding per_layer_inputs to TextModel * Adding preprocess embeddings with altup * Adds per-layer-to-single output and a host of TODOs * Integrating altup predict with the model workflow and other minor bug fixes * Using nn.Embedding temporarily for text model * It goes forward * Minor refactor of attention sparsity and RoPE initialization * Fixing duplicate rope_scaling param bug when loading from pretrained --------- Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: SindhuRaghuram97 <[email protected]> * Normalizing on altup_num_inputs config option * Adding audio encoder config * Adds high-level components for Audio Encoder * Implement uniform reducer for Audio Encoder * Adding placeholders for Conformer components in Audio Encoder * Adding placeholders for SubSampleConvProjection components in Audio Encoder * Adding SequenceLayer component placeholders * Implementing Gemma3nAudioEncoder with nn.Sequential * Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential * Implementing Conformer model with SequenceLayers * Use OrderedDict in nn.Sequential initializers * Implements sl.Residual in Torch with nn.Sequential and OrderedDict * Adopting a base SequenceLayer class with default forward() method * Implementing sl.GatedLinearUnit in Torch * Implementing sl.Swish in Torch * Implementing sl.ReLU in Torch * Implementing sl.Scale in Torch * Removing sl.Dropout after tree-shaking * Implementing sl.RMSNorm in Torch with fake shape * Implementing sl.GroupNorm in Torch * Implementing sl.Conv2d in Torch * Implementing sl.Dense in Torch * Removing sl.Delay layers, which act as pass-throughs * Connecting shapes to configs in initializers * Removing sl.Emit * Implementing sl.ExpandDims in Torch * Adding sl.GradientClipping to Torch * Implementing sl.DenseShaped in Torch * Implementing sl.LDPA in Torch * Removing unused sl.CombinedQKVProj class * Fixing erroneous type hint * Implemnenting sl.DepthwiseConv1D in Torch * Implementing sl.MaskInvalid in Torch * Fixes for initialization * Fixes for saving weights * Removing einsums per feedback from HF staff * Removing Sequence Layers idioms from audio encoder * Fixes for reviewer comments * CausalLM conversion script for 
4B model * inv_timescales to non-persistent buffer * Addressing audio encoder Attention feedback * Addressing Gemma3nAudioSSCPConvBlock feedback * Addressing Gemma3nAudioConformerAttention feedback * Addressing padding feedback * Weights conversion loads audio state dict * Always use vision_config so saving works * Token id updates for configs * Stubs for interleaving audio embs * Addressing reviewer feedback --------- Co-authored-by: SindhuRaghuram97 <[email protected]> Co-authored-by: Sindhu Raghuram <[email protected]> * Fixing cache access error * Removing duplicate code from a bad merge * Gemma 3n Text + Vision Part 1 (huggingface#17) * testing utilities for numerics comparisons * Corrected einsum to nn.Linear weights conversion * Inherit scaled word embs from Gemma3 not Bart * Fixing transposes for collapsed linears * More transpose fixes * numpy api fix * RMSNorm: Explicit kwargs, scale_shift=0.0 when with_scale=True * Force AltUp to float32 * Updating debugging script for AudioEncoder debugging * Support divide_weight_by_sqrt_fan_in from JAX for per-layer inputs * Correcting attention einsum conversions * RMSNorm in type of x * Fixing douplicate laurel norm/gating * KV sharing using the right previous indices * Refactor kv shared index computation. Correct frac_shared_layers * Use num_shared_layers instead of inferring from a fraction * fixing a bug for logging * Fix shared data_ptrs in altup inits * rope: adjust proj -> norm -> rope to preserve computation (huggingface#20) * rope: adjust proj -> norm -> rope to preserve computation * Removing some breaking language model fluff in ConditionalGeneration * Consolidate query_states transforms --------- Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Ryan Mullins <[email protected]> * Vectorize the loops in AltUp (huggingface#19) * Vectorize the loops in AltUp * fix typo * Expanding to support batched inputs * remove extra debug script * Fix AltUp.forward --------- Co-authored-by: Ryan Mullins <[email protected]> * Add 'scale_shift=0.0, with_scale=True' to the final norm in TextModel * Convert norm to 1/sqrt (huggingface#21) * Convert norm to 1/sqrt * Scale shift change per Phil's rec * Adding default activation sparsity * Fixing 2B config in weights conversion script * Fixing RMSNorm parameters - adding scale_shift and with_scale * Correcting query pre-attention scaling * Adding query_rescale_scalar to text config * Adding layer_idx to MLP * Permafix for input_layernorm * Use 1/sqrt instead of rsqrt in DecoderLayer * Fix o_proj conversion * Conversion script update for vision encoder * Removing logging for debugging timm model * Fixing bugs in Gemma3nForConditionalGeneration for text generation * Generating the modeling_gemma3n.py file * Removing the addition of an erroneous line in the modeling file * Adding gemma3n text model to modeling_auto * Bugfix: Updating the interleaving of inputs_embeds and vision_embeds * Updating the modeling file with the latest bugfix changes * Updating models/auto for Gemma 3n * using AutoTokenizer in forward test * Adding processing_gemma3n.py * Gemma 3n configured for AutoModel. Conversion script updated. 
* Removing errant merge artifacts --------- Co-authored-by: Mayank Chaturvedi <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Xuan-Son Nguyen <[email protected]> Co-authored-by: Sindhu Raghuram <[email protected]> * Removing errant debugging statements from Gemma 3 * Gemma3n audio model (huggingface#18) * testing utilities for numerics comparisons * Implement CumulativeGroupNorm and add to SubSampleConvProjection and SSCPConvBlock * Add audio version of forward script based on RyanMullins' implementation * Updating to match encoder tests. WIP: config question needs resolving * Updates to audio classes to enable end-to-end running * Removing vestigial classes, cleaning up print statements * Adding SiLU / Swish to audio conformer feed forward block * Shifted Gemma3p5Audio naming prefix to Gemma3NanoAudio * Adding outputs to audio test * Fixes to padding in SSCP and 1D convolution, align RMS Norm with wider model * Update forward test to load from local weights * Update conversion to process / output audio layers * Update __all__ to export audio encoder * AutoModel registration for Gemma 3n Audio * Use AutoModel for ConditionalGeneration.audio_tower * Fixing input_proj_linear transpose * Fixing Gemma3NanoAudioConformerAttention.post conversion * Fixing Gemma3NanoAudioSSCPConvBlock.conv weights conversion * Correcting indentation issue on Gemma3p5RMSNorm --------- Co-authored-by: Ryan Mullins <[email protected]> * Text + Vision Part 2 (huggingface#23) * Updates for ConditionalGeneration.get_image_features * Adding a WIP draft of image_processing_gemma3p5.py * Update src/transformers/models/gemma3p5/modular_gemma3p5.py Co-authored-by: SindhuRaghuram97 <[email protected]> * Modular conversion after github suggested change * Text + image gives good results * Fixing image size preset * Updating configs for the 2B variant in the conversion script * Using final generation config in conversion script --------- Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: SindhuRaghuram97 <[email protected]> * Audio Integration (huggingface#12) * initial commit of Gemma 3n scaffold * Fixing param pass through on Gemm3nRMSNorm * Adds Einsum layer to Gemma 3n * Updating EinsumLayer API * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Adds AltUp to Gemma 3n * Adding Gemma 3n overall and text config with vision and audio config placeholders (huggingface#3) * Adding Gemma 3n text configs * Adding audio config placeholders * Adding a placeholder for vision configs * Updating MobileNetVisionConfig, inheriting TimmWrapperConfig * Updating text configs * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Removing altup configs to accept the suggested configs * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Updating altup config * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Addressing review comments and updating text configs * Adding a config for activation sparsity * Updating configs to pass through options to super class init and adjust some name prefixes * Updating laurel and altup with corrected config values * Normalizing sub_config initializers --------- Co-authored-by: Ryan Mullins <[email protected]> * 
Updating MLP with activation sparsity (huggingface#2) * Updating DecoderBlock for Gemma 3n (huggingface#3) * Initial Gemma3nTextModel (huggingface#4) NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference. * Adding KV Cache Sharing * Adds Einsum layer to Gemma 3n * Updating EinsumLayer API * Refactored kv cache sharing in attention * Adding KVStore for cache sharing * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update modular Co-authored-by: Ryan Mullins <[email protected]> * Update src/transformers/cache_utils.py Co-authored-by: Ryan Mullins <[email protected]> * Undoing erroneous force push * Reverting RMSNorm to with_scale by default * Adds LAuReL to Gemma 3n * Updating KV Cache Sharing implementation * Updating the q and k norm definitions in the attention module * Fixing name error for q,k,v RMS norm to use the right 3n module * Updating MLP with activation sparsity * Updating DecoderBlock for Gemma 3n * Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code * Isolating KV Cache logic to relevant components * Fixing logic error in Gemma3nAttention.forward * Refactoring caching contributions and fixing kv_store initialization * Simplifying Configs * Remove errant self from super init call * Bug fix in the Attention module - changing self.head_dim to config.head_dim * Bug fixes in the LaurelBlock and RMS Norm super init call * removing redundant code from a merge * Adding per_layer_inputs to TextModel * Adding preprocess embeddings with altup * Adds per-layer-to-single output and a host of TODOs * Integrating altup predict with the model workflow and other minor bug fixes * Using nn.Embedding temporarily for text model * It goes forward * Minor refactor of attention sparsity and RoPE initialization * Fixing duplicate rope_scaling param bug when loading from pretrained --------- Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: SindhuRaghuram97 <[email protected]> * Normalizing on altup_num_inputs config option * Adding audio encoder config * Adds high-level components for Audio Encoder * Implement uniform reducer for Audio Encoder * Adding placeholders for Conformer components in Audio Encoder * Adding placeholders for SubSampleConvProjection components in Audio Encoder * Adding SequenceLayer component placeholders * Implementing Gemma3nAudioEncoder with nn.Sequential * Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential * Implementing Conformer model with SequenceLayers * Use OrderedDict in nn.Sequential initializers * Implements sl.Residual in Torch with nn.Sequential and OrderedDict * Adopting a base SequenceLayer class with default forward() method * Implementing sl.GatedLinearUnit in Torch * Implementing sl.Swish in Torch * Implementing sl.ReLU in Torch * Implementing sl.Scale in Torch * Removing sl.Dropout after tree-shaking * Implementing sl.RMSNorm in Torch with fake shape * Implementing sl.GroupNorm in Torch * Implementing sl.Conv2d in Torch * Implementing sl.Dense in Torch * Removing sl.Delay layers, which act as pass-throughs * Connecting shapes to configs in initializers * Removing sl.Emit * Implementing sl.ExpandDims in Torch * Adding sl.GradientClipping to Torch * Implementing sl.DenseShaped in Torch * Implementing sl.LDPA in Torch * Removing unused sl.CombinedQKVProj 
class * Fixing erroneous type hint * Implemnenting sl.DepthwiseConv1D in Torch * Implementing sl.MaskInvalid in Torch * Fixes for initialization * Fixes for saving weights * Removing einsums per feedback from HF staff * Removing Sequence Layers idioms from audio encoder * Fixes for reviewer comments * Converting sl.Frontend to FeatureExtractor * Updates for ConditionalGeneration.get_image_features * Adding a WIP draft of image_processing_gemma3n.py * Update modular Co-authored-by: SindhuRaghuram97 <[email protected]> * Modular conversion after github suggested change * Text + image gives good results * Fixing image size preset * Draft of audio data in chat template * Removing image processing. Using SigLIP instead. * Audio input going end-to-end * Fixing dtype issues in audio encoder * x-lib formatting consistency * Adding example data * Save preprocessor_config.json from conversion script * Instrumentaiton for debugging * Additional instrumentation for preprocessing debugging * Updates to preprocessor, padding; produces correct end-to-end results on sample * Tackling configuraiton TODOs * Start of feature extractor refatcor * Adds Numpy version of USM extractor, removes Torch version and dependencies * Fixing AltUp.correct coef permute * Supporting batches of single audio segment inputs * Docstrings updates for config * In-lining audio feature extraction * Adjustments to conversion script and smoke test script --------- Co-authored-by: SindhuRaghuram97 <[email protected]> Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: pculliton <[email protected]> * Gemma 3n renaming * Removing test data and utilities * Renaming test files * Gemma 3n refactor * Fix tokenizer config in conversion script * Address reviewer feedback * FeatureExtractor returns float32 by default * Adding basic tests for audio, and input name for audio encoder * Audio integration test, updates to model_id for other integration tests * Use scales for q and k norms (huggingface#26) * Update audio integration test to use HF dataset * Reviewer feedback * Expand embedding table to full vocab size in weights conversion * Mix-n-match MatFormers for Gemma 3n (huggingface#25) * Remove in-place operations (huggingface#30) * chore: removing inplace ops * remove [tensor] * n pattern * chore: reviewer feedback in AudioEncoder and AltUp * More grad clipping * Dynamo compatibility * fix: cache slicing error * chore: simplify shared kv cache slicing * chore: vision encoder rename in timm * fix: image processor do_normalize=False * fixup: style * chore: model_doc * fix: docs for code quality * chore: repo consistency * fix: RMSNorm in float as in prior Gemmas * fix: per_layer_inputs = None * chore: Gemma3nForCausalLM from Gemma3nForConditionalGeneration checkpoint * chore: repo consistency * Add initial unit tests for Gemma3nAudioFeatureExtractor (huggingface#27) * Add initial unit tests for Gemma3nAudioFeatureExtractor * Add basic unit tests for Gemma3nProcessor (huggingface#28) Co-authored-by: Douglas Reid <[email protected]> * parameterize tests --------- Co-authored-by: Douglas Reid <[email protected]> * chore: code style * fix: test cases * style and consistency * fix config in the test to be coherent with layer cache sharing * fix hidden states in tests and code * inits and mappings * fix modality prefixes * test order and prefixes * fix test exception * fix class order and reduce model size for faster tests * restore _checkpoint_conversion_mapping to load Caual from Conditional * fix config mapping! 
* fix: reviewer feedback --------- Co-authored-by: SindhuRaghuram97 <[email protected]> Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: raushan <[email protected]> Co-authored-by: Mayank Chaturvedi <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Xuan-Son Nguyen <[email protected]> Co-authored-by: pculliton <[email protected]> Co-authored-by: Aritra Roy Gosthipaty <[email protected]> Co-authored-by: Cyril Vallez <[email protected]> * fix import test * add model args * auto_docstring * replace test path * consistency * skip tests for now * fix docstring for doc builder * skip unused attr --------- Co-authored-by: SindhuRaghuram97 <[email protected]> Co-authored-by: Sindhu Raghuram <[email protected]> Co-authored-by: raushan <[email protected]> Co-authored-by: Mayank Chaturvedi <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Douglas Reid <[email protected]> Co-authored-by: Xuan-Son Nguyen <[email protected]> Co-authored-by: pculliton <[email protected]> Co-authored-by: Aritra Roy Gosthipaty <[email protected]> Co-authored-by: Cyril Vallez <[email protected]> Co-authored-by: Arthur <[email protected]>
1 parent 96c85e6 commit deb6be6

22 files changed: +8723 -0 lines changed

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -959,6 +959,8 @@
       title: FLAVA
     - local: model_doc/gemma3
       title: Gemma3
+    - local: model_doc/gemma3n
+      title: Gemma3n
     - local: model_doc/git
       title: GIT
     - local: model_doc/glm4v
docs/source/en/model_doc/gemma3n.md

Lines changed: 204 additions & 0 deletions
@@ -0,0 +1,204 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# Gemma3n

## Overview

Gemma3n is a multimodal model with pretrained and instruction-tuned variants, available in E4B and E2B sizes. While
large portions of the language model architecture are shared with prior Gemma releases, there are many new additions in
this model, including [Alternating Updates][altup] (AltUp), [Learned Augmented Residual Layer][laurel] (LAuReL),
[MatFormer][matformer], Per-Layer Embeddings (PLE), activation sparsity, and KV cache sharing. The language model uses
a similar attention pattern to [Gemma 3](./gemma3.md), alternating 4 local sliding-window self-attention layers for
every global self-attention layer, with a maximum context length of 32k tokens. Gemma 3n introduces
[MobileNet v5][mobilenetv5] as the vision encoder, using a default resolution of 768x768 pixels, and adds a
[Universal Speech Model][usm] (USM) as the audio encoder.
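
A quick way to check these text-stack settings on a released checkpoint is to read them off the config. This is only a
sketch: the attribute names used below (`max_position_embeddings`, `sliding_window`) are assumptions borrowed from
other Gemma configs, so verify them against [`Gemma3nTextConfig`].

```py
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/gemma-3n-e4b-it")
text_config = config.text_config  # Gemma3nTextConfig

print(text_config.max_position_embeddings)  # maximum context length (32k tokens)
print(text_config.sliding_window)           # window size of the local attention layers
```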

The instruction-tuned variant was post-trained with knowledge distillation and reinforcement learning.

You can find all the original Gemma 3n checkpoints under the [Gemma 3n][gemma3n-collection] release.

> [!TIP]
> Click on the Gemma 3n models in the right sidebar for more examples of how to apply Gemma to different vision, audio,
> and language tasks.

The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="image-text-to-text",
    model="google/gemma-3n-e4b",
    device=0,
    torch_dtype=torch.bfloat16
)
pipeline(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
    text="<start_of_image> What is shown in this image?"
)
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoProcessor, Gemma3nForConditionalGeneration

model = Gemma3nForConditionalGeneration.from_pretrained(
    "google/gemma-3n-e4b-it",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained(
    "google/gemma-3n-e4b-it",
    padding_side="left"
)

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": "You are a helpful assistant."}
        ]
    },
    {
        "role": "user", "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"},
            {"type": "text", "text": "What is shown in this image?"},
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to("cuda")

output = model.generate(**inputs, max_new_tokens=50, cache_implementation="static")
print(processor.decode(output[0], skip_special_tokens=True))
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model google/gemma-3n-e2b --device 0
```

</hfoption>
</hfoptions>

## Notes

- Use [`Gemma3nForConditionalGeneration`] for image-audio-and-text, image-and-text, image-and-audio, audio-and-text,
  image-only and audio-only inputs.
- Gemma 3n supports multiple images per input, but make sure the images are correctly batched before passing them to
  the processor. Each batch should be a list of one or more images.

  ```py
  url_cow = "https://media.istockphoto.com/id/1192867753/photo/cow-in-berchida-beach-siniscola.jpg?s=612x612&w=0&k=20&c=v0hjjniwsMNfJSuKWZuIn8pssmD5h5bSN1peBd1CmH4="
  url_cat = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"

  messages = [
      {
          "role": "system",
          "content": [
              {"type": "text", "text": "You are a helpful assistant."}
          ]
      },
      {
          "role": "user",
          "content": [
              {"type": "image", "url": url_cow},
              {"type": "image", "url": url_cat},
              {"type": "text", "text": "Which image is cuter?"},
          ]
      },
  ]
  ```

- Text passed to the processor should have a `<image_soft_token>` token wherever an image should be inserted.
- Gemma 3n accepts at most one target audio clip per input, though multiple audio clips can be provided in few-shot
  prompts, for example (see the sketch after this list).
- Text passed to the processor should have a `<audio_soft_token>` token wherever an audio clip should be inserted.
- The processor has its own [`~ProcessorMixin.apply_chat_template`] method to convert chat messages to model inputs.

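A minimal sketch of single-clip audio input through the chat template is shown below. The audio content key and the
placeholder clip path are assumptions rather than part of this file, so adapt them to the messages format
[`Gemma3nProcessor`] actually accepts.

```py
import torch
from transformers import AutoProcessor, Gemma3nForConditionalGeneration

model = Gemma3nForConditionalGeneration.from_pretrained(
    "google/gemma-3n-e4b-it", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("google/gemma-3n-e4b-it")

messages = [
    {
        "role": "user",
        "content": [
            # Replace with a real URL or local path to an audio clip.
            {"type": "audio", "audio": "path/to/audio.wav"},
            {"type": "text", "text": "Transcribe this audio clip."},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

output = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(output[0], skip_special_tokens=True))
```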

## Gemma3nAudioFeatureExtractor

[[autodoc]] Gemma3nAudioFeatureExtractor

## Gemma3nProcessor

[[autodoc]] Gemma3nProcessor

## Gemma3nTextConfig

[[autodoc]] Gemma3nTextConfig

## Gemma3nVisionConfig

[[autodoc]] Gemma3nVisionConfig

## Gemma3nAudioConfig

[[autodoc]] Gemma3nAudioConfig

## Gemma3nConfig

[[autodoc]] Gemma3nConfig

## Gemma3nTextModel

[[autodoc]] Gemma3nTextModel
    - forward

## Gemma3nModel

[[autodoc]] Gemma3nModel
    - forward

## Gemma3nForCausalLM

[[autodoc]] Gemma3nForCausalLM
    - forward

## Gemma3nForConditionalGeneration

[[autodoc]] Gemma3nForConditionalGeneration
    - forward

[altup]: https://proceedings.neurips.cc/paper_files/paper/2023/hash/f2059277ac6ce66e7e5543001afa8bb5-Abstract-Conference.html
[attention-mask-viz]: https://github.com/huggingface/transformers/blob/beb9b5b02246b9b7ee81ddf938f93f44cfeaad19/src/transformers/utils/attention_visualizer.py#L139
[gemma3n-collection]: https://huggingface.co/collections/google/gemma-3n
[laurel]: https://arxiv.org/abs/2411.07501
[matformer]: https://arxiv.org/abs/2310.07707
[usm]: https://arxiv.org/abs/2303.01037

src/transformers/models/auto/configuration_auto.py

Lines changed: 11 additions & 0 deletions
@@ -140,6 +140,10 @@
         ("gemma2", "Gemma2Config"),
         ("gemma3", "Gemma3Config"),
         ("gemma3_text", "Gemma3TextConfig"),
+        ("gemma3n", "Gemma3nConfig"),
+        ("gemma3n_audio", "Gemma3nAudioConfig"),
+        ("gemma3n_text", "Gemma3nTextConfig"),
+        ("gemma3n_vision", "Gemma3nVisionConfig"),
         ("git", "GitConfig"),
         ("glm", "GlmConfig"),
         ("glm4", "Glm4Config"),
@@ -518,6 +522,10 @@
         ("gemma2", "Gemma2"),
         ("gemma3", "Gemma3ForConditionalGeneration"),
         ("gemma3_text", "Gemma3ForCausalLM"),
+        ("gemma3n", "Gemma3nForConditionalGeneration"),
+        ("gemma3n_audio", "Gemma3nAudioEncoder"),
+        ("gemma3n_text", "Gemma3nForCausalLM"),
+        ("gemma3n_vision", "TimmWrapperModel"),
         ("git", "GIT"),
         ("glm", "GLM"),
         ("glm4", "GLM4"),
@@ -839,6 +847,9 @@
         ("clip_text_model", "clip"),
         ("aria_text", "aria"),
         ("gemma3_text", "gemma3"),
+        ("gemma3n_audio", "gemma3n"),
+        ("gemma3n_text", "gemma3n"),
+        ("gemma3n_vision", "gemma3n"),
         ("glm4v_text", "glm4v"),
         ("idefics3_vision", "idefics3"),
         ("siglip_vision_model", "siglip"),

src/transformers/models/auto/feature_extraction_auto.py

Lines changed: 1 addition & 0 deletions
@@ -61,6 +61,7 @@
         ("dpt", "DPTFeatureExtractor"),
         ("encodec", "EncodecFeatureExtractor"),
         ("flava", "FlavaFeatureExtractor"),
+        ("gemma3n", "Gemma3nAudioFeatureExtractor"),
         ("glpn", "GLPNFeatureExtractor"),
         ("granite_speech", "GraniteSpeechFeatureExtractor"),
         ("groupvit", "CLIPFeatureExtractor"),

src/transformers/models/auto/image_processing_auto.py

Lines changed: 1 addition & 0 deletions
@@ -88,6 +88,7 @@
         ("focalnet", ("BitImageProcessor", "BitImageProcessorFast")),
         ("fuyu", ("FuyuImageProcessor",)),
         ("gemma3", ("Gemma3ImageProcessor", "Gemma3ImageProcessorFast")),
+        ("gemma3n", ("SiglipImageProcessor", "SiglipImageProcessorFast")),
         ("git", ("CLIPImageProcessor", "CLIPImageProcessorFast")),
         ("glm4v", ("Glm4vImageProcessor", "Glm4vImageProcessorFast")),
         ("glpn", ("GLPNImageProcessor",)),

src/transformers/models/auto/modeling_auto.py

Lines changed: 7 additions & 0 deletions
@@ -132,6 +132,10 @@
         ("gemma2", "Gemma2Model"),
         ("gemma3", "Gemma3Model"),
         ("gemma3_text", "Gemma3TextModel"),
+        ("gemma3n", "Gemma3nModel"),
+        ("gemma3n_audio", "Gemma3nAudioEncoder"),
+        ("gemma3n_text", "Gemma3nTextModel"),
+        ("gemma3n_vision", "TimmWrapperModel"),
         ("git", "GitModel"),
         ("glm", "GlmModel"),
         ("glm4", "Glm4Model"),
@@ -583,6 +587,8 @@
         ("gemma2", "Gemma2ForCausalLM"),
         ("gemma3", "Gemma3ForConditionalGeneration"),
         ("gemma3_text", "Gemma3ForCausalLM"),
+        ("gemma3n", "Gemma3nForConditionalGeneration"),
+        ("gemma3n_text", "Gemma3nForCausalLM"),
         ("git", "GitForCausalLM"),
         ("glm", "GlmForCausalLM"),
         ("glm4", "Glm4ForCausalLM"),
@@ -906,6 +912,7 @@
         ("emu3", "Emu3ForConditionalGeneration"),
         ("fuyu", "FuyuForCausalLM"),
         ("gemma3", "Gemma3ForConditionalGeneration"),
+        ("gemma3n", "Gemma3nForConditionalGeneration"),
         ("git", "GitForCausalLM"),
         ("glm4v", "Glm4vForConditionalGeneration"),
         ("got_ocr2", "GotOcr2ForConditionalGeneration"),

src/transformers/models/auto/processing_auto.py

Lines changed: 1 addition & 0 deletions
@@ -66,6 +66,7 @@
         ("flava", "FlavaProcessor"),
         ("fuyu", "FuyuProcessor"),
         ("gemma3", "Gemma3Processor"),
+        ("gemma3n", "Gemma3nProcessor"),
         ("git", "GitProcessor"),
         ("glm4v", "Glm4vProcessor"),
         ("got_ocr2", "GotOcr2Processor"),

src/transformers/models/auto/tokenization_auto.py

Lines changed: 14 additions & 0 deletions
@@ -236,6 +236,20 @@
                 "GemmaTokenizerFast" if is_tokenizers_available() else None,
             ),
         ),
+        (
+            "gemma3n",
+            (
+                "GemmaTokenizer" if is_sentencepiece_available() else None,
+                "GemmaTokenizerFast" if is_tokenizers_available() else None,
+            ),
+        ),
+        (
+            "gemma3n_text",
+            (
+                "GemmaTokenizer" if is_sentencepiece_available() else None,
+                "GemmaTokenizerFast" if is_tokenizers_available() else None,
+            ),
+        ),
         ("git", ("BertTokenizer", "BertTokenizerFast" if is_tokenizers_available() else None)),
         ("glm", (None, "PreTrainedTokenizerFast" if is_tokenizers_available() else None)),
         ("glm4", (None, "PreTrainedTokenizerFast" if is_tokenizers_available() else None)),

src/transformers/models/gemma3n/__init__.py

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure


if TYPE_CHECKING:
    from .configuration_gemma3n import *
    from .feature_extraction_gemma3n import *
    from .modeling_gemma3n import *
    from .processing_gemma3n import *
else:
    import sys

    _file = globals()["__file__"]
    sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
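
The `_LazyModule` indirection keeps importing `transformers.models.gemma3n` cheap: the submodules listed above are only
imported when one of their exported names is first accessed. A small sketch of that behavior (class names follow the
autodoc list in the new model page):

```py
import transformers

# At this point the package object is a _LazyModule proxy; no modeling code has run yet.
gemma3n = transformers.models.gemma3n

# First attribute access triggers the real import of modeling_gemma3n.
print(gemma3n.Gemma3nForConditionalGeneration.__name__)
```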
