
Conversation

iugoood
Contributor

@iugoood iugoood commented Sep 17, 2025

Add

1. Add the deepseek_v3 model
2. Add unit tests

Note: quantized weights could not be validated.

Usage

from transformers import AutoTokenizer
from mindone.transformers import DeepseekV3ForCausalLM
import mindspore as ms
model = DeepseekV3ForCausalLM.from_pretrained("meta-deepseek_v3/DeepseekV3-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-deepseek_v3/DeepseekV3-2-7b-hf")
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="np")
# Generate
generate_ids = model.generate(ms.tensor(inputs.input_ids), max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]

Performance

Experiments were run on Ascend Atlas 800T A2 machines with MindSpore 2.6.0.

model       | mode     | speed
deepseek_v3 | pynative | 3.32 tokens/s

@iugoood iugoood requested a review from vigo999 as a code owner September 17, 2025 02:26
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @iugoood, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the mindone.transformers library by introducing the DeepseekV3 large language model. It encompasses the full implementation of the model's architecture, including specialized layers such as DeepseekV3RMSNorm, DeepseekV3RotaryEmbedding, DeepseekV3MLP, and a sophisticated DeepseekV3MoE (Mixture-of-Experts) module. The integration also includes necessary updates to the auto-loading mechanisms and a new utility file for generic modeling layers, ensuring that DeepseekV3 can be easily configured and utilized. Furthermore, robust unit tests are provided to validate the MindSpore implementation against its PyTorch counterpart, although a note indicates that quantitative weights could not be fully validated.

Highlights

  • DeepseekV3 Model Integration: The DeepseekV3 model, including its core components like RMSNorm, Rotary Embedding, MLP, and a Mixture-of-Experts (MoE) implementation, has been added to the mindone.transformers library.
  • Unit Test Coverage: Comprehensive unit tests have been introduced for the DeepseekV3 model, ensuring its functionality and compatibility within the MindSpore framework by comparing outputs with PyTorch.
  • Framework Integration: The new model is seamlessly integrated into the auto-configuration and auto-modeling systems, allowing for easy instantiation and use.
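To make the MoE highlight above concrete, here is a minimal, illustrative sketch of top-k expert routing in plain NumPy. This is an assumption-laden toy (function name `moe_forward`, shapes, and gating scheme are all invented for illustration) and is not the DeepseekV3MoE implementation, which additionally includes shared experts and auxiliary-loss-free load balancing.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Toy top-k MoE routing: a linear gate scores experts per token,
    the top-k experts run, and outputs are combined with softmax weights.
    (Illustrative sketch only, not the DeepseekV3MoE API.)"""
    logits = x @ gate_w                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                               # softmax over selected experts
        for j, e in enumerate(top[t]):
            out[t] += w[j] * experts[e](x[t])
    return out

# Toy usage: experts are simple linear maps.
rng = np.random.default_rng(0)
d, n_exp, tokens = 4, 3, 5
gate_w = rng.normal(size=(d, n_exp))
mats = [rng.normal(size=(d, d)) for _ in range(n_exp)]
experts = [lambda v, m=m: m @ v for m in mats]
y = moe_forward(rng.normal(size=(tokens, d)), gate_w, experts)
print(y.shape)  # (5, 4)
```

Each token only pays for `top_k` expert forward passes, which is the core compute saving MoE layers provide.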
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command              | Description
Code Review          | /gemini review       | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary      | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist  | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help         | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for the DeepSeek-V3 model, including its configuration, model implementation, and auto-class registrations. Unit tests are also included to ensure correctness by comparing with the original Hugging Face Transformers implementation. The changes are well-structured and follow the existing patterns in the repository. I have a few minor suggestions for code cleanup.

Comment on lines +677 to +678
MODEL_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3Model"}),
MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3ForCausalLM"}),
Contributor


medium

These lines have trailing commas, which create unnecessary tuples. While this is syntactically valid, it's cleaner to remove them for better code clarity.

Suggested change
- MODEL_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3Model"}),
- MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3ForCausalLM"}),
+ MODEL_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3Model"})
+ MODEL_FOR_CAUSAL_LM_MAPPING_NAMES.update({"deepseek_v3": "DeepseekV3ForCausalLM"})
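The tuple-creation pitfall the reviewer describes is easy to demonstrate: in Python, a trailing comma after an expression statement turns its value into a one-element tuple. A minimal sketch (the `registry` dict here is just a stand-in for the mapping objects in the PR):

```python
# A trailing comma after an expression wraps its value in a tuple.
registry = {}
result = registry.update({"deepseek_v3": "DeepseekV3Model"}),  # note the comma
print(type(result))  # <class 'tuple'>
print(result)        # (None,) -- dict.update returns None
print(registry)      # the update itself still took effect
```

At module level the extra tuple is silently discarded, so the code still works, which is why the issue is flagged as a cleanup rather than a bug.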

Comment on lines +5 to +7
#
# This code is adapted from https://github.com/huggingface/transformers
# with modifications to run transformers on mindspore.
Contributor


medium

This comment block is duplicated. Please remove the redundant part for better readability.

Comment on lines +9 to +10
#
# with modifications to run transformers on mindspore.
Contributor


medium

There's a duplicated comment line here. Please remove the redundant part for better readability.

Important:
When using gradient checkpointing with `use_reentrant=True`, inputs that require gradients (e.g. hidden states)
Collaborator


It seems `use_reentrant` is not implemented in this class?


gradient_checkpointing = False

def __call__(self, *args, **kwargs):
Collaborator


Why use `__call__` instead of `construct` for this `nn.Cell` class?
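For context on the question above: the MindSpore convention is that an `nn.Cell` subclass defines its forward logic in `construct`, and the framework's `__call__` dispatches to it (running framework machinery such as graph compilation along the way); overriding `__call__` directly can bypass that machinery. A pure-Python analogy of the dispatch pattern (the `Cell`/`Doubler` classes here are illustrative stand-ins, not the real `mindspore.nn.Cell`, which does far more in `__call__`):

```python
# Pure-Python analogy of the nn.Cell convention: subclasses implement
# construct(), and the base class's __call__ dispatches to it.
class Cell:
    def __call__(self, *args, **kwargs):
        # In the real framework, hooks like graph capture and mixed
        # precision run here before construct() is invoked.
        return self.construct(*args, **kwargs)

    def construct(self, *args, **kwargs):
        raise NotImplementedError("subclasses define construct()")

class Doubler(Cell):
    def construct(self, x):
        return 2 * x

d = Doubler()
print(d(21))  # 42 -- calling the cell routes through __call__ to construct
```

Overriding `__call__` in a subclass would short-circuit whatever the base `__call__` does, which is presumably why the reviewer is asking.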

@vigo999 vigo999 added the new model add new model to mindone label Sep 29, 2025
@vigo999 vigo999 added this to mindone Sep 29, 2025
@vigo999 vigo999 moved this to In Progress in mindone Sep 29, 2025