
Conversation

@h-guo18
Contributor

@h-guo18 h-guo18 commented Sep 22, 2025

What does this PR do?

Type of change: New Feature

Overview:
Previous related discussion:

This experimental PR explores resolving distillation inefficiency with a trainer that decouples student and teacher placement and communicates between them over the NCCL backend using torch.distributed.

It has a few benefits compared to the previously used hf.trainer + FSDP:

  • Student and teacher can be placed on different devices with different parallelism schemes, with async communication between them.
  • Memory efficient: it can train with 4x-8x longer sequence lengths on the Eagle3 training workload.
  • Compatible with torch.compile; we observed a 1.5x speedup when the student and teacher speeds match each other.

See section below for profiling results.
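
For illustration, below is a minimal, hypothetical sketch of the decoupled placement and async NCCL communication pattern described above. It is not the trainer code in this PR; the 4+4 teacher/student split, tensor shapes, dtype, and buffer handling are all assumptions.

```python
# Hypothetical sketch only: 4 teacher ranks (0-3) stream hidden states to 4
# paired student ranks (4-7) over NCCL with async point-to-point ops, so the
# teacher forward and the student step can overlap.
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

WORLD_SIZE = 8  # assumed 4 teacher + 4 student ranks


def worker(rank: int):
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=WORLD_SIZE)
    torch.cuda.set_device(rank)

    is_teacher = rank < WORLD_SIZE // 2
    peer = rank + WORLD_SIZE // 2 if is_teacher else rank - WORLD_SIZE // 2

    # Buffer for the teacher's hidden states (shape/dtype are illustrative).
    buf = torch.empty(1024, 4096, device="cuda", dtype=torch.bfloat16)

    if is_teacher:
        buf.normal_()                     # stand-in for the teacher forward pass
        work = dist.isend(buf, dst=peer)  # async send: teacher can move on to its next batch
    else:
        work = dist.irecv(buf, src=peer)  # async recv: student overlaps its own work
    work.wait()                           # synchronize only when the tensor is actually needed

    # A student rank would run its forward/backward and optimizer step on `buf` here.
    dist.destroy_process_group()


if __name__ == "__main__":
    mp.spawn(worker, nprocs=WORLD_SIZE)
```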

Some additional benefits include:

  • The HF trainer is huge in code size and sometimes causes model incompatibility with modelopt conversion (e.g., gpt-oss in Eagle3). Maintaining a minimal trainer ourselves may help reduce model-specific hacks like this.
  • No extra dependencies are needed; it is based purely on torch.

Some drawbacks include:

  • Extra code to maintain: roughly 300 lines for the trainer.
  • The speedup is sensitive to the placement setting and the relative speed of teacher and student; it could be slower than FSDP under certain settings.
  • Compatibility with NeMo-AutoModel has not been investigated yet.

Usage

To launch training using the new trainer:

python train.py --out_path <output_dir> --data_path <jsonl file> --model_path <base_model>

Testing

All tests were done on the Eagle3 online training workload, on an 8xH100 (NVLink) machine on the CoreWeave cluster:

Training Speed

We use the 4TP+4DDP setting with the new trainer:
[training speed chart attached to the PR]
Comments

  • With overlapped teacher and student steps, the speed of the new trainer is roughly min(teacher, student), i.e. it is bottlenecked by the slower side (see the toy model sketch after this list).
  • When there is a heavy imbalance, it is slower than FSDP, since the faster side's hardware is left idle.
  • Finer-grained parallelism (e.g., PP) could help mitigate this imbalance by allowing 5+3 or 6+2 placements; the TP used currently only supports TP2 or TP4.
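
A toy latency model of the first two bullets above (the step times are made up for illustration, not profiled numbers, and the "sequential" column is simply the no-overlap baseline on the same placement, not an FSDP measurement):

```python
# Toy latency model for overlapped vs. non-overlapped teacher/student steps.
def step_time_ms(teacher_ms: float, student_ms: float, overlapped: bool) -> float:
    # Overlapped: each iteration is bounded by the slower side (throughput ~ slower side).
    # Non-overlapped: the iteration pays for both phases back to back.
    return max(teacher_ms, student_ms) if overlapped else teacher_ms + student_ms


for name, (t, s) in {"balanced": (100.0, 100.0), "imbalanced": (170.0, 30.0)}.items():
    print(
        f"{name:10s} overlapped={step_time_ms(t, s, True):6.1f} ms  "
        f"sequential={step_time_ms(t, s, False):6.1f} ms"
    )
# balanced:   100 ms vs 200 ms -> overlap hides one side almost entirely.
# imbalanced: 170 ms vs 200 ms -> little gain from overlap, and the faster
#             side's GPUs sit idle most of the step, which is when FSDP can win.
```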

Memory Efficiency (max training length)

Llama-70B    1k    2k    4k    8k    12k
8FSDP        OK    OOM   OOM   OOM   OOM
4TP+4DDP     OK    OK    OK    OK    OOM

Qwen-32B     1k    2k    4k    8k    12k
8FSDP        OK    OK    OOM   OOM   OOM
4TP+4DDP     OK    OK    OK    OK    OOM

Comments:

  • The memory savings are significant, since student ranks no longer load the teacher model (see the sketch below).
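
A hypothetical illustration of that point (the model classes, the 4+4 split, and the draft-module shape are assumptions, not the PR's code): each rank builds only what its role needs, so student ranks keep the memory that would otherwise hold the teacher weights.

```python
# Hypothetical role-based loading: teacher ranks materialize the (frozen) base
# model; student ranks build only a small draft module and spend the freed
# memory on longer sequences / activations.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM


class TinyDraftHead(nn.Module):
    """Stand-in for the Eagle3 draft model (purely illustrative)."""

    def __init__(self, hidden: int = 4096, vocab: int = 32000):
        super().__init__()
        self.proj = nn.Linear(hidden, vocab, bias=False)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)


def build_model_for_rank(rank: int, world_size: int, base_model_path: str) -> nn.Module:
    is_teacher = rank < world_size // 2  # assumed 4 teacher + 4 student ranks
    if is_teacher:
        teacher = AutoModelForCausalLM.from_pretrained(
            base_model_path, torch_dtype=torch.bfloat16
        )
        teacher.requires_grad_(False)  # teacher is frozen; no optimizer state needed
        return teacher.cuda().eval()
    # Student ranks never touch the base checkpoint, which is where the
    # headroom for much longer max sequence lengths comes from.
    return TinyDraftHead().cuda()
```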

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@copy-pr-bot

copy-pr-bot bot commented Sep 22, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai bot commented Sep 22, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@h-guo18 h-guo18 force-pushed the hg/eagle3-crossattn branch 5 times, most recently from 47cddea to 94cbb2a on September 23, 2025 20:56
Base automatically changed from hg/eagle3-crossattn to main September 26, 2025 21:04
@h-guo18 h-guo18 force-pushed the haoguo/eagle-newtrainer branch from 6ec1c4b to b70b418 on September 26, 2025 21:15
@codecov

codecov bot commented Sep 26, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 73.37%. Comparing base (adcb1a1) to head (d3494c0).
⚠️ Report is 27 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #352      +/-   ##
==========================================
- Coverage   73.79%   73.37%   -0.42%     
==========================================
  Files         171      180       +9     
  Lines       17591    17937     +346     
==========================================
+ Hits        12981    13162     +181     
- Misses       4610     4775     +165     

☔ View full report in Codecov by Sentry.

@h-guo18 h-guo18 force-pushed the haoguo/eagle-newtrainer branch from ddf5d5d to 5ae4479 on October 2, 2025 23:06
@h-guo18 h-guo18 self-assigned this Oct 3, 2025
@copy-pr-bot

copy-pr-bot bot commented Oct 7, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@h-guo18 h-guo18 changed the title from "add new trainer" to "Experimental: Distillation Trainer with Separate Teacher&Student" on Oct 16, 2025
@h-guo18 h-guo18 changed the title from "Experimental: Distillation Trainer with Separate Teacher&Student" to "Experimental: Trainer with Separate Teacher&Student" on Oct 16, 2025
@h-guo18 h-guo18 force-pushed the haoguo/eagle-newtrainer branch from 438c616 to d3494c0 on October 17, 2025 00:48