
Conversation

@namgyu-youn (Contributor) commented Nov 1, 2025

Summary:
This PR is a smaller chunk of #2729. Instead of reimplementing a linear toy model in each test, we want to expose it as a shared API for developer convenience and consistency. The updated toy model takes explicit device and dtype arguments so both are clearly specified at construction time.
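
For context, here is a minimal sketch of what such a shared single-linear toy model could look like. This is an assumption about the API rather than torchao's actual implementation: the constructor signature mirrors the usage in the diff discussed below, and the `example_inputs` helper mirrors the reviewer's suggestion further down.

```python
import torch


class ToySingleLinearModel(torch.nn.Module):
    """Hypothetical sketch of a shared single-linear toy model for tests."""

    def __init__(self, in_features, out_features, device=None, dtype=None):
        super().__init__()
        # device/dtype are taken explicitly so tests build the model where
        # they need it, without a trailing `.to(device)` call.
        self.linear = torch.nn.Linear(
            in_features, out_features, device=device, dtype=dtype
        )

    def forward(self, x):
        return self.linear(x)

    def example_inputs(self, batch_size=1):
        # Return a tuple so it can be unpacked into forward(*inputs).
        weight = self.linear.weight
        return (
            torch.randn(
                batch_size,
                self.linear.in_features,
                device=weight.device,
                dtype=weight.dtype,
            ),
        )
```

Usage in a test would then look like `model = ToySingleLinearModel(k, n, device=device, dtype=high_precision_dtype)` followed by `input_data = model.example_inputs(batch_size=m)[0]`.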

Test Plan:
test/sparsity/test_fast_sparse_training.py

Future Plan:
This PR only updates the toy model in test/sparsity/test_fast_sparse_training.py. For a full migration, we also have to update the following tests in the same way as test_fast_sparse_training.py:

cc @jerryzh168

pytorch-bot bot commented Nov 1, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3275

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

meta-cla bot added the CLA Signed label on Nov 1, 2025
@jerryzh168 (Contributor) commented

Thanks for splitting the PR, this should be much easier to land.

```diff
 if model_type == "linear":
-    model = ToyLinearModel(k, n, high_precision_dtype).to(device)
+    model = ToySingleLinearModel(k, n, device=device, dtype=high_precision_dtype)
     input_data = torch.randn(m, k, device=device, dtype=high_precision_dtype)
```
@jerryzh168 (Contributor) commented Nov 4, 2025
Can this be changed to use `example_inputs` from the model now?

e.g. `input_data = model.example_inputs(batch_size=m)[0]`
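
A sketch of how the hunk above might read after adopting this suggestion, assuming the shared toy model exposes an `example_inputs` helper that returns a tuple of positional inputs (which the `[0]` indexing implies):

```python
if model_type == "linear":
    model = ToySingleLinearModel(k, n, device=device, dtype=high_precision_dtype)
    # Let the toy model generate an input matching its own device/dtype,
    # instead of calling torch.randn directly in the test.
    input_data = model.example_inputs(batch_size=m)[0]
```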

