Adding FC-DenseNet (Tiramisu) to models. #364

Closed
daavoo opened this issue Dec 7, 2017 · 4 comments

Comments

@daavoo
Contributor

daavoo commented Dec 7, 2017

I'm using a custom PyTorch implementation of FC-DenseNet (Tiramisu) at work. I'd like to know what steps are needed to add a pretrained instance of the model to this repo.

I read in #321 and #260 about some requirements, e.g. that the model should be trained using PyTorch and torchvision.

I have no problem with this, but I'm not sure what code should be included along with the pretrained weights:

  • Do I need to also include the training script (examples/imagenet does not apply here, I think)?
  • Should I reuse some of the blocks of the existing DenseNet implementation? (At work I'm using some custom blocks.)
  • Should I train the model using the same hyperparameters and dataset as the original paper?
  • Related to the previous question: should I add CamVid to Datasets?
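For readers unfamiliar with the model in question: FC-DenseNet extends DenseNet's densely connected blocks into an encoder/decoder for per-pixel segmentation, with skip connections between the down and up paths. The following is a minimal, illustrative sketch only; the layer counts, growth rate, and channel sizes here are placeholders, not the configuration from the paper or from any implementation referenced in this thread:

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Sequential):
    """BN -> ReLU -> 3x3 conv, producing `growth_rate` new feature maps."""
    def __init__(self, in_ch, growth_rate):
        super().__init__(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False),
        )

class DenseBlock(nn.Module):
    """Each layer sees the concatenation of the input and all previous outputs.
    Returns only the *new* feature maps, as FC-DenseNet does on the up path."""
    def __init__(self, in_ch, growth_rate, n_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_ch + i * growth_rate, growth_rate) for i in range(n_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features[1:], dim=1)

class TinyTiramisu(nn.Module):
    """Toy FC-DenseNet with one down transition, a bottleneck, and one up
    transition. Channel counts are worked out inline; all sizes illustrative."""
    def __init__(self, n_classes=11, growth_rate=4):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.down_block = DenseBlock(16, growth_rate, n_layers=2)        # -> 8 new maps
        self.down = nn.MaxPool2d(2)
        self.bottleneck = DenseBlock(16 + 8, growth_rate, n_layers=2)    # -> 8 new maps
        self.up = nn.ConvTranspose2d(8, 8, kernel_size=2, stride=2)
        self.up_block = DenseBlock(8 + 24, growth_rate, n_layers=2)      # -> 8 new maps
        self.head = nn.Conv2d(8, n_classes, kernel_size=1)

    def forward(self, x):
        x0 = self.stem(x)                                      # 16 channels
        skip = torch.cat([x0, self.down_block(x0)], dim=1)     # 16 + 8 = 24
        new = self.bottleneck(self.down(skip))                 # 8 new maps
        up = self.up(new)                                      # back to input resolution
        out = self.up_block(torch.cat([up, skip], dim=1))      # 8 + 24 in, 8 out
        return self.head(out)                                  # per-pixel class scores
```

A forward pass on an input of shape `(N, 3, H, W)` (with even `H`, `W`) yields per-pixel logits of shape `(N, n_classes, H, W)`.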
@ahundt

ahundt commented Dec 10, 2017

Sounds awesome!

Does your code use the efficient densenet memory model or the naive one?
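For context on the distinction being asked about: the naive implementation keeps every intermediate concatenated tensor alive for the backward pass, which makes memory grow quadratically with depth; the efficient variant recomputes those intermediates during backward instead. The original efficient implementation used shared memory buffers, but in recent PyTorch the same trade can be sketched with `torch.utils.checkpoint` (class names here are mine, not from any implementation in this thread):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedDenseLayer(nn.Module):
    """A dense layer that recomputes concat + BN + conv during backward,
    so the large concatenated input tensor is not stored for autograd.
    Caveat: BatchNorm running stats are updated again on recompute."""
    def __init__(self, in_ch, growth_rate):
        super().__init__()
        self.fn = nn.Sequential(
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=False),  # avoid in-place ops under checkpointing
            nn.Conv2d(in_ch, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, *features):
        def closure(*feats):
            return self.fn(torch.cat(feats, dim=1))
        if any(f.requires_grad for f in features):
            return checkpoint(closure, *features, use_reentrant=False)
        return closure(*features)  # no grad needed: plain forward is cheaper
```

The trade-off is one extra forward computation of the layer per backward pass in exchange for roughly linear (rather than quadratic) activation memory.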

I'm not on this project, but I can give typical general answers that others can confirm or correct as needed.

Do I need to also include the training script (examples/imagenet do not apply here, I think)?

Yes, since a single-label classification training script can't be applied to image segmentation problems.

Should I reuse some of the blocks of the existing Densenet implementation (At work I'm using some custom blocks)

Ideally, reuse as much as possible, particularly if a small but clear and easy-to-understand API change can retain backwards compatibility while enabling the new functionality this model needs.

Should I train the model using the same hyperparameters and dataset as the original paper?

If you have something that produces better results you can use that, but be sure to evaluate and provide the performance metrics on a standardized evaluation dataset and the code in your PR should enable others to reproduce the model.
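On standardized evaluation: for semantic segmentation, the metric usually reported is per-class intersection-over-union and its mean (mIoU). A minimal NumPy sketch, where the class count and the ignore-label value are placeholders rather than CamVid-specific constants:

```python
import numpy as np

def mean_iou(pred, target, n_classes, ignore_index=255):
    """Mean intersection-over-union from two integer label maps."""
    valid = target != ignore_index          # drop ignored/void pixels
    pred, target = pred[valid], target[valid]
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                       # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with `pred = [[0, 0], [1, 1]]` and `target = [[0, 0], [1, 0]]`, class 0 has IoU 2/3 and class 1 has IoU 1/2, giving an mIoU of 7/12.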

Related with the previous question, should I add CAMVID to Datasets?

I don't know their policy, but if you're adding a segmentation model there should be at least one segmentation dataset.
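A segmentation dataset along the lines discussed here could follow torchvision's `Dataset` pattern. A minimal sketch; the directory layout, file naming, and label encoding below are assumptions for illustration, not the real CamVid distribution format:

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class CamVid(Dataset):
    """Minimal CamVid-style segmentation dataset.

    Assumes a layout like (hypothetical, not the official one):
        root/images/<name>.png   RGB frames
        root/labels/<name>.png   per-pixel class-index masks
    """
    def __init__(self, root, transforms=None):
        self.image_dir = os.path.join(root, "images")
        self.label_dir = os.path.join(root, "labels")
        self.names = sorted(os.listdir(self.image_dir))
        self.transforms = transforms  # joint transform: (image, target) -> (image, target)

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = Image.open(os.path.join(self.image_dir, name)).convert("RGB")
        target = Image.open(os.path.join(self.label_dir, name))
        if self.transforms is not None:
            image, target = self.transforms(image, target)
        return image, target
```

Note the joint `transforms` callable: unlike classification, random crops and flips must be applied identically to the image and its mask, which is why a single-image transform pipeline doesn't fit segmentation datasets.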

@alykhantejani
Contributor

Sorry for the delayed response on this. I think providing an examples PR for this would be good, as well as a Dataset for CamVid. However, I'll hand this over to @fmassa, who probably has more ideas around segmentation tasks/datasets and models.

@dongzhuoyao

Any progress on this?

@fmassa
Member

fmassa commented Mar 5, 2018

We currently only provide pre-trained models for image classification in PyTorch (which are trained using examples/imagenet).
It would be great to extend that for other vision tasks as well.

For reproducibility, it would be necessary to have a reference implementation that is open source and used to train the models that we add.

I'm still figuring out if we want to add those task-specific models to torchvision or to dedicated repos.
It depends on where the reference training implementation will live, but we will want to push most of the generic and reusable bits to torchvision.

@daavoo daavoo closed this as completed Oct 25, 2021