
Conversation

@penelopeysm (Member) commented Nov 3, 2025

Notes in comments.

Further refactoring will happen, but in separate PRs.

Closes #2635.

@penelopeysm penelopeysm changed the base branch from main to breaking November 3, 2025 10:40
github-actions bot (Contributor) commented Nov 3, 2025

Turing.jl documentation for PR #2708 is available at:
https://TuringLang.github.io/Turing.jl/previews/PR2708/

@penelopeysm (Member Author) commented:

I checked and the base optimisation tests are a superset of these tests, so no need to incorporate any of these deleted tests into the main suite.

Comment on lines -89 to -110
function OptimLogDensity(
model::DynamicPPL.Model,
getlogdensity::Function,
vi::DynamicPPL.AbstractVarInfo;
adtype::ADTypes.AbstractADType=Turing.DEFAULT_ADTYPE,
)
ldf = DynamicPPL.LogDensityFunction(model, getlogdensity, vi; adtype=adtype)
return new{typeof(ldf)}(ldf)
end
function OptimLogDensity(
model::DynamicPPL.Model,
getlogdensity::Function;
adtype::ADTypes.AbstractADType=Turing.DEFAULT_ADTYPE,
)
# No varinfo
return OptimLogDensity(
model,
getlogdensity,
DynamicPPL.ldf_default_varinfo(model, getlogdensity);
adtype=adtype,
)
end
@penelopeysm (Member Author) commented:

This is just duplicating LogDensityFunction code. I thought it simpler to just construct the LDF and then wrap it.
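
For context, a minimal sketch of what "construct the LDF and then wrap it" means here (illustrative only, not the exact replacement code in the PR):

# Build the LogDensityFunction up front instead of re-implementing its
# construction logic inside OptimLogDensity, then wrap it.
ldf = DynamicPPL.LogDensityFunction(model, getlogdensity, vi; adtype=adtype)
opt_ld = OptimLogDensity(ldf)  # assumes OptimLogDensity is a single-field wrapper around the LDF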

codecov bot commented Nov 3, 2025

Codecov Report

❌ Patch coverage is 0% with 2 lines in your changes missing coverage. Please review.
✅ Project coverage is 56.79%. Comparing base (be007f3) to head (bfa1de2).

Files with missing lines | Patch % | Lines
src/optimisation/Optimisation.jl | 0.00% | 2 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (be007f3) and HEAD (bfa1de2).

HEAD has 13 fewer uploads than BASE: BASE (be007f3) has 24 uploads, HEAD (bfa1de2) has 11.
Additional details and impacted files
@@              Coverage Diff              @@
##           breaking    #2708       +/-   ##
=============================================
- Coverage     86.45%   56.79%   -29.67%     
=============================================
  Files            21       20        -1     
  Lines          1418     1340       -78     
=============================================
- Hits           1226      761      -465     
- Misses          192      579      +387     



## Optimisation interface

The Optim.jl interface has been removed (so you cannot call `Optim.optimize` directly on Turing models).
@yebai (Member) commented:

I suggest that we define our own optimize function on top of DynamicPPL.LogDensityFunction to avoid the breaking change. The optimize function provides a natural alternative to AbstractMCMC.sample.

@penelopeysm (Member Author) commented:

I don't really agree:

  1. What is the purpose of avoiding the breaking change? The functionality is already present in maximum_likelihood(model) or maximum_a_posteriori(model) (see the sketch after this list), so this would just be duplicate functionality.

  2. A Turing.optimize function would still be a breaking change because this is not the same as Optim.optimize.
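
For reference, a minimal usage sketch of the existing replacements (the model definition here is illustrative, not taken from the PR):

using Turing

@model function demo_model(x)
    μ ~ Normal(0, 1)
    x ~ Normal(μ, 1)
end

mle_result = maximum_likelihood(demo_model(1.5))   # mode estimation without the prior
map_result = maximum_a_posteriori(demo_model(1.5)) # mode estimation including the prior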

@yebai (Member) commented Nov 4, 2025:

Thanks, Penny, for clarifying.

My main point is not the breaking change; rather, optimize mirrors the Turing.sample API well and provides a generic interface for both optimisation and VI algorithms. Sorry for the confusion. I do agree, though, that we ought to remove the Optim interface and define our own optimize method.

@penelopeysm (Member Author) commented:

Sure, in that case we could start by renaming estimate_mode to optimize and exporting it. That would give us, out of the box:

optimize(model, MLE(); kwargs...)
optimize(model, MAP(); kwargs...)

and then the VI arguments could be bundled into a struct that would be passed as the second argument.
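
A hypothetical sketch of what that bundling could look like (the VIOptions name and its fields are purely illustrative, not part of this PR):

# Hypothetical container for VI-specific settings:
struct VIOptions{A}
    algorithm::A    # e.g. an ADVI-style algorithm
    max_iters::Int
end

optimize(model, MLE(); kwargs...)                  # maximum likelihood
optimize(model, MAP(); kwargs...)                  # maximum a posteriori
optimize(model, VIOptions(alg, 1000); kwargs...)   # VI settings bundled as the second argument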
