9 changes: 9 additions & 0 deletions HISTORY.md
@@ -18,6 +18,15 @@ As long as the above functions are defined correctly, Turing will be able to use

The `Turing.Inference.isgibbscomponent(::MySampler)` interface function still exists, but in this version the default has been changed to `true`, so you should not need to overload this.
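To illustrate (a minimal sketch, reusing the placeholder sampler type `MySampler` from above), overriding the default would only be needed for samplers that should not be used inside Gibbs:

```julia
# Only needed if `MySampler` should NOT be usable as a Gibbs component;
# the default is now `true`, so most samplers require no method at all.
Turing.Inference.isgibbscomponent(::MySampler) = false
```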

## Optimisation interface

The Optim.jl interface has been removed (so you cannot call `Optim.optimize` directly on Turing models).

Member:

I suggest that we define our own `optimize` function on top of `DynamicPPL.LogDensityFunction` to avoid the breaking change. The `optimize` function provides a natural alternative to `AbstractMCMC.sample`.

Member Author:

I don't really agree:

1. What is the purpose of avoiding the breaking change? The functionality is already present in `maximum_likelihood(model)` and `maximum_a_posteriori(model)`, so this would just be duplicate functionality.

2. A `Turing.optimize` function would still be a breaking change, because it is not the same as `Optim.optimize`.

@yebai (Member), Nov 4, 2025:

Thanks, Penny, for clarifying.

My main point is not the breaking change; rather, `optimize` mirrors the `Turing.sample` API well and provides a generic interface for both optimisation and VI algorithms. Sorry for the confusion. I do agree that we ought to remove the Optim interface and define our own `optimize` method.

Member Author:

Sure, in that case we could start by renaming `estimate_mode` to `optimize` and exporting it. That would give us, out of the box:

`optimize(model, MLE(); kwargs...)`
`optimize(model, MAP(); kwargs...)`

and then the VI arguments could be bundled into a struct that would be passed as the second argument.

You can use the `maximum_likelihood` or `maximum_a_posteriori` functions with an Optim.jl solver instead (via Optimization.jl: see https://docs.sciml.ai/Optimization/stable/optimization_packages/optim/ for documentation of the available solvers).
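For example, a migration could look like the following sketch (illustrative only: the `gdemo` model and its data are made up here, and it assumes OptimizationOptimJL.jl is installed to provide the Optim.jl solvers):

```julia
using Turing
using OptimizationOptimJL  # provides Optim.jl solvers such as LBFGS and NelderMead

@model function gdemo(x, y)
    s² ~ InverseGamma(2, 3)
    m ~ Normal(0, sqrt(s²))
    x ~ Normal(m, sqrt(s²))
    y ~ Normal(m, sqrt(s²))
end

model = gdemo(1.5, 2.0)

# Instead of calling Optim.optimize on the model, pass the solver to Turing's functions:
mle_estimate = maximum_likelihood(model, LBFGS())
map_estimate = maximum_a_posteriori(model, NelderMead())
```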

## Internal changes

The constructors of `OptimLogDensity` have been replaced with a single constructor, `OptimLogDensity(::DynamicPPL.LogDensityFunction)`.
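For downstream code that constructed `OptimLogDensity` directly, the new pattern would presumably be along these lines (a sketch only; it assumes the one-argument `DynamicPPL.LogDensityFunction(model)` constructor with default settings is appropriate, that `OptimLogDensity` still lives in `Turing.Optimisation`, and a `model` such as the one defined above):

```julia
using Turing, DynamicPPL

# Wrap a model's log density via the single remaining OptimLogDensity constructor.
ldf = DynamicPPL.LogDensityFunction(model)
old = Turing.Optimisation.OptimLogDensity(ldf)
```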

# 0.41.1

The `ModeResult` struct returned by `maximum_a_posteriori` and `maximum_likelihood` can now be wrapped in `InitFromParams()`.
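As a sketch of what that enables (assuming a previously defined `model`, that `InitFromParams` is available as defined in DynamicPPL, and that the `initial_params` keyword of `sample` accepts initialisation strategies, as in recent Turing releases):

```julia
# Use the MAP estimate as the starting point for MCMC by wrapping the
# returned ModeResult in InitFromParams.
map_estimate = maximum_a_posteriori(model)
chain = sample(model, NUTS(), 1_000; initial_params=InitFromParams(map_estimate))
```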
7 changes: 0 additions & 7 deletions Project.toml
@@ -41,11 +41,9 @@ StatsFuns = "4c63d2b9-4356-54db-8cca-17b64c39e42c"

[weakdeps]
DynamicHMC = "bbc10e6e-7c05-544b-b16e-64fede858acb"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"

[extensions]
TuringDynamicHMCExt = "DynamicHMC"
TuringOptimExt = ["Optim", "AbstractPPL"]

[compat]
ADTypes = "1.9"
@@ -72,7 +70,6 @@ LinearAlgebra = "1"
LogDensityProblems = "2"
MCMCChains = "5, 6, 7"
NamedArrays = "0.9, 0.10"
Optim = "1"
Optimization = "3, 4, 5"
OptimizationOptimJL = "0.1, 0.2, 0.3, 0.4"
OrderedCollections = "1"
@@ -86,7 +83,3 @@ StatsAPI = "1.6"
StatsBase = "0.32, 0.33, 0.34"
StatsFuns = "0.8, 0.9, 1"
julia = "1.10.8"

[extras]
DynamicHMC = "bbc10e6e-7c05-544b-b16e-64fede858acb"
Optim = "429524aa-4258-5aef-a3af-852621145aeb"
198 changes: 0 additions & 198 deletions ext/TuringOptimExt.jl

This file was deleted.
