feat(conditions): improved error handling #1365
Merged
Conversation
Summary:

* Depends on mlr-org/mlr3misc#141, which makes `encapsulate()` also return the condition objects
* Introduces custom error condition constructors (#1283)
* Allows specifying for which conditions a fallback learner should be triggered (#1335)

TODO:

* [ ] add condition objects for warnings (`varying_predict_types`)
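For context, `warning_config()` and `error_mlr3()` used in the reprex below are such condition constructors. As a minimal sketch (not necessarily the actual implementation in this PR), constructors like these can be built on top of base R's `errorCondition()`/`warningCondition()`; the class names are taken from the reprex output:

# Sketch only: classed condition constructors via base R. The real
# constructors in this PR may take additional arguments or metadata.
error_mlr3 = function(msg) {
  stop(errorCondition(msg, class = "Mlr3Error"))
}
warning_config = function(msg) {
  warning(warningCondition(msg, class = "Mlr3WarningConfig"))
}

Because the signalled conditions carry a dedicated class, downstream code such as the fallback logic can tell them apart from ordinary errors and warnings.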
I think by default we should not trigger the fallback for configuration errors. Configuration errors are 100% the fault of the user.
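To illustrate that behaviour with plain base R (the class name `Mlr3ErrorConfig` is an assumption, cf. `mlr3ErrorConfig` in the TODOs at the end; the actual dispatch lives in the encapsulation code): a configuration error is re-thrown to the user, while other errors fall through to the fallback learner.

# Sketch: the more specific handler is listed first, so a configuration
# error is surfaced instead of silently triggering the fallback.
handle_train_error = function(expr, fallback = function() "fallback model") {
  tryCatch(
    expr,
    Mlr3ErrorConfig = function(cond) stop(cond),  # user's fault -> re-throw
    error = function(cond) fallback()             # anything else -> fallback
  )
}

handle_train_error(stop("segfault in underlying C code"))
#> [1] "fallback model"

# the configuration error propagates to the user instead of being swallowed
try(handle_train_error(stop(errorCondition("x must be in [0, 1]", class = "Mlr3ErrorConfig"))))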
Some example output showing how this looks in the logger and in the learner state:

library(mlr3tuning)
#> Loading required package: mlr3
#> Loading required package: paradox
library(mlr3misc)
learner = lrn("classif.debug", x = to_tune(), sleep_train = function() {
warning_config("hello")
0
})
learner$encapsulate("evaluate", lrn("classif.debug"))
tune(
learner = learner,
task = tsk("iris"),
resampling = rsmp("holdout"),
measure = msr("classif.ce"),
tuner = tnr("random_search"),
term_evals = 1
)
#> INFO [13:22:29.452] [bbotk] Starting to optimize 1 parameter(s) with '<OptimizerBatchRandomSearch>' and '<TerminatorEvals> [n_evals=1, k=0]'
#> INFO [13:22:29.490] [bbotk] Evaluating 1 configuration(s)
#> INFO [13:22:29.497] [mlr3] Running benchmark with 1 resampling iterations
#> INFO [13:22:29.568] [mlr3] Applying learner 'classif.debug' on task 'iris' (iter 1/1)
#> WARN [13:22:29.584] [mlr3] train: ✖ hello
#> → Class: Mlr3WarningConfig
#> INFO [13:22:29.586] [mlr3] Calling train method of fallback 'classif.debug' on task 'iris' with 100 observations {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>}
#> INFO [13:22:29.598] [mlr3] Finished benchmark
#> INFO [13:22:29.616] [bbotk] Result of batch 1:
#> INFO [13:22:29.618] [bbotk] x classif.ce warnings errors runtime_learners
#> INFO [13:22:29.618] [bbotk] 0.03519635 0.72 1 0 0.015
#> INFO [13:22:29.618] [bbotk] uhash
#> INFO [13:22:29.618] [bbotk] a332e8f9-8a63-4ccd-9bd2-6504a4abcbc5
#> INFO [13:22:29.629] [bbotk] Finished optimizing after 1 evaluation(s)
#> INFO [13:22:29.630] [bbotk] Result:
#> INFO [13:22:29.631] [bbotk] x learner_param_vals x_domain classif.ce
#> INFO [13:22:29.631] [bbotk] <num> <list> <list> <num>
#> INFO [13:22:29.631] [bbotk] 0.03519635 <list[2]> <list[1]> 0.72
#>
#> ── <TuningInstanceBatchSingleCrit> ─────────────────────────────────────────────
#> • State: Optimized
#> • Objective: <ObjectiveTuningBatch>
#> • Search Space:
#> id class lower upper nlevels
#> <char> <char> <num> <num> <num>
#> 1: x ParamDbl 0 1 Inf
#> • Terminator: <TerminatorEvals> (n_evals=1, k=0)
#> • Result:
#> x classif.ce
#> <num> <num>
#> 1: 0.03519635 0.72
#> • Archive:
#> classif.ce x
#> <num> <num>
#> 1: 0.7 0.04
learner$configure(
x = 1,
sleep_train = function() {
error_mlr3("hello")
}
)
learner$train(tsk("iris"))
#> ERROR [13:22:29.697] [mlr3] train: ✖ hello
#> → Class: Mlr3Error
#> INFO [13:22:29.699] [mlr3] Learner 'classif.debug' on task 'iris' failed to train a model {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>, messages: `✖ hello
#> → Class: Mlr3Error`}
#> INFO [13:22:29.700] [mlr3] Calling train method of fallback 'classif.debug' on task 'iris' with 150 observations {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>}
learner
#>
#> ── <LearnerClassifDebug> (classif.debug): Debug Learner for Classification ─────
#> • Model: -
#> • Parameters: sleep_train=<function>, x=1
#> • Validate: NULL
#> • Packages: mlr3
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, character, factor, and ordered
#> • Encapsulation: evaluate (fallback: LearnerClassifDebug)
#> • Properties: hotstart_forward, internal_tuning, marshal, missings, multiclass,
#> twoclass, validation, and weights
#> • Other settings: use_weights = 'use'
#> ✖ Errors: ✖ hello → Class: Mlr3Error

Created on 2025-08-12 with reprex v2.1.1
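The classed conditions shown in the log are ordinary R condition objects, so they can also be caught and inspected programmatically. A small base-R illustration, independent of mlr3, with the class name taken from the output above:

cond = tryCatch(stop(errorCondition("hello", class = "Mlr3Error")), condition = identity)
class(cond)
#> [1] "Mlr3Error" "error"     "condition"
conditionMessage(cond)
#> [1] "hello"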
Summary:

* Depends on mlr-org/mlr3misc#141, which makes `encapsulate()` also return the condition objects. This could in turn be simplified by "Question: Why is `miraiError` not a condition object?" (r-lib/mirai#400) and the `mirai` encapsulation.

Example:

Below, we say that we don't want to train a fallback learner in case of a timeout, and hence actually throw an R condition.

Created on 2025-08-11 with reprex v2.1.1

TODOs:

* [ ] add condition objects for warnings (`varying_predict_types`)
* [ ] `mirai` (only possible once the `mirai` encapsulation is merged)
* [ ] rename `should_catch` of `learner$encapsulate()` to `when` (`mlr3ErrorConfig`; see the sketch after this list) --> needs to be done in "Use common baseclass from mlr3" (#1370) after this PR is merged
* [ ] `private$.when` needs to influence the hash
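To make the `should_catch`/`when` idea concrete, here is a hedged sketch of what such a predicate could look like. The argument name, the class names (`Mlr3ErrorConfig`, `timeoutError`) and the exact semantics are assumptions, not the merged API:

# Sketch of a `when` predicate: it receives the signalled condition and
# returns TRUE if the fallback learner should be triggered.
when = function(cond) {
  # configuration errors and timeouts are not caught; they propagate as
  # real R conditions instead of silently triggering the fallback
  !inherits(cond, c("Mlr3ErrorConfig", "timeoutError"))
}

when(simpleError("model did not converge"))
#> [1] TRUE
when(errorCondition("reached elapsed time limit", class = "timeoutError"))
#> [1] FALSE

Whatever final shape this predicate takes, it (or the stored `private$.when`) would then also have to feed into the learner's hash, as noted in the last TODO.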