Conversation

@sebffischer (Member) commented Aug 11, 2025

Summary:

  • Depends on mlr-org/mlr3misc#141, which makes `encapsulate()` also return the condition objects
  • Introduces custom error/error condition constructors (#1283)
  • Allows specifying for which conditions a fallback learner should be triggered (#1335)

Example

Below, we specify that a fallback learner should not be trained in case of a timeout, so the timeout is actually thrown as an R condition.

  library(mlr3)

  # a debug learner that sleeps forever during training, with a 0.01s timeout
  l = lrn("classif.debug",
    timeout = c(train = 0.01),
    sleep_train = function() while (TRUE) NULL
  )

  # the predicate decides whether the fallback is triggered:
  # here, any condition except a timeout triggers the fallback
  l$encapsulate(
    "callr",
    lrn("classif.featureless"),
    function(x) {
      !inherits(x, "mlr3ErrorTimeout")
    }
  )
  l$train(tsk("iris"))
#> Error: reached elapsed time limit

Created on 2025-08-11 with reprex v2.1.1
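
Because the timeout is signalled as a classed condition instead of being swallowed by a fallback, a caller could handle it explicitly. A minimal sketch, assuming the re-thrown condition carries the mlr3ErrorTimeout class that the predicate above checks for:

  tryCatch(
    l$train(tsk("iris")),
    mlr3ErrorTimeout = function(cnd) {
      # react to the timeout instead of letting it abort the script
      message("training timed out: ", conditionMessage(cnd))
    }
  )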

TODOs:

  • Add condition objects for warnings (varying_predict_types)
  • Add tests for mirai (only possible once mirai encapsulation is merged)
  • Document the error classes (see the sketch after this list)
  • Rename the should_catch argument of learner$encapsulate() to when
  • Don't catch configuration errors (mlr3ErrorConfig)
  • private$.when needs to influence the hash --> needs to be done in "Use common baseclass from mlr3" (#1370) after this PR is merged
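
Classed condition constructors of this kind can be built on base R's errorCondition(); a minimal hypothetical sketch for illustration (error_timeout and the exact class vector are assumptions, not the PR's actual implementation):

  # sketch of a classed error constructor in the spirit of error_mlr3():
  # the extra classes let predicates filter conditions via inherits()
  error_mlr3 = function(msg, class = character()) {
    stop(errorCondition(msg, class = c(class, "Mlr3Error")))
  }

  # e.g. a timeout error detectable as "mlr3ErrorTimeout"
  error_timeout = function(msg) error_mlr3(msg, class = "mlr3ErrorTimeout")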

@sebffischer (Member Author) commented:

I think by default we should not trigger the fallback for configuration errors: configuration errors are 100% the fault of the user.
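
With the when predicate, that default could be expressed along these lines (a sketch using the mlr3ErrorConfig class from the TODO list, not necessarily the final default):

learner = lrn("classif.debug")
learner$encapsulate(
  "evaluate",
  lrn("classif.featureless"),
  # trigger the fallback for everything except configuration errors
  function(x) !inherits(x, "mlr3ErrorConfig")
)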

@sebffischer (Member Author) commented:

Some example outputs showing how this looks in the logger and in the learner state:

library(mlr3tuning)
#> Loading required package: mlr3
#> Loading required package: paradox
library(mlr3misc)

# a learner whose training signals a classed configuration warning
learner = lrn("classif.debug", x = to_tune(), sleep_train = function() {
  warning_config("hello")
  0
})

learner$encapsulate("evaluate", lrn("classif.debug"))

tune(
  learner = learner,
  task = tsk("iris"),
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  tuner = tnr("random_search"),
  term_evals = 1
)
#> INFO  [13:22:29.452] [bbotk] Starting to optimize 1 parameter(s) with '<OptimizerBatchRandomSearch>' and '<TerminatorEvals> [n_evals=1, k=0]'
#> INFO  [13:22:29.490] [bbotk] Evaluating 1 configuration(s)
#> INFO  [13:22:29.497] [mlr3] Running benchmark with 1 resampling iterations
#> INFO  [13:22:29.568] [mlr3] Applying learner 'classif.debug' on task 'iris' (iter 1/1)
#> WARN  [13:22:29.584] [mlr3] train: ✖ hello
#> → Class: Mlr3WarningConfig
#> INFO  [13:22:29.586] [mlr3] Calling train method of fallback 'classif.debug' on task 'iris' with 100 observations {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>}
#> INFO  [13:22:29.598] [mlr3] Finished benchmark
#> INFO  [13:22:29.616] [bbotk] Result of batch 1:
#> INFO  [13:22:29.618] [bbotk]           x classif.ce warnings errors runtime_learners
#> INFO  [13:22:29.618] [bbotk]  0.03519635       0.72        1      0            0.015
#> INFO  [13:22:29.618] [bbotk]                                 uhash
#> INFO  [13:22:29.618] [bbotk]  a332e8f9-8a63-4ccd-9bd2-6504a4abcbc5
#> INFO  [13:22:29.629] [bbotk] Finished optimizing after 1 evaluation(s)
#> INFO  [13:22:29.630] [bbotk] Result:
#> INFO  [13:22:29.631] [bbotk]           x learner_param_vals  x_domain classif.ce
#> INFO  [13:22:29.631] [bbotk]       <num>             <list>    <list>      <num>
#> INFO  [13:22:29.631] [bbotk]  0.03519635          <list[2]> <list[1]>       0.72
#> 
#> ── <TuningInstanceBatchSingleCrit> ─────────────────────────────────────────────
#> • State: Optimized
#> • Objective: <ObjectiveTuningBatch>
#> • Search Space:
#>        id    class lower upper nlevels
#>    <char>   <char> <num> <num>   <num>
#> 1:      x ParamDbl     0     1     Inf
#> • Terminator: <TerminatorEvals> (n_evals=1, k=0)
#> • Result:
#>             x classif.ce
#>         <num>      <num>
#> 1: 0.03519635       0.72
#> • Archive:
#>    classif.ce     x
#>         <num> <num>
#> 1:        0.7  0.04

# reconfigure the learner so that training signals a classed error instead
learner$configure(
  x = 1,
  sleep_train = function() {
    error_mlr3("hello")
  }
)
learner$train(tsk("iris"))
#> ERROR [13:22:29.697] [mlr3] train: ✖ hello
#> → Class: Mlr3Error
#> INFO  [13:22:29.699] [mlr3] Learner 'classif.debug' on task 'iris' failed to train a model {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>, messages: `✖ hello
#> → Class: Mlr3Error`}
#> INFO  [13:22:29.700] [mlr3] Calling train method of fallback 'classif.debug' on task 'iris' with 150 observations {learner: <LearnerClassifDebug/LearnerClassif/Learner/R6>}
learner
#> 
#> ── <LearnerClassifDebug> (classif.debug): Debug Learner for Classification ─────
#> • Model: -
#> • Parameters: sleep_train=<function>, x=1
#> • Validate: NULL
#> • Packages: mlr3
#> • Predict Types: [response] and prob
#> • Feature Types: logical, integer, numeric, character, factor, and ordered
#> • Encapsulation: evaluate (fallback: LearnerClassifDebug)
#> • Properties: hotstart_forward, internal_tuning, marshal, missings, multiclass,
#> twoclass, validation, and weights
#> • Other settings: use_weights = 'use'
#> ✖ Errors: ✖ hello → Class: Mlr3Error

Created on 2025-08-12 with reprex v2.1.1
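
The captured conditions also end up in the learner state and can be inspected programmatically after training via mlr3's Learner fields:

learner$log      # data.table of logged conditions (stage, class, msg)
learner$errors   # character vector of error messages
learner$warnings # character vector of warning messages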

@sebffischer merged commit a3356c1 into main Aug 16, 2025
6 checks passed
@sebffischer deleted the feat-error-handling branch August 16, 2025 08:43