Status: Open
Labels: enhancement (New feature or request)
Description
As it stands, we don't make it easy to evaluate the performance of a regressor against test data. This is a shame, because it makes it harder to judge how well a network is going to perform in practice.
I propose a two-part remedy:
- A way to split a dataset into (shuffled) training and testing sets. sk-learn does this with a function (see the first sketch below). We could either add a message to the dataset object or provide a dedicated object.
- A way to retrieve the MSE for a supervised prediction. This is a pain to do in the CCE. Suggest adding two methods to the regressor: `test <dataset inputs_in> <dataset outputs_in> <dataset loss_out>` and `testPoint <buffer input_in> <buffer output_in> -> double`. These take two inputs (i.e. training pairs) and report the loss (see the second sketch below). (Although, should the batch version just return a double as well, viz. the mean loss across the whole set? Maybe that makes more sense.)
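For reference, here is a minimal sketch (in Python with scikit-learn, since that's the model cited above) of the split behaviour the proposed message or object would mirror. The dataset-side name and argument list are not decided, so everything here is illustrative only:

```python
# Illustrative only: a shuffled train/test split as sk-learn does it.
# The eventual dataset message / dedicated object would need to offer
# something equivalent (split ratio + shuffle, ideally with a seed).
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 4)   # stand-in for dataset inputs
y = np.random.rand(100, 2)   # stand-in for dataset outputs

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0
)
```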
This would enable a more principled workflow for trickier examples, where we can monitor the test loss alongside the training loss.
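To make the intended semantics of `test` / `testPoint` concrete, here is a hedged sketch in plain numpy. The method names and the "mean loss as a single double" behaviour are proposals from this issue, not an existing API; `predict` stands in for whatever the regressor does when it runs a point through the network:

```python
import numpy as np

def test_point(predict, x: np.ndarray, y: np.ndarray) -> float:
    # testPoint <buffer input_in> <buffer output_in> -> double:
    # run one input through the network and report the MSE against its target.
    return float(np.mean((predict(x) - y) ** 2))

def test(predict, X: np.ndarray, Y: np.ndarray) -> float:
    # test <dataset inputs_in> <dataset outputs_in>, in the "just return a
    # double" variant: mean loss across the whole set.
    return float(np.mean([test_point(predict, x, y) for x, y in zip(X, Y)]))

# Usage with the split above, where `net` is any callable mapping an input
# vector to a predicted output vector:
#   train_loss = test(net, X_train, y_train)
#   test_loss  = test(net, X_test, y_test)
```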