Hello,

I'm looking for guidance on monitoring validation loss when using the sktime CNN classifier. I am currently using the CNN for multivariate classification. Specifically, I'm trying to find a way to fit the model and track the loss on validation data across epochs. I know I can use Keras EarlyStopping, but since sktime's fit does not expose validation data, it can only monitor training loss. Is there a way to obtain the learning curve for validation loss, or to keep the model from overfitting to the training data? The current fit method does not take validation data as a parameter.

Best,
Ece

Replies: 1 comment 3 replies
Hi, I'm not sure whether you're looking for an answer here in terms of sktime or Keras. In Keras, you just need to do something like:

```python
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

early_stopper = EarlyStopping(monitor='val_loss', mode='min', verbose=True, patience=8, restore_best_weights=True)
reduce_lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.5, min_lr=1e-8, patience=5, verbose=True)
callbacks = [reduce_lr_callback, early_stopper]
# Training arguments are omitted here; note 'val_loss' only exists
# when fit() also receives validation data.
history = autoencoder.fit(callbacks=callbacks)
```

Here I'm invoking EarlyStopping and setting it to monitor 'val_loss': training stops once validation loss has not improved for `patience` epochs, and `restore_best_weights=True` rolls the model back to its best epoch. ReduceLROnPlateau additionally halves the learning rate whenever validation loss plateaus. (Taken from a simple demo here: https://github.com/BlankAdventure/snippets/blob/main/Python/autoenc_denoise.py)
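To make that pattern concrete end to end, here is a minimal self-contained sketch; the toy model and data below are placeholders for illustration, not the linked demo's code. It shows where the validation data goes and how to read the val_loss learning curve back from the History object:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

# Toy data standing in for real train/validation splits
rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(800, 32)), rng.normal(size=(800, 1))
x_val, y_val = rng.normal(size=(200, 32)), rng.normal(size=(200, 1))

model = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, min_lr=1e-8, patience=5, verbose=True),
    EarlyStopping(monitor="val_loss", mode="min", patience=8, restore_best_weights=True, verbose=True),
]

# Passing validation_data is what makes 'val_loss' exist for the callbacks
history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                    epochs=100, callbacks=callbacks, verbose=0)

# One value per completed epoch: the train and validation learning curves
print(history.history["loss"])
print(history.history["val_loss"])
```

On random data early stopping will fire quickly since val_loss cannot keep improving, but the mechanics are identical on real data: validation_data in, val_loss curve out.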
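For the sktime side of the question specifically: CNNClassifier.fit(X, y) never passes validation data to the underlying Keras fit(), so 'val_loss' is never defined during its training and callbacks there can only monitor the training 'loss'. One workaround is to hold out a validation set yourself and score it after fitting, reading the training-loss curve from the Keras History the estimator keeps. The sketch below assumes a sktime version whose CNNClassifier accepts a `callbacks` parameter and exposes the History as `clf.history` after fit; both are assumptions to check against your installed version's API reference.

```python
import numpy as np
from sktime.classification.deep_learning.cnn import CNNClassifier
from tensorflow.keras.callbacks import EarlyStopping

# Toy multivariate series: (n_instances, n_channels, n_timepoints)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3, 100))
y = np.tile([0, 1], 30)

# Hold out a validation split manually, since fit(X, y) takes no validation data
X_train, X_val, y_train, y_val = X[:40], X[40:], y[:40], y[40:]

# ASSUMPTION: this sktime version's CNNClassifier accepts `callbacks`.
# Without validation data inside fit, monitor the training 'loss' instead.
clf = CNNClassifier(
    n_epochs=50,
    callbacks=[EarlyStopping(monitor="loss", patience=10, restore_best_weights=True)],
)
clf.fit(X_train, y_train)

# ASSUMPTION: the fitted estimator stores the Keras History as `clf.history`.
print("training loss per epoch:", clf.history.history["loss"])

# Score the held-out set after training as a proxy for generalisation
print("held-out accuracy:", clf.score(X_val, y_val))
```

If your version exposes neither callbacks nor history, the remaining option is to build the same architecture directly in Keras, as in the reply above, so you control fit() and its validation_data yourself.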