From daf9918bb5be8b54ce584632cc6b3ca5bc3587b9 Mon Sep 17 00:00:00 2001 From: Briarion Date: Wed, 25 Mar 2026 15:29:15 +0300 Subject: [PATCH] docs: add early_stopping_patience to parameter table Document the new early_stopping_patience trainer_kwargs parameter in the FreqAI parameter table, including description, datatype, default value, and usage notes. --- docs/freqai-parameter-table.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/freqai-parameter-table.md b/docs/freqai-parameter-table.md index 5fe23e710..bce45133e 100644 --- a/docs/freqai-parameter-table.md +++ b/docs/freqai-parameter-table.md @@ -106,6 +106,7 @@ Mandatory parameters are marked as **Required** and have to be set in one of the | `n_epochs` | The `n_epochs` parameter is a crucial setting in the PyTorch training loop that determines the number of times the entire training dataset will be used to update the model's parameters. An epoch represents one full pass through the entire training dataset. Overrides `n_steps`. Either `n_epochs` or `n_steps` must be set.

**Datatype:** int. optional.
Default: `10`. | `n_steps` | An alternative way of setting `n_epochs` - the number of training iterations to run. An iteration here refers to one call of `optimizer.step()`. Ignored if `n_epochs` is set. A simplified version of the relationship:

n_epochs = n_steps / (n_obs / batch_size)

The motivation here is that `n_steps` is easier to optimize and keep stable across different values of `n_obs` (the number of data points).

**Datatype:** int. optional.
Default: `None`. | `batch_size` | The size of the batches to use during training.

**Datatype:** int.
Default: `64`. +| `early_stopping_patience` | Number of epochs with no improvement in validation loss before training is stopped early. This helps prevent overfitting by halting training when the model stops improving. Set to `0` to disable early stopping. Requires a test/validation split (`test_size > 0`).

**Datatype:** int.
Default: `0` (disabled). ### Additional parameters
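The two relationships documented above - the `n_steps` to `n_epochs` conversion and the patience counter behind `early_stopping_patience` - can be sketched in standalone Python. This is an illustrative sketch only, not freqtrade's trainer code; the function names here are hypothetical, and the exact PyTorch training-loop logic lives in the freqtrade source.

```python
# Illustrative sketch (not freqtrade source) of the parameter semantics above.

def steps_to_epochs(n_steps: int, n_obs: int, batch_size: int) -> int:
    """Approximate n_epochs implied by n_steps:
    n_epochs = n_steps / (n_obs / batch_size)."""
    # Number of optimizer.step() calls per full pass over the data.
    batches_per_epoch = max(1, n_obs // batch_size)
    return max(1, n_steps // batches_per_epoch)

def early_stop_epoch(val_losses, patience: int):
    """Return the 1-based epoch at which training stops, or None if it
    runs to completion. patience == 0 disables early stopping."""
    if patience == 0:
        return None
    best = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch
    return None
```

For example, with 32,000 data points and `batch_size: 64`, setting `n_steps: 5000` corresponds to roughly 10 epochs, which is why `n_steps` stays stable as the dataset size changes while `n_epochs` would need retuning.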