etna.models.nn.TFTNativeModel#
- class TFTNativeModel(encoder_length: int, decoder_length: int, n_heads: int = 4, num_layers: int = 2, dropout: float = 0.1, hidden_size: int = 160, lr: float = 0.001, static_categoricals: List[str] | None = None, static_reals: List[str] | None = None, time_varying_categoricals_encoder: List[str] | None = None, time_varying_categoricals_decoder: List[str] | None = None, time_varying_reals_encoder: List[str] | None = None, time_varying_reals_decoder: List[str] | None = None, num_embeddings: Dict[str, int] | None = None, loss: Module | None = None, train_batch_size: int = 16, test_batch_size: int = 16, optimizer_params: dict | None = None, trainer_params: dict | None = None, train_dataloader_params: dict | None = None, test_dataloader_params: dict | None = None, val_dataloader_params: dict | None = None, split_params: dict | None = None)[source]#
Bases:
DeepBaseModel
TFT model. For more details read the paper.
Model needs label encoded inputs for categorical features; for that purpose use
LabelEncoderTransform
. Feature values that were not seen during fit should be set to NaN for expected behaviour with strategy="none". Passed feature values are not validated for being static or for being correctly label encoded.
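The encoding convention above can be illustrated with a minimal pure-Python sketch (this is not etna's LabelEncoderTransform itself, just the idea): categories seen during fit get integer codes, and values unseen at fit time become NaN.

```python
import math


def fit_label_encoder(values):
    """Assign an integer code to each unique value, in order of appearance."""
    mapping = {}
    for v in values:
        if v not in mapping:
            mapping[v] = len(mapping)
    return mapping


def transform(values, mapping):
    """Encode values; categories unseen during fit become NaN."""
    return [mapping.get(v, math.nan) for v in values]


mapping = fit_label_encoder(["low", "high", "low"])  # {"low": 0, "high": 1}
codes = transform(["high", "low", "medium"], mapping)  # "medium" was unseen -> NaN
```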
Note
This model requires
torch
extension to be installed. Read more about this at the installation page.
Init TFT model.
- Parameters:
encoder_length (int) – encoder length
decoder_length (int) – decoder length
n_heads (int) – number of heads in Multi-Head Attention
num_layers (int) – number of layers in LSTM layer
dropout (float) – dropout rate
hidden_size (int) – size of the hidden state
lr (float) – learning rate
static_categoricals (List[str] | None) – categorical features that have one unique feature value for the whole series, e.g. segment. The first value in the series is passed to batch for each feature.
static_reals (List[str] | None) – continuous features that have one unique feature value for the whole series. The first value in the series is passed to batch for each feature.
time_varying_categoricals_encoder (List[str] | None) – time varying categorical features for encoder
time_varying_categoricals_decoder (List[str] | None) – time varying categorical features for decoder (known for future)
time_varying_reals_encoder (List[str] | None) – time varying continuous features for encoder, defaults to target
time_varying_reals_decoder (List[str] | None) – time varying continuous features for decoder (known for future)
num_embeddings (Dict[str, int] | None) – dictionary where keys are feature names and values are the number of unique values of that feature
loss (torch.nn.Module | None) – loss function
train_batch_size (int) – batch size for training
test_batch_size (int) – batch size for testing
optimizer_params (dict | None) – parameters for Adam optimizer (api reference
torch.optim.Adam
)
trainer_params (dict | None) – Pytorch lightning trainer parameters (api reference
pytorch_lightning.trainer.trainer.Trainer
)
train_dataloader_params (dict | None) – parameters for train dataloader, e.g. sampler (api reference
torch.utils.data.DataLoader
)
test_dataloader_params (dict | None) – parameters for test dataloader
val_dataloader_params (dict | None) – parameters for validation dataloader
split_params (dict | None) – dictionary with parameters for
torch.utils.data.random_split()
for train-test splitting:
train_size: (float) value from 0 to 1 - fraction of samples to use for training
generator: (Optional[torch.Generator]) - generator for reproducible train-test splitting
torch_dataset_size: (Optional[int]) - number of samples in dataset, in case the dataset does not implement
__len__
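How the train_size fraction maps onto the integer lengths that torch.utils.data.random_split expects can be sketched in plain Python. The split_lengths helper below is hypothetical, not part of etna's API; it only shows the fraction-to-lengths arithmetic.

```python
# Hypothetical helper (not part of etna): convert split_params' train_size
# fraction into the integer lengths that torch.utils.data.random_split expects.
# torch_dataset_size stands in for len(dataset) when __len__ is unavailable.
def split_lengths(train_size, torch_dataset_size):
    """Return (train_len, test_len) for a dataset of torch_dataset_size samples."""
    if not 0 < train_size < 1:
        raise ValueError("train_size must be a fraction in (0, 1)")
    train_len = int(train_size * torch_dataset_size)
    return train_len, torch_dataset_size - train_len


train_len, test_len = split_lengths(train_size=0.8, torch_dataset_size=100)
# random_split(dataset, [train_len, test_len], generator=...) would then
# perform the reproducible train-test split.
```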
Methods
fit(ts)
Fit model.
forecast(ts, prediction_size[, ...])
Make predictions.
get_model()
Get model.
load(path[, ts])
Load an object.
params_to_tune()
Get default grid for tuning hyperparameters.
predict(ts, prediction_size[, return_components])
Make predictions.
raw_fit(torch_dataset)
Fit model on torch like Dataset.
raw_predict(torch_dataset)
Make inference on torch like Dataset.
save(path)
Save the object.
set_params(**params)
Return new object instance with modified parameters.
to_dict()
Collect all information about etna object in dict.
Attributes
This class stores its
__init__
parameters as attributes.
context_size
Context size of the model.
- fit(ts: TSDataset) DeepBaseModel [source]#
Fit model.
- Parameters:
ts (TSDataset) – TSDataset with features
- Returns:
Model after fit
- Return type:
DeepBaseModel
- forecast(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset [source]#
Make predictions.
This method will make autoregressive predictions.
- Parameters:
ts (TSDataset) – dataset with features
prediction_size (int) – number of last timestamps to leave after making prediction
return_components (bool) – if True, additionally returns forecast components
- Returns:
Dataset with predictions
- Return type:
TSDataset
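The autoregressive prediction loop can be illustrated with a toy sketch (this is not etna's internal code; one_step_model is a stand-in for a trained model): each predicted value is appended to the context and fed back in, so step t+1 is conditioned on the model's own output at step t.

```python
def one_step_model(context):
    """Stand-in for a trained model: predict the next value as the mean of the last 2 points."""
    return sum(context[-2:]) / 2


def autoregressive_forecast(context, prediction_size):
    """Predict prediction_size steps, feeding each prediction back into the context."""
    history = list(context)
    predictions = []
    for _ in range(prediction_size):
        next_value = one_step_model(history)
        predictions.append(next_value)
        history.append(next_value)  # the prediction becomes part of the context
    return predictions


forecast = autoregressive_forecast([1.0, 2.0, 3.0], prediction_size=2)
# forecast[1] is computed from forecast[0], not from observed data
```

By contrast, predict (below) uses true in-sample values instead of the model's own previous outputs at each step.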
- classmethod load(path: Path, ts: TSDataset | None = None) Self [source]#
Load an object.
Warning
This method uses
dill
module which is not secure. It is possible to construct malicious data which will execute arbitrary code during loading. Never load data that could have come from an untrusted source, or that could have been tampered with.
- params_to_tune() Dict[str, BaseDistribution] [source]#
Get default grid for tuning hyperparameters.
This grid tunes parameters: num_layers, n_heads, hidden_size, lr, dropout. Other parameters are expected to be set by the user.
- Returns:
Grid to tune.
- Return type:
Dict[str, BaseDistribution]
- predict(ts: TSDataset, prediction_size: int, return_components: bool = False) TSDataset [source]#
Make predictions.
This method will make predictions using true values instead of predicted on a previous step. It can be useful for making in-sample forecasts.
- Parameters:
ts (TSDataset) – dataset with features
prediction_size (int) – number of last timestamps to leave after making prediction
return_components (bool) – if True, additionally returns prediction components
- Returns:
Dataset with predictions
- Return type:
TSDataset
- raw_fit(torch_dataset: Dataset) DeepBaseModel [source]#
Fit model on torch like Dataset.
- Parameters:
torch_dataset (Dataset) – Torch like dataset for model fit
- Returns:
Model after fit
- Return type:
DeepBaseModel
- raw_predict(torch_dataset: Dataset) Dict[Tuple[str, str], ndarray] [source]#
Make inference on torch like Dataset.
- Parameters:
torch_dataset (Dataset) – Torch like dataset for model inference
- Returns:
Dictionary with predictions
- Return type:
Dict[Tuple[str, str], ndarray]
- set_params(**params: dict) Self [source]#
Return new object instance with modified parameters.
The method also allows changing parameters of nested objects within the current object. For example, it is possible to change parameters of a
model
in a
Pipeline
.
Nested parameters are expected to be in the
<component_1>.<...>.<parameter>
form, where components are separated by a dot.
- Parameters:
**params (dict) – Estimator parameters
- Returns:
New instance with changed parameters
- Return type:
Self
Examples
>>> from etna.pipeline import Pipeline
>>> from etna.models import NaiveModel
>>> from etna.transforms import AddConstTransform
>>> model = NaiveModel(lag=1)
>>> transforms = [AddConstTransform(in_column="target", value=1)]
>>> pipeline = Pipeline(model, transforms=transforms, horizon=3)
>>> pipeline.set_params(**{"model.lag": 3, "transforms.0.value": 2})
Pipeline(model = NaiveModel(lag = 3, ), transforms = [AddConstTransform(in_column = 'target', value = 2, inplace = True, out_column = None, )], horizon = 3, )
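The dotted-path convention used above can be sketched in plain Python. The set_nested helper below is hypothetical and simplified (it mutates in place, whereas set_params returns a new instance); it only shows how a path like "transforms.0.value" could be resolved component by component.

```python
# Hypothetical helper (not etna's implementation): resolve a dotted path such
# as "transforms.0.value" against nested objects/sequences and set the final
# parameter. Integer components index into sequences, names into attributes.
def set_nested(obj, dotted_path, value):
    *components, parameter = dotted_path.split(".")
    for component in components:
        obj = obj[int(component)] if component.isdigit() else getattr(obj, component)
    if parameter.isdigit():
        obj[int(parameter)] = value
    else:
        setattr(obj, parameter, value)


# Minimal stand-ins for a model and a pipeline holding it
class Model:
    def __init__(self, lag):
        self.lag = lag


class Pipeline:
    def __init__(self, model):
        self.model = model


pipeline = Pipeline(Model(lag=1))
set_nested(pipeline, "model.lag", 3)  # pipeline.model.lag is now 3
```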