RollingValidation
The RollingValidation
object provides rolling validation, where a full dataset is divided into a training set followed by a testing set. As the algorithm rolls through the testing set making out-of-sample predictions/forecasts, the model is periodically retrained to keep its parameters from becoming stale. For example, with TE_RATIO = 0.5 and m = 1000 it works as follows: tr(ain) 0 to 499, te(st) 500 to 999. Retraining occurs according to the retraining cycle rc; e.g., rc = 10 implies that retraining occurs after every 10 forecasts, or 50 times for this example.
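The split and retraining count described above can be sketched as follows; `split` is a hypothetical helper for illustration, not part of the actual ScalaTion API.

```scala
// Hypothetical helper computing the train/test split and the number of
// retrainings for rolling validation.
val TE_RATIO = 0.5                                   // ratio of testing set to full dataset

def split (m: Int, rc: Int): (Int, Int, Int) =
    val te_size = (m * TE_RATIO).ceil.toInt          // testing set size (round up)
    val tr_size = m - te_size                        // initial training set size
    (tr_size, te_size, te_size / rc)                 // (train size, test size, # retrainings)
```

For m = 1000 and rc = 10 this gives a 500/500 split with 50 retrainings, matching the example.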
Attributes
- Graph
-
- Supertypes
-
class Object
trait Matchable
class Any
- Self type
-
RollingValidation.type
Members list
Value members
Concrete methods
Align the actual response vector for comparison with the predicted/forecasted response vector, returning a time vector and sliced response vector.
Value parameters
- tr_size
-
the size of the initial training set
- y
-
the actual response for the full dataset (to be sliced)
Attributes
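The alignment can be sketched using plain arrays (ScalaTion itself uses VectorD): keep only the testing-set portion of y together with its time indices, so the actual values line up with the out-of-sample forecasts.

```scala
// Sketch of align: slice off the training prefix, returning the time indices
// and actual responses for the testing set only.
def align (tr_size: Int, y: Array [Double]): (Array [Int], Array [Double]) =
    val t  = Array.range (tr_size, y.length)         // time indices for the testing set
    val yy = y.slice (tr_size, y.length)             // actual responses in the testing set
    (t, yy)
```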
Use rolling-validation to compute test Quality of Fit (QoF) measures by dividing the dataset into a TRAINING SET (tr) and a TESTING SET (te) as follows: [ <-- tr_size --> | <-- te_size --> ] This version calls predict for one-step ahead out-of-sample forecasts.
Value parameters
- mod
-
the forecasting model being used (e.g., ARIMA)
- rc
-
the retraining cycle (number of forecasts until retraining occurs)
Attributes
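The one-step rolling loop can be sketched as below, under an assumed minimal model interface (the actual ScalaTion Forecaster trait differs): at each test instance the model predicts one step ahead from the data seen so far, retraining every rc forecasts.

```scala
// Assumed minimal interface for illustration.
trait Model:
    def train (y: Array [Double]): Unit              // fit parameters to data seen so far
    def predict (y: Array [Double]): Double          // one-step ahead prediction

// Roll through the testing set making one-step out-of-sample predictions,
// retraining every rc forecasts.
def rollValidate (mod: Model, y: Array [Double], tr_size: Int, rc: Int): Array [Double] =
    val yp = Array.ofDim [Double] (y.length - tr_size)       // one prediction per test instance
    for i <- tr_size until y.length do
        if (i - tr_size) % rc == 0 then mod.train (y.take (i))   // periodic retraining
        yp(i - tr_size) = mod.predict (y.take (i))               // out-of-sample prediction
    yp
```

With a naive last-value model, each prediction is simply the previous observation, giving a useful baseline for the QoF measures.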
Use rolling-validation to compute test Quality of Fit (QoF) measures by dividing the dataset into a TRAINING SET (tr) and a TESTING SET (te) as follows: [ <-- tr_size --> | <-- te_size --> ] This version calls forecast for h-steps ahead out-of-sample forecasts. FIX - makeForecastMatrix is more efficient than forecastAll and should work?
Value parameters
- h
-
the forecasting horizon (h-steps ahead)
- mod
-
the forecasting model being used (e.g., ARIMA)
- rc
-
the retraining cycle (number of forecasts until retraining occurs)
Attributes
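The h-steps variant can be sketched similarly (again under an assumed interface, not ScalaTion's actual Forecaster trait): at each roll the model forecasts h steps beyond the data seen so far, so the last h-1 test points cannot be reached and the result is shorter by h-1.

```scala
// Assumed minimal interface for illustration.
trait Forecaster:
    def train (y: Array [Double]): Unit
    def forecast (y: Array [Double], h: Int): Double // h-steps ahead forecast

// Roll through the testing set making h-steps-ahead out-of-sample forecasts,
// retraining every rc forecasts.
def rollValidateH (mod: Forecaster, y: Array [Double], tr_size: Int, rc: Int, h: Int): Array [Double] =
    val yf = Array.ofDim [Double] (y.length - tr_size - h + 1)
    for i <- tr_size until y.length - h + 1 do
        if (i - tr_size) % rc == 0 then mod.train (y.take (i))   // periodic retraining
        yf(i - tr_size) = mod.forecast (y.take (i), h)           // forecast of y(i + h - 1)
    yf
```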
Set the training ratio = ratio of training set to full dataset.
Value parameters
- m
-
the size of the full dataset
Attributes
Calculate the size (number of instances) for a testing set (round up).
Value parameters
- m
-
the size of the full dataset
Attributes
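The round-up rule can be sketched as follows (hypothetical helper names): with TE_RATIO = 0.5 and an odd m, the testing set receives the extra instance and the training set gets the remainder.

```scala
// Testing set size rounds up; the training set takes the remaining instances.
val TE_RATIO = 0.5
def teSize (m: Int): Int = (m * TE_RATIO).ceil.toInt
def trSize (m: Int): Int = m - teSize (m)
```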
Test assessment and validation for the given forecasting model: (1) in-sample assessment on full dataset (2) out-of-sample validation using rolling validation with predict (one-step) (3) out-of-sample validation using rolling validation with forecast (h-steps)
Value parameters
- h
-
the forecasting horizon (h-steps ahead)
- mod
-
the forecasting model to test (e.g., ARIMA)
- rc
-
the retraining cycle (number of forecasts until retraining occurs)
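Steps (1) and (2) of the assessment can be illustrated for a naive last-value model, using MSE as the QoF measure (the `mse` helper and the naive predictor are hypothetical, chosen only to make the comparison concrete); step (3) is analogous with an h-steps forecast.

```scala
// Mean squared error as a simple QoF measure.
def mse (y: Array [Double], yp: Array [Double]): Double =
    y.zip (yp).map ((a, b) => (a - b) * (a - b)).sum / y.length

val y       = Array (1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
val tr_size = 3
val e_in    = mse (y.drop (1), y.dropRight (1))      // (1) in-sample: predict previous value
val yp_out  = (tr_size until y.length).map (i => y(i-1)).toArray
val e_out   = mse (y.drop (tr_size), yp_out)         // (2) rolling one-step out-of-sample
```

For a model with parameters fit to the training set, the out-of-sample error is typically larger than the in-sample error; for this parameter-free naive predictor the two coincide.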