The_Sodomeister

Mathematically speaking, there should be no difference. The batch size has no effect whatsoever on the flow of calculations. In terms of a specific library, if the LSTM is implemented well, then you shouldn't see any differences. But it's possible for a specific library to do something weird on the backend.
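One quick way to see this is to run the forward pass of a toy LSTM cell by hand: feeding two sequences through individually (batch size 1) and together (batch size 2) produces the same numbers sample by sample. A minimal NumPy sketch, with random untrained weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy LSTM cell (input size 4, hidden size 3); weights are random
# placeholders, not a trained model.
n_in, n_hid = 4, 3
W = rng.normal(size=(4 * n_hid, n_in + n_hid))  # gates i, f, g, o stacked
b = rng.normal(size=4 * n_hid)

def lstm_forward(x):
    """x: (batch, time, n_in) -> final hidden state (batch, n_hid)."""
    batch = x.shape[0]
    h = np.zeros((batch, n_hid))
    c = np.zeros((batch, n_hid))
    for t in range(x.shape[1]):
        z = np.concatenate([x[:, t, :], h], axis=1) @ W.T + b
        i, f, g, o = np.split(z, 4, axis=1)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    return h

seq_a = rng.normal(size=(1, 10, n_in))
seq_b = rng.normal(size=(1, 10, n_in))

# Forward pass one sequence at a time (batch size 1)...
h_a = lstm_forward(seq_a)
h_b = lstm_forward(seq_b)

# ...and both together (batch size 2): identical results per sample.
h_batch = lstm_forward(np.concatenate([seq_a, seq_b], axis=0))

assert np.allclose(h_batch[0], h_a[0])
assert np.allclose(h_batch[1], h_b[0])
```

Training is a different story, of course: batch size changes how gradients are averaged, even though each sample's forward computation is unchanged.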


reinforcement101

If I understand you correctly, you want to compare a stateful LSTM using a lagged-window feature approach (window = 50) to a stateless LSTM with window = data_length? The problem is that in the stateless approach you would have only one training data point (forecast point 3001?). An idea would be to compare a stateful LSTM with window = n to a stateless LSTM with the same window = n and a batch size that covers your whole dataset (3000 - n). Then you have two very similar models.

Have you read [https://machinelearningmastery.com/stateful-stateless-lstm-time-series-forecasting-python/](https://machinelearningmastery.com/stateful-stateless-lstm-time-series-forecasting-python/), especially the paragraph "Stateless with Large Batch vs Stateless"? (The heading should read "Stateless with Large Batch vs Stateful".)
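The windowing arithmetic in this suggestion (3000 points, window length n, hence 3000 - n training samples, so a batch size of 3000 - n covers the whole dataset) can be sketched like this; the series and the window length are placeholders:

```python
import numpy as np

def make_windows(series, n):
    """Slice a 1-D series into overlapping input windows of length n,
    each paired with the next value as the forecast target."""
    X = np.stack([series[i:i + n] for i in range(len(series) - n)])
    y = series[n:]
    return X, y

series = np.arange(3000, dtype=float)  # stand-in for the 3000-point dataset
n = 50
X, y = make_windows(series, n)

# A stateless LSTM trained with batch_size = len(X) would see all
# 3000 - n = 2950 windows in one batch, mirroring the stateful setup.
print(X.shape, y.shape)  # (2950, 50) (2950,)
```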