Machine Learning on Sequences without RNNs

Nowadays, the most common approach to sequence problems (text, time series, etc.) in machine learning is to use some variant of an RNN.

However, this is not strictly necessary. In [1], the authors stack adjacent time steps as adjacent channels of the input, letting a convolutional network pass information between neighboring steps for precipitation nowcasting. Essentially, they add a separate dimension to the input tensor to handle time.
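
A minimal sketch of the idea, assuming PyTorch (this is not the paper's actual U-Net; the layer sizes and frame dimensions here are made up for illustration):

```python
import torch
import torch.nn as nn

# Stack T past radar frames along the channel axis so an ordinary conv net
# sees temporal context without any recurrence.
T = 4            # number of past time steps stacked as channels (illustrative)
H = W = 64       # spatial size of each frame (illustrative)

model = nn.Sequential(
    nn.Conv2d(T, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # predict the next frame
)

frames = torch.randn(1, T, H, W)   # batch of 1: the last T frames, channel-stacked
next_frame = model(frames)         # shape (1, 1, H, W)
```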

One issue I can see with this is that the entire input window must be assembled before inferring even a single next point, whereas an RNN can be fed new inputs continuously and produce a prediction at each step.
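
To make the contrast concrete, here is a self-contained sketch (model and sizes again invented for illustration): the channel-stacked model needs the full window re-fed for every prediction, while an RNN carries hidden state and consumes only the newest input.

```python
import collections
import torch
import torch.nn as nn

T, H, W = 4, 64, 64
model = nn.Conv2d(T, 1, kernel_size=3, padding=1)  # stand-in channel-stacked net

# Windowed inference: every new prediction re-feeds the full T-frame window.
window = collections.deque(maxlen=T)

def predict(frame):                       # frame: (H, W) tensor
    window.append(frame)
    if len(window) < T:
        return None                       # cannot predict until the window fills
    return model(torch.stack(list(window)).unsqueeze(0))  # (1, 1, H, W)

# An RNN instead updates its hidden state with one new input per step:
rnn = nn.GRUCell(input_size=H * W, hidden_size=128)
h = torch.zeros(1, 128)
h = rnn(torch.randn(H, W).reshape(1, -1), h)  # no window bookkeeping needed
```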

References

[1] Agrawal, S., Barrington, L., Bromberg, C., Burge, J., Gazen, C. and Hickey, J. (2019). Machine Learning for Precipitation Nowcasting from Radar Images. arXiv:1912.12132 [cs, stat].
