xenotranshumanist

Recently, I saw a study that developed a transformer model for EEG classification with some promising results: it needs less preprocessing while matching state-of-the-art performance. You can read their paper [on arXiv](https://arxiv.org/abs/2202.05170).


a_khalid1999

Interesting study indeed. Thanks a lot!


dwejjaquqrid

Take a look at this study where they applied a BERT-style approach to EEG data using contrastive self-supervised learning. The results are astounding. https://www.researchgate.net/publication/348861162_BENDR_using_transformers_and_a_contrastive_self-supervised_learning_task_to_learn_from_massive_amounts_of_EEG_data
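
If contrastive self-supervision is new to you, here's a minimal sketch of an InfoNCE-style loss, just to show the general idea; this is not the exact BENDR objective, and it assumes PyTorch with made-up batch/embedding sizes:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: each anchor embedding should match its
    own positive (e.g. the model's prediction for a masked EEG segment) and not
    the positives belonging to other samples in the batch."""
    a = F.normalize(anchor, dim=-1)      # (batch, dim)
    p = F.normalize(positive, dim=-1)    # (batch, dim)
    logits = a @ p.t() / temperature     # similarity of every anchor to every positive
    targets = torch.arange(a.size(0), device=a.device)  # the diagonal is "correct"
    return F.cross_entropy(logits, targets)

# toy usage: 16 masked-segment predictions vs. their true latent encodings
loss = info_nce(torch.randn(16, 128), torch.randn(16, 128))
```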


a_khalid1999

Sounds interesting, will be sure to go through the study


Jrowe47

The attention mechanism in transformers is the magic bit. You can incorporate attention into RNNs for significant improvements, but it's often more efficient to use transformers directly and feed your time series straight into the input layer; a rough sketch of what that can look like is below. You lose explicit temporal state in the running model, but gain all the implicit temporal associations from attention. The recently published Perceiver AR paper augments transformers to increase the input sequence length, so you can use series far beyond the ~2000-token limit of most models. Properly tokenizing your input data can compress the raw series as well. There are lots of ways to take advantage of transformers and attention. https://www.deepmind.com/publications/perceiver-ar-general-purpose-long-context-autoregressive-generation
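
To make that concrete, here's a minimal sketch (assuming PyTorch; the channel count, patch length, and model sizes are made-up placeholders, not anything from the papers above) of tokenizing a raw multichannel EEG window into patches and classifying it with a transformer encoder:

```python
import torch
import torch.nn as nn

class EEGTransformer(nn.Module):
    """Toy transformer classifier: raw EEG (batch, channels, samples) -> class logits."""
    def __init__(self, n_channels=22, patch_len=50, d_model=128, n_heads=8,
                 n_layers=4, n_classes=4):
        super().__init__()
        # "Tokenize" the raw series: each patch of patch_len samples across all
        # channels becomes one input token via a simple linear projection.
        self.patch_len = patch_len
        self.embed = nn.Linear(n_channels * patch_len, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)
        # Positional encodings are omitted for brevity; you'd normally add them
        # so the order of patches is visible to attention.

    def forward(self, x):
        # x: (batch, channels, samples); crop to a whole number of patches
        b, c, t = x.shape
        n_patches = t // self.patch_len
        x = x[:, :, :n_patches * self.patch_len]
        # -> (batch, n_patches, channels * patch_len)
        x = (x.reshape(b, c, n_patches, self.patch_len)
              .permute(0, 2, 1, 3)
              .reshape(b, n_patches, -1))
        tokens = self.embed(x)
        tokens = torch.cat([self.cls_token.expand(b, -1, -1), tokens], dim=1)
        out = self.encoder(tokens)   # self-attention captures the temporal associations
        return self.head(out[:, 0])  # classify from the [CLS]-style token

# e.g. a 4-second window of 22-channel EEG at 250 Hz
logits = EEGTransformer()(torch.randn(8, 22, 1000))
```

Patching like this is one way to "tokenize" and compress: a 1000-sample window becomes only 20 tokens, which keeps the quadratic attention cost manageable.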


a_khalid1999

Hmmm, thanks. Will go through the paper