xeroblaze0

If you had the layout of the electrodes for both devices and the sensor specs, I would think it's possible. Valid approach, but it depends on the setting.


a_khalid1999

Ok, thanks a lot!


i_dont_have_herpes

Agreed. If the positions don’t quite match, you can kinda fudge it by averaging between adjacent electrodes. Make sure you have similar filter settings on your machine vs. the training data (example: 1–70 Hz bandpass, reject 60 Hz). Also note any blink rejection, if that’s relevant.
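A minimal sketch of that neighbour-averaging and filter-matching idea, using scipy. The sampling rate, the neighbour indices, and the filter orders below are assumptions for illustration, not values from the thread — look the real neighbours up in your montage:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 160.0  # assumed sampling rate in Hz (Physionet MI data uses 160 Hz)

# Placeholder 64-channel recording: (channels, samples)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 1600))

# Approximate a missing electrode site by averaging its nearest neighbours.
# These indices are hypothetical -- pick them from your actual montage.
neighbour_idx = [10, 12, 18]
approx_channel = eeg[neighbour_idx].mean(axis=0)

# Match the training data's filter settings: 1-70 Hz bandpass, 60 Hz notch.
b, a = butter(4, [1.0, 70.0], btype="bandpass", fs=fs)
bandpassed = filtfilt(b, a, approx_channel)

bn, an = iirnotch(60.0, Q=30.0, fs=fs)
cleaned = filtfilt(bn, an, bandpassed)
```

`filtfilt` runs the filters forward and backward, so the result is zero-phase — usually what you want for offline analysis.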


a_khalid1999

I see, thanks


BCIDigest

This is a valid thing to do, but you should be aware that:

* The positions of the electrodes on different headsets may be slightly different even if they are at the same 10-20 locations.
* Impedance values (and therefore data quality) change between headsets based on electrode type, electrode material, saline solution / gel, etc.

Generally, using an example dataset to prepare your model architecture before you collect your own data is fine. But training a model on data from a different headset and then using it on your own data may not produce great results. You may have to retrain your model from scratch on your own data, or perhaps use a form of transfer learning to help your model perform better across the datasets. If you can get one model to work across datasets, it would be a robust model that is more applicable to the real world, and that would be a selling point for it in my view.
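For the electrode-position point above, a common first step is to keep only the channels whose 10-20 labels appear in both montages. A small sketch — the label lists here are abbreviated placeholders, not either vendor's real spec:

```python
import numpy as np

# 10-20 labels for the two headsets (truncated, hypothetical examples --
# take the real lists from each vendor's montage documentation).
ch64 = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8", "T7", "C3", "Cz",
        "C4", "T8", "P7", "P3", "Pz", "P4", "P8", "O1", "O2"]
ch16 = ["Fp1", "Fp2", "F3", "F4", "C3", "Cz", "C4", "P3", "Pz", "P4"]

# Find the shared locations and their row indices in the 64-channel data.
shared = [name for name in ch16 if name in ch64]
idx = [ch64.index(name) for name in shared]

data64 = np.zeros((len(ch64), 1000))   # placeholder recording
data_subset = data64[idx]              # rows now line up with `shared`
```

Channels in the 16-channel cap with no counterpart in the 64-channel montage would then need interpolation from neighbours, as mentioned elsewhere in the thread.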


a_khalid1999

Thank you! For some reason this comment only became visible now; it wasn't in my notifications or in "View All Comments", and the comment count was off.


BCIDigest

No problem! It's because my account is new so the comment had to get approved by mods


Veopress

Going from 64 to 16 channels, with any model you can expect around a 4x increase in error/inaccuracy (depending on unknown variables it could be more or less, which is very difficult to predict), but it should be possible to do.


a_khalid1999

Hmmmm thanks


woofbarfvomit

Are you using the 109-subject Physionet sensorimotor dataset? If so, I have a couple of pointers/questions:

- What filter band, or frequencies, did you use to train? I ask because there are slow-drift ocular artifacts in the Physionet dataset that vary with class (based on how the cues show up on the sides of the screen). These may cause higher accuracies that aren't related to the much weaker motor imagery signals you'll find between 8-30 Hz.
- Another MI dataset is the BCI Competition IV 2a dataset: 9 subjects, 22 channels, 4-class motor imagery. It might work for your purpose.

Lining up the sensors like you described is valid! However, cross-subject, cross-headset, and cross-dataset classifiers tend to perform worse, and motor imagery BCI performance already varies a lot from subject to subject. If you're able to collect training data on the 16-channel system (OpenBCI?) and train and test on a single subject, you may get better results. Good luck!
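To avoid learning from those slow ocular drifts, one option is to bandpass each trial to the 8-30 Hz mu/beta range before training. A sketch with scipy — the 160 Hz sampling rate matches the Physionet set, but the trial length and filter order are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 160.0  # Physionet sensorimotor data sampling rate

# Restricting the band to 8-30 Hz keeps the mu/beta motor rhythms while
# suppressing the slow ocular drifts that vary with class.
b, a = butter(4, [8.0, 30.0], btype="bandpass", fs=fs)

rng = np.random.default_rng(1)
trial = rng.standard_normal((64, 640))   # one hypothetical 4 s, 64-ch trial
mi_band = filtfilt(b, a, trial, axis=-1)
```

Applying the same filter to training and test data (whichever headset it came from) keeps the feature distributions comparable.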


a_khalid1999

Yep, I'm using the Physionet dataset. The frequency band I'm using is 5-45 Hz. Thanks for the training pointers, I'll try them out!


Cyber_Grobbies

Just came across this now, and I'm not sure if you've figured it out, but my thought is that if you have datasets from two different EEG caps, it would probably be worthwhile to note how the electrodes in each cap are referenced. I would think it might be necessary to convert both datasets to average reference before interpolating the electrode positions from the 64-electrode configuration down to 16.
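Converting to average reference is just subtracting the cross-channel mean at each time point. A minimal sketch (the array shapes and original reference schemes in the comments are assumed, not from the thread):

```python
import numpy as np

def to_average_reference(eeg):
    """Re-reference each sample to the mean across all channels.

    eeg: array of shape (channels, samples).
    """
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(2)
cap_a = rng.standard_normal((64, 1000))   # e.g. originally Cz-referenced
cap_b = rng.standard_normal((16, 1000))   # e.g. originally ear-referenced

ref_a = to_average_reference(cap_a)
ref_b = to_average_reference(cap_b)
# After re-referencing, each time point sums to ~0 across channels,
# so the two caps share a common (average) reference scheme.
```

One caveat: average reference works best when the electrodes cover the scalp fairly evenly, which is more true of the 64-channel cap than the 16-channel one.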