https://www.slideshare.net/DonghyeonKim7/eeg-102812945
https://www.mdpi.com/2079-9292/11/15/2293/pdf
https://archive.physionet.org/pn4/eegmmidb/
https://physionet.org/content/eegmmidb/1.0.0/
https://github.com/mne-tools/mne-python/blob/main/mne/datasets/eegbci/eegbci.py
https://neuro.inf.unibe.ch/AlgorithmsNeuroscience/Tutorial_files/BaselineCorrection.html
https://mne.tools/1.0/auto_tutorials/index.html
https://mne.tools/1.0/auto_examples/decoding/decoding_csp_eeg.html
https://github.com/jona-sassenhagen/mne_workshop_amsterdam
https://www.youtube.com/watch?v=IYuAPisoUeI&list=PLXtvZiGkmNVvPS0N9UNBVkIFe0_0t_Nqt
https://github.com/NeuroTechX/moabb
http://learn.neurotechedu.com/machinelearning/#machine-learning-for-brain-computer-interfaces
https://pyriemann.readthedocs.io/en/latest/auto_examples/motor-imagery/plot_single.html
https://github.com/JulesBelveze/eeg-classifier/tree/master/classification
with PyTorch, motor imagery: hands vs feet
https://github.com/mne-tools/mne-torch/blob/master/demo_eeg_csp.py
with sklearn, motor imagery: hands vs feet
https://gist.github.com/F-A/ebd0cf72fb4e8b43d6d278db776f5824
EEG Feature Extraction
https://www.youtube.com/watch?v=rgG9t6DrBAk
https://www.youtube.com/watch?v=AjMdirPPnQQ
https://docs.google.com/presentation/d/1KHbTb6H09P7SWbL6a8ryK1QKINZb5FbTMNx-DMoqaEY/edit#slide=id.p
https://www.sciencedirect.com/science/article/pii/S1110016821007055
https://arro.anglia.ac.uk/id/eprint/706861/1/Selim_2021.pdf
[total-perspective-vortex] In the subject, page 5, first paragraph, second sentence (https://cdn.intra.42.fr/pdf/pdf/60845/en.subject.pdf):
"The data was measured during a motor imagery experiment, where people had to do or imagine a hand or feet movement."
I am wondering whether you meant using only the imagery data, i.e.:
"The data was measured during a motor imagery experiment, where people had to imagine a hand or feet movement."
Band-pass filter: Butterworth IIR, 7-30 Hz, from page 7 of https://www.slideshare.net/victorasanza/eeg-signal-clustering-for-motor-and-imaginary-motor-tasks-on-hands-and-feet-85935652
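A standalone sketch of that filter using scipy (an assumption; inside an MNE pipeline the equivalent would be `raw.filter(7., 30., method="iir")`):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 160.0  # EEGBCI sampling rate
# Order-4 Butterworth band-pass, 7-30 Hz, as a stable SOS cascade
sos = butter(4, [7.0, 30.0], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 5, 1 / fs)
# Test signal: 10 Hz (inside the band) + 50 Hz (outside the band)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = sosfiltfilt(sos, x)  # zero-phase filtering keeps the 10 Hz part
```

The 50 Hz component is strongly attenuated while the 7-30 Hz motor band survives.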
subject -> subjects from eeg_motor_imagery_002_Motor_imagery_decoding_from_EEG_data_using_the_Common_Spatial_Pattern_(CSP).ipynb
64 channels -> motor cortex area (7 x 3 channels) from pdf/02The EEG Device For Your Project_ Choosing between NeuroSky MindWave, Muse 2, g.tec Unicorn, OpenBCI, Emotiv EPOC+… _ by Tim de Boer _ A Beginner’s Guide to Brain-Computer Interfaces _ Medium.pdf
Common Average Referencing (CAR) data -= data.mean() from pdf/06Improving Preprocessing Of EEG Data In One Line Of Code With CAR _ by Tim de Boer _ A Beginner’s Guide to Brain-Computer Interfaces _ Medium.pdf
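A minimal numpy sketch of CAR on a (channels, times) array; note that the axis matters: `data.mean()` with no axis subtracts one global scalar, whereas CAR subtracts the across-channel mean at each time point:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.standard_normal((64, 801))  # (channels, times), one epoch's worth

# CAR: subtract, at every time point, the average over all 64 channels
car = data - data.mean(axis=0, keepdims=True)
```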
#pip list --format=freeze > requirements.txt
#pip install -r requirements.txt
#https://github.com/mne-tools/mne-python/blob/main/mne/decoding/csp.py
# raw = filter_data(raw=prepare_data(raw=fetch_data(raw_fnames=raw_filenames())))
# labels, epochs = fetch_events(raw)
print(epochs.tmin, epochs.tmax)
#-1.0 4.0
#Read epochs (training will be done only on the 1-2 s window)
epochs_train = epochs.copy().crop(tmin=1., tmax=2.) # crop -1~4 s to 1~2 s
epochs_data = epochs.get_data()
epochs_data_train = epochs_train.get_data()
print(labels.shape)
#(45,)
print(labels)
# [0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 0 1 0 0 1 1 0 1 0 0 1 1 0 1 0 0 1 0]
print(type(epochs))
#<class 'mne.epochs.Epochs'>
print(type(epochs_data))
#<class 'numpy.ndarray'>
print(epochs_data.shape)
#data for 45 events and 801 original time points ...
#45 matching events found
#64-channel EEG signals
#801 original time points
#(45, 64, 801)
print(type(epochs_train))
#<class 'mne.epochs.Epochs'>
print(type(epochs_data_train))
#<class 'numpy.ndarray'>
print(epochs_data_train.shape)
#(45, 64, 161)
#epochs_data_train
The 45 events split into 23 of class 0 and 22 of class 1
----------------------------------------
_concat_cov: x_class.shape0=(23, 64, 161)
_concat_cov: x_class.shape1=(64, 23, 161)
_concat_cov: x_class.shape2=(64, 3703)
_concat_cov: x_class.shape3=(64, 3703), x_class.T.shape=(3703, 64), x_class.T.conj().shape=(3703, 64)
----------------------------------------
_concat_cov: x_class.shape0=(22, 64, 161)
_concat_cov: x_class.shape1=(64, 22, 161)
_concat_cov: x_class.shape2=(64, 3542)
_concat_cov: x_class.shape3=(64, 3542), x_class.T.shape=(3542, 64), x_class.T.conj().shape=(3542, 64)
----------------------------------------
eegbci_training+predict0*.ipynb CSPTEST2 vs ms/csp.py
x_class0.shape3=(64, 3703), cov0.shape=(64, 64)
x_class1.shape3=(64, 3542), cov1.shape=(64, 64)
covs.shape=(2, 64, 64), sample_weight=[64 64]
eigenvalues.shape=(64,), eigenvectors.shape=(64, 64)
eigen_values, eigen_vectors = scipy.linalg.eigh(covs[0], covs.sum(0))
eigenvalues=
[0.17746337 0.24118598 0.27384043 0.28964807 0.29520345 0.32522068
0.33402773 0.34438121 0.35182909 0.3651519 0.37221281 0.37947045
0.38496811 0.38944195 0.39464295 0.40171966 0.40584041 0.40909791
0.41352371 0.41914926 0.42786026 0.43452281 0.43670259 0.43925342
0.44267184 0.44770765 0.45582463 0.46441613 0.46614347 0.47155521
0.47228989 0.47905544 0.483337 0.48627152 0.49914949 0.50008759
0.50469975 0.51076627 0.51713878 0.51901223 0.52420346 0.52635209
0.53216028 0.53694715 0.54490356 0.54741154 0.55257027 0.55919848
0.56525476 0.57049208 0.57708014 0.57722571 0.58420622 0.58616418
0.59852949 0.60565039 0.6097799 0.61295291 0.62875051 0.64452457
0.65106847 0.65522958 0.68470646 0.709407 ], eigenvectors.shape=(64, 64)
np.abs(eigen_values - 0.5)=
[3.22536629e-01 2.58814023e-01 2.26159570e-01 2.10351931e-01
2.04796545e-01 1.74779322e-01 1.65972268e-01 1.55618788e-01
1.48170906e-01 1.34848098e-01 1.27787194e-01 1.20529551e-01
1.15031894e-01 1.10558052e-01 1.05357050e-01 9.82803406e-02
9.41595863e-02 9.09020887e-02 8.64762874e-02 8.08507425e-02
7.21397388e-02 6.54771909e-02 6.32974123e-02 6.07465800e-02
5.73281622e-02 5.22923493e-02 4.41753652e-02 3.55838688e-02
3.38565306e-02 2.84447888e-02 2.77101107e-02 2.09445638e-02
1.66630020e-02 1.37284787e-02 8.50507250e-04 8.75943758e-05
4.69975333e-03 1.07662706e-02 1.71387791e-02 1.90122347e-02
2.42034625e-02 2.63520938e-02 3.21602820e-02 3.69471454e-02
4.49035582e-02 4.74115373e-02 5.25702711e-02 5.91984838e-02
6.52547568e-02 7.04920831e-02 7.70801444e-02 7.72257103e-02
8.42062173e-02 8.61641753e-02 9.85294874e-02 1.05650393e-01
1.09779897e-01 1.12952910e-01 1.28750510e-01 1.44524567e-01
1.51068475e-01 1.55229579e-01 1.84706461e-01 2.09407004e-01]
np.argsort(np.abs(eigen_values - 0.5))=
[35 34 36 37 33 32 38 39 31 40 41 30 29 42 28 27 43 26 44 45 25 46 24 47
23 22 48 21 49 20 50 51 19 52 53 18 17 16 15 54 14 55 56 13 57 12 11 10
58 9 59 8 60 61 7 6 5 62 4 63 3 2 1 0]
np.argsort(np.abs(eigen_values - 0.5))[::-1]=
[ 0 1 2 3 63 4 62 5 6 7 61 60 8 59 9 58 10 11 12 57 13 56 55 14
54 15 16 17 18 53 52 19 51 50 20 49 21 48 22 23 47 24 46 25 45 44 26 43
27 28 42 29 30 41 40 31 39 38 32 33 37 36 34 35]
ix=
[ 0 1 2 3 63 4 62 5 6 7 61 60 8 59 9 58 10 11 12 57 13 56 55 14
54 15 16 17 18 53 52 19 51 50 20 49 21 48 22 23 47 24 46 25 45 44 26 43
27 28 42 29 30 41 40 31 39 38 32 33 37 36 34 35]
eigenvectors2.shape=(64, 64)
vs
X1.shape=(2, 500), S1.shape=(2, 2)
X2.shape=(2, 500), S2.shape=(2, 2)
eigen_values, eigen_vectors =scipy.linalg.eigh(S1, S1+S2)
eigenvalues=[0.1978 0.80471], eigenvectors.shape=(2, 2)
----------------------------------------
- Preprocessing
- being shown in the video
. using mne-tools
import matplotlib
matplotlib.use('qtagg')
import matplotlib.pyplot as plt
plt.ioff()
...
plt.show()
- making an additional filter is not necessary
. The feature extraction section is just to check that
the significant frequencies for a motor imagery task are
kept (~8-40 Hz)
- Classification
- Predict: uses sklearn validation tools?
- Implementation
- Score: over 75%, a point is added for every extra 3%; current score is 0
# 1, 2, 3, 4, 5, 6, 42
#-----------------------------------------------------------------
# 0.8 0.867 0.73 0.667 0.8 0.8 0.867 #with filter(7, 30)
# avg of [1, 2, 3, 4] = 0.766
#-----------------------------------------------------------------
# 0.82 0.756 0.711 0.644 0.822 0.844 0.867 #with filter(8, 40)
# avg of [1, 2, 3, 4] = 0.732
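On the sklearn validation question above, a minimal sketch of cross-validated scoring; toy features stand in for real CSP log-variance output, and all names and sizes here are illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
# Toy stand-in for CSP features: 45 epochs x 4 components, with a
# class-dependent shift so the problem is learnable (23 vs 22 labels,
# matching the split in these notes).
y = np.array([0, 1] * 23)[:45]
X = rng.standard_normal((45, 4)) + y[:, None]

clf = Pipeline([("LDA", LDA(solver="lsqr", shrinkage="auto"))])
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
scores = cross_val_score(clf, X, y, cv=cv)
print(scores.mean())
```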
https://ko.wikipedia.org/wiki/뇌파
https://blog.naver.com/msnayana/80142231659
. Synapses form between neurons and release neurotransmitters
. The transmitters bind to ion channels; the channels open and a potential difference arises
. The voltage drives a current, and the current forms an electric field
. A changing electric field generates a magnetic field, and a changing magnetic field in turn generates an electric field
. With electrodes placed on the scalp, EEG signals of about 10-50 uV can be measured
. Event-Related Potential (ERP): a signal containing the brain's electrical activity evoked by a stimulus carrying specific information (images, speech, sounds, task commands, etc.)
. An ERP is obtained by averaging EEG traces time-locked to stimulus onset, which cancels the brain activity unrelated to the stimulus and keeps only the activity involved in processing it
https://www.etri.re.kr/webzine/20170630/common/images/technology.zip
. Health and medical care
. Neurofeedback - Relaxation protocol, Attention protocol, Asymmetry protocol
. Affective ICT
. Neuromarketing
. BCI
. Brain science
https://www.etri.re.kr/webzine/20170630/sub04.html
https://en.wikipedia.org/wiki/Markov_blanket
In statistics and machine learning, when one wants to infer a random variable from a set of variables, a subset is usually enough and the other variables are useless. Such a subset, containing all the useful information, is called a Markov blanket.
https://chobokim.tistory.com/29?category=951467 feature selection vs feature extraction
https://www.sciencedirect.com/topics/engineering/common-spatial-pattern
https://en.wikipedia.org/wiki/Common_spatial_pattern CSP corresponds to Principal component analysis
import numpy as np
from scipy import linalg as LA  # scipy's eigh supports the generalized problem

# 1. Compute the covariance matrix of each class (X1, X2: channels x times)
S1 = np.cov(X1)
S2 = np.cov(X2)
# 2. Solve the generalized eigenvalue problem S1·W = λ·(S1+S2)·W
l, W = LA.eigh(S1, S1 + S2)
l = np.round(l, 5)
A = (np.linalg.inv(W)).T  # spatial patterns: inverse-transpose of the filters
# 3. Get the CSP-projected signals
X1_CSP = np.dot(W.T, X1)
X2_CSP = np.dot(W.T, X2)
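A toy run of the three steps above, with shapes matching the X1.shape=(2, 500) trace earlier in these notes (synthetic data, so the exact eigenvalues differ):

```python
import numpy as np
from scipy import linalg as LA

rng = np.random.default_rng(1)
# Two toy "classes" of 2-channel signals, 500 samples each; the variance
# sits on a different channel in each class.
X1 = rng.standard_normal((2, 500)) * np.array([[3.0], [1.0]])
X2 = rng.standard_normal((2, 500)) * np.array([[1.0], [3.0]])

S1, S2 = np.cov(X1), np.cov(X2)
l, W = LA.eigh(S1, S1 + S2)   # generalized eigenvalues lie in (0, 1)
Y1, Y2 = W.T @ X1, W.T @ X2   # CSP-projected signals

# The first filter minimizes class-1 variance relative to class 2,
# the last one maximizes it.
print(np.round(l, 3))
```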
class FT_CSP(TransformerMixin, BaseEstimator):
    def fit(self, X, y): ...
    def transform(self, X): ...
    def fit_transform(self, X, y, **fit_params): ...
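A sketch of how those empty methods could be filled in, combining the covariance, eigh, and eigenvalue-sorting fragments from these notes; MNE's real mne.decoding.CSP additionally handles regularization, trace normalization, and scaling:

```python
import numpy as np
from scipy import linalg
from sklearn.base import BaseEstimator, TransformerMixin

class SketchCSP(TransformerMixin, BaseEstimator):
    """Minimal 2-class CSP in the spirit of mne.decoding.CSP."""

    def __init__(self, n_components=4):
        self.n_components = n_components

    def fit(self, X, y):
        # X: (n_epochs, n_channels, n_times), y: two classes
        covs = []
        for c in np.unique(y):
            # Concatenate all epochs of one class: (n_channels, n_total_times)
            xc = X[y == c].transpose(1, 0, 2).reshape(X.shape[1], -1)
            covs.append(np.cov(xc))
        eigen_values, eigen_vectors = linalg.eigh(covs[0], covs[0] + covs[1])
        # Keep filters whose eigenvalues lie farthest from 0.5 (the most
        # discriminative ones), as in the argsort snippet in these notes
        ix = np.argsort(np.abs(eigen_values - 0.5))[::-1]
        self.filters_ = eigen_vectors[:, ix].T
        return self

    def transform(self, X):
        pick_filters = self.filters_[: self.n_components]
        X = np.asarray([pick_filters @ epoch for epoch in X])
        # Log-variance of each CSP component as the feature vector
        return np.log(X.var(axis=2))
```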
src/ft_fit.py
if __name__ == "__main__":
    tmin = -0.2  # start of each epoch (200 ms before the trigger)
    tmax = 0.5   # end of each epoch (500 ms after the trigger)
src/ft_utils.py https://neuro.inf.unibe.ch/AlgorithmsNeuroscience/Tutorial_files/BaselineCorrection.html
def fetch_events(data_filtered, tmin=-0.2, tmax=0.5):
    epochs = mne.Epochs(data_filtered, events, event_ids, tmin, tmax, proj=True,
                        picks=picks,
                        baseline=(-0.1, 0),
                        preload=True)
    # epochs_wo_bc = cp.deepcopy(epochs)
    # interval = (-0.1, 0)
    # bc_epochs = epochs.apply_baseline(interval)
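What the baseline=(-0.1, 0) argument (and the commented-out apply_baseline lines) amounts to, written out in plain numpy; the shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
sfreq = 160.0
tmin, tmax = -0.2, 0.5
times = np.arange(int(round((tmax - tmin) * sfreq)) + 1) / sfreq + tmin
epochs = rng.standard_normal((45, 64, times.size)) + 5.0  # constant DC offset

# Baseline correction over (-0.1, 0): subtract, per epoch and channel,
# the mean of the samples in the baseline window, removing the offset
mask = (times >= -0.1) & (times <= 0.0)
corrected = epochs - epochs[:, :, mask].mean(axis=2, keepdims=True)
```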
src/ft_fit.py
def ft_fit(SUBJECTS, RUNS, tmin=-0.2, tmax=0.5, forceplot=False, pipe_long=True):
    lda_shrinkage = LDA(solver='lsqr', shrinkage='auto')
    csp = FT_CSP(n_components=4, reg=None, log=True, norm_trace=False)
    clf = Pipeline([('CSP', csp), ('LDA', lda_shrinkage)])
    clf2 = Pipeline([('LDA', lda_shrinkage)])
# https://hal.science/hal-00602686/document
ix = np.argsort(np.abs(eigen_values - 0.5))[::-1]
eigen_vectors = eigen_vectors[:, ix]
self.filters_ = eigen_vectors.T
n_components=4
pick_filters = self.filters_[:self.n_components]