# Epidural-ECoG Food-Tracking Task

Additional Information for "Epidural-ECoG Food-Tracking Task"

## Contents

- 1 Q) For the PLS regression, how many iterations did you take?
- 2 Q) When filtering over certain frequency bands, what are the actual lower and upper bounds of the frequencies in the band-pass?
- 3 Q) Didn't you take the chewing artefacts into account in the reported results?
- 4 Q) Do you have video files or time intervals when the monkey was chewing or not chewing?
- 5 Q) I tried to implement your decoding paradigm, but I cannot obtain the same results as you.
- 6 Q) How did you implement the Morlet wavelet transformation? The input is in the time domain; is the output then also in the time domain?
- 7 Q) Why did you mention having used a band-pass filter from 0.3 Hz to 500 Hz when you only have a signal sampled at 1 kHz?
- 8 Q) Why did you use different number of PLS components when you report your result?
- 9 Q) I've constructed a scalogram using my own code; nevertheless, it should do the same.

## Q) For the PLS regression, how many iterations did you take?

A) Although we are not sure what you mean by ‘iterations’, we used the MATLAB function ‘plsregress’ and built the decoding model with the optimal number of PLS components.

## Q) When filtering over certain frequency bands, what are the actual lower and upper bounds of the frequencies in the band-pass?

A) As shown in our paper (2.3. Decoding paradigm), the 1 kHz eECoG signals were band-pass filtered from 0.3 to 500 Hz. The resulting scalograms, spanning 10 to 120 Hz, were then downsampled into 10-by-10 matrices.
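As a minimal illustration (not the authors' MATLAB code), reducing a scalogram to a 10-by-10 matrix can be sketched in Python by block averaging; the averaging choice and the input sizes here are assumptions:

```python
import numpy as np

def downsample_scalogram(scalogram, out_shape=(10, 10)):
    """Reduce a (frequency x time) scalogram to out_shape by block averaging."""
    rows = np.array_split(np.arange(scalogram.shape[0]), out_shape[0])
    cols = np.array_split(np.arange(scalogram.shape[1]), out_shape[1])
    out = np.empty(out_shape)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # Mean power over one frequency-by-time block
            out[i, j] = scalogram[np.ix_(r, c)].mean()
    return out

# Hypothetical 120-frequency x 275-sample scalogram -> 10 x 10 feature matrix
small = downsample_scalogram(np.random.rand(120, 275))
```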

## Q) Didn't you take the chewing artefacts into account in the reported results?

A) No, we didn’t.

## Q) Do you have video files or time intervals when the monkey was chewing or not chewing?

Because I want to focus on the chewing artefacts, as you proposed in your paper.

A) We don’t have a 'mat' file with the actual time intervals of the chewing periods. However, we identified the chewing artifacts as sharp peaks with large amplitudes in the signals during chewing periods. For example, within the LFP signal from 174 to 184 seconds of dataset “2010-07-05 S1”, sharp, large-amplitude peaks appear from 2 to 7 seconds into that window, and these were identified as chewing artifacts. We fed the monkey only after it finished chewing, and we tried not to feed it while it was chewing; mastication was identified from the video.
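Since the artifacts are described only qualitatively (sharp peaks with large amplitudes), one possible way to flag candidate artifact segments automatically is a robust amplitude threshold. This is our own sketch, not the authors' procedure; the window length and threshold factor are assumptions:

```python
import numpy as np

def flag_artifact_windows(sig, fs, win_s=1.0, k=6.0):
    """Flag win_s-second windows whose peak amplitude exceeds k robust SDs."""
    med = np.median(sig)
    mad = np.median(np.abs(sig - med)) * 1.4826   # robust estimate of the SD
    n = int(win_s * fs)
    flags = []
    for start in range(0, len(sig) - n + 1, n):
        peak = np.max(np.abs(sig[start:start + n] - med))
        flags.append(peak > k * mad)
    return np.array(flags)

fs = 1000
sig = np.random.randn(10 * fs)                    # 10 s of background activity
# Inject a fake large-amplitude "chewing" burst in seconds 3-4
sig[3 * fs:4 * fs] += 50 * np.sin(2 * np.pi * 3 * np.arange(fs) / fs)
flags = flag_artifact_windows(sig, fs)
```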

## Q) I tried to implement your decoding paradigm, but I cannot obtain the same results as you.

I used a Morlet wavelet as mentioned in (2.3. Decoding paradigm) instead of downsampling to 10, ..., 120 Hz. Is there a difference in your MATLAB code that isn't mentioned in your paper? I would also like to ask whether you would be so kind as to share your MATLAB code with me; this might make it easier for me to obtain the same results, and then I can check for differences between our codes.

A) We apologize that we cannot send you the original code used in this paper. However, here are some suggestions for your analysis.

- Band-pass filter the signals from 0.3 to 500 Hz (this is the hardware setting).
- Downsample by a factor of 4 (resulting in a sampling rate of 250 Hz).
- Calculate scalograms from 0 to 125 Hz.
- Downsample each scalogram to a 10-by-10 matrix.
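The scalogram step above could be sketched as follows in Python. This is our own illustration; the authors used MATLAB's tfrscalo, and the wavelet construction and normalisation here are assumptions:

```python
import numpy as np

def morlet_scalogram(sig, fs, freqs, n_cycles=7):
    """Power scalogram (one row per frequency) via complex Morlet wavelets."""
    out = np.empty((len(freqs), len(sig)))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)                 # temporal width
        t = np.arange(-3.5 * sigma, 3.5 * sigma, 1 / fs)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wav /= np.linalg.norm(wav)                         # unit-energy wavelet
        out[i] = np.abs(np.convolve(sig, wav, mode="same")) ** 2
    return out

fs = 250                                                   # after downsampling
x = np.sin(2 * np.pi * 50 * np.arange(0, 4, 1 / fs))       # 50 Hz test tone
scalo = morlet_scalogram(x, fs, np.linspace(10, 120, 10))
```

Running this on a pure sine, the row with the largest mean power should correspond to the frequency bin nearest the tone.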

We used the MATLAB function “tfrscalo” to make the scalogram. Parameters we used are shown below.

You can get this MATLAB function from the link below.

http://frankyblabla.free.fr/projet/Matlab/Test/tfrscalo.m

X = signal downsampled to 250 Hz, T = 1:length(X), WAVE = 7, FMIN = 10/FS (FS: sampling rate after downsampling = 250), FMAX = 120/FS, N = 10, TRACE = 0

For the PLS, we used the MATLAB function ‘plsregress’. Here are the parameters we used:

X = the 10-by-10 scalogram data, Y = motion data, NCOMP = 50, CV = 10

## Q) How did you implement the Morlet wavelet transformation? The input is in the time domain; is the output then also in the time domain?

A) The answer is yes. The result for each time series will be a time-frequency representation.

- Bin each time series into overlapping windows (1 + 0.1 = 1.1 s length).
- Calculate a scalogram for each window.
- Downsample the scalogram to a 10-by-10 matrix.
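A Python sketch of this windowing (our own illustration; the window size and rates are taken from the tfrscalo parameters later in this document, and the signal is random placeholder data):

```python
import numpy as np

fs, motion_rate, win = 250, 20, 275      # 1.1 s of signal per motion sample
sig = np.random.randn(10 * fs)           # 10 s of (already downsampled) signal

# One 275-sample window ending at each motion timestamp (motion at 20 Hz),
# starting once a full 1.1 s of history is available
t_motion = np.arange(1.1, 10.0, 1.0 / motion_rate)
ends = np.round(t_motion * fs).astype(int)
windows = np.stack([sig[e - win:e] for e in ends])
```

Each row of `windows` would then be turned into a scalogram and downsampled to 10 x 10.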

## Q) Why did you mention having used a band-pass filter from 0.3 Hz to 500 Hz when you only have a signal sampled at 1 kHz?

A) 0.3 to 500 Hz is the hardware setting. Since 500 Hz is the Nyquist frequency of a 1 kHz recording, it is effectively a high-pass filter.
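Because only the 0.3 Hz edge has any effect, the equivalent operation can be sketched as a 0.3 Hz high-pass. This is our illustration using SciPy; the filter order is an assumption, not the hardware's:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000
# 0.3 Hz high-pass Butterworth (the 500 Hz upper edge coincides with Nyquist)
sos = butter(4, 0.3, btype="highpass", fs=fs, output="sos")

t = np.arange(0, 10, 1 / fs)
raw = 5.0 + np.sin(2 * np.pi * 40 * t)   # 40 Hz signal riding on a DC offset
clean = sosfiltfilt(sos, raw)            # DC offset removed, 40 Hz preserved
```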

## Q) Why did you use different number of PLS components when you report your result?

Don't you want to find a general decoding mechanism (independent of the dataset)?

A) We used the optimal number of PLS components, determined by the minimal predictive error sum of squares (PRESS). Therefore, the number of PLS components differs in each experiment. Optimizing the decoder for each dataset is not strange if we want to compare the best decoders over time. A decoder independent of the dataset would be very useful, but first we have to verify that a decoder can actually work across different datasets, and that is what we did in assessing the stability of the decoders.
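MATLAB's plsregress reports PRESS from cross-validation. A rough numpy-only sketch of the same idea (NIPALS PLS1 plus k-fold PRESS; our own illustration, not the authors' code):

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """NIPALS PLS1: returns (x_mean, y_mean, B) with yhat = y_mean + (X - x_mean) @ B."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xr, yr = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)           # weight vector
        t = Xr @ w                       # score
        tt = t @ t
        p = Xr.T @ t / tt                # loading
        q.append((yr @ t) / tt)
        W.append(w); P.append(p)
        Xr = Xr - np.outer(t, p)         # deflate
        yr = yr - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return x_mean, y_mean, B

def press_by_ncomp(X, y, max_comp, k=10):
    """PRESS for 1..max_comp components via k-fold cross-validation."""
    folds = np.array_split(np.arange(len(y)), k)
    press = np.zeros(max_comp)
    for fold in folds:
        train = np.setdiff1d(np.arange(len(y)), fold)
        for a in range(1, max_comp + 1):
            xm, ym, B = pls1_fit(X[train], y[train], a)
            err = y[fold] - (ym + (X[fold] - xm) @ B)
            press[a - 1] += err @ err
    return press

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = X[:, 0] - 2 * X[:, 1] + 0.1 * rng.normal(size=200)  # low-dimensional truth
press = press_by_ncomp(X, y, max_comp=10)
best = int(np.argmin(press)) + 1        # optimal number of PLS components
```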

## Q) I've constructed a scalogram using my own code; nevertheless, it should do the same.

I can't figure out how the code of tfrscalo actually works. When I take a pure 1.1 s sine as input, the resulting tfr is the identity matrix (?). I would expect equal rows with a peak at the frequency closest to that of the pure sine. What I did is the following:

1. Filter the ECoG with the given Morlet waves => 10 bins x 64 channels
2. Calculate the energy of each bin from (1) over 100 ms (squared sum)
3. Do (2) for the intervals [1049,950], [949,850], ..., [149,50] ms ago => 10 lags
4. Repeat (1)-(3) for each t (at 20 Hz, the motion sample rate)

A) It may be helpful to generate the time-frequency representation (TFR) of y = sin(w*x), e.g., x = [0:1/250:1.1]; w = 2;
One important point, which should already be mentioned in the paper, is that we normalized (z-scored) the downsampled scalogram so that different frequencies have equal weights.
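For instance, assuming the 10-by-10 scalograms are flattened into feature vectors across observations (an assumption about the data layout), the z-scoring could look like:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical design matrix: 500 observations x 100 scalogram features
# (10 freq x 10 time), with very different scales per feature
X = rng.normal(size=(500, 100)) * rng.uniform(0.1, 10.0, size=100)

# z-score each feature column so every frequency/time bin has equal weight
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
```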

We didn't try what you did, so we can't comment on it. The easiest way to confirm is to run tfrscalo.m correctly on a sine wave and then compare the resulting TFR to the TFR from your own code.

Additionally, here are our comments on your analyses:

- Just to be sure: we used Morlet waves on brain signals from 10 to 120 Hz.
- We normalized (z-scored) the downsampled scalogram.
- The time interval you used, t-1.001 s to t, is wrong. We used the downsampled signal from -1.1 s to 0 s relative to the motion data.
- We did not downsample the motion data to a sampling rate of 120 Hz.

Based on the paper, this is what we actually did to make the scalogram (or at least, this is what we advise you to do). We downsampled the signal by a factor of 4 simply to reduce the computational load. We believe that downsampling by a factor of 2 or 3, or even not downsampling at all, would give similar results.

- Preprocessing: please check the paper.
- Downsample by a factor of 4 (resulting in a sampling rate of 250 Hz).
- For each motion data sample (X, Y, Z) and each channel, calculate a scalogram from the downsampled signal (sampling rate = 250 Hz, -1.1 to 0 s relative to the motion sample) using tfrscalo.m as:

[TFR,T,F,WT]=tfrscalo(Signal,Timestamp,WAVE,FMIN,FMAX,N,TRACE)

Signal: signal (1X275 vector, from -1.1 to 0s)

Timestamp: time stamp ([1:275])

WAVE=7

FMIN=10/250 (250 is the sampling rate)

FMAX=120/250

N=10

TRACE=0

- Downsample TFR(:,26:275) (from -1 to 0 s) to a 10-by-10 matrix, which represents the 1 s history of the TFR before the motion sample (X, Y, Z).
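In Python index terms (0-based, so MATLAB's columns 26:275 become 25:275), this final step could be sketched as follows; the TFR here is placeholder data of the stated shape:

```python
import numpy as np

# Placeholder TFR as tfrscalo would return it: 10 frequency rows x 275 time columns
tfr = np.abs(np.random.randn(10, 275))

hist = tfr[:, 25:275]                         # keep -1 to 0 s (250 columns)
feat = hist.reshape(10, 10, 25).mean(axis=2)  # average every 25 columns -> 10 x 10
```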