Various input/output methods.
This module provides methods for loading and saving data from and into various formats.
Bases: object
Pyff communication object.
This class allows for communication with a running Pyff [R1] instance. It uses the JSON protocol, so you have to start Pyff with the --protocol=json parameter.
Receiving data from Pyff (i.e. the available feedbacks and variables) is currently not supported.
References
[R1] Bastian Venthur, Simon Scholler, John Williamson, Sven Dähne, Matthias S Treder, Maria T Kramarek, Klaus-Robert Müller and Benjamin Blankertz. Pyff—A Pythonic Framework for Feedback Applications and Stimulus Presentation in Neuroscience. Frontiers in Neuroscience. 2010. doi: 10.3389/fnins.2010.00179.
Examples
This is an example session, demonstrating how to load a feedback application, set a variable, start it, quit it, and finally close Pyff.
>>> pyff = PyffComm()
>>> pyff.send_init('TrivialPong')
>>> pyff.set_variables({'FPS': 30})
>>> pyff.play()
>>> pyff.quit()
>>> pyff.quit_pyff()
Send a control signal to the running feedback.
This method is used to send events to the feedback, such as a new classifier output.
Parameters:
    variables : dict
Load a Feedback.
This method sends Pyff the send_init(feedback) command which loads a feedback.
Parameters:
    fb : string
Send interaction signal to Pyff.
Warning
This method is used internally to send low level JSON messages to Pyff. You should not use this method directly.
Parameters:
    cmd : str
    data : dict
Convert mushu data into wyrm’s Data format.
This convenience method creates a continuous Data object from the parameters given. The time axis always starts from zero and its values are calculated from the sampling frequency fs and the length of data. The names and units attributes are filled with default values.
Parameters:
    data : 2d array
    markers : list of tuples (float, str)
    fs : float
    channels : list or 1d array of strings
Returns:
    cnt : continuous Data object
References
https://github.com/venthur/mushu
Examples
Assuming that amp is an Amplifier instance from libmushu, already configured but not started yet:
>>> amp_fs = amp.get_sampling_frequency()
>>> amp_channels = amp.get_channels()
>>> amp.start()
>>> while True:
... data, markers = amp.get_data()
... cnt = convert_mushu_data(data, markers, amp_fs, amp_channels)
... # some more code
>>> amp.stop()
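As described above, the time axis starts at zero and is derived from fs and the length of the data. A minimal numpy sketch of that computation (the values here are illustrative, not wyrm's actual implementation):

```python
import numpy as np

# sketch: rebuild a time axis (in milliseconds) from the sampling
# frequency and the number of samples, starting at zero
fs = 100.0    # sampling frequency in Hz (assumed for illustration)
nsamples = 5  # length of the data along the time axis
time = np.arange(nsamples) / fs * 1000  # -> [0., 10., 20., 30., 40.]
```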
Load a Data object from a file.
Parameters:
    filename : str
Returns:
    dat : Data
See also
Examples
>>> io.save(dat, 'foo.npy')
>>> dat2 = io.load('foo.npy')
Load the BCI Competition III Data Set 1.
This method loads the data set and converts it into Wyrm's Data format. Before you use it, you have to download the training and test data in Matlab format and unpack it into a directory.
Note
If you need the true labels of the test sets, you’ll have to download them separately from http://bbci.de/competition/iii/results/index.html#labels
Parameters:
    dirname : str
Returns:
    epo_train, epo_test : epoched Data objects
Examples
>>> epo_train, epo_test = load_bcicomp3_ds1('/home/foo/bcicomp3_dataset1/')
Load the BCI Competition III Data Set 2.
This method loads the data set and converts it into Wyrm's Data format. Before you use it, you have to download the data set in Matlab format and unpack it. The directory with the extracted files must contain the Subject_*.mat files and the eloc64.txt file.
Note
If you need the true labels of the test sets, you’ll have to download them separately from http://bbci.de/competition/iii/results/index.html#labels
Parameters:
    filename : str
Returns:
    cnt : continuous Data object
Examples
>>> dat = load_bcicomp3_ds2('/home/foo/data/Subject_A_Train.mat')
Load Brain Vision data from a file.
This method loads the continuous EEG data and returns a Data object of continuous data [time, channel], along with the markers and the sampling frequency. The EEG data is returned in microvolts.
Parameters:
    vhdr : str
Returns:
    dat : Data
Raises:
    AssertionError
Examples
>>> dat = load_brain_vision_data('path/to/vhdr')
>>> dat.fs
1000
>>> dat.data.shape
(54628, 61)
Load saved EEG data in Mushu’s format.
This method loads saved data in Mushu’s format and returns a continuous Data object.
Parameters:
    meta : str
Returns:
    dat : Data
Examples
>>> dat = load_mushu_data('testrecording.meta')
Save a Data object into a NumPy .npy file.
Parameters:
    dat : Data
    filename : str
See also
Examples
>>> io.save(dat, 'foo.npy')
>>> dat2 = io.load('foo.npy')
Plotting methods.
This module contains various plotting methods. There are two types of plotting methods: the Primitives and the Composites. The Primitives are the most basic and offer simple, single-plot representations. The Composites are composed of several primitives and offer more complex representations.
The primitive plots are those whose names begin with ax_ (e.g. ax_scalp).
In order to get more reasonable defaults for colors etc. you can call the module's beautify() method:
from wyrm import plot
plot.beautify()
Warning
This module needs heavy reworking! We have yet to find a consistent way to handle primitive and composite plots, to deal with the fact that some plots just manipulate axes while others operate on figures, and to decide which layer of matplotlib we want to deal with (i.e. pyplot, artist, or even pylab).
The API of this module will change and you should not rely on any method here.
Draw a color bar
Draws a color bar on an existing axes. The range of the colors is defined by vmin and vmax.
Note
Unlike the colorbar method from matplotlib, this method does not automatically create a new axis for the colorbar. It will paint in the currently active axis instead, overwriting any existing plots in that axis. Make sure to create a new axis for the colorbar.
Parameters:
    vmin, vmax : float
    ax : Axes, optional
    label : string, optional
    ticks : list, optional
Returns:
    ax : Axes
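As the note above stresses, you should create a dedicated axis for the colorbar yourself. A rough sketch of that pattern in plain matplotlib (the axis geometry, label, and tick values are illustrative, not wyrm's code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this example
import matplotlib.pyplot as plt

fig = plt.figure()
# create a dedicated axis for the colorbar, so no existing plot is overwritten
cb_ax = fig.add_axes([0.85, 0.1, 0.05, 0.8])
vmin, vmax = -3.0, 3.0
# a ScalarMappable carries the value range the colors should cover
sm = plt.cm.ScalarMappable(norm=plt.Normalize(vmin=vmin, vmax=vmax))
cb = fig.colorbar(sm, cax=cb_ax, label="voltage [uV]", ticks=[-3, 0, 3])
```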
Draw a scalp plot.
Draws a scalp plot on an existing axes. The method takes an array of values and an array of the corresponding channel names. It matches the channel names with an internal list of known channels and their positions to project them correctly on the scalp.
Warning
The behaviour for unknown channels is undefined.
Parameters:
    v : 1d-array of floats
    channels : 1d array of strings
    ax : Axes, optional
    annotate : Boolean, optional
    vmin, vmax : float, optional
Returns:
    ax : Axes
See also
Set reasonable defaults for matplotlib.
This method replaces matplotlib's default rgb/cmyk colors with the solarized colors. It also adjusts several other plotting defaults.
Examples
You can safely call beautify right after you’ve imported the plot module.
>>> from wyrm import plot
>>> plot.beautify()
Calculates a centered grid of Rectangles and their positions.
Parameters:
    cols_list : [int]
    hpad : float, optional
    vpad : float, optional
Returns:
    [[float, float, float, float]]
Examples
Calculates a centered grid with 3 rows of 4, 3 and 2 columns
>>> calc_centered_grid([4, 3, 2])
Calculates a centered grid with more padding
>>> calc_centered_grid([5, 4], hpad=.1, vpad=.75)
Return the x/y position of a channel.
This method calculates the stereographic projection of a channel from CHANNEL_10_20, suitable for a scalp plot.
Parameters:
    channame : str
Returns:
    x, y : float or None
Examples
>>> plot.get_channelpos('C2')
(0.1720792096741632, 0.0)
>>> # the channels are case insensitive
>>> plot.get_channelpos('c2')
(0.1720792096741632, 0.0)
>>> # lookup for an invalid channel
>>> plot.get_channelpos('foo')
None
Plot all channels of a continuous Data object.
Parameters:
    dat : Data
Plots the values ‘v’ for channels ‘channels’ on a scalp.
Calculates the interpolation of the values v for the corresponding channels 'channels' and plots it as a contour plot on a scalp. The degree of gradients as well as the appearance of the color bar can be adjusted.
Parameters:
    v : [value]
    channels : [String]
    levels : int, optional
    colormap : matplotlib.colors.colormap, optional
    norm : matplotlib.colors.norm, optional
    ticks : array([ints]), optional
    annotate : Boolean, optional
    position : [x, y, width, height], optional
Returns:
    (Matplotlib.Axes, Matplotlib.Axes)
Examples
Plots the values v for channels ‘channels’ on a scalp
>>> plot_scalp(v, channels)
This plot has finer gradients through increasing the levels to 50.
>>> plot_scalp(v, channels, levels=50)
This plot has a norm and ticks from 0 to 10
>>> n = matplotlib.colors.Normalize(vmin=0, vmax=10, clip=False)
>>> t = np.linspace(0.0, 10.0, 3, endpoint=True)
>>> plot_scalp(v, channels, norm=n, ticks=t)
Plots a scalp with channels on top
Plots the values v for channels 'channels' on a scalp as a contour plot. Additionally plots the channels in channels_ti as a time interval on top of the scalp plot. The individual channels are placed over their position on the scalp.
Parameters:
    v : [value]
    channels : [String]
    data : wyrm.types.Data
    interval : [begin, end)
    scale_ti : float, optional
    levels : int, optional
    colormap : matplotlib.colors.colormap, optional
    norm : matplotlib.colors.norm, optional
    ticks : array([ints]), optional
    annotate : Boolean, optional
    position : [x, y, width, height], optional
Returns:
    ((Matplotlib.Axes, Matplotlib.Axes), [Matplotlib.Axes])
Calculate the signed r^2 values and plot them in a heatmap.
Parameters:
    dat : Data
Plots channels on a grid system.
Iterates over every channel in the data structure. If the channel name matches a channel in the tenten-system it will be plotted in a grid of rectangles. The grid is structured like the tenten-system itself, but in a simplified manner. The rows in which channels appear are predetermined; the channels are ordered automatically within their respective row. Areas to highlight can be specified; those areas will be marked with colors in every time interval plot.
Parameters:
    data : wyrm.types.Data
    highlights : [[int, int)]
    hcolors : [colors], optional
    legend : Boolean, optional
    scale : Boolean, optional
    reg_chans : [regular expressions]
Returns:
    [Matplotlib.Axes], Matplotlib.Axes
Examples
Plotting of all channels within a Data object
>>> plot_tenten(data)
Plotting of all channels with a highlighted area
>>> plot_tenten(data, highlights=[[200, 400]])
Plotting of all channels beginning with ‘A’
>>> plot_tenten(data, reg_chans=['A.*'])
Plots a simple time interval.
Plots all channels of either continuous data or the mean of epoched data into a single timeinterval plot.
Parameters:
    data : wyrm.types.Data
    r_square : [values], optional
    highlights : [[int, int)]
    hcolors : [colors], optional
    legend : Boolean, optional
    reg_chans : [regular expression], optional
    position : [x, y, width, height], optional
Returns:
    Matplotlib.Axes or (Matplotlib.Axes, Matplotlib.Axes)
Examples
Plots all channels contained in data with a legend.
>>> plot_timeinterval(data)
Same as above, but without the legend.
>>> plot_timeinterval(data, legend=False)
Adds r-square values to the plot.
>>> plot_timeinterval(data, r_square=[values])
Adds a highlighted area to the plot.
>>> plot_timeinterval(data, highlights=[[200, 400]])
To specify the colors of the highlighted areas use ‘hcolors’.
>>> plot_timeinterval(data, highlights=[[200, 400]], hcolors=['red'])
Sets highlights in form of vertical boxes to axes.
Parameters:
    highlights : [(start, end)]
    hcolors : [colors], optional
    set_axes : [matplotlib.axes.Axes], optional
Examples
To create two highlighted areas in all axes of the currently active figure, the first from 200 ms to 300 ms in blue and the second from 500 ms to 600 ms in green:
>>> set_highlights([[200, 300], [500, 600]])
Processing toolbox methods.
This module contains the processing methods.
Append dat2 to dat.
This method creates a copy of dat (with all attributes), concatenates dat.data and dat2.data along axis as well as dat.axes[axis] and dat2.axes[axis]. If present, it will concatenate the attributes in extra as well and return the result.
It also checks whether the dimensions and lengths of data and axes match, and tests whether units and names are equal.
Since append cannot know how to deal with the various attributes dat and dat2 might have, it only copies the attributes of dat and deals with the attributes it knows about, namely: data, axes, names, and units.
Warning
This method is really low level and stupid. It does not know about markers, time axes, etc.; it just appends two data objects. If you want to append continuous or epoched data consider using append_cnt() and append_epo().
Parameters:
    dat, dat2 : Data
    axis : int, optional
    extra : list of strings, optional
Returns:
    dat : Data
Raises:
    AssertionError
    TypeError
See also
Examples
>>> # concatenate two continuous data objects, and their markers,
>>> # please note how the resulting marker is not correct, just
>>> # appended
>>> cnt.markers
[[0, 'a'], [10, 'b']]
>>> cnt2.markers
[[20, 'c'], [30, 'd']]
>>> cnt = append(cnt, cnt2, extra=['markers'])
>>> cnt.markers
[[0, 'a'], [10, 'b'], [20, 'c'], [30, 'd']]
Append two continuous data objects.
This method uses append() to append two continuous data objects. It also takes care that the resulting continuous object will have a correct .axes[timeaxis]. For that it uses the .fs attribute and the length of the data to recalculate the timeaxis.
If both dat and dat2 have the markers attribute, the markers will be treated properly (i.e. the markers of dat2 are shifted to the right by the length of dat in milliseconds).
Parameters:
    dat, dat2 : Data
    timeaxis : int, optional
    extra : list of strings, optional
Returns:
    dat : Data
Raises:
    AssertionError
See also
Examples
>>> cnt.axes[0]
[0, 1, 2]
>>> cnt2.axes[0]
[0, 1, 2]
>>> cnt.fs
1000
>>> cnt = append_cnt(cnt, cnt2)
>>> cnt.axes[0]
[0, 1, 2, 3, 4, 5]
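The time-axis recalculation described above can be sketched in plain numpy, assuming the axis holds milliseconds derived from .fs (the sample counts are the hypothetical values from the example):

```python
import numpy as np

fs = 1000.0    # Hz, as in the example above
n1, n2 = 3, 3  # number of samples in cnt and cnt2
# recompute the time axis for the concatenated data from fs alone
timeaxis = np.arange(n1 + n2) / fs * 1000  # milliseconds
# -> [0., 1., 2., 3., 4., 5.]
```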
Append two epoched data objects.
This method just calls append(). In addition to the errors append() might throw, it will raise an error if the class_names attributes, when present in both objects, are not equal.
Parameters:
    dat, dat2 : Data
    classaxis : int, optional
    extra : list of strings, optional
Returns:
    dat : Data
Raises:
    ValueError
See also
Examples
>>> epo = append_epo(epo, epo2)
Apply the CSP filter.
Apply the spatial CSP filter to the epoched data.
Parameters:
    epo : epoched Data object
    filt : 2d array
    columns : array of ints, optional
Returns:
    epo : epoched Data object
See also
Examples
>>> w, a, d = calculate_csp(epo)
>>> epo = apply_csp(epo, w)
Calculate the Canonical Correlation Analysis (CCA).
This method calculates the canonical correlation coefficient and corresponding weights which maximize a correlation coefficient between linear combinations of the two specified multivariable signals.
Parameters:
    dat_x, dat_y : continuous Data objects
    timeaxis : int, optional
Returns:
    rho : float
    w_x, w_y : 1d array
Raises:
    AssertionError
References
http://en.wikipedia.org/wiki/Canonical_correlation
Examples
Calculate the CCA of the specified multivariable signals.
>>> rho, w_x, w_y = calculate_cca(dat_x, dat_y)
>>> # Calculate canonical variables via obtained weights
>>> cv_x = np.dot(dat_x, w_x)
>>> cv_y = np.dot(dat_y, w_y)
Calculate the classwise average.
This method calculates the average time course per class for all classes defined in the dat. In other words, if you have two different classes, with many epochs per class, this method will calculate the average time course for each class and channel.
Parameters:
    dat : Data
    classaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
Split existing continuous data into two classes and calculate the average for each class.
>>> mrk_def = {'std': ['S %2i' % i for i in range(2, 7)],
... 'dev': ['S %2i' % i for i in range(12, 17)]
... }
>>> epo = misc.segment_dat(cnt, mrk_def, [0, 660])
>>> avg_epo = calculate_classwise_average(epo)
>>> plot(avg_epo.data[0])
>>> plot(avg_epo.data[1])
Calculate the Common Spatial Pattern (CSP) for two classes.
This method calculates the CSP and the corresponding filters. Use the columns of the returned patterns and filters.
Parameters:
    epo : epoched Data object
    classes : list of two ints, optional
Returns:
    v : 2d array
    a : 2d array
    d : 1d array
Raises:
    AssertionError
See also
References
http://en.wikipedia.org/wiki/Common_spatial_pattern
Examples
Calculate the CSP for the first two classes:
>>> w, a, d = calculate_csp(epo)
>>> # Apply the first two and the last two columns of the sorted
>>> # filter to the data
>>> filtered = apply_csp(epo, w, [0, 1, -2, -1])
>>> # You'll probably want to get the log-variance along the time
>>> # axis, this should result in four numbers (one for each
>>> # channel)
>>> filtered = np.log(np.var(filtered, 0))
Select two classes manually:
>>> w, a, d = calculate_csp(epo, [2, 5])
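CSP is commonly computed as a generalized eigenvalue decomposition of the two class covariance matrices; a minimal sketch of that idea using scipy (not wyrm's exact implementation, and the toy covariances are made up):

```python
import numpy as np
from scipy.linalg import eigh

def csp_sketch(c1, c2):
    """Sketch: CSP filters from two class covariance matrices."""
    # generalized eigenvalue problem c1 @ v = lambda * (c1 + c2) @ v;
    # eigenvalues come back ascending, so the first and last columns
    # of v are the most discriminative filters for the two classes
    d, v = eigh(c1, c1 + c2)
    return v, d

# toy covariances: class 1 has high variance on channel 2, class 2 on channel 1
c1 = np.diag([1.0, 3.0])
c2 = np.diag([3.0, 1.0])
v, d = csp_sketch(c1, c2)  # d -> [0.25, 0.75]
```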
Calculate the signed r**2 values.
This method calculates the signed r**2 values over the epochs of the dat.
Parameters:
    dat : Data
    classaxis : int, optional
Returns:
    signed_r_square : ndarray
Examples
>>> dat.data.shape
(400, 100, 64)
>>> r = calculate_signed_r_square(dat)
>>> r.shape
(100, 64)
Compute source power co-modulation analysis (SPoC).
Computes spatial filters that optimize the co-modulation (here covariance) between the epoch-wise variance (as a proxy for spectral power) and a given target signal.
This SPoC function returns a full set of components (i.e. filters and patterns) of which the first component maximizes the co-modulation (i.e. positive covariance) and the last component minimizes it (i.e. maximizes negative covariance).
Note
Since the covariance is optimized, it may be affected by outliers in the data (i.e. trials/epochs with very large variance due to artifacts). Please be sure to remove such epochs before calling this function, if possible!
Parameters:
    epo : epoched Data object
Returns:
    v : 2d array
    a : 2d array
    d : 1d array
See also
Notes
SPoC assumes that there is a linear relationship between a measured target signal and the dynamics of the spectral power of an oscillatory source that is hidden in the data. The target signal may be a stimulus property (e.g. intensity, frequency, color, ...), a behavioral measure (e.g. reaction times, ratings, ...), or any other uni-variate signal of interest. The time-course of spectral power of the oscillatory source signal is approximated by the variance across small time segments (epochs). Thus, if the power of a specific frequency band is investigated, the input signals must be band-pass filtered before they are segmented into epochs and given to this function. This method implements SPoC_lambda, presented in [R2]. Thus, source activity is extracted from the input data via spatial filtering. The spatial filters are optimized such that the epoch-wise variance maximally covaries with the given target signal z.
References
[R2] S. Dähne, F. C. Meinecke, S. Haufe, J. Höhne, M. Tangermann, K. R. Müller, V. V. Nikulin. "SPoC: a novel framework for relating the amplitude of neuronal oscillations to behaviorally relevant parameters". NeuroImage, 86(0):111-122, 2014.
[R3] S. Haufe, F. Meinecke, K. Görgen, S. Dähne, J. Haynes, B. Blankertz, F. Biessmann. "On the interpretation of weight vectors of linear models in multivariate neuroimaging". NeuroImage, 87:96-110, 2014.
Examples
Split the data into training and test sets.
Calculate SPoC:
>>> w, a, d = calculate_spoc(epo)
Identify the components with strongest co-modulation by checking the covariance values stored in d. If there is positive covariance with the target variable it will be the first, otherwise the last:
>>> w = w[:, 0]
Apply the filter(s) to the test data:
>>> filtered = np.dot(data, w)
Remove markers that are outside of the dat time interval.
This method removes the markers that are out of the time interval described in the dat object.
If the dat object has no markers attribute or the markers are empty, a copy of dat is simply returned.
If dat.data is empty, but has markers, all markers are removed.
Parameters:
    dat : Data
    timeaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
>>> dat.axes[0]
array([-5., -4., -3., -2., -1., 0., 1., 2., 3., 4.])
>>> dat.fs
1000
>>> dat.markers
[[-6, 'a'], [-5, 'b'], [0, 'c'], [4.9999, 'd'], [5, 'e']]
>>> dat = clear_markers(dat)
>>> dat.markers
[[-5, 'b'], [0, 'c'], [4.9999, 'd']]
Subtract the baseline.
For each epoch and channel in the given dat, this method calculates the average value for the given interval and subtracts this value from the channel data within this epoch and channel.
This method generalizes to dats with more than 3 dimensions.
Parameters:
    dat : Dat
    ival : array of two floats
    timeaxis : int, optional
Returns:
    dat : Dat
Raises:
    AssertionError
See also
numpy.average, numpy.expand_dims
Notes
The algorithm calculates the average(s) along the timeaxis within the given interval. The resulting array has one dimension less than the original one (the elements on the timeaxis were reduced).
The resulting average array is then subtracted from the original data. To match the shapes, a new axis is created on the timeaxis of the average array, and the shapes are then matched via numpy's broadcasting.
Examples
Remove the baselines for the interval [-100, 0)
>>> dat = correct_for_baseline(dat, [-100, 0])
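The Notes above can be sketched in plain numpy; the array layout ((epoch, time, channel)), the time axis, and the values here are assumptions for illustration, not wyrm's code:

```python
import numpy as np

# hypothetical epoched data: 2 epochs, 4 time samples, 1 channel
data = np.arange(8.0).reshape(2, 4, 1)
t = np.array([-100.0, -50.0, 0.0, 50.0])  # ms, the time axis

# average over the baseline interval [-100, 0) along the time axis...
mask = (t >= -100) & (t < 0)
baseline = data[:, mask, :].mean(axis=1, keepdims=True)
# ...and subtract it; keepdims re-inserts the reduced axis so that
# numpy broadcasting matches the shapes, as the Notes describe
corrected = data - baseline
```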
Create feature vectors from epoched data.
This method flattens a Data object down to 2 dimensions: the first one for the classes and the second for the feature vectors. All surplus dimensions of the dat argument are collapsed into the feature vector dimension.
Parameters:
    dat : Data
    classaxis : int, optional
Returns:
    dat : Data
Examples
>>> dat.shape
(300, 2, 64)
>>> dat = create_feature_vectors(dat)
>>> dat.shape
(300, 128)
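The flattening in the example corresponds to a plain numpy reshape (a sketch of the shape transformation only, not the method itself):

```python
import numpy as np

# hypothetical epoched data: 300 epochs, 2 time samples, 64 channels
dat = np.zeros((300, 2, 64))
# collapse everything except the class axis into one feature vector
fv = dat.reshape(dat.shape[0], -1)  # shape (300, 128)
```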
A forward-backward filter.
Filter data twice, once forward and once backwards, using the filter defined by the filter coefficients.
This method mainly delegates the call to scipy.signal.filtfilt().
Parameters:
    dat : Data
    b : 1-d array
    a : 1-d array
    timeaxis : int, optional
Returns:
    dat : Data
See also
Examples
Generate and use a Butterworth bandpass filter for complete (off-line) data:
>>> # the sampling frequency of our data in Hz
>>> dat.fs
100
>>> # calculate the nyquist frequency
>>> fn = dat.fs / 2
>>> # the desired low and high frequencies in Hz
>>> f_low, f_high = 2, 13
>>> # the order of the filter
>>> butter_ord = 4
>>> # calculate the filter coefficients
>>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band')
>>> filtered = filtfilt(dat, b, a)
Calculate the jumping means.
Parameters:
    dat : Data
    ivals : array of [float, float]
    timeaxis : int, optional
Returns:
    dat : Data
Apply feature vector to LDA classifier.
Parameters:
    fv : Data object
    clf : (1d array, float)
Returns:
    out : 1d array
See also
Examples
>>> clf = lda_train(fv_train)
>>> out = lda_apply(fv_test, clf)
Train the LDA classifier.
Parameters:
    fv : Data object
    shrink : Boolean, optional
Returns:
    w : 1d array
    b : float
Raises:
    ValueError : if the class labels are not exactly 0s and 1s
See also
Examples
>>> clf = lda_train(fv_train)
>>> out = lda_apply(fv_test, clf)
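A classical (unshrunk) binary LDA fit reduces to a mean difference whitened by the pooled covariance; a toy sketch under that assumption (lda_train_sketch is a hypothetical helper, not wyrm's implementation):

```python
import numpy as np

def lda_train_sketch(X, y):
    """Sketch: binary LDA with labels 0/1, no shrinkage."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    c = np.cov(X.T)
    w = np.linalg.solve(c, mu1 - mu0)  # projection weights
    b = -0.5 * w.dot(mu0 + mu1)        # threshold between class means
    return w, b

# two well-separated toy classes in 2d feature space
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2])
y = np.repeat([0, 1], 50)
w, b = lda_train_sketch(X, y)
out = X.dot(w) + b  # lda_apply equivalent: positive -> class 1
```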
Filter data using the filter defined by the filter coefficients.
This method mainly delegates the call to scipy.signal.lfilter().
Parameters:
    dat : Data
    b : 1-d array
    a : 1-d array
    zi : nd array, optional
    timeaxis : int, optional
Returns:
    dat : Data
See also
lfilter_zi(), filtfilt(), scipy.signal.lfilter(), scipy.signal.butter(), scipy.signal.buttord()
Examples
Generate and use a Butterworth bandpass filter for complete (off-line) data:
>>> # the sampling frequency of our data in Hz
>>> dat.fs
100
>>> # calculate the nyquist frequency
>>> fn = dat.fs / 2
>>> # the desired low and high frequencies in Hz
>>> f_low, f_high = 2, 13
>>> # the order of the filter
>>> butter_ord = 4
>>> # calculate the filter coefficients
>>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band')
>>> filtered = lfilter(dat, b, a)
Similar to the above this time in an on-line setting:
>>> # pre-calculate the filter coefficients and the initial filter
>>> # state
>>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band')
>>> zi = proc.lfilter_zi(b, a, len(CHANNELS))
>>> while 1:
... data, markers = amp.get_data()
... # convert incoming data into ``Data`` object
... cnt = Data(data, ...)
... # filter the data, note how filter now also returns the
... # filter state which we feed back into the next call of
... # ``filter``
... cnt, zi = lfilter(cnt, b, a, zi=zi)
... ...
Compute an initial state zi for the lfilter() function.
When n == 1 (default), this method mainly delegates the call to scipy.signal.lfilter_zi() and returns the result zi. If n > 1, zi is repeated n times. This is useful if you want to filter n-dimensional data like multi channel EEG.
Parameters:
    b, a : 1-d array
    n : int, optional
Returns:
    zi : n-d array
See also
lfilter(), scipy.signal.lfilter_zi()
Examples
>>> # pre-calculate the filter coefficients and the initial filter
>>> # state
>>> b, a = signal.butter(butter_ord, [f_low / fn, f_high / fn], btype='band')
>>> zi = proc.lfilter_zi(b, a, len(CHANNELS))
>>> while 1:
... data, markers = amp.get_data()
... # convert incoming data into ``Data`` object
... cnt = Data(data, ...)
... # filter the data, note how filter now also returns the
... # filter state which we feed back into the next call of
... # ``filter``
... cnt, zi = lfilter(cnt, b, a, zi=zi)
... ...
Computes the element-wise natural logarithm of dat.data.
Calling this method is equivalent to calling
>>> dat.copy(data=np.log(dat.data))
Parameters:
    dat : Data
Returns:
    dat : Data
See also
Calculate the absolute values in dat.data.
Parameters:
    dat : Data
Returns:
    dat : Data
Examples
>>> print np.average(dat.data)
0.391987338917
>>> dat = rectify_channels(dat)
>>> print np.average(dat.data)
22.40234266
Remove channels from data.
This method just calls select_channels() with the same parameters and the invert parameter set to True.
Returns:
    dat : Data
See also
Remove classes from an epoched Data object.
This method just calls select_classes() with the invert parameter set to True.
Returns:
    dat : Data
See also
Remove epochs from an epoched Data object.
This method just calls select_epochs() with the invert parameter set to True.
Returns:
    dat : Data
See also
Convert a continuous data object to an epoched one.
Given a continuous data object, a definition of classes, and an interval, this method looks for markers as defined in marker_def and slices the dat according to the time interval given with ival along the timeaxis. The returned dat object stores those slices and the class each slice belongs to.
Epochs that are too close to the borders and thus too short are ignored.
If the segmentation does not result in any epochs (i.e. the markers in marker_def could not be found in dat), the resulting dat.data will be an empty array.
This method is also suitable for online processing, please read the documentation for the newsamples parameter and have a look at the Examples below.
Parameters:
    dat : Data
    marker_def : dict
    ival : [int, int]
    newsamples : int, optional
    timeaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
Offline Experiment
>>> # Define the markers belonging to class 1 and 2
>>> md = {'class 1': ['S1', 'S2'],
... 'class 2': ['S3', 'S4']
... }
>>> # Epoch the data -500ms and +700ms around the markers defined in
>>> # md
>>> epo = segment_dat(cnt, md, [-500, 700])
Online Experiment
>>> # Define the markers belonging to class 1 and 2
>>> md = {'class 1': ['S1', 'S2'],
... 'class 2': ['S3', 'S4']
... }
>>> # define the interval to epoch around a marker
>>> ival = [0, 300]
>>> while 1:
... dat, mrk = amp.get_data()
... newsamples = len(dat)
... # the ringbuffer shall keep the last 2000 milliseconds,
... # which is way bigger than our ival...
... ringbuffer.append(dat, mrk)
... cnt, mrk = ringbuffer.get()
... # cnt now contains data up to 2000 milliseconds; to make sure
... # we don't see old markers again and again until they were
... # pushed out of the ringbuffer, we need to tell segment_dat
... # how many samples of cnt are actually new
... epo = segment_dat(cnt, md, ival, newsamples=newsamples)
Select channels from data.
The matching is case-insensitive and locale-aware (as in re.IGNORECASE and re.LOCALE). The regular expression always has to match the whole channel name string.
Parameters:
    dat : Data
    regexp_list : list of regular expressions
    invert : Boolean, optional
    chanaxis : int, optional
Returns:
    dat : Data
See also
Examples
Select all channels matching 'af.*' or 'fc.*'
>>> dat_new = select_channels(dat, ['af.*', 'fc.*'])
Remove all channels matching 'emg.*' or 'eog.*'
>>> dat_new = select_channels(dat, ['emg.*', 'eog.*'], invert=True)
Even if you only provide one regular expression, it has to be in a list:
>>> dat_new = select_channels(dat, ['af.*'])
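The matching rule described above (case-insensitive, full-string regex) can be sketched like this (the channel names are made up for illustration):

```python
import re

channels = ['AF3', 'FC1', 'Cz', 'EMGl']
patterns = ['af.*', 'fc.*']

# a pattern must match the *whole* channel name, ignoring case
selected = [c for c in channels
            if any(re.fullmatch(p, c, re.IGNORECASE) for p in patterns)]
# -> ['AF3', 'FC1']; invert=True would keep exactly the other channels
```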
Select classes from an epoched data object.
This method selects the classes with the specified indices.
Parameters:
    dat : Data
    indices : array of ints
    invert : Boolean, optional
    classaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
See also
Examples
Get the classes 1 and 2.
>>> dat.axes[0]
[0, 0, 1, 2, 2]
>>> dat = select_classes(dat, [1, 2])
>>> dat.axes[0]
[1, 2, 2]
Remove class 2
>>> dat.axes[0]
[0, 0, 1, 2, 2]
>>> dat = select_classes(dat, [2], invert=True)
>>> dat.axes[0]
[0, 0, 1]
Select epochs from an epoched data object.
This method selects the epochs with the specified indices.
Parameters:
    dat : Data
    indices : array of ints
    invert : Boolean, optional
    classaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
See also
Examples
Get the first three epochs.
>>> dat.axes[0]
[0, 0, 1, 2, 2]
>>> dat = select_epochs(dat, [0, 1, 2])
>>> dat.axes[0]
[0, 0, 1]
Remove the fourth epoch
>>> dat.axes[0]
[0, 0, 1, 2, 2]
>>> dat = select_epochs(dat, [3], invert=True)
>>> dat.axes[0]
[0, 0, 1, 2]
Select interval from data.
This method selects the time segment(s) defined by ival. It will also automatically remove markers outside of the desired interval in the returned Data object.
Parameters:
    dat : Data
    ival : list of two floats
    timeaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
Select the first 200ms of the epoched data:
>>> dat.fs
100.
>>> dat2 = select_ival(dat, [0, 200])
>>> print dat2.t[0], dat2.t[-1]
0. 199.
Sort channels.
This method sorts the channels in the dat according to the 10-20 system, from frontal to occipital and within the rows from left to right. The method uses the CHANNEL_10_20 list and relies on the elements in that list to be sorted correctly. This method will put unknown channel names to the back of the resulting list.
The channel matching is case agnostic.
Parameters:
    dat : Data object
    chanaxis : int, optional
Returns:
    dat : Data object
Examples
>>> dat.axes[-1]
array(['PPO4' 'CP4' 'PCP1' 'F5' 'C3' 'C4' 'O1' 'PPO2' 'FFC2' 'FAF5'
'PO1' 'TP10' 'FAF1' 'FFC6' 'FFC1' 'PO10' 'O10' 'C1' 'Cz' 'F2'
'CFC1' 'CCP2' 'F4' 'PO9' 'CFC6' 'TP7' 'FC6' 'AF8' 'Fz' 'AF4'
'PCP9' 'F6' 'FT10' 'FAF6' 'PO5' 'O2' 'OPO2' 'AF5' 'C2' 'P4'
'TP9' 'PCP7' 'FT8' 'A2' 'PO6' 'FC3' 'PPO1' 'CCP8' 'OPO1' 'AFp2'
'OI2' 'OI1' 'FCz' 'CCP6' 'CCP1' 'CPz' 'POz' 'FFC3' 'FFC7' 'FC2'
'F1' 'FT9' 'P2' 'P10' 'T9' 'FC1' 'C5' 'T7' 'CFC4' 'P6' 'F8'
'TP8' 'CFC5' 'PCP8' 'CFC9' 'AF7' 'FC5' 'I1' 'CFC8' 'FFC8' 'Oz'
'Pz' 'PCP4' 'FAF2' 'PCP5' 'CP1' 'PCP3' 'P1' 'Iz' 'CCP5' 'PO2'
'PCP2' 'PO4' 'Fpz' 'F7' 'PO8' 'AFz' 'F10' 'FFC10' 'CCP3' 'PPO8'
'T10' 'AF6' 'F9' 'PPO5' 'CP6' 'I2' 'PPO7' 'FC4' 'CCP4' 'PO7'
'A1' 'CP2' 'CFC3' 'T8' 'PPO3' 'Fp2' 'PCP6' 'AFp1' 'C6' 'FFC9'
'FT7' 'AF3' 'Fp1' 'CFC10' 'CCP7' 'CFC7' 'PO3' 'P7' 'P9' 'FFC4'
'P5' 'CFC2' 'F3' 'CP3' 'PPO6' 'P3' 'O9' 'PCP10' 'P8' 'CP5'
'FFC5'], dtype='|S5')
>>> dat = sort_channels(dat)
>>> dat.axes[-1]
array(['Fpz', 'Fp1', 'AFp1', 'AFp2', 'Fp2', 'AF7', 'AF5', 'AF3',
'AFz', 'AF4', 'AF6', 'AF8', 'FAF5', 'FAF1', 'FAF2', 'FAF6',
'F9', 'F7', 'F5', 'F3', 'F1', 'Fz', 'F2', 'F4', 'F6', 'F8',
'F10', 'FFC9', 'FFC7', 'FFC5', 'FFC3', 'FFC1', 'FFC2', 'FFC4',
'FFC6', 'FFC8', 'FFC10', 'FT9', 'FT7', 'FC5', 'FC3', 'FC1',
'FCz', 'FC2', 'FC4', 'FC6', 'FT8', 'FT10', 'CFC9', 'CFC7',
'CFC5', 'CFC3', 'CFC1', 'CFC2', 'CFC4', 'CFC6', 'CFC8', 'CFC10',
'T9', 'T7', 'C5', 'C3', 'C1', 'Cz', 'C2', 'C4', 'C6', 'T8',
'T10', 'A1', 'CCP7', 'CCP5', 'CCP3', 'CCP1', 'CCP2', 'CCP4',
'CCP6', 'CCP8', 'A2', 'TP9', 'TP7', 'CP5', 'CP3', 'CP1', 'CPz',
'CP2', 'CP4', 'CP6', 'TP8', 'TP10', 'PCP9', 'PCP7', 'PCP5',
'PCP3', 'PCP1', 'PCP2', 'PCP4', 'PCP6', 'PCP8', 'PCP10', 'P9',
'P7', 'P5', 'P3', 'P1', 'Pz', 'P2', 'P4', 'P6', 'P8', 'P10',
'PPO7', 'PPO5', 'PPO3', 'PPO1', 'PPO2', 'PPO4', 'PPO6', 'PPO8',
'PO9', 'PO7', 'PO5', 'PO3', 'PO1', 'POz', 'PO2', 'PO4', 'PO6',
'PO8', 'PO10', 'OPO1', 'OPO2', 'O9', 'O1', 'O2', 'O10', 'Oz',
'OI1', 'OI2', 'I1', 'Iz', 'I2'], dtype='|S5')
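The ordering logic can be sketched as a stable sort over ranks taken from a canonical channel list. `sort_channels_sketch` and `CHANNEL_ORDER` are hypothetical names; wyrm's actual CHANNEL_10_20 list is much longer than this excerpt:

```python
import numpy as np

# Hypothetical excerpt of a 10-20 ordering (frontal to occipital,
# left to right); wyrm's real CHANNEL_10_20 list covers far more names.
CHANNEL_ORDER = ['fp1', 'fp2', 'f3', 'fz', 'f4', 'c3', 'cz', 'c4',
                 'p3', 'pz', 'p4', 'o1', 'o2']

def sort_channels_sketch(data, channels):
    """Reorder the channel axis of a (time x channel) array.

    Matching is case-insensitive and unknown channel names are
    sorted to the back, mirroring the behaviour described above.
    """
    def rank(i):
        name = channels[i].lower()
        # unknown names rank past the end of the ordering list
        return (CHANNEL_ORDER.index(name) if name in CHANNEL_ORDER
                else len(CHANNEL_ORDER))
    order = sorted(range(len(channels)), key=rank)
    return data[:, order], [channels[i] for i in order]
```

Because `sorted` is stable, unknown channels keep their relative order at the back.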
Calculate the spectrum of a data object.
This method performs a fast Fourier transform on the data along the timeaxis and returns a new Data object transformed into the frequency domain. The values are the amplitudes of the respective frequencies.
Parameters:
    dat : Data
    timeaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
>>> # dat can be continuous or epoched
>>> dat.axes
['time', 'channel']
>>> spm = spectrum(dat)
>>> spm.axes
['frequency', 'channel']
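The amplitude-spectrum computation can be sketched with NumPy's real FFT. `spectrum_sketch` is a hypothetical name and the normalization (dividing by the number of samples) is an assumption, not necessarily wyrm's exact scaling:

```python
import numpy as np

def spectrum_sketch(data, fs, timeaxis=0):
    """One-sided amplitude spectrum along the time axis (a sketch).

    Returns the frequency bins and the FFT amplitudes, normalized
    by the number of samples.
    """
    n = data.shape[timeaxis]
    amps = np.abs(np.fft.rfft(data, axis=timeaxis)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, amps
```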
Computes the element-wise square of dat.data.
Calling this method is equivalent to calling
>>> dat.copy(data=np.square(dat.data))
Parameters:
    dat : Data
Returns:
    dat : Data
Short time Fourier transform of a real sequence.
This method performs a discrete short time Fourier transform: it slides a window over the data and performs a discrete Fourier transform on each window position. The results are returned in an array.
A Hanning window is applied to the data in each window before the Fourier transform is calculated.
Successive windows overlap by width / 2.
Parameters:
    x : ndarray
    width : int
Returns:
    fourier : 2d complex array
See also
spectrum, spectrogram, scipy.hanning, scipy.fftpack.rfft
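Under those assumptions (Hanning window, 50% overlap) a minimal version can be sketched as follows. `stft_sketch` is a hypothetical name, and it uses NumPy's real FFT rather than scipy.fftpack.rfft:

```python
import numpy as np

def stft_sketch(x, width):
    """Short-time Fourier transform of a 1d real signal (a sketch).

    Slides a window of `width` samples over `x` with an overlap of
    width / 2, applies a Hanning window and a real FFT to each
    position, and stacks the results into a 2d complex array.
    """
    hop = width // 2
    win = np.hanning(width)
    starts = range(0, len(x) - width + 1, hop)
    return np.array([np.fft.rfft(win * x[s:s + width]) for s in starts])
```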
Subsample the data to freq Hz.
This method subsamples data along timeaxis by taking every n-th element, starting with the first one, where n is dat.fs / freq. Note that freq must be a whole-number divisor of dat.fs.
Note
Note that this method does not low-pass filter the data before sub-sampling.
Note
If you use this method in an on-line setting (i.e. where you process the data in chunks and not as a whole), you should make sure that subsample does not drop “half samples” by ensuring that the source data’s length is a multiple of the target data’s sample length.
Let’s assume your source data is sampled in 1kHz and you want to subsample down to 100Hz. One sample of the source data is 1ms long, while the target samples will be 10ms long. In order to ensure that subsample does not eat fractions of samples at the end of your data, you have to make sure that your source data is multiples of 10ms (i.e. 1010, 1020, etc) long. You might want to use wyrm.types.BlockBuffer for this (see Examples below).
Parameters:
    dat : Data
    freq : float
    timeaxis : int, optional
Returns:
    dat : Data
Raises:
    AssertionError
Examples
Load some EEG data with 1kHz, bandpass filter it and downsample it to 100Hz.
>>> dat = load_brain_vision_data('some/path')
>>> dat.fs
1000.0
>>> fn = dat.fs / 2 # Nyquist frequency
>>> b, a = butter(4, [8 / fn, 40 / fn], btype='band')
>>> dat = lfilter(dat, b, a)
>>> dat = subsample(dat, 100)
>>> dat.fs
100.0
Online Experiment
>>> bbuffer = BlockBuffer(10) # 10 samples (= 10ms at 1kHz) is the target block size
>>> while 1:
...     cnt = ... # get 1kHz continuous data from your amp
...     # put the data into the block buffer;
...     # bbuffer.get() will only return the data in multiples of
...     # 10 samples, or nothing
...     bbuffer.append(cnt)
...     cnt = bbuffer.get()
...     if not cnt:
...         continue
...     # filter, etc
...     subsample(cnt, 100)
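The decimation step itself can be sketched in a few lines of NumPy. `subsample_sketch` is a hypothetical stand-in that works on raw arrays instead of a Data object:

```python
import numpy as np

def subsample_sketch(data, fs, freq, timeaxis=0):
    """Take every n-th sample along timeaxis, with n = fs / freq.

    Like the subsample described above, this does *not* low-pass
    filter first, and freq must be a whole-number divisor of fs.
    """
    factor = fs / freq
    assert factor == int(factor), "freq must be a whole-number divisor of fs"
    idx = np.arange(0, data.shape[timeaxis], int(factor))
    return np.take(data, idx, axis=timeaxis)
```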
Swap axes of a Data object.
This method swaps two axes of a Data object by swapping the appropriate .data, .names, .units, and .axes.
Parameters:
    dat : Data
    ax1, ax2 : int
Returns:
    dat : Data
See also
numpy.swapaxes
Examples
>>> dat.names
['time', 'channels']
>>> dat = swapaxes(dat, 0, 1)
>>> dat.names
['channels', 'time']
Compute the variance along the timeaxis of dat.
This method reduces the dimensions of dat.data by one.
Parameters:
    dat : Data
Returns:
    dat : Data
Examples
>>> epo.names
['class', 'time', 'channel']
>>> var = variance(epo)
>>> var.names
['class', 'channel']
Data type definitions.
This module provides the basic data types for Wyrm, like the Data and RingBuffer classes.
Bases: object
A buffer that returns data chunks in multiples of a block length.
This buffer is a first-in-first-out (FIFO) buffer that returns data in multiples of a desired block length. The block length is defined in samples.
Parameters:
    samples : int, optional
Examples
>>> bbuffer = BlockBuffer(10)
>>> ...
>>> while 1:
...     cnt = some_acquisition_method()
...     # how to use the BlockBuffer
...     bbuffer.append(cnt)
...     cnt = bbuffer.get()
...     if not cnt:
...         continue
...     # from here on cnt is guaranteed to be in multiples of 10 samples
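The FIFO block semantics can be sketched with a minimal class. `BlockBufferSketch` is a hypothetical simplification that works on plain (samples x channels) arrays rather than wyrm's Data objects:

```python
import numpy as np

class BlockBufferSketch:
    """Minimal FIFO block buffer (a sketch of the behaviour above).

    append() queues incoming samples; get() returns as many complete
    blocks of `samples` rows as are buffered and keeps the remainder
    for the next call.
    """
    def __init__(self, samples=50):
        self.samples = samples
        self.buffer = None

    def append(self, chunk):
        if self.buffer is None:
            self.buffer = chunk
        else:
            self.buffer = np.concatenate([self.buffer, chunk])

    def get(self):
        if self.buffer is None:
            return np.empty((0,))
        # cut off the largest prefix that is a multiple of the block size
        cut = (self.buffer.shape[0] // self.samples) * self.samples
        out, self.buffer = self.buffer[:cut], self.buffer[cut:]
        return out
```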
Bases: object
Generic, self-describing data container.
This data structure is very generic on purpose. The goal here was to provide something which can fit the various different known and yet unknown requirements for BCI algorithms.
At the core of Data is its n-dimensional .data attribute which holds the actual data. Along with the data, there is meta information about each axis of the data, contained in .axes, .names, and .units.
Most toolbox methods rely on a convention for how specific data should be structured (e.g. they assume that channels are always in the last dimension). You don’t have to follow this convention (sometimes it might not even be possible when trying out new things), and all methods provide an optional parameter to tell them which axis they should work on.
Data.__eq__() and Data.__ne__() are provided to test two Data objects for equality (via == and !=). These methods only check the known attributes and do not guarantee correct results if the Data object contains custom attributes. They are mainly used in unittests.
Parameters:
    data : ndarray
    axes : nlist of 1darrays
    names : nlist of strings
    units : nlist of strings
Attributes:
    data : ndarray
        n-dimensional data array; if the array is empty (i.e. data.size == 0), the Data object is assumed to be empty
    axes : nlist of 1d arrays
        each element of .axes corresponds to a dimension of .data (i.e. the first element of .axes to the first dimension of .data and so on). The 1-dimensional arrays contain the description of the data along the appropriate axis in .data. For example, if .data contains continuous data, then .axes[0] should be an array of timesteps and .axes[1] an array of channel names
    names : nlist of strings
        the human readable description of each axis, like ‘time’ or ‘channel’
    units : nlist of strings
        the human readable description of the unit used for the data in .axes
Return a memory efficient deep copy of self.
It first creates a shallow copy of self, sets the attributes in kwargs if necessary and returns a deep copy of the resulting object.
Parameters:
    kwargs : dict, optional
Returns:
    dat : Data
Examples
>>> # perform an ordinary deep copy of dat
>>> dat2 = dat.copy()
>>> # perform a deep copy but overwrite .axes first
>>> dat.axes
['time', 'channel']
>>> dat3 = dat.copy(axes=['foo', 'bar'])
>>> dat3.axes
['foo', 'bar']
>>> dat.axes
['time', 'channel']
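The shallow-copy-then-deep-copy pattern described above can be sketched on a toy class (`CopySketch` is a hypothetical name, not wyrm's Data class):

```python
import copy

class CopySketch:
    """Sketch of the memory-efficient copy-with-overrides pattern."""
    def __init__(self, data, names):
        self.data = data
        self.names = names

    def copy(self, **kwargs):
        # shallow copy first, so overridden attributes are never
        # deep-copied twice
        obj = copy.copy(self)
        for name, value in kwargs.items():
            setattr(obj, name, value)
        # deep copy the result so nothing is shared with the original
        return copy.deepcopy(obj)
```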
Bases: object
Circular Buffer implementation.
This implementation has a guaranteed upper bound for read and write operations as well as constant memory usage, determined by the maximum length of the buffer.
Reading and writing will take at most the time it takes to copy a continuous chunk of length MAXLEN in memory. E.g. for the extreme case of storing the last 60 seconds of 64-bit data, sampled at 1kHz with 128 channels (~60MB), reading a full buffer will take ~25ms, as will writing when storing more than 60 seconds at once. Writing will usually be much faster, as one typically stores only a few milliseconds of data per run; in that case writing takes a fraction of a millisecond.
Parameters:
    length_ms : int
Examples
>>> rb = RingBuffer(length)
>>> while True:
... rb.append(amp.get_data())
... buffered = rb.get()
... # do something with buffered
Attributes:
    length_ms : int
        the length of the ring buffer in milliseconds
    length : int
        the length of the ring buffer in samples
    data : ndarray
        the contents of the ring buffer; you should not read or write this attribute directly but via the RingBuffer.get() and RingBuffer.append() methods
    markers : array of [int, str]
        the markers belonging to the data currently in the ring buffer
    full : boolean
        indicates whether the buffer has at least length elements stored
    idx : int
        the starting position of the oldest data in the ring buffer
idx | int | the starting position of the oldest data in the ring buffer |
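The keep-only-the-newest-samples semantics can be sketched with a simplified class. `RingBufferSketch` is a hypothetical simplification: it is sized in samples rather than milliseconds, ignores markers, and trades wyrm's constant-time in-place index (idx) scheme for a simple concatenate-and-slice:

```python
import numpy as np

class RingBufferSketch:
    """Keep only the newest `length` samples (a simplified sketch)."""
    def __init__(self, length):
        self.length = length          # capacity in samples
        self.data = None

    def append(self, chunk):
        if self.data is None:
            self.data = chunk
        else:
            self.data = np.concatenate([self.data, chunk])
        # discard everything but the newest `length` rows
        self.data = self.data[-self.length:]

    def get(self):
        # return the buffered samples, oldest first
        return self.data.copy() if self.data is not None else np.empty((0,))
```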