SpikeData

class syncopy.SpikeData(data=None, filename=None, trialdefinition=None, samplerate=None, channel=None, unit=None, dimord=None)[source]

Bases: syncopy.datatype.discrete_data.DiscreteData

Spike times of multi-unit and/or single-unit recordings

This class can be used for representing spike trains. The data is always stored as a two-dimensional [nSpikes x 3] array on disk with the columns being ["sample", "channel", "unit"].

Data is only read from disk on demand, similar to memory maps and HDF5 files.
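For illustration, the on-disk layout described above can be mimicked with a plain NumPy array (a hypothetical sketch; syncopy itself is not used here, and the column order follows the default dimord ["sample", "channel", "unit"]):

```python
import numpy as np

# Hypothetical spike train: 5 spikes, columns = ["sample", "channel", "unit"].
# Each row is one spike event: the sample index at which it occurred,
# the channel it was recorded on, and the unit it was assigned to.
spikes = np.array([
    [101, 0, 0],   # spike at sample 101 on channel 0, assigned to unit 0
    [150, 0, 1],
    [320, 1, 0],
    [451, 1, 2],
    [600, 0, 1],
])

print(spikes.shape)              # (5, 3) -> [nSpikes x 3]
print(np.unique(spikes[:, 2]))   # unit IDs present: [0 1 2]
```

Note that the first column holds sample indices, not times; spike times in seconds are recovered by dividing by the samplerate.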

Attributes Summary

cfg

Dictionary of previous operations on data

channel

list of original channel names for each unit

container

data

array-like object representing data without trials

dimord

ordered list of data dimension labels

filename

hdr

dict with information about raw data

log

log of previous operations on data

mode

write mode for data, ‘r’ for read-only, ‘w’ for writable

sample

Indices of all recorded samples

sampleinfo

nTrials x 2 numpy.ndarray of [start, end] sample indices

samplerate

underlying sampling rate of the (non-uniform) data acquisition

tag

trialdefinition

nTrials x >=3 numpy.ndarray of [start, end, offset, trialinfo]

trialid

numpy.ndarray of trial IDs associated with each sample

trialinfo

nTrials x M numpy.ndarray with numeric information about each trial

trials

trial slices of data property

trialtime

trigger-relative sample times in s

unit

unit names

Methods Summary

clear()

Clear loaded data from memory

copy([deep])

Create a copy of the data object in memory.

definetrial([trialdefinition, pre, post, …])

(Re-)define trials of a Syncopy data object

save([container, tag, filename, overwrite, …])

Save data object as new spy container to disk (syncopy.save_data())

selectdata([trials, toi, toilim, units, …])

Create new SpikeData object from selection

Attributes Documentation

cfg

Dictionary of previous operations on data

channel

list of original channel names for each unit

Type

numpy.ndarray

container
data

array-like object representing data without trials

Trials are concatenated along the time axis.

dimord

ordered list of data dimension labels

Type

list(str)

filename
hdr

dict with information about raw data

This property is empty for data created by Syncopy.

log

log of previous operations on data

Type

str

mode

write mode for data, ‘r’ for read-only, ‘w’ for writable


Type

str

sample

Indices of all recorded samples

sampleinfo

nTrials x 2 numpy.ndarray of [start, end] sample indices

samplerate

underlying sampling rate of the (non-uniform) data acquisition

Type

float

tag
trialdefinition

Type

nTrials x >=3 numpy.ndarray of [start, end, offset, trialinfo]

trialid

numpy.ndarray of trial IDs associated with each sample

trialinfo

nTrials x M numpy.ndarray with numeric information about each trial

Each trial can have M properties (condition, original trial number, …) coded as numbers. This property comprises the fourth and subsequent columns of BaseData._trialdefinition.
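The relationship between the trial definition array and its derived views (sampleinfo, offsets, trialinfo) can be sketched with plain NumPy (hypothetical values; syncopy is not required):

```python
import numpy as np

# Hypothetical trial definition for 3 trials:
# columns 0-2 are [start, end, offset]; columns 3+ are numeric trialinfo
# (here: a condition code and the original trial number).
trialdefinition = np.array([
    [   0, 1000, -200, 1, 7],
    [1000, 2000, -200, 2, 8],
    [2000, 3000, -200, 1, 9],
])

sampleinfo = trialdefinition[:, :2]   # nTrials x 2 array of [start, end]
offsets    = trialdefinition[:, 2]    # per-trial trigger offsets
trialinfo  = trialdefinition[:, 3:]   # nTrials x M numeric trial info

print(trialinfo)   # the fourth and subsequent columns, one row per trial
```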

trials

trial slices of data property

Type

list-like([sample x (>=2)] numpy.ndarray)
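For discrete data such as spike trains, these per-trial slices can be pictured as boolean-mask lookups against the trialid vector (a plain-NumPy sketch with hypothetical data, not syncopy's internal implementation):

```python
import numpy as np

# Hypothetical [nSpikes x 3] spike array and matching per-spike trial IDs
spikes  = np.array([[10, 0, 0], [55, 1, 0], [120, 0, 1], [180, 1, 2]])
trialid = np.array([0, 0, 1, 1])   # spike k belongs to trial trialid[k]

# "trials" as a list of per-trial row slices of the data array
trials = [spikes[trialid == t] for t in np.unique(trialid)]

print(len(trials))        # 2 trials
print(trials[0].shape)    # (2, 3): two spikes fall into trial 0
```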

trialtime

trigger-relative sample times in s

Type

list(numpy.ndarray)

unit

unit names

Type

numpy.ndarray(str)

Methods Documentation

clear()

Clear loaded data from memory

Calls flush method of HDF5 dataset or memory map. Memory maps are deleted and re-instantiated.

copy(deep=False)

Create a copy of the data object in memory.

Parameters

deep (bool) – If True, a copy of the underlying data file is created in the temporary Syncopy folder.

Returns

in-memory copy of data object

Return type

Syncopy data object

See also

syncopy.save()

definetrial(trialdefinition=None, pre=None, post=None, start=None, trigger=None, stop=None, clip_edges=False)

(Re-)define trials of a Syncopy data object

Data can be structured into trials based on the timestamps of start, trigger, and stop events:

            start    trigger    stop
|---- pre ----|--------|---------|--- post----|
Parameters
  • obj (Syncopy data object (BaseData-like)) –

  • trialdefinition (EventData object or Mx3 array) – [start, stop, trigger_offset] sample indices for M trials

  • pre (float) – offset time (s) before start event

  • post (float) – offset time (s) after end event

  • start (int) – event code (id) to be used for start of trial

  • stop (int) – event code (id) to be used for end of trial

  • trigger – event code (id) to be used as the center (t=0) of the trial

  • clip_edges (bool) – trim trials to actual data boundaries.

Returns

Return type

Syncopy data object (BaseData-like)

Notes

definetrial() supports the following argument combinations:

>>> # define M trials based on [start, end, offset] indices
>>> definetrial(obj, trialdefinition=[M x 3] array)
>>> # define trials based on event codes stored in <:class:`EventData` object>
>>> definetrial(obj, trialdefinition=<EventData object>,
                pre=0, post=0, start=startCode, stop=stopCode,
                trigger=triggerCode)
>>> # apply same trial definition as defined in <:class:`EventData` object>
>>> definetrial(<AnalogData object>,
                trialdefinition=<EventData object w/sampleinfo/t0/trialinfo>)
>>> # define whole recording as single trial
>>> definetrial(obj, trialdefinition=None)
save(container=None, tag=None, filename=None, overwrite=False, memuse=100)

Save data object as new spy container to disk (syncopy.save_data())


Parameters
  • container (str) – Path to Syncopy container folder (*.spy) to be used for saving. If the .spy extension is omitted, it will be added to the folder name.

  • tag (str) – Tag to be appended to container basename

  • filename (str) – Explicit path to data file. This is only necessary if the data should not be part of a container folder. An extension (*.<dataclass>) will be added if omitted. The tag argument is ignored.

  • overwrite (bool) – If True an existing HDF5 file and its accompanying JSON file is overwritten (without prompt).

  • memuse (scalar) – Approximate in-memory cache size (in MB) for writing data to disk (only relevant for VirtualData or memory map data sources)

Examples

>>> save_spy(obj, filename="session1")
>>> # --> os.getcwd()/session1.<dataclass>
>>> # --> os.getcwd()/session1.<dataclass>.info
>>> save_spy(obj, filename="/tmp/session1")
>>> # --> /tmp/session1.<dataclass>
>>> # --> /tmp/session1.<dataclass>.info
>>> save_spy(obj, container="container.spy")
>>> # --> os.getcwd()/container.spy/container.<dataclass>
>>> # --> os.getcwd()/container.spy/container.<dataclass>.info
>>> save_spy(obj, container="/tmp/container.spy")
>>> # --> /tmp/container.spy/container.<dataclass>
>>> # --> /tmp/container.spy/container.<dataclass>.info
>>> save_spy(obj, container="session1.spy", tag="someTag")
>>> # --> os.getcwd()/session1.spy/session1_someTag.<dataclass>
>>> # --> os.getcwd()/session1.spy/session1_someTag.<dataclass>.info
selectdata(trials=None, toi=None, toilim=None, units=None, channels=None)[source]

Create new SpikeData object from selection

Please refer to syncopy.selectdata() for detailed usage information.

Examples

>>> spkUnit01 = spk.selectdata(units=[0, 1])

See also

syncopy.selectdata()

create new objects via deep-copy selections


__init__(data=None, filename=None, trialdefinition=None, samplerate=None, channel=None, unit=None, dimord=None)[source]

Initialize a SpikeData object.

Parameters
  • data ([nSpikes x 3] numpy.ndarray) –

  • filename (str) – path to filename or folder (spy container)

  • trialdefinition (EventData object or nTrials x 3 array) – [start, stop, trigger_offset] sample indices for each trial

  • samplerate (float) – sampling rate in Hz

  • channel (str or list/array(str)) – original channel names

  • unit (str or list/array(str)) – names of all units

  • dimord (list(str)) – ordered list of dimension labels

  1. filename + data : create HDF5 dataset (including sampleinfo) at filename

  2. filename, no data : read from file or memory map (spy, hdf5, or npy file array -> memmap)

  3. just data : try to attach data (error checking is done by SpikeData.data.setter())
