syncopy.datatype.continuous_data.ContinuousData#
- class syncopy.datatype.continuous_data.ContinuousData(data=None, channel=None, samplerate=None, **kwargs)[source]#
Abstract class for uniformly sampled data
Notes
This class cannot be instantiated. Use one of the children instead.
- __init__(data=None, channel=None, samplerate=None, **kwargs)[source]#
Keys of kwargs are the datasets from _hdfFileDatasetProperties, and kwargs must only include datasets for which a property with a setter exists.
filename + data : create an HDF5 file at filename containing the data
data only : attach the data; a backing HDF5 file is created automatically
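As a hedged illustration (this abstract class itself cannot be instantiated): a concrete child such as AnalogData is typically constructed from a NumPy array plus a sampling rate. The array shape and samplerate below are made up.
>>> import numpy as np
>>> import syncopy as spy
>>> arr = np.random.randn(1000, 4)                    # 1000 samples, 4 channels
>>> adata = spy.AnalogData(data=arr, samplerate=1000)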
Methods

- __init__([data, channel, samplerate]) : Keys of kwargs are the datasets from _hdfFileDatasetProperties, and kwargs must only include datasets for which a property with a setter exists.
- clear() : Clear loaded data from memory
- copy() : Create a copy of the entire object on disk.
- definetrial([trialdefinition, pre, post, ...]) : (Re-)define trials of a Syncopy data object
- save([container, tag, filename, overwrite]) : Save data object as new spy container to disk (syncopy.save_data())
- selectdata([trials, channel, channel_i, ...]) : Create a new Syncopy object from a selection
- show([squeeze]) : Show (partial) contents of Syncopy object
- _check_dataset_property_discretedata(inData) : Check DiscreteData input data for shape consistency
- _close() : Close backing hdf5 file.
- _get_backing_hdf5_file_handle() : Get handle to h5py.File instance of backing HDF5 file
- _get_trial(trialno)
- _preview_trial(trialno) : Generate a FauxTrial instance of a trial
- _register_dataset(propertyName[, inData]) : Register a new dataset, so that it is handled during saving, comparison, copy and other operations.
- _reopen() : Reattach datasets from backing hdf5 file.
- _set_dataset_property(inData, propertyName) : Set property that is streamed from HDF dataset ('dataset property')
- _set_dataset_property_with_array_list(inData, ...) : Set a dataset property with a list of NumPy arrays.
- _set_dataset_property_with_dataset(inData, ...) : Set a dataset property with an already loaded HDF5 dataset
- _set_dataset_property_with_generator(gen, ...) : Create a dataset from a generator yielding (single trial) numpy arrays.
- _set_dataset_property_with_list(inData, ...) : Set a dataset property with a list of NumPy arrays or syncopy data objects.
- _set_dataset_property_with_ndarray(inData, ...) : Set a dataset property with a NumPy array
- _set_dataset_property_with_none(inData, ...) : Set a dataset property to None
- _set_dataset_property_with_spy_list(inData, ndim) : Set the data dataset property from a list of compatible syncopy data objects.
- _set_dataset_property_with_str(filename, ...) : Set a dataset property with a filename str
- _unregister_dataset(propertyName[, ...]) : Unregister and delete an additional dataset from the Syncopy data object, and optionally delete it from the backing hdf5 file.
- _update_dataset(propertyName[, inData]) : Resets an additional dataset which was already registered via _register_dataset to inData.

Attributes

- cfg : Dictionary of previous operations on data
- channel : list of recording channel names
- data : HDF5 dataset property representing contiguous data without trialdefinition.
- dimord : ordered list of data dimension labels
- info : Dictionary of auxiliary meta information
- log : log of previous operations on data
- mode : write mode for data, 'r' for read-only, 'w' for writable
- sampleinfo : nTrials x 2 numpy.ndarray of [start, end] sample indices
- samplerate : sampling rate of uniformly sampled data in Hz
- selection : Data selection specified by Selector
- time : indexable iterable of the time arrays
- trial_ids : Index list of trials
- trialdefinition : nTrials x >=3 numpy.ndarray of [start, end, offset, trialinfo[, ...]]
- trialinfo : nTrials x M numpy.ndarray with numeric information about each trial
- trialintervals : nTrials x 2 numpy.ndarray of [start, end] times in seconds
- trials : list-like iterable of trials
- _hdfFileDatasetProperties : properties that are mapped onto HDF5 datasets
- _infoFileProperties : properties that are written into the JSON file and HDF5 attributes upon save
- _t0 : These are the (trigger) offsets
- _infoFileProperties = ('dimord', '_version', '_log', 'cfg', 'info', 'samplerate', 'channel')#
properties that are written into the JSON file and HDF5 attributes upon save
- _hdfFileDatasetProperties = ('data',)#
properties that are mapped onto HDF5 datasets
- _selectionKeyWords = ('trials', 'latency')#
- property data#
HDF5 dataset property representing contiguous data without trialdefinition.
Trials are concatenated along the time axis.
- property is_time_locked#
- property _shapes#
- property trialdefinition#
- Type:
nTrials x >=3 numpy.ndarray of [start, end, offset, trialinfo[, ...]]
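For illustration, a minimal sketch of assigning a trialdefinition to an AnalogData-like object adata (the sample indices and offsets below are made up):
>>> import numpy as np
>>> # three trials of 1000 samples each, with t=0 placed 500 samples into each trial
>>> adata.trialdefinition = np.array([[0, 1000, -500],
...                                   [1000, 2000, -500],
...                                   [2000, 3000, -500]])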
- property time#
indexable iterable of the time arrays
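For example, assuming adata is a child-class instance (e.g. AnalogData) with a valid samplerate and trialdefinition:
>>> adata.time[0]   # time axis (in seconds) of the first trial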
- _preview_trial(trialno)[source]#
Generate a FauxTrial instance of a trial
- Parameters:
trialno (int) – Number of trial the FauxTrial object is intended to mimic
- Returns:
faux_trl – An instance of syncopy.datatype.base_data.FauxTrial, mainly intended to be used in noCompute runs of syncopy.shared.computational_routine.ComputationalRoutine.computeFunction() to avoid loading actual trial-data into memory.
- Return type:
syncopy.datatype.base_data.FauxTrial
Notes
If an active in-place selection is found, the generated FauxTrial object respects it (e.g., if only 2 of 10 channels are selected in-place, faux_trl reports to only contain 2 channels)
See also
syncopy.datatype.base_data.FauxTrial
class definition and further details
syncopy.shared.computational_routine.ComputationalRoutine
Syncopy compute engine
- __init__(data=None, channel=None, samplerate=None, **kwargs)[source]#
Keys of kwargs are the datasets from _hdfFileDatasetProperties, and kwargs must only include datasets for which a property with a setter exists.
filename + data : create an HDF5 file at filename containing the data
data only : attach the data; a backing HDF5 file is created automatically
- property channel#
list of recording channel names
- Type:
- _abc_impl = <_abc._abc_data object>#
- _check_dataset_property_discretedata(inData)#
Check DiscreteData input data for shape consistency
- Parameters:
inData (array/h5py.Dataset) – array-like to be stored as a DiscreteData data source
- _checksum_algorithm = 'openssl_sha1'#
- _classname_to_extension()#
- _close()#
Close backing hdf5 file.
- abstract property _defaultDimord#
- _dimord = None#
- _filename = None#
- _gen_filename()#
- _get_backing_hdf5_file_handle()#
Get handle to h5py.File instance of backing HDF5 file
Checks all datasets in self._hdfFileDatasetProperties for valid handles, returns None if none found.
Note that the mode of the returned instance depends on the current value of self.mode.
- _hdfFileAttributeProperties = ('dimord', '_version', '_log')#
- _lhd = '\n\t\t>>> SyNCopy v. {ver:s} <<< \n\nCreated: {timestamp:s} \n\nSystem Profile: \n{sysver:s} \nACME: {acver:s}\nDask: {daver:s}\nNumPy: {npver:s}\nSciPy: {spver:s}\n\n--- LOG ---'#
- _log = ''#
- _log_header = '\n\t\t>>> SyNCopy v. 2023.7 <<< \n\nCreated: Wed Aug 30 15:19:16 2023 \n\nSystem Profile: \n3.10.12 (main, Jul 27 2023, 08:44:48) [GCC 11.3.0] \nACME: --\nDask: --\nNumPy: 1.24.4\nSciPy: 1.10.1\n\n--- LOG ---'#
- _mode = None#
- _register_dataset(propertyName, inData=None)#
Register a new dataset, so that it is handled during saving, comparison, copy and other operations. This dataset is not managed in any way during parallel operations and is intended for holding additional data such as statistics. Thus it is NOT safe to use this in a multi-threaded/parallel context, like in a compute function (cF). A usage sketch is given after the parameter list below.
- Parameters:
propertyName (str) – The name for the new dataset, this will be used as the dataset name in the hdf5 container when saving. It will be added as an attribute named ‘_’ + propertyName to this SyncopyData object. Note that this means that your propertyName must not clash with other attribute names of syncopy data objects. To ensure the latter, it is recommended to use names with a prefix like ‘dset_’. Clashes will be detected and result in errors.
inData (None or np.ndarray or h5py.Dataset) – The data to store. Must have the final number of dimensions you want.
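A hedged usage sketch (the dataset name and contents are illustrative; note the recommended 'dset_' prefix):
>>> import numpy as np
>>> stats = np.zeros((1, len(adata.channel)))           # e.g., per-channel statistics
>>> adata._register_dataset("dset_stats", inData=stats)
>>> adata._dset_stats                                   # available as '_' + propertyName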
- _reopen()#
Reattach datasets from backing hdf5 file. Respects current self.mode.
- _set_dataset_property(inData, propertyName, ndim=None)#
Set property that is streamed from HDF dataset (‘dataset property’)
This method automatically selects the appropriate set method according to the type of the input data (inData).
- Parameters:
inData (str, np.ndarray, h5py.Dataset, list, or generator) – Filename, array, list of arrays, list of syncopy objects, HDF5 dataset or generator object to be stored in the property
propertyName (str) – Name of the property. The actual data must reside in the attribute "_" + propertyName
ndim (int) – Number of expected array dimensions.
- _set_dataset_property_with_array_list(inData, propertyName, ndim)#
Set a dataset property with a list of NumPy arrays.
- _set_dataset_property_with_dataset(inData, propertyName, ndim)#
Set a dataset property with an already loaded HDF5 dataset
- Parameters:
inData (h5py.Dataset) – HDF5 dataset to be stored in property of name propertyName
propertyName (str) – Name of the property to be filled with the dataset
ndim (int) – Number of expected array dimensions.
- _set_dataset_property_with_generator(gen, propertyName, ndim, shape=None)#
Create a dataset from a generator yielding (single trial) numpy arrays. If shape is not given fall back to HDF5 resizable datasets along the stacking dimension.
Expects empty property - will not try to overwrite datasets with generators!
- Parameters:
gen (generator) – Generator yielding (single trial) numpy arrays. Their shapes have to match except along the stacking_dim
ndim (int) – Number of dimensions of the numpy arrays
propertyName (str) – The name of the property which manages the dataset
shape (tuple) – The final shape of the hdf5 dataset. If left at None, the dataset will be resized along the stacking dimension for every trial drawn from the generator
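A minimal sketch, assuming a freshly created (empty) object obj, since this method will not overwrite an existing dataset; it is normally reached indirectly when assigning a generator to a dataset property:
>>> import numpy as np
>>> gen = (np.random.randn(100, 4) for _ in range(10))   # 10 single-trial arrays
>>> obj._set_dataset_property_with_generator(gen, "data", ndim=2)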
- _set_dataset_property_with_list(inData, propertyName, ndim)#
Set a dataset property with a list of NumPy arrays or syncopy data objects.
- _set_dataset_property_with_ndarray(inData, propertyName, ndim)#
Set a dataset property with a NumPy array
If no data exists, a backing HDF5 dataset will be created.
- Parameters:
inData (numpy.ndarray) – NumPy array to be stored in property of name propertyName
propertyName (str) – Name of the property to be filled with inData. Will get an underscore (‘_’) prefix added, so do not include that.
ndim (int) – Number of expected array dimensions.
- _set_dataset_property_with_none(inData, propertyName, ndim)#
Set a dataset property to None
- _set_dataset_property_with_spy_list(inData, ndim)#
Set the data dataset property from a list of compatible syncopy data objects. This implements concatenation along trials of syncopy data objects.
- Parameters:
inData (list) – Non-empty list of syncopy data objects, e.g. AnalogData. Trials are stacked together to fill the dataset.
ndim (int) – Number of expected array dimensions.
- _set_dataset_property_with_str(filename, propertyName, ndim)#
Set a dataset property with a filename str
- _spwCaller = 'BaseData.{}'#
- property _stackingDim#
- _stackingDimLabel = None#
- property _t0#
These are the (trigger) offsets
- _trialdefinition = None#
- _unregister_dataset(propertyName, del_from_file=True, del_attr=True)#
Unregister and delete an additional dataset from the Syncopy data object, and optionally delete it from the backing hdf5 file.
Assumes that the backing h5py file is open in writeable mode.
- Parameters:
propertyName (str) – The name of the entry in self._hdfFileDatasetProperties to remove. The attribute named ‘_’ + propertyName of this SyncopyData object will be deleted.
del_from_file (bool) – Whether to remove the dataset named ‘propertyName’ from the backing hdf5 file on disk.
del_attr (bool) – Whether to remove the dataset attribute from the Syncopy data object.
- _update_dataset(propertyName, inData=None)#
Resets an additional dataset which was already registered via _register_dataset to inData.
- property cfg#
Dictionary of previous operations on data
- clear()#
Clear loaded data from memory
Calls flush method of HDF5 dataset.
- property container#
- copy()#
Create a copy of the entire object on disk.
- Returns:
cpy – Reference to the copied data object on disk
- Return type:
Syncopy data object
Notes
For copying only a subset of the data use syncopy.selectdata() directly with the default inplace=False parameter.
See also
syncopy.save()
save to specific file path
syncopy.selectdata()
creates copy of a selection with inplace=False
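For example (a hedged sketch; adata is any Syncopy data object and syncopy is imported as spy):
>>> cpy = adata.copy()                            # full on-disk copy
>>> sub = spy.selectdata(adata, trials=[0, 1])    # copy only a subset instead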
- definetrial(trialdefinition=None, pre=None, post=None, start=None, trigger=None, stop=None, clip_edges=False)#
(Re-)define trials of a Syncopy data object
Data can be structured into trials based on timestamps of a start, trigger and end events:
              start    trigger    stop
|---- pre ----|--------|---------|--- post----|
Note: To define a trial encompassing the whole dataset simply invoke this routine with no arguments, i.e., definetrial(obj) or equivalently obj.definetrial().
- Parameters:
obj (Syncopy data object (BaseData-like))
trialdefinition (EventData object or Mx3 array) – [start, stop, trigger_offset] sample indices for M trials
pre (float) – offset time (s) before start event
post (float) – offset time (s) after end event
start (int) – event code (id) to be used for start of trial
stop (int) – event code (id) to be used for end of trial
trigger – event code (id) to be used for the center (t=0) of trial
clip_edges (bool) – trim trials to actual data-boundaries.
- Return type:
Syncopy data object (BaseData-like)
Notes
definetrial() supports the following argument combinations:
>>> # define M trials based on [start, end, offset] indices
>>> definetrial(obj, trialdefinition=[M x 3] array)

>>> # define trials based on event codes stored in <EventData object>
>>> definetrial(obj, trialdefinition=<EventData object>, pre=0, post=0,
...             start=startCode, stop=stopCode, trigger=triggerCode)

>>> # apply same trial definition as defined in <EventData object>
>>> definetrial(<AnalogData object>,
...             trialdefinition=<EventData object w/sampleinfo/t0/trialinfo>)

>>> # define whole recording as single trial
>>> definetrial(obj, trialdefinition=None)
- property filename#
- property info#
Dictionary of auxiliary meta information
- property mode#
write mode for data, ‘r’ for read-only, ‘w’ for writable
FIXME: append/replace with HDF5?
- Type:
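For example (a hedged sketch; the mode a given object starts out with depends on how it was created):
>>> adata.mode = 'r'   # reopen the backing HDF5 file read-only
>>> adata.mode = 'w'   # make it writable again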
- property sampleinfo#
nTrials x 2
numpy.ndarray
of [start, end] sample indices
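For example, per-trial sample counts can be computed directly from this array:
>>> nSamples = adata.sampleinfo[:, 1] - adata.sampleinfo[:, 0]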
- save(container=None, tag=None, filename=None, overwrite=False)#
Save data object as new spy container to disk (syncopy.save_data())
FIXME: update docu
- Parameters:
container (str) – Path to Syncopy container folder (*.spy) to be used for saving. If omitted, a .spy extension will be added to the folder name.
tag (str) – Tag to be appended to container basename
filename (str) – Explicit path to data file. This is only necessary if the data should not be part of a container folder. An extension (*.<dataclass>) will be added if omitted. The tag argument is ignored.
overwrite (bool) – If True an existing HDF5 file and its accompanying JSON file is overwritten (without prompt).
Examples
>>> save_spy(obj, filename="session1")
>>> # --> os.getcwd()/session1.<dataclass>
>>> # --> os.getcwd()/session1.<dataclass>.info

>>> save_spy(obj, filename="/tmp/session1")
>>> # --> /tmp/session1.<dataclass>
>>> # --> /tmp/session1.<dataclass>.info

>>> save_spy(obj, container="container.spy")
>>> # --> os.getcwd()/container.spy/container.<dataclass>
>>> # --> os.getcwd()/container.spy/container.<dataclass>.info

>>> save_spy(obj, container="/tmp/container.spy")
>>> # --> /tmp/container.spy/container.<dataclass>
>>> # --> /tmp/container.spy/container.<dataclass>.info

>>> save_spy(obj, container="session1.spy", tag="someTag")
>>> # --> os.getcwd()/session1.spy/session1_someTag.<dataclass>
>>> # --> os.getcwd()/session1.spy/session1_someTag.<dataclass>.info
- selectdata(trials=None, channel=None, channel_i=None, channel_j=None, latency=None, frequency=None, taper=None, unit=None, eventid=None, inplace=False, clear=False, parallel=None, **kwargs)#
Create a new Syncopy object from a selection
Usage Notice
Syncopy offers two modes for selecting data:
- in-place selections mark subsets of a Syncopy data object for processing via a select dictionary without creating a new object
- deep-copy selections copy subsets of a Syncopy data object to keep and preserve in a new object created by selectdata()
All Syncopy metafunctions, such as freqanalysis(), support in-place data selection via a select keyword, effectively avoiding potentially slow copy operations and saving disk space. The keys accepted by the select dictionary are identical to the keyword arguments discussed below. In addition, select = "all" can be used to select entire object contents.
Examples
>>> select = {"toilim" : [-0.25, 0]}
>>> spy.freqanalysis(data, select=select)
>>> # or equivalently
>>> cfg = spy.get_defaults(spy.freqanalysis)
>>> cfg.select = select
>>> spy.freqanalysis(cfg, data)
Usage Summary
List of Syncopy data objects and respective valid data selectors:
AnalogData : trials, channel, toi/toilim
Examples
>>> spy.selectdata(data, trials=[0, 3, 5], channel=["channel01", "channel02"])
>>> cfg = spy.StructDict()
>>> cfg.trials = [5, 3, 0]; cfg.toilim = [0.25, 0.5]
>>> spy.selectdata(cfg, data)
SpectralData : trials, channel, toi/toilim, foi/foilim, taper
Examples
>>> spy.selectdata(data, trials=[0, 3, 5], channel=["channel01", "channel02"])
>>> cfg = spy.StructDict()
>>> cfg.foi = [30, 40, 50]; cfg.taper = slice(2, 4)
>>> spy.selectdata(cfg, data)
EventData : trials, toi/toilim, eventid
Examples
>>> spy.selectdata(data, toilim=[-1, 2.5], eventid=[0, 1])
>>> cfg = spy.StructDict()
>>> cfg.trials = [0, 0, 1, 0]; cfg.eventid = slice(2, None)
>>> spy.selectdata(cfg, data)
SpikeData : trials, toi/toilim, unit, channel
Examples
>>> spy.selectdata(data, toilim=[-1, 2.5], unit=range(0, 10))
>>> cfg = spy.StructDict()
>>> cfg.toi = [1.25, 3.2]; cfg.trials = [0, 1, 2, 3]
>>> spy.selectdata(cfg, data)
Note: Any property that is not specifically accessed via one of the provided selectors is taken as is, e.g., spy.selectdata(data, trials=[1, 2]) selects the entire contents of trials no. 2 and 3, while spy.selectdata(data, channel=range(0, 50)) selects the first 50 channels of data across all defined trials. Consequently, if no keywords are specified, the entire contents of data is selected.
Full documentation below
The parameters listed below can be provided as is or via a cfg configuration 'structure', see Notes for details.
- Parameters:
data (Syncopy data object) – A non-empty Syncopy data object. Note the type of data determines which keywords can be used. Some keywords are only valid for certain types of Syncopy objects, e.g., "freqs" is not a valid selector for an AnalogData object.
trials (list (integers) or None or "all") – List of integers representing trial numbers to be selected; can include repetitions and need not be sorted (e.g., trials = [0, 1, 0, 0, 2] is valid) but must be finite and not NaN. If trials is None or trials = "all", all trials are selected.
channel (list (integers or strings), slice, range, str, int, None or "all") – Channel-selection; can be a list of channel names (['channel3', 'channel1']), a list of channel indices ([3, 5]), a slice (slice(3, 10)) or range (range(3, 10)). Note that following Python conventions, channels are counted starting at zero, and range and slice selections are half-open intervals of the form [low, high), i.e., low is included, high is excluded. Thus, channel = [0, 1, 2] or channel = slice(0, 3) selects the first up to (and including) the third channel. Selections can be unsorted and may include repetitions but must match exactly, be finite and not NaN. If channel is None or channel = "all", all channels are selected.
latency ([begin, end], {'maxperiod', 'minperiod', 'prestim', 'poststim', 'all'} or None) – Either set the desired time window ([begin, end]) in seconds, 'maxperiod' (default) for the maximum period available, 'minperiod' for the minimal time window all trials share, 'prestim' (all t < 0), or 'poststim' (all t > 0). If set, this applies a time-locked selection, meaning non-fitting (effectively too short) trials are excluded.
frequency (list (floats [fmin, fmax]) or None or "all") – Frequency window [fmin, fmax] (in Hz) to be extracted. Window specifications must be sorted (e.g., [90, 70] is invalid) and not NaN but may be unbounded (e.g., [-np.inf, 60.5] is valid). Edges fmin and fmax are included in the selection. If frequency is None or frequency = "all", all frequencies are selected.
taper (list (integers or strings), slice, range, str, int, None or "all") – Taper-selection; can be a list of taper names (['dpss-win-1', 'dpss-win-3']), a list of taper indices ([3, 5]), a slice (slice(3, 10)) or range (range(3, 10)). Note that following Python conventions, tapers are counted starting at zero, and range and slice selections are half-open intervals of the form [low, high), i.e., low is included, high is excluded. Thus, taper = [0, 1, 2] or taper = slice(0, 3) selects the first up to (and including) the third taper. Selections can be unsorted and may include repetitions but must match exactly, be finite and not NaN. If taper is None or taper = "all", all tapers are selected.
unit (list (integers or strings), slice, range, str, int, None or "all") – Unit-selection; can be a list of unit names (['unit10', 'unit3']), a list of unit indices ([3, 5]), a slice (slice(3, 10)) or range (range(3, 10)). Note that following Python conventions, units are counted starting at zero, and range and slice selections are half-open intervals of the form [low, high), i.e., low is included, high is excluded. Thus, unit = [0, 1, 2] or unit = slice(0, 3) selects the first up to (and including) the third unit. Selections can be unsorted and may include repetitions but must match exactly, be finite and not NaN. If unit is None or unit = "all", all units are selected.
eventid (list (integers), slice, range, int, None or "all") – Event-ID-selection; can be a list of event-id codes ([2, 0, 1]), slice (slice(0, 2)) or range (range(0, 2)). Note that following Python conventions, range and slice selections are half-open intervals of the form [low, high), i.e., low is included, high is excluded. Selections can be unsorted and may include repetitions but must match exactly, be finite and not NaN. If eventid is None or eventid = "all", all events are selected.
inplace (bool) – If inplace is True, no new object is created. Instead the provided selection is stored in the input object's selection attribute for later use. By default inplace is False and all calls to selectdata create a new Syncopy data object.
clear (bool) – If True, remove any active in-place selection. Note that in-place selections can also be removed manually by assigning None to the selection property, i.e., mydata.selection = None is equivalent to spy.selectdata(mydata, clear=True) or mydata.selectdata(clear=True).
parallel (None or bool) – If None (recommended), processing is automatically performed in parallel (i.e., concurrently across trials/channel-groups), provided a dask parallel processing client is running and available. Parallel processing can be manually disabled by setting parallel to False. If parallel is True but no parallel processing client is running, computing will be performed sequentially.
- Returns:
dataselection – Syncopy data object of the same type as data but containing only the subset specified by provided selectors.
- Return type:
Syncopy data object
Notes
This function can be either called providing its input arguments directly or via a cfg configuration 'structure'. For instance, the following function calls are equivalent
>>> spy.selectdata(data, trials=...)
>>> cfg = spy.StructDict()
>>> cfg.trials = ...
>>> spy.selectdata(cfg, data)
>>> cfg.data = data
>>> spy.selectdata(cfg)
Please refer to Syncopy for FieldTrip Users for further details.
This routine represents a convenience function for creating new Syncopy objects based on existing data entities. However, in many situations, the creation of a new object (and thus the allocation of additional disk-space) might not be necessary: all Syncopy metafunctions, such as freqanalysis(), support in-place data selection.
Consider the following example: assume data is an AnalogData object representing 220 trials of LFP recordings containing baseline (between second -0.25 and 0) and stimulus-on data (on the interval [0.25, 0.5]). To compute the baseline spectrum, data-selection does not have to be performed before calling freqanalysis() but instead can be done in-place:
>>> import syncopy as spy
>>> cfg = spy.get_defaults(spy.freqanalysis)
>>> cfg.method = 'mtmfft'
>>> cfg.taper = 'dpss'
>>> cfg.output = 'pow'
>>> cfg.tapsmofrq = 10
>>> # define baseline/stimulus-on ranges
>>> baseSelect = {"toilim": [-0.25, 0]}
>>> stimSelect = {"toilim": [0.25, 0.5]}
>>> # in-place selection of baseline interval performed by `freqanalysis`
>>> cfg.select = baseSelect
>>> baselineSpectrum = spy.freqanalysis(cfg, data)
>>> # in-place selection of stimulus-on time-frame performed by `freqanalysis`
>>> cfg.select = stimSelect
>>> stimonSpectrum = spy.freqanalysis(cfg, data)
Especially for large data-sets, in-place data selection performed by Syncopy's metafunctions not only saves disk-space but can also significantly increase performance.
Examples
Use generate_artificial_data() to create a synthetic syncopy.AnalogData object.
>>> from syncopy.tests.misc import generate_artificial_data
>>> adata = generate_artificial_data(nTrials=10, nChannels=32)
Assume a hypothetical trial onset at second 2.0 with the first second of each trial representing baseline recordings. To extract only the stimulus-on period from adata, one could use
>>> stimon = spy.selectdata(adata, toilim=[2.0, np.inf])
Note that this is equivalent to
>>> stimon = adata.selectdata(toilim=[2.0, np.inf])
See also
syncopy.show()
Show (subsets) of Syncopy objects
- property selection#
Data selection specified by
Selector
- show(squeeze=True, **kwargs)#
Show (partial) contents of Syncopy object
Usage Notice
Syncopy uses HDF5 files as on-disk backing device for data storage. This allows working with larger-than-memory data-sets by streaming only relevant subsets of data from disk on demand without excessive RAM use. However, when using show() this mechanism is bypassed and the requested data subset is loaded into memory at once. Thus, inadvertent usage of show() on a large data object can lead to memory overflow or even out-of-memory errors.
Usage Summary
Data selectors for showing subsets of Syncopy data objects follow the syntax of selectdata(). Please refer to selectdata() for a list of valid data selectors for respective Syncopy data objects.
- Parameters:
data (Syncopy data object) – As for subset-selection via selectdata(), the type of data determines which keywords can be used. Some keywords are only valid for certain types of Syncopy objects, e.g., "freqs" is not a valid selector for an AnalogData object.
squeeze (bool) – If True (default) any singleton dimensions are removed from the output array, i.e., the shape of the returned array does not contain ones (e.g., arr.shape = (2,) not arr.shape = (1,2,1,1)).
**kwargs (keywords) – Valid data selectors (e.g., trials, channels, toi etc.). Please refer to selectdata() for a full list of available data selectors.
- Returns:
arr – A (selection) of data retrieved from the data input object.
- Return type:
NumPy nd-array
Notes
This routine represents a convenience function for quickly inspecting the contents of Syncopy objects. It is always possible to manually access an object's numerical data by indexing the underlying HDF5 dataset: data.data[idx]. The dimension labels of the dataset are encoded in data.dimord, e.g., if data is an AnalogData with data.dimord being ['time', 'channel'] and data.data.shape is (15000, 16), then data.data[:, 3] returns the contents of the fourth channel across all time points.
Examples
Use generate_artificial_data() to create a synthetic syncopy.AnalogData object.
>>> from syncopy.tests.misc import generate_artificial_data
>>> adata = generate_artificial_data(nTrials=10, nChannels=32)
Show the contents of ‘channel02’ across all trials:
>>> spy.show(adata, channel='channel02')
Syncopy <show> INFO: Showing all times 10 trials
Out[2]: array([1.0871, 0.7267, 0.2816, ..., 1.0273, 0.893 , 0.7226], dtype=float32)
Note that this is equivalent to
>>> adata.show(channel='channel02')
To preserve singleton dimensions use squeeze=False:
>>> adata.show(channel='channel02', squeeze=False)
Out[3]: array([[1.0871], [0.7267], [0.2816], ..., [1.0273], [0.893 ], [0.7226]], dtype=float32)
See also
syncopy.selectdata()
Create a new Syncopy object from a selection
- property tag#
- property trial_ids#
Index list of trials
- property trialinfo#
nTrials x M
numpy.ndarray
with numeric information about each trial
Each trial can have M properties (condition, original trial no., ...) coded by numbers. This property comprises the fourth and onward columns of BaseData._trialdefinition.
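A hedged sketch of attaching one numeric property per trial by appending a fourth column to the trialdefinition (the condition codes below are made up):
>>> import numpy as np
>>> cond = np.array([1, 2, 1])                       # hypothetical condition code per trial
>>> adata.trialdefinition = np.hstack([adata.trialdefinition, cond[:, None]])
>>> adata.trialinfo                                  # nTrials x 1 array of condition codes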
- property trialintervals#
nTrials x 2
numpy.ndarray
of [start, end] times in seconds
- property trials#
list-like iterable of trials
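For example, trials can be iterated over directly; each element behaves like a per-trial array (e.g. time x channel for AnalogData):
>>> for trl in adata.trials:
...     print(trl.shape)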