syncopy.specest.mtmconvol.MultiTaperFFTConvol

class syncopy.specest.mtmconvol.MultiTaperFFTConvol(*argv, **kwargs)[source]

Compute class that performs time-frequency analysis of AnalogData objects.

Subclass of ComputationalRoutine; see Design Guide: Syncopy Compute Classes for technical details on Syncopy's compute classes and metafunctions.

See also

    syncopy.freqanalysis
        parent metafunction

__init__(*argv, **kwargs)

Instantiate a ComputationalRoutine subclass

Parameters
    *argv (tuple) – Tuple of positional arguments passed on to computeFunction()
    **kwargs (dict) – Keyword arguments passed on to computeFunction()

Returns
    obj – Usable class instance for processing Syncopy data objects.

Return type
    instance of ComputationalRoutine subclass
Methods

    __init__(*argv, **kwargs)
        Instantiate a ComputationalRoutine subclass
    compute(data, out[, parallel, …])
        Central management and processing method
    computeFunction(trl_dat, *wrkargs, **kwargs)
        Perform time-frequency analysis on multi-channel time series data using a sliding window FFT
    compute_parallel(data, out)
        Concurrent computing kernel
    compute_sequential(data, out)
        Sequential computing kernel
    initialize(data[, chan_per_worker, keeptrials])
        Perform dry-run of calculation to determine output shape
    preallocate_output(out[, parallel_store])
        Storage allocation and provisioning
    process_metadata(data, out)
        Meta-information manager
    write_log(data, out[, log_dict])
        Processing of output log

static computeFunction(trl_dat, *wrkargs, **kwargs)[source]

Perform time-frequency analysis on multi-channel time series data using a sliding window FFT

Parameters
    trl_dat (2D numpy.ndarray) – Uniformly sampled multi-channel time-series
    soi (list of slices or slice) – Samples of interest; either a single slice encoding begin- to end-samples to perform analysis on (if sliding window centroids are equidistant) or a list of slices, with each slice corresponding to the coverage of a single analysis window (if spacing between windows is not constant)
    padbegin (int) – Number of samples to prepend to trl_dat
    padend (int) – Number of samples to append to trl_dat
    samplerate (float) – Samplerate of trl_dat in Hz
    noverlap (int) – Number of samples covered by two adjacent analysis windows
    nperseg (int) – Size of analysis windows (in samples)
    equidistant (bool) – If True, spacing of window centroids is equidistant.
    toi (1D numpy.ndarray or float or str) – Either time-points to center windows on (if toi is a numpy.ndarray), a percentage of overlap between windows (if toi is a scalar), or "all" to center windows on all samples in trl_dat. Please refer to freqanalysis() for further details. Note: the value of toi has to agree with the provided padding and window settings. See Notes for more information.
    foi (1D numpy.ndarray) – Frequencies of interest (Hz) for output. If desired frequencies cannot be matched exactly, the closest possible frequencies (respecting data length and padding) are used.
    nTaper (int) – Number of tapers to use
    timeAxis (int) – Index of running time axis in trl_dat (0 or 1)
    taper (callable) – Taper function to use, one of availableTapers
    taperopt (dict) – Additional keyword arguments passed to taper (see above). For further details, please refer to the SciPy docs.
    keeptapers (bool) – If True, results of the Fourier transform are preserved for each taper; otherwise the spectrum is averaged across tapers.
    polyremoval (int) – FIXME: Not implemented yet. Order of polynomial used for detrending. A value of 0 corresponds to subtracting the mean ("de-meaning"), polyremoval = 1 removes linear trends (subtracting the least-squares fit of a linear function), polyremoval = N for N > 1 subtracts a polynomial of order N (N = 2 quadratic, N = 3 cubic, etc.). If polyremoval is None, no detrending is performed.
    output_fmt (str) – Output of spectral estimation; one of availableOutputs
    noCompute (bool) – Preprocessing flag. If True, do not perform the actual calculation but instead return the expected shape and numpy.dtype of the output array.
    chunkShape (None or tuple) – If not None, represents the shape of the output object spec (respecting provided values of nTaper, keeptapers, etc.)

Returns
    spec – Complex or real time-frequency representation of (padded) input data.

Return type
    numpy.ndarray

Notes
    This method is intended to be used as computeFunction() inside a ComputationalRoutine. Thus, input parameters are presumed to be forwarded from a parent metafunction. Consequently, this function does not perform any error checking and operates under the assumption that all inputs have been externally validated and cross-checked.

    The computational heavy lifting in this code is performed by SciPy's Short-Time Fourier Transform (STFT) implementation scipy.signal.stft().

See also

    syncopy.freqanalysis()
        parent metafunction
    MultiTaperFFTConvol()
        ComputationalRoutine instance that calls this method as computeFunction()
    scipy.signal.stft()
        SciPy's STFT implementation
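The multi-taper sliding-window scheme can be sketched directly with SciPy: compute one STFT per DPSS taper and average the resulting power estimates, mimicking the keeptapers=False path. This is a simplified illustration, not Syncopy's actual implementation; the signal, window settings, and taper bandwidth below are all hypothetical.

```python
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import dpss

# Hypothetical 1-second, 2-channel signal sampled at 1 kHz
samplerate = 1000.0
t = np.arange(1000) / samplerate
trl_dat = np.stack([np.sin(2 * np.pi * 30 * t),
                    np.sin(2 * np.pi * 80 * t)], axis=1)  # shape (1000, 2)

nperseg, noverlap, nTaper = 256, 128, 3

# Slepian (DPSS) tapers, the window family used in multi-taper analysis
tapers = dpss(nperseg, NW=2.5, Kmax=nTaper)

# One sliding-window FFT per taper; average power across tapers
spectra = []
for win in tapers:
    freq, time, ft = stft(trl_dat, fs=samplerate, window=win,
                          nperseg=nperseg, noverlap=noverlap,
                          boundary=None, padded=False, axis=0)
    spectra.append(np.abs(ft) ** 2)

spec = np.mean(spectra, axis=0)  # shape (freq, channel, time): (129, 2, 6)
```

With axis=0, SciPy replaces the time axis of trl_dat by the frequency axis and appends the window (segment) axis last, which is why the result is (freq, channel, time).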

process_metadata(data, out)[source]

Meta-information manager

Parameters
    data (Syncopy data object) – Syncopy data object that has been processed
    out (Syncopy data object) – Syncopy data object holding calculation results

Returns
    Nothing

Return type
    None

Notes
    This routine is an abstract method and is thus intended to be overloaded. Consult the developer documentation (Design Guide: Syncopy Compute Classes) for further details.

See also

    write_log()
        Logging of calculation parameters


compute(data, out, parallel=False, parallel_store=None, method=None, mem_thresh=0.5, log_dict=None, parallel_debug=False)

Central management and processing method

Parameters
    data (Syncopy data object) – Syncopy data object to be processed (has to be the same object that was used by initialize() in the pre-calculation dry-run).
    out (Syncopy data object) – Empty object for holding results
    parallel (bool) – If True, processing is performed in parallel (i.e., computeFunction() is executed concurrently across trials). If parallel is False, computeFunction() is executed consecutively trial after trial (i.e., the calculation realized in computeFunction() is performed sequentially).
    parallel_store (None or bool) – Flag controlling the saving mechanism. If None, parallel_store = parallel, i.e., the compute paradigm dictates the employed writing method. Thus, in case of parallel processing, results are written in a fully concurrent manner (each worker saves its own local result segment on disk as soon as it is done with its part of the computation). If parallel_store is False and parallel is True, the processing result is saved sequentially using a mutex. If both parallel and parallel_store are False, standard single-process HDF5 writing is employed for saving the result of the (sequential) computation.
    method (None or str) – If None, the predefined methods compute_parallel() or compute_sequential() are used to control the actual computation (specifically, calling computeFunction()) depending on whether parallel is True or False, respectively. If method is a string, it has to specify the name of an alternative (provided) class method that is invoked using getattr.
    mem_thresh (float) – Fraction of available memory required to perform the computation. By default, the largest single-trial result must not occupy more than 50% (mem_thresh = 0.5) of available single-machine or worker memory (if parallel is False or True, respectively).
    log_dict (None or dict) – If None, the log properties of out are populated with the keyword arguments used in computeFunction(). Otherwise, out's log properties are filled with items taken from log_dict.
    parallel_debug (bool) – If True, concurrent processing is performed using a single-threaded scheduler, i.e., all parallel computing tasks are run in the current Python thread, permitting usage of tools like pdb/ipdb, cProfile and the like in computeFunction(). Note that enabling parallel debugging effectively runs the given computation on the calling local machine, thereby requiring sufficient memory and CPU capacity.

Returns
    Nothing – The result of the computation is available in out once compute() has terminated successfully.

Return type
    None

Notes
    This routine calls several other class methods to perform all necessary pre- and post-processing steps in a fully automatic manner without requiring any user input. Specifically, the following class methods are invoked consecutively (in the given order):

    1. preallocate_output() allocates a (virtual) HDF5 dataset of appropriate dimension for storing the result
    2. compute_parallel() (or compute_sequential()) performs the actual computation via concurrently (or sequentially) calling computeFunction()
    3. process_metadata() attaches all relevant meta-information to the result out after successful termination of the calculation
    4. write_log() stores employed input arguments in out.cfg and out.log to reproduce all relevant computational steps that generated out

See also

    initialize()
        pre-calculation preparations
    preallocate_output()
        storage provisioning
    compute_parallel()
        concurrent computation using computeFunction()
    compute_sequential()
        sequential computation using computeFunction()
    process_metadata()
        management of meta-information
    write_log()
        log entry organization
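The invocation order that compute() follows can be illustrated with a minimal mock class; every name and method body below is a hypothetical stand-in, not Syncopy's actual implementation.

```python
# Mock illustrating the pre-/post-processing call order of compute():
# preallocate_output -> compute kernel -> process_metadata -> write_log.
class MockRoutine:
    def __init__(self):
        self.call_order = []

    def preallocate_output(self, out, parallel_store=False):
        self.call_order.append("preallocate_output")
        out["dataset"] = []                      # stand-in for the HDF5 dataset

    def compute_sequential(self, data, out):
        self.call_order.append("compute_sequential")
        for trial in data:                       # trial-by-trial processing
            out["dataset"].append(sum(trial))    # stand-in for computeFunction

    def process_metadata(self, data, out):
        self.call_order.append("process_metadata")
        out["channel"] = ["ch1"]                 # attach meta-information

    def write_log(self, data, out, log_dict=None):
        self.call_order.append("write_log")
        out["log"] = log_dict or {}              # record employed parameters

    def compute(self, data, out, parallel=False, log_dict=None):
        self.preallocate_output(out, parallel_store=parallel)
        self.compute_sequential(data, out)       # parallel kernel omitted here
        self.process_metadata(data, out)
        self.write_log(data, out, log_dict=log_dict)

routine = MockRoutine()
result = {}
routine.compute([[1, 2], [3, 4]], result, log_dict={"taper": "dpss"})
print(routine.call_order)
# ['preallocate_output', 'compute_sequential', 'process_metadata', 'write_log']
```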

compute_parallel(data, out)

Concurrent computing kernel

Parameters
    data (Syncopy data object) – Syncopy data object to be processed
    out (Syncopy data object) – Empty object for holding results

Returns
    Nothing

Return type
    None

Notes
    This method merely acts as a concurrent wrapper for computeFunction() by passing along all necessary information for parallel execution and storage of results using a dask bag of dictionaries. The actual reading of source data and writing of results is managed by the decorator syncopy.shared.parsers.unwrap_io(). Note that this routine first builds an entire parallel instruction tree and only kicks off execution on the cluster at the very end of the calculation command assembly.

See also

    compute()
        management routine invoking parallel/sequential compute kernels
    compute_sequential()
        serial processing counterpart of this method

compute_sequential(data, out)

Sequential computing kernel

Parameters
    data (Syncopy data object) – Syncopy data object to be processed
    out (Syncopy data object) – Empty object for holding results

Returns
    Nothing

Return type
    None

Notes
    This method most closely reflects classic iterative process execution: trials in data are passed sequentially to computeFunction(), and results are stored consecutively in a regular HDF5 dataset (that was pre-allocated by preallocate_output()). Since the calculation result is immediately stored on disk, propagation of arrays across routines is avoided and memory usage is kept to a minimum.

See also

    compute()
        management routine invoking parallel/sequential compute kernels
    compute_parallel()
        concurrent processing counterpart of this method

initialize(data, chan_per_worker=None, keeptrials=True)

Perform dry-run of calculation to determine output shape

Parameters
    data (Syncopy data object) – Syncopy data object to be processed (has to be the same object that is passed to compute() for the actual calculation).
    chan_per_worker (None or int) – Number of channels to be processed by each worker (only relevant in case of concurrent processing). If chan_per_worker is None (default), by-trial parallelism is used, i.e., each worker processes data corresponding to a full trial. If chan_per_worker > 0, trials are split into channel groups of size chan_per_worker (plus a remainder group if the number of channels is not divisible by chan_per_worker), and workers are assigned by-trial channel groups for processing.
    keeptrials (bool) – Flag indicating whether to return individual trials or their average

Returns
    Nothing

Return type
    None

Notes
    This class method has to be called prior to performing the actual computation realized in computeFunction().

See also

    compute()
        core routine performing the actual computation
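The channel-group splitting described for chan_per_worker can be sketched with a small helper; the function below is a hypothetical illustration of the grouping rule, not part of Syncopy's API.

```python
def channel_groups(n_channels, chan_per_worker=None):
    """Split n_channels into per-worker channel groups (illustrative only)."""
    if chan_per_worker is None:
        # By-trial parallelism: one group holding all channels of a trial
        return [range(n_channels)]
    groups = []
    for start in range(0, n_channels, chan_per_worker):
        # Last group may be smaller if n_channels is not evenly divisible
        groups.append(range(start, min(start + chan_per_worker, n_channels)))
    return groups

# 10 channels in groups of 4 -> two full groups plus a remainder group of 2
print([list(g) for g in channel_groups(10, 4)])
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```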

preallocate_output(out, parallel_store=False)

Storage allocation and provisioning

Parameters
    out (Syncopy data object) – Empty object for holding results
    parallel_store (bool) – If True, a directory for virtual source files is created in Syncopy's temporary on-disk storage (defined by syncopy.__storage__). Otherwise, a dataset of appropriate type and shape is allocated in a new regular HDF5 file created inside Syncopy's temporary storage folder.

Returns
    Nothing

Return type
    None

See also

    compute()
        management routine controlling memory pre-allocation
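The idea behind this pre-allocation is to reserve on-disk storage of the expected output shape before any computation runs, so each trial's result can later be written straight to its slot. The sketch below uses a NumPy memmap as a stand-in for the regular-HDF5 branch; the shape and file name are hypothetical and this is not Syncopy's actual code.

```python
import os
import tempfile
import numpy as np

# Hypothetical pre-computed output shape, e.g. (trial, time, freq, channel)
chunkShape = (2, 6, 129, 2)
fname = os.path.join(tempfile.mkdtemp(), "spy_result.dat")

# Reserve on-disk storage of the full expected size up front
out = np.memmap(fname, dtype="float32", mode="w+", shape=chunkShape)

# Later, a sequential kernel writes trial-by-trial into the reserved slots
for trl in range(chunkShape[0]):
    out[trl, ...] = trl          # stand-in for a computeFunction() result
out.flush()

print(os.path.getsize(fname))   # 2 * 6 * 129 * 2 elements * 4 bytes = 12384
```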

write_log(data, out, log_dict=None)

Processing of output log

Parameters
    data (Syncopy data object) – Syncopy data object that has been processed
    out (Syncopy data object) – Syncopy data object holding calculation results
    log_dict (None or dict) – If None, the log properties of out are populated with the keyword arguments used in computeFunction(). Otherwise, out's log properties are filled with items taken from log_dict.

Returns
    Nothing

Return type
    None

See also

    process_metadata()
        Management of meta-information