8.4.7. Data Management
- DataDim – Enum for dimensionality representation of data
- DataSource – Enum for source of data
- DataDistribution – Enum for distribution of data
- Axis – Object holding info and data about the physical axis of some data
- DataBase – Base object to store homogeneous data and metadata generated by pymodaq's objects.
- DataRaw – Specialized DataWithAxes set with source as 'raw'.
- DataCalculated – Specialized DataWithAxes set with source as 'calculated'.
- DataFromPlugins – Specialized DataWithAxes set with source as 'raw'. To be used for raw data generated by Detector plugins.
- DataFromRoi – Specialized DataWithAxes set with source as 'calculated'. To be used for processed data from a region of interest.
- DataToExport – Object to store all raw and calculated DataWithAxes data for later exporting, saving, sending signal...
8.4.7.1. Axes
Created the 28/10/2022
@author: Sebastien Weber
- class pymodaq.utils.data.Axis(label: str = '', units: str = '', data: Optional[ndarray] = None, index: int = 0, scaling=None, offset=None, spread_order: int = -1)[source]
Object holding info and data about physical axis of some data
In case the axis’s data is linear, store the info as a scale and offset else store the data
- Parameters
label (str) – The label of the axis, for instance ‘time’ for a temporal axis
units (str) – The units of the data in the object, for instance ‘s’ for seconds
data (ndarray) – A 1D ndarray holding the data of the axis
index (int) – an integer representing the index of the Data object this axis is related to
scaling (float) – The scaling to apply to a linspace version of the axis in order to obtain the properly scaled data
offset (float) – The offset to apply to the scaled linspace version in order to obtain the proper axis
spread_order (int) – An integer needed in the case where data has a spread DataDistribution. It refers to the index along the data’s spread_index dimension
Examples
>>> axis = Axis('myaxis', units='seconds', data=np.array([1,2,3,4,5]), index=0)
- create_linear_data(nsteps: int)[source]
replace the axis data with a linear version using scaling and offset
- get_data() ndarray [source]
Convenience method to obtain the axis data (reconstructed from scaling and offset when the stored data is None, which is the usual case)
- get_scale_offset_from_data(data: Optional[ndarray] = None)[source]
Get the scaling and offset from the axis’s data
If data is not None, extract the scaling and offset from it; otherwise use the axis’s own data
- Parameters
data (ndarray) – optional array from which to deduce the scaling and offset
- property data
get/set the data of Axis
- Type
np.ndarray
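A minimal usage sketch (values are illustrative; it assumes, as described above, that a linear data array is internally reduced to a scaling and an offset exposed as the scaling and offset attributes):
>>> import numpy as np
>>> from pymodaq.utils.data import Axis
>>> axis = Axis('time', units='s', data=np.linspace(0., 1., 11), index=0)
>>> axis.get_data()                      # reconstructs the 1D array from scaling and offset
>>> axis.scaling, axis.offset            # expected to be (0.1, 0.0) for this linear ramp
>>> axis.get_scale_offset_from_data(np.array([10., 20., 30.]))  # deduce them from another array
>>> pos = Axis('position', units='mm', scaling=0.5, offset=-1., index=0)
>>> pos.create_linear_data(5)            # fills pos.data with a 5-point ramp built from scaling and offset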
8.4.7.2. DataObjects
Created the 28/10/2022
@author: Sebastien Weber
- class pymodaq.utils.data.DataBase(name: str, source: Optional[DataSource] = None, dim: Optional[DataDim] = None, distribution: DataDistribution = DataDistribution.uniform, data: Optional[List[ndarray]] = None, labels: List[str] = [], origin: str = '', **kwargs)[source]
Base object to store homogeneous data and metadata generated by pymodaq’s objects. To be inherited for real data
- Parameters
name (str) – the identifier of these data
source (DataSource or str) – Enum specifying if data are raw or processed (for instance from roi)
distribution (DataDistribution or str) – The distribution type of the data: uniform if distributed on a regular grid or spread if on specific unordered points
data (list of ndarray) – The data the object is storing
origin (str) – An identifier of the element where the data originated, for instance the DAQ_Viewer’s name. Used when appending DataToExport in DAQ_Scan to keep track of which origin each data comes from when scanning multiple detectors.
kwargs (named parameters) – All other parameters are stored dynamically using the name/value pair. The name of these extra parameters are added into the extra_attributes attribute
- source
Enum specifying if data are raw or processed (for instance from roi)
- Type
DataSource or str
- distribution
The distribution type of the data: uniform if distributed on a regular grid or spread if on specific unordered points
- Type
DataDistribution or str
- origin
An identifier of the element where the data originated, for instance the DAQ_Viewer’s name. Used when appending DataToExport in DAQ_Scan to keep track of which origin each data comes from when scanning multiple detectors.
- Type
str
- extra_attributes
List of strings giving the identifiers of the attributes added dynamically at initialization (for instance to save extra metadata using the DataSaverLoader)
- Type
List[str]
See also
DataWithAxes, DataFromPlugins, DataRaw, DataSaverLoader
Examples
>>> import numpy as np
>>> from pymodaq.utils.data import DataBase, DataSource, DataDim, DataDistribution
>>> data = DataBase('mydata', source=DataSource['raw'], dim=DataDim['Data1D'],
...                 distribution=DataDistribution['uniform'],
...                 data=[np.array([1.,2.,3.]), np.array([4.,5.,6.])],
...                 labels=['channel1', 'channel2'], origin='docutils code')
>>> data.dim
<DataDim.Data1D: 1>
>>> data.source
<DataSource.raw: 0>
>>> data.shape
(3,)
>>> data.length
2
>>> data.size
3
- average(other: DataBase, weight: int) DataBase [source]
Compute the weighted average between self and other DataBase
- get_full_name() str [source]
Get the data full name including the origin attribute into the returned value
- Returns
str
- Return type
the name of the DataWithAxes data constructed as: origin/name
Examples
d0 = DataBase(name='datafromdet0', origin='det0')
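Given the origin/name convention just described, one would then expect (sketch):
d0.get_full_name()  # 'det0/datafromdet0'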
- set_dim(dim: Union[DataDim, str])[source]
Ad hoc modification of dim independently of the real data shape; should be used with extra care
- property data: List[ndarray]
get/set (and check) the data the object is storing
- Type
List[np.ndarray]
- property distribution
the enum representing the distribution of the stored data
- Type
DataDistribution
- property length
The length of data. This is the length of the list containing the nd-arrays
- property shape
The shape of the nd-arrays
- property size
The size of the nd-arrays
- property source
the enum representing the source of the data
- Type
DataSource
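A short sketch of the dynamic extra attributes and of average (the exposure keyword below is purely illustrative, not part of the API):
>>> import numpy as np
>>> from pymodaq.utils.data import DataBase, DataSource
>>> d1 = DataBase('run1', source=DataSource['raw'], data=[np.array([1., 2., 3.])], exposure=0.1)
>>> d1.extra_attributes                  # ['exposure'], and d1.exposure == 0.1
>>> d2 = DataBase('run2', source=DataSource['raw'], data=[np.array([3., 4., 5.])])
>>> d_mean = d1.average(d2, weight=3)    # weighted average of the stored arrays, returned as a DataBase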
- class pymodaq.utils.data.DataCalculated(*args, axes=[], **kwargs)[source]
Specialized DataWithAxes set with source as ‘calculated’. To be used for processed/calculated data
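For instance (a sketch reusing the Axis object documented above, with illustrative values):
>>> import numpy as np
>>> from pymodaq.utils.data import DataCalculated, Axis
>>> dwa = DataCalculated('fit', data=[np.array([0., 1., 4., 9.])],
...                      axes=[Axis('x', units='um', data=np.array([0., 1., 2., 3.]), index=0)])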
- class pymodaq.utils.data.DataFromPlugins(*args, **kwargs)[source]
Specialized DataWithAxes set with source as ‘raw’. To be used for raw data generated by Detector plugins
It introduces by default two extra attributes, plot and save. Their presence can be checked in the extra_attributes list.
- Parameters
plot (bool) – if True the underlying data will be plotted in the DAQ_Viewer
save (bool) – if True the underlying data will be saved
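A sketch of how a detector plugin could build such an object (array content and labels are illustrative; plot and save are the two extra attributes mentioned above):
>>> import numpy as np
>>> from pymodaq.utils.data import DataFromPlugins
>>> dfp = DataFromPlugins('Camera', data=[np.zeros((256, 256))],
...                       labels=['frame'], plot=True, save=True)
>>> dfp.extra_attributes                 # expected to include 'plot' and 'save'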
8.4.7.3. Data Characteristics
Created the 28/10/2022
@author: Sebastien Weber
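For reference, the enums gathered here can be addressed by name, as in the DataBase example above (a short sketch):
>>> from pymodaq.utils.data import DataDim, DataSource, DataDistribution
>>> DataDim['Data1D']
<DataDim.Data1D: 1>
>>> DataSource['raw']
<DataSource.raw: 0>
>>> dist = DataDistribution['uniform']   # the default distribution of DataBase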
8.4.7.4. Union of Data
When exporting multiple sets of Data objects, one should use a DataToExport
Created the 28/10/2022
@author: Sebastien Weber
- class pymodaq.utils.data.DataToExport(name: str, data: List[DataWithAxes] = [], **kwargs)[source]
Object to store all raw and calculated DataWithAxes data for later exporting, saving, sending signal…
Includes methods to retrieve data from dim, source… Stored data have a unique identifier: their name. If some data is appended with an existing name, it will replace the existing data; so to append data having the same name without overwriting the existing entry, give it a distinct name first.
- Parameters
name (str) – the identifier of these data
data (list of DataWithAxes) – the list of DataWithAxes objects to store
- name
- timestamp
- data
- affect_name_to_origin_if_none()[source]
Assign self.name to the origin attribute of all the DataWithAxes children whose origin is not defined
- average(other: DataToExport, weight: int) DataToExport [source]
Compute the weighted average between self and other DataToExport and assign the result to self
- Parameters
other (DataToExport) –
weight (int) – The weight the ‘other_data’ holds with respect to self
- get_data_from_Naxes(Naxes: int, deepcopy: bool = False) DataToExport [source]
Get the data matching the given number of axes
- Parameters
Naxes (int) – Number of axes in the DataWithAxes objects
- Returns
DataToExport
- Return type
filtered with data matching the number of axes
- get_data_from_attribute(attribute: str, attribute_value: Any, deepcopy=False) DataToExport [source]
Get the data matching a given attribute value
- Returns
DataToExport
- Return type
filtered with data matching the attribute presence and value
- get_data_from_dim(dim: DataDim, deepcopy=False) DataToExport [source]
Get the data matching the given DataDim
- Returns
DataToExport
- Return type
filtered with data matching the dimensionality
- get_data_from_dims(dims: List[DataDim], deepcopy=False) DataToExport [source]
Get the data matching any of the given DataDims
- Returns
DataToExport
- Return type
filtered with data matching the dimensionality
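A short sketch of the dim-based getters (names and arrays are illustrative; dim is assumed to be deduced from the array shapes when not given explicitly):
>>> import numpy as np
>>> from pymodaq.utils.data import DataToExport, DataRaw, DataDim
>>> dte = DataToExport('scan', data=[DataRaw('curve', data=[np.zeros(10)]),
...                                  DataRaw('image', data=[np.zeros((10, 10))])])
>>> dte.get_data_from_dim(DataDim['Data1D'])                                 # only 'curve'
>>> dte.get_data_from_dims([DataDim['Data1D'], DataDim['Data2D']], deepcopy=True)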
- get_data_from_full_name(full_name: str, deepcopy=False) DataWithAxes [source]
Get the DataWithAxes with matching full name
- get_data_from_missing_attribute(attribute: str, deepcopy=False) DataToExport [source]
Get the data that do not have the given attribute
- get_data_from_name_origin(name: str, origin: str = '') DataWithAxes [source]
Get the data matching the given name and the given origin
- get_data_from_sig_axes(Naxes: int, deepcopy: bool = False) DataToExport [source]
Get the data matching the given number of signal axes
- Parameters
Naxes (int) – Number of signal axes in the DataWithAxes objects
- Returns
DataToExport
- Return type
filtered with data matching the number of signal axes
- get_data_from_source(source: DataSource, deepcopy=False) DataToExport [source]
Get the data matching the given DataSource
- Returns
DataToExport
- Return type
filtered with data matching the source
- get_data_with_naxes_lower_than(n_axes=2, deepcopy: bool = False) DataToExport [source]
Get the data whose number of axes is lower than the given number
- Parameters
n_axes (int) – Number of axes in the DataWithAxes objects
- Returns
DataToExport
- Return type
filtered with data matching the number of axes
- get_full_names(dim: Optional[DataDim] = None)[source]
Get the full names including the origin attribute into the returned value, optionally filtered by dim
- Parameters
dim (DataDim, optional) – if not None, only the data matching this dimensionality are considered
- Returns
list of str
- Return type
the names of the (filtered) DataWithAxes data constructed as: origin/name
Examples
d0 = DataWithAxes(name='datafromdet0', origin='det0')
- get_names(dim: Optional[DataDim] = None) List[str] [source]
Get the names of the stored DataWithAxes, optionally filtered by dim
- index_from_name_origin(name: str, origin: str = '') List[DataWithAxes] [source]
Get the index of a given DataWithAxes within the list of data
- merge_as_dwa(dim: DataDim, name: Optional[str] = None) DataRaw [source]
Attempt to merge all the stored dwa (DataWithAxes) into a single one
Only possible if all dwa and their underlying data have the same shape
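A closing sketch tying these pieces together (array contents and names are illustrative; expected outputs are given as comments):
>>> import numpy as np
>>> from pymodaq.utils.data import DataToExport, DataRaw, DataCalculated
>>> dwa_raw = DataRaw('spectrum', data=[np.random.rand(100)], origin='det0')
>>> dwa_fit = DataCalculated('fit', data=[np.random.rand(100)])
>>> dte = DataToExport('acquisition', data=[dwa_raw, dwa_fit])
>>> dte.affect_name_to_origin_if_none()      # 'fit' gets the origin 'acquisition'
>>> dte.get_full_names()                     # expected: ['det0/spectrum', 'acquisition/fit']
>>> dte.get_data_from_name_origin('spectrum', 'det0')   # back to the DataWithAxes named 'spectrum'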