allensdk.brain_observatory.behavior.behavior_ophys_session module
class allensdk.brain_observatory.behavior.behavior_ophys_session.BehaviorOphysSession(api=None, eye_tracking_z_threshold: float = 3.0, eye_tracking_dilation_frames: int = 2)
    Bases: allensdk.brain_observatory.session_api_utils.ParamsMixin

    Represents data from a single Visual Behavior Ophys imaging session. Can be initialized with an api that fetches data, or by using the class methods from_lims and from_nwb_path.
average_projection
    2D image of the microscope field of view, averaged across the experiment.
    Return type: pandas.DataFrame
cell_specimen_table
    Cell ROI information organized into a dataframe; the index is the cell ROI id.
    Return type: pandas.DataFrame
corrected_fluorescence_traces
    Motion-corrected fluorescence traces organized into a dataframe; the index is the cell ROI id.
    Return type: pandas.DataFrame
deserialize_image(self, sitk_image)
    Convert a SimpleITK image returned by the api to an Image class.
    Args:
        sitk_image (SimpleITK image): image object returned by the api
    Returns:
        img (allensdk.brain_observatory.behavior.image_api.Image)
dff_traces
    dF/F traces organized into a dataframe; the index is the cell ROI id.
    Return type: pandas.DataFrame
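To illustrate the documented shape (a dataframe indexed by cell ROI id, holding one trace per cell), here is a toy pandas sketch; the ids, values, and the 'dff' column name are assumptions for illustration, and a real session would supply this table via its dff_traces attribute:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the dff_traces dataframe: indexed by cell ROI id,
# with a per-cell trace array in a 'dff' column (column name assumed).
dff_traces = pd.DataFrame(
    {"dff": [np.array([0.0, 0.1, 0.3]), np.array([0.2, 0.2, 0.4])]},
    index=pd.Index([101, 102], name="cell_roi_id"),
)

# Peak dF/F per ROI, computed across each cell's trace.
peaks = dff_traces["dff"].apply(np.max)
print(peaks)
```

Because each row holds a whole trace array, per-cell reductions like this use apply rather than column-wise arithmetic.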
eye_tracking
    A dataframe containing ellipse fit parameters for the eye, pupil, and corneal reflection (cr). Fits are derived from tracking points produced by a DeepLabCut model applied to video frames of the subject's right eye. Raw tracking points and raw video frames are not exposed by the SDK.
    Notes:
    - All columns starting with 'pupil_' represent ellipse fit parameters relating to the pupil.
    - All columns starting with 'eye_' represent ellipse fit parameters relating to the eyelid.
    - All columns starting with 'cr_' represent ellipse fit parameters relating to the corneal reflection, which is caused by an infrared LED positioned near the eye tracking camera.
    - All positions are in units of pixels.
    - All areas are in units of pixels^2.
    - All values are in the coordinate space of the eye tracking camera, NOT the coordinate space of the stimulus display (i.e. this is not gaze location), with (0, 0) being the upper-left corner of the eye-tracking image.
    - The 'likely_blink' column is True for any row (frame) where the pupil fit failed, the eye fit failed, or an outlier fit was identified.
    - All ellipse fits are derived from tracking points output by a DeepLabCut model trained on hand-annotated data from a subset of imaging sessions on optical physiology rigs.
    - Raw DeepLabCut tracking points are not publicly available.
    Return type: pandas.DataFrame
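For example, the documented 'likely_blink' flag can be used to drop unreliable frames before analyzing pupil size. The toy dataframe below only mimics the documented column conventions; 'pupil_area' is an assumed pupil_-prefixed column name used for illustration:

```python
import pandas as pd

# Toy stand-in for the eye_tracking dataframe with a subset of columns.
eye_tracking = pd.DataFrame({
    "pupil_area": [120.0, 118.5, 400.0, 121.2],   # pixels^2
    "likely_blink": [False, False, True, False],  # documented quality flag
})

# Exclude frames flagged as likely blinks or bad fits, then summarize.
clean = eye_tracking[~eye_tracking["likely_blink"]]
print(clean["pupil_area"].mean())
```

Filtering on 'likely_blink' first avoids letting failed-fit outliers (like the 400.0 frame above) skew downstream statistics.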
classmethod from_lims(ophys_experiment_id: int, eye_tracking_z_threshold: float = 3.0, eye_tracking_dilation_frames: int = 2) → 'BehaviorOphysSession'
get_average_projection(self)
    Returns an image whose values are, at each pixel, the average of the ophys movie's values over time.
    Returns:
        allensdk.brain_observatory.behavior.image_api.Image: array-like interface to average projection image data and metadata
get_max_projection(self)
    Returns an image whose values are, at each pixel, the maximum of the ophys movie's values over time.
    Returns:
        allensdk.brain_observatory.behavior.image_api.Image: array-like interface to max projection image data and metadata
get_roi_masks(self, cell_specimen_ids=None)
    Obtains boolean masks indicating the location of one or more cells' ROIs in this session.
    Parameters:
        cell_specimen_ids : array-like of int, optional
            ROI masks for these cell specimens will be returned. The default behavior is to return masks for all cell specimens.
    Returns:
        result : xr.DataArray
            Dimensions are:
            - cell_specimen_id : which cell's ROI is described by this mask?
            - row : index within the underlying image
            - column : index within the underlying image
            Values are 1 where an ROI was present, otherwise 0.
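The returned DataArray is essentially a stack of binary masks. The numpy sketch below mimics its (cell_specimen_id, row, column) layout to show how per-cell masks combine into a coverage image; the ids, shapes, and values are invented, and plain numpy stands in for the xr.DataArray:

```python
import numpy as np

# Toy stand-in for the get_roi_masks result: shape (n_cells, n_rows, n_cols),
# 1 where a cell's ROI covers a pixel, 0 elsewhere.
masks = np.zeros((2, 4, 4), dtype=int)
masks[0, 0:2, 0:2] = 1   # first cell's ROI
masks[1, 1:3, 1:3] = 1   # second cell's ROI (overlaps the first at one pixel)

# Pixels covered by at least one ROI, analogous to segmentation_mask_image.
coverage = masks.max(axis=0)
print(coverage.sum())    # number of pixels inside any ROI
```

Reducing with max along the cell dimension (rather than sum) keeps the result binary even where ROIs overlap.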
get_segmentation_mask_image(self)
    Returns an image with value 1 if the pixel was included in an ROI, and 0 otherwise.
    Returns:
        allensdk.brain_observatory.behavior.image_api.Image: array-like interface to segmentation_mask image data and metadata
licks
    A dataframe containing lick timestamps.
    Return type: pandas.DataFrame
max_projection
    2D max projection image.
    Return type: allensdk.brain_observatory.behavior.image_api.Image
metadata
    Dictionary of session-specific metadata.
    Return type: dict
motion_correction
    A dataframe containing trace data used during the motion correction computation.
    Return type: pandas.DataFrame
ophys_experiment_id
    Unique identifier for this experimental session.
    Return type: int
ophys_timestamps
    Timestamps associated with frames captured by the microscope.
    Return type: numpy.ndarray
rewards
    A dataframe containing timestamps of delivered rewards.
    Return type: pandas.DataFrame
running_data_df
    Dataframe containing the various signals used to compute running speed.
    Return type: pandas.DataFrame
running_speed
    Running speed of the mouse, as a NamedTuple with two fields:
    - timestamps : numpy.ndarray
        Timestamps of running speed data samples.
    - values : numpy.ndarray
        Running speed of the experimental subject (in cm/s).
    Return type: allensdk.brain_observatory.running_speed.RunningSpeed
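A two-field NamedTuple like this might be consumed as follows; the tuple class below is a local stand-in mirroring the documented fields, not the SDK's actual RunningSpeed class, and the sample values are invented:

```python
from collections import namedtuple

import numpy as np

# Local stand-in mirroring the documented two-field structure.
RunningSpeed = namedtuple("RunningSpeed", ["timestamps", "values"])

running_speed = RunningSpeed(
    timestamps=np.array([0.0, 0.5, 1.0, 1.5]),   # seconds
    values=np.array([2.0, 4.0, 6.0, 8.0]),       # cm/s
)

# Mean speed over the sampled interval; timestamps and values align 1:1.
print(running_speed.values.mean())
```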
segmentation_mask_image
    An image with pixel value 1 if that pixel was included in an ROI, and 0 otherwise.
    Return type: allensdk.brain_observatory.behavior.image_api.Image
stimulus_presentations
    Table whose rows are stimulus presentations (i.e. a given image, for a given duration, typically 250 ms) and whose columns are presentation characteristics.
    Return type: pandas.DataFrame
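Since each row is one presentation, per-image counts fall out of standard groupby operations. The dataframe below is a toy; the column names ('image_name', 'start_time', 'stop_time') are assumptions for illustration:

```python
import pandas as pd

# Toy stand-in for the stimulus_presentations table; one row per
# presentation, with assumed column names.
stimulus_presentations = pd.DataFrame({
    "image_name": ["im_a", "im_b", "im_a"],
    "start_time": [10.00, 10.75, 11.50],
    "stop_time":  [10.25, 11.00, 11.75],
})

# Number of presentations per image, each lasting ~250 ms as described.
counts = stimulus_presentations.groupby("image_name").size()
print(counts)
```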
stimulus_templates
    A dictionary containing the stimulus images presented during the session; keys are data set names, and values are 3D numpy arrays.
    Return type: dict
stimulus_timestamps
    Timestamps associated with stimulus presentations on the monitor (corrected for monitor delay).
    Return type: numpy.ndarray
task_parameters
    A dictionary containing parameters used to define the task runtime behavior.
    Return type: dict
trials
    A dataframe containing behavioral trial start/stop times, and trial data.
    Return type: pandas.DataFrame
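With per-trial start and stop times in the table, trial durations are a simple column difference. The dataframe below is a toy; the 'start_time' and 'stop_time' column names are assumptions for illustration:

```python
import pandas as pd

# Toy stand-in for the trials dataframe; column names assumed.
trials = pd.DataFrame({
    "start_time": [0.0, 5.0, 10.0],
    "stop_time":  [4.0, 9.5, 13.0],
})

# Duration of each behavioral trial, in the same units as the timestamps.
durations = trials["stop_time"] - trials["start_time"]
print(durations.tolist())
```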