allensdk.brain_observatory.behavior.stimulus_processing package

Module contents

allensdk.brain_observatory.behavior.stimulus_processing.get_gratings_metadata(stimuli: Dict, start_idx: int = 0) → pandas.core.frame.DataFrame[source]

This function returns the metadata for each unique grating that was presented during the experiment. If no gratings were displayed during this experiment, it returns an empty dataframe with the expected columns.

Parameters:
stimuli : Dict

The stimuli field (pkl[‘items’][‘behavior’][‘stimuli’]) loaded from the experiment pkl file.

start_idx : int

The starting value for the ‘image_index’ column.

Returns:
pd.DataFrame:

DataFrame containing the unique grating stimuli presented during an experiment. The columns contained in this DataFrame are ‘image_category’, ‘image_name’, ‘image_set’, ‘phase’, ‘spatial_frequency’, ‘orientation’, and ‘image_index’. The DataFrame is empty if no gratings were presented.
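A minimal sketch of the transformation this function performs, using a toy stimuli dict (the `set_log` layout and the `sketch_gratings_metadata` helper are illustrative assumptions, not the real AllenSDK schema):

```python
import pandas as pd

def sketch_gratings_metadata(stimuli, start_idx=0):
    # One row per unique orientation found in any grating stimulus,
    # with image_index values continuing from start_idx.
    rows = []
    idx = start_idx
    for name, stim in stimuli.items():
        if "grating" not in name:
            continue
        for orientation in sorted({ori for ori, _ in stim["set_log"]}):
            rows.append({
                "image_category": "grating",
                "image_name": f"gratings_{orientation}",
                "image_set": "grating",
                "phase": stim["phase"],
                "spatial_frequency": stim["sf"],
                "orientation": orientation,
                "image_index": idx,
            })
            idx += 1
    columns = ["image_category", "image_name", "image_set", "phase",
               "spatial_frequency", "orientation", "image_index"]
    # An empty rows list still yields a DataFrame with the expected columns.
    return pd.DataFrame(rows, columns=columns)

# Toy stimuli dict: two orientations shown, the first one twice.
stimuli = {
    "grating": {
        "phase": 0.25,
        "sf": 0.04,
        "set_log": [(0.0, 0), (90.0, 10), (0.0, 20)],
    }
}
df = sketch_gratings_metadata(stimuli)
```

Note the empty-input behavior: with no grating entries the function still returns the full column set, which keeps downstream concatenation with image metadata straightforward.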

allensdk.brain_observatory.behavior.stimulus_processing.get_images_dict(pkl) → Dict[source]

Gets the dictionary of images that were presented during an experiment along with image set metadata and the image-specific metadata. This function uses the path to the image pkl file to read the images and their metadata from the pkl file and return them as a dictionary.

Parameters:
pkl:

The pkl file containing the data for the stimuli presented during the experiment.

Returns:
Dict:

A dictionary containing keys images, metadata, and image_attributes. These correspond to paths to image arrays presented, metadata on the whole set of images, and metadata on specific images, respectively.
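A toy round-trip showing the assumed three-key structure of the returned dictionary; the payload values here are placeholders (real entries would be numpy image arrays and richer attribute dicts):

```python
import os
import pickle
import tempfile

# Toy payload mirroring the three keys described above (illustrative only).
images_dict = {
    "images": [b"<image array 0>", b"<image array 1>"],
    "metadata": {"image_set": "example_set.pkl"},
    "image_attributes": [
        {"image_name": "im000", "image_index": 0},
        {"image_name": "im111", "image_index": 1},
    ],
}

# Round-trip through a pickle file, the storage format behavior stimulus
# data uses.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(images_dict, f)
    path = f.name

with open(path, "rb") as f:
    loaded = pickle.load(f)
os.remove(path)
```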

allensdk.brain_observatory.behavior.stimulus_processing.get_stimulus_metadata(pkl) → pandas.core.frame.DataFrame[source]

Gets the stimulus metadata for each type of stimulus presented during the experiment. The metadata is returned for gratings, images, and omitted stimuli.

Parameters:
pkl:

The pkl file containing the information about which stimuli were presented during the experiment.

Returns:
pd.DataFrame:

A dataframe containing a row for every stimulus that was presented during the experiment. Each row contains the following data: image_category, image_name, image_set, phase, spatial_frequency, orientation, and image_index.

allensdk.brain_observatory.behavior.stimulus_processing.get_stimulus_presentations(data, stimulus_timestamps) → pandas.core.frame.DataFrame[source]

This function retrieves the stimulus presentation dataframe, renames the columns, adds a stop_time column, and sets the index to stimulus_presentation_id before sorting and returning the dataframe.

:param data: stimulus file associated with the experiment id
:param stimulus_timestamps: timestamps indicating when stimuli switched during the experiment

Returns:
stimulus_table: dataframe containing the stimuli metadata as well as which stimuli were presented
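The stop_time derivation can be sketched as mapping frame indices through the timestamp array. The table below is a toy example; the column names (`start_frame`, `end_frame`) and the 60 Hz frame clock are assumptions for illustration, not the real pkl schema:

```python
import pandas as pd

# Toy presentation table whose frame columns index into stimulus_timestamps.
presentations = pd.DataFrame({
    "start_frame": [0, 10, 20],
    "end_frame": [8, 18, 28],
    "image_name": ["im000", "im111", "im000"],
})
stimulus_timestamps = [i * 0.0167 for i in range(30)]  # ~60 Hz frame clock

# Convert frame indices to times, then index by presentation id and sort.
presentations["start_time"] = [
    stimulus_timestamps[f] for f in presentations["start_frame"]
]
presentations["stop_time"] = [
    stimulus_timestamps[f] for f in presentations["end_frame"]
]
presentations.index.name = "stimulus_presentation_id"
presentations = presentations.sort_index()
```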
allensdk.brain_observatory.behavior.stimulus_processing.get_stimulus_templates(pkl: dict, grating_images_dict: Union[dict, NoneType] = None) → Union[allensdk.brain_observatory.behavior.stimulus_processing.stimulus_templates.StimulusTemplate, NoneType][source]

Gets images presented during experiments from the behavior stimulus file (*.pkl)

Parameters:
pkl : dict

Loaded pkl dict containing data for the presented stimuli.

grating_images_dict : Optional[dict]

Because behavior pkl files do not contain image versions of grating stimuli, they must be obtained from an external source. The grating_images_dict is a nested dictionary where top-level keys correspond to grating image names (e.g. ‘gratings_0.0’, ‘gratings_270.0’) as they would appear in the table returned by get_gratings_metadata(). Sub-nested dicts are expected to have ‘warped’ and ‘unwarped’ keys whose values are numpy image arrays of the corresponding warped or unwarped grating stimuli.

Returns:
StimulusTemplate:

StimulusTemplate object containing images that were presented during the experiment
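A minimal sketch of the expected grating_images_dict shape described above (the array dtype and dimensions are placeholder assumptions; only the nesting of keys comes from the documentation):

```python
import numpy as np

# Top-level keys are grating image names; each maps to a dict holding
# 'warped' and 'unwarped' image arrays.
grating_images_dict = {
    "gratings_0.0": {
        "warped": np.zeros((120, 192), dtype=np.uint8),
        "unwarped": np.zeros((120, 192), dtype=np.uint8),
    },
    "gratings_270.0": {
        "warped": np.zeros((120, 192), dtype=np.uint8),
        "unwarped": np.zeros((120, 192), dtype=np.uint8),
    },
}
```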

allensdk.brain_observatory.behavior.stimulus_processing.get_visual_stimuli_df(data, time) → pandas.core.frame.DataFrame[source]

This function loads the stimuli and the omitted stimuli into a dataframe. These stimuli are loaded from the input data, where the set_log and draw_log contained within are used to calculate the epochs. These epochs are used as start_frame and end_frame and are converted to times by the input stimulus timestamps. The omitted stimuli do not have an end_frame by design, though their duration is always 250 ms.

:param data: the behavior data file
:param time: the stimulus timestamps indicating when each stimulus is displayed

Returns:
df: a pandas dataframe containing the stimuli and omitted stimuli that were displayed, with their frame, end_frame, start_time, and duration
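The epoch extraction from a draw_log can be sketched as a run-length scan over a per-frame drawn/not-drawn flag; `draw_epochs` is an illustrative helper, not the AllenSDK implementation:

```python
def draw_epochs(draw_log):
    """Turn a per-frame draw_log (1 = stimulus drawn, 0 = not drawn)
    into a list of (start_frame, end_frame) epochs."""
    epochs = []
    start = None
    for frame, drawn in enumerate(draw_log):
        if drawn and start is None:
            start = frame          # epoch opens on the first drawn frame
        elif not drawn and start is not None:
            epochs.append((start, frame))  # epoch closes when drawing stops
            start = None
    if start is not None:
        # A stimulus still drawn at the end of the log closes at the last frame.
        epochs.append((start, len(draw_log)))
    return epochs

epochs = draw_epochs([0, 1, 1, 1, 0, 0, 1, 1, 0])
```

These (start_frame, end_frame) pairs are what get mapped through the stimulus timestamps to produce start_time and duration.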
allensdk.brain_observatory.behavior.stimulus_processing.is_change_event(stimulus_presentations: pandas.core.frame.DataFrame) → pandas.core.series.Series[source]

Returns whether a stimulus is a change stimulus. A change stimulus is defined as the first presentation of a new image_name. Omitted stimuli are ignored, and the first stimulus in the session is ignored.

:param stimulus_presentations: the stimulus presentations table

Returns:
is_change: pd.Series indicating whether a given stimulus is a change stimulus
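An illustrative reimplementation of the change-detection rule in pandas (`sketch_is_change_event` and the `"omitted"` sentinel value are assumptions for the sketch, not the library's code):

```python
import pandas as pd

def sketch_is_change_event(stimulus_presentations):
    # A presentation is a change when its image_name differs from the
    # previous non-omitted image_name; omitted stimuli and the first
    # presentation in the session never count as changes.
    images = stimulus_presentations["image_name"]
    non_omitted = images[images != "omitted"]
    changes = non_omitted != non_omitted.shift()
    changes.iloc[0] = False  # the first stimulus in the session is ignored
    # Omitted rows were dropped above; restore them as False.
    return changes.reindex(images.index, fill_value=False)

table = pd.DataFrame(
    {"image_name": ["im000", "im000", "omitted", "im111", "im111"]}
)
is_change = sketch_is_change_event(table)
```

The key detail is that omitted rows are excluded before the shift-and-compare, so an image change across an omission is still detected as a single change on the first post-omission presentation.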
allensdk.brain_observatory.behavior.stimulus_processing.load_pickle(pstream)[source]
allensdk.brain_observatory.behavior.stimulus_processing.unpack_change_log(change)[source]