This notebook shows how to load ophys data from the Visual Behavior Project using AllenSDK tools. It briefly describes the types of data available and shows a few simple ways of plotting ophys traces along with the animal's behavior.
We will first install allensdk into your environment by running the appropriate commands below.
You can install AllenSDK locally with:
pip install allensdk
You can install AllenSDK into your notebook environment by executing the cell below.
If you are using Google Colab, click the RESTART RUNTIME button that appears at the end of the output once this cell completes. Note that running this cell will produce a long list of outputs and some error messages; clicking RESTART RUNTIME at the end will resolve these issues. You can minimize the cell afterwards to hide the output.
!pip install --upgrade pip
!pip install allensdk
First, we import the libraries needed for plotting and manipulating data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook', font_scale=1.5, rc={'lines.markeredgewidth': 2})
# preferred magic functions for jupyter notebooks
%load_ext autoreload
%autoreload 2
%matplotlib inline
# confirm that you are using a recent version of the SDK (2.10.2 at the time of writing)
import allensdk
allensdk.__version__
'2.10.2'
# import the behavior project cache class from the SDK to load the data
import allensdk.brain_observatory.behavior.behavior_project_cache as bpc
The code block below uses the behavior_project_cache (bpc) class to instantiate a cache and load the behavior and ophys tables.
# this path determines where the downloaded data will be cached; update it to a location on your machine
my_cache_dir = r'\Data\visual_behavior_ophys_cache_dir'
bc = bpc.VisualBehaviorOphysProjectCache.from_s3_cache(cache_dir=my_cache_dir)
behavior_session_table = bc.get_behavior_session_table()
ophys_session_table = bc.get_ophys_session_table()
experiment_table = bc.get_ophys_experiment_table()
#print number of items in each table for all imaging and behavioral sessions
print('Number of behavior sessions = {}'.format(len(behavior_session_table)))
print('Number of ophys sessions = {}'.format(len(ophys_session_table)))
print('Number of ophys experiments = {}'.format(len(experiment_table)))
#print number of items in each table with Mesoscope imaging
print('Number of behavior sessions with Mesoscope = {}'.format(len(behavior_session_table[behavior_session_table.project_code.isin(['VisualBehaviorMultiscope'])])))
print('Number of ophys sessions with Mesoscope = {}'.format(len(ophys_session_table[ophys_session_table.project_code.isin(['VisualBehaviorMultiscope'])])))
print('Number of ophys experiments with Mesoscope = {}'.format(len(experiment_table[experiment_table.project_code.isin(['VisualBehaviorMultiscope'])])))
ophys_session_table.csv: 100%|██████████| 165k/165k [00:00<00:00, 1.88MMB/s]
behavior_session_table.csv: 100%|██████████| 885k/885k [00:00<00:00, 5.36MMB/s]
ophys_experiment_table.csv: 100%|██████████| 336k/336k [00:00<00:00, 3.12MMB/s]
Number of behavior sessions = 3572
Number of ophys sessions = 551
Number of ophys experiments = 1165
Number of behavior sessions with Mesoscope = 133
Number of ophys sessions with Mesoscope = 133
Number of ophys experiments with Mesoscope = 747
The data is organized into three tables: experiment_table, ophys_session_table, and behavior_session_table.

experiment_table contains metadata for each ophys experiment (e.g., cre_line, session_type, project_code, etc). This table gives you an overview of what data is available at the level of each experiment. The term experiment is used to describe one imaging plane during one session. For sessions that were imaged using the Mesoscope (equipment_name = MESO.1), there will be up to 4 experiments associated with that session (2 imaging depths by 2 visual areas). Across sessions, the same imaging plane (experiment) is linked using ophys_container_id. For sessions that were imaged using a Scientifica microscope (equipment_name = CAM2P.#), there will be only 1 experiment per session, which is similarly linked across different session types using ophys_container_id.

ophys_session_table is indexed by ophys_session_id and provides metadata associated with those sessions.

behavior_session_table is indexed by behavior_session_id. Behavior sessions that were also imaging sessions have ophys ids associated with them.

In this notebook, we will use experiment_table to select experiments of interest and look at them in greater detail.
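For example, you can list the columns available in experiment_table and count how many experiments exist per cre line. This is a quick sketch and is not required for the rest of the notebook:

# list available columns and count experiments per cre line
print(experiment_table.columns.values)
print(experiment_table['cre_line'].value_counts())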
# let's print a sample of 5 rows to see what's in the table
experiment_table.sample(5)
equipment_name | full_genotype | mouse_id | reporter_line | driver_line | sex | age_in_days | session_type | cre_line | indicator | ... | prior_exposures_to_image_set | prior_exposures_to_omissions | ophys_session_id | behavior_session_id | ophys_container_id | project_code | imaging_depth | targeted_structure | date_of_acquisition | file_id | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ophys_experiment_id | |||||||||||||||||||||
905955228 | MESO.1 | Slc17a7-IRES2-Cre/wt;Camk2a-tTA/wt;Ai93(TITL-G... | 453911 | Ai93(TITL-GCaMP6f) | [Slc17a7-IRES2-Cre, Camk2a-tTA] | M | 154.0 | OPHYS_2_images_A_passive | Slc17a7-IRES2-Cre | GCaMP6f | ... | 33.0 | 3 | 904418381 | 904574580 | 1018027878 | VisualBehaviorMultiscope | 177 | VISp | 2019-07-12 08:07:32.656976 | 1086012638 |
891108763 | MESO.1 | Vip-IRES-Cre/wt;Ai148(TIT2L-GC6f-ICL-tTA2)/wt | 435431 | Ai148(TIT2L-GC6f-ICL-tTA2) | [Vip-IRES-Cre] | M | 215.0 | OPHYS_5_images_B_passive | Vip-IRES-Cre | GCaMP6f | ... | 7.0 | 10 | 889944877 | 890157940 | 1018028374 | VisualBehaviorMultiscope | 274 | VISp | 2019-06-19 09:25:00.659441 | 1085673876 |
881949068 | MESO.1 | Vip-IRES-Cre/wt;Ai148(TIT2L-GC6f-ICL-tTA2)/wt | 435431 | Ai148(TIT2L-GC6f-ICL-tTA2) | [Vip-IRES-Cre] | M | 201.0 | OPHYS_2_images_A_passive | Vip-IRES-Cre | GCaMP6f | ... | 42.0 | 2 | 881094781 | 881278000 | 1018028367 | VisualBehaviorMultiscope | 152 | VISp | 2019-06-05 09:12:12.735423 | 1085673753 |
975608394 | MESO.1 | Sst-IRES-Cre/wt;Ai148(TIT2L-GC6f-ICL-tTA2)/wt | 482853 | Ai148(TIT2L-GC6f-ICL-tTA2) | [Sst-IRES-Cre] | M | 124.0 | OPHYS_1_images_A | Sst-IRES-Cre | GCaMP6f | ... | 25.0 | 0 | 975452945 | 975498864 | 1018028202 | VisualBehaviorMultiscope | 225 | VISp | 2019-11-01 14:40:51.922948 | 1085394144 |
989610992 | MESO.1 | Slc17a7-IRES2-Cre/wt;Camk2a-tTA/wt;Ai93(TITL-G... | 479839 | Ai93(TITL-GCaMP6f) | [Slc17a7-IRES2-Cre, Camk2a-tTA] | M | 161.0 | OPHYS_2_images_A_passive | Slc17a7-IRES2-Cre | GCaMP6f | ... | 34.0 | 2 | 989267296 | 989340717 | 1018028067 | VisualBehaviorMultiscope | 367 | VISl | 2019-11-22 08:28:30.129503 | 1086012717 |
5 rows × 22 columns
You can get experiment ids from the experiment table by subsetting it on various conditions (i.e., columns). Here, we select experiments from Sst mice only, from the novel OPHYS 4 session, with 0 prior exposures to the image set (meaning the session was not a retake).
# get all Sst experiments for ophys session 4
selected_experiment_table = experiment_table[(experiment_table.cre_line=='Sst-IRES-Cre')&
(experiment_table.session_number==4) &
(experiment_table.prior_exposures_to_image_set==0)]
print('Number of experiments: {}'.format(len(selected_experiment_table)))
Number of experiments: 27
Remember that any given experiment contains data from only one imaging plane. Some of these experiments come from the same imaging session. Here, we can check how many unique imaging sessions are associated with experiments selected above.
print('Number of unique sessions: {}'.format(len(selected_experiment_table['ophys_session_id'].unique())))
Number of unique sessions: 11
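If you are curious how many imaging planes each of those sessions contributed, you can group the selected experiments by session. A minimal sketch:

# count how many imaging planes (experiments) each selected session contributed
print(selected_experiment_table.groupby('ophys_session_id').size())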
Let's pick one experiment from the table and plot example ophys and behavioral data.
# select first experiment from the table to look at in more detail.
# Note that python enumeration starts at 0.
ophys_experiment_id = selected_experiment_table.index[0]
dataset = bc.get_behavior_ophys_experiment(ophys_experiment_id)
behavior_ophys_experiment_957759562.nwb: 100%|██████████| 263M/263M [00:24<00:00, 10.7MMB/s]
dataset.metadata
{'equipment_name': 'MESO.1', 'imaging_plane_group_count': 4, 'session_type': 'OPHYS_4_images_B', 'ophys_experiment_id': 957759562, 'ophys_session_id': 957020350, 'field_of_view_width': 512, 'behavior_session_id': 957032492, 'stimulus_frame_rate': 60.0, 'behavior_session_uuid': UUID('40897cd4-3279-4a2d-b65d-b3f984e34e17'), 'imaging_depth': 150, 'field_of_view_height': 512, 'experiment_container_id': 1018028342, 'imaging_plane_group': 0, 'mouse_id': 457841, 'sex': 'F', 'age_in_days': 233, 'full_genotype': 'Sst-IRES-Cre/wt;Ai148(TIT2L-GC6f-ICL-tTA2)/wt', 'reporter_line': 'Ai148(TIT2L-GC6f-ICL-tTA2)', 'driver_line': ['Sst-IRES-Cre'], 'cre_line': 'Sst-IRES-Cre', 'date_of_acquisition': datetime.datetime(2019, 9, 27, 8, 28, 5, tzinfo=tzutc()), 'ophys_frame_rate': 11.0, 'indicator': 'GCaMP6f', 'targeted_structure': 'VISp', 'excitation_lambda': 910.0, 'emission_lambda': 520.0}
You can get additional information about this experiment from the metadata field of the dataset class. Here, you can see that this experiment was in the Sst Cre line, in a female mouse at 233 days old, recorded using the Mesoscope (this is one of four imaging planes), at an imaging depth of 150 microns, in primary visual cortex (VISp). This experiment is from an OPHYS 4 session using image set B.
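Individual metadata fields can be accessed like entries in a regular Python dictionary, for example:

# print a few metadata fields for this experiment
print('cre line: {}'.format(dataset.metadata['cre_line']))
print('imaging depth: {} um'.format(dataset.metadata['imaging_depth']))
print('targeted structure: {}'.format(dataset.metadata['targeted_structure']))
print('session type: {}'.format(dataset.metadata['session_type']))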
plt.imshow(dataset.max_projection, cmap='gray')
<matplotlib.image.AxesImage at 0x7fde2ae8a1d0>
The max projection is a maximum intensity projection of the movie recorded during the imaging session. Plotting it can give you a sense of how many neurons were visible during imaging and how clear and stable the recording was.
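The dataset also provides an average_projection attribute (it appears in the full list of attributes shown later in this notebook). As a quick sketch, you can plot the two side by side:

# plot max projection and average projection side by side
fig, ax = plt.subplots(1, 2, figsize=(10, 5))
ax[0].imshow(dataset.max_projection, cmap='gray')
ax[0].set_title('max projection')
ax[1].imshow(dataset.average_projection, cmap='gray')
ax[1].set_title('average projection')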
dataset.cell_specimen_table.sample(3)
cell_roi_id | height | mask_image_plane | max_correction_down | max_correction_left | max_correction_right | max_correction_up | valid_roi | width | x | y | roi_mask | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
cell_specimen_id | ||||||||||||
1086616398 | 1080740952 | 20 | 0 | 5.0 | 25.0 | 3.0 | 30.0 | True | 19 | 220 | 161 | [[False, False, False, False, False, False, Fa... |
1086615620 | 1080740955 | 17 | 0 | 5.0 | 25.0 | 3.0 | 30.0 | True | 19 | 280 | 104 | [[False, False, False, False, False, False, Fa... |
1086616101 | 1080741008 | 16 | 0 | 5.0 | 25.0 | 3.0 | 30.0 | True | 16 | 439 | 54 | [[False, False, False, False, False, False, Fa... |
cell_specimen_table includes information about the x and y coordinates of each cell in the imaging plane, as well as how much correction was applied during the motion correction process.
cell_roi_id is a unique id assigned to each ROI during segmentation.
cell_specimen_id is a unique id assigned to each cell after cell matching, which means that if we were able to identify and match the same cell across multiple sessions, it can be identified by the same cell specimen id in each of them.
roi_mask is a boolean array that can be used to visualize where any given cell is in the imaging field.
plt.imshow(dataset.cell_specimen_table.iloc[1]['roi_mask'])
<matplotlib.image.AxesImage at 0x7fde2ae5e250>
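To see where all segmented cells sit in the field of view, you can sum the ROI masks across cells. This is a sketch assuming each roi_mask spans the full imaging field, which is how the table stores them here:

# overlay all roi masks to visualize every segmented cell in the imaging field
all_roi_masks = np.stack(dataset.cell_specimen_table['roi_mask'].values)
plt.imshow(all_roi_masks.sum(axis=0))
plt.title('sum of all ROI masks')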
dataset.dff_traces.head(10)
cell_roi_id | dff | |
---|---|---|
cell_specimen_id | ||
1086614149 | 1080740882 | [0.3417784869670868, 0.1917504370212555, 0.169... |
1086614819 | 1080740886 | [0.0, -2.010002374649048, -1.3567286729812622,... |
1086614512 | 1080740888 | [0.07121878117322922, 0.10929209738969803, 0.0... |
1086613265 | 1080740947 | [0.3090711534023285, 0.02156120352447033, -0.0... |
1086616398 | 1080740952 | [0.0, 0.15438319742679596, 0.37885594367980957... |
1086615620 | 1080740955 | [0.15867245197296143, 0.15752384066581726, 0.2... |
1086615201 | 1080740967 | [0.42869994044303894, 0.23723232746124268, -0.... |
1086616101 | 1080741008 | [0.727859377861023, 0.20604123175144196, 0.333... |
The dff_traces dataframe contains dff traces for all neurons in this experiment, unaligned to any events in the task.
You can select rows by their positional index using the .iloc[] method:
dataset.dff_traces.iloc[4]
cell_roi_id 1080740952 dff [0.0, 0.15438319742679596, 0.37885594367980957... Name: 1086616398, dtype: object
Alternatively, you can use cell_specimen_id as the index to select cells with the .loc[] method:
dataset.dff_traces.loc[1086616398]
cell_roi_id 1080740952 dff [0.0, 0.15438319742679596, 0.37885594367980957... Name: 1086616398, dtype: object
If you don't want dff in pandas dataframe format, you can load the dff traces as an array, using the np.vstack function to stack the data into a cell-by-time array and .values to grab only the values in the dff column:
dff_array = np.vstack(dataset.dff_traces.dff.values)
print('This array contains dff traces from {} neurons and it is {} samples long.'.format(dff_array.shape[0], dff_array.shape[1]))
This array contains dff traces from 8 neurons and it is 48284 samples long.
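With the traces in array form, per-cell summary statistics are easy to compute. For example:

# compute the mean and peak dff value for each cell
print('mean dff per cell:', dff_array.mean(axis=1))
print('max dff per cell:', dff_array.max(axis=1))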
dataset.events.head(10)
cell_roi_id | events | filtered_events | lambda | noise_std | |
---|---|---|---|---|---|
cell_specimen_id | |||||
1086614149 | 1080740882 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0753 | 0.085235 |
1086614819 | 1080740886 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 5.3130 | 0.714571 |
1086614512 | 1080740888 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0324 | 0.055955 |
1086613265 | 1080740947 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0272 | 0.051073 |
1086616398 | 1080740952 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0570 | 0.074011 |
1086615620 | 1080740955 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0769 | 0.086979 |
1086615201 | 1080740967 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.0872 | 0.091952 |
1086616101 | 1080741008 | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, ... | 0.1596 | 0.123868 |
The events table is similar to dff_traces, but the output provides traces of extracted events. Events are computed on unmixed dff traces for each cell as described in Giovannucci et al. 2019. The magnitude of events approximates the firing rate of neurons with a resolution of about 200 ms. The biggest advantage of using events over dff traces is that they exclude prolonged Ca transients that may contaminate neural responses to subsequent stimuli. You can also use filtered_events, which are the events convolved with a filter created using the stats.halfnorm method.
lambda is computed from the Poisson distribution of events in the trace (think of it as a center of mass of the distribution; larger lambda == higher "firing rate").
noise_std is a measure of variability in the events trace.
The timestamps, in seconds, are the same for dff_traces and events:
dataset.ophys_timestamps
array([ 8.72468, 8.81788, 8.91108, ..., 4509.98089, 4510.07412, 4510.16735])
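As a quick illustration, we can plot events and filtered_events against these timestamps for one cell. This is a sketch; the cell is picked arbitrarily from the events table:

# compare raw events and filtered events for one example cell
example_cell_id = dataset.events.index[0]
fig, ax = plt.subplots(1, 1, figsize=(20, 5))
ax.plot(dataset.ophys_timestamps, dataset.events.loc[example_cell_id, 'events'])
ax.plot(dataset.ophys_timestamps, dataset.events.loc[example_cell_id, 'filtered_events'])
ax.set_xlabel('time (seconds)')
ax.set_ylabel('event magnitude')
ax.legend(['events', 'filtered events'])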
We can select a random cell from the experiment and plot its dff and events traces along with other behavioral and stimulus data.
cell_specimen_ids = dataset.cell_specimen_table.index.values # a list of all cell ids
cell_specimen_id = cell_specimen_ids[5] # let's pick the 6th cell
print('Cell specimen id = {}'.format(cell_specimen_id)) # print id
Cell specimen id = 1086615620
# plot dff and events traces overlaid from the cell selected above
fig, ax = plt.subplots(1,1, figsize = (20,10))
ax.plot(dataset.ophys_timestamps, dataset.dff_traces.loc[cell_specimen_id, 'dff'])
ax.plot(dataset.ophys_timestamps, dataset.events.loc[cell_specimen_id, 'events'])
ax.set_xlabel('time (seconds)')
ax.set_ylabel('trace magnitude')
ax.set_title('Cell specimen id = {}'.format(cell_specimen_id), fontsize = 20)
ax.legend(['dff', 'events'], fontsize = 20)
<matplotlib.legend.Legend at 0x7fde167c8cd0>
We can see that, as expected, the events trace is much cleaner than dff and generally follows the big Ca transients well. We can also see that this cell was not very active during the experiment. Each experiment has a 5 minute movie at the end, which often drives neural activity strongly, and we can see a notable increase in this cell's activity at the end of the experiment as well.
fig, ax = plt.subplots(1,1, figsize = (20,5))
ax.plot(dataset.stimulus_timestamps, dataset.running_speed['speed'], color='gray', linestyle='--')
ax.set_xlabel('time (seconds)')
ax.set_ylabel('running speed (cm/s)')
ax.set_title('Ophys experiment {}'.format(ophys_experiment_id), fontsize = 20)
Text(0.5, 1.0, 'Ophys experiment 957759562')
fig, ax = plt.subplots(1,1, figsize = (20,5))
ax.plot(dataset.eye_tracking.timestamps, dataset.eye_tracking.pupil_area, color='gray')
ax.set_xlabel('time (seconds)')
ax.set_ylabel('pupil area (pixels^2)')
ax.set_title('Ophys experiment {}'.format(ophys_experiment_id), fontsize = 20)
Text(0.5, 1.0, 'Ophys experiment 957759562')
You can find all attributes and methods that belong to the dataset class using this helpful method:
dataset.list_data_attributes_and_methods()
['average_projection', 'behavior_session_id', 'cell_specimen_table', 'corrected_fluorescence_traces', 'dff_traces', 'events', 'eye_tracking', 'eye_tracking_rig_geometry', 'get_cell_specimen_ids', 'get_cell_specimen_indices', 'get_dff_traces', 'get_performance_metrics', 'get_reward_rate', 'get_rolling_performance_df', 'get_segmentation_mask_image', 'licks', 'max_projection', 'metadata', 'motion_correction', 'ophys_experiment_id', 'ophys_session_id', 'ophys_timestamps', 'raw_running_speed', 'rewards', 'roi_masks', 'running_speed', 'segmentation_mask_image', 'stimulus_presentations', 'stimulus_templates', 'stimulus_timestamps', 'task_parameters', 'trials']
You can learn more about any of them by calling help on it:
help(dataset.get_segmentation_mask_image)
Help on method get_segmentation_mask_image in module allensdk.brain_observatory.behavior.behavior_ophys_experiment:

get_segmentation_mask_image() method of allensdk.brain_observatory.behavior.behavior_ophys_experiment.BehaviorOphysExperiment instance
    Returns an image with value 1 if the pixel was included in an ROI, and 0 otherwise

    Returns
    ----------
    allensdk.brain_observatory.behavior.image_api.Image:
        array-like interface to segmentation_mask image data and metadata
Get stimulus information for this experiment and assign it to a table called stimulus_table:
stimulus_table = dataset.stimulus_presentations
stimulus_table.head(10)
duration | end_frame | image_index | image_name | image_set | index | omitted | start_frame | start_time | stop_time | is_change | |
---|---|---|---|---|---|---|---|---|---|---|---|
stimulus_presentations_id | |||||||||||
0 | 0.25020 | 18001.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 0 | False | 17986 | 308.75313 | 309.00333 | False |
1 | 0.25020 | 18046.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 1 | False | 18031 | 309.50374 | 309.75394 | False |
2 | 0.25017 | 18091.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 2 | False | 18076 | 310.25441 | 310.50458 | False |
3 | 0.25026 | 18136.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 3 | False | 18121 | 311.00496 | 311.25522 | False |
4 | 0.25020 | 18181.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 4 | False | 18166 | 311.75558 | 312.00578 | False |
5 | 0.25021 | 18226.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 5 | False | 18211 | 312.50620 | 312.75641 | False |
6 | 0.25022 | 18271.0 | 0 | im000 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 6 | False | 18256 | 313.25681 | 313.50703 | False |
7 | 0.25020 | 18316.0 | 6 | im031 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 7 | False | 18301 | 314.00741 | 314.25761 | True |
8 | 0.25020 | 18361.0 | 6 | im031 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 8 | False | 18346 | 314.75803 | 315.00823 | False |
9 | 0.25019 | 18406.0 | 6 | im031 | Natural_Images_Lum_Matched_set_ophys_6_2017.07.14 | 9 | False | 18391 | 315.50864 | 315.75883 | False |
This table provides helpful information such as the image name, the start time, stop time, and duration of each image presentation, and whether the image was omitted.
print('This experiment had {} stimuli.'.format(len(stimulus_table)))
print('Out of all stimuli presented, {} were omitted.'.format(len(stimulus_table[stimulus_table['image_name']=='omitted'])))
This experiment had 4797 stimuli.
Out of all stimuli presented, 160 were omitted.
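You can also break the presentations down by image. For instance, value_counts shows how many times each image (and omission) occurred:

# count presentations per image name
print(stimulus_table['image_name'].value_counts())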
You can also use the keys() method to see the names of the columns in any pandas dataframe:
stimulus_table.keys()
Index(['duration', 'end_frame', 'image_index', 'image_name', 'image_set', 'index', 'omitted', 'start_frame', 'start_time', 'stop_time', 'is_change'], dtype='object')
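The is_change column is handy for finding the presentations where the image identity changed. A small sketch:

# select only the presentations where the image changed
change_presentations = stimulus_table[stimulus_table['is_change']]
print('Number of image changes: {}'.format(len(change_presentations)))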
Get behavioral trial information and assign it to trials_table:
trials_table = dataset.trials
trials_table.head(5)
start_time | stop_time | lick_times | reward_time | reward_volume | hit | false_alarm | miss | stimulus_change | aborted | ... | catch | auto_rewarded | correct_reject | trial_length | response_time | change_frame | change_time | response_latency | initial_image_name | change_image_name | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
trials_id | |||||||||||||||||||||
0 | 308.73646 | 309.85402 | [309.12011, 309.30359, 309.53711] | NaN | 0.000 | False | False | False | False | True | ... | False | False | False | 1.11756 | NaN | NaN | NaN | NaN | im000 | im000 |
1 | 310.23770 | 318.26089 | [314.37439, 314.50783, 314.64127, 314.79139, 3... | 314.14086 | 0.005 | False | False | False | True | False | ... | False | True | False | 8.02319 | 314.37439 | 18300.0 | 314.026728 | 0.347662 | im000 | im031 |
2 | 318.49442 | 319.37846 | [318.6779, 318.86138, 319.06156, 319.39515, 31... | NaN | 0.000 | False | False | False | False | True | ... | False | False | False | 0.88404 | NaN | NaN | NaN | NaN | im031 | im031 |
3 | 319.99565 | 323.19826 | [322.58109, 322.71452, 322.88134, 323.46514, 3... | NaN | 0.000 | False | False | False | False | True | ... | False | False | False | 3.20261 | NaN | NaN | NaN | NaN | im031 | im031 |
4 | 323.73205 | 331.02131 | [327.1515, 327.26826, 327.38503, 327.51846, 32... | 326.91797 | 0.005 | False | False | False | True | False | ... | False | True | False | 7.28926 | 327.15150 | 19065.0 | 326.787158 | 0.364342 | im031 | im000 |
5 rows × 21 columns
trials_table.keys()
Index(['start_time', 'stop_time', 'lick_times', 'reward_time', 'reward_volume', 'hit', 'false_alarm', 'miss', 'stimulus_change', 'aborted', 'go', 'catch', 'auto_rewarded', 'correct_reject', 'trial_length', 'response_time', 'change_frame', 'change_time', 'response_latency', 'initial_image_name', 'change_image_name'], dtype='object')
This table has information about the trials in the experiment. go trials are change trials on which the animal was supposed to lick. If the animal licked, hit is set to True for that trial. If the animal was rewarded, reward_time contains the reward time in seconds. If the trial was auto rewarded (regardless of whether the animal responded correctly), auto_rewarded is set to True. The trials table also includes response_latency, which can be used as the animal's reaction time during the experiment.
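As a quick sketch, the boolean columns make it easy to compute simple performance measures; for precomputed behavioral metrics you can also use dataset.get_performance_metrics() from the attribute list above.

# compute hit rate on go trials and false alarm rate on catch trials
go_trials = trials_table[trials_table['go']]
catch_trials = trials_table[trials_table['catch']]
print('Hit rate on go trials: {:.2f}'.format(go_trials['hit'].mean()))
print('False alarm rate on catch trials: {:.2f}'.format(catch_trials['false_alarm'].mean()))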
Now we will put together plotting functions that use data in the dataset class to plot ophys traces and behavioral data from an experiment.
# create a list of all unique stimuli presented in this experiment
unique_stimuli = [stimulus for stimulus in dataset.stimulus_presentations['image_name'].unique()]
# create a colormap with each unique image having its own color
colormap = {image_name: sns.color_palette()[image_number] for image_number, image_name in enumerate(np.sort(unique_stimuli))}
colormap['omitted'] = (1,1,1) # set omitted stimulus to white color
# add the colors for each image to the stimulus presentations table in the dataset
dataset.stimulus_presentations['color'] = dataset.stimulus_presentations['image_name'].map(lambda image_name: colormap[image_name])
# function to plot dff traces
def plot_dff_trace(ax, cell_specimen_id, initial_time, final_time):
'''
ax: axis on which to plot
cell_specimen_id: id of the cell to plot
initial_time: initial time to plot from
final_time: final time to plot to
'''
# create a dataframe using the dff trace from one selected cell
data = {'dff': dataset.dff_traces.loc[cell_specimen_id].dff,
'timestamps': dataset.ophys_timestamps}
df = pd.DataFrame(data)
dff_trace_sample = df.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
dff_trace_sample['timestamps'],
dff_trace_sample['dff']/dff_trace_sample['dff'].max()
)
# function to plot events traces
def plot_events_trace(ax, cell_specimen_id, initial_time, final_time):
# create a dataframe using the events trace from one selected cell
data = {'events': dataset.events.loc[cell_specimen_id].events,
'timestamps': dataset.ophys_timestamps}
df = pd.DataFrame(data)
events_trace_sample = df.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
events_trace_sample['timestamps'],
events_trace_sample['events']/events_trace_sample['events'].max()
)
# function to plot running speed
def plot_running(ax, initial_time, final_time):
running_sample = dataset.running_speed.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
running_sample['timestamps'],
running_sample['speed']/running_sample['speed'].max(),
'--',
color = 'gray',
linewidth = 1
)
# function to plot pupil width
def plot_pupil(ax, initial_time, final_time):
pupil_sample = dataset.eye_tracking.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
pupil_sample['timestamps'],
pupil_sample['pupil_width']/pupil_sample['pupil_width'].max(),
color = 'gray',
linewidth = 1
)
# function to plot licks
def plot_licks(ax, initial_time, final_time):
licking_sample = dataset.licks.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
licking_sample['timestamps'],
np.zeros_like(licking_sample['timestamps']),
marker = 'o',
markersize = 3,
color = 'black',
linestyle = 'none'
)
# function to plot rewards
def plot_rewards(ax, initial_time, final_time):
rewards_sample = dataset.rewards.query('timestamps >= @initial_time and timestamps <= @final_time')
ax.plot(
rewards_sample['timestamps'],
np.zeros_like(rewards_sample['timestamps']),
marker = 'd',
color = 'blue',
linestyle = 'none',
markersize = 12,
alpha = 0.5
)
def plot_stimuli(ax, initial_time, final_time):
stimulus_presentations_sample = dataset.stimulus_presentations.query('stop_time >= @initial_time and start_time <= @final_time')
for idx, stimulus in stimulus_presentations_sample.iterrows():
ax.axvspan(stimulus['start_time'], stimulus['stop_time'], color=stimulus['color'], alpha=0.25)
initial_time = 820 # start time in seconds
final_time = 860 # stop time in seconds
fig, ax = plt.subplots(2,1,figsize = (15,7))
plot_dff_trace(ax[0], cell_specimen_ids[3], initial_time, final_time)
plot_events_trace(ax[0], cell_specimen_ids[3], initial_time, final_time)
plot_stimuli(ax[0], initial_time, final_time)
ax[0].set_ylabel('normalized response magnitude')
ax[0].set_yticks([])
ax[0].legend(['dff trace', 'events trace'])
plot_running(ax[1], initial_time, final_time)
plot_pupil(ax[1], initial_time, final_time)
plot_licks(ax[1], initial_time, final_time)
plot_rewards(ax[1], initial_time, final_time)
plot_stimuli(ax[1], initial_time, final_time)
ax[1].set_yticks([])
ax[1].legend(['running speed', 'pupil','licks', 'rewards'])
<matplotlib.legend.Legend at 0x7fde16c52090>
Looking at the activity of this neuron, we can see that it was quite active during the experiment, but its activity does not appear to be reliably locked to image presentations. It does seem to loosely follow the animal's running speed, so it might be modulated by running.
We can load a different experiment, from a Vip mouse in OPHYS session 1, and make the same plot to compare response traces. This gives us a similar plot for a different class of inhibitory neuron, so we can compare their neural dynamics.
selected_experiment_table = experiment_table[(experiment_table.cre_line=='Vip-IRES-Cre')&
(experiment_table.session_number==1)]
dataset = bc.get_behavior_ophys_experiment(selected_experiment_table.index.values[1])
cell_specimen_ids = dataset.cell_specimen_table.index.values # a list of all cell ids
behavior_ophys_experiment_826587940.nwb: 100%|██████████| 397M/397M [00:37<00:00, 10.7MMB/s]
# create a list of all unique stimuli presented in this experiment
unique_stimuli = [stimulus for stimulus in dataset.stimulus_presentations['image_name'].unique()]
# create a colormap with each unique image having its own color
colormap = {image_name: sns.color_palette()[image_number] for image_number, image_name in enumerate(np.sort(unique_stimuli))}
colormap['omitted'] = (1,1,1)
# add the colors for each image to the stimulus presentations table in the dataset
dataset.stimulus_presentations['color'] = dataset.stimulus_presentations['image_name'].map(lambda image_name: colormap[image_name])
# we can plot the same information for a different cell in the dataset
initial_time = 580 # start time in seconds
final_time = 620 # stop time in seconds
fig, ax = plt.subplots(2,1,figsize = (15,7))
plot_dff_trace(ax[0], cell_specimen_ids[5], initial_time, final_time)
plot_events_trace(ax[0], cell_specimen_ids[5], initial_time, final_time)
plot_stimuli(ax[0], initial_time, final_time)
ax[0].set_ylabel('normalized response magnitude')
ax[0].set_yticks([])
ax[0].legend(['dff trace', 'events trace'])
plot_running(ax[1], initial_time, final_time)
plot_pupil(ax[1], initial_time, final_time)
plot_licks(ax[1], initial_time, final_time)
plot_rewards(ax[1], initial_time, final_time)
plot_stimuli(ax[1], initial_time, final_time)
ax[1].set_yticks([])
ax[1].legend(['running speed', 'pupil','licks', 'rewards'])
<matplotlib.legend.Legend at 0x7fddf4944e90>
We can see that the dynamics of a Vip neuron are also not driven by the visual stimuli. Aligning neural activity to different behavioral or experimental events might reveal what this neuron is driven by.