Brain Observatory Monitor

This notebook demonstrates how to query a BrainObservatoryNwbDataSet object to find out what type of stimulus was on the monitor at a given acquisition frame during an experiment, and how to align that on-monitor stimulus with stimulus templates from other parts of a session (or other sessions in the same container).
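The cells below assume an experiment has been loaded through the BrainObservatoryCache; the manifest path and experiment ID in this sketch are placeholders, not values from this notebook.

```python
from allensdk.core.brain_observatory_cache import BrainObservatoryCache

# Placeholder manifest path and experiment ID -- substitute your own.
boc = BrainObservatoryCache(manifest_file='boc/manifest.json')

# Downloads the NWB file if needed and returns a BrainObservatoryNwbDataSet.
data_set = boc.get_ophys_experiment_data(ophys_experiment_id=501940850)
```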

This dataframe summarizes the epochs of the experiment, and their start and end acquisition frames:
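A minimal sketch of producing that dataframe from the data set loaded above; get_stimulus_epoch_table returns one row per stimulus block with its start and end acquisition frames.

```python
stim_epoch_table = data_set.get_stimulus_epoch_table()
stim_epoch_table
```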

To get the stimulus parameters shown on acquisition frame 1010:
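One hedged way to do this with the public dataset methods: find the epoch that contains the frame, then select the row of that stimulus's sweep table spanning the frame. The column names ('stimulus', 'start', 'end') are assumed to match the epoch table above.

```python
frame = 1010

# Which stimulus epoch contains this acquisition frame?
epoch = stim_epoch_table[(stim_epoch_table.start <= frame) &
                         (stim_epoch_table.end >= frame)].iloc[0]

# The sweep-level table for that stimulus carries the parameters
# (e.g. orientation for gratings, frame index for movies and noise).
stim_table = data_set.get_stimulus_table(epoch.stimulus)
params = stim_table[(stim_table.start <= frame) & (stim_table.end >= frame)]
params
```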

This is what was on the monitor during that acquisition frame:
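A sketch of displaying that frame from the stimulus template; the 'frame' column is an assumption here, as it appears for noise, natural scene, and movie stimuli.

```python
import matplotlib.pyplot as plt

template = data_set.get_stimulus_template(epoch.stimulus)
template_frame = template[int(params.iloc[0].frame), :, :]

plt.imshow(template_frame, cmap='gray', interpolation='none')
plt.axis('off')
plt.show()
```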

Sometimes it is nice to have this stimulus visualized exactly as it was shown on the monitor:
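The BrainObservatoryMonitor helper in allensdk.brain_observatory.stimulus_info can render a template frame the way the subject saw it. The sketch below assumes the frame came from a locally sparse noise epoch and that the lsn_image_to_screen and warp_image methods behave as in the AllenSDK examples; other stimulus types have analogous *_to_screen helpers.

```python
import matplotlib.pyplot as plt
import allensdk.brain_observatory.stimulus_info as si

monitor = si.BrainObservatoryMonitor()

# Place the small template onto a monitor-sized canvas (pre-warp)...
screen_image = monitor.lsn_image_to_screen(template_frame)

# ...then apply the spherical warp used during the experiment.
plt.imshow(monitor.warp_image(screen_image), cmap='gray', interpolation='none')
plt.axis('off')
plt.show()
```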

During a Brain Observatory experiment, the image on the screen (for example, above) is warped so that spherical coordinates can be displayed on a flat monitor. When a stimulus frame is displayed without this warping, it is useful to overlay a mask showing the eventual maximal extent of the stimulus on the monitor.
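A hedged sketch of the unwarped view with that mask overlaid; the mask and warp keyword arguments on show_image are assumptions based on the monitor helper described above, so check the stimulus_info module for the exact signature.

```python
# Unwarped frame with the eventual post-warp extent overlaid as a mask.
# The mask/warp keyword arguments here are assumptions.
fig, ax = plt.subplots()
monitor.show_image(screen_image, ax=ax, warp=False, mask=True)
plt.show()
```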

Using the map_stimulus_coordinate_to_monitor_coordinate function, we can associate a position on the monitor (either pre- or post-warp) across multiple types of stimuli, even though the templates for these stimuli may have different sizes and aspect ratios.
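A sketch of the coordinate mapping; the argument order ((y, x) point, monitor shape in pixels, stimulus type constant), the monitor dimensions, and the reverse-direction helper are assumptions to verify against the stimulus_info module.

```python
import allensdk.brain_observatory.stimulus_info as si

monitor_shape = (1200, 1920)   # monitor pixels (rows, columns); assumed value

# A point on the locally sparse noise template, mapped to monitor pixels...
monitor_point = si.map_stimulus_coordinate_to_monitor_coordinate(
    (8, 14), monitor_shape, si.LOCALLY_SPARSE_NOISE)

# ...and the same monitor location expressed in natural movie coordinates.
movie_point = si.map_monitor_coordinate_to_stimulus_coordinate(
    monitor_point, monitor_shape, si.NATURAL_MOVIE_ONE)

print(monitor_point, movie_point)
```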

For example, in the top row of the figure above, the same position on the monitor (pre-warp) is marked in two different stimuli (left: locally sparse noise, right: natural movie). Below, the same point is reproduced on the template image for each stimulus. In both cases, the relative location of the marked point (red) is the same in the pre-warp monitor image and the template image. This example demonstrates how to co-register locations between stimulus sets.

An optional translation argument repositions the stimulus frame relative to a gray background canvas; the translation is supplied in units of monitor pixels:
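For example (a sketch; the translation keyword and its pixel ordering follow the description above and should be checked against the *_image_to_screen signatures):

```python
# Shift the noise frame 100 monitor pixels in x and 50 in y relative to
# the gray background canvas; the translation keyword is an assumption.
shifted = monitor.lsn_image_to_screen(template_frame, translation=(100, 50))

plt.imshow(shifted, cmap='gray', interpolation='none')
plt.axis('off')
plt.show()
```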