Recently developed methods for video analysis, especially models for pose estimation and behavior classification, are transforming behavioral quantification to be more precise, scalable, and reproducible in fields such as neuroscience and ethology. These tools overcome long-standing limitations of manual scoring of video frames and traditional 'center of mass' tracking algorithms to enable video analysis at scale. The expansion of open-source tools for video acquisition and analysis has led to new experimental approaches to understand behavior. Here, we review currently available open-source tools for video analysis and discuss how to set up these methods for labs new to video recording. We also discuss best practices for developing and using video analysis methods, including community-wide standards and critical needs for the open sharing of datasets and code, more widespread comparisons of video analysis methods, and better documentation for these methods, especially for new users. We encourage broader adoption and continued development of these tools, which have tremendous potential for accelerating scientific progress in understanding the brain and behavior.

Traditional approaches to analyzing video data have involved researchers watching video playback and noting the times and locations of specific events of interest. These analyses are very time-consuming, require expert knowledge of the target species and experimental design, and are prone to user bias (Anderson and Perona, 2014). Video recordings are often made for many different animals and behavioral test sessions, but only reviewed for a subset of experiments. Complete sets of videos are rarely made accessible in published studies, and the analysis methods are often vaguely described. There are variations in scoring criteria across researchers and labs, and even over time for a single researcher. Collectively, these issues present major challenges for research reproducibility, and the difficulty and cost of manual video analysis have led to the dominance of easy-to-use measures (lever pressing, beam breaks) in the neuroscience literature, which has limited our understanding of brain-behavior relationships (Krakauer et al., 2017).

For example, 'reward seeking' has been a popular topic in recent years and is typically measured using beam breaks between response and reward ports located inside an operant arena (e.g., Cowen et al., 2012; Feierstein et al., 2006; Lardeux et al., 2009; van Duuren et al., 2009). By relying only on the discrete times when animals make a choice and receive a reward, it is not possible to describe how the animal moves during a choice or how it collects a reward. Animals may not move in the same way to a reward port when they expect a larger or smaller reward (e.g., Davidson et al., 1980). This could lead to, for example, a neural recording study labeling a cell as 'reward encoding' when it actually reflects differences in movement.

Commercial products (e.g., Ethovision by Noldus, Any-Maze by Stoelting) and open-source projects (e.g., JAABA: Kabra et al., 2013; SCORHE: Salem et al., 2015; OptiMouse: Ben-Shaul, 2017; ezTrack: Pennington et al., 2019) are available for semi-automated annotation and tracking of behaviors. These 'center of mass' methods track animals based on differences between the animals and the background color or luminance. This can be challenging to do in naturalistic settings or for species or strains that do not have a uniform color (e.g., Long-Evans rats). They provide estimates of the overall position of an animal in its environment and can be used to measure where it is, the direction of its movements, and how fast it is moving. More sophisticated versions of these products may also detect the head and tail of common laboratory species such as rodents or zebrafish and draw inferences from the shape and location of the animal to classify a small subset of an animal's behavioral repertoire. However, these simpler tracking methods cannot account for movements of discrete sets of body parts (e.g., head scanning in rodents, which is associated with a classic measure of reward-guided decisions called 'vicarious trial-and-error' behavior; see Redish, 2016, for a review).

More advanced analyses could be used to quantify movements across many pixels simultaneously in video recordings. For example, Stringer et al., 2019, used dimensionality reduction methods to study the spontaneous coding of visual- and movement-related information in the mouse visual cortex in relation to facial movements. Musall et al., 2019, used video recordings of motion data from several parts of the face of mice as they performed a decision-making task and related the measures from the video recordings to cortical imaging data.
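To make the 'center of mass' idea concrete, here is a minimal background-subtraction sketch in Python/NumPy. It is an illustration of the general technique, not the implementation used by any of the products cited above; the luminance threshold, frame rate, and pixels-per-centimeter calibration are assumed placeholder values.

```python
import numpy as np

def track_center_of_mass(frames, background, threshold=30):
    """Estimate the animal's position in each frame by background subtraction.

    frames: (T, H, W) grayscale video as an integer array.
    background: (H, W) reference image of the empty arena.
    Returns a (T, 2) array of (row, col) centroids; NaN where no pixel
    differs from the background by more than `threshold`.
    """
    centroids = np.full((len(frames), 2), np.nan)
    for t, frame in enumerate(frames):
        # Pixels that differ strongly from the empty-arena image are
        # assumed to belong to the animal.
        mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
        if mask.any():
            rows, cols = np.nonzero(mask)
            centroids[t] = rows.mean(), cols.mean()
    return centroids

def speeds(centroids, fps=30.0, px_per_cm=10.0):
    """Frame-to-frame speed (cm/s) from a (T, 2) centroid trajectory."""
    step = np.linalg.norm(np.diff(centroids, axis=0), axis=1)
    return step * fps / px_per_cm
```

Because this recovers only a single point per frame, movements of individual body parts (a head scan, a reach, a whisk) are invisible to it, which is precisely the limitation that pose-estimation tools address.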