For sighted animals, including ourselves, viewing conditions change continually. These changes occur at timescales ranging from hours (e.g., shifting sunlight intensity) to milliseconds (e.g., while navigating through the environment). Despite these changes, however slowly or quickly they occur, we interpret visual cues stably and reliably. Our goal is to understand how stable visual processing is achieved in dynamic environments.
Flies also exhibit invariant behavioral responses to stimuli of equal contrast across varying luminance (Ketkar and Sporar et al., 2020), making them a suitable model in which to investigate the neural correlates of stable visual processing. Our first aim is to find where in the visual circuitry luminance invariance is achieved. Photoreceptors themselves do not attain such invariance in dynamic environments, but we identified a corrective luminance signal downstream of the fly photoreceptors that provides the additional luminance gain required for behavioral invariance (Ketkar and Sporar et al., 2020). We want to unravel how this post-receptor gain control is implemented at the circuit level, where the responses of different cell types become distinct, and how contrast and luminance information are ultimately combined to drive behavior. Cell-type-specific access allows us to characterize the responses of individual neuron types and assess their behavioral roles. We complement in vivo experiments with computational modelling approaches, which further allow us to explore the algorithmic links between physiology and behavior.
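The core idea of a corrective luminance gain can be illustrated with a toy calculation (a minimal sketch for intuition only, not the published circuit model): a response that scales raw intensity differences depends on overall luminance, whereas dividing by background luminance yields a Weber-contrast-like signal that is the same in dim and bright conditions. All function names here are hypothetical.

```python
def raw_response(intensity, background):
    # Toy front-end: responds to the raw intensity difference,
    # so its output scales with overall luminance (not invariant).
    return intensity - background

def gain_corrected_response(intensity, background):
    # Hypothetical post-receptor correction: dividing by background
    # luminance gives a Weber-contrast-like, luminance-invariant signal.
    return (intensity - background) / background

# The same 20% contrast increment at a dim vs. a 100x brighter background:
dim = gain_corrected_response(1.2, 1.0)
bright = gain_corrected_response(120.0, 100.0)
# raw_response differs 100-fold (0.2 vs. 20.0), while the
# gain-corrected responses are equal (both ~0.2).
```

This is only an algorithmic caricature: it shows why some luminance-dependent gain stage downstream of the photoreceptors is needed for contrast-invariant behavior, not how fly circuits implement it.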
From Ketkar and Sporar et al, 2020
Different species live in distinct visual environments that challenge their visual systems with different scene statistics. A recent interest of ours lies in understanding how different visual systems evolved processing strategies matched to their specific environmental demands. Currently, we are exploring whether the key strategies underlying luminance invariance are shared across Drosophila species occupying different environmental niches.