Abstract

Visual search is a complex behavior influenced by many factors. To control for these factors, many studies use highly simplified stimuli. However, the statistics of these stimuli are very different from the statistics of the natural images that the human visual system is optimized by evolution and experience to perceive. Could this difference change search behavior? If so, simplified stimuli may contribute to effects typically attributed to cognitive processes, such as selective attention. Here we use deep neural networks to test how optimizing models for the statistics of one distribution of images constrains performance on a task using images from a different distribution. We train four deep neural network architectures on one of three source datasets (natural images, faces, and X-ray images) and then adapt them to a visual search task using simplified stimuli. This adaptation produces models that exhibit performance limitations similar to humans, whereas models trained on the search task alone exhibit no such limitations. However, we also find that deep neural networks trained to classify natural images exhibit similar limitations when adapted to a search task that uses a different set of natural images. Therefore, the distribution of data alone cannot explain this effect. We discuss how future work might integrate an optimization-based approach into existing models of visual search behavior.
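For readers who want a concrete picture of the adaptation step described above, the sketch below shows one common way to repurpose a network pretrained on a source dataset for a visual search task framed as binary target-present/target-absent classification. This is illustrative only, not the code from the paper: the AlexNet backbone, the two-class head, and the optimizer settings are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): adapt a network pretrained on a
# source distribution (here, ImageNet natural images) to a search task.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on the source dataset.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final classification layer with a new head for the search task:
# two outputs, "target present" vs. "target absent".
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

# Fine-tune on search-task images; the loader is assumed to yield
# (images, labels) batches of simplified search displays.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```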

Download paper here

BibTeX:

@article{nicholson2022could,
  title={Could simplified stimuli change how the brain performs visual search tasks? A deep neural network study},
  author={Nicholson, David A and Prinz, Astrid A},
  journal={Journal of Vision},
  volume={22},
  number={7},
  pages={3--3},
  year={2022},
  publisher={The Association for Research in Vision and Ophthalmology}
}
