Our ability to perceive motion arises from a hierarchy of motion-tuned cells in visual cortices. Signatures of V1 and MT motion tuning emerge in artificial neural networks trained to report speed and direction of sliding images (Rideaux & Welchman, …
Visually understanding the world requires us to interpret surface properties like shape, depth, and reflectance from retinal images—with little or no access to the ground truth about these properties from which to learn. Previous work showed that …
Using visual perception as a case study, I will propose that questions in cognitive science are not passed from one discipline to the next, but are conversations among increasingly many disciplines. The question of 'how vision works' has spread from …
A photograph or painting of a glazed vase might consist of irregularly shaped bright patches, small white dots, and large low-contrast gradients—yet we immediately see these as reflections on the glossy surface, sharp highlights, and the smooth …
Deep neural networks (DNNs) have revolutionised computer vision, now recognising objects and faces as accurately as humans can. An initial wave of fMRI and electrophysiological studies around 2015 showed that features in object-recognition-trained …
Models of vision have come far in the past 10 years. Deep neural networks can recognise objects with near-human accuracy, and predict brain activity in high-level visual regions. However, most networks require supervised training using ground-truth …
Computational visual neuroscience has come a long way in the past 10 years. For the first time, we have fully explicit, image-computable models that can recognise objects with near-human accuracy, and predict brain activity in high-level visual …
Despite the impressive achievements of supervised deep neural networks, brains must learn to represent the world without access to ground-truth training data. We propose that perception of distal properties arises instead from unsupervised learning …
Level Up Human is a podcast panel show in which scientists compete to pitch improvements to the human design. In this episode, I pitch a bugfix for human vision: the ability to see the polarisation of light.
Perceiving the glossiness of a surface is a challenging visual inference that requires disentangling the contributions of reflectance, lighting, and shape to the retinal image. How do our visual systems develop this ability? We suggest that brains …