'Scene wheel' could help explain why eyewitness testimony is unreliable: U of T researchers
Researchers at the University of Toronto have developed an innovative tool for investigating how we perceive and remember visual experiences.
The new tool, called a “scene wheel,” promises to shed light on how accurately people construct mental representations of visual experiences for later retrieval – for example, how well an eyewitness recalls details of a crime or accident.
The wheel is a continuous, looping series of gradually changing images depicting typical domestic spaces: dining rooms, living rooms and bedrooms. The images are detailed and realistic, and vary continuously in subtle ways: tables subtly transform into desks, mirrors become framed pictures, walls become windows, and so on.
“We know that eyewitness testimony is not reliable,” says Gaeun Son, a PhD student in the Faculty of Arts & Science’s department of psychology who is lead author of the paper that describes the scene wheel methodology.
“With the new scene wheel, we can start to characterize the specific nature of those memory failures.”
Son’s co-authors on the paper are Assistant Professor Michael Mack and Associate Professor Dirk Bernhardt-Walther – both in the department of psychology.
“Studying how people perceive and remember the world requires careful control of the physical stimuli presented in experiments,” says Mack. “This kind of control isn’t difficult in experiments using simple stimuli like colour. But it’s very challenging for more complex, realistic scenes.”
Traditional experiments in this field involve test subjects performing tasks such as identifying which colour or arrangement of graphic symbols most resembles a previously viewed colour or graphic. While these methods provide some insight, their simplicity imposes a fundamental limit on what they can reveal.
The scene wheel, by contrast, moves into a whole new experimental realm by using highly realistic images that more closely simulate our day-to-day visual experiences – all while still providing the rigorous control needed.
The researchers used deep-learning methods from computer vision – specifically, generative adversarial networks (GANs) – to create the images and arrange them in a continuous “spectrum” that’s analogous to a 360-degree colour wheel.
“The success of this project is all thanks to the recent revolution in deep-learning fields – especially in GANs, which is the same sort of approach used in creating so-called ‘deep fake’ videos in which one person’s face is very realistically replaced with someone else’s,” Son says.
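To make the idea concrete, here is a minimal illustrative sketch – not the authors’ actual code – of how a GAN can yield a looping wheel of scenes: a generator network maps a latent vector to an image, and sampling latent vectors around a circle in a fixed two-dimensional plane of that latent space produces images that change gradually and wrap back on themselves. The names, latent dimensionality and placeholder generator below are assumptions for illustration.

```python
import numpy as np

LATENT_DIM = 512  # typical GAN latent size; the exact value is an assumption


def generate_scene(z: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained GAN generator (latent vector -> RGB image).

    A real scene wheel would call the trained network here; this stub just
    returns deterministic noise so the sketch runs on its own.
    """
    rng = np.random.default_rng(abs(hash(z.tobytes())) % (2**32))
    return rng.random((64, 64, 3))


def build_scene_wheel(n_steps: int = 360, radius: float = 1.0, seed: int = 0):
    """Sample latent vectors around a circle in the generator's latent space.

    Two orthonormal directions span a 2-D plane; walking 360 degrees around a
    circle in that plane yields scenes that vary gradually and loop back on
    themselves -- the wheel structure analogous to a colour wheel.
    """
    rng = np.random.default_rng(seed)
    centre = rng.standard_normal(LATENT_DIM)
    u = rng.standard_normal(LATENT_DIM)
    v = rng.standard_normal(LATENT_DIM)
    u /= np.linalg.norm(u)
    v -= (v @ u) * u              # make v orthogonal to u
    v /= np.linalg.norm(v)

    angles = np.linspace(0.0, 2 * np.pi, n_steps, endpoint=False)
    wheel = [generate_scene(centre + radius * (np.cos(a) * u + np.sin(a) * v))
             for a in angles]
    return angles, wheel


angles, wheel = build_scene_wheel()
print(len(wheel), wheel[0].shape)   # 360 frames, each a 64x64 RGB array
```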
To test whether their approach worked, the researchers had subjects view a still image of a scene from the wheel for one second, followed by a blank screen. Next, the subjects were presented with a scene similar to the one they had just viewed.
The subjects then altered the second image by moving their cursor in a circle around it. As they moved their cursor, the scene changed. Subjects were asked to stop their cursor when the image matched their memory of the original image.
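In analyzing such a trial, the key measurement is how far around the wheel the subject’s chosen scene landed from the studied one. The sketch below shows that bookkeeping under assumed conventions – the coordinate system and function names are hypothetical, not taken from the study – with the error wrapped because the wheel loops.

```python
import numpy as np


def cursor_to_wheel_angle(cursor_xy, centre_xy):
    """Map a cursor position circling the displayed scene onto a wheel angle (degrees).

    The angle indexes which frame of the looping scene wheel is shown.
    """
    dx = cursor_xy[0] - centre_xy[0]
    dy = cursor_xy[1] - centre_xy[1]
    return float(np.degrees(np.arctan2(dy, dx)) % 360.0)


def circular_error(studied_deg, reported_deg):
    """Signed memory error, wrapped to (-180, 180] because the wheel loops."""
    return (reported_deg - studied_deg + 180.0) % 360.0 - 180.0


# A cursor directly below the image centre corresponds to 90 degrees on the wheel
# (screen coordinates, where y grows downward -- an assumed convention).
reported = cursor_to_wheel_angle(cursor_xy=(100, 200), centre_xy=(100, 100))

# If the studied scene sat at 75 degrees, the subject's memory error is +15 degrees.
print(circular_error(75.0, reported))   # 15.0

# Wrapping is handled: an error straddling the 0/360 boundary stays small.
print(circular_error(350.0, 10.0))      # 20.0
```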
“With the scene wheel, we’ve provided a new experimental bridge that brings more of the richness of everyday experience into a controlled experimental setting,” says Son. “We anticipate that our method will allow researchers to test the validity of classic findings in the field that are based on experiments using simple stimuli.”
The approach could eventually lead to a “face wheel” that takes the place of police lineups, which are not particularly reliable for identifying individuals.
“Our method will allow for a better understanding of how precise that identification of individuals actually is,” Mack says.
The research was supported by the Natural Sciences and Engineering Research Council of Canada, the Canada Foundation for Innovation and the Ontario Research Fund.