Research
Our Philosophy
Computational: A model-based research approach
We believe that scientific progress is facilitated when we define our questions and hypotheses precisely, through the use of formal mathematical models. Are different aspects of a face (for example, identity and expression) represented independently? Are face parts represented holistically? It depends on what one means by “independently” and “holistically.” Formal theories from psychology (e.g., general recognition theory, or GRT) and neuroscience can provide precise definitions for such ambiguous concepts, and show us how to measure them experimentally, in both behavioral and neuroimaging studies. What are the mechanisms by which depression influences the perception of face expressions? What mechanisms underlie the deficit in recognizing faces from a different race? We can use models (e.g., perceptual observer models and encoding models) to formalize multiple hypotheses and decide between them. For us, the use of computational models is not a gimmick. We see it as necessary to move the field forward.
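To make this concrete, here is a minimal, hypothetical sketch of how a formal theory like GRT turns “independently” into something measurable: perceptual independence holds when the perceptual effects of two dimensions are statistically independent, which can be checked in simulated (or estimated) perceptual distributions. All distributions and parameter values below are invented for illustration.

```python
# A toy illustration of how GRT makes "independence" precise.
# All parameter values here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def percepts(mean, cov, n=10_000):
    """Simulate perceptual effects for one stimulus as a bivariate
    Gaussian over two dimensions (e.g., identity and expression)."""
    return rng.multivariate_normal(mean, cov, size=n)

# Perceptual independence: the perceptual effects on the two
# dimensions are statistically independent (zero covariance).
independent = percepts(mean=[0, 0], cov=[[1, 0], [0, 1]])

# A violation of independence: same means, but correlated
# perceptual effects (non-zero covariance).
dependent = percepts(mean=[0, 0], cov=[[1, 0.6], [0.6, 1]])

for name, x in [("independent", independent), ("dependent", dependent)]:
    r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    print(f"{name}: correlation between dimensions = {r:.2f}")
```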
Cognitive: Explaining complex adaptive behavior through simple mechanisms
Our ultimate goal is to understand the complex behavior that helps people adapt to their environment. We believe that this complexity arises from the interaction of multiple, simpler mechanisms of cognitive computation. For example, simple mechanisms that modify face representations (such as gain) can help explain the influence of category learning on face perception, and the influence of depression on expression perception. Simple mechanisms of learning and representation can in part explain how we learn to group objects into categories.
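As a toy illustration of the kind of simple mechanism we have in mind, the sketch below shows how a gain change along one dimension of a face space stretches perceptual distances along the category-relevant dimension while leaving others untouched. The face space and gain values are hypothetical, not fitted to data.

```python
# A hypothetical sketch of how a simple gain mechanism can reshape a
# face representation after category learning: amplifying the
# category-relevant dimension stretches distances along it.
import numpy as np

# Toy 2-D face representations: dimension 0 is category-relevant
# (it separates the learned groups), dimension 1 is not.
faces = np.array([[0.4, 0.5],
                  [0.6, 0.5],   # differs from face 0 on the relevant dim
                  [0.4, 0.7]])  # differs from face 0 on the irrelevant dim

gain = np.array([2.0, 1.0])     # amplify the relevant dimension only
tuned = faces * gain

def dist(x, i, j):
    return np.linalg.norm(x[i] - x[j])

print("relevant-dim pair:  ", dist(faces, 0, 1), "->", dist(tuned, 0, 1))
print("irrelevant-dim pair:", dist(faces, 0, 2), "->", dist(tuned, 0, 2))
```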
Neuroscience: Brain computation matters
The same observed behavior can be explained through multiple possible cognitive mechanisms (the identifiability problem). How can we decide which of them is closer to the truth? We believe that when alternative cognitive mechanisms are implemented using principles of neural computation, it becomes easier to decide between them by using what we already know about the brain (are the mechanisms plausible?) and by testing them against a wider range of data (can the mechanisms predict both behavioral and neural data?).
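The following sketch illustrates the identifiability problem in a simple signal detection setting (our choice of example for this page, not a result from our work): an equal-variance and an unequal-variance observer model make identical predictions for a single hit/false-alarm pair, and only richer data, such as a second criterion or neural measurements, pulls them apart. All parameter values are hypothetical.

```python
# Two different signal detection models reproduce the same
# hit/false-alarm pair; richer data are needed to tell them apart.
from scipy.stats import norm

def predict(d, s, c):
    """Hit and false-alarm rates for a Gaussian SDT model with
    signal ~ N(d, s) and noise ~ N(0, 1), at criterion c."""
    hit = 1 - norm.cdf((c - d) / s)
    fa = 1 - norm.cdf(c)
    return round(hit, 3), round(fa, 3)

c = 0.5
print("equal variance:  ", predict(d=1.5, s=1.0, c=c))
print("unequal variance:", predict(d=2.5, s=2.0, c=c))

# Identical predictions at this criterion; shift the criterion and
# the two mechanisms come apart:
c2 = -0.5
print("equal variance at new criterion:  ", predict(1.5, 1.0, c2))
print("unequal variance at new criterion:", predict(2.5, 2.0, c2))
```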
Face perception: The ideal testbed
In our daily lives, we must constantly process information from faces because of their importance for social interaction. The full complexity of our cognitive skills is revealed in face processing, from simply recognizing a friend to trying to figure out what they are thinking. At the same time, faces are relatively simple objects, and their properties have been widely studied in medicine, anthropology, and computer graphics. This makes it easy to rigorously manipulate faces in research through 3D modeling. Face research is both rigorous and impactful, which is why we focus on it.
Our Research
How do different encoding strategies promote adaptive behavior?
Two objects or features that are observed together (e.g., face expression and identity) can be encoded independently or cohesively. Cognitive theories suggest that which form of encoding the brain uses depends on learned environmental regularities. Once such environmental structure is learned, it can greatly facilitate new learning and generalization. For example, learning a separate representation for a group of faces greatly facilitates fast learning of new rules about that group. Learning that two events independently cause an effect leads to summing their influences when they happen together (as in the delta-rule sketch after this paragraph), which is optimal, but there are conditions under which such optimality can break down. How two features are encoded can thus give us clues about how they are used for adaptive behavior. Both independent and integrated encoding can be adaptive in different situations. For example, vision theories have proposed that face features like identity and expression are processed independently, which would facilitate identifying one while ignoring the other (i.e., invariant recognition). On the other hand, affective science theories have proposed that information from such features is integrated to facilitate recognition in ambiguous social situations. Using model-based definitions and analyses, we have found that identity and expression are encoded in an integrated manner. Similarly, expectations about which face motions look natural depend on the shape of the face. It seems that the complexity and ambiguity of social situations drive integrated or context-specific encoding of face features in many situations. More generally, we believe that encoding is adaptive, depending on mechanisms of learning and cognitive control.
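The delta-rule sketch referenced above is given here, assuming a standard Rescorla-Wagner learner (a textbook model, not necessarily the one we use in our studies): two cues trained separately to predict an outcome acquire weights that sum when the cues are presented together.

```python
# A hypothetical Rescorla-Wagner sketch of the summation idea: two
# cues trained separately to predict an outcome end up with weights
# that sum when the cues appear together.
import numpy as np

w = np.zeros(2)      # associative weights for cues A and B
alpha = 0.1          # learning rate

trials = [([1, 0], 1.0)] * 100 + [([0, 1], 1.0)] * 100
for x, outcome in trials:
    x = np.asarray(x, dtype=float)
    prediction = w @ x
    w += alpha * (outcome - prediction) * x   # delta-rule update

print("w_A, w_B:", np.round(w, 2))
print("compound prediction for A+B:", round(w @ np.array([1.0, 1.0]), 2))
# The compound prediction is roughly double the trained outcome:
# the summed influence of two independently trained causes.
```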
How does learning influence face encoding?
There is strong evidence that learning can influence visual representations. For example, we have found that categorization training influences how face identities are represented, and that a simple gain mechanism can explain these results. We are currently expanding this line of research to understand what conditions produce the so-called other-race effects in face perception. Our hypotheses are that different learning conditions produce different aspects of the other-race effects, and that simple mechanisms such as gain can account for the results. We are also studying how non-linear discrimination tasks, which are learned more easily with integrated representations of feature combinations, may influence the encoding of those features. Do previously independent features become more integrated during a non-linear task, and at what stage of processing? Is it possible to train people to encode memories that depend on a particular context? We expect to answer these questions in the near future.
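To illustrate why non-linear tasks favor integrated representations, here is a hypothetical sketch: a linear readout of two independent features cannot solve an XOR-like discrimination, but adding a conjunctive (integrated) feature makes the same task linearly separable. The perceptron and features below are invented for illustration.

```python
# A linear readout of two independent features cannot solve XOR,
# but adding a conjunctive feature can.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR labels (a non-linear discrimination)

def train_perceptron(features, y, epochs=100):
    """Simple perceptron with a bias term; returns training accuracy."""
    F = np.hstack([features, np.ones((len(features), 1))])
    w = np.zeros(F.shape[1])
    for _ in range(epochs):
        for f, t in zip(F, y):
            pred = 1 if f @ w > 0 else 0
            w += (t - pred) * f
    return np.mean([(1 if f @ w > 0 else 0) == t for f, t in zip(F, y)])

independent = X                                      # separate features only
integrated = np.hstack([X, X[:, :1] * X[:, 1:2]])    # add the conjunction

print("accuracy, independent features:    ", train_perceptron(independent, y))
print("accuracy, with conjunctive feature:", train_perceptron(integrated, y))
```

The empirical question in our studies is whether training on such tasks makes the underlying encoding itself more integrated, not just the readout.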
How do changes in affect influence face encoding?
Depression is accompanied by deficits in the perception of face expression. Using a computational psychiatry approach, we have shown that a reduction in gain for positive expressions may underlie this effect, and that it reflects a reduction in signal-to-noise ratio rather than a change in how face information is used. Besides continuing our research on depression, we are interested in understanding how learning the affective value of an object might influence its encoding. Are faces encoded differently if they are associated with negative or positive events?
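A minimal sketch of the gain idea, with invented parameter values: reducing the gain on positive-expression evidence lowers the effective signal-to-noise ratio, and hence performance, even though the decision rule and criterion stay fixed.

```python
# Lowering the gain on positive-expression evidence reduces the
# signal-to-noise ratio without changing how the observer uses the
# information (same decision rule, same criterion).
import numpy as np

rng = np.random.default_rng(1)

def detect_happy(gain, n=100_000, signal=1.5, noise_sd=1.0, criterion=0.75):
    """Proportion of 'happy' responses to happy faces, under a fixed
    decision rule; only the gain on the signal changes."""
    evidence = gain * signal + rng.normal(0, noise_sd, size=n)
    return np.mean(evidence > criterion)

print("typical gain (1.0): hit rate =", detect_happy(gain=1.0))
print("reduced gain (0.5): hit rate =", detect_happy(gain=0.5))
# Same criterion, same decision rule: performance drops only because
# the effective signal-to-noise ratio (gain * signal / noise_sd) drops.
```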
How does cognition influence face encoding?
Whether perception depends on high-level cognitive states, such as awareness or metacognition, is a widely studied topic that still does not have a clear answer. We have used a model-based approach to address this question in a rigorous manner, finding that awareness and metacognition strongly influence perception and memory, but are not necessary for their operation. We are currently interested in how more specific mechanisms of cognitive control may influence perceptual and memory encoding. For example, how does error monitoring influence memory for faces? How does it influence perceptual encoding of face identities and expressions?