Research Interests

“Scientific knowledge is a body of statements of varying degrees of certainty — some most unsure, some nearly sure, but none absolutely certain.” Richard Feynman

As Feynman's quote illustrates, most of what we know (in science and elsewhere) is known only to some level of uncertainty. From simple choices, like whether to carry an umbrella to work, to more complex ones, like whether to change jobs, most of our decisions must be made on the basis of uncertain information (for example, we don't know for sure whether it will rain later today). Knowing that our knowledge is uncertain allows us to take into account the less likely outcomes of our actions and to update our knowledge in light of new information.

In the laboratory we are particularly interested in how the different sources of uncertainty that affect our decisions are used to build confidence judgments about the correctness or appropriateness of those decisions. Confidence accompanies most of our decisions. Together with accuracy and reaction time, confidence has been referred to as one of the three pillars of choice behavior. Arguably, it is the most important of the three, since confidence is often all we know about the accuracy of our decisions, yet it is also the least well understood.

In the laboratory we study the behavioral and computational basis of confidence. We make progress by conducting behavioral experiments (both in the lab and at larger scale online) and by developing theories, computational models, and simulations that capture human behavior in these tasks. We care about many aspects of confidence, including:

  • What is the role of confidence in decisions composed of multiple sub-decisions?
  • What is the most efficient way to communicate our confidence judgments to other decision makers (e.g., verbally or numerically)?
  • How malleable are confidence reports to feedback? Can we change people's confidence?
  • How can confidence be computed with biologically plausible neural networks?
  • How can we endow artificial decision makers with rudimentary notions of confidence and metacognition?

Confidence is also a good model for studying how beliefs are formed, for at least two reasons. First, confidence is clearly subjective, varying markedly among individuals. For example, we have all met students who regularly failed their exams despite being sure they would pass, and others who scored well despite being sure they would fail. Second, as experimenters we can compute the confidence that a decision maker should have in a specific task if confidence were calculated according to the theory of probability. The difference between what confidence is and what it should be offers a window into how a subjective report is constructed.

In addition to confidence, we are interested in many other research topics related to decision making, artificial intelligence and brain computation. We collaborate extensively with experimentalists who make physiological recordings in monkeys and humans. In general, we like to interact with data, from spiking activity to large behavioral datasets and text corpora.

Below we summarize some projects that are representative of the type of research that we do.

Kernels of Confidence

When we ask people to report their confidence in a decision, we are asking them to report the probability that the option they chose is correct, given the evidence they used to decide. This is what in Bayesian analysis is called the 'posterior probability'. There is no consensus among researchers, however, about the degree to which confidence corresponds to the posterior probability as defined by Bayes' rule.
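For a binary decision, this normative notion of confidence can be written down in a few lines. The sketch below uses hypothetical numbers; the function name and likelihood values are illustrative, not taken from any of our tasks.

```python
# Minimal sketch: confidence as the posterior probability of the chosen
# option, computed with Bayes' rule for a two-alternative decision.
# All numbers are hypothetical.

def posterior_confidence(prior_a, likelihood_e_given_a, likelihood_e_given_b):
    """Posterior probability that option A is correct after observing evidence e."""
    prior_b = 1.0 - prior_a
    joint_a = prior_a * likelihood_e_given_a
    joint_b = prior_b * likelihood_e_given_b
    return joint_a / (joint_a + joint_b)

# With a neutral prior and evidence three times as likely under A than B,
# an ideal observer should report 75% confidence in A:
conf = posterior_confidence(prior_a=0.5,
                            likelihood_e_given_a=0.6,
                            likelihood_e_given_b=0.2)
print(conf)  # 0.75
```

Whether human confidence reports actually track this quantity, or systematically depart from it, is precisely the open question.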

To study this question, we developed a paradigm in which people had to make a decision and, simultaneously, report their confidence in it. The novelty of the design was that we used reverse-correlation techniques to study how sensory information contributes to decision and confidence at different points in time. In one experiment, people had to decide which of two stimuli was brighter on average. The luminance of each stimulus fluctuated over time, which allowed us to estimate the influence of each sample of evidence on the decision and on confidence. We observed that the decision was based on the difference in luminance between the two stimuli. Confidence, on the other hand, was almost exclusively determined by the luminance of the stimulus chosen as brighter, practically ignoring the luminance of the other stimulus. These results suggest a dissociation between the evidence used to decide and the evidence used to estimate the accuracy of those decisions.
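The reverse-correlation logic can be illustrated with a toy simulation (all parameters hypothetical, and the simulated observer is a deliberate simplification of real participants): average the fluctuating luminance samples conditioned on the choice; the resulting "kernel" estimates how much each moment of evidence influenced the decision.

```python
# Toy reverse-correlation sketch with a simulated observer.
# Parameters (trial counts, luminance statistics) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_frames = 5000, 10

# Two stimuli whose luminance fluctuates frame by frame around a common mean.
lum_a = rng.normal(0.5, 0.1, (n_trials, n_frames))
lum_b = rng.normal(0.5, 0.1, (n_trials, n_frames))

# Simulated observer: chooses A whenever the summed luminance difference is positive.
diff = lum_a - lum_b
choose_a = diff.sum(axis=1) > 0

# Psychophysical kernel: mean evidence difference conditioned on the choice.
# A positive value at frame t means luminance at t pushed choices toward A.
kernel = diff[choose_a].mean(axis=0) - diff[~choose_a].mean(axis=0)
print(kernel)  # positive at every frame for this observer
```

The same conditioning can be done on high- versus low-confidence trials, which is how the dissociation above becomes visible: the choice kernel weighs both stimuli, while the confidence kernel tracks mainly the chosen one.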

Learning statistical regularities for decision making

From Zylberberg et al., Neuron 2018.

One salient aspect of human cognition is the capacity to use observations to build internal models that capture the statistical regularities of the environment. For instance, when touring a new city we may gradually learn a model of the physical environment that allows us not to get lost. We recently conducted an experiment to study how people learn the statistical regularities of an environment from observations and past decisions.

Participants made difficult decisions about the direction of dynamic random dot motion. Across blocks of 15–42 trials, the base rate favoring leftward or rightward motion varied. Participants were not informed of the base rate or of their choice accuracy, yet they gradually biased their choices and thereby increased the accuracy of, and their confidence in, their decisions. They achieved this by updating their knowledge of the base rate after each decision, using a counterfactual representation of confidence that simulates a neutral prior. This strategy is consistent with Bayesian updating of belief and suggests that humans represent both true confidence, which incorporates the evolving belief about the prior, and counterfactual confidence, which discounts the prior.
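The flavor of this updating scheme can be sketched in a few lines. This is our simplification for illustration, not the published model: the observer tracks the probability that the block favors rightward motion and, after each choice, updates it using the confidence computed under a neutral (50/50) prior. The 80% base rate and the confidence values are hypothetical.

```python
# Toy sketch (a simplification, not the model in the paper): belief
# updating about a block's base rate from counterfactual confidence.

def update_base_rate_belief(p_right, counterfactual_conf, chose_right):
    """Bayesian update of the belief that the block favors rightward motion.

    counterfactual_conf: probability that the choice was correct assuming
    a neutral 50/50 prior over directions (hypothetical values below).
    """
    # Probability that this trial's motion was rightward, given the choice.
    p_motion_right = counterfactual_conf if chose_right else 1.0 - counterfactual_conf
    base = 0.8  # hypothetical: the favored direction occurs on 80% of trials
    lik_right = base * p_motion_right + (1 - base) * (1 - p_motion_right)
    lik_left = (1 - base) * p_motion_right + base * (1 - p_motion_right)
    num = p_right * lik_right
    return num / (num + (1 - p_right) * lik_left)

p = 0.5  # start undecided about the block's bias
for conf, chose_right in [(0.9, True), (0.8, True), (0.6, False)]:
    p = update_base_rate_belief(p, conf, chose_right)
print(round(p, 3))  # belief that the block favors rightward motion
```

Two confident rightward choices push the belief up; a weak leftward choice pulls it back only slightly, because low-confidence trials carry little information about the base rate.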

The brain's Turing machine

From Zylberberg et al., Trends in Cognitive Sciences 2011.

Cognitive tasks are often composed of a sequence of decisions, each one passing information to the next. We have explored the possibility that the neural mechanisms that underlie simple perceptual decisions also enable the execution of more complex cognitive tasks. The key idea was that just as groups of neurons in associative cortices can accumulate evidence for and against the execution of specific motor actions (e.g., saccade or reach to a target), other neurons could accumulate evidence for and against the selection of internal non-motor actions, like deciding which information to store or retrieve from memory or selecting a rule about how to respond to a subsequent stimulus. In theoretical studies and simulations, we showed that this simple idea endows neural networks with the computational power of a Turing machine, while relying exclusively on operations derived from neurobiology.
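The core ingredient can be sketched as a race between evidence accumulators, one per candidate action, where the "actions" need not be motor. This is an illustrative simplification of the idea, not the network model itself; the action labels and parameters are hypothetical.

```python
# Illustrative race model: each candidate action accumulates noisy
# evidence, and the first accumulator to reach a bound selects that
# action. The same mechanism can select internal, non-motor operations
# (hypothetical labels below), which is the key idea in the text.
import random

def race_to_bound(drifts, bound=1.0, noise=0.3, seed=1):
    """Return the action whose accumulator first crosses the bound.

    drifts: dict mapping action label -> mean evidence per time step.
    """
    rng = random.Random(seed)
    totals = {action: 0.0 for action in drifts}
    while True:
        for action, drift in drifts.items():
            totals[action] += drift + rng.gauss(0, noise)
            if totals[action] >= bound:
                return action

# Evidence mildly favors routing the stimulus into memory over responding now.
chosen = race_to_bound({"store_in_memory": 0.05, "respond_now": 0.01})
print(chosen)
```

Chaining such selections, where one internal action sets up the evidence for the next, is what gives the scheme its sequential, program-like character.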

We also explored the neurophysiological basis of tasks composed of a sequence of decisions. In one study in collaboration with Pieter Roelfsema’s lab, macaque monkeys had to covertly navigate a decision tree with multiple branching points while we recorded neuronal activity in visual cortical areas V1 and V4. We found a first phase of decision making in which neuronal activity increased in parallel along multiple branches of the decision tree. This was followed by an integration phase where the optimal overall strategy crystallized as the result of interactions between local decisions.

Why can't we do two things at the same time?

A ubiquitous aspect of brain function is its quasi-modular and massively parallel organization. The paradox is that this extraordinary parallel machine is incapable of performing even a single large arithmetic calculation. How come it is so easy to recognize moving objects, yet so difficult to multiply 357 by 289? And why, if we can simultaneously coordinate walking, group contours, segment surfaces, talk and listen to noisy speech, can we only make one decision at a time?

We simulated a large-scale spiking neural network model to explore the emergence of serial processing in the primate brain. In the model, precise sensory-motor mapping relies on a network capable of flexibly interconnecting processors and rapidly changing its configuration from one task to another. Simulations show that, when presented with dual-task stimuli, the network exhibits parallel processing at peripheral sensory levels, together with a memory buffer capable of keeping the results of sensory processing on hold. However, control mechanisms enforce serial performance at the level of a 'routing' circuit required to flexibly map stimuli onto potential motor actions. The simulations suggest that seriality in dual-task (or multi-task) performance results from inhibition within the control networks needed for precise 'routing' of information flow across a vast number of possible task configurations.

From Zylberberg et al., PLoS Computational Biology 2012.