A traditional and pervasive view in neuroscience is that memory and computation are wholly supported by weighted connections between neurons (synapses). As in a 'neural net' architecture, computation occurs when a pattern of activity is instantiated at an input layer, initiating a chain reaction: each neuron's activity affects the others to which it is 'connected' by a synapse, the reaction propagates through the network, and the output of the computed function is given by the resulting values of the neurons designated as the 'output layer'. Experience-driven memory is constituted by changes to the weights on the connections, that is, to the 'strength' of each synapse.
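The synaptocentric picture can be made concrete with a toy sketch (illustrative only: the particular weights, sizes, and sigmoid nonlinearity are arbitrary choices, not claims about biology):

```python
import math

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each downstream neuron's activity is a weighted sum of its inputs,
    # passed through a nonlinearity. On the synaptocentric view, the
    # weights ARE the memory; the neurons themselves are stateless relays.
    return [sigmoid(sum(w * a for w, a in zip(row, inputs)))
            for row in weights]

# Toy network: 3 input neurons -> 2 hidden neurons -> 1 output neuron.
w_hidden = [[0.5, -0.2, 0.1],
            [0.3,  0.8, -0.5]]
w_output = [[1.0, -1.0]]

inputs = [1.0, 0.0, 1.0]       # activity pattern at the input layer
hidden = layer(inputs, w_hidden)
output = layer(hidden, w_output)  # the 'answer' of the computation
```

On this picture, 'learning' would consist entirely of nudging the numbers in `w_hidden` and `w_output`; nothing inside any unit changes.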
Many considerations, however, militate against this 'synaptocentric' view of memory and computation. One is the wide-ranging evidence that single-cell organisms and cell collectives without stable synapses can learn, altering their later behavior based on past experience (Gershman 2023). If cells have access to mechanisms for storage and computation that do not depend on synaptic connections, it would be surprising if neurons made no use of them. Another is direct evidence for synapse-independent learning in single neurons (Johansson et al. 2014). More theoretically, for many of the particular memory-based computations that animals execute, it is not yet clear that they can be implemented in a neural net architecture given realistic assumptions about noise and the number of available units (Gallistel & King 2011). These considerations are leading an increasing number of scientists to conclude that some share of memory and computation is accomplished internal to the cell by single neurons (Gallistel & Balsam 2014; Abraham, Jones, & Glanzman 2019; Akhlaghpour 2022; Gershman 2023).
Here I want to assume for a moment that this is right, and consider what the consequences for cognitive neuroscientists would be. Sure, there's usually one page of the textbook where we talk about synaptic memory mechanisms and Aplysia, but most of us spend our time doing mass-action measurements like EEG or fMRI, which are equally far removed from a single neuron or its synapses. Does it actually matter for our day-to-day research lives whether much of computation and memory happens internal to the cell rather than in its connections?
One reason I believe it does is that metaphors exert a powerful grip on scientists' reasoning and hypothesis formation. I work on language, and most of the students and colleagues I interact with have never recorded from a single neuron and never will, but nevertheless people are constantly talking about what will cause neurons to fire or 'activate', synaptic strengthening, neurons firing in synchrony, neurons being inhibited, etc. etc. In other words, although we are not actually working at the neuron level, our reasoning about our mass-action measures is suffused with tacit assumptions about what individual neurons are doing, assumptions inherited from the synaptocentric framework.
If single-neuron memory and computation is right, cognitive neuroscientists should be shifting our collective metaphor. The new mental picture supporting our reasoning should be one of billions of interconnected computers, like the modern internet. We should think of neurons less like feature detectors or mindless 'daemons', and more like thoughtful and independent people who once lived on their own (unicellular organisms), but who then decided to get together as a community so that they could accomplish more. When you start living as a community, one of the major needs is, fittingly, communication. And so in this alternative metaphor, neural firing is no longer conceived as the 'activation' or detection of a feature, but rather as one of these thinking, reasoning individuals communicating the results of their thinking to the others--or communicating things that they remember, which they think might now be useful to the rest of the community in achieving their common goals.
This metaphor in fact fits better with several important insights in neuroscience over recent decades. One is the importance of information theory--a model of communication--for understanding variation in neural spiking (Rieke et al. 1999). While the majority of single-neuron recording studies still focus their analyses on firing rate (the number of spikes within some time window), the work of Rieke et al. and many others increasingly suggests that the much more efficient code available in the temporal pattern (exactly when one spike occurs relative to another; analogous to Morse code) may in fact be the one more widely used by the system itself. Another is the greater emphasis on the connectome--determining which high-throughput fiber pathways exist at all to communicate information between communities of neurons that 'live' far apart from each other (e.g. Saygin et al. 2016; Mahon 2022). As recent authors have pointed out, even large fiber pathways like the arcuate fasciculus for language seem to have a surprisingly small number of fibers making the full voyage from temporal language regions to frontal regions (Rosen & Halgren 2022). This might pose some challenges for a traditional 'feature detector' architecture in which each neuron's activation 'stands for' a feature, but would be less of a problem for a 'communication network' architecture making use of much more efficient temporal coding (Mollon, Takahashi & Danilova 2022).
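The capacity advantage of temporal over rate coding can be illustrated with a back-of-envelope calculation (an idealized sketch that assumes noiseless binary time bins, which real spike trains certainly are not):

```python
import math

def rate_code_bits(n_bins):
    # A pure rate code over n_bins time bins can only signal the spike
    # count: 0, 1, ..., n_bins, i.e. n_bins + 1 distinguishable messages.
    return math.log2(n_bins + 1)

def timing_code_bits(n_bins):
    # An idealized timing code treats each bin as spike/no-spike,
    # giving 2**n_bins distinguishable patterns, so log2(2**n_bins) bits.
    return float(n_bins)

# Capacity per window grows logarithmically for the rate code
# but linearly for the timing code.
capacities = {n: (rate_code_bits(n), timing_code_bits(n))
              for n in (4, 8, 16)}
```

Even at 16 bins the rate code tops out near 4 bits per window while the idealized timing code carries 16; noise, jitter, and refractory periods shrink this gap in practice, which is exactly why the empirical work cited above is needed to establish which code the system actually uses.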
I suspect that one reason cognitive neuroscientists may be resistant to single-neuron memory and computation is that accepting it means accepting that many of the important memory and computation operations are out of reach for our current cogneuro measures. On the new view, EEG, MEG, and fMRI largely measure wholesale differences in the amount of information communicated between neurons. While recordings of the temporal pattern of spiking in a single neuron fare a bit better on this view, in allowing us to investigate the communication code, they would still just be a record of what information the neuron is communicating 'out', giving no direct evidence about the intracellular mechanisms for storing and computing. And for those of us trained to work with neurophysiological recordings, it seems unlikely that within our lifetimes we could realistically switch over to the methods necessary for studying the intracellular molecular mechanisms.
Although this could drive us cognitive neuroscientists to react with bummed-out denial, I see a sunnier side. When I was a postdoc at the Martinos Center, I had the good fortune to attend tutorials with David Cohen, the founder of magnetoencephalography (MEG). David had a mischievous streak, and he enjoyed the look on administrators' faces when he would tell them that 'MEG sees less than EEG', after they had shelled out for a multimillion-dollar MEG facility that cost hundreds of times more than an EEG lab. What he said was true, because unlike EEG, MEG only sees the tangential component of dipoles and is less sensitive to deep sources. David would then continue, '...but MEG sees it better.' Physical properties of magnetic fields cause MEG to miss the deep sources and the radial component, but they also result in the cancellation of nuisance volume currents, and together this allows much better estimation of the visible sources that remain. Knowing what your measures aren't sensitive to eliminates uncertainty and can allow you to make much more accurate and interesting inferences from them. If EEG/MEG and fMRI researchers woke up tomorrow and started framing all of our amplitude modulations in terms of quantity of information transmitted from neurons to neurons, rather than the richer soup of 'activation', 'recognition', 'detection', etc. etc. that we currently make reference to, what more rapid and lasting progress might we make in understanding the neural communication code...perhaps even allowing better guesses about the 'dark energy' of intracellular computation and memory that we cannot yet observe?
Abraham, W. C., Jones, O. D., & Glanzman, D. L. (2019). Is plasticity of synapses the mechanism of long-term memory storage? npj Science of Learning, 4(1), 9. https://doi.org/10.1038/s41539-019-0048-y
Akhlaghpour, H. (2022). An RNA-based theory of natural universal computation. Journal of Theoretical Biology, 537, 110984. https://doi.org/10.1016/j.jtbi.2021.110984
Gallistel, C. R., & Balsam, P. D. (2014). Time to rethink the neural mechanisms of learning and memory. Neurobiology of Learning and Memory, 108, 136-144. https://doi.org/10.1016/j.nlm.2013.11.019
Gallistel, C. R., & King, A. P. (2011). Memory and the computational brain: Why cognitive science will transform neuroscience. John Wiley & Sons. https://onlinelibrary.wiley.com/doi/book/10.1002/9781444310498
Gershman, S. J. (2023). The molecular memory code and synaptic plasticity: A synthesis. Biosystems, 224, 104825. https://doi.org/10.1016/j.biosystems.2022.104825
Johansson, F., Jirenhed, D. A., Rasmussen, A., Zucca, R., & Hesslow, G. (2014). Memory trace and timing mechanism localized to cerebellar Purkinje cells. Proceedings of the National Academy of Sciences, 111(41), 14930-14934. https://doi.org/10.1073/pnas.1415371111
Mahon, B. Z. (2022). Domain-specific connectivity drives the organization of object knowledge in the brain. Handbook of Clinical Neurology, 187, 221-244. https://doi.org/10.1016/B978-0-12-823493-8.00028-6
Mollon, J. D., Takahashi, C., & Danilova, M. V. (2022). What kind of network is the brain? Trends in Cognitive Sciences, 26(4), 312-324. https://doi.org/10.1016/j.tics.2022.01.007
Rieke, F., Warland, D., de Ruyter van Steveninck, R., & Bialek, W. (1999). Spikes: Exploring the neural code. MIT Press. https://mitpress.mit.edu/9780262181747/spikes/
Rosen, B. Q., & Halgren, E. (2022). An estimation of the absolute number of axons indicates that human cortical areas are sparsely connected. PLoS Biology, 20(3), e3001575. https://doi.org/10.1371/journal.pbio.3001575
Saygin, Z. M., Osher, D. E., Norton, E. S., Youssoufian, D. A., Beach, S. D., Feather, J., Gaab, N., Gabrieli, J. D., & Kanwisher, N. (2016). Connectivity precedes function in the development of the visual word form area. Nature Neuroscience, 19(9), 1250-1255. https://doi.org/10.1038/nn.4354