
The authors have declared that no competing interests exist.

Conceived and designed the experiments: AK. Performed the experiments: AK. Analyzed the data: AK. Contributed reagents/materials/analysis tools: AK. Wrote the paper: AK EK UK FS.

Although William James and, more explicitly, Donald Hebb's theory of cell assemblies suggested long ago that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, most theoretical work on memory over the last six decades has focused on plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are “potential synapses”, defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models of associative learning based on weight plasticity. Existing theory can therefore be applied to analyze how many memories structural plasticity can store and how much information it can store per synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity and can achieve a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: structural plasticity increases the “effectual network connectivity”, that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.

Traditionally, learning and memory are attributed to

Another form of synaptic plasticity is

Here we introduce and analyze a simple computational model of structural plasticity which exhibits surprisingly high memory capacity and is able to explain the mentioned cognitive effects. A key to understanding the role of structural plasticity in memory has to do with the observation that the brain, even its most densely connected local circuits, is far from being fully connected

Common memory theories based on neural associative network models consider only Hebbian-type weight plasticity in networks with fixed structure, thus neglecting processes involving structural plasticity. Such models predict that the maximal information that can be stored in a given neural network increases in proportion to the number of synaptic connections rather than to the number of neurons. Therefore,

defined as the chance that there is a synaptic connection between two randomly chosen neurons (

Illustration of different connectivity measures for a synaptic network

For memory theories including structural plasticity the situation is different because we can assume that processes including generation of new synapses, consolidation of useful synapses, elimination of useless synapses, and maintenance of anatomical connectivity at a given level

is the chance that there is a potential synapse between two neurons.

It is now tempting to apply the old memory theories for weight plasticity to structurally plastic networks as well, simply by replacing

We therefore have to introduce another type of connectivity measure that specifies how many synapses have actually been formed at time
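
The distinction between the three connectivity measures can be illustrated with a minimal sketch (the three-state coding and all function names are ours, chosen for illustration; they are not the paper's notation): given a state for every potential synapse location, potential connectivity counts all potential locations, anatomical connectivity counts the realized synapses, and effectual connectivity counts only those synapses that actually support stored memories.

```python
def connectivities(states, n_pairs):
    # states: one entry per potential synapse location, using an
    # illustrative three-state coding: 0 = no synapse, 1 = realized but
    # unconsolidated ("silent"), 2 = consolidated (supports a memory).
    # n_pairs: total number of neuron pairs in the network.
    p_pot = len(states) / n_pairs                  # potential connectivity
    p_ana = sum(s >= 1 for s in states) / n_pairs  # anatomical connectivity
    p_eff = sum(s == 2 for s in states) / n_pairs  # effectual connectivity
    return p_pot, p_ana, p_eff
```

By construction the three measures are ordered, p_eff <= p_ana <= p_pot, mirroring the ordering discussed in the text.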

for binary synaptic weights with

Then

For the microscopic simulations of individual synapses as displayed in

On the network level we use corresponding

Each curve shows the evolution of effectual connectivity

The relation between synapse and network variables is non-trivial in general because there may be multiple potential synapses

The model presented so far is of general relevance for any neural theory of memory, because it is independent of any specific mechanisms for memory storage and retrieval: Any learning and storing mechanisms are only implicitly conveyed by the learning signal

A particularly simple memory model based on Hebbian learning of binary synapses is the

Note that a synapse in the Willshaw model is actually a special case of our model of a potential synapse because Eq. 5 instantiates Eq. 4 for
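
As a concrete illustration, storage and one-step retrieval in the Willshaw model with binary synapses can be sketched in a few lines (a minimal pure-Python sketch of the standard clipped-Hebbian construction; the function names are ours):

```python
def store(pattern_pairs, m, n):
    # Clipped Hebbian learning: the binary weight W[i][j] is set to 1 as
    # soon as address unit i and content unit j are coactive in any of
    # the stored associations, and is never decremented.
    W = [[0] * n for _ in range(m)]
    for address, content in pattern_pairs:
        for i in address:
            for j in content:
                W[i][j] = 1
    return W

def retrieve(W, address, theta):
    # One-step retrieval: content unit j fires iff its dendritic
    # potential (number of active, potentiated inputs) reaches theta.
    n = len(W[0])
    return {j for j in range(n) if sum(W[i][j] for i in address) >= theta}
```

For example, storing the pair ({0, 1}, {2, 3}) in a 4x4 network and cueing with {0, 1} at threshold theta = 2 recovers {2, 3}.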

We have used one-step retrieval for some of our experiments (

In particular, iterative retrieval avoids the most serious limitation of one-step retrieval, namely the lack of sufficient attractor behavior: high output noise after one-step retrieval does not rule out perfect retrieval after further iterated retrieval steps. In fact, as long as the output noise level after the first step is smaller than the input noise level, the iterative retrieval procedure is likely to reduce the output noise to zero in subsequent retrieval steps. As a consequence, for
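
The iteration itself is simple; here is a hypothetical autoassociative example with a single stored cell assembly (a sketch under our own naming, not the paper's simulation code), where one-step retrieval is repeated until the active set becomes a fixed point:

```python
def one_step(W, active, theta):
    # One retrieval step in an autoassociative network with binary weights.
    n = len(W)
    return {j for j in range(n) if sum(W[i][j] for i in active) >= theta}

def iterative_retrieve(W, cue, theta, max_steps=10):
    # Repeat one-step retrieval until the active set stops changing; if
    # each step reduces the noise level, iteration drives the output
    # noise to zero, as argued in the text.
    active = set(cue)
    for _ in range(max_steps):
        nxt = one_step(W, active, theta)
        if nxt == active:
            break
        active = nxt
    return active
```

Storing the assembly {0, 1, 2} autoassociatively in a 5-unit network and cueing with the noisy subset {0, 1} completes the pattern to {0, 1, 2}.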

For our long-term simulations of memory phenomena (

For the simulations involving structurally plastic networks and long-term consolidation (

| Figure No. | synapse model | #blocks |  |  |  |  |  |  |
|---|---|---|---|---|---|---|---|---|
| 4A | single | 1000 | 10 | - | - | 1 | 100–6931 | 0.1 |
| 4B | single | 1000 | 10 | - | - | 1 | 100 | 0.1 |
| 4C | single | 1000 | 10 | - | - | 1 | 100 | 0.1 |
| 6A, upper | multi | 1000 | 50 | 0.9 | 0.1 | 25 | 12 | 0 |
| 6A, lower | multi | 1000 | 50 | 0.9 | 0.1 | 25 | 4 | 1 |
| 6B | multi | 1000 | 50 | 0.9 | 0.1 | 6 | 4 | 1 |
| 6C | multi | 1000 | 50 | 0.9 | 0.1 | 1 | 20 | 0.01 |

The

First, the

where

In the previous section we have introduced effectual connectivity

It is indeed possible to analyze our model in such a parameter regime: In Sect.

assuming that

During development anatomical connectivity

Our analysis and further simulations (data not shown) reveal that the described increase of

It is a well-known result of information theory

assuming that weight plasticity can choose one out of
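
Writing N for the number of distinguishable synaptic weight states (N is our notation here), this classical bound can be restated compactly: choosing one out of N states conveys at most

$$ C_{\mathrm{weight}} \;\le\; \operatorname{ld} N \quad \text{bits per synapse.} $$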

Our theory yields the surprising result that the weight capacity

Before generalizing these results to ongoing structural plasticity in sparsely connected networks, let us first re-analyze the classical Willshaw model (without structural plasticity) as illustrated in

In previous works on structural plasticity we have focused on

Synaptic overgrowth: The synaptic generation rate is much larger than the elimination rate,

Critical consolidation phase: Weight plasticity potentiates and consolidates useful synapses that support memory contents specified by the consolidation signal

Synaptic pruning: Useless synapses are eliminated, e.g.,
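
The three phases above can be caricatured in a few lines (all rates, the network size, and the binary "useful" consolidation signal are invented for illustration and are not fitted to the paper's simulations):

```python
import random

random.seed(1)

N_POT = 1000                 # potential synapse locations (illustrative size)
GROW, PRUNE = 0.3, 0.4       # per-step generation / elimination rates (made up)

# Per-location state: 0 = empty, 1 = silent (realized, unconsolidated),
# 2 = consolidated (tagged by the consolidation signal, immune to pruning).
state = [0] * N_POT
useful = [random.random() < 0.2 for _ in range(N_POT)]  # consolidation signal

def step():
    for s in range(N_POT):
        if state[s] == 0 and random.random() < GROW:
            state[s] = 1                 # overgrowth: a new silent synapse
        elif state[s] == 1:
            if useful[s]:
                state[s] = 2             # consolidation of useful synapses
            elif random.random() < PRUNE:
                state[s] = 0             # pruning of useless synapses

for _ in range(30):
    step()

consolidated = sum(1 for s in state if s == 2)
```

With these made-up rates, almost every useful location ends up realized and consolidated, although only part of the potential network is synaptically realized at any one time.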

Because only a fraction

Using

Unlike in development, during adulthood anatomical connectivity is stable. This means that ongoing generation and elimination of synapses must be in homeostatic balance such that the total number of synaptic connections remains approximately constant over time

In the following we apply our theory to networks with biologically relevant parameters. For example, a typical network size may correspond to a cortical macrocolumn of size 1 mm^{3} containing about

By contrast, networks employing structural plasticity with potential connectivity

The following sections show that structural plasticity, in addition to increasing storage capacity, can explain several well known memory phenomena in the brain much better than previous theories.

Artificial neural networks such as multi-layer perceptrons are well known to suffer from what has been called catastrophic forgetting (CF) or the stability-plasticity dilemma

Another form of CF has been described for Hopfield-type network models of associative memory

CF poses problems for technical applications, but also for modeling memory processes, because it does not normally occur in our brains. It has been argued that the capacity of the brain might simply be too large to run into CF during a normal lifetime. In addition, several alternative solutions have been suggested. For example, many previous approaches introduce an additional hidden neural layer (e.g., between populations

A novel role in preventing CF can be attributed to structural synaptic plasticity:

More precisely, for memories stored with a certain effectual connectivity

Patients with lesions of the hippocampus or neighboring neocortex in the medial temporal lobe often suffer from graded retrograde amnesia

A body of previous work has proposed that the lesions may disrupt cortico-hippocampal memory replay and, as a result, recent memories disappear because they are not sufficiently consolidated in intact neocortex

In one of the models

Such models predict either that memories would be replayed and consolidated for an unlimited time

Synaptic learning based on structural plasticity offers an alternative explanation for Ribot gradients without relying on unlimited memory replay (

Another interesting feature of memory is that learning new items is more effective if rehearsal is spaced over time than if it is massed in a single block

Previous cognitive models attributed the spacing effect either to deficient processing of repeated items during single block rehearsal

Further simulation experiments (not shown) have indicated that the spacing effect induced by structural plasticity is very robust. As in the psychological experiments, it is remarkably difficult to find conditions without a spacing effect. In essence, the spacing effect occurs if weight plasticity is faster than structural plasticity and if consolidated synapses are more stable than silent synapses (
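
This mechanism can be caricatured in a few lines (all rates and the rehearsal schedules are invented for illustration): weight plasticity is modeled as instantaneous consolidation of any realized synapse of the rehearsed memory during a session, whereas structural growth into empty potential locations only happens during the pauses between sessions.

```python
import random

random.seed(7)

N = 1000       # potential synapse locations relevant to the rehearsed memory
P_REAL = 0.3   # probability a location initially carries a (silent) synapse
P_GROW = 0.5   # growth probability per rest period (invented rate)

def rehearse(realized, consolidated):
    # Weight plasticity is fast: every currently realized synapse of the
    # rehearsed memory is consolidated within the session.
    return realized, consolidated | realized

def rest(realized, consolidated):
    # Structural plasticity is slow: only during pauses do empty potential
    # locations grow new, still silent synapses.
    grown = {s for s in range(N)
             if s not in realized and random.random() < P_GROW}
    return realized | grown, consolidated

def run(schedule):
    realized = {s for s in range(N) if random.random() < P_REAL}
    consolidated = set()
    for phase in schedule:
        fn = rehearse if phase == "rehearse" else rest
        realized, consolidated = fn(realized, consolidated)
    return len(consolidated)

massed = run(["rehearse"] * 3)                                      # one block
spaced = run(["rehearse", "rest", "rehearse", "rest", "rehearse"])  # spaced
```

The spaced schedule ends with substantially more consolidated synapses (and hence higher effectual connectivity) than the massed schedule, even though both contain three rehearsals.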

One important limitation in the brain seems to be the number or density of functional (non-silent) synapses, for both anatomical and metabolic reasons. For example, the number of synapses per cortical volume is remarkably similar across different species

To get a quantitative grip on these ideas we have introduced the concept of effectual connectivity, a macroscopic measure of how useful the network structure is for memory storage. Structural plasticity can increase effectual connectivity while keeping the anatomical connectivity (

Our model is applicable to learning during development, as well as during adulthood (

To simulate structural and weight plasticity we have used a simple three-state Markov model of a potential synapse in which the state transition probabilities (with the exception of
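
A minimal sketch of such a three-state Markov chain follows; the transition probabilities below are invented placeholders, not the values used in the paper's simulations:

```python
# States of a potential synapse: 0 = no synapse, 1 = silent synapse,
# 2 = consolidated synapse. The rates are illustrative assumptions:
# g = generation, e = elimination, c = consolidation, d = deconsolidation.
g, e, c, d = 0.1, 0.3, 0.2, 0.01

T = [
    [1.0 - g, g,           0.0],      # empty -> silent with prob g
    [e,       1.0 - e - c, c],        # silent -> eliminated or consolidated
    [d,       0.0,         1.0 - d],  # consolidated -> deconsolidated (rare)
]

def evolve(p, T, steps):
    # Iterate p <- p T to approximate the chain's stationary distribution.
    for _ in range(steps):
        p = [sum(p[i] * T[i][j] for i in range(3)) for j in range(3)]
    return p

p_inf = evolve([1.0, 0.0, 0.0], T, 2000)
```

Because deconsolidation is much rarer than elimination, most probability mass ends up in the consolidated state; for these placeholder rates the stationary distribution is roughly (0.19, 0.04, 0.77).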

Similarly, we argue that our model is also consistent with more realistic models of structural plasticity based on homeostatic mechanisms for maintaining mean neuronal firing rates at a constant level

Thus, we argue that both Hebbian and homeostatic structural plasticity are necessary to optimize information storage: Hebbian structural plasticity (via

By introducing the concepts of effectual connectivity

There are several lines of evidence suggesting that the binary weight model (corresponding to states 0 and 1) is already quite useful, in particular if one added suitable noise terms to account for distributed synaptic strengths: First, experiments indicate that real synapses may have only a small number of functionally distinct states or may even be binary

Although our definition of effectual connectivity

Adding to previous results of storage capacity based on counting possible synaptic network configurations

Besides increasing storage capacity and energy efficiency of neural networks, our results suggest that structural plasticity is a key element in understanding various memory phenomena. One key prediction of the model under homeostatic maintenance of anatomical connectivity

Last, our model is able to bridge different models, describing the spacing effect

As will be shown, effectual connectivity

Next we divide neuron pairs into distinct groups, where two neuron pairs are in the same group if they receive identical consolidation signals

From this we can compute the macroscopic state variables

With these definitions we are in a position to perform microscopic simulations of networks of potential synapses and to compute the corresponding connectivity measures (e.g., as we have done for

While we have worked out a general theoretical framework of structural plasticity

To prove Eq. 11 let us now analyze the temporal dynamics of effectual connectivity

i.e., all real synapses minus initially consolidated (and not yet deconsolidated) synapses minus the newly consolidated synapses marked by

proving Eq. 11. The second approximation in Eq. 11 becomes valid if all product terms are approximately equal, i.e., if

As argued in Section 6, the storage capacity of structurally plastic networks where memories are stored with effectual connectivity

For the following approximate asymptotic analysis we use several simplifications. First, address and content memory patterns

Let us first estimate error probabilities after storing

This follows from the fact that a synapse is potentiated with probability
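
The quantities behind this statement can be written out as a small sketch of the standard Willshaw analysis (the symbols are ours: M stored pattern pairs, k of m address units and l of n content units active, full connectivity, a noiseless address cue, and firing threshold theta = k):

```python
def potentiation_prob(M, k, l, m, n):
    # Probability that a given synapse has been potentiated after storing
    # M random pattern pairs with k of m (address) and l of n (content)
    # active units: one minus the probability of never being coactive.
    return 1.0 - (1.0 - (k / m) * (l / n)) ** M

def false_positive_prob(p1, k):
    # With a noiseless address and threshold theta = k, a low unit
    # crosses the threshold only if all k of its active inputs happen to
    # be potentiated.
    return p1 ** k
```

Limiting this false-positive probability (the output noise) then yields the storage capacity, as sketched in the text.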

Now we can compute the storage capacity by limiting output noise

for

For networks with structural plasticity Eq. 13 is still valid but effectual connectivity will be typically larger than anatomical connectivity,

For large

Together with Eq. 11 this proves that in networks with structural plasticity, high potential connectivity, and sufficiently small cell assembly size

One limitation of this analysis is the assumption of an optimal threshold control. In fact, an optimal threshold control as presumed above would actually require silent synapses in order to compute spike thresholds

The analysis of the previous section is asymptotically correct for large networks (

Let us first consider the Willshaw-Palm distribution

can be computed from the corresponding variance of a fully connected network which is well approximated by (see Eq. 4.25 in

where

From these results we can easily compute mean values and variances of the dendritic potential distributions of high and low units. Here high units are neurons

Assuming Gaussian distributions we can compute a globally optimal firing threshold
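
Under this Gaussian approximation the threshold choice can be sketched as a simple search (the function names and the error criterion, the sum of miss and false-alarm probabilities, are our illustrative choices):

```python
from math import erf, sqrt

def gauss_cdf(x, mu, sigma):
    # Cumulative distribution of a Gaussian with mean mu and s.d. sigma.
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def optimal_threshold(mu_lo, sd_lo, mu_hi, sd_hi, thetas):
    # Pick the firing threshold minimizing total error under the Gaussian
    # approximation of the dendritic potentials: misses of high units
    # plus false alarms of low units.
    return min(thetas,
               key=lambda t: gauss_cdf(t, mu_hi, sd_hi)
                             + 1.0 - gauss_cdf(t, mu_lo, sd_lo))
```

For symmetric distributions (equal standard deviations) the optimum lies midway between the low- and high-unit means, as expected.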

We thank Günther Palm and Marc-Oliver Gewaltig for many fruitful discussions.