### Definitions of network motifs and feedback loops

A network motif is classically defined as a sub-network pattern that occurs more frequently in a complex network than in randomized networks [38–41]. Here, this concept was extended to a more general definition: a motif refers to any sub-network with a particular structure. To relate the structures of distinct network motifs to their dynamic behaviors, a range of different network structures can be considered and their correlations with specific dynamic behaviors investigated. This paper focuses on synchronized bursting activities.

A feedback loop (FBL) is defined as a network motif in which the nodes (neurons) form a closed directed path of synapses. The example network shown in Figure 5a contains five FBLs (Figure 5b). FBL1, FBL2, and FBL3 are positive feedback loops (PFLs); FBL4 and FBL5 are negative feedback loops (NFLs). The sign of a feedback loop is determined by (-1)^{q}, where *q* is the total number of inhibitory interactions contained in the loop. Moreover, if a positive feedback loop contains only positive interactions, like FBL1 and FBL2, it is referred to as an all-positive-interaction feedback loop (APFL).
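As a minimal sketch, the sign rule above can be implemented directly. Representing a loop as a list of synapse signs (+1 excitatory, -1 inhibitory) is an assumption for illustration:

```python
def loop_sign(interactions):
    """Sign of a feedback loop: (-1)**q, with q the number of inhibitory
    interactions (entries are +1 for excitatory, -1 for inhibitory)."""
    q = sum(1 for s in interactions if s < 0)
    return "PFL" if (-1) ** q == 1 else "NFL"

def is_apfl(interactions):
    """An APFL is a positive feedback loop whose interactions are all positive."""
    return all(s > 0 for s in interactions)
```

Note that a loop with an even number of inhibitory interactions (e.g. two) is still positive, but it is not an APFL.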

### Pulsed neural networks

To infer the relationship between a type of feedback motif and its network behaviors, typical network motifs were constructed as pulsed neural networks, and their responses to randomly assigned initial states and simulation parameters were observed. Pulsed neural networks, also called the third generation of artificial neural networks, are based on spiking, or "integrate-and-fire", neurons [22, 23]. These neurons incorporate recent insights from neurophysiology, in particular the use of temporal coding to pass information between neurons [11, 42, 43], which closely mimics realistic communication between neurons. Pulsed neural networks are therefore commonly applied to study the properties of neural networks.

For a spiking neuron *i*, the membrane voltage can be denoted by a state variable *x*_{i}. Once *x*_{i} reaches a threshold *δ*, the neuron fires; the moment of crossing the threshold is represented by a firing time {t}_{i}^{k}. The set of all firing times of neuron *i*, commonly called a spike train, is described as

{\Phi}_{i}=\left\{{t}_{i}^{k}\left|{x}_{i}\left({t}_{i}^{k}\right)\right.=\delta ;1\le k\le n\right\}

(1)

where {t}_{i}^{n} is the most recent spike before the current time *t*. Two different processes contribute to the value of *x*_{i}. The first is a negative-valued function {\Psi}_{i}\left(t-{t}_{i}^{k}\right) modeling an immediate "reset" after each firing time in Φ_{i}; in the biological context, *Ψ*_{i} accounts for neuronal refractoriness. The second is input from pre-synaptic neurons *j* ∈ *Λ*_{i}, where

{\Lambda}_{i}=\left\{j\left|j\phantom{\rule{0.3em}{0ex}}\mathsf{\text{presynaptic to}}\phantom{\rule{0.3em}{0ex}}i\right.\right\}

(2)

A pre-synaptic spike at time {t}_{j}^{k} increases (or decreases) the state *x*_{i} of post-synaptic neuron *i* for t>{t}_{j}^{k} through a weighted kernel function {w}_{ij}{\epsilon}_{ij}\left(t-{t}_{j}^{k}\right). The sign is carried by the synaptic efficacy *w*_{ij}: *w*_{ij} > 0 for excitatory synapses and *w*_{ij} < 0 for inhibitory synapses. The kernel *ε*_{ij} describes the response of *x*_{i} to a pre-synaptic potential at {t}_{j}^{k}, which can be viewed as the combined effect of the axonal and membrane transmission properties of the neurons. The state of neuron *i* at the current time *t* is therefore a linear superposition of the two contributions,

{x}_{i}\left(t\right)=\sum _{{t}_{i}^{k}\in {\Phi}_{i}}{\Psi}_{i}\left(t-{t}_{i}^{k}\right)+\sum _{j\in {\Lambda}_{i}}\sum _{{t}_{j}^{k}\in {\Phi}_{j}}{w}_{ij}{\epsilon}_{ij}\left(t-{t}_{j}^{k}\right)

(3)

The models described by (1)-(3) are referred to as spike response models (SRMs) [22]. Together with the connectivity topology of the network, they provide a simple mechanism for simulating biological neural networks. Noise is frequently introduced into the SRM by adding the effect of a stochastic current {I}_{i}^{noise}\left(t\right) to the right-hand side of (3), so that (3) becomes

{x}_{i}\left(t\right)=\sum _{{t}_{i}^{k}\in {\Phi}_{i}}{\Psi}_{i}\left(t-{t}_{i}^{k}\right)+\sum _{j\in {\Lambda}_{i}}\sum _{{t}_{j}^{k}\in {\Phi}_{j}}{w}_{ij}{\epsilon}_{ij}\left(t-{t}_{j}^{k}\right)+\underset{0}{\overset{\infty}{\int}}{e}_{i}\left(s\right){I}_{i}^{noise}\left(t-s\right)ds

(4)

where the kernel function *e*_{i}(*s*) describes the dynamics from the local noise current stimulation to the membrane voltage of neuron *i*. Typical mathematical formulations (illustrated in Figure 6) were adopted to describe the refractoriness *Ψ*_{i}, the post-synaptic potential *ε*_{ij}, and the membrane dynamics *e*_{i}. For instance, let

{\epsilon}_{ij}\left(t\right)=\frac{1}{1-\left({\tau}_{s}/{\tau}_{m}\right)}\left[exp\left(-\frac{t-{\Delta}_{ax}}{{\tau}_{m}}\right)-exp\left(-\frac{t-{\Delta}_{ax}}{{\tau}_{s}}\right)\right]H\left(t-{\Delta}_{ax}\right)

(5)

where τ_{s} and τ_{m} are time constants describing the axonal transmission dynamics and the membrane dynamics, respectively, and Δ_{ax} is the axonal transmission delay. *H*(*t* - Δ_{ax}) is the Heaviside step function, which vanishes for *t* ≤ Δ_{ax} and equals 1 for *t* > Δ_{ax}.

One typical membrane voltage reset function is

{\Psi}_{i}\left(t\right)=\left\{\begin{array}{cc}\hfill -\delta exp\left(-\frac{t}{\tau}\right),\hfill & \hfill \mathsf{\text{for}}\phantom{\rule{0.3em}{0ex}}t>{T}_{refractory}\hfill \\ \hfill -\infty ,\hfill & \hfill \mathsf{\text{for}}\phantom{\rule{0.3em}{0ex}}t\le {T}_{refractory}\hfill \end{array}\right.

(6)

where *T*_{refractory} is the absolute refractory period of neuron *i*. During this period, the neuron cannot fire regardless of its membrane voltage. For the membrane dynamical function, the following equation can be used:

{e}_{i}\left(t\right)=\frac{1}{{\tau}_{m}}exp\left(-\frac{t}{{\tau}_{m}}\right)H\left(t\right)

(7)
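Equations (5)-(7) and the superposition in (3) can be sketched as plain functions. The parameter values below are illustrative assumptions (the section does not fix them), and `state` is a hypothetical helper that evaluates the noise-free membrane state:

```python
import math

# Assumed parameter values for illustration only (not given in the text).
TAU_S, TAU_M, DELTA_AX = 2.0, 10.0, 1.0   # time constants and axonal delay (ms)
DELTA, TAU, T_REFRACTORY = 1.0, 4.0, 2.0  # threshold, reset constant, refractory period

def eps(t):
    """Post-synaptic potential kernel, Eq. (5)."""
    if t <= DELTA_AX:                      # Heaviside factor H(t - Delta_ax)
        return 0.0
    s = t - DELTA_AX
    return (math.exp(-s / TAU_M) - math.exp(-s / TAU_S)) / (1.0 - TAU_S / TAU_M)

def psi(t):
    """Refractoriness/reset kernel, Eq. (6)."""
    if t <= T_REFRACTORY:
        return float("-inf")               # absolute refractory period: firing impossible
    return -DELTA * math.exp(-t / TAU)

def e(t):
    """Membrane kernel for the noise current, Eq. (7)."""
    return math.exp(-t / TAU_M) / TAU_M if t > 0 else 0.0

def state(t, own_spikes, inputs):
    """Noise-free membrane state x_i(t) as in Eq. (3).

    own_spikes: past firing times of neuron i (its spike train)
    inputs: list of (w_ij, pre-synaptic spike times) pairs over the
            pre-synaptic neurons j
    """
    x = sum(psi(t - tk) for tk in own_spikes if tk < t)
    x += sum(w * eps(t - tk) for w, spikes in inputs for tk in spikes if tk < t)
    return x
```

Within the absolute refractory period, `psi` returns negative infinity, so `state` can never reach the threshold regardless of the input, matching the description of Eq. (6).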

Networks of different sizes can exhibit similar behaviors (or dynamics) if their neurons receive the same average input [22]. To make networks of two sizes comparable in this respect, their weight scopes must be scaled by the number of network nodes. For example, take an *n*_{1}-node network as the nominal case with an allowed weight scope of [-*W*_{max}, *W*_{max}]. The weight scope of an *n*_{2}-node network should then be assigned as \frac{\left[-{W}_{max},{W}_{max}\right]}{\left({n}_{2}-1\right)/\left({n}_{1}-1\right)}. In practice, the smaller network (*n*_{1} = 12) was set as the nominal network, so the weight scope of the larger network (*n*_{2} = 60) was scaled down by the factor (60 - 1)/(12 - 1) = 59/11 ≈ 5.4.
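The weight-scope scaling can be expressed as a one-line helper; the function name is a hypothetical choice, and the example values are the 12- and 60-node networks from the text:

```python
def scaled_weight_bound(w_max, n_nominal, n_target):
    """Weight bound for an n_target-node network so that each neuron receives
    the same average input as in the nominal n_nominal-node network.

    The allowed scope becomes [-w, w] with
    w = w_max / ((n_target - 1) / (n_nominal - 1)).
    """
    return w_max / ((n_target - 1) / (n_nominal - 1))

# Example from the text: nominal 12-node network, target 60-node network,
# so the bound shrinks by the factor (60 - 1)/(12 - 1) = 59/11.
```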

### The typical behaviors of spike neural networks

Four typical spontaneous network behaviors appeared in the simulations: transient response activity (TRA), synchronized bursting activity (SBA), asynchronized bursting activity (ASBA), and hyper-excitation activity (HEA) [44–47]. TRA is a non-sustained firing phenomenon, while both SBA and ASBA show regularly separated clusters of spikes throughout their full durations. The two cases can be differentiated using a synchrony index (SI) [48, 49], defined from the cross-correlation coefficient commonly used to quantify the temporal relationship between a pair of neurons. Suppose the spike trains of the two cells over a duration of *T* seconds are denoted by *x*(*t*) and *y*(*t*) (0 < *t* < *T*). Their discrete-time forms *x*(*k*) and *y*(*k*) (*k* = 1, . . ., *n*) are obtained by dividing *T* into *n* = *T*/(bin width) bins and counting the number of spikes falling in each bin. In our simulations, *T* = 2 s and bin width = 20 ms; the simulations showed that the results are quite robust to the choices of *T* and bin width. The cross-correlation coefficient *r* between *x*(*k*) and *y*(*k*) is calculated as follows:

SI=r=\frac{S{S}_{xy}}{\sqrt{S{S}_{xx}S{S}_{yy}}}

(8)

where S{S}_{xx}=\sum _{k=1}^{n}x{\left(k\right)}^{2}-\frac{{\left(\sum _{k=1}^{n}x\left(k\right)\right)}^{2}}{n}, S{S}_{yy}=\sum _{k=1}^{n}y{\left(k\right)}^{2}-\frac{{\left(\sum _{k=1}^{n}y\left(k\right)\right)}^{2}}{n}, and S{S}_{xy}=\sum _{k=1}^{n}x\left(k\right)y\left(k\right)-\frac{\left(\sum _{k=1}^{n}x\left(k\right)\right)\left(\sum _{k=1}^{n}y\left(k\right)\right)}{n}.
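A minimal sketch of Eq. (8), assuming the spike trains have already been binned into counts as described above; `bin_spike_train` is a hypothetical helper using the stated T = 2 s and 20 ms bin width as defaults:

```python
def synchrony_index(x, y):
    """Cross-correlation coefficient r between binned spike counts, Eq. (8)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    ss_xx = sum(v * v for v in x) - sx * sx / n
    ss_yy = sum(v * v for v in y) - sy * sy / n
    ss_xy = sum(a * b for a, b in zip(x, y)) - sx * sy / n
    return ss_xy / (ss_xx * ss_yy) ** 0.5

def bin_spike_train(spike_times, t_total=2.0, bin_width=0.02):
    """Bin spike times (in seconds) into n = t_total / bin_width counts."""
    n = int(round(t_total / bin_width))
    counts = [0] * n
    for t in spike_times:
        if 0 <= t < t_total:
            counts[int(t / bin_width)] += 1
    return counts
```

Identical trains yield r = 1, while perfectly anti-correlated trains yield a negative r.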

By its definition, *r* lies in the range [-1, 1]. If *r* exceeds a threshold (0.7 was used in our simulations), the two spike trains are considered synchronized with each other; otherwise, they are considered asynchronized. In the case of HEA, all nodes fire permanently with an interval equal to the absolute refractory period *T*_{refractory}. Notably, hyper-excited spike trains also have a high SI, quite similar to SBA; however, they fire as closely as possible, without any perceivable intermittence, during their final steady state. Figure 7 illustrates these four typical activities using a simple 2-node PFL motif. For a larger network with *n* nodes, *k*-channel SBAs (2 ≤ *k* ≤ *n*, *k* ∈ *ℤ*) can be identified by the same criteria. For simplicity, only 2-channel SBA was used in this study to represent the level of *k*-channel SBAs, since measurements beyond 2 channels do not change the final results or the major conclusions. Similarly, 2-channel HEA was selected to represent the level of *k*-channel HEAs.

### Simulation protocols

Simulations were carried out using SRMs for randomly connected networks in which all neurons were assumed to have an identical parameter set {*τ*_{s}, *τ*_{m}, Δ_{ax}, *τ*, *δ*}, and the synaptic efficacies *w*_{ij} were drawn from a uniform distribution on [-1, 1] (see the section 'Pulsed neural networks' for further details on the SRM). For a randomly generated network with *n* neurons and *m* synaptic connections (of which *m*_{e} are excitatory and *m* - *m*_{e} are inhibitory), the connection ratio (CR) is defined as the number of existing synaptic connections divided by that of the fully connected network with node number *n*, i.e., \frac{m}{n\left(n-1\right)}. For simplification, at most a single connection is allowed between two different neurons; hence self-connections and multiple connections between any pair of neurons are excluded. The excitatory ratio (ER) is defined as the quotient of the number of excitatory synapses over the total number of synaptic connections, i.e., \frac{{m}_{e}}{m}.
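The CR and ER definitions can be sketched directly; representing a network as a mapping from directed (pre, post) pairs to synaptic weights is an assumption for illustration:

```python
def connection_ratio(n, synapses):
    """CR = m / (n(n-1)): existing directed connections over all possible ones
    (self-connections and parallel connections are excluded by construction)."""
    return len(synapses) / (n * (n - 1))

def excitatory_ratio(synapses):
    """ER = m_e / m: share of excitatory (w > 0) synapses."""
    m_e = sum(1 for w in synapses.values() if w > 0)
    return m_e / len(synapses)
```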

To investigate the relationships among CR, ER, and the occurrence of SBA, simulations were carried out for different combinations of CR and ER, with both synaptic efficacies and network structures randomly perturbed. The classification of the four typical network behaviors (TRA, SBA, ASBA, and HEA) is given in the section entitled "The typical behaviors of spike neural networks". For each ratio pair (CR, ER), 1,000 randomly connected artificial neural networks were constructed and simulated. For each constructed network (corresponding to one simulation), the total number of APFL motifs (2-, 3-, and 4-node) and the occurrences of *k*-channel SBA and HEA (2 ≤ *k* ≤ *n*, *k* ∈ *ℤ*) were recorded to assess the correlation between APFL number and these typical behaviors. Our simulations demonstrated that changing *k* or including larger (more than 4-node) motifs had no effect on the evaluation of the networks' dynamical characteristics; in practice, only 2-channel SBA and 2-channel HEA were evaluated and used to measure the levels of synchronization and hyper-excitation. The correlation between the occurrence of 2-channel SBA or 2-channel HEA and the network's topological characteristics was obtained from the statistics of such occurrences over various CRs, ERs, and randomly assigned synaptic efficacies.
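Counting APFL motifs as described above can be sketched by brute-force enumeration of directed cycles among the excitatory edges. This is an illustrative reconstruction, not the authors' implementation, and is only practical for the small motif sizes (2-4 nodes) considered here:

```python
from itertools import permutations

def count_apfls(n, synapses, max_len=4):
    """Count all-positive-interaction feedback loops of length 2..max_len.

    `synapses` maps directed (pre, post) pairs to weights; only excitatory
    (w > 0) edges can participate in an APFL.
    """
    exc = {edge for edge, w in synapses.items() if w > 0}
    count = 0
    for k in range(2, max_len + 1):
        for nodes in permutations(range(n), k):
            # Fix the rotation so each directed cycle is counted exactly once.
            if nodes[0] != min(nodes):
                continue
            cycle = zip(nodes, nodes[1:] + nodes[:1])
            if all(edge in exc for edge in cycle):
                count += 1
    return count
```

Starting each candidate cycle at its smallest node collapses the k rotations of a directed cycle into one representative, while the two orientations of a 3- or 4-node cycle remain distinct, as they should for directed loops.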