In network science, the configuration model is a method for generating random networks from a given degree sequence. It is widely used as a reference model for real-life social networks, because it allows the modeler to incorporate arbitrary degree distributions.
In the configuration model, the degree of each vertex is pre-defined, rather than being drawn from a probability distribution.[2] As opposed to the Erdős–Rényi model, the degree sequence of the configuration model is not restricted to a Poisson distribution; the model allows the user to give the network any desired degree distribution.
The following algorithm describes the generation of the model:
1. Take a degree sequence, i.e. assign a degree [math]\displaystyle{ k_i }[/math] to each vertex. The degrees of the vertices are represented as half-links or stubs. The sum of the degrees must be even in order to be able to construct a graph ([math]\displaystyle{ \sum_i k_i = 2m }[/math]).
2. Choose two stubs uniformly at random and connect them to form an edge. Choose another pair from the remaining [math]\displaystyle{ 2m-2 }[/math] stubs and connect them. Continue until no stubs are left. The result is a network with the pre-defined degree sequence.
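A minimal Python sketch of this stub-matching procedure (the function name and interface are illustrative, not taken from any particular library):

```python
import random

def configuration_model_edges(degrees, seed=None):
    """Match stubs uniformly at random for a given degree sequence.

    degrees: list of non-negative integers whose sum must be even.
    Returns a list of edges; self-loops and multi-edges may occur.
    """
    if sum(degrees) % 2 != 0:
        raise ValueError("the sum of the degree sequence must be even")
    rng = random.Random(seed)
    # One entry per stub: node i appears degrees[i] times.
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)  # a uniformly random matching of the stubs
    return list(zip(stubs[0::2], stubs[1::2]))

# Example: configuration_model_edges([3, 2, 2, 1]) returns 4 edges
# on nodes 0..3 with degrees 3, 2, 2 and 1.
```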
The algorithm described above matches any two stubs with the same probability. This uniformity of the matching is an important property when calculating other features of the generated networks. The generation process does not exclude self-loops or multi-links. If the process were designed so that self-loops and multi-edges are not allowed, the matching of the stubs would no longer follow a uniform distribution.
The expected total number of multi-links in a configuration model network is:
[math]\displaystyle{ \frac{1}{2}\Big[\frac{\langle {k^2} \rangle}{\langle k\rangle}-1\Big]^2 }[/math]
where [math]\displaystyle{ \langle {k^n} \rangle }[/math] is the n-th moment of the degree distribution. Therefore, the average number of self-loops and multi-links remains constant for large networks, and their density, meaning the number per node, goes to zero as [math]\displaystyle{ N\rightarrow\infty }[/math] as long as [math]\displaystyle{ \langle {k^2} \rangle }[/math] is constant and finite. For power-law degree distributions where the second moment diverges, the density of multi-links may not vanish, or may do so more slowly than [math]\displaystyle{ {N}^{-1} }[/math].[2]
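As a concrete check, for a Poisson degree distribution with mean [math]\displaystyle{ c }[/math] (the large-[math]\displaystyle{ N }[/math] degree distribution of the Erdős–Rényi model), [math]\displaystyle{ \langle k^2 \rangle = c^2 + c }[/math], so the expected number of multi-links is [math]\displaystyle{ \frac{1}{2}\Big[\frac{c^2+c}{c}-1\Big]^2 = \frac{c^2}{2} }[/math], a constant that does not grow with [math]\displaystyle{ N }[/math].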
A further consequence of self-loops and multi-edges is that not all possible networks are generated with the same probability. In general, all possible realizations can be generated by permuting the stubs of all vertices in every possible way. The number of permutations of the stubs of node [math]\displaystyle{ i }[/math] is [math]\displaystyle{ k_i! }[/math], so the number of ways to generate a realization of a degree sequence is [math]\displaystyle{ N\{k_i\} = \prod_i k_i ! }[/math]. This would mean that each realization occurs with the same probability. However, self-loops and multi-edges can change the number of distinct ways of generating a realization, since permuting the stubs of a self-edge can leave the realization unchanged. Given that the number of self-loops and multi-links vanishes as [math]\displaystyle{ N\rightarrow\infty }[/math], the variation in the probabilities of different realizations will be small but present.[2]
A stub of node [math]\displaystyle{ i }[/math] can be connected to [math]\displaystyle{ 2m-1 }[/math] other stubs (there are [math]\displaystyle{ 2m }[/math] stubs altogether, and we have to exclude the one we are currently observing). The vertex [math]\displaystyle{ j }[/math] has [math]\displaystyle{ k_j }[/math] stubs to which node [math]\displaystyle{ i }[/math] can be connected with the same probability (because of the uniform distribution). The probability of a stub of node [math]\displaystyle{ i }[/math] being connected to one of these [math]\displaystyle{ k_j }[/math] stubs is [math]\displaystyle{ \frac{k_j}{2m-1} }[/math]. Since node [math]\displaystyle{ i }[/math] has [math]\displaystyle{ k_i }[/math] stubs, the probability of [math]\displaystyle{ i }[/math] being connected to [math]\displaystyle{ j }[/math] is [math]\displaystyle{ \frac{k_ik_j}{2m-1} }[/math] ([math]\displaystyle{ \frac{k_ik_j}{2m} }[/math] for sufficiently large [math]\displaystyle{ m }[/math]). Note that this formula can only be viewed as a probability if [math]\displaystyle{ k_ik_j/2m\ll 1 }[/math], and more precisely it describes the expected number of edges between nodes [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math]. Note that this formula does not apply to the case of self-edges.[2]
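A quick Monte Carlo sanity check of this formula (a self-contained sketch; the helper function is illustrative):

```python
import random

def expected_edge_count(degrees, i, j, trials=20000, seed=0):
    """Estimate the expected number of i-j edges over random stub matchings."""
    rng = random.Random(seed)
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    total = 0
    for _ in range(trials):
        rng.shuffle(stubs)  # one uniform stub matching per trial
        total += sum(1 for a, b in zip(stubs[0::2], stubs[1::2])
                     if {a, b} == {i, j})
    return total / trials

degrees = [3, 2, 2, 2, 1]                  # 2m = 10
print(expected_edge_count(degrees, 0, 1))  # ~ k_0 k_1 / (2m - 1) = 6/9 = 0.667
```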
Given a configuration model with a degree distribution [math]\displaystyle{ p_k }[/math], the probability of a randomly chosen node [math]\displaystyle{ i }[/math] having degree [math]\displaystyle{ k }[/math] is [math]\displaystyle{ p_k }[/math]. But if we pick one of the vertices reached by following one of the edges of i, the probability of it having degree k is [math]\displaystyle{ \frac{k}{2m}\times np_k = \frac{kp_k}{\left\langle k\right\rangle} }[/math]. (The probability of reaching any given node with degree k is [math]\displaystyle{ \frac{k}{2m} }[/math], and there are [math]\displaystyle{ np_k }[/math] such nodes.) This fraction is proportional to [math]\displaystyle{ kp_k }[/math] rather than [math]\displaystyle{ p_k }[/math]. Thus, a neighbor of a typical node is expected to have a higher degree than the typical node itself. This feature of the configuration model describes well the phenomenon of "my friends having more friends than I do".
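Concretely, the expected degree of such a neighbor is [math]\displaystyle{ \sum_k k\,\frac{kp_k}{\langle k\rangle} = \frac{\langle k^2\rangle}{\langle k\rangle} = \langle k\rangle + \frac{\sigma^2}{\langle k\rangle} }[/math], which exceeds the mean degree [math]\displaystyle{ \langle k\rangle }[/math] whenever the degree distribution has positive variance [math]\displaystyle{ \sigma^2 }[/math].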
The clustering coefficient [math]\displaystyle{ C_g }[/math] (the average probability that two neighbors of a node are connected) is computed approximately as follows:
[math]\displaystyle{ C_g=\sum_{k_i,k_j}q_{k_i}q_{k_j}\frac{(k_i-1)(k_j-1)}{2m}, }[/math]
where [math]\displaystyle{ q_k }[/math] denotes the probability that a random edge reaches a degree-[math]\displaystyle{ k }[/math] vertex, and the factors of the form "[math]\displaystyle{ k_i-1 }[/math]" rather than "[math]\displaystyle{ k_i }[/math]" appear because one stub has already been accounted for by the fact that these are neighbors of a common vertex. Evaluating the above results in
[math]\displaystyle{ C_g=\frac{1}{2m}\Big[\sum_k (k-1)q_k\Big]^2. }[/math]
Using [math]\displaystyle{ q_k=kp_k/\langle k\rangle }[/math] and [math]\displaystyle{ 2m=N\langle k\rangle }[/math], with [math]\displaystyle{ p_k }[/math] denoting the degree distribution, [math]\displaystyle{ \langle k\rangle }[/math] the average degree, and [math]\displaystyle{ N }[/math] the number of vertices, the above becomes
[math]\displaystyle{ C_g=\frac{1}{N}\,\frac{\big[\langle k^2\rangle-\langle k\rangle\big]^2}{\langle k\rangle^3}, }[/math]
with [math]\displaystyle{ \langle k^2\rangle }[/math] denoting the second moment of the degree distribution. Assuming that [math]\displaystyle{ \langle k^2\rangle }[/math] and [math]\displaystyle{ \langle k\rangle }[/math] are constant, the above behaves as
[math]\displaystyle{ C_g\sim\frac{\text{constant}}{N}, }[/math]
where the constant depends on [math]\displaystyle{ p_k }[/math].[2] Thus, the clustering coefficient [math]\displaystyle{ C_g }[/math] becomes small in the [math]\displaystyle{ N\gg 1 }[/math] limit.
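This [math]\displaystyle{ 1/N }[/math] scaling can be verified numerically. A sketch using networkx and numpy (assuming both are installed; the parameter values are arbitrary):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(42)
N = 100_000
degrees = rng.poisson(3.0, N)
if degrees.sum() % 2:                    # the number of stubs must be even
    degrees[0] += 1

# Analytic prediction: C_g = (<k^2> - <k>)^2 / (N <k>^3)
k = degrees.astype(float)
predicted = ((k**2).mean() - k.mean())**2 / (N * k.mean()**3)

G = nx.configuration_model(degrees.tolist(), seed=42)  # multigraph
H = nx.Graph(G)                            # collapse multi-edges...
H.remove_edges_from(nx.selfloop_edges(H))  # ...and drop self-loops
print(predicted, nx.transitivity(H))       # the two values should be close
```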
In the configuration model, a giant component (GC) exists if
[math]\displaystyle{ \langle k^2 \rangle - 2 \langle k \rangle \gt 0, }[/math]
where [math]\displaystyle{ \langle k \rangle }[/math] and [math]\displaystyle{ \langle k^2 \rangle }[/math] are the first and second moments of the degree distribution. This means that the critical threshold depends solely on quantities which are uniquely determined by the degree distribution [math]\displaystyle{ p_k }[/math].
The configuration model generates locally tree-like networks, meaning that any local neighborhood in such a network takes the form of a tree. More precisely, if you start at any node in the network and form the set of all nodes at distance [math]\displaystyle{ d }[/math] or less from that starting node, this set will, with probability tending to 1 as [math]\displaystyle{ N\rightarrow\infty }[/math], take the form of a tree.[3] In tree-like structures, the number of second neighbors averaged over the whole network, [math]\displaystyle{ c_2 }[/math], is: [math]\displaystyle{ c_2 =\langle k^2 \rangle- \langle k \rangle, }[/math] since each of the [math]\displaystyle{ \langle k \rangle }[/math] first neighbors contributes on average [math]\displaystyle{ (\langle k^2 \rangle- \langle k \rangle)/\langle k \rangle }[/math] excess edges.
Then, in general, the average number of nodes at distance [math]\displaystyle{ d }[/math] can be written as:
[math]\displaystyle{ c_d = \Big(\frac{c_2}{c_1}\Big)^{d-1}c_1, }[/math]
where [math]\displaystyle{ c_1 = \langle k \rangle }[/math] is the average number of first neighbors.
This implies that if the ratio [math]\displaystyle{ \frac{c_2}{c_1} }[/math] is larger than one, then the network can have a giant component. This condition is known as the Molloy–Reed criterion.[4] The intuition behind it is that if the giant component exists, then the average degree of a randomly chosen vertex [math]\displaystyle{ i }[/math] in a connected component should be at least 2. The Molloy–Reed criterion can also be expressed as: [math]\displaystyle{ \sum_i k_i(k_i - 2) \gt 0, }[/math] which implies that, although the size of the GC may depend on [math]\displaystyle{ p_0 }[/math] and [math]\displaystyle{ p_2 }[/math], nodes of degree 0 and 2 do not contribute to the existence of the giant component.[3]
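For example, for a Poisson degree distribution [math]\displaystyle{ \langle k^2 \rangle = \langle k \rangle^2 + \langle k \rangle }[/math], so [math]\displaystyle{ c_2/c_1 = (\langle k^2 \rangle - \langle k \rangle)/\langle k \rangle = \langle k \rangle }[/math], and the criterion reduces to [math]\displaystyle{ \langle k \rangle \gt 1 }[/math], recovering the familiar giant-component threshold of the Erdős–Rényi model.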
The configuration model can assume any degree distribution and exhibits the small-world effect, since to leading order the diameter of the configuration model is just [math]\displaystyle{ d = \frac{\ln(N)}{\ln(c_2/c_1)} }[/math].[5]
As the total number of vertices [math]\displaystyle{ N }[/math] tends to infinity, the probability of finding two giant components vanishes. This means that in the sparse regime, the model consists of one giant component (if any) and multiple connected components of finite size. The sizes of the connected components are characterized by their size distribution [math]\displaystyle{ w_n }[/math], the probability that a randomly sampled vertex belongs to a connected component of size [math]\displaystyle{ n }[/math]. There is a correspondence between the degree distribution [math]\displaystyle{ p_k }[/math] and the size distribution [math]\displaystyle{ w_n }[/math]. When the total number of vertices tends to infinity, [math]\displaystyle{ N\rightarrow\infty }[/math], the following relation takes place:[6]
[math]\displaystyle{ w_n=\begin{cases} \frac{\langle k \rangle}{n-1}\, u_1^{*n}(n-2), & n\gt 1,\\ p_0, & n=1, \end{cases} }[/math]
where [math]\displaystyle{ u_1(k) := \frac{k+1}{\langle k \rangle} p_{k+1}, }[/math] and [math]\displaystyle{ u_1^{*n} }[/math] denotes the [math]\displaystyle{ n }[/math]-fold convolution power. Moreover, explicit asymptotes for [math]\displaystyle{ w_n }[/math] are known when [math]\displaystyle{ n\gg1 }[/math] and [math]\displaystyle{ |\langle k^2 \rangle- 2 \langle k \rangle | }[/math] is close to zero.[6] The analytical expressions for these asymptotes depend on the finiteness of the moments of [math]\displaystyle{ p_k }[/math], the degree distribution tail exponent [math]\displaystyle{ \beta }[/math] (when [math]\displaystyle{ p_k }[/math] features a heavy tail), and the sign of the Molloy–Reed criterion. The following table summarises these relationships (the constants are provided in [6]); a numerical sketch of the convolution relation appears after the table.
Moments of [math]\displaystyle{ p_k }[/math] | Tail of [math]\displaystyle{ p_k }[/math] | [math]\displaystyle{ \text{sign}(\langle k^2 \rangle- 2 \langle k \rangle ) }[/math] | [math]\displaystyle{ w_n,\;n\gg1,\; \alpha=\beta -2 }[/math]
---|---|---|---
[math]\displaystyle{ \langle k^3 \rangle\lt \infty }[/math] | light tail | -1 or 1 | [math]\displaystyle{ C_1e^{-C_2 n} n^{-3/2} }[/math]
[math]\displaystyle{ \langle k^3 \rangle\lt \infty }[/math] | light tail | 0 | [math]\displaystyle{ C_1 n^{-3/2} }[/math]
[math]\displaystyle{ \langle k^3 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta\gt 4 }[/math] | -1 | [math]\displaystyle{ C_3 n^{-\alpha-1} }[/math]
[math]\displaystyle{ \langle k^3 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta\gt 4 }[/math] | 0 | [math]\displaystyle{ C_1 n^{-3/2} }[/math]
[math]\displaystyle{ \langle k^3 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta\gt 4 }[/math] | 1 | [math]\displaystyle{ C_1e^{-C_2 n} n^{-3/2} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta=4 }[/math] | -1 | [math]\displaystyle{ C_3 n^{-\alpha-1} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta=4 }[/math] | 0 | [math]\displaystyle{ C_1'\frac{n^{-3/2} }{\sqrt{ \log n}} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta=4 }[/math] | 1 | [math]\displaystyle{ C_1'\frac{n^{-3/2} }{\sqrt{ \log n}}e^{ -C_2'\frac{n}{\log n}} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ 3\lt \beta\lt 4 }[/math] | -1 | [math]\displaystyle{ C_3 n^{-\alpha-1} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ 3\lt \beta\lt 4 }[/math] | 0 | [math]\displaystyle{ C_4 n^{-\frac{1}{\alpha }-1} }[/math]
[math]\displaystyle{ \langle k^3 \rangle=\infty,\; \langle k^2 \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ 3\lt \beta\lt 4 }[/math] | 1 | [math]\displaystyle{ C_5 e^{-C_6 n} n^{-3/2} }[/math]
[math]\displaystyle{ \langle k^2 \rangle=\infty,\; \langle k \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ \beta=3 }[/math] | 1 | [math]\displaystyle{ C_7 e^{ -C_8 - C_9 n^{ \frac{2}{\pi} } } n^{ \frac{1}{\pi} -2} }[/math]
[math]\displaystyle{ \langle k^2 \rangle=\infty,\; \langle k \rangle\lt \infty }[/math] | heavy tail, [math]\displaystyle{ 2\lt \beta\lt 3 }[/math] | 1 | [math]\displaystyle{ C_{10} e^{-C_{11} n} n^{-3/2} }[/math]
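Using the convolution relation given above, [math]\displaystyle{ w_n }[/math] can be evaluated numerically for any degree distribution with bounded support. A sketch using numpy (assumed available; the helper function is illustrative):

```python
import numpy as np

def size_distribution(p, n_max):
    """w_n = <k>/(n-1) * u1^{*n}(n-2) for n > 1, and w_1 = p_0,
    where u1(k) = (k+1) p_{k+1} / <k> is the excess-degree distribution."""
    p = np.asarray(p, dtype=float)
    k = np.arange(len(p))
    mean_k = (k * p).sum()
    u1 = k[1:] * p[1:] / mean_k        # u1(k) = (k+1) p_{k+1} / <k>
    w = np.zeros(n_max + 1)
    w[1] = p[0]
    conv = u1.copy()                   # u1^{*1}
    for n in range(2, n_max + 1):
        conv = np.convolve(conv, u1)   # now u1^{*n}
        if n - 2 < len(conv):
            w[n] = mean_k / (n - 1) * conv[n - 2]
    return w[1:]                       # w_1, w_2, ..., w_{n_max}

# Example: nodes have degree 1 or 3 with equal probability
print(size_distribution([0.0, 0.5, 0.0, 0.5], 10))
```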
Three general properties of complex networks are heterogeneous degree distribution, short average path length and high clustering.[1][7][8] Since any degree sequence can be specified, the first condition can be satisfied by design, but as shown above, the global clustering coefficient is an inverse function of the network size, so for large configuration networks, clustering tends to be small. This feature of the baseline model contradicts the known properties of empirical networks, but extensions of the model can solve this issue (see [9]). All the networks generated by this model are locally tree-like provided the average of the excess degree distribution is either constant or grows slower than the square root of the number of links, [math]\displaystyle{ \sqrt m }[/math]. In other words, this model prevents the formation of substructures such as loops in the large-size limit. The vanishing of the clustering coefficient is a special case of this more general result. While the tree-like property makes the model less realistic, it is also what makes many calculations, such as generating function methods, tractable for the configuration model.[3]
The configuration model is applied as a benchmark in the calculation of network modularity. Modularity measures the degree of division of the network into modules. It is computed as follows:
[math]\displaystyle{ Q = \frac{1}{2L} \sum_{i\neq j}\Bigl(A_{ij}-\frac{k_ik_j}{2L}\Bigr)\delta(C_i,C_j) }[/math][10]
in which the adjacency matrix [math]\displaystyle{ A_{ij} }[/math] of the network is compared to the expected number of edges between nodes [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] (given their degrees) in the configuration model (see the page modularity for details).
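A compact numerical illustration of this formula (numpy assumed; the helper function and toy graph are for illustration only):

```python
import numpy as np

def modularity(A, labels):
    """Q with the configuration-model null term k_i k_j / (2L), summed over i != j."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                       # node degrees
    two_L = A.sum()                         # 2L, twice the number of edges
    same = np.equal.outer(labels, labels)   # delta(C_i, C_j)
    np.fill_diagonal(same, False)           # the sum excludes i = j
    return ((A - np.outer(k, k) / two_L) * same).sum() / two_L

# Toy example: two triangles joined by a single edge
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))  # ~0.53 for this partition
```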
In the DCM (directed configuration model),[11] each node is given a number of half-edges called tails and heads. Tails and heads are then matched uniformly at random to form directed edges. The size of the giant component,[11][12] the typical distance,[13] and the diameter[14] of the DCM have been studied mathematically. There has also been extensive research on random walks on the DCM.[15][16][17] Some real-world complex networks have been modelled by the DCM, such as neural networks,[18] finance,[19] and social networks.[20]
Original source: https://en.wikipedia.org/wiki/Configuration_model.