In graph theory, a graph is said to be a pseudorandom graph if it obeys certain properties that random graphs obey with high probability. There is no concrete definition of graph pseudorandomness, but there are many reasonable characterizations of pseudorandomness one can consider. Pseudorandom properties were first formally considered by Andrew Thomason in 1987.[1][2] He defined a condition called "jumbledness": a graph [math]\displaystyle{ G=(V,E) }[/math] is said to be [math]\displaystyle{ (p,\alpha) }[/math]-jumbled for real [math]\displaystyle{ p }[/math] and [math]\displaystyle{ \alpha }[/math] with [math]\displaystyle{ 0\lt p\lt 1\leq \alpha }[/math] if
[math]\displaystyle{ \left|e(U)-p\binom{|U|}{2}\right|\leq \alpha|U| }[/math]
for every subset [math]\displaystyle{ U }[/math] of the vertex set [math]\displaystyle{ V }[/math], where [math]\displaystyle{ e(U) }[/math] is the number of edges among [math]\displaystyle{ U }[/math] (equivalently, the number of edges in the subgraph induced by the vertex set [math]\displaystyle{ U }[/math]). It can be shown that the Erdős–Rényi random graph [math]\displaystyle{ G(n,p) }[/math] is almost surely [math]\displaystyle{ (p,O(\sqrt{np})) }[/math]-jumbled.[2]:6 However, graphs with less uniformly distributed edges, for example a graph on [math]\displaystyle{ 2n }[/math] vertices consisting of an [math]\displaystyle{ n }[/math]-vertex complete graph and [math]\displaystyle{ n }[/math] completely independent vertices, are not [math]\displaystyle{ (p,\alpha) }[/math]-jumbled for any small [math]\displaystyle{ \alpha }[/math], making jumbledness a reasonable quantifier for "random-like" properties of a graph's edge distribution.
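To see the definition in action, here is a minimal computational sketch (Python with NumPy; the constant chosen for [math]\displaystyle{ \alpha }[/math] and the function names are illustrative, not from the sources above) that samples an Erdős–Rényi graph and compares [math]\displaystyle{ |e(U)-p\binom{|U|}{2}| }[/math] with [math]\displaystyle{ \alpha|U| }[/math] on randomly chosen subsets, taking [math]\displaystyle{ \alpha }[/math] proportional to [math]\displaystyle{ \sqrt{np} }[/math] as suggested by the jumbledness of [math]\displaystyle{ G(n,p) }[/math].

```python
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi_adjacency(n, p):
    """Symmetric 0/1 adjacency matrix of G(n, p) with no self-loops."""
    upper = np.triu(rng.random((n, n)) < p, k=1).astype(int)
    return upper + upper.T

def edges_within(A, U):
    """Number of edges of the subgraph induced by the vertex subset U."""
    sub = A[np.ix_(U, U)]
    return int(sub.sum() // 2)

n, p = 400, 0.3
A = erdos_renyi_adjacency(n, p)
alpha = 2 * np.sqrt(n * p)          # alpha = O(sqrt(np)); the constant 2 is arbitrary

worst_ratio = 0.0
for _ in range(2000):               # random subsets of random sizes
    k = rng.integers(2, n + 1)
    U = rng.choice(n, size=k, replace=False)
    discrepancy = abs(edges_within(A, U) - p * k * (k - 1) / 2)
    worst_ratio = max(worst_ratio, discrepancy / (alpha * k))

# Ratios below 1 are consistent with (p, alpha)-jumbledness on the sampled subsets.
print(f"worst sampled |e(U) - p*C(|U|,2)| / (alpha*|U|) = {worst_ratio:.3f}")
```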
Thomason showed that the "jumbled" condition is implied by a simpler-to-check condition, only depending on the codegree of two vertices and not every subset of the vertex set of the graph. Letting [math]\displaystyle{ \operatorname{codeg}(u,v) }[/math] be the number of common neighbors of two vertices [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math], Thomason showed that, given a graph [math]\displaystyle{ G }[/math] on [math]\displaystyle{ n }[/math] vertices with minimum degree [math]\displaystyle{ np }[/math], if [math]\displaystyle{ \operatorname{codeg}(u,v)\leq np^2+\ell }[/math] for every [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math], then [math]\displaystyle{ G }[/math] is [math]\displaystyle{ \left( p,\sqrt{(p+\ell)n}\,\right) }[/math]-jumbled.[2]:7 This result gives a sufficient condition for jumbledness that can be checked algorithmically in time polynomial in the number of vertices, and can be used to show pseudorandomness of specific graphs.[2]:7
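Since the codegrees are exactly the off-diagonal entries of [math]\displaystyle{ A^2 }[/math] for the adjacency matrix [math]\displaystyle{ A }[/math], Thomason's criterion can be evaluated directly. The following is a minimal sketch (Python with NumPy; the function name and parameters are illustrative) that finds the smallest [math]\displaystyle{ \ell }[/math] with [math]\displaystyle{ \operatorname{codeg}(u,v)\leq np^2+\ell }[/math] for all pairs and reports the guaranteed jumbledness parameter [math]\displaystyle{ \sqrt{(p+\ell)n} }[/math].

```python
import numpy as np

def thomason_jumbledness(A, p):
    """Given a 0/1 adjacency matrix A and a density p, return the jumbledness
    parameter sqrt((p + l) * n) guaranteed by Thomason's codegree criterion,
    provided the minimum degree is at least n*p."""
    n = A.shape[0]
    degrees = A.sum(axis=1)
    if degrees.min() < n * p:
        raise ValueError("criterion requires minimum degree at least n*p")
    codeg = A @ A                              # codeg[u, v] = number of common neighbors
    off_diag = codeg[~np.eye(n, dtype=bool)]
    l = max(off_diag.max() - n * p**2, 0)      # smallest l with codeg(u,v) <= n p^2 + l
    return np.sqrt((p + l) * n)

# Example usage on an Erdős–Rényi sample; p is taken as (min degree)/n so that
# the minimum-degree hypothesis of the criterion automatically holds.
rng = np.random.default_rng(1)
n = 300
upper = np.triu(rng.random((n, n)) < 0.4, k=1).astype(int)
A = upper + upper.T
p_min = A.sum(axis=1).min() / n
print("guaranteed (p, alpha)-jumbled with alpha =", thomason_jumbledness(A, p_min))
```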
In the spirit of the conditions considered by Thomason and their alternately global and local nature, several weaker conditions were considered by Chung, Graham, and Wilson in 1989.[3] For a graph [math]\displaystyle{ G }[/math] on [math]\displaystyle{ n }[/math] vertices with edge density [math]\displaystyle{ p }[/math] and some [math]\displaystyle{ \varepsilon\gt 0 }[/math], the conditions are:
Discrepancy: for every pair of subsets [math]\displaystyle{ X,Y\subseteq V(G) }[/math], [math]\displaystyle{ \left|e(X,Y)-p|X||Y|\right|\leq \varepsilon n^2 }[/math].
Discrepancy on individual sets: for every subset [math]\displaystyle{ X\subseteq V(G) }[/math], [math]\displaystyle{ \left|e(X)-p\binom{|X|}{2}\right|\leq \varepsilon n^2 }[/math].
Subgraph counting: for every fixed graph [math]\displaystyle{ H }[/math], the number of labeled copies of [math]\displaystyle{ H }[/math] in [math]\displaystyle{ G }[/math] is within [math]\displaystyle{ \varepsilon n^{v(H)} }[/math] of [math]\displaystyle{ p^{e(H)}n^{v(H)} }[/math].
4-cycle counting: the number of labeled copies of the 4-cycle [math]\displaystyle{ C_4 }[/math] in [math]\displaystyle{ G }[/math] is within [math]\displaystyle{ \varepsilon n^4 }[/math] of [math]\displaystyle{ p^4 n^4 }[/math].
Codegree: [math]\displaystyle{ \sum_{u,v\in V(G)}\left|\operatorname{codeg}(u,v)-p^2 n\right|\leq \varepsilon n^3 }[/math].
Eigenvalue bounding: if [math]\displaystyle{ \lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n }[/math] are the eigenvalues of the adjacency matrix of [math]\displaystyle{ G }[/math], then [math]\displaystyle{ \left|\lambda_1-pn\right|\leq \varepsilon n }[/math] and [math]\displaystyle{ \max\left(\left|\lambda_2\right|,\left|\lambda_n\right|\right)\leq \varepsilon n }[/math].
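As an illustration of how these quantities can be evaluated on a concrete graph, the following sketch (Python with NumPy; purely illustrative and not from the cited sources) computes the codegree and eigenvalue statistics for a sample of [math]\displaystyle{ G(n,p) }[/math]; for a quasi-random graph all three printed quantities should be small.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 0.5
upper = np.triu(rng.random((n, n)) < p, k=1).astype(int)
A = upper + upper.T

# Codegree condition: (1/n^3) * sum over pairs u != v of |codeg(u,v) - p^2*n| should be o(1).
codeg = A @ A
codeg_stat = np.abs(codeg - p**2 * n)[~np.eye(n, dtype=bool)].sum() / n**3

# Eigenvalue condition: lambda_1 close to p*n, all other eigenvalues o(n) in absolute value.
eigs = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
lambda_1 = eigs[-1]
second = max(abs(eigs[-2]), abs(eigs[0]))

print(f"codegree statistic      : {codeg_stat:.4f}")
print(f"|lambda_1 - p*n| / n    : {abs(lambda_1 - p*n)/n:.4f}")
print(f"max(|l_2|, |l_n|) / n   : {second/n:.4f}")
```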
These conditions may all be stated in terms of a sequence of graphs [math]\displaystyle{ \{G_n\} }[/math] where [math]\displaystyle{ G_n }[/math] is on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ (p+o(1))\binom{n}{2} }[/math] edges. For example, the subgraph counting condition becomes that the number of labeled copies of any graph [math]\displaystyle{ H }[/math] in [math]\displaystyle{ G_n }[/math] is [math]\displaystyle{ \left(p^{e(H)}+o(1)\right)n^{v(H)} }[/math] as [math]\displaystyle{ n\to\infty }[/math], and the discrepancy condition becomes that [math]\displaystyle{ \left|e(X,Y)-p|X||Y|\right|=o(n^2) }[/math] for all subsets [math]\displaystyle{ X,Y }[/math], using little-o notation.
A pivotal result about graph pseudorandomness is the Chung–Graham–Wilson theorem, which states that many of the above conditions are equivalent, up to polynomial changes in [math]\displaystyle{ \varepsilon }[/math].[3] A sequence of graphs which satisfies those conditions is called quasi-random. It is considered particularly surprising[2]:9 that the weak condition of having the "correct" 4-cycle density implies the other seemingly much stronger pseudorandomness conditions. Graphs such as the 4-cycle, the density of which in a sequence of graphs is sufficient to test the quasi-randomness of the sequence, are known as forcing graphs.
Some implications in the Chung–Graham–Wilson theorem are clear by the definitions of the conditions: the discrepancy on individual sets condition is simply the special case of the discrepancy condition for [math]\displaystyle{ Y=X }[/math], and 4-cycle counting is a special case of subgraph counting. In addition, the graph counting lemma, a straightforward generalization of the triangle counting lemma, shows that the discrepancy condition implies the subgraph counting condition.
The fact that 4-cycle counting implies the codegree condition can be proven by a technique similar to the second-moment method. Firstly, the sum of codegrees can be bounded from below using the edge count and the Cauchy–Schwarz inequality:
[math]\displaystyle{ \sum_{u,v\in V}\operatorname{codeg}(u,v)=\sum_{x\in V}\deg(x)^2\geq \frac{1}{n}\left(\sum_{x\in V}\deg(x)\right)^2=\frac{4e(G)^2}{n}=\left(p^2+o(1)\right)n^3. }[/math]
Secondly, given the 4-cycle counting condition, the sum of squares of codegrees is bounded from above:
[math]\displaystyle{ \sum_{u,v\in V}\operatorname{codeg}(u,v)^2\leq \left(p^4+o(1)\right)n^4, }[/math]
since this sum counts quadruples [math]\displaystyle{ (u,v,x,y) }[/math] in which [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are common neighbors of [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math], and all but [math]\displaystyle{ O(n^3) }[/math] of these quadruples are labeled 4-cycles.
Therefore, the Cauchy–Schwarz inequality gives
[math]\displaystyle{ \sum_{u,v\in V}\left|\operatorname{codeg}(u,v)-p^2 n\right|\leq n\left(\sum_{u,v\in V}\left(\operatorname{codeg}(u,v)-p^2 n\right)^2\right)^{1/2}, }[/math]
which can be expanded out using our bounds on the first and second moments of [math]\displaystyle{ \operatorname{codeg} }[/math] to give the desired bound. A proof that the codegree condition implies the discrepancy condition can be done by a similar, albeit trickier, computation involving the Cauchy–Schwarz inequality.
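Carrying out the expansion for the 4-cycle-to-codegree step explicitly (a routine verification using the two moment bounds above):
[math]\displaystyle{ \sum_{u,v\in V}\left(\operatorname{codeg}(u,v)-p^2 n\right)^2=\sum_{u,v\in V}\operatorname{codeg}(u,v)^2-2p^2 n\sum_{u,v\in V}\operatorname{codeg}(u,v)+p^4 n^4\leq \left(p^4+o(1)\right)n^4-2p^2 n\left(p^2+o(1)\right)n^3+p^4 n^4=o(n^4), }[/math]
so the Cauchy–Schwarz bound above yields [math]\displaystyle{ \sum_{u,v\in V}\left|\operatorname{codeg}(u,v)-p^2 n\right|\leq n\cdot o(n^2)=o(n^3) }[/math], which is the codegree condition in its sequence form.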
The eigenvalue condition and the 4-cycle condition can be related by noting that the number of labeled 4-cycles in [math]\displaystyle{ G }[/math] is, up to an [math]\displaystyle{ O(n^3) }[/math] error stemming from degenerate closed walks, [math]\displaystyle{ \operatorname{tr}\left(A_G^4\right)=\sum_i \lambda_i^4 }[/math], where [math]\displaystyle{ A_G }[/math] is the adjacency matrix of [math]\displaystyle{ G }[/math] with eigenvalues [math]\displaystyle{ \lambda_i }[/math]. The two conditions can then be shown to be equivalent by invocation of the Courant–Fischer theorem.[3]
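A quick numerical check of this relation (an illustrative Python sketch using NumPy, not taken from the sources): [math]\displaystyle{ \operatorname{tr}\left(A_G^4\right) }[/math] counts closed walks of length 4 and equals [math]\displaystyle{ \sum_i \lambda_i^4 }[/math], and subtracting the degenerate closed walks leaves the labeled 4-cycle count, which for [math]\displaystyle{ G(n,p) }[/math] should be close to [math]\displaystyle{ p^4 n^4 }[/math].

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 0.5
upper = np.triu(rng.random((n, n)) < p, k=1).astype(np.int64)
A = upper + upper.T

trace_A4 = np.trace(np.linalg.matrix_power(A, 4))    # closed walks of length 4
eig_sum = np.sum(np.linalg.eigvalsh(A) ** 4)         # equals tr(A^4)

deg = A.sum(axis=1)
m = deg.sum() // 2
# Degenerate closed 4-walks number 2*sum(deg^2) - 2m; removing them leaves labeled 4-cycles.
labeled_c4 = trace_A4 - 2 * (deg ** 2).sum() + 2 * m

print("tr(A^4)                 :", int(trace_A4))
print("sum of lambda_i^4       :", round(float(eig_sum)))
print("labeled 4-cycles        :", int(labeled_c4))
print("p^4 n^4 (random target) :", p**4 * n**4)
```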
The concept of graphs that act like random graphs connects strongly to the concept of graph regularity used in the Szemerédi regularity lemma. For [math]\displaystyle{ \varepsilon\gt 0 }[/math], a pair of vertex sets [math]\displaystyle{ X,Y }[/math] is called [math]\displaystyle{ \varepsilon }[/math]-regular if for all subsets [math]\displaystyle{ A\subset X,B\subset Y }[/math] satisfying [math]\displaystyle{ |A|\geq\varepsilon|X|,|B|\geq\varepsilon|Y| }[/math], it holds that
[math]\displaystyle{ \left|d(A,B)-d(X,Y)\right|\leq\varepsilon, }[/math]
where [math]\displaystyle{ d(X,Y) }[/math] denotes the edge density between [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math]: the number of edges between [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] divided by [math]\displaystyle{ |X||Y| }[/math]. This condition implies a bipartite analogue of the discrepancy condition, and essentially states that the edges between [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are distributed in a "random-like" fashion. In addition, it was shown by Miklós Simonovits and Vera T. Sós in 1991 that a graph satisfies the above weak pseudorandomness conditions used in the Chung–Graham–Wilson theorem if and only if it possesses a Szemerédi partition where nearly all densities are close to the edge density of the whole graph.[4]
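The definition can be explored numerically. The sketch below (Python with NumPy; illustrative only) samples admissible subsets of a random bipartite graph and records the largest observed density deviation; certifying [math]\displaystyle{ \varepsilon }[/math]-regularity exactly would require checking all admissible subsets, so this gives evidence rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, p, eps = 200, 200, 0.3, 0.1
B = (rng.random((n1, n2)) < p).astype(int)      # bipartite adjacency: rows = X, cols = Y

def density(rows, cols):
    """Edge density d(A, B) between the vertex subsets given by row/column indices."""
    return B[np.ix_(rows, cols)].mean()

d_XY = B.mean()
worst = 0.0
for _ in range(2000):
    a = rng.integers(int(eps * n1), n1 + 1)     # |A| >= eps*|X|
    b = rng.integers(int(eps * n2), n2 + 1)     # |B| >= eps*|Y|
    A_idx = rng.choice(n1, size=a, replace=False)
    B_idx = rng.choice(n2, size=b, replace=False)
    worst = max(worst, abs(density(A_idx, B_idx) - d_XY))

print(f"largest sampled |d(A,B) - d(X,Y)| = {worst:.4f} (epsilon = {eps})")
```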
The Chung–Graham–Wilson theorem, specifically the implication of subgraph counting from discrepancy, does not follow for sequences of graphs with edge density approaching [math]\displaystyle{ 0 }[/math], or, for example, the common case of [math]\displaystyle{ d }[/math]-regular graphs on [math]\displaystyle{ n }[/math] vertices as [math]\displaystyle{ n\to\infty }[/math]. The following sparse analogues of the discrepancy and eigenvalue bounding conditions are commonly considered for a [math]\displaystyle{ d }[/math]-regular graph [math]\displaystyle{ G }[/math] on [math]\displaystyle{ n }[/math] vertices:
Sparse discrepancy: for every pair of subsets [math]\displaystyle{ X,Y\subseteq V(G) }[/math], [math]\displaystyle{ \left|e(X,Y)-\tfrac{d}{n}|X||Y|\right|\leq \varepsilon dn }[/math].
Sparse eigenvalue bounding: if [math]\displaystyle{ d=\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n }[/math] are the eigenvalues of the adjacency matrix of [math]\displaystyle{ G }[/math], then [math]\displaystyle{ \max\left(\left|\lambda_2\right|,\left|\lambda_n\right|\right)\leq \varepsilon d }[/math].
It is generally true that this eigenvalue condition implies the corresponding discrepancy condition, but the reverse is not true: the disjoint union of a large random [math]\displaystyle{ d }[/math]-regular graph and a [math]\displaystyle{ (d+1) }[/math]-vertex complete graph has two eigenvalues of exactly [math]\displaystyle{ d }[/math] but is likely to satisfy the discrepancy property. However, as proven by David Conlon and Yufei Zhao in 2017, slight variants of the discrepancy and eigenvalue conditions for [math]\displaystyle{ d }[/math]-regular Cayley graphs are equivalent up to linear scaling in [math]\displaystyle{ \varepsilon }[/math].[5] One direction of this follows from the expander mixing lemma, while the other requires the assumption that the graph is a Cayley graph and uses the Grothendieck inequality.
A [math]\displaystyle{ d }[/math]-regular graph [math]\displaystyle{ G }[/math] on [math]\displaystyle{ n }[/math] vertices is called an [math]\displaystyle{ (n,d,\lambda) }[/math]-graph if, letting the eigenvalues of the adjacency matrix of [math]\displaystyle{ G }[/math] be [math]\displaystyle{ d=\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_n }[/math], [math]\displaystyle{ \max\left(\left|\lambda_2\right|,\left|\lambda_n\right|\right)\leq \lambda }[/math]. The Alon–Boppana bound gives that [math]\displaystyle{ \max\left(\left|\lambda_2\right|,\left|\lambda_n\right|\right)\geq 2\sqrt{d-1}-o(1) }[/math] (where the [math]\displaystyle{ o(1) }[/math] term is as [math]\displaystyle{ n\to\infty }[/math]), and Joel Friedman proved that a random [math]\displaystyle{ d }[/math]-regular graph on [math]\displaystyle{ n }[/math] vertices is an [math]\displaystyle{ (n,d,\lambda) }[/math]-graph for [math]\displaystyle{ \lambda=2\sqrt{d-1}+o(1) }[/math].[6] In this sense, how much [math]\displaystyle{ \lambda }[/math] exceeds [math]\displaystyle{ 2\sqrt{d-1} }[/math] is a general measure of the non-randomness of a graph. Graphs with [math]\displaystyle{ \lambda\leq 2\sqrt{d-1} }[/math] are termed Ramanujan graphs; they have been studied extensively, and there are a number of open problems relating to their existence and commonness.
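These spectral quantities are straightforward to compute for explicit graphs. The following sketch (Python, assuming NumPy and NetworkX are available; illustrative only) samples a random [math]\displaystyle{ d }[/math]-regular graph and compares [math]\displaystyle{ \max\left(\left|\lambda_2\right|,\left|\lambda_n\right|\right) }[/math] with the benchmark [math]\displaystyle{ 2\sqrt{d-1} }[/math].

```python
import numpy as np
import networkx as nx

n, d = 1000, 6
G = nx.random_regular_graph(d, n, seed=5)
A = nx.to_numpy_array(G)                    # dense adjacency matrix

eigs = np.linalg.eigvalsh(A)                # ascending order; eigs[-1] should equal d
lam = max(abs(eigs[-2]), abs(eigs[0]))      # max(|lambda_2|, |lambda_n|)

print(f"lambda_1           = {eigs[-1]:.4f} (expected {d})")
print(f"max(|l_2|, |l_n|)  = {lam:.4f}")
print(f"2*sqrt(d-1)        = {2*np.sqrt(d-1):.4f}")
print("Ramanujan?         ", lam <= 2*np.sqrt(d-1))
```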
Given an [math]\displaystyle{ (n,d,\lambda) }[/math]-graph for small [math]\displaystyle{ \lambda }[/math], many standard graph-theoretic quantities can be bounded to near what one would expect from a random graph. In particular, the size of [math]\displaystyle{ \lambda }[/math] has a direct effect on subset edge density discrepancies via the expander mixing lemma, which states that for all vertex subsets [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], [math]\displaystyle{ \left|e(X,Y)-\tfrac{d}{n}|X||Y|\right|\leq \lambda\sqrt{|X||Y|} }[/math]. Further examples, letting [math]\displaystyle{ G }[/math] be an [math]\displaystyle{ (n,d,\lambda) }[/math]-graph, include the Hoffman-type bounds [math]\displaystyle{ \alpha(G)\leq \frac{\lambda n}{d+\lambda} }[/math] on the independence number and [math]\displaystyle{ \chi(G)\geq 1+\frac{d}{\lambda} }[/math] on the chromatic number.
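The expander mixing lemma itself is easy to test numerically. The sketch below (Python, assuming NumPy and NetworkX; illustrative only) evaluates the ratio of the edge-count deviation [math]\displaystyle{ \left|e(X,Y)-\tfrac{d}{n}|X||Y|\right| }[/math] to [math]\displaystyle{ \lambda\sqrt{|X||Y|} }[/math] over random subset pairs; the lemma guarantees this ratio never exceeds 1.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)
n, d = 500, 8
G = nx.random_regular_graph(d, n, seed=7)
A = nx.to_numpy_array(G)

eigs = np.linalg.eigvalsh(A)
lam = max(abs(eigs[-2]), abs(eigs[0]))      # spectral bound lambda

def e_between(X, Y):
    """e(X, Y): pairs (x, y) with x in X, y in Y and xy an edge (overlaps counted twice)."""
    return A[np.ix_(X, Y)].sum()

worst = 0.0
for _ in range(1000):
    X = rng.choice(n, size=rng.integers(1, n + 1), replace=False)
    Y = rng.choice(n, size=rng.integers(1, n + 1), replace=False)
    deviation = abs(e_between(X, Y) - d * len(X) * len(Y) / n)
    worst = max(worst, deviation / (lam * np.sqrt(len(X) * len(Y))))

# The expander mixing lemma guarantees every ratio is at most 1.
print(f"largest sampled deviation / (lambda * sqrt(|X||Y|)) = {worst:.3f}")
```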
Pseudorandom graphs figure prominently in the proof of the Green–Tao theorem. The theorem is proven by transferring Szemerédi's theorem, the statement that a set of positive integers with positive natural density contains arbitrarily long arithmetic progressions, to the sparse setting (as the primes have natural density [math]\displaystyle{ 0 }[/math] in the integers). The transference to sparse sets requires that the sets behave pseudorandomly, in the sense that corresponding graphs and hypergraphs have the correct subgraph densities for some fixed set of small (hyper)subgraphs.[9] It is then shown that a suitable superset of the prime numbers, called the pseudoprimes, in which the primes are dense, obeys these pseudorandomness conditions, completing the proof.
Original source: https://en.wikipedia.org/wiki/Pseudorandom_graph