Complex networks: Structure and dynamics by S. Boccaletti et al. - Summary based on extracts

Following the series of book summaries, student notes on papers and analyses related to Network Science posted on this blog (all of them easy to reach using the network science label), I focus this time on a classic survey paper titled
"Complex networks: Structure and dynamics" by S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, D.-U. Hwang.

This is a paper with over 100 pages and 888 references. The summary approach I follow this time is different. After having read and highlighted the paper, I literally copy here those statements that, taken together, form a non-comprehensive summary of the paper. Network Science students may find it useful as an initial reference before diving into the entire paper.

1. Intro 

- Networks: systems composed of a large number of highly interconnected dynamical units.
- The first approach to capture the global properties of such systems is to model them as graphs whose nodes represent the dynamical units, and whose links stand for the interactions between them.
- A relevant property concerns the degree of a node, that is, the number of its direct connections to other nodes. In real networks, the degree distribution P(k), defined as the probability that a node chosen uniformly at random has degree k or, equivalently, as the fraction of nodes in the graph having degree k, significantly deviates from the Poisson distribution expected for a random graph and, in many cases, exhibits a power-law (scale-free) tail with an exponent taking a value between 2 and 3.
- real networks are characterized by correlations in the node degrees, by having relatively short paths between any two nodes (small-world property), and by the presence of a large number of short cycles or specific motifs.
- coupling architecture has important consequences on the network's functional robustness and response to external perturbations, such as random failures or targeted attacks
- how the network structure affects the properties of a networked dynamical system
- some brain diseases are the result of an abnormal and, sometimes, abrupt synchronization of a large number of neural populations
- finding the communities within a network is a powerful tool for understanding the functioning of the network, as well as for identifying a hierarchy of connections within a complex architecture

2. Structure

- The walk of minimal length between two nodes is known as shortest path or geodesic. A cycle is a closed walk, of at least three nodes, in which no edge is repeated.
- A component of the graph is a maximally connected induced subgraph. A giant component is a component whose size is of the same order as N.
- The degree (or connectivity) k_i of a node i is the number of edges incident with the node.
- in assortative networks the nodes tend to connect to their connectivity peers, while in disassortative networks nodes with low degree are more likely connected with highly connected ones
- The concept of betweenness can be extended also to the edges. The edge betweenness is defined as the number of shortest paths between pairs of nodes that run through that edge
- transitivity means the presence of a high number of triangles
- An alternative possibility is to use the graph clustering coefficient C
- A motif M is a pattern of interconnections
- a community (or cluster, or cohesive subgroup) is a subgraph G (N , L ), whose nodes are tightly connected, i.e. cohesive
- The spectrum of a graph is the set of eigenvalues of its adjacency matrix
- The eigenvalues and eigenvectors of the adjacency matrix A and of the normal matrix N have been used to characterize both models and real networks, and also for discovering the presence of cohesive subgroups
- despite the inherent differences, most of the real networks are characterized by the same topological properties, such as relatively small characteristic path lengths, high clustering coefficients, fat-tailed degree distributions, degree correlations, and the presence of motifs and community structures (a small sketch computing some of these metrics follows at the end of this list)
- in most of the real networks, despite their often large size, there is a relatively short path between any two nodes. This feature is known as the small-world property
- when scientists approached the study of real networks from the available databases, it was considered reasonable to find degree distributions localized around an average value, with well-defined quadratic fluctuations. Contrary to all expectations, it was found that most of the real networks display a power-law-shaped degree distribution
- Such networks have been named scale-free networks [2,93], because power-laws have the property of having the same functional form at all scales
- ER random graphs are the best studied among graph models, although they do not reproduce most of the properties of real networks
- the degree distribution is well approximated by a Poisson distribution
- The Watts and Strogatz (WS) model is a method to construct graphs having both the small-world property and a high clustering coefficient
- Graphs with a power-law degree distribution can be simply obtained as a special case of the random graphs with a given degree distribution. We denote such graphs as static scale-free to distinguish them from models of evolving graphs
- Static scale-free graphs are good models for all cases in which growth or aging processes do not play a dominant role in determining the structural properties of the network
- The Barabási–Albert (BA) model is a model of network growth inspired by the formation of the World Wide Web and is based on two basic ingredients: growth and preferential attachment
- The BA model does not allow for changes after the network is formed. However, in real networks like the WWW or a social network, connections can be lost and added. Albert and Barabási (AB) have proposed a generalization of their original model
- Here we present some of the most striking examples of real systems which have been studied as weighted networks. It has been found that the weights characterizing the various connections exhibit complex statistical features with highly varying distributions and power-law behaviors. Correlations between weights and topology provide a complementary perspective on the structural organization of such systems. Examples include biological, social and technological networks.
- The easiest way to construct a weighted network is by considering a random graph with a given probability distribution P (k), and by assuming that the weights of the edges are random independent variables, distributed according to a weight distribution Q(w)
- Spatial networks: A particular class of networks are those embedded in real space, i.e. networks whose nodes occupy a precise position in two- or three-dimensional Euclidean space, and whose edges are real physical connections. Typical examples are neural networks
- networks with strong geographical constraints are not small worlds
- example that power law degree distributions do not necessarily imply the small-world behavior
- random immunization in presence of geographical clustering might be more successful in controlling human epidemics than previously believed
- the structure of the world-wide airport network shows empirical evidence that both degree and betweenness distributions decay as truncated power laws

3. Static and dynamic robustness

- Robustness refers to the ability of a network to avoid malfunctioning when a fraction of its constituents is damaged. This is a topic of obvious practical relevance
- static robustness is meant as the effect of deleting nodes without the need to redistribute any quantity
- dynamical robustness refers to the case in which the dynamics of redistribution of flows should be taken into account
- The two types of robustness are similar in spirit, but while the first can be analytically treated, e.g. by using the tools of statistical physics such as percolation theory, the analytical treatment of the second case is harder and in almost all cases one has to rely on numerical simulations.
- The study of random failures in complex networks can be exactly mapped into a standard percolation problem
- the response to attacks of the scale-free network is similar to the response to attacks and failures of the random graph network
- The numerical results indicate that both the global and the local efficiency of scale-free networks are unaffected by the removal of some (up to 2%) of the nodes chosen at random. On the other hand, at variance with random graphs, global and local efficiencies rapidly decrease when the nodes removed are those with the highest connectivity
- The conclusion that follows immediately is that any graph with a finite amount of random mixing of edges for which the second moment diverges does not have a percolation threshold (this is made concrete right after this list)
- targeted deletion of nodes in uncorrelated scale-free networks is highly effective if compared to random breakdown, as previously anticipated by the numerical simulations by Albert et al. [275]. This is due to the critically high degree of just a few vertices whose removal disrupts the whole network. On the other hand, tolerance to attacks in correlated networks is still a problem entirely to be explored. This is due in part to the lack of generic models that produce networks with arbitrary degree-degree correlations
- in order to prevent the breakdown of scale-free networks, one has to find an optimal criterion that takes into account two factors: the robustness of the system itself under repeated failures, and the possibility of knowing in advance that the collapse of the system is approaching
- cascading failures occur much more easily in small-world and scale-free networks than in global coupling networks
- an appropriate randomness in path selection can shift the onset of traffic congestion, allowing the network to accommodate more packets
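
The statement above about a divergent second moment can be made concrete with the standard Molloy-Reed percolation criterion (a textbook result, not a quote from the survey): under random node removal the giant component disappears at a critical fraction f_c = 1 - 1/(κ - 1), with κ = <k²>/<k>. When <k²> diverges, as in scale-free networks with a degree exponent between 2 and 3, κ grows without bound and f_c tends to 1, i.e. there is no finite percolation threshold.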

4. Spreading processes

- Epidemic spreading vs rumour spreading
- epidemiological processes can be regarded as percolation like processes
- the long-term maintenance of the infection in a closed population is impossible in the SIR model, due to the depletion of susceptibles as the epidemic spreads through the population
- in both models the spread of infections is tremendously strengthened on scale-free networks. For such a reason, here we shall mainly concentrate on the SIR model (a toy simulation of it follows at the end of this list)
- Heterogeneous graphs with highly skewed degree distributions are particularly important to describe real transmission networks. For example, in the case of sexual contacts, which are responsible for the diffusion of sexually transmitted diseases, the degree distribution has been found to follow a power law
- a targeted immunization based on the node connectivity can restore a finite epidemic threshold and potentially eradicate a virus. This result also gives important indications on the best public health strategies to adopt in order to control and eradicate sexually transmitted diseases (such as HIV). The real problem here is that sexually promiscuous individuals are not always easy to identify
- vaccination of random acquaintances of randomly chosen individuals. This strategy is based on the fact that the probability of reaching a particular node by following a randomly chosen edge is proportional to the node's degree
- structured scale-free networks do not possess the small-world property
- the diseases are longest lived in assortative networks
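
To make the SIR dynamics above tangible, the sketch below runs a minimal discrete-time SIR process on a scale-free graph. It is a toy illustration, not the survey's formalism: the networkx library, the graph size, the infection probability beta and the one-step infectious period are all assumptions.

    import random
    import networkx as nx

    G = nx.barabasi_albert_graph(10000, 3)    # scale-free contact network (assumed)
    beta = 0.05                               # per-contact infection probability (assumed)

    infected = {random.choice(list(G))}       # a single initial infected node
    recovered = set()
    while infected:
        new_infected = set()
        for node in infected:
            for nb in G[node]:                # try to infect susceptible neighbours
                if nb not in infected and nb not in recovered and random.random() < beta:
                    new_infected.add(nb)
        recovered |= infected                 # nodes recover after one time step
        infected = new_infected               # susceptibles deplete, the epidemic dies out
    print(f"final epidemic size: {len(recovered) / G.number_of_nodes():.2%}")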



- desirable to spread the “epidemic” (the rumour) as fast and as efficiently as possible
- The standard rumor model is the so-called DK model
- denoted by ignorant (I), spreader (S) and stifler (R) nodes
- there is no “rumor threshold”, contrary to the case of epidemic spreading. The difference does not come from any difference in the growth mechanism of s(t) (the two are actually the same), but from the disparate rules for the decay of the spreading process
- contrary to epidemic modelling, rumor processes can be tailored depending on the specific application. In this respect, there are still many issues to be addressed, such as the influence of correlations and dynamical rewiring. But we already know that even for a very poorly designed rumor, the final fraction of stiflers is finite, at variance with epidemiological models
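
The toy simulation below illustrates the ignorant/spreader/stifler dynamics just described. It is my own sketch, not the exact DK formulation: the graph, the contact rule and a stifling probability of 1 are assumptions.

    import random
    import networkx as nx

    G = nx.erdos_renyi_graph(5000, 0.002)     # assumed contact graph
    state = {v: "I" for v in G}               # I: ignorant, S: spreader, R: stifler
    seed = random.choice(list(G))
    state[seed] = "S"
    spreaders = [seed]

    while spreaders:
        v = random.choice(spreaders)
        neighbours = list(G[v])
        if not neighbours:                    # an isolated spreader gives up
            state[v] = "R"
            spreaders.remove(v)
            continue
        u = random.choice(neighbours)
        if state[u] == "I":
            state[u] = "S"                    # rumor passed to an ignorant
            spreaders.append(u)
        else:
            state[v] = "R"                    # meeting a spreader/stifler stifles v
            spreaders.remove(v)

    stiflers = sum(s == "R" for s in state.values())
    print(f"final fraction of stiflers: {stiflers / G.number_of_nodes():.2%}")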

5. Synchronisation and collective dynamics

- Synchronization is a process wherein many systems (either equivalent or non-equivalent) adjust a given property of their motion due to a suitable coupling configuration, or to an external forcing
- We start with discussing the so called Master Stability Function approach
- Master stability function arguments are currently used as a challenging framework for the study of synchronized behaviors in complex networks, especially for understanding the interplay between complexity in the overall topology and local dynamical properties of the coupled units
- where a weighting in the connections has relevant consequences in determining the network's dynamics
- asymmetry in the coupling was shown to play a fundamental role in connection with synchronization of coupled spatially extended fields
- conveying the global complex structure of shortest paths into a weighting procedure gives in that range a better criterion for synchronization than solely relying on the local information around a node
- asymmetry here deteriorates the network propensity for synchronization
- which are the essential topological ingredients enhancing synchronization in weighted networks? The answer is that only the simultaneous action of many ingredients provides the best conditions for synchronization
- The first ingredient is that the weighting must induce a dominant interaction from hub to non-hub nodes
- A second fundamental ingredient is that the network contains a structure of connected hubs influencing the other nodes
- the need for a dominant interaction from hubs to non-hubs for improving synchronization, also for highly homogeneous networks
- weighted networks are the most promising framework for the study of how the architectural properties of the connection wiring influence the efficiency and robustness of the networked system in giving rise to a synchronized behavior
- the coupling strength at which the transition occurs is determined by the largest eigenvalue of the adjacency matrix
- the ability of a given network to synchronize is strongly ruled by the structure of connections
- some studies suggested that small-world wirings always lead to enhance synchronization as compared with regular topologies
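
The survey's analysis of synchronization rests on the master stability function; as a simpler hands-on illustration (the model and parameters are my assumptions, not the survey's), one can integrate Kuramoto phase oscillators coupled through a network and inspect the order parameter:

    import numpy as np
    import networkx as nx

    n = 200
    G = nx.watts_strogatz_graph(n, 6, 0.1)    # assumed coupling topology
    A = nx.to_numpy_array(G)
    omega = np.random.normal(0.0, 0.5, n)     # natural frequencies
    theta = np.random.uniform(0.0, 2 * np.pi, n)
    K, dt = 0.5, 0.05                         # coupling strength, Euler time step

    for _ in range(4000):
        # coupling term for node i: sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)

    r = abs(np.exp(1j * theta).mean())        # order parameter: r = 1 means full sync
    print(f"order parameter r = {r:.2f}")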

6. Applications

- the larger the clustering coefficient, the more easily the development of consensus takes place
- the role of complex topologies in opinion dynamics is still to be better investigated
- network structure can be as important as the game rules in order to maintain cooperative regimes
- the Internet has a relatively high clustering coefficient, about 100 times larger than that of an ER random graph of the same size. Interestingly, the clustering properties increase with time. Moreover, the clustering coefficient is a power-law decaying function of the degree
- Finally, the betweenness distributions follow a power-law
- For data mining on the WWW, it is necessary to estimate the accuracy and value of such information. Interestingly, in the commercial search engine Google such estimation is performed by the so-called “PageRank” measure
- it is expected that the large-scale network approach may lead to new insights on various longstanding questions on life, such as robustness to external perturbations, adaptation to external circumstances, and even hidden underlying design principles of evolution
- Similarly to metabolic networks, scale-free properties, high-clustering and small-world properties with hierarchical modularity and robustness against random attacks were observed in protein–protein interaction networks
- it has been argued that the biological functional organization and the spatial cellular organization are correlated significantly with the topology of the network, by comparing the connectivity structure with that of randomized networks
- Protein crystallography reveals that the fundamental unit of the protein structure is the domain
- Up to now, we have presented evidences supporting that scale-free distributions, small-world, and high clustering seem to be universal properties of cellular networks
- One of the basic properties of scale-free networks is the existence of hub nodes
- The scale-freeness property of a network might indicate growing and preferential attachment as evolutionary principles that led to the formation of the network as we know it nowadays
- Suppressing a link between highly connected proteins and favoring links between highly connected and low-connected pairs of proteins decreases the likelihood of cross talks between different functional modules of the cell, and increases the overall robustness of the network against localized effects of deleterious perturbations. This feature (combined with scale-free and small-world property) is considered nowadays a key property to explain biological robustness
- Each network motif appears to carry out a specific dynamical function in the network, as it has been experimentally and theoretically demonstrated
- the brain organization is ruled by optimizing principles of resource allocation and constraint minimization
- This plasticity renders neurons able to continuously change connections, or establish new ones according to the computational and communication needs
- The literature usually refers to three different types of connectivity: neuroanatomical, functional and effective
- the description of neural systems may be improved by using weighted networks
- Within the framework of learning of artificial neural networks, it has recently been shown that a small-world connectivity may yield a learning error and learning time lower than those obtained with random or regular connectivity patterns
- Some brain diseases, such as epilepsy, are the result of an abnormal and, sometimes, abrupt synchronization of a large number of neural populations
- Many open questions remain, e.g. the presence of motifs is a relevant feature in the structure of cortical networks, but the role of motifs in brain dynamics remains unclear
- learning could induce changes in the wiring scheme
- Synchronization has been found to be enhanced by a small-world wiring structure. This finding has suggested a plausible paradigm to explain the emergence of some neural phenomena, such as epileptic seizures, characterized by a sudden and strong synchronization. Unfortunately, there are presently no conceptual or computational works to understand the role of connectivity in the control of these synchronized states

7. Other topics

- finding the communities within a network is a powerful tool for understanding the structure and the functioning of the network, and its growth mechanisms
- spectral graph partitioning
- A class of algorithms that work much better when there is no prior knowledge on the number of communities is the hierarchical clustering analysis used in social networks analysis
- a series of algorithms based on the idea of iteratively removing edges with a high centrality score have been proposed. Such methods use different measures of edge centrality, such as the random-walk betweenness and the current-flow betweenness (a sketch of this divisive scheme appears at the end of this section)
- fast method based on modularity
- The study of the eigenvector associated with the second smallest eigenvalue is of practical use only when a clear partition into two parts exists, which is rarely the case
- other algorithms use voltage drops, the graph is considered an electric circuit
- The ability to navigate and search for shortest paths in a network without knowledge of its whole topological structure is a relevant issue, not only in social systems, but also in optimization of information finding from the World Wide Web, traffic way-finding in a city, transport of information packets on the Internet, or diffusion of signaling molecules in a biological cell
- Analytical and numerical results show that the information does not distribute uniformly in heterogeneous networks
- Since in a highly heterogeneous network, such as the Internet, the highest-degree nodes are connected to a significant fraction of all nodes in the network, the agent needs only a few steps to find a node that is a neighbor of the target q
- This means that in the small world regime, the average search time increases very slowly with the network size, in comparison with regular and random networks, and also with respect to a strategy based on random walk
- For a single search problem the optimal network is clearly a highly polarized starlike structure. This structure is indeed very simple and efficient in terms of searchability, since the average number of steps to find a given node is always bounded, independently of the size of the system. However, the polarized starlike structure becomes inefficient when many search processes coexist in the network, due to the limited capacity of the central node
- This means that only two classes of networks can be considered as optimal: starlike configurations when the number of parallel searches is small, or homogeneous configurations when the number of parallel searches is large.
- agents making use of local information of the net topology are more efficient in such dynamical environment than agents making use of the global information, thus suggesting that this can explain what occurs in multi-agent systems for the processes of allocation of distributed resources
- Finally, for high densities and small noises, the motion of walkers becomes ordered on the same scale of the system size, and all walkers move in the same spontaneously chosen direction
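
Returning to the community-detection methods listed at the beginning of this section, the sketch below applies the divisive scheme based on iteratively removing high-betweenness edges (the Girvan-Newman algorithm, as implemented in networkx). The toy graph and stopping at the first split are my assumptions:

    import networkx as nx
    from networkx.algorithms import community

    G = nx.karate_club_graph()                        # classic toy social network
    first_split = next(community.girvan_newman(G))    # communities found once enough
                                                      # high-betweenness edges are removed
    for i, nodes in enumerate(first_split):
        print(f"community {i}: {sorted(nodes)}")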







Student Notes: 3 Papers on complex networks vulnerabilities

This post compiles my notes on three papers on vulnerabilities in complex networks:
- "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado and Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007).

- "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000).

- "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015).

As always, I also add my usual disclaimer when dealing with Student Notes:

The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of these papers.

Why does this Information Security blog touch the field of Network Science? Because I am convinced that we can apply Network Science learning points to our Information Security real-life scenarios.

As always, a little disclaimer: These notes do not replace the reading of the papers; they are just that, some student notes (or fragments) of the papers.


On the Paper titled "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado and Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007):

- The paper defines the vulnerability of a complex network as the capacity of a graph to maintain its functional performance under random damages or malicious attacks.

- Network malfunctioning could be caused by the deletion of a node and all the links ending in it or by the deletion of one or several links between nodes. This paper focuses on the deletion of links (and not nodes).

- A proper measure of vulnerability should refer to measures of the link betweenness. In general we have to consider the full multiscale sequence of betweenness coefficients (a higher betweenness means less vulnerability); a sketch computing link betweenness follows these notes.

- A geodesic is the shortest path between two nodes.
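
Since the paper's multiscale vulnerability measure is built on link betweenness, the small sketch below (my illustration, assuming networkx; it only ranks links and does not compute the paper's full multiscale coefficient sequence) shows how those betweenness coefficients can be obtained:

    import networkx as nx

    G = nx.barabasi_albert_graph(100, 2)       # assumed toy network
    eb = nx.edge_betweenness_centrality(G)     # shortest paths through each link
    worst = max(eb, key=eb.get)                # the link most shortest paths rely on
    print("highest-betweenness link:", worst, f"({eb[worst]:.3f})")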

On the Paper titled "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000):

- This paper demonstrates that error tolerance is not shared by all redundant systems; rather, it is displayed by a class of inhomogeneously wired networks called scale-free networks.

- However, error tolerance comes at a high price: these networks are extremely vulnerable to attacks. The removal of the nodes that play the most important role in achieving the network's connectivity could affect functional performance.

- Complex (both empirical and theoretical) networks can be divided into two major classes based on P(k), i.e. the probability that a node in the network is connected to k other nodes. In the first case, P(k) is peaked at an average value and decays exponentially for large k (an exponential tail). Typical examples of these exponential (homogeneous) networks are the random graph model of Erdos and Renyi and the small-world model of Watts and Strogatz. Each node has approximately the same number of links. In the second case, there are inhomogeneous networks (scale-free networks) for which P(k) decays as a power law (a power-law tail), i.e. P(k) = α k^(-γ). They do not have a characteristic scale. Highly connected nodes are statistically significant in scale-free networks.

- Connectivity in a homogeneous network follows a Poisson distribution peaked at the average degree <k> and decaying exponentially for k >> <k>.

- The scale-free model incorporates two common ingredients in real networks: growth and preferential attachment.

- The diameter d is the average length of the shortest paths between any two nodes in the network. It describes the interconnectedness of a network. Networks with a very large number of nodes can have a very small diameter.

- Error tolerance studies the changes in the diameter when a small fraction f of the nodes is removed.

- In exponential networks, the diameter increases monotonically with f, since all nodes have approximately the same number of links, they contribute equally to the network's diameter.

- In scale-free networks, the diameter remains unchanged under an increasing level of errors. The removal of "small nodes" does not alter the path structure.

- Attack survivability: when the most connected nodes are eliminated, the diameter of the scale-free network increases rapidly (a toy version of this experiment follows these notes).

- In exponential networks, for fractions f > 0.28 we have cluster sizes S close to 0. This is qualitatively similar to the percolation critical point.

- In scale-free networks, the threshold is extremely delayed as long as it is not a targeted attack (in that case f_c = 0.18).
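
A toy version of the error/attack experiment (my own sketch, assuming networkx; sizes and the removed fraction f are arbitrary) compares random removal with hub-targeted removal, using the average shortest path length of the giant component as the paper's diameter d:

    import random
    import networkx as nx

    def giant_path_length(G):
        giant = max(nx.connected_components(G), key=len)
        return nx.average_shortest_path_length(G.subgraph(giant))

    G = nx.barabasi_albert_graph(2000, 2)      # scale-free test network
    f = 0.05
    n_remove = int(f * G.number_of_nodes())

    G_err = G.copy()                           # random failures
    G_err.remove_nodes_from(random.sample(list(G), n_remove))

    G_att = G.copy()                           # targeted attack on the hubs
    G_att.remove_nodes_from(sorted(G, key=G.degree, reverse=True)[:n_remove])

    print(f"random failures: d = {giant_path_length(G_err):.2f}")
    print(f"targeted attack: d = {giant_path_length(G_att):.2f}")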

On the Paper titled "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015):

- This paper proposes a dynamical definition of network robustness based on Information Theory.

- They define a failure as a temporal process, i.e. a sequence of failure events. Robustness then measures the dissimilarities between topologies after each time step of the sequence.

- Robustness is the ability of the network to continue performing after failures or attacks.

- The most popular methodology to measure robustness is based on percolation theory and on the size of the biggest connected component. However, depending on the network structure, it is possible to attack a great part of it while these measures remain blind to the changes.

- This paper proposes to measure network robustness based on the Jensen-Shannon divergence. It quantifies the topological damage at each time step due to failures, without considering the consequences of the dynamical process operating on the network.

- The distance between a given topology and itself after a failure quantifies robustness.

- The Jensen-Shannon divergence between two probability distributions P and Q is defined as the Shannon entropy of the average minus the average of the entropies.
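
That definition translates directly into a few lines of numpy (my sketch; the base-2 entropy and the toy distributions are assumptions):

    import numpy as np

    def shannon_entropy(p):
        p = p[p > 0]                           # ignore zero-probability bins
        return -(p * np.log2(p)).sum()

    def jensen_shannon(P, Q):
        M = (P + Q) / 2                        # the average distribution
        return shannon_entropy(M) - (shannon_entropy(P) + shannon_entropy(Q)) / 2

    P = np.array([0.5, 0.5, 0.0])
    Q = np.array([0.1, 0.4, 0.5])
    print(f"JSD(P, Q) = {jensen_shannon(P, Q):.4f}")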

- The robustness measure proposed by this paper depends not only on the network topology but on the sequence of failures over time, aiming to quantify the vulnerability of a given structure under a series of deterministic or stochastic failures.

- The computational cost of the probability distribution function (PDF) update depends on the link removed. New algorithms such as ANF or HyperANF (based on HyperLogLog counters) offer a fast and precise approach and obtain very good approximations of the distance probability distribution for graphs with millions of nodes in a few seconds.


- The network's average degree and its minimum and maximum degree are immediately obtained from the degree distribution. The network's efficiency, diameter, average path length, fraction of disconnected pairs of nodes and other distance features can be obtained from the distance distribution.

- The knowledge of critical elements is of great importance to plan strategies either to protect or to efficiently attack networks.

- The problem of finding the best sequence of links to destroy the network can be solved through combinatorial optimization approaches.

- This method can efficiently work with disconnections, since the distance PDF is able to acknowledge the fraction of disconnected pairs of nodes.

Happy robustness study!



Student Paper Notes: Power-law distributions in empirical data by A. Clauset, C.R. Shalizi and M.E.J. Newman


This time the student notes I post come from the reading of a scientific paper written by Aaron Clauset, from the Santa Fe Institute, Cosma Rohilla Shalizi, from Carnegie Mellon University, and Mark Newman, Professor of Physics at the University of Michigan.

The link to the paper is this one. It is a 43-page paper with 70 references on Power-Law distributions in empirical data. It complements the previous notes that I published on the Network Science book authored by Albert-László Barabási and the student paper notes I published on the structure and function of complex networks by Mark Newman.

This paper deals with power-law distributions and the difficulty of characterising them. The authors present a "principled statistical framework to deal with power-law behaviour in empirical data". The paper has a considerable mathematical component.

The following points are notes mostly literally extracted from this paper. The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of this paper.

Why does this Information Security blog touch the field of Network Science? Because I am convinced that we can apply Network Science learning points to our Information Security real-life scenarios.

As always, a little disclaimer: These notes do not replace the reading of the paper, they are just that, some student notes (or fragments) of the paper.

Many empirical quantities cluster around a typical value. Those distributions can be well characterised with their mean and standard deviation (the square root of its variance).

Population of cities, earthquake intensities and power outage sizes are considered to have power-law distributions.

A quantity x obeys a power law if its probability distribution p(x) is proportional to x^(-α), where α is a constant exponent (also named the scaling parameter).

Alpha typically lies between 2 and 3. Typically, the power law applies only for values greater than some minimum Xmin i.e. the tail of the distribution follows a power law.

How can we recognise a power law?

There are two flavours of power-law distributions: Continuous distributions and discrete ones. Formulas for continuous distributions tend to be easier.

In many cases the Cumulative Distribution Function (CDF) of a power-law distributed variable is important. The visual form of the CDF is more robust than the Probability Density Function (PDF).

A common way to probe for power-law behaviour is to measure x, construct a histogram representing its frequency distribution and plot the histogram on doubly logarithmic axes. The distribution should approximately fall on a straight line, with α being the absolute slope of the line. Unfortunately, this method creates systematic errors. It is a necessary but not sufficient condition.

We can use maximum likelihood estimators (MLEs) to approximate alpha. We assume that alpha is > 1.
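
For the continuous case the MLE has a closed form, alpha_hat = 1 + n / sum(ln(x_i / Xmin)), applied to the tail x >= Xmin. Below is a minimal numpy sketch; the synthetic data, generated by inverse-transform sampling, and the seed are my assumptions:

    import numpy as np

    def alpha_mle(x, xmin):
        tail = x[x >= xmin]                    # fit only the tail above xmin
        return 1 + len(tail) / np.log(tail / xmin).sum()

    rng = np.random.default_rng(0)
    u = rng.random(10000)
    x = 1.0 * (1 - u) ** (-1 / (2.5 - 1))      # synthetic power law with alpha = 2.5
    print(f"alpha_hat = {alpha_mle(x, xmin=1.0):.3f}")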

It is important to identify the value Xmin. One approach is the Bayesian Information Criterion (BIC). Another method, proposed by Clauset et al., consists of choosing an Xmin that makes the probability distribution of the measured data and the best-fit power-law model as similar as possible. A way to quantify the distance between two probability distributions is the Kolmogorov-Smirnov or KS statistic. This is the maximum distance between the CDF of the data and the fitted model.
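
A compact version of that Xmin-selection procedure, reusing alpha_mle from the previous sketch (the candidate set and the continuous power-law CDF are my assumptions):

    import numpy as np

    def ks_distance(x, xmin, alpha):
        tail = np.sort(x[x >= xmin])
        emp = np.arange(1, len(tail) + 1) / len(tail)   # empirical CDF of the tail
        fit = 1 - (tail / xmin) ** (1 - alpha)          # CDF of the fitted power law
        return np.abs(emp - fit).max()                  # the KS statistic

    def best_xmin(x):
        candidates = np.unique(x)[:-1]     # every observed value but the largest
                                           # (slow for large samples; subsample in practice)
        scores = [(ks_distance(x, xm, alpha_mle(x, xm)), xm) for xm in candidates]
        return min(scores)[1]              # the Xmin minimizing the KS distance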

Other, less predominant methods are the Hill plot and simply limiting the analysis to the largest observed samples.

Interestingly, regardless of the true distribution from which data was drawn, we can always fit a power law. The question is whether the fit is a good match to the data.

The basic approach is to compare the KS distance between the empirical data and the fitted power-law form with the distances obtained for synthetic data sets drawn from a true power-law distribution. With large statistical samples we have a bigger chance to verify these hypotheses.

Goodness of fit test

It generates a p-value that quantifies the plausibility of the hypothesis. The p-value is defined as the fraction of the synthetic distances that are larger than the empirical distance. If p is large, the distance can be attributed to statistical fluctuations; if p is small, the model is not a plausible fit.

For each synthetic data set we compute the KS statistic relative to the best-fit power law for that data set, not relative to the original distribution from which the data set was drawn.

Typically, the power law is ruled out if p <= 0.1. It is a necessary but not sufficient condition. High p-values should be treated with caution when n is small.

Let's remember we fit the power-law form to only the part of the distribution above Xmin.

We would also need to rule out the competing exponential or log-normal distributions. We would then act similarly with synthetic values coming from several plausible competing distributions and obtain the respective p-values. If those p-values for the competing distributions are small, then they are discarded (although not with 100% certainty).

The likelihood ratio test

A test that is easier to implement than the KS distance test for comparing two distributions is the likelihood ratio test. The distribution with the higher likelihood is the better fit (or, if we calculate the logarithm R of the ratio, it can be positive, negative, or zero in case of a tie). If the associated p-value is small, the sign of R is a reliable indicator of which model is the better fit to the data.

A distribution is nested if it is a subset of the other distribution type. When distributions are nested, the larger family of distributions provides a fit at least as good as the subset one.

The p-value of the likelihood ratio test is effective at identifying cases in which the data are insufficient to make a firm distinction.

Application to real world data
In general, it is extremely difficult to tell the difference between log-normal, stretched exponential and power-law behaviours.

Happy mathematical description!


Student Paper Notes: The structure and function of complex networks by M.E.J. Newman

This time the student notes I post come from the reading of a scientific paper written by Mark Newman, Professor of Physics at the University of Michigan.

The link to the paper is this one. It is a 58-page paper with 419 references on complex networks; as an enticing intro to suggest its reading, nothing more is required. It complements the previous notes that I published on the Network Science book authored by Albert-László Barabási. Compared to the Network Science book, this paper is slightly more condensed.

This paper contains relevant historical references on the field of Network Science and it has also a considerable mathematical component.

The following points are notes mostly literally extracted from this paper. The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of this paper.

Why does this Information Security blog touch the field of Network Science? Because I am convinced that we can apply Network Science learning points to our Information Security real-life scenarios.

As always, a little disclaimer: These notes do not replace the reading of the paper, they are just that, some student notes (or fragments) of the paper.

Introduction

- This paper reviews recent and non-recent work on the structure and function of networked systems.
- Vertices are nodes and edges are links.
- Degree: The number of edges connected to a vertex.
- Field of research: Consideration of large-scale statistical properties of graphs.
- The body of theory of Network Science has three aims: First, to find statistical properties like path lengths and degree distributions that define structure and behaviour of networked systems and to suggest appropriate ways to measure these properties. Second, to propose models of networks and third, to predict the behaviour of networked systems.
- A hyperedge joins more than two vertices together.

Networks in the real world

- An acyclic graph has no closed loops. The WWW is in general cyclic.
- The small-world effect: Most pairs of vertices in most networks seem to be connected by a short path through the network.

Properties of networks

- Network transitivity (or, sometimes, clustering): "the friend of your friend is likely to be your friend".
- The clustering coefficient C is the mean probability that the friend of your friend is also your friend.
- The clustering coefficient C measures the density of triangles in the network.
- The probability that two vertices point to each other in a directed network is called reciprocity. In a directed network, edges have a direction.

- In a random network, in which each edge is present or absent with equal probability, the degree distribution is binomial or Poisson in the limit of large graph size.
- Real-world networks are rarely random in their degree distributions. The degrees of the vertices in most real networks are highly right-skewed, i.e. their degree distributions have a long right tail of values that are far above the mean.
- Networks with power-law degree distributions are referred to as scale-free (degree distribution).
- The maximum degree of a vertex in a network will in general depend on the size of the network.

- Network resilience: Distance was almost entirely unaffected by random vertex removal. However, when removal targets highest degree vertices, it has devastating effects. An example of this is the Internet.

- Assortative mixing or homophily: selective linking. A classical example of assortative mixing in social networks is mixing by race.
- In some networks, high-degree vertices tend to associate with other high-degree vertices. Most social networks appear to be assortative, other types of networks (biological, technological and information) appear to be disassortative.
- The traditional method for extracting community structure from a network is cluster analysis (also called hierarchical clustering).
- Community structure is a common network property.

- Navigation: Stanley Milgram's small-world experiment. Short paths exist in the network and network members are good at finding them. This is important e.g. for efficient databases structures or better peer-to-peer computer networks.
- Betweenness centrality of a vertex in a network is the number of geodesic paths between other vertices that run through it. Betweenness centrality can also be viewed as a measure of network resilience.

Random graphs

- Poisson random graphs (or Bernoulli graphs) are not adequate to describe some important properties of real-world networks.
- The most important property of a random graph is that it possesses a phase transition from a low-density phase, with few edges, in which all components are small (having an exponential size distribution and finite mean size), to a high-density phase in which an extensive fraction of all vertices are joined together in a single giant component.
- The random graph reproduces well the small-world effect; however, in almost all other respects, the properties of a random graph do not match those of real-world networks.
- A random graph has a Poisson degree distribution, entirely random mixing patterns and no correlation between degrees of adjacent vertices, no community structure and, finally, navigation is not possible using local algorithms.
- The property of real graphs that is simplest to add to random graphs is a non-Poisson degree distribution, i.e. the "configuration model". An example of this would be a network with a power-law degree distribution.
- Other examples are: directed graphs, bipartite graphs (which have two types of vertices and edges running only between vertices of unlike types - these ones are sometimes studied using "one-mode" projections).
- An additional random graph model for degree correlation is the exponential random graph. A more specialised model is proposed by Maslov.

Exponential random graphs and Markov graphs

- The only solvable random graph models that currently incorporate transitivity are the bipartite and the community-structured models plus certain dual graph models.
- Progress in understanding transitivity require different (and new) models.


The small-world model

- A less sophisticated but more tractable model of a network with high transitivity. However, the degree distribution of the small-world model does not match most real-world networks very well.

- A lot of attention has been given to the average geodesic path length of the small-world model.

Models of network growth

- The best studied class of network models aim to explain the origin of highly skewed degree distributions.
- Price probably described the first example of a scale-free network: the network of citations between scientific papers.
- Power laws arise when "the rich get richer" i.e. the amount you get goes up with the amount you already have. Price named that "cumulative advantage". Barabasi and Albert named that "preferential attachment".
- Notation: z denotes the (mean) degree, while m denotes the total number of edges in the graph.
- The mechanism of cumulative advantage proposed by Price is widely accepted as the explanation for the power-law degree distribution in real-world networks such as the WWW, the citation network and possibly the Internet.
- The difference between the Price model and the Barabasi and Albert model is that in the latter one the edges are undirected, so there is no distinction between in and out degree. The Barabasi and Albert model is simpler and slightly less realistic.
- There is a correlation between the age of vertices and their degrees, with older vertices having higher mean degree.
- Krapivsky and Redner show that there are correlations between the degrees of adjacent vertices in the model.
- The assumption of linear preferential attachment seems to be a reasonable approximation to the truth.
- The real WWW does not present the correlations between age and degree of vertices as found in the Barabasi and Albert model. This is, according to Adamic and Huberman, because the degree of vertices is also a function of their intrinsic worth.
- Bianconi and Barabasi have presented an extension of the Barabasi and Albert model: Each newly appearing vertex is given a "fitness" that represent its attractiveness i.e. its propensity to accrue new links.
- Price's model is intended to be a model of a citation network. Citation networks are really directed, acyclic and all vertices (approximately) belong to a single component, unless they do not cite and are not cited.
- Simple growth model by Callaway et al.: Vertices normally have degree zero when they are first added to the graph. This model does not show preferential attachment so no power-law distributions but exponential.
- Some networks appear to have power-law degree distributions but they do not show preferential attachment e.g. biochemical interaction networks. These networks could grow by copying vertices.

Processes taking place on networks

- Looking at the behaviour of models of physical, biological, social processes going on these networks.
- In Physics, vertices are sites and edges are bonds.
- A percolation process is one in which vertices or edges on a graph are randomly designated either "occupied" or "unoccupied", asking about various properties of the resulting vertex patterns.
- The problem of resilience to random failure of vertices in a network is equivalent to a site percolation process on the network. The number of remaining vertices that can still successfully communicate is precisely the giant component of the corresponding percolation model.
- Networks with power-law degree distributions are highly susceptible to targeted attacks: one only need to remove a small percentage of vertices to destroy the giant component entirely.
- Cascading failures: Watts provided a simple model for cascading failures as a type of percolation. It could be solved using generating function methods similar to those for simple vertex removal.

Epidemiological processes

- The SIR model divides the population into three classes: Susceptible, Infected and Recovered (with permanent illness immunity).
- Diseases do not always spread on scale-free networks.
- Vaccination can be modeled as a site percolation process.
- As networks tend to be particularly vulnerable to the removal of their highest degree vertices, targeted vaccination is expected to be particularly effective.
- It is not always easy to find the highest degree vertices in a social network.
- One is more likely to find high-degree vertices by following edges than by choosing vertices at random.
- Therefore, a population can be immunised by choosing a random person, vaccinating a friend of that person, and then repeating the process.
- The SIS model: an example is computer viruses.
- At least in networks with right-skewed degree distributions, propagation of the disease turns out to be relatively robust against random vaccinations but highly susceptible to vaccination of the highest-degree individuals.

Exhaustive search 

- A page is important if it is linked to by many pages.

Guided search

- Performs small special-purpose crawls.
- It relies on the assumption that pages containing a particular information on a particular topic tend to be clustered together in local regions of the graph.

Network navigation

- The objective is to design networks structures that make a particular search algorithm perform well.
- The "social distance" is measured by the height of the lowest level in the tree at which they are both connected. In other words, how far one must go up the tree to find the lowest “common ancestor” of the pair.

Phase transition on networks

- E.g. models of opinion formation in social networks.

Other processes on networks

- Voter models, diffusion, genetic regulatory models, etc.

- The study of complex networks is still in its infancy. 
 

Happy networking!


  




Student Book Notes: Network Science by Albert-L. Barabasi - A powerful new field

Attending the 2015 summer conferences organised by @cigtr I came across a book authored by Albert-László Barabási titled "Network Science". Actually, it was Mathematics Professor Regino Criado who pointed me to the name of Barabási.

The book opens minds and new knowledge fields using Mathematics. It is worth reading and studying! Actually, all chapters and other resources can be found on the book's site. Thanks to the author for making it freely available under the Creative Commons licence.

I read all ten chapters. I highlighted some sentences from each of the chapters. I enumerate some of those highlighted points as if this post were a brief collection of notes on the book, hoping that more than one of my blog's readers will decide to embark on reading the book after going through this introductory post. Network Science students could use this post as a quick (and very initial) cheat sheet.

Happy networking! 

Chapter X - Preface

Understanding networks today is required to understand today's world. This section describes how the text book is used in a Network Science class.

Chapter 0 - Personal Introduction

This chapter describes how the author got trapped by the beauty and the importance of networks. He already mentions contributions such as Béla Bollobás's on random graphs and the work of Erdos and Renyi. It also talks about the difference between social scientists and graph theorists.

Key introductory statements:

- "A simple model, realying on growth and preferential attachment could explain the power laws spotted on some networks".

- "Scale-free networks proved to be surprisingly resistant to failures but shockingly sensitive to attacks".


Chapter 1 - Intro

- "The interplay between network structure and dynamics affects the robustness of the system".
- In a complex system it is difficult to derive the collective behaviour from the knowledge of the system's components.
- "Most networks are organised by the same principles".
- "The most succesful companies of the 21st Century base their technology and business model on networks".
- Epidemic transmission is one real example of the applicability of this new maths-based science.

Chapter 2 - Graph Theory

- "Graph theory is the mathematical scaffold behind network science".
- A path goes through each node only once. The quoted rule refers to Eulerian paths, which traverse each link exactly once: "A path cannot exist on a graph that has more than two nodes with an odd number of links".
- Network parameters: Number of nodes, number of links, directness or undirectness of links.
- The choice of nodes and links is important when describing a network.
- Node degree is the number of links to other nodes.
- Average degree in a network: An important variable to play with.
- In directed networks, we talk about incoming degree and outgoing degree.
- Total number of links is denoted by L.
- Average degree: <k> = 2L/N, where N is the total number of nodes.
- "Degree distribution provides the probability that a randomly selected node in the network has degree k".
- "The number of degree-k nodes can de obtained from the degree distribution as N(k)=Np(k)".
- "The adjancency matrix of an undirected network is symmetric".
- "For weighted networks the elements of the adjancency matrix carry the weight of the link".
- Metcalfe's law states that the value of a network is proportional to the square of the number of its nodes.
- Bipartite networks can be divided into two disjoints sets of nodes such that each link connect a node from a set to a node from the other set.
- A path length consists of the number of links it contains.
- In networks physical distance is replaced by path length.

Note: I will not use "" signs in this post anymore. All points are extracted from the mentioned book. Please consider the existence of "" signs, i.e. literal or almost literal words coming from the reviewed book, in all points. I also informed Albert-László Barabási about the publication of this post.

- Distance between nodes changes if the network is directed, i.e. d(A,B) may not be equal to d(B,A).
- Connected and disconnected networks (disconnected if there is at least a pair of nodes with infinite distance).
- A bridge is any link that, if cut, disconnects the network.
- The clustering coefficient measures the network's local link density.
- The maximal distance in a network is the diameter. The breadth-first-search algorithm helps find it.
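
A plain-Python sketch of that idea (the toy adjacency list is an assumption): run a breadth-first search from every node and keep the largest distance found.

    from collections import deque

    def bfs_distances(adj, source):
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:              # first visit = shortest hop count
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}   # toy undirected graph
    diameter = max(max(bfs_distances(adj, s).values()) for s in adj)
    print("diameter:", diameter)               # 3, along the path 3-1-2-4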

Chapter 3 - Random networks

- A random network is a collection of N nodes where each node pair is connected with probability p.
- A cocktail party chitchat scenario is an example of a random network.
- The degree distribution of a random network has the form of a Poisson distribution (see the sketch at the end of this chapter's notes).
- The random network model does not capture the degree distribution of real networks. Nodes in random networks have comparable degrees, forbidding hubs (highly connected nodes).
- We have a giant component if and only if each node has on average more than one link.
- Evolution of a random network as a function of the average degree k: subcritical, critical, supercritical and connected regimes.
- The small world phenomenon implies that the distance between two randomly chosen nodes in a network is short.
- Most real networks are in the supercritical regime.
- Real networks have a much higher clustering coefficient than expected for a random network of similar N and L.
- Real networks are not random.
- The random network model is important in network science. Features of real networks not present in random networks may represent some signature of order.
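
The sketch below (my illustration, assuming networkx; N and p are arbitrary) generates a G(N, p) random network and compares its empirical degree distribution with the Poisson prediction p_k = e^(-<k>) <k>^k / k!:

    import math
    from collections import Counter
    import networkx as nx

    N, p = 5000, 0.002
    G = nx.erdos_renyi_graph(N, p)
    mean_k = 2 * G.number_of_edges() / N       # empirical average degree

    counts = Counter(d for _, d in G.degree())
    for k in sorted(counts):
        poisson = math.exp(-mean_k) * mean_k ** k / math.factorial(k)
        print(f"k={k:2d}  empirical={counts[k] / N:.4f}  poisson={poisson:.4f}")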

Chapter 4 - The scale-free property

- Let's remember that in a random network there are no highly connected nodes (hubs).
- The existence of hubs (e.g. in the WWW) is a signature of a network organising principle called the scale-free property.
- The degree distribution of a scale-free network follows a power law, and not a Poisson distribution as in random networks.
- A scale-free network has a large number of small degree nodes, larger than in a random network.
- In a Poisson distribution (random network), most of the nodes have the same amount of links (the size of the largest node grows logarithmically or slower with N, the number of nodes).
- In a power-law distribution (scale-free network) many nodes have only a few links and there are a few hubs with a large number of links (widely different degrees, spanning several orders of magnitude).
- The larger the network, the larger the degree of its biggest hub (it grows polynomially with the network size).
- Random networks have a scale: Nodes have comparable degrees and the average degree serves as the scale of a random network.
- The scale-free property is missing in those networks that limit the number of links that a node can have.
- Ultra-small world property: distances in a scale-free network are smaller than in an equivalent random network.
- The bigger the hubs, the more effectively they shrink distances between nodes.
- Scale-free networks are ultra-small when the value of the degree exponent is between 2 and 3.
- The configuration model, the degree-preserving randomization and the hidden parameter model can generate networks with a pre-defined degree distribution.
- Erdos-Renyi and Watts-Strogatz described exponentially bounded networks. They lack outliers. Most nodes have comparable degrees (e.g. the power grid and highway networks). In these networks, a random network model is a starting point.
- In fat-tailed distributions, a scale-free network offers a better approximation.

Chapter 5 - The Barabasi-Albert model

- In scale-free networks, nodes prefer to link with the most connected nodes (preferential attachment).
- Growth and preferential attachment are responsible, and simultaneously needed, for the emergence of scale-free networks (a bare-bones implementation follows at the end of this chapter's notes).
- Older nodes have an advantage to become hubs over the time.
- The Barabasi-Albert model generates a scale-free network with degree exponent γ = 3.
- To date all known models and real systems that are scale-free have preferential attachment.
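
A bare-bones implementation of growth plus preferential attachment (my sketch, not the book's code; sampling from a pool in which each node appears once per incident edge makes the choice exactly degree-proportional):

    import random

    def ba_graph(n, m):
        edges, pool = [], []
        targets = list(range(m))               # the first new node links to nodes 0..m-1
        for new in range(m, n):                # growth: add one node at a time
            edges += [(new, t) for t in targets]
            pool += targets + [new] * m        # degree-proportional sampling pool
            targets = set()
            while len(targets) < m:            # m distinct preferential picks
                targets.add(random.choice(pool))
            targets = list(targets)
        return edges

    edges = ba_graph(1000, 3)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    print("max degree:", max(deg.values()))    # hubs emerge from preferential attachment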

Chapter 6 - Evolving networks

- The Bianconi-Barabasi model can account for the fact that nodes with different internal characteristics acquire links at different rates.
- The growth rate of a node is determined by its fitness. This model allows us to calculate the dependence of the degree distribution on the fitness distribution.
- Fitness distribution is typically exponentially bounded. That means that fitness differences between different nodes are small. With time these differences are magnified resulting in a power law degree distribution.
- Bose-Einstein condensation: the fittest node grabs a finite fraction of the links, turning into a super hub and creating a hub-and-spoke topology (the rich-gets-richer process or winner-takes-all phenomenon), and the network loses its scale-free nature.
- In most networks, nodes can disappear.
- As long as a network continues to grow, its scale-free nature can persist.

Chapter 7 - Degree correlation

- A way to go deeper into understanding network structures based on maths.
- In some networks, hubs tend to have ties to other hubs. That is an assortative network. In disassortative networks, hubs avoid each other.
- A network displays degree correlations if the number of links between the high and low-degree nodes is systematically different from what is expected by chance.
- There is a conflict between degree correlation and the scale-free property. Hubs should be linked among each other with more than one link.
- Assortative mating reflects the tendency of individuals to date or marry individuals that are similar to them.

Chapter 8 - Network robustness

Once the fraction of removed nodes reaches a critical threshold in a random network, the network abruptly breaks into disconnected components. Percolation theory can be used to describe this transition in random or Erdos-Renyi networks, i.e. networks whose nodes have equal or comparable degrees.

Real networks show robustness against random failures. Scale-free networks show a greater degree of robustness against random failures. However, an attack that targets a hub can easily destroy a scale-free network. Depending on the network (the WWW, or a disease propagation), this can be bad or good news.
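
A quick numerical check of that contrast (my sketch, assuming networkx; the network and the removed fraction are arbitrary) compares the giant component after random failures and after a hub-targeted attack:

    import random
    import networkx as nx

    G = nx.barabasi_albert_graph(5000, 2)      # scale-free test network
    N = G.number_of_nodes()
    k = int(0.02 * N)                          # remove 2% of the nodes

    G_rand = G.copy()                          # random failures
    G_rand.remove_nodes_from(random.sample(list(G), k))
    G_hub = G.copy()                           # attack on the highest-degree nodes
    G_hub.remove_nodes_from(sorted(G, key=G.degree, reverse=True)[:k])

    for label, H in (("random failures", G_rand), ("hub attack", G_hub)):
        giant = max(nx.connected_components(H), key=len)
        print(f"{label}: giant component keeps {len(giant) / N:.1%} of the nodes")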

The failure propagation model and the branching model (plus the overload model and the sandpile model in the critical regime) captures the behaviour of cascading failures. All these models predict the existence of a critical state in which the avalanche sizes follow a power law.

A network that is robust to both random failures and attacks has a hub and many nodes with the same degree, i.e. a hub-and-spoke topology.

Chapter 9 - Communities

A community is a locally dense connected subgraph in a network. There are weak and strong communities depending on the internal and external number of links of the nodes.

The number of potential partitions in a network grows faster than exponentially with the network size.

The higher a node's degree, the smaller its clustering coefficient.

Randomly wired networks lack an inherent community structure.

Modularity measures the quality of each partition. Modularity optimization offers a novel approach to community detection.

For a given network the partition with maximum modularity corresponds to the optimal community structure.
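
For reference, the modularity of a partition can be written as M = sum over communities c of [ L_c/L - (k_c/(2L))^2 ], where L_c is the number of links inside community c, k_c is the total degree of its nodes, and L is the total number of links in the network (a standard definition; the notation here is mine).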

A node is rarely confined to a single community. However links tend to be community specific.

The development of the fastest and the most accurate community detection tool remains an active arms race.

The community size distribution is typically fat-tailed, indicating the existence of many small communities with a few large ones.

Community finding algorithms run behind many social networks to help discover potential friends, posts of interests and target advertising.

Chapter 10 - Spreading phenomena

 A super-spreader is an individual responsible for a disproportionate number of infections during an epidemic.

Network epidemics offer a model to explain the spread of infectious diseases.

The homogeneous mixing hypothesis (also named fully mixed or mass action approximation) assumes that each individual has the same chance of coming into contact with an infected individual.

Different models capture the dynamics of an epidemic outbreak: Susceptible-Infected (SI), Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR).

In a large scale-free network a virus can reach instantaneously most nodes and even viruses with small spreading rate can persist in the population.
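
In the standard degree-based mean-field picture (a textbook result, stated in my own notation), the epidemic threshold for the spreading rate is lambda_c = <k>/<k^2>. Because <k^2> diverges in large scale-free networks with a degree exponent at or below 3, lambda_c vanishes as the network grows, which is exactly why even viruses with a small spreading rate can persist.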

The human sexual encounter network is a scale-free network.

Bursty interactions are observed in a number of contact processes of relevance for epidemic phenomena.

In assortative networks, high degree nodes tend to link with high degree nodes.
Equally, strong ties tend to be within communities while weak ties are between them.

Several network characteristics can affect the spread of a pathogen in a network (e.g. degree correlations, link weights or a bursty contact pattern).

In a scale-free network, random immunization does not eradicate a disease. Selective immunization targeting hubs helps eradicate the disease.

The friendship paradox: On average the neighbours of a node have a higher degree than the node itself. So, let's immunize neighbours of randomly selected nodes.

Travel restrictions do not decrease the number of infected individuals. They only delay the outbreak, perhaps giving time to expand local vaccinations.

We can use the effective distance (different from the physical distance) to determine the speed of a pathogen.

All in all, a recommendable reference for those willing to get introduced into the Network Science field.

I will be happy to extend this post with comments coming from readers of the "Network Science" book.


Let's network!