Student Notes: 3 Papers on complex network vulnerabilities

This post compiles my notes on three papers on vulnerabilities in complex networks:
- "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado, Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007).

- "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000).

- "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015).

As always, I also add my usual disclaimer when dealing with Student Notes:

The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of these papers.

Why does this Information Security blog touch on the field of Network Science? Because I am convinced that we can apply Network Science learning points to real-life Information Security scenarios.

These notes do not replace the reading of the papers; they are just that: student notes (or fragments) taken from the papers.


On the Paper titled "Multiscale vulnerability of complex networks" by Stefano Boccaletti, Javier Buldu, Regino Criado, Julio Flores, Vito Latora, Javier Pello and Miguel Romance (2007):

- The paper defines the vulnerability of a complex network as the capacity of a graph to maintain its functional performance under random damages or malicious attacks.

- Network malfunctioning could be caused by the deletion of a node and all the links ending in it or by the deletion of one or several links between nodes. This paper focuses on the deletion of links (and not nodes).

- A proper measure of vulnerability should be based on the link betweenness. In general we have to consider the full multiscale sequence of betweenness coefficients: the higher these coefficients, the more vulnerable the network, since more geodesics depend on a few links (a code sketch follows this list).

- A geodesic is the shortest path between two nodes.
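As a rough illustration (my own sketch in Python with networkx, not code from the paper; the function name is mine), the multiscale vulnerability can be computed as the generalized p-mean of the link betweenness coefficients, b_p(G) = ((1/m) * sum over links l of b_l^p)^(1/p), and sweeping p yields the multiscale sequence used to compare networks:

```python
import networkx as nx

def multiscale_vulnerability(G, p=2.0):
    # generalized p-mean of the link betweenness coefficients:
    # b_p(G) = ((1/m) * sum_l b_l**p) ** (1/p)
    betweenness = nx.edge_betweenness_centrality(G)  # fraction of geodesics crossing each link
    m = G.number_of_edges()
    return (sum(b ** p for b in betweenness.values()) / m) ** (1.0 / p)

G = nx.barabasi_albert_graph(50, 2, seed=1)  # a small scale-free test graph
for p in (1.0, 2.0, 4.0, 8.0):
    print(f"b_{p} = {multiscale_vulnerability(G, p):.4f}")
```

At p = 1 the measure reduces to the average link betweenness; larger values of p weight the most loaded links more heavily, which is where networks whose geodesics concentrate on a few links reveal their fragility.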

On the Paper titled "Error and Attack Tolerance of Complex Networks" by Reka Albert, Hawoong Jeong and Albert-Laszlo Barabasi (2000):

- This paper demonstrates that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks called scale-free networks.

- However, error tolerance comes at a high price: these networks are extremely vulnerable to attacks. The removal of the nodes that play the most important role in achieving the network's connectivity can drastically affect functional performance.

- Complex networks (both empirical and theoretical) can be divided into two major classes based on P(k), the probability that a node in the network is connected to k other nodes. In the first class, P(k) peaks at an average value and decays exponentially for large k (an exponential tail). Typical examples of these exponential (homogeneous) networks are the random graph model of Erdos and Renyi and the small-world model of Watts and Strogatz; each node has approximately the same number of links. The second class consists of inhomogeneous networks (scale-free networks), for which P(k) decays as a power law, i.e. P(k) ~ k^(-gamma); they do not have a characteristic scale. Highly connected nodes are statistically significant in scale-free networks.

- Connectivity in a homogeneous network follows a Poisson distribution peaked at ⟨k⟩ and decaying exponentially for k ≫ ⟨k⟩, where ⟨k⟩ is the average degree.

- The scale-free model incorporates two common ingredients in real networks: growth and preferential attachment.

- The diameter d is defined here as the average length of the shortest paths between any two nodes in the network. It describes the interconnectedness of a network: networks with a very large number of nodes can have a very small diameter.

- Error tolerance is studied by measuring the changes in the diameter when a small fraction f of the nodes is removed.

- In exponential networks, the diameter increases monotonically with f: since all nodes have approximately the same number of links, they all contribute equally to the network's diameter.

- In scale-free networks, the diameter remains unchanged under an increasing level of errors. The removal of "small nodes" does not alter the path structure.

- Attack survivability: When the most connected nodes are eliminated, the diameter of the scale-free network increases rapidly.

- In exponential networks, for fractions f > 0.28 the cluster sizes S fall close to 0, a behavior qualitatively similar to a percolation critical point.

- In scale-free networks, the threshold is extremely delayed as long as the removal is not a targeted attack (for targeted attacks, fc = 0.18). A simulation sketch follows this list.
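To make the error/attack asymmetry concrete, here is a minimal simulation sketch (my own illustration with networkx, not code from the paper; the function names are mine). It grows a scale-free network by growth plus preferential attachment (the Barabasi-Albert model), removes a fraction f of nodes either at random (errors) or in decreasing degree order (attacks), and reports the relative size S of the largest connected cluster:

```python
import random
import networkx as nx

def giant_fraction(G):
    # relative size S of the largest connected cluster
    if G.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_fraction(G, f, targeted, seed=0):
    H = G.copy()
    k = int(f * H.number_of_nodes())
    if targeted:
        # attack: delete the most connected nodes first
        victims = [n for n, _ in sorted(H.degree, key=lambda kv: -kv[1])[:k]]
    else:
        # error: delete nodes uniformly at random
        victims = random.Random(seed).sample(list(H.nodes), k)
    H.remove_nodes_from(victims)
    return H

# scale-free network built by growth plus preferential attachment
G = nx.barabasi_albert_graph(2000, 2, seed=7)
for f in (0.05, 0.18, 0.28):
    print(f"f={f:.2f}  errors: S={giant_fraction(remove_fraction(G, f, False)):.2f}"
          f"  attack: S={giant_fraction(remove_fraction(G, f, True)):.2f}")
```

Random removals should leave S close to 1 even at f = 0.28, while targeted removals shatter the network around the f = 0.18 threshold mentioned above.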

On the Paper titled "Information Theory Perspective on Network Robustness" by Tiago A. Schieber, Laura Carpi, Alejandro C. Frery, Osvaldo A. Rosso, Panos M. Pardalos and Martin G. Ravetti (2015):

- This paper proposes a dynamical definition of network robustness based on Information Theory.

- They define a failure as a temporal process over a sequence of time steps. Robustness then measures the dissimilarities between topologies after each time step of the sequence.

- Robustness is the ability of the network to continue performing after failures or attacks.

- The most popular methodology to measure robustness is based on percolation theory and on the size of the biggest connected component. However, depending on the network structure, it is possible to attack a great part of the network while these measures remain blind to the changes.

- This paper proposes to measure network robustness based on the Jensen-Shannon divergence. It quantifies the topological damage at each time step due to failures, without considering the consequences for the dynamical processes operating on the network.

- Robustness is quantified by the distance between a given topology and that same topology after a failure.

- The Jensen-Shannon divergence between two probability distributions P and Q is defined as the Shannon entropy of the average minus the average of the entropies: JSD(P, Q) = H((P + Q)/2) - (H(P) + H(Q))/2, where H denotes the Shannon entropy (a code sketch follows these notes).

- The robustness measure proposed by this paper depends not only on the network topology but also on the sequence of failures over time, aiming to quantify the vulnerability of a given structure under a series of deterministic or stochastic failures.

- The computational cost of updating the probability distribution function (PDF) depends on the link removed. Algorithms such as ANF or HyperANF (based on HyperLogLog counters) offer a fast and precise approach, obtaining very good approximations of the distance probability distribution for graphs with millions of nodes in a few seconds.


- The network's average degree and its minimum and maximum degree are immediately obtained from the degree distribution. The network's efficiency, diameter, average path length, fraction of disconnected pairs of nodes and other distance features can be obtained from the distance distribution.

- The knowledge of critical elements is of great importance to plan strategies either to protect or to efficiently attack networks.

- The problem of finding the best sequence of links to destroy the network can be solved through combinatorial optimization approaches.

- This method can efficiently work with disconnections, as the distance PDF accounts for the fraction of disconnected pairs of nodes.
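Putting these notes together, here is a minimal end-to-end sketch (my own illustration with networkx and numpy; the helper names are hypothetical, not the authors' code). It computes the distance PDF with an extra bin for disconnected pairs, reads the distance-based features off it, and traces robustness as the Jensen-Shannon divergence between the intact topology and the topology after each link failure in a sequence:

```python
import networkx as nx
import numpy as np
from collections import Counter

def distance_pdf(G, n_bins):
    # distribution of shortest-path distances over all ordered node pairs;
    # the last bin collects the disconnected pairs
    n = G.number_of_nodes()
    pairs = n * (n - 1)
    counts = Counter()
    for src, lengths in nx.all_pairs_shortest_path_length(G):
        for dst, d in lengths.items():
            if dst != src:
                counts[d] += 1
    pdf = np.zeros(n_bins + 1)
    for d, c in counts.items():
        pdf[d] = c / pairs
    pdf[n_bins] = max(0.0, 1.0 - pdf[:n_bins].sum())  # disconnected fraction
    return pdf

def shannon(p):
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def jsd(p, q):
    # entropy of the average distribution minus the average of the entropies
    m = (p + q) / 2.0
    return shannon(m) - (shannon(p) + shannon(q)) / 2.0

def robustness_trace(G, link_sequence):
    # topological damage after each failure: JSD between the intact
    # distance PDF and the damaged one (larger divergence = more damage)
    n_bins = G.number_of_nodes()
    base = distance_pdf(G, n_bins)
    H = G.copy()
    trace = []
    for link in link_sequence:
        H.remove_edge(*link)
        trace.append(jsd(base, distance_pdf(H, n_bins)))
    return trace

G = nx.barabasi_albert_graph(100, 2, seed=11)
n = G.number_of_nodes()

# distance-based features read straight off the distance PDF
pdf = distance_pdf(G, n)
ds = np.arange(1, n)   # possible finite distances
finite = pdf[1:n]
print("diameter:", int(ds[finite > 0].max()))
print("avg path length:", (ds * finite).sum() / finite.sum())  # over connected pairs
print("efficiency:", (finite / ds).sum())                      # mean of 1/d over all pairs
print("disconnected pairs:", pdf[n])

# failure sequence: cut three links attached to the biggest hub
hub = max(G.degree, key=lambda kv: kv[1])[0]
attack = [(hub, nbr) for nbr in list(G.neighbors(hub))[:3]]
print("JSD trace:", robustness_trace(G, attack))
```

Following the note above, each damaged topology is compared against the original one, i.e. the distance that a topology is apart from itself after the failures accumulated so far.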

Happy robustness study!

[Image: a network city]