Student Paper Notes: Power-law distributions in empirical data by A. Clauset, C.R. Shalizi and M.E.J. Newman

This time the student notes I post come from the reading of a scientific paper written by Aaron Clauset (Santa Fe Institute), Cosma Rohilla Shalizi (Carnegie Mellon University) and Mark Newman, Professor of Physics at the University of Michigan.

The link to the paper is this one. It is a 43-page paper with 70 references on Power-Law distributions in empirical data. It complements the previous notes that I published on the Network Science book authored by Albert-László Barabási and the student paper notes I published on the structure and function of complex networks by Mark Newman.

This paper deals with power-law distributions and the difficulty of characterising them. The authors present a "principled statistical framework" to deal with power-law behaviour in empirical data. The paper has a considerable mathematical component.

The following points are notes mostly literally extracted from this paper. The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of this paper.

Why does this Information Security blog touch the field of Network Science? I am convinced that we can apply Network Science learning points to our real-life Information Security scenarios.

As always, a little disclaimer: These notes do not replace the reading of the paper, they are just that, some student notes (or fragments) of the paper.

Many empirical quantities cluster around a typical value. Such distributions can be well characterised by their mean and standard deviation (the square root of the variance).

Population of cities, earthquake intensities and power outage sizes are considered to have power-law distributions.

A quantity x obeys a power law if its probability distribution p(x) is proportional to x^(-alpha), where alpha is a constant exponent (also called the scaling parameter).

Alpha typically lies between 2 and 3. Typically, the power law applies only for values greater than some minimum Xmin i.e. the tail of the distribution follows a power law.
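As a quick illustration (my own sketch, not code from the paper), a continuous power law with exponent alpha and lower cut-off x_min can be sampled by inverse-transform sampling, because its CDF inverts in closed form:

```python
import random
import statistics

def sample_power_law(alpha, x_min, n, rng):
    """Inverse-transform sampling for the continuous power law
    p(x) = ((alpha - 1) / x_min) * (x / x_min) ** -alpha, for x >= x_min."""
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

rng = random.Random(42)
alpha, x_min = 2.5, 1.0
xs = sample_power_law(alpha, x_min, 100_000, rng)

# For this distribution the median is x_min * 2 ** (1 / (alpha - 1)),
# so the sample median should land close to it.
theoretical_median = x_min * 2.0 ** (1.0 / (alpha - 1.0))
print(statistics.median(xs), theoretical_median)
```

Note how, even with alpha between 2 and 3, occasional samples are orders of magnitude above the median; that heavy tail is exactly what makes these distributions hard to characterise.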

How can we recognise a power law?

There are two flavours of power-law distributions: Continuous distributions and discrete ones. Formulas for continuous distributions tend to be easier.

In many cases the Cumulative Distribution Function (CDF) of a power-law distributed variable is important. The visual form of the CDF is more robust than that of the Probability Density Function (PDF).

A common way to probe for power-law behaviour is to measure x, construct a histogram representing its frequency distribution and plot the histogram on doubly logarithmic axes. The histogram should then fall approximately on a straight line, with alpha given by the absolute value of the slope. Unfortunately, this method introduces systematic errors. It is a necessary but not a sufficient condition.

We can use maximum likelihood estimators (MLEs) to estimate alpha. We assume that alpha > 1.
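The continuous-case estimator the authors derive is alpha_hat = 1 + n / sum_i ln(x_i / x_min). A minimal sketch (the synthetic-data check via inverse-transform sampling is my own addition, not from the paper):

```python
import math
import random

def mle_alpha(xs, x_min):
    """Continuous maximum likelihood estimator:
    alpha_hat = 1 + n / sum(ln(x_i / x_min)), over all x_i >= x_min."""
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Sanity check on synthetic power-law data with a known exponent.
rng = random.Random(0)
alpha_true, x_min = 2.5, 1.0
xs = [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
      for _ in range(100_000)]
alpha_hat = mle_alpha(xs, x_min)
print(round(alpha_hat, 3))  # should be close to 2.5
```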

It is important to identify the value Xmin. One approach is the Bayesian Information Criterion (BIC). Another method, proposed by Clauset et al., consists of choosing the Xmin that makes the probability distribution of the measured data and the best-fit power-law model as similar as possible. A way to quantify the distance between two probability distributions is the Kolmogorov-Smirnov (KS) statistic: the maximum distance between the CDFs of the data and the fitted model.
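A small sketch of the KS statistic (my own code, assuming the continuous power-law CDF F(x) = 1 - (x/x_min)^(1-alpha)). To select Xmin one would repeat this for every candidate Xmin, refitting alpha each time, and keep the value that minimises the distance D:

```python
import random

def ks_distance(xs, alpha, x_min):
    """Maximum distance between the empirical CDF of the tail data and the
    fitted power-law CDF F(x) = 1 - (x / x_min) ** (1 - alpha)."""
    tail = sorted(x for x in xs if x >= x_min)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        model = 1.0 - (x / x_min) ** (1.0 - alpha)
        # compare against the ECDF just below and just above each data point
        d = max(d, abs(model - i / n), abs(model - (i + 1) / n))
    return d

rng = random.Random(1)
alpha, x_min = 2.5, 1.0
xs = [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
      for _ in range(10_000)]

d_good = ks_distance(xs, 2.5, x_min)  # correct exponent: small distance
d_bad = ks_distance(xs, 3.5, x_min)   # wrong exponent: much larger distance
print(d_good, d_bad)
```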

Other not so predominant methods are the Hill plot and also simply to limit the analysis to the largest observed samples.

Interestingly, regardless of the true distribution from which data was drawn, we can always fit a power law. The question is whether the fit is a good match to the data.

The basic approach is to compare the distance (measured with the KS statistic) between the empirical data and its fitted power law with the analogous distances obtained for synthetic data sets drawn from a true power-law distribution. With large statistical samples we have a better chance of verifying these hypotheses.

Goodness of fit test

The goodness-of-fit test generates a p-value that quantifies the plausibility of the hypothesis. The p-value is defined as the fraction of the synthetic distances that are larger than the empirical distance. If p is large the distance can be attributed to statistical fluctuations; if p is small the model is not a plausible fit.

For each synthetic data set we compute the KS statistic relative to the best-fit power law for that data set, not relative to the original distribution from which the data set was drawn.

Typically, the power law is ruled out if p <= 0.1. It is a necessary but not a sufficient condition. High p-values should be treated with caution when n is small.
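A simplified sketch of the procedure (my own code; the full method in the paper also re-estimates Xmin on every synthetic set, which I skip here by fixing x_min = 1). The empirical data below is deliberately uniform, so the power-law fit should be ruled out:

```python
import math
import random

rng = random.Random(7)
x_min = 1.0

def mle_alpha(xs):
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

def ks_distance(xs, alpha):
    tail = sorted(xs)
    n = len(tail)
    d = 0.0
    for i, x in enumerate(tail):
        model = 1.0 - (x / x_min) ** (1.0 - alpha)
        d = max(d, abs(model - i / n), abs(model - (i + 1) / n))
    return d

def sample_pl(alpha, n):
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

# Empirical data that is clearly NOT power-law distributed (uniform on [1, 2]).
data = [1.0 + rng.random() for _ in range(200)]
alpha_hat = mle_alpha(data)
d_emp = ks_distance(data, alpha_hat)

# Draw synthetic sets from the fitted power law, refit alpha on each,
# and record how often their KS distance exceeds the empirical one.
n_sets = 100
larger = 0
for _ in range(n_sets):
    synth = sample_pl(alpha_hat, len(data))
    if ks_distance(synth, mle_alpha(synth)) > d_emp:
        larger += 1
p_value = larger / n_sets
print(p_value)  # small: the power-law hypothesis is ruled out for uniform data
```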

Let's remember we fit the power-law form to only the part of the distribution above Xmin.

We would also need to rule out the competing exponential or log-normal distributions. We would then act similarly, generating synthetic values from several plausible competing distributions and obtaining the respective p-values. If those p-values for the competing distributions are small, then they are discarded (although not with 100% certainty).

The likelihood ratio test

A test that is easier to implement than the KS distance test for comparing two distributions is the likelihood ratio test. The distribution with the higher likelihood is the better fit (or, if we compute the logarithm R of the ratio, it is positive, negative, or zero in case of a tie). If the associated p-value is small, the sign of R is a reliable indicator of which model is the better fit to the data.
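A minimal sketch of the idea (my own code; the paper actually uses the normalised log-likelihood ratio and a p-value based on its variance, while here I only compute the sign of R), comparing a power-law fit against an exponential fit on synthetic power-law data:

```python
import math
import random

rng = random.Random(3)
x_min = 1.0

# Synthetic data from a true power law with alpha = 2.5.
data = [x_min * (1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(5_000)]
n = len(data)

# Fit both candidate models by maximum likelihood.
alpha_hat = 1.0 + n / sum(math.log(x / x_min) for x in data)
lam_hat = n / sum(x - x_min for x in data)  # exponential fitted on the tail

# Log-likelihoods of the two fitted models.
ll_pl = sum(math.log(alpha_hat - 1.0) - math.log(x_min)
            - alpha_hat * math.log(x / x_min) for x in data)
ll_exp = sum(math.log(lam_hat) - lam_hat * (x - x_min) for x in data)

R = ll_pl - ll_exp  # positive favours the power law, negative the exponential
print(R > 0)
```

The exponential model pays a heavy likelihood penalty for the few very large samples in the tail, which is why R comes out clearly positive here.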

A distribution is nested if it is a subset of the other distribution type. When distributions are nested, the larger family of distributions provides a fit at least as good as the subset one.

The p-value of the likelihood ratio test is effective at identifying cases in which the data are insufficient to make a firm distinction.

Application to real world data
In general, it is extremely difficult to tell the difference between log-normal, stretched exponential and power-law behaviours.

Happy mathematical description!


Student Paper Notes: The structure and function of complex networks by M.E.J. Newman

This time the student notes I post come from the reading of a scientific paper written by Mark Newman, Professor of Physics at the University of Michigan.

The link to the paper is this one. It is a 58-page paper with 419 references on complex networks; that alone should be an enticing intro to suggest reading it. It complements the previous notes that I published on the Network Science book authored by Albert-László Barabási. Compared to the Network Science book, this paper is slightly more condensed.

This paper contains relevant historical references on the field of Network Science and it has also a considerable mathematical component.

The following points are notes mostly literally extracted from this paper. The intent of this post is not to create new content but rather to provide Network Science students with a (literal and non-comprehensive) summary of this paper.

Why does this Information Security blog touch the field of Network Science? I am convinced that we can apply Network Science learning points to our real-life Information Security scenarios.

As always, a little disclaimer: These notes do not replace the reading of the paper, they are just that, some student notes (or fragments) of the paper.


- This paper reviews recent and non-recent work on the structure and function of networked systems.
- Vertices are nodes and edges are links.
- Degree: The number of edges connected to a vertex.
- Field of research: Consideration of large-scale statistical properties of graphs.
- The body of theory of Network Science has three aims: First, to find statistical properties like path lengths and degree distributions that define structure and behaviour of networked systems and to suggest appropriate ways to measure these properties. Second, to propose models of networks and third, to predict the behaviour of networked systems.
- A hyperedge joins more than two vertices together.

Networks in the real world

- An acyclic graph has no closed loops. The WWW is in general cyclic.
- The small-world effect: Most pairs of vertices in most networks seem to be connected by a short path through the network.

Properties of networks

- Network transitivity (or, sometimes, clustering): "the friend of your friend is likely to be your friend".
- The clustering coefficient C is the mean probability that the friend of your friend is also your friend.
- The clustering coefficient C measures the density of triangles in the network.
- The probability that two vertices point to each other in a directed network is called reciprocity. In a directed network edges have a sense.
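A toy illustration of the triangle-based definition of the clustering coefficient (my own sketch, using C = 3 × triangles / connected triples on a graph made of one triangle plus a pendant vertex):

```python
from itertools import combinations

def clustering_coefficient(edges):
    """C = 3 * (number of triangles) / (number of connected triples)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    triangle_corners = 0  # each triangle is counted once per corner (3 times)
    triples = 0           # paths of length two centred on each vertex
    for node, nbrs in adj.items():
        k = len(nbrs)
        triples += k * (k - 1) // 2
        triangle_corners += sum(1 for a, b in combinations(sorted(nbrs), 2)
                                if b in adj[a])
    return triangle_corners / triples

# Toy graph: a triangle (0, 1, 2) with a pendant vertex 3 attached to 2.
C = clustering_coefficient([(0, 1), (1, 2), (2, 0), (2, 3)])
print(C)  # 3 * 1 triangle / 5 triples = 0.6
```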

- In a random network, in which each edge is present or absent with equal probability, the degree distribution is binomial or Poisson in the limit of large graph size.
- Real world networks are rarely random in their degree distributions.
- The degrees of the vertices in most real networks are highly right-skewed, i.e. their degree distributions have a long right tail of values far above the mean.
- Networks with power-law degree distributions are referred to as scale-free (degree distribution).
- The maximum degree of a vertex in a network will in general depend on the size of the network.

- Network resilience: Distance was almost entirely unaffected by random vertex removal. However, when removal targets highest degree vertices, it has devastating effects. An example of this is the Internet.

- Assortative mixing or homophily: selective linking. A classical example of assortative mixing in social networks is mixing by race.
- In some networks, high-degree vertices tend to associate with other high-degree vertices. Most social networks appear to be assortative, other types of networks (biological, technological and information) appear to be disassortative.
- The traditional method for extracting community structure from a network is cluster analysis (also called hierarchical clustering).
- Community structure is a common network property.

- Navigation: Stanley Milgram's small-world experiment. Short paths exist in the network and network members are good at finding them. This is important e.g. for efficient databases structures or better peer-to-peer computer networks.
- Betweenness centrality of a vertex in a network is the number of geodesic paths between other vertices that run through it. Betweenness centrality can also be viewed as a measure of network resilience.

Random graphs

- Poisson random graphs (or Bernoulli graphs) are not adequate to describe some important properties of real-world networks.
- The most important property of a random graph is that it possesses a phase transition, from a low-density phase with few edges, in which all components are small (with an exponential size distribution and finite mean size), to a high-density phase in which an extensive fraction of all vertices are joined together in a single giant component.
- The random graph reproduces well the small-world effect; however, in almost all other respects, the properties of a random graph do not match those of real-world networks.
- A random graph has a Poisson degree distribution, entirely random mixing patterns and no correlation between degrees of adjacent vertices, no community structure and, finally, navigation is not possible using local algorithms.
- The property of real graphs that is simplest to add to random graphs is the non-Poisson degree distributions i.e. the "configuration model". An example of this would be a network with a power-law degree distribution.
- Other examples are: directed graphs, bipartite graphs (which have two types of vertices and edges running only between vertices of unlike types - these ones are sometimes studied using "one-mode" projections).
- An additional random graph model for degree correlation is the exponential random graph. A more specialised model is proposed by Maslov.

Exponential random graphs and Markov graphs

- The only solvable random graph models that currently incorporate transitivity are the bipartite and the community-structured models plus certain dual graph models.
- Progress in understanding transitivity requires different (and new) models.

The small-world model

- A less sophisticated but more tractable model of a network with high transitivity. However, the degree distribution of the small-world model does not match most real-world networks very well.

- A lot of attention has been given to the average geodesic path length of the small-world model.

Models of network growth

- The best studied class of network models aim to explain the origin of highly skewed degree distributions.
- Price probably described the first example of a scale-free network: the network of citations between scientific papers.
- Power laws arise when "the rich get richer" i.e. the amount you get goes up with the amount you already have. Price named that "cumulative advantage". Barabasi and Albert named that "preferential attachment".
- Later in the paper, z denotes the mean degree, while m denotes the total number of edges in the graph.
- The mechanism of cumulative advantage proposed by Price is widely accepted as the explanation for the power-law degree distribution in real-world networks such as the WWW, the citation network and possibly the Internet.
- The difference between the Price model and the Barabasi and Albert model is that in the latter one the edges are undirected, so there is no distinction between in and out degree. The Barabasi and Albert model is simpler and slightly less realistic.
- There is a correlation between the age of vertices and their degrees, with older vertices having higher mean degree.
- Krapivsky and Redner show that there are correlations between the degrees of adjacent vertices in the model.
- The assumption of linear preferential attachment seems to be a reasonable approximation to the truth.
- The real WWW does not present the correlations between age and degree of vertices as found in the Barabasi and Albert model. This is, according to Adamic and Huberman, because the degree of vertices is also a function of their intrinsic worth.
- Bianconi and Barabasi have presented an extension of the Barabasi and Albert model: Each newly appearing vertex is given a "fitness" that represent its attractiveness i.e. its propensity to accrue new links.
- Price's model is intended to be a model of a citation network. Citation networks are really directed and acyclic, and (approximately) all vertices belong to a single component, except for papers that neither cite nor are cited.
- Simple growth model by Callaway et al.: Vertices normally have degree zero when they are first added to the graph. This model does not show preferential attachment so no power-law distributions but exponential.
- Some networks appear to have power-law degree distributions but they do not show preferential attachment e.g. biochemical interaction networks. These networks could grow by copying vertices.

Processes taking place on networks

- Looking at the behaviour of models of physical, biological, social processes going on these networks.
- In Physics, vertices are sites and edges are bonds.
- A percolation process is one in which vertices or edges on a graph are randomly designated either "occupied" or "unoccupied", asking about various properties of the resulting vertex patterns.
- The problem of resilience to random failure of vertices in a network is equivalent to a site percolation process on the network. The number of remaining vertices that can still successfully communicate is precisely the giant component of the corresponding percolation model.
- Networks with power-law degree distributions are highly susceptible to targeted attacks: one only need to remove a small percentage of vertices to destroy the giant component entirely.
- Cascading failures: Watts provided a simple model for cascading failures as a type of percolation. It could be solved using generating function methods similar to those for simple vertex removal.
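The contrast between random failures and targeted attacks can be simulated directly. The sketch below (my own code, using a simple preferential-attachment generator as the scale-free network) removes 10% of the vertices either at random or in decreasing order of degree, and measures the surviving giant component:

```python
import random

def ba_graph(n, m, rng):
    """Preferential-attachment (Barabási-Albert style) edge list sketch."""
    edges, endpoints = [], []
    targets = list(range(m))
    for new in range(m, n):
        for t in targets:
            edges.append((new, t))
            endpoints.extend((new, t))
        chosen = set()
        while len(chosen) < m:  # degree-proportional sampling of next targets
            chosen.add(rng.choice(endpoints))
        targets = list(chosen)
    return edges

def giant_component(n_nodes, edges, removed):
    """Size of the largest connected component after removing some vertices."""
    adj = {v: [] for v in range(n_nodes) if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

rng = random.Random(0)
n = 2000
edges = ba_graph(n, 2, rng)
degree = {v: 0 for v in range(n)}
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

k = n // 10  # remove 10% of the vertices
random_removed = set(rng.sample(range(n), k))
hubs_removed = set(sorted(degree, key=degree.get, reverse=True)[:k])

g_random = giant_component(n, edges, random_removed)
g_targeted = giant_component(n, edges, hubs_removed)
print(g_random, g_targeted)  # the targeted attack shrinks the giant component far more
```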

Epidemiological processes

- The SIR model divides the population into three classes: Susceptible, Infected and Recovered (with permanent illness immunity).
- Diseases do not always spread on scale-free networks.
- Vaccination can be modeled as a site percolation process.
- As networks tend to be particularly vulnerable to the removal of their highest degree vertices, targeted vaccination is expected to be particularly effective.
- It is not always easy to find the highest degree vertices in a social network.
- One is more likely to find high-degree vertices by following edges than by choosing vertices at random.
- Therefore, a population can be immunised by choosing a random person, vaccinating a friend of that person, and then repeating the process.
- The SIS model: an example is computer viruses.
- At least in networks with right-skewed degree distributions, propagation of the disease turns out to be relatively robust against random vaccinations but highly susceptible to vaccination of the highest-degree individuals.

Exhaustive search 

- A page is important if it is linked to by many other pages.

Guided search

- Performs small special-purpose crawls.
- It relies on the assumption that pages containing information on a particular topic tend to be clustered together in local regions of the graph.

Network navigation

- The objective is to design networks structures that make a particular search algorithm perform well.
- The "social distance" is measured by the height of the lowest level in the tree at which they are both connected. In other words, how far one must go up the tree to find the lowest “common ancestor” of the pair.

Phase transition on networks

- E.g. models of opinion formation in social networks.

Other processes on networks

- Voter models, diffusion, genetic regulatory models, etc.

- The study of complex networks is still in its infancy. 

Happy networking!



Student Book Notes: Network Science by Albert-L. Barabasi - A powerful new field

Attending the 2015 summer conferences organised by @cigtr I came across a book authored by Albert-László Barabási titled "Network Science". Actually it was Mathematics Professor Regino Criado who pointed me to Barabási's name.

The book opens minds and new knowledge fields using Mathematics. It is worth reading and studying! Actually all chapters and other resources can be found in the book's site. Thanks to the author for making it freely available under the Creative Commons licence.

I read all ten chapters and highlighted some sentences from each of them. I enumerate some of those highlighted points as if this post were a brief collection of notes on the book, hoping that more than one of my blog's readers will decide to embark on reading the book after going through this introductory post. Network Science students could use this post as a quick (and very initial) cheat sheet.

Happy networking! 

Chapter X - Preface

Understanding networks today is required to understand today's world. This section describes how the textbook is used in a Network Science class.

Chapter 0 - Personal Introduction

This chapter describes how the author got trapped by the beauty and the importance of networks. He already mentions contributions such as Béla Bollobás's on random graphs and the work of Erdős and Rényi. It also talks about the difference between social scientists and graph theorists.

Key introductory statements:

- "A simple model, realying on growth and preferential attachment could explain the power laws spotted on some networks".

- "Scale-free networks proved to be surprisingly resistant to failures but shockingly sensitive to attacks".

Chapter 1 - Intro

- "The interplay between network structure and dynamics affects the robustness of the system".
- In a complex system it is difficult to derive the collective behaviour from the knowledge of the system's components.
- "Most networks are organised by the same principles".
- "The most succesful companies of the 21st Century base their technology and business model on networks".
- Epidemic transmission is one real example of the applicability of this new maths-based science.

Chapter 2 - Graph Theory

- "Graph theory is the mathematical scaffold behind network science".
- A path that goes through all nodes only once is a Hamiltonian path. An Eulerian path, which traverses each link exactly once, "cannot exist on a graph that has more than two nodes with an odd number of links".
- Network parameters: Number of nodes, number of links, directness or undirectness of links.
- The choice of nodes and links is important when describing a network.
- Node degree is the number of links to other nodes.
- Average degree in a network: An important variable to play with.
- In directed networks, we talk about incoming degree and outgoing degree.
- Total number of links is denoted by L.
- Average degree k = 2L/N, where N is the total number of nodes.
- "Degree distribution provides the probability that a randomly selected node in the network has degree k".
- "The number of degree-k nodes can de obtained from the degree distribution as N(k)=Np(k)".
- "The adjancency matrix of an undirected network is symmetric".
- "For weighted networks the elements of the adjancency matrix carry the weight of the link".
- Metcalfe's law states that the value of a network is proportional to the square of the number of its nodes.
- Bipartite networks can be divided into two disjoint sets of nodes such that each link connects a node from one set to a node from the other set.
- The length of a path is the number of links it contains.
- In networks physical distance is replaced by path length.
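Several of these definitions can be checked on a toy network (my own sketch, not from the book): the adjacency matrix of an undirected network is symmetric, node degrees are its row sums, and the average degree equals 2L/N:

```python
# Toy undirected network: N = 4 nodes and L = 4 links.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
N = 4
L = len(edges)

# Build the (symmetric) adjacency matrix of this undirected network.
A = [[0] * N for _ in range(N)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

degrees = [sum(row) for row in A]  # row sums give the node degrees
avg_k = sum(degrees) / N

print(avg_k, 2 * L / N)  # both equal 2.0: <k> = 2L/N
```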

Note: I will not use "" signs in this post anymore. All points are extracted from the mentioned book. Please consider the existence of "" signs i.e. literal or almost literal words coming from the reviewed book in all points. I also informed Albert-László Barabási about the publicacion of this post.

- Distance between nodes changes if the network is directed, i.e. d(A,B) may not be equal to d(B,A).
- Connected and disconnected networks (disconnected if there is at least a pair of nodes with infinite distance).
- A bridge is any link that, if cut, disconnects the network.
- The clustering coefficient measures the network's local link density.
- The maximal distance in a network is the diameter. The breadth-first-search algorithm helps find it.

Chapter 3 - Random networks

- A random network is a collection of N nodes where each node pair is connected with probability p.
- A cocktail party chitchat scenario is an example of a random network.
- The degree distribution of a random network has the form of a Poisson distribution.
- The random network model does not capture the degree distribution of real networks. Nodes in random networks have comparable degrees, forbidding hubs (highly connected nodes).
- We have a giant component if and only if each node has on average more than one link.
- Evolution of a random network in function of the average degree k: Subcritical, critical, supercritical and connected.
- The small world phenomenon implies that the distance between two randomly chosen nodes in a network is short.
- Most real networks are in the supercritical regime.
- Real networks have a much higher clustering coefficient than expected for a random network of similar N and L.
- Real networks are not random.
- The random network model is important in network science. Features of real networks not present in random networks may represent some signature of order.
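A quick numerical check of the chapter's claims (my own sketch, not from the book): in a G(N, p) random network the degree distribution is Poisson-like (variance approximately equal to the mean) and hubs are absent:

```python
import random
from itertools import combinations

def random_graph_degrees(n, p, rng):
    """G(N, p): each node pair is connected independently with probability p."""
    deg = [0] * n
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            deg[u] += 1
            deg[v] += 1
    return deg

rng = random.Random(0)
n, p = 1000, 0.004  # expected average degree (n - 1) * p ~= 4
deg = random_graph_degrees(n, p, rng)

mean_k = sum(deg) / n
var_k = sum((d - mean_k) ** 2 for d in deg) / n
print(mean_k, var_k)  # for a Poisson-like distribution, variance ~= mean
print(max(deg))       # no hubs: the maximum degree stays close to the mean
```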

Chapter 4 - The scale-free property

- Let's remember that in a random network there are no highly connected nodes (hubs).
- The existence of hubs (e.g. in the WWW) is a signature of a network organising principle called the scale-free property.
- The degree distribution of a scale-free network follows a power law, and not a Poisson distribution like in random networks.
- A scale-free network has a large number of small degree nodes, larger than in a random network.
- In a Poisson distribution (random network), most of the nodes have the same amount of links (the size of the largest node grows logarithmically or slower with N, the number of nodes).
- In a power-law distribution (scale-free network) many nodes have only a few links and there are a few hubs with a large number of links (widely different degrees, spanning several orders of magnitude).
- The larger the network, the larger the degree of its biggest hub (it grows polynomially with the network size).
- Random networks have a scale: Nodes have comparable degrees and the average degree serves as the scale of a random network.
- The scale-free property is missing in those networks that limit the number of links that a node can have.
- Ultra-small world property: Distances in a scale-free network are smaller than in an equivalent random network.
- The bigger the hubs, the more effectively they shrink distances between nodes.
- Scale-free networks are ultra-small when the value of the degree exponent is between 2 and 3.
- The configuration model, the degree-preserving randomization and the hidden parameter model can generate networks with a pre-defined degree distribution.
- Erdos-Renyi and Watts-Strogatz described exponentially bounded networks. They lack outliers; most nodes have comparable degrees (e.g. the power grid and highway networks). For these networks, a random network model is a good starting point.
- For networks with fat-tailed degree distributions, the scale-free model offers a better approximation.

Chapter 5 - The Barabasi-Albert model

- In scale-free networks, nodes prefer to link with the most connected nodes (preferential attachment).
- Growth and preferential attachment are responsible, and simultaneously needed, for the emergence of scale-free networks.
- Older nodes have an advantage to become hubs over the time.
- The Barabasi-Albert model generates a scale-free network with degree exponent alpha = 3.
- To date all known models and real systems that are scale-free have preferential attachment.
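A minimal simulation of the model (my own sketch, using the standard trick of sampling uniformly from a list that holds one entry per edge endpoint, which is equivalent to degree-proportional selection):

```python
import random

def barabasi_albert_degrees(n, m, rng):
    """Barabási-Albert sketch: each new node attaches m links to existing
    nodes chosen with probability proportional to their current degree."""
    degree = [0] * n
    endpoints = []          # one entry per edge endpoint => degree-weighted
    targets = list(range(m))
    for new in range(m, n):
        for t in targets:
            degree[new] += 1
            degree[t] += 1
            endpoints.extend((new, t))
        chosen = set()
        while len(chosen) < m:  # m distinct, degree-proportional targets
            chosen.add(rng.choice(endpoints))
        targets = list(chosen)
    return degree

rng = random.Random(0)
deg = barabasi_albert_degrees(5_000, 2, rng)

mean_k = sum(deg) / len(deg)
print(mean_k)    # close to 2 * m = 4
print(max(deg))  # a hub: far above the mean, unlike in a random network
```

Comparing the maximum degree here with the one from the random-network snippet in chapter 3 makes the difference between the two models tangible: comparable average degrees, radically different hubs.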

Chapter 6 - Evolving networks

- The Bianconi-Barabasi model can account for the fact that nodes with different internal characteristics acquire links at different rates.
- The growth rate of a node is determined by its fitness. This model allows us to calculate the dependence of the degree distribution on the fitness distribution.
- Fitness distribution is typically exponentially bounded. That means that fitness differences between different nodes are small. With time these differences are magnified resulting in a power law degree distribution.
- Bose-Einstein condensation: the fittest node grabs a finite fraction of the links, turning into a super hub and creating a hub-and-spoke topology (a rich-get-richer or winner-takes-all phenomenon); the network then loses its scale-free nature.
- In most networks, nodes can disappear.
- As long as the network continues to grow, its scale-free nature can persist.

Chapter 7 - Degree correlation

- A way to go deeper into understanding network structures based on maths.
- In some networks, hubs tend to have ties to other hubs. That is an assortative network. In disassortative networks, hubs avoid each other.
- A network displays degree correlations if the number of links between the high and low-degree nodes is systematically different from what is expected by chance.
- There is a conflict between degree correlation and the scale-free property. Hubs would have to be linked among each other with more than one link.
- Assortative mating reflects the tendency of individuals to date or marry individuals that are similar to them.

Chapter 8 - Network robustness

Once the fraction of removed nodes reaches a critical threshold in a random network, the network abruptly breaks into disconnected components. Percolation theory can be used to describe this transition in random (Erdos-Renyi) networks, i.e. networks whose nodes have comparable degrees.

Real networks show robustness against random failures. Scale-free networks show a greater degree of robustness against random failures. However, an attack that targets a hub can easily destroy a scale-free network. Depending on the network (the WWW, or a disease propagation), this can be bad or good news.

The failure propagation model and the branching model (plus the overload model and the sandpile model in the critical regime) capture the behaviour of cascading failures. All these models predict the existence of a critical state in which the avalanche sizes follow a power law.

A network that is robust to both random failures and attacks has a hub and many nodes with the same degree, i.e. a hub-and-spoke topology.

Chapter 9 - Communities

A community is a locally dense connected subgraph in a network. There are weak and strong communities depending on the internal and external number of links of the nodes.

The number of potential partitions of a network grows faster than exponentially with the network size.

The higher a node's degree, the smaller its clustering coefficient.

Randomly wired networks lack an inherent community structure.

Modularity measures the quality of each partition. Modularity optimization offers a novel approach to community detection.

For a given network the partition with maximum modularity corresponds to the optimal community structure.

A node is rarely confined to a single community. However links tend to be community specific.

The development of the fastest and the most accurate community detection tool remains an active arms race.

The community size distribution is typically fat-tailed, indicating the existence of many small communities with a few large ones.

Community finding algorithms run behind many social networks to help discover potential friends, posts of interests and target advertising.

Chapter 10 - Spreading phenomena

A super-spreader is an individual responsible for a disproportionate number of infections during an epidemic.

Network epidemics offer a model to explain the spread of infectious diseases.

The homogeneous mixing hypothesis (also named fully mixed or mass action approximation) assumes that each individual has the same chance of coming into contact with an infected individual.

Different models capture the dynamics of an epidemic outbreak: the Susceptible-Infected (SI), Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models.
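For the homogeneous-mixing case, the SIR model can be integrated numerically in a few lines (my own sketch with illustrative parameter values, beta = 0.5 and gamma = 0.2, i.e. a basic reproduction number of 2.5):

```python
# Discrete-time Euler integration of the classic homogeneous-mixing SIR model:
# dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
# (S, I, R are population fractions; parameter values are illustrative).
beta, gamma = 0.5, 0.2
S, I, R = 0.99, 0.01, 0.0
dt = 0.1

for _ in range(2000):  # integrate up to t = 200, well past the outbreak
    new_infections = beta * S * I * dt
    new_recoveries = gamma * I * dt
    S, I, R = S - new_infections, I + new_infections - new_recoveries, R + new_recoveries

print(round(S + I + R, 10))  # the three classes always sum to the population
print(R)                     # final epidemic size: the fraction ever infected
```

With these parameters the epidemic burns out on its own, leaving a large recovered fraction and a small pool of never-infected susceptibles.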

In a large scale-free network a virus can reach instantaneously most nodes and even viruses with small spreading rate can persist in the population.

The human sexual encounter network is a scale-free network.

Bursty interactions are observed in a number of contact processes of relevance for epidemic phenomena.

In assortative networks, high-degree nodes tend to link with high-degree nodes.
Likewise, strong ties tend to be within communities while weak ties run between them.

Several network characteristics can affect the spread of a pathogen in a network (e.g. degree correlations, link weights or a bursty contact pattern).

In a scale-free network, random immunization does not eradicate a disease. Selective immunization targeting hubs helps eradicate the disease.

The friendship paradox: On average the neighbours of a node have a higher degree than the node itself. So, let's immunize neighbours of randomly selected nodes.
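The paradox is easy to verify on a toy star network (my own sketch, not from the book), where almost every node's only friend is the hub:

```python
# Star network: hub 0 connected to four leaves. The mean degree is low, but a
# randomly chosen *friend* is usually the hub, so friends have a higher mean degree.
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]

adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

mean_degree = sum(len(nbrs) for nbrs in adj.values()) / len(adj)

# Average, over all nodes, of the mean degree of that node's neighbours.
mean_neighbour_degree = sum(
    sum(len(adj[nb]) for nb in nbrs) / len(nbrs) for nbrs in adj.values()
) / len(adj)

print(mean_degree)            # (4 + 1 + 1 + 1 + 1) / 5 = 1.6
print(mean_neighbour_degree)  # (1 + 4 + 4 + 4 + 4) / 5 = 3.4
```

This is why vaccinating a random friend of a random person beats vaccinating the random person directly: the friend is, on average, better connected.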

Travel restrictions do not decrease the number of infected individuals; they only delay the outbreak, perhaps giving time to expand local vaccinations.

We can use the effective distance (different from the physical distance) to determine the speed of a pathogen.

All in all, a recommendable reference for those willing to get introduced into the Network Science field.

I will be happy to extend this post with comments coming from readers of the "Network Science" book.

Let's network!

Security site to bookmark:

An elegant way to sell security

Every now and then we need to get a chance to slow down our professional tactical everyday pace and think strategically. For those moments, I propose visiting Lares, a boutique-like security company founded by Chris Nickerson and also staffed by Eric M. Smith. Both are reputable security professionals who have greatly contributed to the security community.

Chris Nickerson conducted the famous and irreverent "Exotic Liability" security podcast. Unfortunately, the last available episode already dates back to 2013. Chris is also a regular presenter at many international security conferences, and one of the authors of the Penetration Testing Execution Standard. The number of followers he has on his Twitter account confirms his relevance in the community.

Eric M. Smith has also presented at events such as DefCon 22 in 2014, where, along with Josh Perrymon, he studied the topic of RFID chip security.

Let's go now through some of the sections of the site:

- Lares in action can inspire us to come up with alternative ideas to the traditional way of creating and selling security services. It contains more than a dozen videos, and their presentations, of appearances at conferences like BSides, Troopers or Source Barcelona. The historical and practical approaches they propose on how to implement security are worth thinking about.

As an example, we find almost 80 pages on how to increase the value of traditional security testing (both for vulnerability management and penetration testing). That slide deck is not only fun but also innovative. They use useful concepts such as insider threat assessment, adversary modeling and the continuous implementation of security tests along any technological process.

- It is great to see in their services section that, together with traditional vulnerability assessments and security testing, they also offer business impact analysis as a value added security deliverable.

- There is a social engineering section, labeled "Layer 8 labs". This is an appropriate name, considering the human element as another layer on top of the 7 layers of the OSI communication model. "Layer 8 labs" provides controlled "phishing" campaigns to increase security awareness among employees in companies and organisations.

As a final comment, I would highlight the modern design of this website: it helps underline the valuable security content they provide.

Happy ninja reading!

Adversary modeling

Security site to bookmark:

Sharing information about real threats and real attacks

We human beings live in communities. The threats that may affect our group are an important information element communicated to our peers. This piece of information brings greater preparedness against potential risks. In addition, if those risks do really materialise, a faster and more effective reaction is possible.

Something similar can be seen on the Internet: OpenIOC is an example of this. It proposes an automated way to share information about real threats on the Internet.

According to its homepage, OpenIOC "facilitates the exchange of indicators of compromise ("IOCs") in a computable format" i.e. ready to be processed by information systems such as intrusion detection systems and application layer filtering firewalls.

Each compromise indicator contains three elements:
- First, the metadata, which provide contextual information such as the author of the indicator, the name of the indicator and a brief description.
- Second, references, so you can link the indicator to a particular wave of attacks.
- Third, its definition, which describes its specific infection mechanisms and operation.

A valuable detail of this format is the possibility of using Boolean logic to filter indicators automatically.
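As an illustration of that Boolean filtering idea, here is a small sketch that evaluates an AND/OR indicator tree against the facts observed on a host. This is not the actual OpenIOC XML schema; the dictionary layout and the item names are invented for illustration:

```python
def matches(indicator, observed):
    """Recursively evaluate a Boolean indicator tree.

    indicator: a leaf condition {'item': ..., 'value': ...} or a
               branch {'op': 'AND' | 'OR', 'children': [...]}
    observed:  dict mapping item names to the values seen on a host
    """
    if 'op' in indicator:
        results = (matches(child, observed) for child in indicator['children'])
        return all(results) if indicator['op'] == 'AND' else any(results)
    return observed.get(indicator['item']) == indicator['value']

# A hypothetical indicator: match on a known file hash, OR on a
# process name combined with a registry autorun entry.
ioc = {'op': 'OR', 'children': [
    {'item': 'FileItem/Md5sum', 'value': 'abc123'},
    {'op': 'AND', 'children': [
        {'item': 'ProcessItem/name', 'value': 'evil.exe'},
        {'item': 'RegistryItem/path', 'value': 'HKLM\\Software\\Run\\evil'},
    ]},
]}
```

With this sketch, observing only the process name is not enough to trigger the indicator, since its AND branch also requires the registry entry; the file hash alone, on the other OR branch, is.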

OpenIOC is an extensible XML encoding protocol initially designed for Mandiant security products such as "IOC Editor", a free XML editor for indicators of compromise, and "Redline", a compromise verification tool for Windows installations, also free.

Security incident responders were interested in this initiative and, finally, Mandiant standardised OpenIOC and made it available to the open source community in 2011.

OpenIOC is currently an open initiative. For example, in the OpenIOC Google Groups there is a very active forum where you can get information on how to use this format with log analysis tools like "Splunk" or references to indicator repositories such as

Based on the increasing number of security incidents on the Internet, related information sharing will grow over the coming years, especially among companies with a similar risk profile.

Perhaps a pending task of this project is to implement a non-intrusive compromise detection service for end users outside major corporations.

Happy protection!

You can also read this post in Spanish here.

Fly high!

Book Review: Steve Jobs Hardcover by Walter Isaacson - Lessons for Information Security?

Steve Jobs by Walter Isaacson - Lessons for Information Security?

I went through Mr. Isaacson's Steve Jobs' biography and I would like to share with my information security community some, very personal and biased, learning points, potentially applicable to our industry.

As always, the intent of this post is not to replace the reading of this highly interesting and very well written book by Walter Isaacson.

- The author talks about reality distortion fields: how some people live in them, and how difficult it is for the rest of us mortals to interact with them when we realise that their reality is different from ours.

In information security, many people live within reality distortion fields.

- However, there is a positive side to reality distortion fields, sometimes they become reality if effort, passion and innovation (and luck? and timing?) kick in.

Totally applicable to information security.

- Even introverts need a dense and effective network to succeed in business.

Frequently forgotten in Infosec.

- Successful business people do not necessarily make successful parents.
- Successful business people do not necessarily make ethical colleagues.

- Selling abilities are key in every social aspect of our lives (business, social, family).
- Some things definitely cannot be patented.
- Money is a hygiene factor in motivation.

- You can shift working passions during a long period of time (11 years passed since he was ousted from Apple until he came back).

Also very applicable (but hardly applied) to infosec people.

- Charismatic people tend to have more troublesome lives than peacefully smooth characters.

Just go and attend any security conference, mingle with people around, and you will confirm this statement.

- The way a company is run can benefit hugely from innovation. We can innovate in the way we manage a company, or a team.

Totally applicable to information security.

- Brutal honesty is a management approach, up to the actors (the sender and recipient) to accept it or not.

- Marketing is key - so key that every Wednesday afternoon, week after week, the CEO would meet his marketing people.

Marketing, the forgotten element in most Information Security units.

- Do you control, end to end, the experience your customer or user goes through when using your product or your service?

Innovative element that security practitioners can apply from day one when they design their deliverables.

- Internal product cannibalisation? Go for it - Otherwise other companies and products will cannibalise yours.

Applicable to our information security products? Certainly. Let's do it.

- Persistence: key for success. Sometimes we need to devote years for something to succeed.

Is our industry persistent enough? Nice topic for a discussion.

- The second-product effect: if your company does not know why its first product was a success, then it will fail with its second product.

Have it in mind when expanding your security portfolio.

- Electronic communication is fine, but if you want to trigger and foster innovation, make physical, face-to-face communication happen.

A piece of wisdom here for our industry, in which we overuse non-physical communication channels.

- Do not mention the ideas you have for a new product before you launch it... or someone else will be faster than you.

Already applied in our industry ;-)

- Privacy and running a publicly traded company sometimes create conflicts.

Difficult to accept sometimes, but is privacy already (or soon to be) gone?

- Sometimes a product launch is a failure and later on it gets transformed into a historic breakthrough, especially if you use powerful marketing to let people know how they can use it.

Again, a link to smart marketing that in our industry still does not exist.

- Sometimes, going through serious health problems does not make rough characters softer.

- A feature of Apple's culture: "accountability is strictly ensured".

Tough but effective.

- One of the next revolutions to come: textbooks and education. They have really not changed much in many years.

Are we still on time in terms of securing the coming new learning experience?

- The clash of two different technology philosophies, open versus closed in terms of where software runs, and the different approaches Microsoft and Apple followed.

- Things can really change (although most of the time you need time, passion and patience). E.g. in 2000 Apple was worth only a twentieth of Microsoft's market value; in 2010 Apple surpassed Microsoft in market value.

- In business, you choose: either devote the time to start dying or devote the time to start being reborn.

- And, last but not least, when death comes to visit us, we all strive to get that peace of mind that was difficult to find during our lives with our people (family, relatives, colleagues, etc.).

Happy innovation!

Being fast!

Book Review: IT Risk: Turning Business Threats into Competitive Advantage by Westerman and Hunter

This post provides a very personal review on the book titled "IT Risk: Turning Business Threats into Competitive Advantage" by George Westerman (Research Scientist @ MIT) and Richard Hunter (a Gartner fellow) published in 2007.

A book mainly for executives and for those requiring some foundations on why and how information security, also known as IT risk, can be implemented in an organisation today.

It is encouraging to see how some of the learning points present in this book already appeared in this blog in 2006.

As always, an important disclaimer, this review does not replace the reading of the book. On the contrary, it motivates to read it. Thanks to the authors for their research work.

In 9 chapters, the authors provide simple but powerful ideas on how IT risk is really linked to business risk and how both risks can be managed.

The first chapter states how IT has become central in organisations today. However, IT risk is still seen as a matter for IT departments alone. This traditional way of seeing things is proven to be partial and not fully future-proof. The authors remind us how decision makers in organisations need to be aware of the business risks created by IT risks.

IT risks need to be factored into business and business risks need to be factored into IT. The notion of perceived risk is also mentioned, and how attention and resources are mostly given to those perceived risks (and not to all existing and real risks).

The authors finalise this chapter with the 4 A's model i.e. risks can be broken down into 4 categories: availability, access, accuracy and agility.

The second chapter presents three disciplines as required ingredients to manage IT risks:

- A well-structured foundation of IT assets.
- A well designed and executed risk governance process.
- A risk-aware culture (different from a risk-averse culture).  

The third chapter mentions the traditional (but powerful) idea that investing in prevention is less expensive than spending on reaction.

They present the IT risk pyramid, with availability at the bottom, then access, then accuracy and finally agility at the vertex.

The fourth chapter expresses the need to simplify the first-mentioned ingredient, i.e. the IT foundation. This shows the importance of IT and enterprise architecture. When will a business service be migrated to a simplified foundation? When the business risk of keeping it in the legacy system is greater than its business value.

The fifth chapter proposes a traditional risk governance process using concepts such as impact and probability. Threats, however, are barely mentioned. They also touch upon the importance of engaging decision makers in these governance processes.

The sixth chapter talks about a risk-aware culture and how this starts at the top of the organisation. A risk-averse culture does not really avoid risks. It just neglects them. Two useful concepts are mentioned: Segment different audiences and communicate regularly.

The seventh chapter includes some checklists that would guide the risk manager throughout the implementation of these ingredients.

The eighth chapter provides some keys on the future and the ninth chapter summarises the main learning points.

All in all, a mostly traditional (with some innovative elements) reference that can help our readers to navigate through the business ocean.

Happy risky reading!  

The sky is the limit!

Book review: The regulatory craft - Controlling Risks, Solving Problems and Managing Compliance by Malcolm K. Sparrow

Are you working in a policy-setting team and, at the same time, would you really like to see problems occurring in reality being solved?
How do you normally answer the typical dilemma between theoretical governance and effective policy-implementation in reality?

If the answer to the first question is "yes" and the answer to the second question is "hardly", this book by Malcolm K. Sparrow is for you. Also if the answer to the second question was "I am doing fine but I am running out of ideas", then this is your book to read, too.

It has 4 parts, about 330 pages and a myriad of real examples coming from the author's broad experience.

Part 1 sets the scene describing current regulatory practices and the widely used process improvement approach: a useful way to achieve incremental (but non-major) improvements in policy implementation.

Part 2 proposes an innovative way to achieve bigger gains than those obtained with process improvement. The author calls it "problem solving" i.e. the capacity to focus on a specific non-compliant situation and to make it compliant. In other words, the possibility to solve real problems, one after the other.

Once a problem is listed, identified and selected, it needs to be precisely defined and, just as importantly, the problem-solving team needs to set up a way to measure impact.

Only when these initial steps are thoroughly reflected upon and mature can one start designing the measures to be taken, implementing them and monitoring them. It seems like common sense; however, this approach is often not followed.

Together with this problem-solving approach, the author mentions different systems that need to be in place: a problem nomination and selection system, a resource and responsibility assignment system, an oversight and review system and finally, three additional systems: a reporting system, a support system and a learning and reward system.

Clearly problem-solving is not just an ad-hoc alternative to process improvement. It is a thoroughly thought through approach to manage compliance while providing value to the community.

With regard to reactive, proactive and preventive techniques, the author states that the three of them are valid and useful. He adds a valuable ingredient: using risk control as the meter to decide which technique to use in each moment.

Part 3 of the book is precisely devoted to risk control, the innovative element that would empower compliance agencies in their quest towards excellence. The author makes risk management pivotal to applying problem-solving techniques.

Risk management methodologies (like the ones also mentioned here and there) and strategic thinking would then become working tools to guide our daily work and to make it effective, regardless of the compliance field we are working on.

Worth mentioning are three risks whose treatment is, according to the author, somehow challenging: "Invisible risks", "risks involving opponents" and "risks for which prevention is paramount".

Certainly, in a risk-centered world, the task of assessing current and new risks, mostly known as intelligence gathering, becomes crucial for success.

The last part of the book, Part 4, provides examples and summarises proposals.

All in all, a reference for those responsible for turning a compliance agency into a success story!

Happy problem solving and risk control!

Solving height problems

Security sites to bookmark: and

Belgium: Waffles... and security

The professional activities that we undertake within our company, be it our own shop or our employer, can, and should, benefit from all our other security related activities. The two Security Sites I recommend to visit confirm this. Both are written by well known names in the European Information Security community: The Belgians Didier Stevens and Xavier Mertens.

Like securityandrisk, Didier Stevens created his blog in 2006. Since then, he regularly publishes very practical and technical security articles. Didier pays special attention to network security. The quality of the blog invites you to visit his own business' site, especially dedicated to pdf and "shellcode" analysis. His company site is accessible from

Xavier Mertens, with his unique "Belgian balloon fish" avatar, present both on his Twitter account and his blog, has been on the Internet since 2003. He publishes, together with his presented papers, very detailed summaries of the security conferences he attends. This is an opportunity to know what happened and what was said there. As in Didier's case, Xavier also links to his own security company, specialized in log management and security testing.

Both authors discuss security issues that are useful in our everyday job. From the pages of their blogs, both link to some security tools: Didier proposes his own Microsoft Windows process-related utilities and Xavier introduces evasion tools such as "PingTunnel" and "Dns2tcp".

Information security is still a working field in which many breakthroughs, ideas and new developments come from "informal" channels such as blogs and security conferences rather than through formal academic degrees and scientific journals in the field. These two sites confirm this trend.

In short, visiting these two personal sites from well-known Belgian security experts gives us ideas for our professional life while nicely introducing the security companies they have created.

You can also read this post in Spanish here.

Happy Belgian security reading!

Making bridges

Security site to bookmark:

Controversial but worth-reading

In the past, guilds regulated and controlled the practice of a craft. This site, an initiative from a volunteer crew, aims to protect the information security profession from intruders.

In an almost irreverent way, they publish news that is charged with irony (for example, a security company that promises 100% security with its products and that, interestingly enough, is successfully attacked and compromised) and references to security snake oil sellers.

All this controversial content is organized into twelve sections that reveal:

- Companies that sell products containing malware before they even reach customers.

- Legal threats to security researchers who have found a security vulnerability.

- Failures in automated software update processes.

- Charlatans, be it individuals or companies, who introduce themselves as security gurus. The subsection dealing with companies can be controversial.

- Plagiarism: A long list of authors and books that turn out to be copies of previous publications.

- Firms offering security services or products that have been attacked and compromised themselves.

- Security companies that send unsolicited e-mail ("spam") to prospective customers.

- Security incidents involving Internet-related companies, such as the case of Stratfor, a company that suffered loss of confidential information in 2011.

- Invented or manipulated security statistics.

- Examples of how media confuse their audience with unconfirmed security news. This section stopped being updated a long time ago: the authors could not keep up with the pace at which such pieces of news appeared.

- Vulnerabilities and data leakage items from initiatives like the Open Source Vulnerability Database and a site that I already recommended in this blog.

This site definitely shows the great influence of mass media and acts as a whistleblower against charlatans. It is an Internet-based antidote to identify attempted fraud in information security. Therefore, before buying a security product or book, have a look at their pages.

Happy errata reading!

You can also read this post in Spanish here.

Dark night

The hedgehog's dilemma - Story of business and IT Security

In Summer 2011 a new security-related conference series was started in Madrid. Or, better said, a technology-based risk management and innovation event. I had the privilege to give the opening talk on the links between security and business to a wide and wise audience. I titled my talk the hedgehog's dilemma.

This post summarises the main points of the talk. They are still applicable (they are even more applicable now than in 2011!). I am happy to start a discussion thread on your views on these macro topics. They are far from the command line, but they certainly steer our professional future working with, and at, corporations.

Using wikipedia's description of the dilemma, "hedgehogs all seek to become close to one another to share heat during cold weather but they must remain apart, however, as they cannot avoid hurting one another with their sharp spines".

Security and business suffer exactly from the same dilemma. The objective will be to change the paradigm from hedgehogs to penguins. Penguins can stay together. Actually, they benefit from staying together every winter.

I proposed two dimensions to work with, a methodological dimension and a human one. Let's describe both of them:

From hedgehogs to penguins: A method
Firstly, we need to use traditional risk management concepts such as vulnerability, threat, risk, impact & probability and benefit to risk ratio, all of them explained in the first chapters of IT Securiteers.
Secondly, I propose the use of 1 + 3 + 1 filters. As a security professional, pay attention to elements that pass these five filters:
1. They are real and detected threats. This is why monitoring is key.
2. They cause a high impact to the organisation and they mean a low risk for the attacker.
3. Their treatment does not require massive resources and does not decrease customer usability. This filter is a tough one to respect. However, it constitutes a mid-term survival guarantee for infosec professionals at work.
4. They bring a positive reputation to the security team. This one is also challenging but worth considering in these times in which we need to market everything.
5. They comply with legal and governance requirements and they satisfy senior management's requests. Please do not forget the last part of this fifth filter.
Certainly this is easier said than done. Three additional tactical tips:
a. Plan not more than 40% of your security resources. They need to be available to deal with a great deal of unknown (and ad-hoc/unplanned) activities.
b. Follow a "baby-step" planning approach and celebrate (and sell!) every successful delta.
c. A useful way to structure your work is considering these layers: networks, systems, applications, data and identities. (Thanks to Jess Garcia for this point).

From hedgehogs to penguins: A passion
Security teams certainly need passionate and technically savvy security professionals. Together with this statement, I would add that we need a multidisciplinary team. Non-IT-savvy and non-security-savvy players also have their place in a security team. These new players can come from fields as distinct as marketing, sociology, statistics, journalism, law and economics.
The number of interactions that some of the security team members need to have with the rest of the organisation is high. Public relations and marketing are essential for the previously presented 5-filter method to succeed.
How many active security teams do you know that already have this innovative composition? Probably not many. Two references to go deeper into this subject of security management: try IT Security Management and Secure IT up!. I would be happy to present them to you if required.
These multidisciplinary teams will live the motto "share, respect and mobilise":
- Share the information you work with with your colleagues.
- Respect any personal and academic background from any player in the team.
- Mobilise your peers i.e. trigger their curiosity for your field of expertise.
Two models help grow cohesive teams. Both models aim to find a balance in every team member:
- Find the sweet balanced spot among the skills they offer, their passions and market demands.
- Find the sweet balanced spot among their individual, social and professional dimensions.

Multiple leadership and continuous learning
Security teams need more than one leader. Preferably three. At least two that get along well and complement each other. The role of the leader will be to look after team members while delivering the mandated value to the organisation.
In a two-dimensional graph, draw where your team members are in terms of valuable security skills and level of motivation. Those scoring high on both axes constitute your team's critical mass. The role of the leader will be to grow that critical mass, i.e. encouraging everyone to sharpen their skills and letting motivation grow inside them. Imagine a KPI on this!

Important ingredient not to overlook
Security team leaders need to be outward-looking and multidisciplinary themselves. They need to act as security ambassadors, especially with their reporting lines and customers. They'd better double-check periodically whether they still have senior management support.

Security innovation: Five provocations
Some food for thought. Call it crazy ideas, call it security innovation:
- Conduct effective guerrilla marketing out of your CERT team.
- Design accurately (and smartly) the experience that a visitor to your facilities and a customer of your security services would leave with. End to end.
- Identify social connectors in your organisations and make them be your security marketing ambassadors, even if they do it unconsciously.
- Make the most of the "power of free" e.g. distribute free encrypted memory devices.
- Be constructive. Remember, life will always find a way!

Happy finding!

Finding a way

Discussion on intuition. Daniel Kahneman's lecture.

Google talks
This post is a recommendation to watch the lecture that Daniel Kahneman gave in the @Google Presents talks. It was a discussion on human intuition, somehow explaining why we magically know things without knowing we know them. We information security practitioners will find many points to link to.

Modest disclaimer: This post by no means tries to replace the video of the talk. It just provides a very subjective (and telegraphic) summary of some of the points touched upon.

Some references, such as "Sources of Power: How People Make Decisions" by Gary Klein or "Blink" by Malcolm Gladwell, propose that judgement biases are not so negative and are actually a source of power. Daniel Kahneman is certainly very sceptical of the power of expert intuition. For example, how would intuition play out in medicine? When can you trust intuition?

Intuition and judgement

Kahneman distinguishes between two modes of thinking: thoughts that come to mind (system 1) and judgements (system 2). Examples of the first are things that happen to us, things truly perceived, impressions and intuitive thinking; these thoughts are intuitive and automatic. The second type requires effort: judgements are deliberate and effortful.

An empirical exercise would be the following: we fall into the temptation of eating chocolate more easily if we have to keep a 7-digit number in our head. Our self-control is impaired while we are doing another activity. This clearly means that it takes effort to control our impulses.

Then in minute 12 he starts to talk about skills. For example, driving is a skill. In a skill things begin to happen automatically. That is the reason why we can drive and talk or why braking is completely automatic. However, some skills are completely non-intuitive e.g. driving on skids requires different and non-intuitive skills.

An interesting point is that emotional reactions to a certain perception are automatic in system 1, but system 1 is also where skills are located. He then mentions that Herbert Simon (Nobel laureate) defined intuition as simply recognition.

When can you trust intuition?
If there are clear rules in the environment, especially ones that give immediate feedback, we acquire those rules, e.g. we all identify erratic behaviour when driving.

Human beings are also very good at reinforced practice, e.g. anesthesiologists get very good feedback very quickly; radiologists are the opposite case, with slow and poorer feedback, i.e. it is more difficult for them to develop intuitive expertise.

In a sentence, intuitive expertise is not possible in chaotic scenarios, where the world is not predictable. Formulas beat humans when there is some predictability, but they perform poorly in low-predictability environments.

We frequently have intuitions that are false yet indistinguishable from expert intuitions. How can we tell expert intuition apart?

A book by Joshua Foer titled "Moonwalking with Einstein" states that memory is superb at remembering routes through space but poor at remembering a list. Our mind is set to think about agents (which have traits and behaviours); however, we are not good at remembering sentences with abstract subjects.

Getting influenced by the environment
Posters we can see and read close to us influence our behaviour. When people are exposed to a threatening word, they move back - the symbolic threat has somehow a real effect.

If we see two unrelated words together, like "banana" and "vomit", we will think about vomit when we see a banana. In effect, we saw two words and we made a story e.g. the banana made us vomit, our associative machinery tries to find a cause.

You make a disgust face, you experience disgust. You make a smiling face, you are more likely to think that things are funny. Place a pencil in your mouth and you will think cartoons are funnier.

By partially activating ideas, e.g. by whispering words, the threshold to feel emotions related to those ideas is lowered, and all this happens without us knowing it consciously. It is a way of preparing ourselves.

Associative memory is a repository of knowledge. We try to suppress ambiguity, making ambiguous stimuli coherent.

It takes us very little time to create a norm. Our reasoning flows along causal lines, and this happens intuitively. The coherence that we experience can be turned into a judgement of probability. However, people have confidence in intuitions that are not essentially true. We use a system that classifies things as normal or abnormal, and does so very quickly. Speed is key for our brain.

Substitution: The dates experiment
Two questions: How happy are you? And how many dates did you have last month?
In that order, the correlation is zero. In the reverse order, the correlation is 0.66. This is an example of substitution: the emotion triggered by the dating question reigns when answering the happiness question.

Subjective confidence
There is a real demand for overconfidence, but it is not the way to get real and valuable information. Confidence is not a good diagnostic for trusting somebody.
The wise way would be to ask what the environment is like and whether they had the opportunity to learn its regularities.

Daniel Kahneman is not really optimistic about our being able to train system 1. This is why, e.g., the advertising industry addresses system 1 (emotions, not judgements); facial characteristics of political leaders (which one looks more confident?) predict 70% of elections. See reference

(minute 56) When people are exposed to the idea of money (e.g. the symbol of a dollar), they show selfishness and lack of solidarity.

Nice things
We need to create an environment that will remind people of nice things (and not money e.g.).

The connection between self-control and the general activation of system 2 is an important personality characteristic (e.g. the marshmallow test in children predicts whether they will do better when they are 20).

However, most intelligence tests we have are only for system 2.

It is hard work for system 2 to overturn what system 1 tells us. Have that in mind when preparing security awareness sessions, or when running a lessons-learned exercise on why some security awareness sessions were not effective!

Happy system 1 and system 2 security!

The knowledge house