Book Review: How to be a modern scientist by @jtleek i.e. Jeffrey Leek

When getting introduced to the Data Science world, one needs to build the right frame around the topic. This is usually done via a set of straight-to-the-point books that I will be summarising in this blog.

The first book I start with is written by Jeffrey Leek. It is not a Data Science book per se but rather an introductory set of tips on how to do science today.

The title of the book is "How to be a modern scientist", which you can get here. In fact, the series of posts that starts with this one is a consequence of reading this book. It is a way to acknowledge value in a twofold manner: first, I praise the book and congratulate the author and, second, I share with the community a biased version of the value they could obtain by reading it. These two processes are also present in the scientific community today, together with more traditional activities such as reading and, certainly, writing scientific papers.

Let me try to share with you the main learning points I collected from this book. As always, here goes my personal disclaimer: reading this very personal and non-comprehensive summary by no means replaces reading the book it refers to; on the contrary, this post is an invitation to read the entire work.

Paper writing and publishing
There are currently three elements in modern science: what you can write, what you can code and the data you can share (the data your investigation is based on).
The four parts that, according to the author, a scientific paper consists of are well chosen: a set of methodologies, a description of the data, a set of results and, finally, a set of claims.
A key point is that your paper should tell a story. That is why the author talks about "selecting your plot" for your paper, i.e. you start writing the paper once you have an answer to your question.
These chapters distinguish between posting a preprint of a paper (for example on arxiv.org) and submitting the paper to a peer-reviewed journal. For junior scientists, the mix the author mentions of using a preprint server and a closed-access journal is quite suitable.


Peer review and data sharing
The author proposes some elegant ways to review papers carefully and in a timely manner, and also mentions the use of blogs, for example, to start sharing a serious and constructive review.
Regarding data sharing, his suggestion is to use a public repository that will remain accessible over time, such as figshare.com.


Scientific blogging and coding
A way to market your papers can be via blogging. The three recommended platforms are blogger.com, medium.com and wordpress.org.
The author also reminds us that the Internet is a medium in which controversy flourishes.
In terms of code, the suggested platforms for general code are github.com and bitbucket.org.


Social media in science
A useful way to promote both your own work and the work of others.


Teaching in science
Post your lectures on the Internet (be aware of any non-disclosure agreements with the university or educational institution you teach at). Share videos of your lectures and, if resources allow it, create your own online course.


Books and internal scientific communication
Three platforms are suggested: leanpub.com, gitbook.com and Amazon Kindle Direct Publishing.

Regarding internal communication, slack.com is one of the proposed tools to keep teams in sync.


Scientific talks and reading scientific papers, credit and career planning and online identity
These are the last sections of the book: some hints on preparing scientific talks, reading papers constructively and, very importantly, giving credit to all those community members who have helped you out, either by writing something you use or by creating frameworks you rely on. A key suggestion is to use as many related metrics as possible in your CV and in your presentations.

Finally, the book ends with some useful (and common-sense-based) tips on career planning and online identity.

Thanks to the author of How to be a modern scientist



Happy revealing!

Economics Book review: Bad Samaritans by Ha-Joon Chang - Reality vs hearsay - Similar in Infosec?

I am convinced that Information Security professionals can benefit from reading not only Information Security books like this one or that one, but also books that shed some light on key business areas: the areas to which, ultimately, Information Security and IT Security provide services. This is why I propose a series of book reviews outside the Information Security realm. Economics is certainly one of those areas. Understanding key points of Economics would enable security professionals to understand and assess system and data criticality, and to better aim at providing added value to their customers.
 
On this occasion I present some learning points extracted from the book titled Bad Samaritans by Ha-Joon Chang. Certainly this list does not replace reading the book; I do recommend a careful reading of it.

However, for those with little time, maybe these points can be of help to you:

In two thoughts:

- There is a tendency in rich countries to ask developing countries to follow economic policies that are, on many occasions, the opposite of what the rich countries did to get where they are in economic terms.
- Care, reflection, attention to detail and a sense of fairness should be applied in this politically-driven field named economics.

In more than two thoughts:

On Chapter 0

- South Korea improved enormously as a country in about 50 years, a leap comparable to Haiti becoming Switzerland. This happened thanks to a mix of economic policies that, during most of those years, from the 1960s to today, could by no means be considered free-trade based.
- This economic development was not only linked to periods of democracy.
- The Korean government was extra careful in controlling imports and their influence on the national economy.
- Intellectual property piracy has played an important role in countries that did protect their industries.
- This chapter argues clearly against neo-liberal economics.
- It is interesting to see that today's rich countries used protectionism and subsidies to reach their current state.
- All this has been summarised as "climbing and kicking away the ladder".
- So, it seems that a careful, selective and gradual opening of countries' economies is key to make progress in this matter.

On Chapter 1

- Fewer than 50 years ago, it was unthinkable to see Japan as a high-quality car maker country.
- Democracy in Hong Kong only started in 1994, three years before the handover to China and 152 years after the start of British rule.
- Before free trade, there was protectionism.
- Globalisation was not always hand in hand with free trade.
- Interesting thought: Something purely driven by politics is avoidable.
- The role of the IMF, the World Bank and the WTO is biased towards the benefit of the rich countries.

On Chapter 2

- How did rich countries become rich? In a nutshell, by protecting their markets.
- How did rich countries protect their markets? Basically by:
  • Applying tariff rates.
  • Keeping industries in their local markets
  • Supporting local industries via governmental decisions (and budget).
  • Keeping primary commodities production in the colonies.
In a way, while reading this chapter, one can grasp how, throughout history, economic theories have confused the limiting factors present during a specific period of time with the new limiting factors that appear after a specific economic policy has been applied in terms of development or industrialisation, or even after a country has benefited from a specific technological breakthrough.

On Chapter 3

Very succinctly, this chapter suggests the need to find the right pace and timeline for a country to adopt free trade or, even, the right balance between protected and free trade. Done very quickly, it could mean a lack of growth. Done very slowly, it could mean losing growth opportunities.

On Chapter 4

Today's rich countries regulated foreign direct investment when they were at the receiving end. Foreign direct investment needs to be regulated.


On Chapter 5

There is a fine line in deciding whether (and which) specific enterprises need to be owned by the State, and when and to whom they should be sold.
 

On Chapter 6

Equally, it is also very complex to get the right balance on Intellectual Property Rights and when ideas need to be protected by patents.

On Chapter 7

Economics is driven by politics. This is the reason why a balanced and prudent government is so decisive.

On Chapter 8

The trouble with corruption and how it damages the economy. Interestingly, economic prosperity and democracy are not so inevitably linked.



On Chapter 9

Culture in all countries can evolve with the right political measures.


On a Final Chapter      

There was at least one example in history when the "Bad Samaritans" (as the book calls them) behaved as "Good Samaritans": the Marshall Plan.

Enjoy the reading!

Enjoy the future!






















Papers on Blockchain and Bitcoin: Student notes

A lot of time went into this post. The reader will find it useful to get an initial idea of both concepts: Bitcoin and the blockchain.

As usual, a disclaimer note: This summary is not comprehensive and it reflects mostly literal extracts from the mentioned papers and article.

Let's start summarising a classic paper, the paper on bitcoin, written by Satoshi Nakamoto.

Paper 1
Bitcoin: A Peer-to-Peer Electronic Cash System by Satoshi Nakamoto

This paper appeared in 2008. The first Bitcoins were exchanged in 2009. By June 2011 there were around 10,000 users and 6.5 million Bitcoins [Info coming from this paper].

Bitcoins (BTC) are generated at a predictable rate. The eventual total number of BTC will be 21 million [Info coming from this paper].

The paper proposes a peer-to-peer version of electronic cash. The novelty here is that there is no need for a trusted third party to tackle the double-spending challenge. The key to the functioning of this peer-to-peer network is that the majority of CPU power is not controlled by an attacker.

The proposal is to replace the need for trust, an element that makes transaction costs higher, with a cryptographic (and distributed) proof. More specifically, via a peer-to-peer distributed timestamp server that generates computational proof of the chronological order of transactions.

Interestingly, an electronic coin is a chain of digital signatures. The main challenge in an electronic payment system is double spending avoidance. A digital mint would solve this, however this means that the entire scheme would depend on the mint.

The participants of the peer-to-peer network form a collective consensus regarding the validity of this transaction by appending it to the public history of previously agreed transactions (the blockchain) using a hashing function and their keypair. A transaction can have multiple inputs and multiple outputs [Info coming from this paper].

The only way to confirm the absence of a transaction is to be aware of all transactions. Without the role of a central party, this translates into two requirements:
- All transactions must be publicly announced.
- All market participants should agree on a single payment history. In practical terms, this means the majority of market participants.

The first technical element required by this electronic payment proposal is a timestamp server. Each timestamp includes all previous timestamps. A timestamp consists of a published hash of a block of items.

How do we consider the distributed nature of the system in the case of the timestamp servers? The authors (or author) propose a proof-of-work system. In this case, the proof-of-work means one CPU-one vote. The majority decision is represented by the longest chain. This longest chain will have the greatest proof of work effort invested in it.

Technically, the proof-of-work involves scanning for a value such that, when hashed, the hash begins with a required number of zero bits. The average work required is exponential in the number of zero bits, while the result can be verified by executing a single hash.
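As a rough illustration of this idea, here is a minimal Python sketch of such a proof-of-work loop (a toy version: real Bitcoin double-hashes a structured block header and compares it against a 256-bit difficulty target):

```python
import hashlib


def proof_of_work(data: bytes, zero_bits: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) starts with `zero_bits` zero bits."""
    target = 1 << (256 - zero_bits)  # hashes below this value have the required leading zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify(data: bytes, nonce: int, zero_bits: int) -> bool:
    """Verification costs a single hash, as the paper notes."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - zero_bits))


if __name__ == "__main__":
    nonce = proof_of_work(b"block of transactions", zero_bits=16)
    print(nonce, verify(b"block of transactions", nonce, 16))
```

Raising `zero_bits` by one doubles the expected work, which is the exponential cost the paper refers to.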

The reader can start grasping how CPU-intensive this electronic payment system is.

Nodes always consider the longest chain to be the correct one and they will keep working on extending it.

Incentives come both from the creation of new coins and from transaction fees. The first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. Potential attackers will find it much more beneficial to devote their CPU cycles and electricity to creating new coins than to redoing existing blocks and trying to gather enough consensus for their own chain.

Transactions are hashed in a Merkle Tree. This makes payment verification possible without running a full network node. This verification works as long as honest nodes control the network.
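A minimal sketch of how a Merkle root over a list of transactions could be computed (simplified on purpose: Bitcoin actually uses double SHA-256 over a specific serialisation, which is glossed over here):

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(transactions: list[bytes]) -> bytes:
    """Hash pairs of nodes level by level until a single root remains."""
    if not transactions:
        raise ValueError("empty transaction list")
    level = [sha256(tx) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


if __name__ == "__main__":
    txs = [b"alice->bob:1", b"bob->carol:2", b"carol->dave:3"]
    print(merkle_root(txs).hex())
```

A lightweight client only needs the root (kept in the block header) plus one branch of hashes to check that a given transaction is included.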

The authors state that there is never the need to extract a complete standalone copy of a transaction history.

As all transactions need to be published, the way to obtain privacy in this system is to de-couple identities from public keys. However, privacy is only partially guaranteed. An additional recommendation is to use a new keypair for each new transaction.

An attacker can only try to change one of their own transactions to take back money that they recently spent.




Paper 2
An Analysis of Anonymity in the Bitcoin System by Fergal Reid and Martin Harrigan

This paper provides further input (it was probably the first paper to do so) on the topic we mentioned earlier in this post: Bitcoin and the limits of user anonymity.

The decentralised nature of the BTC system and the lack of a central authority brings along the need to make all transaction history publicly available.

Users in Bitcoin are identified by public keys. Bitcoin maps a public key to a user only in the user's own node, and it allows users to issue as many public keys as they wish. Users can also make use of third-party mixers (or laundries).

The interesting element of this paper is the description of the topological structure of two networks derived from Bitcoin's public transaction history: The transaction network and the user network. This is doable thanks to the transaction history being publicly available. After joining Bitcoin's peer-to-peer network, a client can freely request the entire history of Bitcoin transactions. This enables the possibility of performing passive identification analysis.

The authors of this paper studied BTC transactions from January 2009 to July 2011: 1,019,486 transactions and 12,530,564 public keys.

The authors of this paper were not aware of any network structure studies of electronic currencies. However, such a study was done for a physical currency based on gift certificates, named Tomamae-cho, that existed in Japan for 3 months in 2004-2005.

The flow of that currency showed that the cumulative degree distribution followed a power-law distribution and that the network showed small-world properties (high average clustering coefficient and low average path length).

Many papers point out how difficult it is to maintain anonymity in networks in which user behaviour data is available. The main postulate of the authors of this paper is that Bitcoin does not anonymise user activity.

The transaction network
It represents the flow of BTCs between transactions over time. Each node represents a transaction and each directed edge represents an output of the transaction corresponding to the source node that is an input to the transaction corresponding to the target node. Each edge includes a timestamp and a BTC value.

There is no preferential attachment in this network.

The user network
It represents the flow of BTCs between users over time. Each node represents a user and each directed edge represents an input-output pair of a single transaction. Each directed edge also includes a timestamp and a BTC value.

As a user can use many different public keys, the authors of the paper construct an ancillary network in which each vertex represents a public key. They connect nodes with undirected edges, where each edge joins a pair of public keys that are both inputs to the same transaction and are therefore controlled by the same user.

The contraction of public keys into users generates a network that is a proxy for the social network of BTC users.
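A hedged sketch of that contraction step: public keys that appear together as inputs of the same transaction are merged into a single user with a union-find structure (the paper's exact heuristics are not reproduced here, and the key names are invented for the example):

```python
class UnionFind:
    """Minimal union-find used to merge public keys into users."""

    def __init__(self):
        self.parent = {}

    def find(self, key):
        self.parent.setdefault(key, key)
        while self.parent[key] != key:
            self.parent[key] = self.parent[self.parent[key]]  # path halving
            key = self.parent[key]
        return key

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            self.parent[root_b] = root_a


# Each transaction is represented here only by the list of its input public keys.
transactions = [
    ["pkA", "pkB"],  # pkA and pkB co-sign one transaction -> same user
    ["pkB", "pkC"],  # transitively, pkC belongs to that user too
    ["pkD"],
]

uf = UnionFind()
for inputs in transactions:
    for pk in inputs[1:]:
        uf.union(inputs[0], pk)

users = {}
for inputs in transactions:
    for pk in inputs:
        users.setdefault(uf.find(pk), set()).add(pk)
print(users)  # two users: {pkA, pkB, pkC} and {pkD}
```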

Disassembling anonymity
A first source for decreasing anonymity consists of integrating off-network information. Some BTC-related organisations relate public keys to personally identifiable information. Some BTC users voluntarily disclose their public keys in forums.

Bitcoin public keys are strings of about 33 characters starting with the digit one.


A second source of information is IP addressing. Unless users rely on anonymising proxy technology such as Tor, it is reasonably safe to assume that the first IP address announcing a transaction is its source.

A third source is based on egocentric analysis and visualisation, e.g. WikiLeaks published its public key to request donations. The analysis of transactions having that particular public key as destination can also provide input on identities.

A fourth source is context discovery, e.g. identifying nodes that correspond to BTC brokers.

These techniques help to investigate BTC thefts. For example, a very quick transfer of BTCs between public keys (most of them not yet seen in the network of past transactions) can be an indication on which to base a theft hypothesis.

There are other analysis paths involving tainted BTCs, order books from BTC exchanges, client implementations, time analysis and the like.

Mitigation strategies
The official BTC client could be patched to prevent the linking of public keys with user information; a service using dummy public keys could be implemented (certainly, this would increase transaction fees); and the BTC protocol itself could even be modified to allow for BTC mixing at the protocol level.

For the time being, the authors of this paper state that physical cash payments still represent a competitive and anonymous payment system.

The final statement from the authors of this paper: "Strong anonymity is not a prominent design goal of the BTC system".




Paper 3
Bitcoin: Economics, Technology and Governance by Rainer Boehme, Nicolas Christin, Benjamin Edelman and Tyler Moore

This paper defines Bitcoin (BTC) as an online communication protocol facilitating the use of a virtual currency. It states that BTC is the first widely adopted mechanism to provide absolute scarcity of a money supply. Inflation has no place in this system.

Public keys serve as account numbers. Every new transaction published to the BTC network is periodically grouped in a block of recent transactions. A new block is added to the chain of blocks every ten minutes.

In some cases, a transaction batch will be added to the block chain but then a few minutes later it will be altered because a majority of miners reached a different solution.

When listing a transaction, the buyer and the seller can also offer to pay a "transaction fee", normally 0.0001 which is a bonus payment to whatever miner solves the computationally difficult puzzle that verifies the transaction.

The paper reviews four key categories of intermediaries: Currency exchanges, digital wallet services, mixers and mining pools.

Currency exchanges
They exchange BTCs for traditional currencies or other virtual currencies. Most operate double auctions with bids and asks and charge a commission (from 0.2 to 2 percent). Today BTC resembles a payment platform more than a real currency.

There are significant regulatory requirements (including expensive certification fees) to establish an exchange. In addition, exchanges require considerable security measures, so their number is relatively limited.

Digital wallet services
They are data files that include BTC accounts, recorded transactions and the keys necessary to spend or transfer the stored value. In practice, digital wallet services tend to increase centralisation (and online availability, which comes with high security requirements).

The loss of a private key, if it is not backed up, means losing the ability to trade with the BTCs it controls (i.e. those it can digitally sign for).

The entire blockchain reached 30GB in March 2015.

Mixers
Mixers ensure that timing does not yield clues about money flows. They let users pool sets of transactions in unpredictable combinations. Mixers charge 1 to 3 percent of the amount sent. Mixer protocols are usually not public.


Mining pools
BTCs are created when a miner solves a mathematical puzzle. Mining pools now combine resources from numerous miners. Oversized mining pools threaten the decentralisation that underpins BTC's trustworthiness.

Uses of Bitcoin
At first sight, it seems that illicit activities use BTC given its openness and distributed nature. Every Bitcoin transaction must be copied into all future versions of the block chain. Updating the block chain entails an undesirable delay, making BTC too slow for many in-person retail payments.

Some scientists stress the importance of BTC for its ability to create a decentralised record of almost anything.

Risks in BTC
Market risk, due to the fluctuation in the exchange rate between BTC and other currencies. There is also the shallow market problem: a person quickly trading a large amount would affect the market price.
Counterparty risk: Of the exchanges that closed (either due to a security breach or to low-volume business), 46% of them did not reimburse their customers after shutting down.

The BTC system offers no possibility to undo a transaction, thus creating transaction risk (and affecting end-consumer protection).

There is certainly some operational risk coming from the technical infrastructure and the already mentioned 51 percent attack.

Finally, BTC also faces privacy, legal and regulatory risks.

Crime
Three types: BTC-specific crime, BTC-facilitated crime and money laundering.

Regulation
The authors suggest that longstanding reporting requirements can provide a level of compliance for virtual currencies similar to what has been achieved for traditional currencies. However, they recommend considering regulation in the broader context of a global market for virtual currency services.

Social science lab
Interestingly, most users treat their bitcoin investments as speculative assets rather than as means of payment.

Incentives
A so far theoretical concern: Larger blocks are less likely to win a block race than smaller ones.

Privacy and anonymity
Some authors claim that almost half of BTC users can be identified.

An open question posed by the authors
What happens if the BTC economy grows faster than the supply of bitcoins?

A final thought by the authors of this paper: BTC may be able to accommodate a community of experimentation built on its foundations.

Paper 4
Bitcoin-NG: A scalable blockchain protocol by Ittay Eyal, Adem Efe Gencer, Emin Gun Sirer, and Robert van Renesse (Cornell University)

This paper proposes a new blockchain protocol designed to scale. Original bitcoin-derived blockchain protocols have inherent scalability limits. To improve efficiency, one has to trade off throughput for latency. BTC currently targets a conservative 10-minute slot between blocks, yielding 10 minute expected latencies for transactions to be encoded in the blockchain.

Bitcoin-NG achieves a performance improvement by decoupling Bitcoin's blockchain operation into two planes: leader election and transaction serialisation.

Some generic descriptions of the blockchain protocol
An output is spent if it is the input of another transaction. A client owns x Bitcoins at time t if the aggregate of unspent outputs to its address is x. The miners commit the transactions into a global append-only log called the blockchain.

Blockchain
The blockchain records transactions in units of blocks. A valid block contains a solution to a cryptopuzzle involving the hash of the previous block, the hash (the Merkle root) of the transactions in the current block, which have to be valid and a special transaction (the coinbase) crediting the miner with the reward for solving the cryptopuzzle. The cryptopuzzle is a double hash of the block header whose result has to be smaller than a set value. The difficulty of the problem, set by this value, is dynamically adjusted such that blocks are generated at an average rate of one every ten minutes.

Bitcoin-NG
It is a blockchain protocol that serialises transactions allowing for better latency and bandwidth than BTC.

The protocol divides into time epochs. In each epoch, a single leader is in charge of serialising state machine transitions. To facilitate state propagation, leaders generate blocks. The protocol introduces two types of blocks: key blocks for leader election and microblocks that contain the ledger entries.

Leader election is already taking place in BTC. But in BTC the leader is in charge of serialising history, making the entire duration of time between leader elections a long system freeze. Leader election in BTC-NG is forward-looking and ensures that the system is able to continually process transactions.

Resilience
Bitcoin-NG is resilient to selfish mining against attackers with less than 1/4 of the mining power.

Bitcoin-NG shows that it is possible to improve the scalability of blockchain protocols to the point where the network diameter limits consensus latency and the individual node processing power is the throughput bottleneck.  


Paper 5
A Protocol for Interledger Payments by Stefan Thomas and Evan Schwartz

This paper deals with the complexity of moving money between different payment systems. The authors of the paper propose a way to connect different blockchain implementations. It uses ledger-provided escrow (conditional locking of funds) to allow secure payments through untrusted connectors.

This is a protocol for secure interledger payments across an arbitrary chain of ledgers and connectors. It uses ledger-provided escrow based on cryptographic conditions to remove the need to trust connectors between different ledgers. Payments can be as fast and cheap as the participating ledgers and connectors allow and transaction details are private to their participants.

The focus of this summary is not the deep description of this protocol but the introduction of the BAR (Byzantine, Altruistic, Rational) model.

Byzantine actors may deviate from the protocol for any reason, ranging from technical failure to deliberate attempts to harm other parties or simply impede the protocol.

Altruistic actors follow the protocol exactly.

Rational actors are self-interested and will follow or deviate from the protocol to maximize their short and long-term benefits.

The authors of the paper assume that all actors in the payment are either Rational or Byzantine. Any participant in a payment may attempt to overload or defraud any other actors involved. Thus, escrow is needed to make secure interledger payments.

This protocol proposes two working modes: The atomic mode and the universal mode.

In the atomic mode, transfers are coordinated by a group of notaries that serve as the source of truth regarding the success or failure of the payment. The atomic mode only guarantees atomicity when notaries N act honestly. Rational actors can be incentivised to participate with a fee.

The universal mode relies on the incentives of rational participants to eliminate the need for external coordination.



Paper 6
A Next-Generation Smart Contract and Decentralized Application Platform from Ethereum's GitHub repository

This white paper presents a blockchain implementation that is an alternative to BTC and, from the outset, more generic. It presents blockchain technology as a tool for distributed consensus, covering not only cryptocurrencies but also financial instruments, non-fungible assets such as domain names and any other digital asset controlled by a script, i.e. a piece of code implementing arbitrary rules (e.g. smart contracts).

Ethereum provides a blockchain with a built-in fully fledged Turing-complete programming language.

A recap on BTC
 
As already mentioned in the summary of Paper 1 in this post, BTC is a decentralised currency managing ownership through public key cryptography with a consensus algorithm named "proof of work". It achieves two main goals: it allows nodes in the network to collectively agree on the state of the BTC ledger and it allows free entry into the consensus process. How does it achieve this last point? By replacing the need for a central register with an economic barrier.

The ledger of a cryptocurrency can be thought of as a state transition system. The "state" in BTC is the collection of all coins (unspent transaction outputs, UTXO) and their owners.
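A very simplified sketch of that state transition idea: the state is a set of unspent outputs, and applying a transaction consumes some of them and creates new ones (signatures, scripts and fees are deliberately omitted, and real UTXOs are identified by a transaction id and an output index rather than by owner and amount):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Output:
    owner: str   # in reality, a public key or script
    amount: int  # value in integer units


def apply_transaction(utxo: set, inputs: list, outputs: list) -> set:
    """APPLY(state, tx) -> new state, or raise an error (toy version of the idea described above)."""
    for spent in inputs:
        if spent not in utxo:
            raise ValueError("input is not an unspent output (possible double spend)")
    if sum(o.amount for o in inputs) < sum(o.amount for o in outputs):
        raise ValueError("outputs exceed inputs")
    return (utxo - set(inputs)) | set(outputs)


state = {Output("alice", 50)}
state = apply_transaction(state,
                          inputs=[Output("alice", 50)],
                          outputs=[Output("bob", 30), Output("alice", 20)])
print(state)
```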

BTC's decentralised consensus process requires nodes in the network to continuously attempt to produce packages of transactions called blocks. The network is intended to create a block every ten minutes. Each block contains a timestamp, a nonce, a hash of the previous block and a list of all transactions that have taken place since the previous block.

Requirement for the "proof of work": The double SHA256 hash of every block - a 256-bit number - must be less than a dynamically adjusted target (e.g. 2 to the power of 187).

The miner of every block is entitled to include a transaction giving themselves 25 BTC out of nowhere.

A malicious attacker will target the order of transactions, which is not protected by cryptography.

The rule is that in a fork the longest blockchain prevails. In order for an attacker to make his blockchain the longest, he would need to have more computational power than the rest of the network (51% attack).

Merkle Trees
A Merkle Tree is a type of binary tree in which each node is the hash of its two children, so hashes propagate upwards. This way, a client that downloads only the header of a block can tell whether the block has been tampered with.

A "simplified payment verification" protocol allows for light nodes to exist. They download only block headers and branches related to their transactions.

Alternative blockchain applications


Namecoin: A decentralised name registration database.
Colored coins and metacoins: A customised digital currency on top of BTC.


Basic scripting
UTXOs in BTC can also be owned by a script expressed in a simple stack-based programming language. However, this language has some drawbacks:

- Lack of Turing completeness. Loops are not supported.
- Value-blindness: UTXO are all or nothing.
- No opportunity to consider multi-stage contracts.
- UTXOs are blockchain-blind.

Ethereum builds an alternative framework with a built-in Turing-complete programming language.

Ethereum
An Ethereum account contains four fields: The nonce (a counter that guarantees that each transaction can only be processed once), the ether balance, the contract code and the account's storage.
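The four account fields map naturally onto a small data structure; here is an illustrative sketch (field names and types are simplified, not Ethereum's actual encoding):

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    """Illustrative sketch of the four fields the white paper lists for an Ethereum account."""
    nonce: int = 0     # counter ensuring each transaction is processed only once
    balance: int = 0   # ether balance (denominated in wei in the real protocol)
    code: bytes = b""  # contract code; empty for externally owned accounts
    storage: dict = field(default_factory=dict)  # the account's key-value storage

    @property
    def is_contract(self) -> bool:
        return len(self.code) > 0


alice = Account(balance=10**18)                # externally owned account, controlled by a private key
token = Account(code=b"\x60\x00", storage={})  # contract account, controlled by its code
print(alice.is_contract, token.is_contract)
```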

Ether is the crypto-fuel of Ethereum. Externally owned accounts are controlled by private keys and contract accounts are controlled by their contract code.

Contracts are autonomous agents living inside the Ethereum execution environment. Contracts have the ability to send messages to other contracts. A message is a transaction produced by a contract. A transaction refers to the signed data package that stores a message to be sent from an externally owned account. Each transaction sets a limit on how many computational steps of code execution it can use.

Ethereum is also based on blockchain. Ethereum blocks contain a copy of both the transaction and the most recent state.

Ethereum applications
Token systems, financial derivatives (financial contracts mostly require reference to an external price ticker), identity and reputation systems, decentralised file storage and decentralised autonomous organisations.

Other potential uses are saving wallets, a decentralised data feed, smart multisignature escrow, cloud computing, peer-to-peer gambling and prediction markets.

GHOST
Blockchains with fast confirmation times suffer from reduced security due to blocks taking a long time to propagate through the network. Ethereum implements a simplified version of GHOST (Greedy Heaviest Observed Subtree) which only goes down seven levels.

Currency issuance
Ether is released in a currency sale at the price of 1000-2000 ether per BTC. Ether has an endowment pool and a permanently growing linear supply.

The linear supply reduces the risk of an excessive wealth concentration and gives users a fair chance to acquire ether.

Mining
BTC mining is no longer a decentralised and egalitarian task. It requires high investments. Most BTC miners rely on a centralised mining pool to provide block headers. Ethereum will use a mining algorithm where miners are required to fetch random data from the state. This white paper states that this model is untested.
Ethereum full nodes need to store just the state instead of the entire blockchain history. Every miner will be forced to be a full node, creating a lower bound on the number of full nodes, and an intermediate state tree root will be included in the blockchain after each transaction is processed.

The question now is ... will it work?




Article 1
Technology: Banks Seek the Key to Blockchain by Jane Wild, Martin Arnold and Philip Stafford - FT.com 

This FT.com article on blockchain can be found here. The authors mention an internal blockchain implementation and remind us that a blockchain is a shared database technology that connects consumers and suppliers, creating online networks with no need for middlemen or a central authority. Applications are endless and supporters claim that trust is created by the participating parties.

The authors of this article mention the use of blockchain, also named distributed ledger, as a new back-office implementation and even for new governmental applications such as land registers.

There are two types of blockchains in terms of accessibility: invitation-only (private) and public (open). UBS and Microsoft are working with blockchain start-up Ethereum (running  an open source technology). Other banks are going the private blockchain way.

The authors of this article also mention that this technology faces key challenges such as robustness, security and regulation.

The ledger of BTC already weighs more than 45 GB.






Using Networks To Make Predictions - A lecture (3 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

The first lecture introduced the concept of networks. The second lecture talked about network characteristics (centrality, degree, transitivity, homophily and  modularity). Let's continue with the third lecture. You can find it here. This time on the impact of network science.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

Dynamics in networks
- For example, how does a rumor spread in a network?
- This aspect is much more controversial than the point touched in lectures 1 and 2.
- An example: Citation networks (e.g. the network of legal opinions or the network of scientific papers).
- "Price observed that the distribution of the number of citations a paper gets follows a power law or Pareto distribution - a fat-tailed distribution in which most papers get few citations and a few get many".
- This power law is somewhat surprising.

Power laws
- In comparison to a normal distribution, the power law shows that there are some nodes with a number of links that is several orders of magnitude higher. This does not happen in normal distributions.
- Examples of cases that follow a Pareto law (power law) are word counts in books, web hits, wealth distribution, family names, city populations, etc.
- Power law - the 80/20 rule. E.g. "the top 20% own 86% of the wealth. 10% of the cities have 60% of the people. 75% of people have surnames in the top 1%."
- Power laws are a heavily studied area in complex systems (a small sampling sketch of the 80/20 rule follows this list).
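To get a feel for the 80/20 flavour of a Pareto distribution, one can sample from it and check what share of the total the top 20% holds. A rough Python sketch (the shape parameter value is illustrative):

```python
import random

random.seed(1)

# Sample "wealth" values from a Pareto distribution with shape parameter alpha.
alpha = 1.16  # a value often quoted as giving roughly the 80/20 split
wealth = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)

top_20_percent = wealth[: len(wealth) // 5]
share = sum(top_20_percent) / sum(wealth)
print(f"Top 20% hold {share:.0%} of the total")  # roughly 80% for this alpha
```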

Where do power laws come from? Preferential attachment
- The importance of getting an early lead, e.g. with an excellent product or by good marketing.
- A plausible theory is preferential attachment. Interestingly enough, this theory ignores the content of the papers; it only uses the number of links the nodes have (a small simulation follows this list).
- First mover advantage: in citations, if you are one of the first ones writing on a topic, your paper will be cited anyway, regardless of its content. Those papers have the early lead in that specific field.
- How many citations you get depends on how many you already have.
- In conclusion, according to this theory, it is much more effective to write a mediocre paper on tomorrow's field than a superb paper in today's field.
- The long tail effect: a small number of nodes with lots of connections.
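A minimal simulation of the preferential attachment mechanism (a sketch in the spirit of the Price / Barabási-Albert model, with details simplified): each new node links to one existing node chosen with probability proportional to its current degree.

```python
import random
from collections import Counter

random.seed(42)


def preferential_attachment(n_nodes: int) -> Counter:
    """Grow a network one node at a time; each newcomer links to an existing node
    chosen with probability proportional to that node's current degree."""
    degrees = Counter({0: 1, 1: 1})  # start from a single edge between nodes 0 and 1
    attachment_pool = [0, 1]         # each node appears once per link it has
    for new_node in range(2, n_nodes):
        target = random.choice(attachment_pool)  # degree-proportional choice
        degrees[new_node] += 1
        degrees[target] += 1
        attachment_pool.extend([new_node, target])
    return degrees


degrees = preferential_attachment(10_000)
print("max degree:", max(degrees.values()))
print("median degree:", sorted(degrees.values())[len(degrees) // 2])
# The maximum ends up orders of magnitude above the median: a fat-tailed distribution.
```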

The spread of a disease over a network
- Percolation model: in a specific network, I colour some of the edges and, with those, I obtain a different network derived from my initial network.
- How does the structure of the network influence the spread of a disease?
- Degree is the number of connections you have.
- Hubs are extremely effective at passing diseases along.
- What about if we vaccinate hubs? Targeted vaccination.
- Herd immunisation.
- Targeted attacks are much more effective (clear link with information security)
- We can use the network itself to find the hubs.
- People who should be vaccinated are the most mentioned friends (see the sketch below).
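A hedged sketch of that "vaccinate the most mentioned friends" idea, sometimes called acquaintance immunisation, using networkx on a hypothetical scale-free toy graph (the lecture itself does not prescribe any code):

```python
import random

import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(1000, 2, seed=0)  # a scale-free toy network


def average_degree(nodes):
    return sum(G.degree(n) for n in nodes) / len(nodes)


# Strategy 1: vaccinate randomly chosen nodes.
random_picks = random.sample(list(G.nodes), 20)

# Strategy 2: pick random nodes, but vaccinate a random *friend* of each one.
# Friends of randomly chosen people tend to be hubs.
friend_picks = [random.choice(list(G.neighbors(n)))
                for n in random.sample(list(G.nodes), 20)]

print("average degree of random picks:", average_degree(random_picks))
print("average degree of friend picks:", average_degree(friend_picks))
# The second average is typically much higher: we found hubs without a global map of the network.
```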

Network robustness
- Can we tell that a network is robust by looking at its structure? Let's go back to the concept of homophily (mentioned in Part 2 or 3 of this series of lectures).
- Homophily by degree: Party people hanging out with party people (positive correlation coefficient in social networks- high degree nodes connect with high degree nodes).
- You get a very dense core and very clean borders. Social networks are then very robust networks. This is exactly the opposite of what we would like in terms of disease spread.
- Social networks are very robust and easy to vaccinate against diseases.
- The Internet, however, is fragile. Its high-degree nodes connect with low-degree nodes: the densely connected nodes link to scarcely connected ones, and the high-degree nodes are spread out all over the network. Such networks are not so robust; they are fragile. If you knock down the nodes with high degree, you knock down the network very quickly.
- Number of connections (x axis) is the degree.
- The crucial factor in the spread of disease is airplanes.

Future directions
- Great slide: this is a very, very new field. "We need to
- Improve the measurement of networks.
- Understand how networks change over time.
- Understand how changing a network can change its performance, and perhaps improve it.
- Get better at predicting network phenomena.
- Predict how society will react or evolve based on social networks.
- Prevent disease outbreaks before they happen. 
- And..?"
- Sometimes you engineer a network and sometimes it works!


Networking city

Book Review: Intentional Risk Management Through Complex Networks Analysis - Innovation for Infosec

This post provides a non-comprehensive summary of a multi-author book published in 2015 titled "Intentional Risk Management Through Complex Networks Analysis".
I recommend this book to those looking for real science-based Information Security innovations. This statement is not a forced marketing slogan. It is a reality.
The authors of this book are, in alphabetical order, Victor Chapela, Regino Criado, Santiago Moral and Miguel Romance.

In this post I present some of the interesting points proposed by the authors. The ideas mentioned here are coming from the book. Certainly this summary is a clear invitation to read the book, digest its innovative proposals and start innovating in this demanding field of IT Security.

Chapter 1. Intentional Risk and Cyber-Security: A Motivating Introduction

The authors start distinguishing between Static Risk and Dynamic Risk. Static Risk is opportunistic risk (e.g. identity theft). Dynamic Risk is directed intentional risk that attempts to use potentially existing but unauthorised paths (e.g. using a vulnerability).

Static Risk is based on the probability that a user with authorised access to a specific application abuses that access for personal gain. This risk can be deterred by reducing anonymity.

In Dynamic Risk the attacker tries to reach the most valuable node in the least number of hops, via authorised or unauthorised accesses.

Currently the main driver for a cyber-attack is the expected profit for the attacker. The book also links Intentionality Management with Game Theory, specifically with the stability analysis of John Nash's equilibrium. The book uses Complex Network Theory (both in terms of structure and dynamics) to provide a physical and logical structure of where the game is played.

The authors consider intentionality as the backbone for cyber-risk management. They mention a figure, coming from a security provider, of around USD 400 billion as the latest annual cost of cyber-crime.

The authors make a distinction between:
- Accidental risk management, a field in which there is a cause that leads to an effect and attacks are prevented mostly with redundancy (e.g. in data centres) and
- Intentional risk management, in which we have to analyse the end goal of the attackers.

To prevent these attacks we can:

- Reduce the value of the asset.
- Increase the risk the attacker runs.
- Increase the cost for the attacker.

Traditionally, risk management methodologies are based on an actuarial approach, using the typical probability x impact formula, the probability being based on the observed frequency of past events.

We need to assess which assets are the most valuable assets for the attackers.

Using network theory, whose foundations can also be found in this blog in summaries posted in October 2015, November 2015, December 2015, January 2016, February 2016 and March 2016, the more connected a node is (or the more accessible a computer system is), the greater the risk of it being hacked.

A key point proposed by this book: Calculated risk values should be intrinsic to the attributes of the network and require no expert estimates. The authors break down attackers' expected profit into these three elements:

- Expected income i.e. the value for them.
- The expense they run (depending on the accessibility both via a technical user access or a non-technical user access).
- Risk to the attacker (related to anonymity and some deterrent legal, economic and social consequences).

An attacker prefers busy applications that are highly accessible, admin access privileges and critical remote execution vulnerabilities. The main driver for attackers is value for them. Attackers in the dynamic risk arena are not deterred by anonymity.

The authors relate anonymity to the number of users who have access to the same application.




Chapter 2. Mathematical Foundations: Complex Networks and Graphs (A Review)
Complex networks model the structure and non-linear dynamics of discrete complex systems.

The authors mention the difference between holism and reductionism. Reductionism works if the system is linear. Complexity depends on the degrees of freedom that a system has and whether linearity is present.

Networks are composed of vertices and edges. In complex networks small changes may have global consequences.

Euler walk: A path between two nodes for which every link appears exactly once. The degree of a node is the number of links the node shares.

If the number of nodes with odd degree is greater than 2, then no Euler walk exists.

If the number of nodes with odd degree equals 0, then there are Euler walks starting from any node.

If the number of nodes with odd degree equals 2, then an Euler walk exists only if it starts at one of the two odd nodes (and ends at the other).

A graph is the mathematical representation of a network. The adjacency matrix of a graph is a way to determine the graph completely. A node with a low degree is weakly connected. A regular network is a network whose nodes have exactly the same degree.
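A tiny numpy sketch of the point that the adjacency matrix determines the graph completely and that degrees fall out of it directly (the matrix below is a made-up toy example):

```python
import numpy as np

# Adjacency matrix of a small undirected network with 4 nodes:
# node 0 is connected to 1, 2 and 3; node 3 is connected only to node 0.
A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])

degrees = A.sum(axis=1)     # row sums give each node's degree
print("degrees:", degrees)  # [3 2 2 1] -> node 3 is weakly connected
print("regular network?", len(set(degrees)) == 1)         # regular = all degrees equal
print("symmetric (undirected)?", bool((A == A.T).all()))  # a directed network need not be symmetric
```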

In a directed network the adjacency matrix is not necessarily symmetric. Paths do not allow repetition of vertices while walks do. A tree is a connected graph in which any two vertices are connected by exactly one path.

Structural vulnerability: How does the removal of a finite number of links and/or nodes affect the topology of a network?

Two nodes with a common neighbour are likely to connect to each other. The clustering coefficient measures it.

The eigenvector centrality of a node is proportional to the sum of the centrality values of all its neighbouring nodes.
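That definition translates directly into a fixed-point computation. A rough power-iteration sketch with numpy (the adjacency matrix is again a toy example, and convergence details are glossed over):

```python
import numpy as np


def eigenvector_centrality(A: np.ndarray, iterations: int = 100) -> np.ndarray:
    """Power iteration: each node's score converges to a value proportional to the
    sum of its neighbours' scores, i.e. the leading eigenvector of A."""
    x = np.ones(A.shape[0])
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)  # renormalise to avoid overflow
    return x


A = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])
print(eigenvector_centrality(A).round(3))  # node 0, the best connected, scores highest
```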

Spectral graph theory studies the eigenvalues of matrices that embody the graph structure.

Betweenness centrality: The betweenness of an edge is the fraction of shortest paths between pairs of vertices that run along it. The degree distribution gives the probability of finding a node in G with degree k.

Complex networks models
In random graphs, the probability that 2 neighbours of a node are connected is the probability that two randomly chosen nodes are linked. Large scale random networks have no clustering in general. The average distance in a random network is rather small.

Small world model
Some real networks like the Internet have characteristics which are not explained by uniformly random connectivity. Small world property: the network diameter is much smaller than the number of nodes. Most vertices can be reached from the others through a small number of edges.

Scale-free networks
The degree distribution does not follow a Poisson like distribution but does follow a power law i.e. the majority of nodes have low degree and some nodes, the hubs, have an extremely high connectivity.

Additionally, many systems are strongly clustered with many short paths between the nodes. They obey the small world property.

Scale-free networks emerge in the context of a growing network in which new nodes prefer to connect to highly connected nodes. When there are constraints limiting the addition of new edges, then broad-scale or single-scale networks appear.

Assortative networks
Most edges connect nodes that exhibit similar degrees (the opposite is disassortative networks).

A Hamiltonian cycle in a graph passes through all its nodes exactly once. The line graph of a network is the graph whose nodes are the edges of the original network.


Chapter 3. Random Walkers

Two different types of random walkers: Uniform random walkers and random walkers with spectral jump (a personalisation vector).

Statistical mechanics: The frequency of all the nodes will be the same in all the random walkers developed. In any type of random walker the most important element is the frequency with which each node appears. 

"If we move on a network in a random way, we will pass more often through the more accessible nodes". This is the idea of the PageRank algorithm used by Google. The difficulty comes to compute the frequency of each node. A random walker on a network can be modelled by a discrete-time Markov chain.

Multiplex networks: The edges of those networks are distributed among several layers. It is useful to model Dynamic Risk.

Intentional risk analysis
Accessibility: Linked to the frequency of a uniform random walker with spectral jump in the weighted network of licit connections. Two types of nodes:

- Connection-generator nodes (e.g. Internet access, effective access of internal staff).
- Non connection-generator node (those nodes through which the communication is processed).

Static intentional risk (it exists but, I assume, it is not so key): the accessibility of each connection has zero cost because the accesses have been achieved by using the existing structure of the network.

In dynamic intentional risk, each additional connection or non-designed access entails a cost for the attacker who seeks access to the valuable information (the vaults).

Modelling accessibility
A biased random walker with spectral jumps, going to those nodes with an optimal cost/benefit ratio. The random walker makes movements approaching the vaults. Accessibility in dynamic intentional risk may be modelled using a biased random walker with no spectral jumps in a 3-layered multiplex network.

1. A first layer corresponding to spectral jumps (ending and starting connections).
2. A second layer with the existing connections registered by the sniffing.
3. A third layer with connections due to the existence of both vulnerabilities + affinities.


Chapter 4. The Role of Accessibility in the Static and Dynamic Risk Computation

The anonymity is computed for each edge of the intentionality network. The value and the accessibility are computed for each node. Two ways to calculate the edge's PageRank:

a. via the classic PageRank algorithm (frequency of access to an edge and the PageRank of its nodes).
b. via Line Graph i.e. the nodes are the edges of the original network.

The damping factor will be the jumping factor.

The outcome will be a weighted and directed network with n nodes and m edges. There are equivalent approaches using the personalization vector.

Chapter 5. Mathematical Model I: Static Intentional Risk

Static Risk: Opportunistic risk. Risk follows authorised paths.
Dynamic Risk: Directed intentional risk. Tendency to follow unauthorised paths. Linked to the use of potentially existing paths but not authorised in the network.

The model is based on the information accessibility, on its value and on the anonymity level of the attacker.

Intentionality complex network for static risk. Elements:

- Value: How profitable the attack is.
- Anonymity: How easily the identity of the attacker can be determined.
- Accessibility: How easily the attack can be carried out.

Every node has a resistance (a measure for an attacker to get access). Value is located at certain nodes of the network called vaults. Different algorithms will be used: Max-path algorithm, value assignment algorithm and accessibility assignment algorithm.

Static risk intentionality network construction method:
1. Network construction from the table of connection catches.
2. Network collapse and anonymity assignment.
3. Value assignment.
4. Accessibility assignment.

Two networks appear in this study, the users network and the admins network. Network sniffing provides the connections between the nodes IP and the nodes IP:ports. Based on this sniffing, we get the number of users who use each one of the edges. The inverse of that integer number becomes the label for each edge. The max-path algorithm is executed to distribute the value from the vaults to all the nodes of the networks.

The inverse of the number of users on each edge is used as a value reduction factor. The higher the number of users who access a node, the higher the value reduction potential attackers will face in that node, but also the higher the anonymity they will enjoy.

Each edge is labelled with the frequency of access (the number of accesses). The accessibility of a node is linked to the accessibility of the edges connecting it. For each edge, the PageRank algorithm is calculated.

The higher the access frequency, the higher the probability that someone will misuse the information present in that node.

The higher the profit to risk ratio for the attacker, the greater the motivation for the attacker.

The paradigm shift is relevant: From the traditional risk = impact x probability to:

- Attacker income: Value for each element of the network.
- Attacker probability: Directly proportional to accessibility.
- Attacker risk: 1/anonymity.

The value of each element resides in the node. Anonymity resides on the edge.
The profit to risk ratio for the attacker (PAR) = value x accessibility x (anonymity / k), where k is the potential punishment probability for the attacker.
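A sketch of how that ratio could be computed per node once the three attributes have been derived from the network (all attribute values below are made up purely for illustration):

```python
def profit_to_risk(value: float, accessibility: float, anonymity: float, k: float) -> float:
    """PAR = value x accessibility x (anonymity / k),
    where k models the potential punishment probability for the attacker."""
    return value * accessibility * (anonymity / k)


# Hypothetical nodes with (value, accessibility, anonymity) attributes.
nodes = {
    "web_frontend": (10.0, 0.90, 0.50),
    "internal_app": (40.0, 0.30, 0.10),
    "vault_db":     (100.0, 0.05, 0.02),
}

k = 0.5  # illustrative punishment-probability constant
ranking = sorted(((profit_to_risk(*attrs, k), name) for name, attrs in nodes.items()), reverse=True)
for par, name in ranking:
    print(f"{name:15s} PAR = {par:.2f}")
```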


 Chapter 6. Mathematical Model II: Dynamic Intentional Risk

Zero-day attacks are not integrated in the model.

In static risk:

- The most important single attribute is Value. The value depends on the percentage of value accessible by the user.
- The attacker uses their authorised access.
- Anonymity is an important incentive. Lack of anonymity is a deterrent.
- Accessibility has no cost (the user is already authorised)
- There is a higher level of personal risk perception.
- The higher the number of users, the higher their perceived anonymity.

In dynamic risk:

- The most important single attribute is accessibility.
- The degree of anonymity is not a deterrent (the user is not already authorised or known).

- The hacker tries to access the entire value.
- Typical values of anonymity: coming from the Internet, anonymity equals 1; from wireless, 0.5; and from the Intranet, 0.

Accessibility in Dynamic Risk
Each jump of a non-authorised user from one element to another increases the cost for the attacker. The greater the distance to the value, the more difficult and costly the attack is.

Dynamic risk construction
First step: performing a vulnerability scan of the network to get all non-authorised paths (known vulnerabilities, open ports, app fingerprinting and so forth).

The vulnerability scanner used is Nessus.

Two types of potential connections:
- Affinities: Two nodes sharing e.g. OS, configurations and users.
- Vulnerabilities.

A modified version of the PageRank algorithm is used.

Dynamic Risk model

User network + admins network + affinities + vulnerabilities

Anonymity does not play any role in Dynamic Risk but accessibility is the main parameter.

Each edge has an associated weight. The dynamic risk of an element is the potential profit the attacker obtains by reaching that element. As anonymity is not relevant in the context of dynamic risk, it is not necessary to collapse its associated network.

The accessibility of an element of the Dynamic Risk Network is the value we get for the relative frequency of a biased random walker through that element.

- Dynamic risk = value x accessibility
- The dynamic risk of a network is the maximum dynamic risk value of its elements (interesting idea - why not the sum?)
- The dynamic risk average = the total value found in the vaults x accessibility average (the root mean square of all accessibility values associated to elements of the network in the context of dynamic risk).


Chapter 7. Towards the Implementation of the Model

Source ports in this model are not important. They are mostly generated randomly.

Access levels. Restricted and unrestricted.
The higher the level of privilege, the more information and functionality an attacker can access. Typically there are two types of accesses, based on different ports:

- Restricted end user access: Always authorised and mostly with low risk.

- Unrestricted technical access: Any access that allows a technical user or an external hacker to have unrestricted access to code, configuration or data. It can be authorised or gained via an exploit. It is a high risk: Using admin access in an application you can in most cases escalate privileges to gain control over the server and the network.

For static risk we need to find which accesses are already authorised and normal. The frequency of connections for each socket (especially for the frequently used sockets) informs about the busiest routes and how many hosts accessed a specific application.

For dynamic risk, we need to model the potential routes that a hacker might find and exploit. For an attacker, sockets that are used normally are desirable since they are more anonymous.

Attackers will select routes where they can obtain the most privileges with the least effort and get the closest to their end goal.

Other unknown risks are out of the scope of this proposal. This is a key point to understand.

To calculate anonymity in the static risk network, we need to collapse all the IP sources that connect to the same destination port. The anonymity value will be the inverse of the number of IP sources collapsed.
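A small sketch of that collapsing step over toy connection records (addresses and field layout are invented for the example):

```python
from collections import defaultdict

# Toy connection records: (source IP, destination IP:port)
connections = [
    ("10.0.0.1", "10.1.1.5:443"),
    ("10.0.0.2", "10.1.1.5:443"),
    ("10.0.0.3", "10.1.1.5:443"),
    ("10.0.0.1", "10.1.1.9:1521"),
]

sources_per_destination = defaultdict(set)
for src, dst in connections:
    sources_per_destination[dst].add(src)

# Anonymity label for each collapsed edge: inverse of the number of distinct IP sources.
for dst, sources in sources_per_destination.items():
    print(f"{dst}: {len(sources)} sources, anonymity = {1 / len(sources):.2f}")
```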

Value: How much the data or functionality is worth for the attacker. It needs to be placed manually into those vault nodes.

And the book ends with the great news that the authors are working on a proof of concept.


Innovation in IT Security