Book Review: The Elements of Data Analytic Style by @jtleek i.e. Jeffrey Leek

When being introduced to the Data Science world, one needs to build the right frame around the topic. This is usually done via a set of straight-to-the-point books that I will be summarising in this blog.

The second book that I start with is written by Jeffrey Leek. Its title is "The Elements of Data Analytic Style". You can get it here. It is a primer on basic statistical concepts that are worth having in mind when embarking on a scientific journey.

This summary is a way to acknowledge value in a twofold manner: first, I praise the book and congratulate the author and, second, I share with the community a very personal summary of the book.

Let me try to share with you the main learning points I collected from this book. As always, here goes my personal disclaimer: reading this very personal and non-comprehensive summary by no means replaces reading the book it refers to; on the contrary, this post is an invitation to read the entire work.

In approximately 100 pages, the book provides the following key points:

Types of analysis
Figure 2.1, titled "the data analysis question type flow chart", is the foundation of the book. It classifies the different types of data analysis. The most basic one is descriptive (reporting results with no interpretation). A step further is an exploratory analysis (would the proposed statements still be valid, in a qualitative way, using a different sample?).

If this also holds true in a quantitative manner, then we are performing an inferential analysis. If we can use a subset of measures to predict some others, then we can talk about a predictive analysis. The next step, certainly less frequent, is the possibility to seek a cause; then we are performing a causal analysis. Finally, and very rarely, if we go beyond statistics and find a deterministic relation, then we have a mechanistic analysis.

Correlation does not imply causation
This is key to understand. The additional element needed to really grasp it is the existence of confounders, i.e. additional variables, not covered by the statistical work we are engaged in, that connect the variables we are studying. Two telling examples are mentioned in the book (a small simulation sketch follows them):
- The consumption of ice cream and the murder rate are correlated. However, there is no causality. There is a confounder: the temperature.
- Shoe size and literacy are correlated. However, there is a confounder here: age.
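
As a toy illustration (a minimal sketch with made-up coefficients, not from the book), a hidden confounder makes two causally unrelated quantities correlate:

```python
import random
import statistics

# Temperature drives both ice cream consumption and the murder rate,
# so the two correlate even though neither causes the other.
# Coefficients and noise levels below are invented for illustration.
random.seed(0)
temperature = [random.uniform(0, 35) for _ in range(1000)]
ice_cream = [0.8 * t + random.gauss(0, 3) for t in temperature]
murders = [0.5 * t + random.gauss(0, 3) for t in temperature]

# statistics.correlation requires Python 3.10+
print(round(statistics.correlation(ice_cream, murders), 2))  # ~0.8, yet no causation
```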

Other typical mistakes
Overfitting: using a single, unsplit data set for both model building and testing (a small sketch follows).
Data dredging: fitting a large number of models to a data set.
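
A minimal sketch of the overfitting point, using synthetic data: an over-flexible model scored on the same data it was built on looks far better than it really is.

```python
import numpy as np

# The true relationship is linear, but we fit a deliberately
# over-flexible degree-9 polynomial on the first half of the data only.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = x + rng.normal(0, 0.3, 60)

train, test = slice(0, 30), slice(30, 60)
coeffs = np.polyfit(x[train], y[train], deg=9)

def mse(xs, ys):
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

print("training error:", round(mse(x[train], y[train]), 3))  # flattering
print("held-out error:", round(mse(x[test], y[test]), 3))    # the honest number
```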

Components of a data set
It is not only the raw data, but also the tidy data set, a code book describing each of the variables and their values in the tidy data set, and a script describing how to reach the tidy data set from the raw data.
The data set should be understandable even if you, as producer or curator of the data set, are not there.

Types of variables
Continuous, ordinal, categorical, missing and censored.

Some useful tips
- The preferred way to graphically represent data: plot your data.
- Explore your data thoroughly before jumping into statistical analysis.
- Fit a linear regression and compare it with the initial scatterplot of the original data (a small sketch follows this list).
- More data usually beats better algorithms.
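
A hedged sketch of that regression tip, with made-up numbers: the residuals tell you how well the fitted line matches the raw scatter.

```python
import numpy as np

# Synthetic data: a roughly linear relationship with noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, 100)

slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)
print(f"fit: y = {slope:.2f}x + {intercept:.2f}")
# Large residuals, or visible structure in them, would mean the
# straight line does not describe the scatterplot well.
print("residual standard deviation:", round(float(residuals.std()), 2))
```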

Section 9 provides some hints on how to write an analysis. Section 10 plays a similar role for creating graphs. Section 11 hints at how to present the analysis to the community. Section 12 covers how to make the entire analysis reproducible. Section 14 provides a checklist and Section 15 additional references.

Happy analysis!

Happy stats!

Book Review: How to be a modern scientist by @jtleek i.e. Jeffrey Leek

When being introduced to the Data Science world, one needs to build the right frame around the topic. This is usually done via a set of straight-to-the-point books that I will be summarising in this blog.

The first book that I start with is written by Jeffrey Leek. It is not a Data Science book per se but rather an introductory set of tips on how to do science today.

The title of the book is "How to be a modern scientist", which you can get here. Actually, the series of posts that I start with this one is a consequence of reading this book. It is a way to acknowledge value in a twofold manner: first, I praise the book and congratulate the author and, second, I share with the community a personal take on the value they could obtain by reading it. These two processes are also present in the scientific community today, together with more traditional activities such as reading and, certainly, writing scientific papers.

Let me try to share with you the main learning points I collected from this book. As always, here goes my personal disclaimer: reading this very personal and non-comprehensive summary by no means replaces reading the book it refers to; on the contrary, this post is an invitation to read the entire work.

Paper writing and publishing
There are currently three elements in modern science: what you write, what you code and the data you share (the data you have based your investigation on).
The four parts the author states a scientific paper consists of are great: a set of methodologies, a description of the data, a set of results and, finally, a set of claims.
A key point is that your paper should tell or explain a story. That is why the author talks about "selecting your plot" for your paper, i.e. you start writing your paper once you have an answer to your question.
These chapters distinguish between posting a preprint of a paper (for example, on a preprint server) and submitting the paper to a peer-reviewed journal. For junior scientists, the mix the author mentions of using a preprint server and a closed-access journal is very appropriate.

Peer review and data sharing
The author proposes some elegant ways to review papers carefully and in a timely manner, and also mentions using e.g. blogs to share a serious and constructive review.
Regarding data sharing, his suggestion is to use a public repository that will remain accessible over time.

Scientific blogging and coding
A way to market your papers can be via blogging. The author recommends three blogging platforms.
The author also reminds us that the Internet is a medium in which controversy flourishes.
In terms of code, the author points to well-known platforms for hosting general code.

Social media in science
A useful way to promote the work of others and your own work.

Teaching in science
Post your lectures on the Internet (but be aware of any non-disclosure agreements with the university or educational institution you teach at). Share videos of your lectures and, if resources allow it, create your own online course.

Books and internal scientific communication
Three platforms are suggested, among them Amazon Kindle Direct Publishing.

Regarding internal communication, the author proposes tools to keep teams in sync.

Scientific talks and reading scientific papers, credit and career planning and online identity
These are the last sections of the book: some hints on preparing scientific talks, reading papers constructively and, very importantly, giving credit to all those community members who have helped you out, either by writing something you use or by creating frameworks you use. A key suggestion is to use as many related metrics as possible in your CV and in your presentations.

Finally, the book ends with some useful (and common-sense) tips on career planning and online identity.

Thanks to the author of How to be a modern scientist

Happy revealing!

Economics Book review: Bad Samaritans by Ha-Joon Chang - Reality vs hearsay - Similar in Infosec?

I am convinced that Information Security professionals can benefit from reading not only Information Security books like this one or that one, but also books that shed some light on key business areas: the areas to which, ultimately, Information Security and IT Security provide services. This is why I propose a series of book reviews outside the Information Security realm. Economics is certainly one of those areas. Understanding key points of Economics enables security professionals to understand and assess system and data criticality and to aim better at providing added value to their customers.
On this occasion I present some learning points extracted from the book titled Bad Samaritans by Ha-Joon Chang. Certainly, this list does not replace reading the book. I do recommend a careful reading of it.

However, for those with little time, maybe these points can be of help to you:

In two thoughts:

- There is a tendency in rich countries to request that developing countries follow economic policies that are, on many occasions, the opposite of what the rich countries themselves did to reach where they are in economic terms.
- Care, reflection, attention to detail and a sense of fairness should be applied in this politically-driven field named economics.

In more than two thoughts:

On Chapter 0

- South Korea as a country improved greatly in about 50 years, akin to Haiti becoming Switzerland. This happened thanks to a mix of economic policies that, during most of those years, from the 1960s to today, could by no means be considered free-trade based.
- This economic development was not only linked to periods of democracy.
- The Korean government was extra careful in controlling imports and their influence on the national economy.
- Intellectual property piracy has played an important role in countries that did protect their industries.
- This chapter argues clearly against neo-liberal economics.
- It is interesting to see that today's rich countries used protectionism and subsidies to reach their current state.
- All this has been summarised as "climbing and kicking away the ladder".
- So, it seems that a careful, selective and gradual opening of countries' economies is key to make progress in this matter.

On Chapter 1

- Fewer than 50 years ago, it was unthinkable to see Japan as a high-quality car maker country.
- Democracy in Hong Kong only started in 1994, three years before the handover to China and 152 years after the start of British rule.
- Before free trade, there was protectionism.
- Globalisation was not always hand in hand with free trade.
- Interesting thought: something purely driven by politics is avoidable.
- The role of the IMF, the World Bank and the WTO is biased towards the benefit of the rich countries.

On Chapter 2

- How did rich countries become rich? In a nutshell, by protecting their markets.
- How did rich countries protect their markets? Basically by:
  • Applying tariff rates.
  • Keeping industries in their local markets.
  • Supporting local industries via governmental decisions (and budget).
  • Keeping primary commodities production in the colonies.
In a way, while reading this chapter, one can grasp the confusion that economic theories have shown throughout history: the constraints that apply while a country is still developing get mixed up with the constraints that appear only after a specific economic policy on development or industrialisation has been applied, or after the country has benefited from a specific technological breakthrough.

On Chapter 3

Very succinctly, this chapter suggests the need to find the right pace and timeline for a country to adopt free trade or, even, the right balance between protected and free trade. Done very quickly, it could mean a lack of growth. Done very slowly, it could mean losing growth opportunities.

On Chapter 4

Today's rich countries regulated foreign direct investment when they were at the receiving end. Foreign direct investment needs to be regulated.

On Chapter 5

There is a fine line defining which (and whether) some specific enterprises need to be owned by the State and when they need to be sold and to whom.

On Chapter 6

Equally, it is also very complex to get the right balance on Intellectual Property Rights and when ideas need to be protected by patents.

On Chapter 7

Economics is driven by politics. This is the reason why a balanced and prudent government is so decisive.

On Chapter 8

The trouble with corruption and how it damages the economy. Interestingly, economic prosperity and democracy are not as necessarily linked as one might assume.

On Chapter 9

Culture in all countries can evolve with the right political measures.

On the Final Chapter

There was at least one example in history when the "Bad Samaritans" (as this book calls them) behaved as "Good Samaritans": the Marshall Plan.

Enjoy the reading!

Enjoy the future!

Papers on Blockchain and Bitcoin: Student notes

A lot of time was devoted to this post. The reader will find it useful for getting an initial idea of both concepts: Bitcoin and blockchain.

As usual, a disclaimer note: This summary is not comprehensive and it reflects mostly literal extracts from the mentioned papers and article.

Let's start by summarising a classic, the paper on Bitcoin written by Satoshi Nakamoto.

Paper 1
Bitcoin: A Peer-to-Peer Electronic Cash System by Satoshi Nakamoto

This paper appeared in 2008. The first Bitcoins were exchanged in 2009. By June 2011 there were around 10,000 users and 6.5 million Bitcoins [info coming from this paper].

Bitcoins (BTC) are generated at a predictable rate. The eventual total number of BTC will be 21 million [info coming from this paper].

This is a proposal for a peer-to-peer version of electronic cash. The novelty here is that there is no need for a trusted third party to tackle the double-spending challenge. The key to the functioning of this peer-to-peer network is that the majority of CPU power is not controlled by an attacker.

The proposal is to replace the need for trust, an element that makes transaction costs higher, with a cryptographic (and distributed) proof. More specifically, via a peer-to-peer distributed timestamp server that generates computational proof of the chronological order of transactions.

Interestingly, an electronic coin is a chain of digital signatures. The main challenge in an electronic payment system is avoiding double spending. A digital mint would solve this; however, it would mean that the entire scheme depends on the mint.

The participants of the peer-to-peer network form a collective consensus regarding the validity of a transaction by appending it to the public history of previously agreed transactions (the blockchain) using a hashing function and their keypair. A transaction can have multiple inputs and multiple outputs [info coming from this paper].

The only way to confirm the absence of a transaction is to be aware of all transactions. Without the role of a central party, this translates into two requirements:
- All transactions must be publicly announced.
- All market participants should agree on a single payment history. In practical terms, this means the majority of market participants.

The first technical element required by this electronic payment proposal is a timestamp server. Each timestamp includes all previous timestamps. A timestamp consists of a published hash of a block of items.

How do we handle the distributed nature of the system in the case of the timestamp servers? The author (or authors) proposes a proof-of-work system. In this case, proof-of-work means one CPU, one vote. The majority decision is represented by the longest chain, which has the greatest proof-of-work effort invested in it.

Technically, the proof-of-work involves scanning for a value such that, when hashed, the hash begins with a required number of zero bits. The average work required is exponential in the number of zero bits, yet the result can be verified by executing a single hash.
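
As a hedged sketch of the idea (it uses a single SHA-256 pass and a toy 8-byte nonce; Bitcoin actually double-hashes a structured block header):

```python
import hashlib

def proof_of_work(block_data: bytes, zero_bits: int) -> int:
    """Scan for a nonce such that SHA-256(block_data || nonce)
    starts with `zero_bits` zero bits."""
    target = 1 << (256 - zero_bits)  # the hash, read as an integer, must be below this
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Finding the nonce takes on average 2**zero_bits attempts;
# verifying it takes a single hash.
print(proof_of_work(b"example block header", zero_bits=16))
```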

The reader can start to grasp how CPU-intensive this electronic payment system is.

Nodes always consider the longest chain to be the correct one and they will keep working on extending it.

Incentives come both from the creation of new coins and from transaction fees. The first transaction in a block is a special transaction that starts a new coin owned by the creator of the block. Potential attackers will find it much more beneficial to devote their CPU cycles and electricity to creating new coins rather than to redoing existing blocks and trying to control the consensus in their favour.

Transactions are hashed in a Merkle tree. This makes payment verification possible without running a full network node. This verification works as long as honest nodes control the network.

The authors state that there is never the need to extract a complete standalone copy of a transaction history.

As all transactions need to be published, the way to obtain privacy in this system is to de-couple identities from public keys. However, privacy is only partially guaranteed. An additional recommendation is to use a new keypair for each new transaction.

An attacker can only try to change one of their own transactions to take back money that they recently spent.

Paper 2
An Analysis of Anonymity in the Bitcoin System by Fergal Reid and Martin Harrigan

This paper provides further input (it was probably the first paper to do so) on the topic we mentioned earlier in this post regarding Bitcoin and the limits of user anonymity.

The decentralised nature of the BTC system and the lack of a central authority brings along the need to make all transaction history publicly available.

Users in Bitcoin are identified by public keys. Bitcoin maps a public key to a user only in the user's node, and it allows users to issue as many public keys as they wish. Users can also make use of third-party mixers (or laundries).

The interesting element of this paper is the description of the topological structure of two networks derived from Bitcoin's public transaction history: The transaction network and the user network. This is doable thanks to the transaction history being publicly available. After joining Bitcoin's peer-to-peer network, a client can freely request the entire history of Bitcoin transactions. This enables the possibility of performing passive identification analysis.

The authors of this paper studied BTC transactions from January 2009 to July 2011: 1,019,486 transactions and 12,530,564 public keys.

The authors of this paper were not aware of any prior network-structure studies of electronic currencies. However, such a study was done on a physical currency based on gift certificates, named Tomamae-cho, that circulated in Japan for three months in 2004-2005.

The flow of that currency showed that the cumulative degree distribution followed a power law, and the network showed small-world properties (high average clustering coefficient and low average path length).

Many papers point out how difficult it is to preserve anonymity in networks where user behaviour data is available. The main postulate of the authors of this paper is that Bitcoin does not anonymise user activity.

The transaction network
It represents the flow of BTCs between transactions over time. Each node represents a transaction and each directed edge represents an output of the transaction corresponding to the source node that is an input to the transaction corresponding to the target node. Each edge includes a timestamp and a BTC value.

There is no preferential attachment in this network.

The user network
It represents the flow of BTCs between users over time. Each node represents a user and each directed edge represents an input-output pair of a single transaction. Each directed edge also includes a timestamp and a BTC value.

As a user can use many different public keys, the authors of the paper construct an ancillary network in which each vertex represents a public key. They connect nodes with undirected edges, where each edge joins a pair of public keys that are both inputs to the same transaction and are thus controlled by the same user.

The contraction of public keys into users generates a network that is a proxy for the social network of BTC users (a small sketch of this contraction follows).
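
A hedged sketch of that contraction step, assuming the networkx library and a hypothetical toy list of transaction inputs; public keys that co-sign the same transaction end up in the same connected component, i.e. the same presumed user:

```python
import networkx as nx  # assumption: networkx is installed

# Hypothetical toy data: the public keys used as inputs of each transaction.
tx_inputs = [
    ["pk1", "pk2"],   # pk1 and pk2 co-sign one transaction -> same user
    ["pk2", "pk3"],
    ["pk4"],
]

ancillary = nx.Graph()
for inputs in tx_inputs:
    ancillary.add_nodes_from(inputs)
    # joining consecutive pairs is enough to connect all inputs of a transaction
    ancillary.add_edges_from(zip(inputs, inputs[1:]))

# each connected component is, heuristically, one user
print(list(nx.connected_components(ancillary)))  # [{'pk1','pk2','pk3'}, {'pk4'}]
```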

Disassembling anonymity
A first source of decreased anonymity consists of integrating off-network information. Some BTC-related organisations relate public keys with personally identifiable information. Some BTC users voluntarily disclose their public keys in forums.

Bitcoin public keys are strings of about 33 characters that start with the digit one.

A second source of information is IP addressing. Unless users employ anonymising proxy technology such as Tor, it is reasonably safe to assume that the first IP address announcing a transaction is its source.

A third source is based on egocentric analysis and visualisation, e.g. WikiLeaks published its public key to request donations. The analysis of transactions having that particular public key as their destination can also provide input on identities.

A fourth source is context discovery, e.g. identifying nodes that correspond to BTC brokers.

These techniques help in investigating BTC thefts. For example, a very quick transfer of BTCs between public keys (most of them not yet known to the network of completed transactions) can be an indication supporting a theft hypothesis.

There are other analysis paths involving tainted BTCs, order books from BTC exchanges, client implementations, time analysis and the like.

Mitigation strategies
The official BTC client could be patched to prevent the linking of public keys with user information; a service using dummy public keys could be implemented (certainly, this would increase transaction fees); even the BTC protocol could be modified to allow for BTC mixing at the protocol level.

For the time being, the authors of this paper state that physical cash payments still represent a competitive and anonymous payment system.

The final statement from the authors of this paper: "Strong anonymity is not a prominent design goal of the BTC system".

Paper 3
Bitcoin: Economics, Technology and Governance by Rainer Boehme, Nicolas Christin, Benjamin Edelman and Tyler Moore

This paper defines Bitcoin (BTC) as an online communication protocol facilitating the use of a virtual currency. It states that BTC is the first widely adopted mechanism to provide absolute scarcity of a money supply. Inflation has no place in this system.

Public keys serve as account numbers. Every new transaction published to the BTC network is periodically grouped in a block of recent transactions. A new block is added to the chain of blocks every ten minutes.

In some cases, a batch of transactions will be added to the block chain but then altered a few minutes later because a majority of miners reached a different solution.

When listing a transaction, the buyer and the seller can also offer to pay a "transaction fee", normally 0.0001 BTC, which is a bonus payment to whichever miner solves the computationally difficult puzzle that verifies the transaction.

The paper reviews four key categories of intermediaries: Currency exchanges, digital wallet services, mixers and mining pools.

Currency exchanges
They exchange BTCs for traditional currencies or other virtual currencies. Most operate double auctions with bids and asks and charge a commission (from 0.2 to 2 percent). Today, BTC resembles a payment platform more than a real currency.

There are significant regulatory requirements (including expensive certification fees) to establish an exchange. In addition, exchanges require considerable security measures. So the number of them is relatively limited.

Digital wallet services
They are data files that include BTC accounts, recorded transactions and the keys necessary to spend or transfer the stored value. In practice, digital wallet services tend to increase centralisation (and online availability, which comes with high security requirements).

The loss of a private key, if not backed up, would mean the loss of the possibility to trade with those owned (i.e. digitally signed) BTCs.

The entire blockchain reached 30GB in March 2015.

Mixers ensure that timing does not yield clues about money flows. They let users pool sets of transactions in unpredictable combinations. Mixers charge 1 to 3 percent of the amount sent. Mixer protocols are usually not public.

Mining pools
BTCs are created when a miner solves a mathematical puzzle. Mining pools now combine resources from numerous miners. Oversized mining pools threaten the decentralisation that underpins BTC's trustworthiness.

Uses of Bitcoin
Initially, it seems that illicit activities use BTC, given its openness and distributed nature. Every Bitcoin transaction must be copied into all future versions of the block chain. Updating the block chain entails an undesirable delay, making BTC too slow for many in-person retail payments.

Some scientists stress the importance of BTC for its ability to create a decentralised record of almost anything.

Risks in BTC
Market risk comes from the fluctuation in the exchange rate between BTC and other currencies. There is also the shallow market problem: a person quickly trading a large amount would affect the market price.
Counterparty risk: of the exchanges that closed (either due to a security breach or to low business volume), 46% did not reimburse their customers after shutting down.

The BTC system offers no possibility to undo a transaction, which creates transaction risk (and affects end-consumer protection).

There is certainly some operational risk coming from the technical infrastructure and the already mentioned 51 percent attack.

Finally, BTC faces also privacy, legal and regulatory risks.

Crime comes in three types: BTC-specific crime, BTC-facilitated crime and money laundering.

The authors suggest that longstanding reporting requirements can provide a level of compliance for virtual currencies similar to what has been achieved for traditional currencies. However, they recommend considering regulation in the broader context of a global market for virtual currency services.

Social science lab
Interestingly, most users treat their bitcoin investments as speculative assets rather than as means of payment.

A so-far theoretical concern: larger blocks are less likely to win a block race than smaller ones.

Privacy and anonymity
Some authors claim that almost half of BTC users can be identified.

An open question posed by the authors
What happens if the BTC economy grows faster than the supply of bitcoins?

A final thought by the authors of this paper: BTC may be able to accommodate a community of experimentation built on its foundations.

Paper 4
Bitcoin-NG: A scalable blockchain protocol by Ittay Eyal, Adem Efe Gencer, Emin Gun Sirer, and Robert van Renesse (Cornell University)

This paper proposes a new blockchain protocol designed to scale. The original Bitcoin-derived blockchain protocols have inherent scalability limits: to improve efficiency, one has to trade off throughput against latency. BTC currently targets a conservative 10-minute interval between blocks, yielding 10-minute expected latencies for transactions to be encoded in the blockchain.

Bitcoin-NG achieves a performance improvement by decoupling Bitcoin's blockchain operation into two planes: leader election and transaction serialisation.

Some generic descriptions of the blockchain protocol
An output is spent if it is the input of another transaction. A client owns x Bitcoins at time t if the aggregate of unspent outputs to its address is x. The miners commit the transactions into a global append-only log called the blockchain.

The blockchain records transactions in units of blocks. A valid block contains: a solution to a cryptopuzzle involving the hash of the previous block; the hash (the Merkle root) of the transactions in the current block, which have to be valid; and a special transaction (the coinbase) crediting the miner with the reward for solving the cryptopuzzle. The cryptopuzzle is a double hash of the block header whose result has to be smaller than a set value. The difficulty of the problem, set by this value, is dynamically adjusted such that blocks are generated at an average rate of one every ten minutes.

It is a blockchain protocol that serialises transactions allowing for better latency and bandwidth than BTC.

The protocol divides time into epochs. In each epoch, a single leader is in charge of serialising state machine transitions. To facilitate state propagation, leaders generate blocks. The protocol introduces two types of blocks: key blocks for leader election and microblocks that contain the ledger entries.

Leader election already takes place in BTC. But in BTC the leader is in charge of serialising history, which makes the entire duration between leader elections a long system freeze. Leader election in Bitcoin-NG is forward-looking and ensures that the system is able to continually process transactions.

Bitcoin-NG is resilient to selfish mining against attackers with less than 1/4 of the mining power.

Bitcoin-NG shows that it is possible to improve the scalability of blockchain protocols to the point where the network diameter limits consensus latency and the individual node processing power is the throughput bottleneck.  

Paper 5
A Protocol for Interledger Payments by Stefan Thomas and Evan Schwartz

This paper deals with the complexity of moving money between different payment systems. The authors of the paper propose a way to connect different blockchain implementations. It uses ledger-provided escrow (conditional locking of funds) to allow secure payments through untrusted connectors.

This is a protocol for secure interledger payments across an arbitrary chain of ledgers and connectors. It uses ledger-provided escrow based on cryptographic conditions to remove the need to trust connectors between different ledgers. Payments can be as fast and cheap as the participating ledgers and connectors allow and transaction details are private to their participants.

The focus of this summary is not a deep description of this protocol but the introduction of the BAR (Byzantine, Altruistic, Rational) model.

Byzantine actors may deviate from the protocol for any reason, ranging from technical failure to deliberate attempts to harm other parties or simply impede the protocol.

Altruistic actors follow the protocol exactly.

Rational actors are self-interested and will follow or deviate from the protocol to maximize their short and long-term benefits.

The authors of the paper assume that all actors in the payment are either Rational or Byzantine. Any participant in a payment may attempt to overload or defraud any other actors involved. Thus, escrow is needed to make secure interledger payments.

This protocol proposes two working modes: The atomic mode and the universal mode.

In the atomic mode, transfers are coordinated by a group of notaries that serve as the source of truth regarding the success or failure of the payment. The atomic mode only guarantees atomicity when the notaries act honestly. Rational actors can be incentivised to participate with a fee.

The universal mode relies on the incentives of rational participants to eliminate the need for external coordination.

Paper 6
A Next-Generation Smart Contract and Decentralized Application Platform from Ethereum's GitHub repository

This white paper presents a blockchain implementation that is an alternative to BTC and, initially, more generic. It presents blockchain technology as a tool for distributed consensus. It covers not only cryptocurrencies but also financial instruments, non-fungible assets such as domain names, and any other digital asset controlled by a script, i.e. a piece of code implementing arbitrary rules (e.g. smart contracts).

Ethereum provides a blockchain with a built-in fully fledged Turing-complete programming language.

A recap on BTC
As already mentioned in the summary of Paper 1 in this post, BTC is a decentralised currency managing ownership through public-key cryptography with a consensus algorithm named "proof of work". It achieves two main goals: it allows nodes in the network to collectively agree on the state of the BTC ledger and it allows free entry into the consensus process. How does it achieve this last point? By replacing the need for a central register with an economic barrier.

The ledger of a cryptocurrency can be thought of as a state transition system. The "state" in BTC is the collection of all coins (unspent transaction outputs, UTXO) and their owners. A minimal sketch of this idea follows.
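
This is a hedged toy model of the UTXO state transition, with invented transaction IDs and amounts; a real implementation would also verify signatures and scripts:

```python
# state: (txid, output_index) -> (owner, amount)
utxo = {("tx0", 0): ("alice", 50)}

def apply_transaction(utxo, txid, inputs, outputs):
    """Spend existing unspent outputs and create new ones."""
    if any(i not in utxo for i in inputs):
        raise ValueError("input unknown or already spent")
    if sum(utxo[i][1] for i in inputs) < sum(amount for _, amount in outputs):
        raise ValueError("outputs exceed inputs")
    for i in inputs:
        del utxo[i]  # outputs are all-or-nothing: an input is fully consumed
    for index, (owner, amount) in enumerate(outputs):
        utxo[(txid, index)] = (owner, amount)

# alice pays bob 20 BTC and sends 30 BTC back to herself as change
apply_transaction(utxo, "tx1", [("tx0", 0)], [("bob", 20), ("alice", 30)])
print(utxo)  # {('tx1', 0): ('bob', 20), ('tx1', 1): ('alice', 30)}
```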

BTC's decentralised consensus process requires nodes in the network to continuously attempt to produce packages of transactions called blocks. The network is intended to create a block every ten minutes. Each block contains a timestamp, a nonce, a hash of the previous block and a list of all transactions that have taken place since the previous block.

Requirement for the "proof of work": The double SHA256 hash of every block - a 256-bit number - must be less than a dynamically adjusted target (e.g. 2 to the power of 187).

The miner of every block is entitled to include a transaction giving themselves 25 BTC out of nowhere.

A malicious attacker will target the order of transactions, which is not protected by cryptography.

The rule is that in a fork the longest blockchain prevails. In order for an attacker to make his blockchain the longest, he would need to have more computational power than the rest of the network (51% attack).

Merkle Trees
A Merkle tree is a type of binary tree in which each node is the hash of its two children, so hashes propagate upwards. This way, a client that downloads only the header of a block can tell whether the block has been tampered with.

A "simplified payment verification" protocol allows for light nodes to exist. They download only block headers and branches related to their transactions.

Alternative blockchain applications

Namecoin: A decentralised name registration database.
Colored coins and metacoins: A customised digital currency on top of BTC.

Basic scripting
UTXO in BTC can also be owned by a script expressed in a simple stack-based programming language. However, this language has some drawbacks:

- Lack of Turing completeness. Loops are not supported.
- Value-blindness: UTXO are all or nothing.
- No opportunity to consider multi-stage contracts.
- UTXOs are blockchain-blind.

Ethereum builds an alternative framework with a built-in Turing-complete programming language.

An Ethereum account contains four fields: the nonce (a counter that guarantees that each transaction can only be processed once), the ether balance, the contract code and the account's storage. A toy sketch of these fields follows.
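
A minimal sketch of those four fields as a data structure; the types and defaults are illustrative assumptions, not the actual encoding:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    nonce: int = 0                # counter so each transaction is processed once
    balance: int = 0              # ether balance (conventionally kept in wei)
    code: bytes = b""             # contract code; empty for externally owned accounts
    storage: dict[bytes, bytes] = field(default_factory=dict)  # account storage

alice = Account(balance=10**18)   # an externally owned account holding 1 ether
print(alice)
```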

Ether is the crypto-fuel of Ethereum. Externally owned accounts are controlled by private keys and contract accounts are controlled by their contract code.

Contracts are autonomous agents living inside the Ethereum execution environment. Contracts have the ability to send messages to other contracts. A message is a transaction produced by a contract, whereas a transaction refers to the signed data package that stores a message sent from an externally owned account. Each transaction sets a limit on how many computational steps of code execution it can use.

Ethereum is also based on blockchain. Ethereum blocks contain a copy of both the transaction and the most recent state.

Ethereum applications
Token systems, financial derivatives (financial contracts mostly require reference to an external price ticker), identity and reputation systems, decentralised file storage and decentralised autonomous organisations.

Other potential uses are saving wallets, a decentralised data feed, smart multisignature escrow, cloud computing, peer-to-peer gambling and prediction markets.

Blockchains with fast confirmation times suffer from reduced security due to blocks taking a long time to propagate through the network. Ethereum implements a simplified version of GHOST (Greedy Heaviest Observed Subtree) which only goes down seven levels.

Currency issuance
Ether is released in a currency sale at the price of 1000-2000 ether per BTC. Ether has an endowment pool and a permanently growing linear supply.

The linear supply reduces the risk of an excessive wealth concentration and gives users a fair chance to acquire ether.

BTC mining is no longer a decentralised and egalitarian task. It requires high investments. Most BTC miners rely on a centralised mining pool to provide block headers. Ethereum will use a mining algorithm where miners are required to fetch random data from the state. This white paper states that this model is untested.
Ethereum full nodes need to store just the state instead of the entire blockchain history. Every miner will be forced to be a full node, creating a lower bound on the number of full nodes, and an intermediate state tree root will be included in the blockchain after processing each transaction.

The question now is ... will it work?

Article 1
Technology: Banks Seek the Key to Blockchain by Jane Wild, Martin Arnold and Philip Stafford

This article on blockchain can be found here. The authors mention an internal blockchain implementation and remind us that a blockchain is a shared database technology that connects consumers and suppliers, creating online networks with no need for middlemen or a central authority. Applications are endless, and supporters claim that trust is created by the participating parties.

The authors of this article mention the use of blockchain, also named distributed ledger, as a back-office new implementation and even new governmental implementations such as land registers. 

There are two types of blockchains in terms of accessibility: invitation-only (private) and public (open). UBS and Microsoft are working with blockchain start-up Ethereum (running an open-source technology). Other banks are going the private blockchain way.

The authors of this article also mention that this technology faces key challenges such as robustness, security and regulation.

The ledger of BTC already weighs more than 45 GB.

Bits and pieces

Using Networks To Make Predictions - A lecture (3 of 3) by Mark Newman

For those willing to get introduced to the world of complex networks, the three lectures given by Mark Newman, a British physicist, at the Santa Fe Institute on 14, 15 and 16 September 2010 are a great way to get to know a little bit about this field.

The first lecture introduced the concept of networks. The second lecture talked about network characteristics (centrality, degree, transitivity, homophily and  modularity). Let's continue with the third lecture. You can find it here. This time on the impact of network science.

In this post I summarise (certainly in a very personal fashion, although some points are directly extracted from his slides) the learning points I extracted from the lecture.

Dynamics in networks
- For example, how does a rumor spread in a network?
- This aspect is much more controversial than the point touched in lectures 1 and 2.
- An example: Citation networks (e.g. the network of legal opinions or the network of scientific papers).
- "Price observed that the distribution of the number of citations a paper gets follows a power law or Pareto distribution - a fat-tailed distribution in which most papers get few citations and a few get many".
- This power law is somehow surprising.

Power laws
- In comparison to a normal distribution, a power law shows that there are some nodes with a number of links several orders of magnitude higher than the typical one. This does not happen with normal distributions.
- Examples of cases that follow a Pareto law (power law) are word counts in books, web hits, wealth distribution, family names, city populations, etc.
- Power law: the 80/20 rule. E.g. "the top 20% own 86% of the wealth, 10% of the cities have 60% of the people, 75% of people have surnames in the top 1%".
- Power laws are a heavily studied area in complex systems.

Where do power laws come from? Preferential attachment
- The importance of getting an early lead e.g. with an excellent product, or by good marketing.  
- A plausible theory is preferential attachment. Interestingly enough, this theory ignores the content of the papers; it only uses the number of links the nodes have (see the simulation sketch after this list).
- First mover advantage: In citations, if you are one of the first ones writing on a topic, your paper will be cited anyway, regardless of the content. They are the early lead in that specific field.
- How many links you get depends on how many you already have.
- In conclusion, it is much more effective, according to this theory, to write a mediocre paper on tomorrow's field rather than a superb paper in today's field.
- The long tail effect: a small number of nodes with lots of connections.
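
A minimal preferential-attachment simulation (a sketch of the Price / Barabasi-Albert idea, with invented parameters): each new node links to an existing node with probability proportional to that node's current degree, which produces a heavy-tailed degree distribution:

```python
import random

def preferential_attachment(n_nodes: int, seed: int = 42) -> list[int]:
    random.seed(seed)
    degrees = [1, 1]    # start from two linked nodes
    endpoints = [0, 1]  # every edge contributes both endpoints to this list
    for new in range(2, n_nodes):
        # choosing uniformly from `endpoints` picks nodes in proportion to degree
        target = random.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints += [new, target]
    return degrees

degrees = preferential_attachment(10_000)
print("max degree:", max(degrees), "| median degree:", sorted(degrees)[5_000])
```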

The spread of a disease over a network
- Percolation model: in a specific network, I colour some of the edges; those coloured edges form a different network derived from my initial one (see the percolation sketch after this list).
- How does the structure of the network influence the spread of a disease?
- Degree is the number of connections you have.
- Hubs are extremely effective at passing diseases along.
- What about if we vaccinate hubs? Targeted vaccination.
- Herd immunisation.
- Targeted attacks are much more effective (a clear link with information security).
- We can use the network itself to find the hubs.
- People who should be vaccinated are the most mentioned friends.
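
A hedged bond-percolation sketch, assuming the networkx library: keep ("colour") each edge independently with probability p and measure how much of the network still hangs together:

```python
import random
import networkx as nx  # assumption: networkx is installed

def largest_component_fraction(G: nx.Graph, p: float, seed: int = 0) -> float:
    """Fraction of nodes in the largest component after keeping each edge with prob. p."""
    random.seed(seed)
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from((u, v) for u, v in G.edges() if random.random() < p)
    return max(len(c) for c in nx.connected_components(H)) / G.number_of_nodes()

G = nx.barabasi_albert_graph(2_000, 2)  # a network with hubs
for p in (0.1, 0.3, 0.5, 0.9):
    print(p, round(largest_component_fraction(G, p), 3))
```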

Network robustness
- Can we tell that a network is robust by looking at its structure? Let's go back to the concept of homophily (mentioned in the second lecture of this series).
- Homophily by degree: Party people hanging out with party people (positive correlation coefficient in social networks- high degree nodes connect with high degree nodes).
- You get a very dense core and a sparse periphery. Social networks are thus very robust networks. This is exactly the opposite of what we would like in terms of disease spread.
- Social networks are very robust and easy to vaccinate against diseases.
- The Internet, however, is fragile. Its high-degree nodes connect with low-degree nodes: the densely connected nodes connect with scarcely connected ones, and the high-degree nodes are spread out all over the network. Such networks are not robust; they are fragile. If you knock down the high-degree nodes, you take down the network very quickly.
- Number of connections (x axis) is the degree.
- The crucial factor in the spread of disease is airplanes.

Future directions
- Great slide: this is a very, very new field. "We need to:
- Improve the measurement of networks.
- Understand how networks change over time.
- Understand how changing a network can change its performance, and perhaps improve it.
- Get better at predicting network phenomena.
- Predict how society will react or evolve based on social networks.
- Prevent disease outbreaks before they happen. 
- And..?"
- Sometimes you engineer a network and sometimes it works!
