In part 1, I argued that many of the most compelling processes that we observe in the world around us work according to a basic principle associated with aggregation. I will expand on this by writing a few lines that attempt to capture the essence of all of these predictive technologies:

If an input, $\hat{x}_i$, can properly be considered an estimate of a true value $x$, the first equation states that each estimate (or guess) has some error $\epsilon_i$ associated with it:

$$\hat{x}_i = x + \epsilon_i$$

Furthermore, the errors tend to cancel out when all the estimates are combined. This is of course exactly what happens when we calculate a mean:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}\hat{x}_i = x + \frac{1}{n}\sum_{i=1}^{n}\epsilon_i$$

As long as each guess carries even the tiniest bit of information about the true value (so the errors do not all lean the same way), the error term shrinks as $n$ grows, and with enough guesses this aggregation approach takes us toward the true value!
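This error-cancelling behavior is easy to see in a few lines of code. The sketch below is purely illustrative (the function names, the noise level, and the example "true value" are my own choices, not anything from a real dataset): each guess is the true value plus random noise, and averaging more guesses pulls the aggregate closer to the truth.

```python
import random

def noisy_estimate(true_value, noise_sd=10.0):
    """One 'guess': the true value plus a random error term."""
    return true_value + random.gauss(0, noise_sd)

def aggregate(true_value, n_guesses):
    """Average n independent noisy guesses of true_value."""
    guesses = [noisy_estimate(true_value) for _ in range(n_guesses)]
    return sum(guesses) / len(guesses)

random.seed(0)
true_value = 1198.0  # a hypothetical ox weight, in pounds
for n in (1, 100, 10_000):
    print(n, round(aggregate(true_value, n), 2))
```

A single guess can miss by many pounds, but the average of thousands of guesses typically lands within a fraction of a pound: the error of the mean shrinks roughly like $1/\sqrt{n}$.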

This profound result was famously demonstrated by Francis Galton, who described the "wisdom of the crowd" under the name "vox populi" (the voice of the people) in his seminal paper on the topic. When a group of people entered a contest to guess the weight of an ox at a county fair, an arbitrary individual guess was, on average, far from the true weight, but the median (or mean) guess was off by only a fraction of a percent. (I first learned about this from the Radiolab episode on Emergence.) You can also check out a very interesting discussion of Galton's choice between the median and the mean here. (And here is a more legible reprint of two of his most famous papers on this topic, with a foreword.)

**So, "what is the big deal?" you ask**

The big deal is that in any information-rich environment, we can, on average, count on some of that information being passed along with each educated guess. If I go to Google and type a search, then clicking one of the links casts my vote: my "estimate" of the "true" best link for that query. Aggregated over all users and all time, the top link becomes a very good estimate of the best link, because aggregation cancels out the erroneous clicks.

Similarly, we can extend this analysis to Zacks' website (where company earnings estimates are aggregated and the average is reported) and FiveThirtyEight (where polling information is aggregated rather than relying on any one poll). The same pattern appears in:

- human problem solving (individual neural networks within a brain contribute to the aggregates we call the "subconscious" and "conscious" minds, and dreaming apparently plays an integral role in that aggregation);
- science (both observations and hypotheses are added to an aggregated whole);
- Adam Smith's invisible hand (markets self-regulate to produce a pencil even though no one, individually, can);
- evolution itself (the unit of replication, perhaps the individual gene, replicates a possibly mutated variant of itself and proffers it as an "estimate" of the fittest gene; the pool of next-generation guesses is aggregated, and from this group a better alternative emerges over repeated cycles of guessing and aggregation).

Nor does the power of aggregation operate on just one level. Within individuals, diploid organisms carry two copies of each gene, another summation, instead of just one; and in populations, group selection is making a resurgence where before it was shunned in favor of the gene as the sole unit of selection.

In the next posts, I hope to explore where all this aggregation might be headed with respect to predicting the future. I will also share a project I worked on that relies on this idea to calculate the reflection of light off curved surfaces.
