This is the information age, and sitting at the heart of the information expansion that has defined this age is a simple idea: aggregation.
Swimming in all this information, will humans ever be able to reliably predict the future? And if we ever are able, will the models that we use to accomplish the feat be comprehensible?
To understand these big questions, I think it makes sense to first understand a set of smaller ones. How do we predict anything? What tools do we have for prediction? What are the most effective tools? How do they work?
Starting with the last set of questions first, here is a list of technologies (I use this term to mean any package of algorithms that take questions or problems of some form as inputs and provide solutions as outputs) that predict something:
1. FiveThirtyEight (problem: “Who will win the next election?”)
2. Google (problem: “What was that thing …?”)
3. Zacks (problem: “How can I buy an undervalued stock?”)
4. Watson (problem: “Classic candy bar that’s a female Supreme Court justice”)
5. Eureqa (formulize) (problem: “What are the important relationships between these variables?”)
6. Problem-solving (problem: “What is important about my observation(s)?”)
7. Markets (problem: “How much should I pay for jelly beans?”)
8. Francis Galton’s “Wisdom of the crowds” (problem: “How many jelly beans are in that jar?”)
9. Science (problem: “Hmmm, that’s interesting; how does that work?”)
10. Evolution (problem: “How does organized complexity arise in the universe?”)
11. Wikipedia (problem: “How can you develop and disseminate accurate educational content about anything?”)
In a future set of posts in this series, you can learn more about how each of these technologies relies on a simple concept: aggregation. For now, here are quick answers to each of the problems above:
1. Obama (always)
2. Probably within six degrees of Kevin Bacon.
3. If it exists, then you might find it here. Otherwise, see #7.
4. “What is Baby Ruth Ginsburg?”
5. , for example
6. Experiences are encoded in neuronal structures distributed throughout the brain, and these are reinforced over time by feedback and learning circuitry. Aggregating information and selectively reinforcing it (through selective myelination) is likely central to the story of human problem-solving.
7. Markets aggregate the information of buyers and sellers, each of whom has incomplete information; on average, prices converge to the best estimate of an asset’s value. This idea is called the efficient-markets hypothesis.
8. The average of a crowd’s guess can be counter-intuitively accurate.
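The crowd-averaging effect is easy to demonstrate. Below is a minimal simulation (not from the original post) that assumes each guesser’s error is independent and unbiased; under those assumptions, the averaged guess lands much closer to the truth than a typical individual guess does.

```python
import random

def crowd_estimate(true_value, n_guessers, noise_sd, seed=0):
    """Simulate independent, unbiased noisy guesses.

    Returns (mean of all guesses, average error of an individual guess).
    """
    rng = random.Random(seed)
    guesses = [true_value + rng.gauss(0, noise_sd) for _ in range(n_guessers)]
    mean_guess = sum(guesses) / len(guesses)
    mean_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)
    return mean_guess, mean_individual_error

# 1,000 people guess the number of jelly beans in a jar holding 850.
mean_guess, individual_error = crowd_estimate(true_value=850, n_guessers=1000, noise_sd=200)
crowd_error = abs(mean_guess - 850)
print(f"crowd error: {crowd_error:.1f}, average individual error: {individual_error:.1f}")
```

The independence assumption is the crux: if guessers copy one another, their errors correlate and the averaging benefit largely disappears.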
9. Experiments, broadly defined, let you decide between competing hypotheses by comparing the accuracy of their predictions and how universally they apply; over time, this is the only way to add to knowledge.
10. A genetic system (and its genetic information) is selected if it replicates itself more effectively, with respect to the selection environment, than the other organisms in its neighborhood.
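That selection dynamic can be sketched in a few lines. This toy model (my own illustration, not from the post) assumes just two variants whose fitness values set their relative replication rates; each generation the population is resampled in proportion to fitness, and the fitter variant comes to dominate.

```python
import random

def select(population, fitness, generations, seed=0):
    """Resample the population each generation, weighting each
    replicator by its fitness (its relative replication rate)."""
    rng = random.Random(seed)
    pop = list(population)
    for _ in range(generations):
        weights = [fitness[variant] for variant in pop]
        pop = rng.choices(pop, weights=weights, k=len(pop))
    return pop

# Two variants: "B" replicates 10% more effectively than "A".
start = ["A"] * 500 + ["B"] * 500
end = select(start, fitness={"A": 1.0, "B": 1.1}, generations=100)
print("fraction B after selection:", end.count("B") / len(end))
```

Even a modest fitness edge, compounded over generations, drives the advantaged variant toward fixation; selection is aggregation of replication outcomes over time.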
11. You can probably find the answer on Wikipedia.