Rare and Valuable Skills

In his book “So Good They Can’t Ignore You,” Cal Newport argues that becoming world class in a field requires dedicating more hours of deep work to the relevant skills than others do. Young people who believe they need only “follow their passion and they will find a job where they’ll never have to work a day in their life” get the causality exactly backwards. Newport’s message is that the successful among us have, at some point in their past, regularly engaged in deep work and developed rare and valuable skills, which they later leveraged to gain access to a career and life about which they are passionate.*

This has led me to think more about the deep work I want to do over the course of my career as a math and science teacher. This is what I have so far.

[Image: Rare and Valuable Skills]

Some related links:

RSA Animate version of Dan Pink’s “Drive” talk.

Cal Newport’s “Follow your passion is bad advice” talk.

Newport’s blog

 

Footnote:

* These passion-filled careers are, presumably, enjoyable precisely because the professionals engaged in them can compound the return on their skill sets by doing more of the deep work that helped them succeed in the first place. No wonder they feel a high degree of satisfaction, given the roles that autonomy, purpose, and mastery play in motivation.

Posted in Deep work

Christmas Fractal Competition Vote!

Your SAS Multivariable Calculus Class presents…
The Second Annual Christmas Tree Competition!

Vote for your favorite here!


Posted in Multivariable Calculus

An end and a new beginning…

Check out the Prezi for a summary of our two-week experiment.


You’ll find out what our students thought of the experiment. I hope you enjoyed it as much as we did and find their reflections helpful!

Posted in Advanced Chemistry, Multivariable Calculus

Chemistry of wet plate photography

Check out our Prezi on the chemistry of wet plate photography.


Posted in Advanced Chemistry

The power of aggregation – part 2

In part 1, I argued that many of the most compelling processes we observe in the world around us work according to a basic principle: aggregation. I will expand on that here with a few lines that attempt to capture the essence of all of these predictive technologies:

estimate_i = true + error_i

\lim_{N\rightarrow\infty}\frac{1}{N}\sum\limits_{i=1}^{N}estimate_i = true

If an input, estimate_i, can properly be considered an estimate of a true value, then the first equation simply states that each estimate (or guess) carries some error.

Furthermore, provided the errors are not all biased in the same direction, they tend to cancel when the estimates are combined; this is, of course, exactly what happens when we calculate a mean, which is what the second equation expresses. If each guess carries even a tiny amount of information about the true value, and we can gather enough guesses, their average will approach the true value. That is the power of aggregation at work.
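Here is a minimal sketch of the idea in Python (the true value and noise level are made up for illustration): each guess is the true value plus random error, and the mean of the guesses closes in on the truth as more guesses are aggregated.

import random

true_value = 1200   # an illustrative "true" quantity (say, an ox's weight in pounds)
guesses = [true_value + random.gauss(0, 150) for _ in range(10_000)]

# The mean of the first n guesses gets closer to the true value as n grows.
for n in (10, 100, 1_000, 10_000):
    mean = sum(guesses[:n]) / n
    print(f"mean of first {n:>6} guesses: {mean:8.1f} (error {mean - true_value:+.1f})")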

This profound result was described by Francis Galton, who called the phenomenon the “vox populi” (the voice of the people) in his seminal paper on the topic; today it is better known as the “wisdom of crowds.” When a group of people entered a contest to guess the weight of an ox at a county fair, any individual guess was, on average, far from the true weight, but the median (or mean) of all the guesses was off by only a fraction of a percent. (I first learned about this during the Radiolab episode on Emergence.) You can also check out a very interesting discussion about Galton’s choice between the median and the mean here. (And here is a more legible reprint of two of his most famous papers on this topic with a foreword.)
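A quick illustration (with made-up guesses) of why the choice between median and mean matters: a single wild guess drags the mean far from the truth, while the median barely moves.

from statistics import mean, median

# Hypothetical weight guesses in pounds; one joker guesses 10,000.
guesses = [1150, 1190, 1205, 1210, 1230, 1260, 10_000]

print(f"mean:   {mean(guesses):,.1f}")    # pulled far off by the single outlier
print(f"median: {median(guesses):,.1f}")  # barely affected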

So, “what is the big deal?” you ask.

The big deal is that in any information-rich environment, we can, on average, count on some of that information being passed along with each educated guess. If I go to Google and type a search query, then when I click on one of the links I have voted: I have entered my “estimate” of the “true” best link for that query. Aggregated over all time and all users, the most-clicked link becomes a very good estimate of the true best link, because the aggregation cancels out erroneous clicks.
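A toy version of that click-vote aggregation (hypothetical links and clicks; real search ranking is far more involved):

from collections import Counter

# Each click is a noisy "vote" for the best link for a given query (made-up data).
clicks = ["link_A", "link_B", "link_A", "link_C", "link_A", "link_B", "link_A"]

votes = Counter(clicks)
best_link, count = votes.most_common(1)[0]
print(f"best estimate of the 'true' link: {best_link} ({count} of {len(clicks)} clicks)")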

Similarly, we can extend this analysis to Zacks’ website (where company earnings estimates are aggregated and the average is used) and to FiveThirtyEight (where polling information is aggregated and reported instead of relying on any one poll). It extends to human problem solving (individual neural networks within a brain contribute to the aggregates we call the “subconscious” and “conscious” minds, and dreaming apparently plays an integral role in that aggregation), to science (both observations and hypotheses are added to an aggregated whole), to Adam Smith’s invisible hand (markets self-regulate to produce a pencil even though no one individual can), and to evolution itself (the unit of replication – perhaps the individual gene – replicates a possibly mutated variant of itself and offers this as an “estimate” of the fittest gene; the pool of next-generation guesses is aggregated, and from this pool a better alternative emerges over many cycles of guessing and aggregation). Nor does the power of aggregation operate on just one level: within individuals, other aggregation mechanisms can be found (organisms with a double-helical genome carry two copies of each gene – another summation – rather than one), and at the level of populations, group selection is making a resurgence after years of being shunned in favor of the gene as the sole unit of selection.
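As a toy illustration of that last point, here is a minimal guess-mutate-select loop in Python (not a model of real genetics; the target value, mutation size, and population size are arbitrary): each generation proposes mutated “estimates” of the fittest trait, and selection aggregates the pool down to the best candidates.

import random

target = 0.73                         # hypothetical "fittest" trait value
population = [random.random() for _ in range(50)]

def fitness(trait):
    # Fitness is higher the closer a trait is to the target.
    return -abs(trait - target)

for generation in range(20):
    # Each survivor proposes mutated "estimates" of the fittest trait...
    offspring = [t + random.gauss(0, 0.05) for t in population for _ in range(2)]
    # ...and selection aggregates the pool, keeping only the best guesses.
    population = sorted(offspring, key=fitness, reverse=True)[:50]

best = max(population, key=fitness)
print(f"best trait after 20 generations: {best:.3f} (target {target})")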

In the next posts, I hope to relate where all this aggregation might be going with respect to predicting the future. I will also share a project I worked on that relies on this idea to calculate the reflection of light in curved surfaces.

Posted in Advanced Chemistry, Multivariable Calculus

The power of aggregation – part 1

This is the information age, and sitting at the heart of the information expansion that has defined this age is a simple idea: aggregation.

Swimming in all this information, will humans ever be able to reliably predict the future? And if we ever are able, will the models that we use to accomplish the feat be comprehensible?

To understand these big questions, I think it makes sense to first understand a set of smaller ones. How do we predict anything? What tools do we have for prediction? What are the most effective tools? How do they work?

Starting with the last of those questions, here is a list of technologies (I use this term to mean any package of algorithms that takes questions or problems of some form as inputs and provides solutions as outputs) that predict something:

1. Fivethirtyeight (problem: “Who will win the next election?”)
2. Google (problem: “What was that thing …?”)
3. Zacks (problem: “How can I buy an undervalued stock?”)
4. Watson (problem: “Classic candy bar that’s a female Supreme Court justice”)
5. Eureqa (formulize) (problem: “What are the important relationships between these variables?”)
6. Problem-solving (problem: “What is imperative about my observation(s)?”)
7. Markets (problem: “How much should I pay for jelly beans?”)
8. Francis Galton’s “Wisdom of the crowds” (problem: “How many jelly beans are in that jar?”)
9. Science (problem: “Hmmm, that’s interesting; how does that work?”)
10. Evolution (problem: “How does organized complexity arise in the universe?”)
11. Wikipedia (problem: “How can you develop and disseminate accurate educational content about anything?”)

In a future set of posts in this series, you can learn more about how each of these technologies relies on a simple concept – aggregation.

__________
Answers:
1. Obama (always)
2. Probably within six degrees of Kevin Bacon.
3. If it exists, then you might find it here. Otherwise, see #7.
4. “What is Baby Ruth Ginsburg?”
5. \sum_i{F_i} = ma, for example
6. Experiences are encoded in neuronal structures distributed throughout the brain, and these are reinforced over time based on feedback and learning circuitry. Aggregating information and selectively reinforcing it (through selective myelination) is likely central to the story of human problem solving.
7. By aggregating the information of many buyers and sellers, each with incomplete information, the market converges, on average, to the best estimate of an asset’s value. This idea is called the efficient markets hypothesis.
8. The average of a crowd’s guesses can be counterintuitively accurate.
9. Experiments – broadly defined – that allow you to decide between two competing hypotheses, judged by the goodness of each one’s predictions and by how universally each applies, are, over time, the only way to add to knowledge.
10. A genetic system (and its genetic information) is selected if it is relatively more effective, with respect to the selection environment, at replicating itself than the other organisms in its neighborhood.
11. You can probably find the answer on Wikipedia.

Posted in Advanced Chemistry, Multivariable Calculus

Engineering scientific discovery

I was recently listening to archived Radiolab episodes, and came across this story about Eureqa, a computer algorithm that looks for patterns in data sets.

The amazing thing about this story is that the algorithm finds correlations and relationships within datasets that are remarkably accurate and even predictive. These finds have all the hallmarks of scientific breakthroughs: the program could by rights be listed as a coauthor on this scientific paper, and it has even recreated seminal discoveries (like Newton’s Second Law!). Yet the “laws” Eureqa finds may be beyond our capacity to understand.

But, that is for you to decide! You can enter data into this algorithm from the comfort of your own laptop and see what relationships pop out. Perhaps you (with the help of Eureqa) will identify the meaning behind a new relationship and will make the next great discovery.

Try it out here! It is a great way to uncover relationships in your own datasets.

Plotting walkingbaron’s temperature data as a function of his room’s x–y coordinates, we get the following:

[Plot: temperature as a function of the room’s x–y coordinates]

Eureqa finds the following best fit relationship:

T = 12.6008 + 1.18541y + 0.591413x + 0.00272746xy^2 + 0.00170691y^3 - 0.0132424x^2 - 0.0701875xy - 0.0784782y^2

With the fit:

[Plot: Eureqa’s fit overlaid on the temperature data]

Not bad! I would feel a bit anxious about extrapolating this fit very far in any direction, and there is no clear physical justification for needing so many parameters in the model, beyond the fact that the coefficients represent the partial derivatives of the temperature surface, and therefore the coefficients of a two-dimensional Taylor series expansion of the temperature profile in the neighborhood of the point (0, 0, 12.6). Still, I am new to the program, and I am impressed!
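To sanity-check a fit like this, it helps to evaluate the expression yourself. Here is a minimal sketch in Python using the coefficients reported above (specific to walkingbaron’s dataset); note that the constant and linear terms match the value and first partial derivatives of the surface at the origin.

def temperature(x, y):
    """Eureqa's fitted temperature surface (coefficients copied from the fit above)."""
    return (12.6008 + 1.18541*y + 0.591413*x
            + 0.00272746*x*y**2 + 0.00170691*y**3
            - 0.0132424*x**2 - 0.0701875*x*y - 0.0784782*y**2)

# T(0, 0) = 12.6008, dT/dx(0, 0) = 0.591413, dT/dy(0, 0) = 1.18541,
# exactly the leading terms of a 2-D Taylor expansion about the origin.
print(temperature(0, 0))
print(temperature(1, 2))   # an illustrative point; extrapolate with caution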

Posted in Advanced Chemistry, Multivariable Calculus, Steven Strogatz