There are approximately 20,000 math learning apps available on the App Store (classified as such by their creators). Google Play does not provide the corresponding figure for Android apps, but presumably there are a lot there as well.
Most of those apps do little more than provide repetitive practice of very basic skills, primarily about numbers. They are essentially just animated flash cards.
How can a parent, or a teacher, decide which apps are likely to benefit their child, or their students? I’ll come back to that later.
First, let me say that there is not necessarily anything wrong with an app that is essentially just an animated flash card – unless parents buy them (or just download them, as the majority are free) thinking that putting them on their children’s iPad or whatever is all they need to do to improve their performance in math.
In the days when the gateway to mathematics, and indeed much of everyday life, lay in mastering the multiplication tables and memorizing a few formulas for calculating areas and volumes, mastery of the basic number facts was indeed enough to start with. So it’s a pity those fun learning apps were not available back then. They would have made the acquisition of those fundamental facts and skills so much easier and far more enjoyable.
Unfortunately, the very digital technologies that have put those learning apps into eager young hands have also provided tools that have rendered procedural mastery of those basic skills all but irrelevant.
In today’s world, we use cheap, ubiquitous devices to do our calculations. It’s no longer important that all members of society have procedural mastery of basic arithmetic. What is required is the ability to make effective use of those digital devices, and what that depends upon is a good understanding of number – what is often referred to as number sense.
Roughly speaking, having number sense means being proficient with quantities and operations with numbers. A person with number sense is able to represent number concepts with models, words, and diagrams, to communicate numerical ideas, and to solve problems involving numbers. They can flexibly compose and decompose numbers for computation and problem solving. They can evaluate the reasonableness of solutions to numerical problems, and make connections between multiple solution methods. They can communicate their number sense verbally and in writing. They notice and explore number patterns, make connections and conjectures, and communicate their thinking to others. Number sense goes beyond solving word problems and memorizing basic facts and procedures. It involves engaging with numbers and operations in ways that develop a deep understanding of the content, which provides a firm foundation for mathematical success. In particular, a strong background in number sense sets the stage for later success in algebra and other parts of mathematics.
If that last paragraph sounds like something that emerged from a committee of mathematics education experts, it is because in essence it did. You find language like that in the National Research Council’s 2001 report Adding It Up (which you can download for free from the National Academies Press) and in the preamble to the Common Core State Standards for Mathematics, which emphasize the development of number sense in young children.
For sure, you cannot have number sense without being able to solve an arithmetic problem and get the right answer. What has changed is that it is no longer important to solve that problem by the fastest method, or by a standard method that leaves a paper audit trail that others can check. Our calculating devices do those for us.
Much more important in today’s world is to be able to reason about the numbers in a problem from first principles, in a way that embodies the internal structure of the numbers. For as humans, we need to be able to operate when and where the calculator cannot: namely, when we are faced with a novel problem the real world has thrown at us.
It was a lack of recognition that the world has changed fundamentally that led the subsequently-Internet-famous “Jack’s Dad” to pen his satirical “letter to his son” that went viral on social media earlier this year. (See the next link below.)
Actually, Jack’s dad is an electronics engineer, so he was certainly aware of how much today’s world was different from the days of his own childhood. Unfortunately, as someone outside the world of education, he had just not connected the dots to understand what changes in education were required in order to properly prepare today’s kids to live, not just in our present world, but in the world they will help shape from it.
One of the best summaries of the issues behind that social media firestorm that I came across was the April 6 response to Jack’s Dad written by the math education blogger Christopher Danielson.
Danielson’s observations about different kinds of expertise rang very true to me. Having devoted the first part of my mathematics career to mathematical research, it was my appointment to serve on the Mathematical Sciences Education Board in 2000, and the close contact with leading experts in mathematics teaching that resulted, that brought home to me just how little I knew about how people learn mathematics, and how (consequently) we should teach it.
Put plainly, having a PhD in mathematics and a string of published research is absolutely nothing like enough background to speak with authority about K-12 mathematics learning. People like me can provide good advice on mathematical content; but not on mathematics teaching. That requires different knowledge and expertise.
My own university, Stanford, famous for its very high standards in research, apparently recognizes this when it comes to hiring new faculty in Education. While I cannot speak with authority for the School of Education’s policies, I have observed that no one gets appointed to the faculty who has not spent several years in K-12 teaching. (In addition to having done and published first class research!) Whether or not K-12 experience is official hiring policy, it certainly plays out that way, and it seems to me to be a sensible criterion to demand.
Going back to the standard algorithms and Jack’s Dad: a few months after his first post, on October 8, Danielson posted another excellent blog piece on the degree to which the position occupied by the standard arithmetic algorithms (in actual fact, there are many variations, so there is no such thing as “the standard algorithms”) has changed in the educational landscape – from being the main focus, as a method for daily use, to an interesting and historically important example of a set of highly efficient paper-and-pencil algorithms that quite literally changed the world. Their significance was a consequence of the dominant information storage and communication technology of the time: flat, static writing surfaces such as parchment, blackboards, and paper. (I describe that story in my book The Man of Numbers.)
I will note, in passing, that Danielson’s October post indicates that some math learning apps may in fact do harm to a child’s mathematics learning, an observation that should be coupled with my earlier remarks about choosing basic skills educational apps.
What put these thoughts onto my front burner recently were some discussions I was having with members of the Scientific Advisory Board for my educational technology startup company BrainQuake.
If you check out our company’s Team page, you will find we have recruited a number of world-renowned experts in mathematics education. Now you may think they are just there for marketing purposes – website name dropping. But you would be wrong. Each one is there because they bring very valuable, very specific expertise to the table.
To someone not an expert in mathematics learning, the arithmetic puzzles in our launch app, Wuzzit Trouble, may look as though they are just a series of problems we generated in an essentially random fashion, following the simple rule that the numbers should get "harder" the further a player goes in the game. But that is not the case. In a mathematics learning game, the mathematics ramp is just as critical as the level design of the game, and both require a lot of expertise to get right.
(Interestingly, another name on our website, John Romero, is a world expert in level design – the ramping in game-play – but he joined forces with us only after we had brought out Wuzzit Trouble, so you will only see the results of his genius in future products we bring out.)
Which brings me back to my promise to provide advice on how to select good learning apps. It’s probably not a foolproof method, but a quick and easy way is to check out the website of the creators, and see who they have advising them on the learning side.
There is always the danger that some of the names are there for little more than window dressing, but the majority of education experts (indeed, experts in any domain) are not likely to lend their name to an enterprise they do not believe in. So the presence of names of distinguished mathematics educators should give you a lot of confidence in the product.
More to the point, the absence of such names should be taken as a serious warning. Quite frankly, it is not possible to design and build an educationally sound and effective learning app without a lot of expert input.
And I mean a lot of expert input. I bring years of my own expertise to BrainQuake, but Wuzzit Trouble would not have been anything like as educationally successful as it has been if it had just been me on the mathematics side.
There is your quick-and-easy quality check. If you use it, you will find that list of 20,000 apps suddenly shrinks down to a significantly smaller number. Fortunately, that number is not zero. There are some great math learning apps out there. You just have to choose wisely.
Wednesday, November 5, 2014
Against Answer Getting
"Correct answers are essential... but they're part of the process, they're not the product. The product is the math the kids walk away with in their heads." —Phil Daro
If you have not already watched Phil Daro's 17-minute video Against Answer Getting, you should do so right away. (I'll keep this post short to give you enough time to watch it in its entirety.)
Daro, a longtime mathematics educator and leading figure in the national mathematics education community, is currently director of the San Francisco field site of SERP, the Strategic Education Research Partnership. He was one of the mathematics educators who played a leading role in the formulation of the mathematics Common Core State Standards. (You know, one of those knowledgeable experts the StopCommonCore brigade keep claiming were not involved in CCSS development.)
The video is full of powerful insights that the mathematics education community has accumulated over many years of research. My opening quote sums up the focus of the video. Here is another one I like:
"Mathematics does not break down into lesson-sized pieces." —Phil Daro
This particular quote resonates with me. I adopted the same principle in the design of my MOOC Introduction to Mathematical Thinking, currently about halfway through its fifth run.
Daro's focus, both in the video and in his work in general, is K-12 mathematics education. But it is very relevant to those of us in college-level mathematics education. When students come to college with a perception that mathematics is about "answer getting," we face the very uphill task of ridding them of that misleading mindset.
True, for hundreds of years, getting answers was a key component of learning and doing mathematics. But these days, if we want answers in mathematics, we generally use one of a number of digital technologies. The job of today's mathematician (or typical user of mathematics) is problem solving. The part that requires a human mind is when the problem has a novel aspect. It was precisely to put the focus on the thinking part that I named my MOOC the way I did.
The principal requirement for being able to solve a novel problem is conceptual understanding. That is why the issues Daro raises in that video are so central to the mathematics education of the citizens of tomorrow.
The outdated mindset about the purpose of mathematics that many students bring with them when they transition from school to college is not the only problem many have to overcome. A parallel issue manifests itself when they start to learn about mathematical proofs (if they follow the mathematics path).
My MOOC students are currently right in the middle of that part of the course (proofs), and many are having a very hard time coming to understand what role proofs play and what (therefore) constitutes a good proof.
The dominant perception is that proofs are what mathematicians produce in order to determine mathematical truth. That, of course, is true (at least in an idealistic sense that guides mathematical progress), but as with arithmetic answer getting, it is only part of the story. And in terms of actual mathematical practice, a very small part of the story.
As with answer getting in K-12 math, achieving a logically correct proof is a binary target (right or wrong), which makes both very easy to evaluate for correctness and to assign a numerical grade. (Ka-ching!)
But let's pause and ask ourselves how proofs work in practice. If you want to know if Fermat's Last Theorem is true, you consult a reliable source. Today, any moderately knowledgeable mathematician will tell you the answer: "Yes." Now you know.
But what if you want to know why it is true? That's when you need to look at a proof.
In terms of mathematical practice, proofs are about understanding. They are communicative devices we construct to convince ourselves and to convince others.
In my MOOC, because I cannot assume the students have access to individualized, expert feedback on their work, I do not ask them to construct proofs. But I do present them with a range of purported proofs, some correct, others not, and ask them to evaluate them. The evaluation is in terms both of logical correctness and communicative effectiveness.
To do this, I ask them to look at each purported proof in terms of five different factors: one focusing on logical correctness, the others on communicative issues. Though the five factors are not independent variables, I ask them to treat them as such when evaluating a proof.
This is the part of the course where those students who have had some exposure to proofs in the K-12 system tend to do worse than those who are new to proofs. They are simply not able to approach a proof other than in the "answer getting" mode of "Is it logically correct?"
This shows up dramatically with extremal cases. When I present them with a carefully constructed argument that is logically correct but provides no explanation, they will give it high marks across the board. But faced with an argument that is superbly articulated but has a logical flaw, they are psychologically unable to evaluate the structure of the argument. "It's wrong," they keep saying. End of story (for them).
Of course, extremal examples are atypical, and often difficult to wrap our minds around. That's what makes them so valuable as learning devices. It's when the classroom rubber hits the road and we find ourselves using mathematical thinking in our lives or careers that it becomes important to have good communication skills.
Pick up a more advanced level mathematics book or research article and the chances are high that the arguments presented will contain errors. (Actually, the book does not have to be advanced. Euclid's Elements is littered with "proofs" that are not logically sound.)
But if the arguments are well laid out, with adequate explanations, a suitably skilled reader can fix them as they go along—possibly with help from someone else. (That's definitely the case with Elements, though it took two thousand years before David Hilbert noticed that Euclid's own arguments left a lot of work to be done to make them genuine "proofs.")
It's the same in software engineering. Any useful program will have bugs. But if the code is well structured, and adequately annotated, someone else can dive in and fix it whenever a flaw manifests. A good computer programmer is not someone who writes error-free, working code; it is someone who writes working code that can easily be fixed or modified.
I'll leave it as an exercise for the reader to identify the analogous issue in the natural sciences.
If those of us in the education business want to do the best we can to prepare our students for life in the 21st century, we need to recognize that in an era when technologies provide instant answers (facts), the one ability they will need above anything else is (creative, reflective) thinking.
Tuesday, October 7, 2014
The Straw Teacher
When people argue for a position they hold because of political bias or some deep-rooted sense of conviction (as opposed to one arrived at by a process of reflection, weighing all sides of the issue), they often resort to straw-man tactics. This is particularly common in the U.S. Math Wars, which these days are largely focused on the Common Core State Standards for mathematics.
A particularly popular straw man – more precisely, a "straw teacher" (a term that nicely gets us out of gender issues) – is a math teacher who spends class time exclusively discussing mathematics concepts (whatever that means) and pays no attention to helping the students master any procedures.
I guess there may be such a teacher, somewhere, but I have to confess I have yet to meet one. Ditto for the straw teacher who says getting the right answer (if there is one) is not important. Teachers just don't do either of those.
My colleagues who work in classroom teacher preparation do tell me that many math teachers do little else than drill on procedures (in some cases because they never set out to teach math, and don't really understand the concepts themselves), but in my walk of life I never meet them. I see the ones who became math teachers because they love mathematics and want to teach, and attend mathematics teacher conferences to exchange ideas and to learn more – which is where I meet them.
Anyone who has a working knowledge of (1) what mathematics (really) is and (2) how the brain works knows that learning math in a useful way requires both mastery of a set of basic procedures and conceptual understanding of the mathematical notions those procedures are built on.
In practical terms, you need to master basic procedures in order to develop conceptual understanding, and you need conceptual understanding in order to avoid any procedural mastery being brittle and short-lived.
So good math teaching involves both. And, for the record (yet again), both are called for in the CCSS.
Absent the CC connection, I've written about this issue on a number of occasions before in this column. For instance:
March 2006: How do we learn math?
September 2007: What is conceptual understanding?
Both articles were written long before the Common Core was developed. They were also written when I was just starting to become more actively involved in K-12 education issues. (And before I inadvertently ignited the "repeated addition" firestorm in the summer of 2008.) But having just re-read them for the first time in many years, I still stand by what I wrote. So I won't repeat myself here.
Tuesday, September 2, 2014
Will the Real Geometry of Nature Please Stand Up?
Is fractal geometry “the geometry of nature”? I was asked this question recently in an email from someone who had watched the PBS video Hunting the Hidden Dimension that I worked on, and appeared in, a few years ago.
It would have been easy to simply reply “Yes,” and for many audiences I would (and have) done just that—for this was by no means the first time I had been asked that question, or others very much like it. But the context in which this recent questioner raised the issue merited a less superficial response. So I wrote back to say that there is no such thing as the geometry of nature, or more generally, the mathematics of W, where W is some real world domain.
The strongest claim that can be made is something along the lines of “Mathematical theory T is the best mathematical description (or model) we currently have of the real world domain (or phenomenon) W.” But even then, this statement is less definitive than it might first appear: In particular, what do we mean by “best”?
Best in terms of understanding? (If so, then understanding by whom?)
Best in terms of building something in W? (If so, then building out of what, using what tools, and for what use?)
Best in terms of teaching someone about W? (If so, then teaching what kind of person in terms of age, background, education, motivation, etc.?)
Slightly edited and extended, the next few paragraphs are what I wrote back to my correspondent:
Nature is just what it is. Mathematics provides various ways to model our perception and experience of reality. Different parts of mathematics provide different models, some better than others. Fractal geometry provides one model that seems to accord with our observations, measurements, and experiences. But so too do the cellular automata models on which Steve Wolfram bases his “New Kind of Science.”
Many of us think fractal geometry does a better job than cellular automata of helping us understand the natural world, but that judgment reflects an assumed patterns-and-relationships conception of what constitutes science.
I would prefer to call Wolfram’s framework a computational theory (of the world), rather than science. But the distinction is, I think, purely one of the meaning we attach to the relevant words (particularly “science”).
Both approaches can be said to begin by looking at how nature works, but the moment you start to create a model, you leave nature and are into the realm of human theorizing. From then on, the only available metrics are (1) degree of fit to observations and measurements, (2) degree of utility, and (3) degree to which we find the model’s assumptions reasonable.
There is lots of slack here.
In (1), what are we observing and measuring? (They are often entities created by those very mathematical theories, e.g. mass, length, volume, velocity, momentum, temperature, etc.)
In (2), how do we define utility? Doing stuff, building stuff, understanding stuff, teaching stuff, or something else? (Each with the various audience/use/purpose caveats I raised earlier.)
Then there is (3). Unless we make some initial assumptions, we cannot get a theory off the ground. And make no mistake about it, we do begin with assumptions. Not arbitrary ones, to be sure—not even close to being arbitrary. For the resulting theory to be fully accepted (as a plausible explanation or model), it has to accord with any and all the available facts, and it has to be falsifiable—it should make claims or imply conclusions that we can attempt to prove wrong.
For instance, a mathematical theory that implied 3 = 4 (as an identity of integers) would be immediately rejected.
What about a theory that implies 0.999… = 1.0, where those three dots indicate that the decimal series continues for ever? According to the widely accepted, standard definitions that mathematicians use to provide meaning to the concept of an infinite sequence of decimal digits, this identity is correct. Indeed, it can be proved to be correct, starting from the reasonable, plausible, and accepted basic principles (axioms) for the real number system.
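For readers who want to see roughly what such a proof rests on, here is a minimal sketch, assuming only the standard definition of an infinite decimal as the limit of its finite truncations (which is itself the key assumption in play):
\[ 0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}} \;=\; \lim_{N\to\infty}\left(1 - \frac{1}{10^{N}}\right) \;=\; 1 \]
The familiar classroom trick of setting x = 0.999…, multiplying by 10, and subtracting gives the same answer, but it quietly assumes that such manipulations of infinite decimals are legitimate—which is exactly the kind of hidden assumption at issue here.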
Most university math students learn about the framework within which 0.999… is indeed equal to 1.0. (Though many of the popular “proofs” you come across are not rigorous.) As a result, many mathematically educated people will state, as if it were an absolute fact of the world, that 0.999… = 1.0. But that is not true. The identity holds because we have made some assumptions about how to handle infinity. It’s easy to overlook that fact. So let me provide a further example where it may be less easy to miss an underlying assumption.
Graduate students of mathematics are introduced to further assumptions (about handling the infinite, and various other issues), equally reasonable and useful, and in accord both with our everyday intuitions (insofar as they are relevant) and with the rest of mainstream mathematics. And on the basis of those assumptions, you can prove that
1 + 2 + 3 + … = –1/12.
That’s right, the sum of all the natural numbers equals –1/12.
This result is so in-your-face that people whose mathematics education stopped at the undergraduate level (if they got that far) typically say it is wrong. It’s not. Just as with the 0.999… example, where we had to construct a proper meaning for an infinite decimal expansion before we could determine what its value is, so too we have to define what that infinite sum means.
It turns out that there is a well-developed part of mathematics, called analytic continuation, that provides us with a “natural” meaning for (in particular) that sum. And when we calculate the value using that meaning, we arrive at the answer –1/12. See this Wikipedia article for a brief account.
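In outline (a sketch in standard notation, not tied to that particular account): the Riemann zeta function is defined by a series only where that series converges, and is then extended to a much larger domain by analytic continuation; the value –1/12 is the value of the continued function at s = –1, not an ordinary sum of the divergent series:
\[ \zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}(s) > 1), \qquad \zeta(-1) \;=\; -\tfrac{1}{12} \]
So the identity 1 + 2 + 3 + … = –1/12 is shorthand for a statement about that continuation, made under those additional assumptions.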
Incidentally, just as with the 0.999… example, you will find purported “intuitive proofs” floating around, among them this video that went viral earlier this year, but those arguments too are not rigorous.
Both frameworks, the one that yields a value for 0.999… and the one that produces a value for 1 + 2 + 3 + … , satisfy all the requirements of being reasonable, plausible, consistent with the rest of mainstream mathematics, and useful (in studies of real world phenomena, including physics). If you accept one, you really cannot reasonably deny the other. Rather, you have to accept the implications they yield, even if they at first seem counter to your expectations.
True, neither identity accords with our experiences in the physical world, since those experiences do not involve any infinite quantities or processes. (So there is nothing to accord with!)
One thing that surprising examples involving infinity remind us of is that mathematics is not “the true theory of the real world” (whatever that might mean). Rather, mathematical theories are mental frameworks we construct to help us make sense of the world. They survive or wither according to the degree to which they continue to accord with our real world experiences and to prove useful to us in conducting our individual and collective lives.
To return to geometry: for most people throughout human history, the geometry of the world as they experienced it was planar Euclidean geometry, which accords extremely well with our everyday experiences.
But for the global air traveler (such as long distance airplane pilots), and for the astronauts in the International Space Station, spherical geometry is “the geometry.” In still other circumstances (for the most part, physics and cosmology), hyperbolic and elliptic geometries are the best frameworks.
For the artist trying to represent three dimensions on a two-dimensional canvas (or the movie or video-game animator trying to represent three dimensions on a screen), projective geometry is the best framework.
Picking up on my opening example, when you adopt a geometric perspective to try to understand growth in the natural world, you find that fractal geometry is the most appropriate one to hand.
And, finally, when you adopt a geometric perspective to try to make sense of social life in today’s multi-cultural societies, you may find that higher dimensional Euclidean geometries seem to work best, as I explain in this video (30 minutes) taken from a talk I gave at a conference in New Mexico earlier this year. (The relevant segment starts at 3:20 and ends at 11:00.)
The fact is, there is not just one geometry, and there is no such thing as “the geometry of W,” where W is a real world phenomenon or domain.
Likewise for other branches of mathematics we develop and use to understand our world and to do things in our world.
This means that, whereas within mathematics there are “right answers,” when you apply mathematics to the world that certainty and accuracy are only as good as the fit between the mathematics (as a conceptual framework) and the world.
And now we are back, more or less, at the topic of my previous Devlin’s Angle post. It merits a second look. Given the nature of the modern world, with mathematical models playing such a major role, with major consequences (in banking, information storage, communication, transportation, national security, etc.), we should not lose track of the fact that mathematics is not the truth.
Rather, it provides us with useful models of the world. As a result, it is a powerful and useful way of making sense of the world, and doing things in the world.
This distinction was not particularly significant for anyone growing up in the 20th century and earlier. Back then, there was usually no danger in viewing mathematics as if it were the truth. But it is an absolutely critical distinction to keep in mind for those coming of age today.
That New Mexico talk video I referred to a moment ago was in fact from a conference on middle school mathematics education, and was an attempt to raise awareness among middle school math teachers of the need to make their students aware of the way mathematics is used in the world they will live in and help shape, emphasizing not only mathematics’ strengths but also its limitations.
When you think about what is at stake here, much of the current debate (largely uninformed on the opposition side) about the Common Core State Standards resembles nothing more than two elderly bald men arguing over ownership of a comb.
In the case of the UK’s Falklands War of 1982, where this analogy originated, both sides appeared equally stupid. The sad aspect to the CCSS debate is that the level of ignorance (or malicious intent) on the “Stop” side forces many well-informed teachers and mathematics learning experts to devote time to the debate, lest ignorance prevail and our kids find themselves unable to survive in the world they inherit. (What the debate should focus on is how to properly implement the Standards. There be dragons, and someone needs to slay them.)
WORTH LISTENING TO: American RadioWorks has just aired an excellent radio documentary about the Common Core, in which we hear from real teachers who have been using it, both in states where it has been implemented according to plan and others where the implementation has been modified.
Friday, August 1, 2014
Most Math Problems Do Not Have a Unique Right Answer
One of the most widely held misconceptions about mathematics is that a math problem has a unique correct answer.
(Some of those who hold that view also think that there is just one correct way to get that answer. A far smaller group, to be sure, but still a worryingly large number. Still, my focus here is on the first false belief.)
Having earned my living as a mathematician for over 40 years, I can assure you that the belief is false. In addition to my university research, I have done mathematical work for the U. S. Intelligence Community, the U.S. Army, private defense contractors, and a number of for-profit companies. In not one of those projects was I paid to find "the right answer." No one thought for one moment that there could be such a thing.
So what is the origin of those false beliefs? It's hardly a mystery. People form that misconception because of their experience at school. In school mathematics, students are exposed only to problems that (a) are well defined, (b) have a unique correct answer, and (c) can be solved with a few lines of calculation.
But the only career in which a high school graduate can expect to continue to work on such problems is academic research in pure mathematics—and even then (and again speaking from many years of personal experience), cleanly specified problems that have (obtainable) "right answers" are not as common as you might think.
Since the vast majority of students who go through school math classes do not end up as university research mathematicians, whereas many do find themselves in careers that require some mathematical ability, it's reasonable to ask why their entire school mathematics education focuses exclusively on one tiny fraction of all possible mathematics problems.
The answer can be found by looking at the history of mathematics. Starting with the invention of numbers around 10,000 years ago, people developed mathematical methods to solve problems they faced in the world: arithmetic and algebra to use in trade and engineering, geometry and trigonometry for building and navigation, calculus for scientific research, and so forth.
While some of that mathematics was required only by specialists (e.g. calculus), arithmetic and parts of algebra in particular were essential for everyday living. As a consequence, mathematicians wrote books from which ordinary people could learn how to calculate. From the very earliest textbooks (Babylonian tablets, Indian manuscripts, etc.), two kinds of problems were presented: algorithm ("recipes") problems that showed the steps to be carried out to do a particular kind of computation, presented without any context, and word problems, designed to help people learn how to apply a particular algorithm to solve a real world problem. Ancient and medieval textbooks had many hundreds of such problems, so that a trader (say) could find a problem almost identical in form to the one he (and back then use of mathematics was primarily a male activity) actually wanted to solve in his business. If he were lucky, all he would have to do is substitute his own numbers for those in the book's worked word problem. In other cases, the book might not provide an exact match, but by working through five or six problems that were close in form, the individual could learn how to solve his real problem.
For the majority of people, that was enough. Life simply did not require anything more. The problems they faced in their everyday activities for which mathematics was needed were simple and routine. The mathematical word problems that today seem so unrealistic were by and large remarkably similar to the problems ordinary citizens faced every day.
"When do I need to leave home in order to catch that train?" There wasn't an app to tell you the answer; you had to calculate it yourself. That word problem about trains leaving stations in your math class showed you how.
Arithmetic, in particular, was an essential, basic life skill that remained so until the development of devices that automated the process in the 1960s. I am a member of the last generation for whom the question "What do I need arithmetic for?" simply did not arise. (We asked it about other parts of mathematics.)
But that computer technology that eliminated the need for people to be good calculators led to a world in which there is a huge demand for higher order mathematical skills, starting with algebra. I wrote about this change in this column back in 1998, in a piece titled "Forget 'Back to Basics.' It's Time for 'Forward to (the New) Basics.'" Looking back at what I wrote then, I am amazed at just how much things have changed in the intervening 16 years. In September of that year, Google was founded, and the Web became a dominant force in our lives and our work.
Today, we have instant access to vast amounts of information and to unlimited computing power. Both are now utilities, much like water and electricity. And that has led to a revolution in the mathematics ordinary citizens need in order to lead a fulfilling, productive life. In a world where procedural (i.e., algorithmic) mathematics is available at the push of a button, the need has shifted to what I and others have been calling mathematical thinking.
I wrote about this in my September 2012 Devlin's Angle. Broadly speaking, mathematical thinking is a way of approaching problems that is based on classical mathematics, but takes account of the fact that computation (both numeric and symbolic) can be readily done by machines.
In practical terms, what this means is that people can now focus all their attention on real-world problems in the form they are encountered. Knowing how to solve an equation is no longer a valuable human ability; what matters now is formulating the equation to solve that problem in the first place, and then taking the result of the machine solution to the equation and making use of it.
In the 1960s, we got used to the fact that the arithmetic part of solving a mathematical problem could be done by machines. Now we are in a world where almost all the procedural mathematics can be done by machines.
Of course, this does not mean we should stop teaching procedural mathematics to the next generation, any more than the introduction of pocket calculators meant we should stop teaching arithmetic. But in both cases, the reason for teaching changes, and with it the way we should teach it. The purpose shifts from mastering procedures—something that was necessary only when there were no machines to do that part—to understanding the concepts sufficiently well to make good use of those machines.
Though this change in emphasis has been underway for some years now, it did not garner much attention in the United States until the rollout of the Common Core State Standards, which are very much geared towards the mathematical thinking needs of the 21st century. The degree to which many parents were blindsided by the shift was made clear when some of them took to social media to complain about the kinds of homework questions their children were being asked to do. While some of those questions were truly, truly awful, others that garnered a lot of critical social media comments were actually extremely good.
What was particularly ironic was that many parents, faced with being unable to assist their child with elementary grade arithmetic homework, did not draw the obvious conclusion: "Gee, if I cannot understand something as basic as integer arithmetic—however it is done—there must have been something really lacking in my own education." Instead, they jumped to the totally off-the-wall conclusion that the current educational system must be wrong.
That's like waking up in the morning to find your car won't start and saying, "Oh dear, the laws of physics don't work." The smart person says, "I need to replace the battery."
I'll tell you something. I was taught math the "old-fashioned way" too, and some of those student arithmetic worksheets were new to me when I first saw them. But regardless of any views I might have as to how arithmetic is best taught in today's world, it didn't take a lot of effort to figure out what those kids were doing on those worksheets posted on Facebook. It was just whole number arithmetic, for heaven's sake! Anyone who understands the basic ideas of whole number arithmetic can figure it out.
It was not my training as a professional mathematician that helped me here. It was the simple fact that I understand whole number arithmetic, something that goes back to my early childhood, when I did not even know there was such a thing as a professional mathematician, let alone aspire to be one. Unfortunately, many Americans were never taught to understand arithmetic, they were just trained to execute procedures. It's not their kids who are being short-changed. They—the parents—were!
Breezing into this fray is University of Wisconsin mathematics professor Jordan Ellenberg, with his new book How Not To Be Wrong. I knew I would find a kindred spirit when I read the book's subtitle: “The Power of Mathematical Thinking.” With a Stanford MOOC and an associated textbook both called Introduction to Mathematical Thinking, how could I not?
Ellenberg's title is superb. In one fell swoop, it casts aside that old misconception that mathematics provides "right answers," replacing it with the far more accurate description that it is a great way to stop you being wrong. For, like me, he focuses not on the internal activities of pure mathematics, rather on how mathematics is used in today's real world.
To be sure, also like me, Ellenberg has devoted a lot of his career to working in pure mathematics, so he loves searching for those "right answers," and he enjoys the subject in its own terms. We both know that there are eternal truths within mathematics (a better term would be "tautologies") and have experienced the thrill of going after them. But we both realize that what we do as pure mathematicians is a very specialist pursuit. The society that supports us when we do that does so largely because of the payoff in terms of the benefits that emerge when mathematical thinking is applied to real world problems.
Ellenberg's book is chock full of examples of those benefits, from many walks of life, presented with a delightfully light touch. He grabs the reader's attention with his very first example, taken from the Second World War. The U. S. military chiefs wanted to reduce the number of warplanes that were being shot down. The obvious solution was to add more armor to protect them. But armor adds weight, which limits the distances that can be flown and the duration of the mission, as well as increasing the production cost. So the question was, where is the most effective place to put that extra protection?
To answer this question, the chiefs brought in a team of mathematicians to analyze the evidence and determine what parts of the aircraft were most likely to be hit. They examined all the damaged planes that had flown back after being hit to see where the most damage was. It turned out that the engines had an average of 1.11 bullet holes per square foot, the fuel system had 1.55, the fuselages 1.73, and the rest of the plane 1.8.
So where was the optimal place to add extra armor? According to the data, the fuselages took a lot of hits, while engines suffered the least damage. So an obvious suggestion was to add armor to the fuselages. But that was not what the mathematicians suggested. Their solution was to add the armor to the engines, the part that had fewer hits when the planes got back.
And they were right. I'll leave you to figure out why that is the best solution. It's a great example of mathematical thinking. After you have convinced yourself why adding armor to the engines was the best strategy, you should buy a copy of Ellenberg's book and gain some understanding of just what mathematical thinking is, and why it is a crucial ability in today's world.
(My own book on mathematical thinking is more of a "how to" guide, as is my MOOC. Another, excellent book on mathematical thinking, that is somewhere between Ellenberg's and mine, is Burger and Starbird's The 5 Elements of Effective Thinking.)
Finally, and to some extent switching gears (and definitely switching media), I want to draw your attention to a new video game, DragonBox Elements, by the Norwegian-based educational technology company WeWantToKnow. The company made a splash with its first game, DragonBox (Algebra) a couple of years ago.
Unlike my own work in educational videogames, through my company BrainQuake, which is very strongly focused on real-world mathematical thinking, the DragonBox folks are seeking to enhance and strengthen school mathematics.
When I first played the new Elements game, I was initially confused, since I approached it with a Geometer's Sketchpad expectation. But Elements is not a geometry construction/exploration tool. The focus is on the importance of providing justification for steps in a proof. Knowing why something is true. And that is not only a key feature of GOFM (“Good Old Fashioned Math”), as was taught for two thousand years, it's one of the aspects of mathematics that is characteristic of mathematical thinking (as used in the real world). Euclid, the author of the first Elements (the book), would surely have approved.
The modern world has not made GOFM redundant. What has changed, and drastically, is the way GOFM fits in with the rest of human activities. Unless you are going to make a career for yourself in pure mathematics research, GOFM today is simply an amazingly powerful tool for acquiring one of the most important cognitive capacities in the 21st century: mathematical thinking.
In today's world, most of the important problems are complex and multi-faceted. There are few right answers. As Ellenberg demonstrates, mathematical thinking can help you choose better answers—and avoid being wrong.
(Some of those who hold that view also think that there is just one correct way to get that answer. A far smaller group, to be sure, but still a worryingly large number. Still, my focus here is on the first false belief.)
Having earned my living as a mathematician for over 40 years, I can assure you that the belief is false. In addition to my university research, I have done mathematical work for the U. S. Intelligence Community, the U.S. Army, private defense contractors, and a number of for-profit companies. In not one of those projects was I paid to find "the right answer." No one thought for one moment that there could be such a thing.
So what is the origin of those false beliefs? It's hardly a mystery. People form that misconception because of their experience at school. In school mathematics, students are exposed only to problems that (a) are well defined, (b) have a unique correct answer, and (c) can be answered with a few lines of calculation.
But the only career in which a high school graduate can expect to continue to work on such problems is academic research in pure mathematics—and even then (and again speaking from many years of personal experience), cleanly specified problems that have (obtainable) "right answers" are not as common as you might think.
Since the vast majority of students who go through school math classes do not end up as university research mathematicians, whereas many do find themselves in careers that require some mathematical ability, it's reasonable to ask why their entire school mathematics education focuses exclusively on one tiny fraction of all possible mathematics problems.
The answer can be found by looking at the history of mathematics. Starting with the invention of numbers around 10,000 years ago, people developed mathematical methods to solve problems they faced in the world: arithmetic and algebra to use in trade and engineering, geometry and trigonometry for building and navigation, calculus for scientific research, and so forth.
While some of that mathematics was required only by specialists (e.g. calculus), arithmetic and parts of algebra in particular were essential for everyday living. As a consequence, mathematicians wrote books from which ordinary people could learn how to calculate. From the very earliest textbooks (Babylonian tablets, Indian manuscripts, etc.), two kinds of problems were presented: algorithm ("recipes") problems that showed the steps to be carried out to do a particular kind of computation, presented without any context, and word problems, designed to help people learn how to apply a particular algorithm to solve a real world problem. Ancient and medieval textbooks had many hundreds of such problems, so that a trader (say) could find a problem almost identical in form to the one he (and back then use of mathematics was primarily a male activity) actually wanted to solve in his business. If he were lucky, all he would have to do is substitute his own numbers for those in the book's worked word problem. In other cases, the book might not provide an exact match, but by working through five or six problems that were close in form, the individual could learn how to solve his real problem.
For the majority of people, that was enough. Life simply did not require anything more. The problems they faced in their everyday activities for which mathematics was needed were simple and routine. The mathematical word problems that today seem so unrealistic were by and large remarkably similar to the problems ordinary citizens faced every day.
"When do I need to leave home in order to catch that train?" There wasn't an app to tell you the answer; you had to calculate it yourself. That word problem about trains leaving stations in your math class showed you how.
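That everyday calculation is, of course, just a little arithmetic with times. Here is a minimal sketch, with made-up numbers, of the kind of reckoning people once did in their heads or on paper:

```python
from datetime import datetime, timedelta

# A made-up instance of the classic "catch the train" problem: the train
# leaves at 08:47, the walk to the station takes 25 minutes, and we want
# a 5-minute buffer on the platform.
train_departure = datetime(2014, 8, 1, 8, 47)
walk_to_station = timedelta(minutes=25)
platform_buffer = timedelta(minutes=5)

leave_home_by = train_departure - walk_to_station - platform_buffer
print(leave_home_by.strftime("Leave home by %H:%M"))  # Leave home by 08:17
```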
Arithmetic, in particular, was an essential, basic life skill that remained so until the development of devices that automated the process in the 1960s. I am a member of the last generation for whom the question "What do I need arithmetic for?" simply did not arise. (We asked it about other parts of mathematics.)
But that computer technology that eliminated the need for people to be good calculators led to a world in which there is a huge demand for higher order mathematical skills, starting with algebra. I wrote about this change in this column back in 1998, in a piece titled "Forget 'Back to Basics.' It's Time for 'Forward to (the New) Basics.'" Looking back at what I wrote then, I am amazed at just how much things have changed in the intervening 16 years. In September of that year, Google was founded, and the Web became a dominant force in our lives and our work.
Today, we have instant access to vast amounts of information and to unlimited computing power. Both are now utilities, much like water and electricity. And that has led to a revolution in the mathematics ordinary citizens need in order to lead a fulfilling, productive life. In a world where procedural (i.e., algorithmic) mathematics is available at the push of a button, the need has shifted to what I and others have been calling mathematical thinking.
I wrote about this in my September 2012 Devlin's Angle. Broadly speaking, mathematical thinking is a way of approaching problems that is based on classical mathematics, but takes account of the fact that computation (both numeric and symbolic) can be readily done by machines.
In practical terms, what this means is that people can now focus all their attention on real-world problems in the form they are encountered. Knowing how to solve an equation is no longer a valuable human ability; what matters now is formulating the equation to solve that problem in the first place, and then taking the result of the machine solution to the equation and making use of it.
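Here is a minimal sketch of that division of labor, using the open-source SymPy library and an invented rate problem: the human supplies the model, the machine does the procedural work of solving it.

```python
from sympy import Eq, solve, symbols

# Invented word problem: two trains start 210 miles apart and head toward
# each other at 60 mph and 45 mph. After how many hours t do they meet?
t = symbols("t", positive=True)

# The human contribution: formulating the equation that models the situation.
model = Eq(60 * t + 45 * t, 210)

# The machine contribution: the procedural step of solving it.
print(solve(model, t))  # [2]
```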
In the 1960s, we got used to the fact that the arithmetic part of solving a mathematical problem could be done by machines. Now we are in a world where almost all the procedural mathematics can be done by machines.
Of course, this does not mean we should stop teaching procedural mathematics to the next generation, any more than the introduction of pocket calculators meant we should stop teaching arithmetic. But in both cases, the reason for teaching changes, and with it the way we should teach it. The purpose shifts from mastering procedures—something that was necessary only when there were no machines to do that part—to understanding the concepts sufficiently well to make good use of those machines.
Though this change in emphasis has been underway for some years now, it did not garner much attention in the United States until the rollout of the Common Core State Standards, which are very much geared towards the mathematical thinking needs of the 21st century. The degree to which many parents were blindsided by the shift was made clear when some of them took to social media to complain about the kinds of homework questions their children were being asked to do. While some of those questions were truly, truly awful, others garnering a lot of critical social media comments were actually extremely good.
What was particularly ironic was that many parents, faced with being unable to assist their child with elementary grade arithmetic homework, did not draw the obvious conclusion: "Gee, if I cannot understand something as basic as integer arithmetic—however it is done—there must have been something really lacking in my own education." Instead, they jumped to the totally off-the-wall conclusion that the current educational system must be wrong.
That's like waking up in the morning to find your car won't start and saying, "Oh dear, the laws of physics don't work." The smart person says, "I need to replace the battery."
I'll tell you something. I was taught math the "old-fashioned way" too, and some of those student arithmetic worksheets were new to me when I first saw them. But regardless of any views I might have as to how it is best taught in today's world, it didn't take a lot of effort to figure out what those kids were doing on those worksheets posted on Facebook. It was just whole number arithmetic, for heaven's sake! Anyone who understands the basic ideas of whole number arithmetic can figure it out.
It was not my training as a professional mathematician that helped me here. It was the simple fact that I understand whole number arithmetic, something that goes back to my early childhood, when I did not even know there was such a thing as a professional mathematician, let alone aspire to be one. Unfortunately, many Americans were never taught to understand arithmetic; they were just trained to execute procedures. It's not their kids who are being short-changed. They—the parents—were!
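To give one flavor of the kind of method that shows up on such worksheets (my own hypothetical example, not one of the ones posted to Facebook): subtraction by "counting up," which anyone who understands whole numbers can reconstruct on the spot. A minimal sketch:

```python
def subtract_by_counting_up(smaller, larger):
    """Compute larger - smaller by hopping from the smaller number to a
    friendly round number, then on to the larger number, and adding the hops.
    One illustrative strategy among several, not an official procedure."""
    hops = []
    current = smaller
    next_ten = (smaller // 10 + 1) * 10   # nearest multiple of 10 above smaller
    if next_ten < larger:
        hops.append(next_ten - current)   # e.g. 58 -> 60 is a hop of 2
        current = next_ten
    hops.append(larger - current)         # final hop to the target
    return sum(hops), hops

print(subtract_by_counting_up(58, 83))    # (25, [2, 23]):  83 - 58 = 25
```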
Breezing into this fray is University of Wisconsin mathematics professor Jordan Ellenberg, with his new book How Not To Be Wrong. I knew I would find a kindred spirit when I read the book's subtitle: “The Power of Mathematical Thinking.” With a Stanford MOOC and an associated textbook both called Introduction to Mathematical Thinking, how could I not?
Ellenberg's title is superb. In one fell swoop, it casts aside that old misconception that mathematics provides "right answers," replacing it with the far more accurate description that it is a great way to stop you being wrong. For, like me, he focuses not on the internal activities of pure mathematics, rather on how mathematics is used in today's real world.
To be sure, also like me, Ellenberg has devoted a lot of his career to working in pure mathematics, so he loves searching for those "right answers," and he enjoys the subject on its own terms. We both know that there are eternal truths within mathematics (a better term would be "tautologies") and have experienced the thrill of going after them. But we both realize that what we do as pure mathematicians is a very specialist pursuit. The society that supports us when we do that does so largely because of the benefits that emerge when mathematical thinking is applied to real world problems.
Ellenberg's book is chock full of examples of those benefits, from many walks of life, presented with a delightfully light touch. He grabs the reader's attention with his very first example, taken from the Second World War. The U. S. military chiefs wanted to reduce the number of warplanes that were being shot down. The obvious solution was to add more armor to protect them. But armor adds weight, which limits the distances that can be flown and the duration of the mission, as well as increasing the production cost. So the question was, where is the most effective place to put that extra protection?
To answer this question, the chiefs brought in a team of mathematicians to analyze the evidence and determine what parts of the aircraft were most likely to be hit. They examined all the damaged planes that had flown back after being hit to see where the damage was concentrated. It turned out that the engines had an average of 1.11 bullet holes per square foot, the fuel system had 1.55, the fuselages 1.73, and the rest of the plane 1.8.
So where was the optimal place to add extra armor? According to the data, the fuselages took a lot of hits, while engines suffered the least damage. So an obvious suggestion was to add armor to the fuselages. But that was not what the mathematicians suggested. Their solution was to add the armor to the engines, the part that had fewer hits when the planes got back.
And they were right. I'll leave you to figure out why that is the best solution. It's a great example of mathematical thinking. After you have convinced yourself why adding armor to the engines was the best strategy, you should buy a copy of Ellenberg's book and gain some understanding of just what mathematical thinking is, and why it is a crucial ability in today's world.
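(If, after you have worked out your own answer, you want to check the reasoning numerically, the toy simulation below does the trick. Spoiler warning: it gives the game away. All the probabilities are invented for illustration; this is not the wartime analysts' actual calculation.)

```python
import random

# Toy model: hits land uniformly over four sections of the plane, but a hit
# to some sections (notably the engine) is far more likely to down the plane,
# so those hits are under-represented on the planes that make it home.
random.seed(1)
SECTIONS = ["engine", "fuel system", "fuselage", "rest of plane"]
P_FATAL = {"engine": 0.6, "fuel system": 0.3, "fuselage": 0.05, "rest of plane": 0.05}

holes_on_returners = {s: 0 for s in SECTIONS}
returners = 0

for _ in range(100_000):
    hits = [random.choice(SECTIONS) for _ in range(random.randint(0, 6))]
    if all(random.random() > P_FATAL[s] for s in hits):   # the plane made it back
        returners += 1
        for s in hits:
            holes_on_returners[s] += 1

for s in SECTIONS:
    print(f"{s:>14}: {holes_on_returners[s] / returners:.2f} holes per returning plane")
# Hits were uniform, yet the surviving sample shows far fewer engine holes:
# the missing engine hits are on the planes that never came back.
```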
(My own book on mathematical thinking is more of a "how to" guide, as is my MOOC. Another excellent book on mathematical thinking, somewhere between Ellenberg's and mine in approach, is Burger and Starbird's The 5 Elements of Effective Thinking.)
Finally, and to some extent switching gears (and definitely switching media), I want to draw your attention to a new video game, DragonBox Elements, by the Norwegian-based educational technology company WeWantToKnow. The company made a splash with its first game, DragonBox (Algebra) a couple of years ago.
Whereas my own work in educational videogames, through my company BrainQuake, is very strongly focused on real-world mathematical thinking, the DragonBox folks are seeking to enhance and strengthen school mathematics.
When I first played the new Elements game, I was initially confused, since I approached it with a Geometer's Sketchpad expectation. But Elements is not a geometry construction/exploration tool. The focus is on the importance of providing justification for the steps in a proof: knowing why something is true. And that is not only a key feature of GOFM (“Good Old Fashioned Math”) as it was taught for two thousand years, it's also one of the aspects of mathematics that is characteristic of mathematical thinking (as used in the real world). Euclid, the author of the first Elements (the book), would surely have approved.
The modern world has not made GOFM redundant. What has changed, and drastically, is the way GOFM fits in with the rest of human activities. Unless you are going to make a career for yourself in pure mathematics research, GOFM today is simply an amazingly powerful tool for acquiring one of the most important cognitive capacities in the 21st century: mathematical thinking.
In today's world, most of the important problems are complex and multi-faceted. There are few right answers. As Ellenberg demonstrates, mathematical thinking can help you choose better answers—and avoid being wrong.
Tuesday, July 1, 2014
The Power of Dots
Surely, if mathematics education should achieve one thing, it is to develop the ability to figure things out for yourself. We’re not talking the Riemann Hypothesis here; the focus is basic school arithmetic, for heaven’s sake.
To continue with the Times article, arrays of dots seemed to loom large in this parent’s dislike of the Common Core. She felt it was pointless to spend time drawing and staring at arrays of dots.
True, it would be possible—and I am sure it happens—to generate tedious, and largely pointless, “busywork” exercises involving drawing arrays of dots. But the image of a Common Core math worksheet the Times chose to illustrate its story showed a very sensible, and deep, use of dot diagrams to understand structure in arithmetic. Much like the (extremely deep) dot array at the top of this article, which I’ll come to in a moment.
To the girl’s parent, mathematics is about numbers, but that’s just a surface feature. It’s really about structure. And throughout the ages, mathematicians have used the simplest symbols possible to bring out and understand that structure: namely, dots and lines.
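Here is a small sketch of the kind of structure a rectangular dot array makes visible (my own example, not the one on the Times worksheet): the same fifteen dots, read row-wise or column-wise, show at a glance why 3 × 5 = 5 × 3.

```python
def dot_array(rows, cols):
    """Print a rows-by-cols rectangular array of dots."""
    for _ in range(rows):
        print(". " * cols)

# The same 15 dots grouped two ways: 3 rows of 5, then 5 rows of 3.
# Turning the picture on its side is the whole argument that 3 * 5 == 5 * 3.
dot_array(3, 5)
print()
dot_array(5, 3)
```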
The Times’ parent, so dismissive of time spent drawing and reflecting on dot diagrams, would, I am sure, think it a waste of time to devote any effort to making sense of the dot diagram I used to open this post. She would, I have no doubt, find it incomprehensible that an individual with a freshly minted Ph.D. in mathematics would spend many months—at taxpayers’ expense—staring day after day at either that one diagram, or at seemingly minor variations he would start each day by sketching out on a sheet of paper in front of him.
Well, I am that mathematician. That diagram helped me understand the framework that would be required to specify an infinite mathematical object of the third order of infinitude (aleph-2) by means of a family of infinite mathematical objects of the first order of infinitude (aleph-0). The top line of dots represents an increasing tower of objects that come together to form the desired aleph-2 object, and each of the lower lines of dots represents a shorter tower of aleph-0 objects. In the 1970s, a number of us used those dot diagrams to solve mathematical problems that just a few years earlier had seemed impossible.
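For readers who like to see the task in symbols, here is a rough rendering of what that diagram supported (my informal paraphrase of the sentence above, not Jensen's formal definition):

```latex
% The first three infinite cardinals, in increasing order of infinitude:
%   \aleph_0 < \aleph_1 < \aleph_2
% The construction problem, informally: build a structure M of size \aleph_2
% as the limit of a coherent family of structures M_i, each of size \aleph_0.
\[
  |M| = \aleph_2, \qquad M = \bigcup_{i \in I} M_i, \qquad |M_i| = \aleph_0 .
\]
```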
That particular kind of dot diagram was invented by a close senior colleague (and mentor) of mine, Professor Ronald Jensen, who called it a “morass.” He chose the name wisely, since the structure represented by those dots was extremely complex and intricate.
In contrast, the simple, rectangular array implicitly referred to in the New York Times article is used to help learners understand the much simpler (but still deep, and far more important to society) structure of numbers and the basic operations of arithmetic, as was well explained in a subsequent blog post by mathematics education specialist Christopher Danielson. The fact is, dot diagrams are powerful, for learners and world experts alike.
The problem facing parents (and many teachers) today is that the present student generation is the one that, for the first time in history, is having to learn the mathematics the professionals use—what I and many other pros have started to call “mathematical thinking” in order to distinguish it from the procedural skills so important in past times.
The reason for that is that in the world today’s students will graduate into, computation is as plentiful as water or electricity. The smartphone we carry around with us is much faster, and more accurate, in carrying out mathematical procedures than any human.
In a single generation, society’s need for mathematical mastery has gone from procedural computation, to being able to make effective and reliable use of an effectively unlimited amount of automated computation. To put it bluntly, mastery of computational skills is no longer a marketable asset. The ability to make good use of computational power is where it’s at in math today.
For almost all the three thousand years of mathematical development, the focus in mathematics was calculation (numerical, symbolic, or geometric). Learning mathematics meant learning how to perform those calculations, which boiled down to achieving mastery of various procedures. Mastery of any one procedure could be achieved by rote learning—doing many examples, all essentially the same—leaving the only truly creative mental task that of recognition of which procedure to apply to solve which problem.
Numerical and symbolic calculation (arithmetic and algebra) are so simple and routine that we can program computers to do them for us. That is possible because calculation is essentially trivial. Perceiving and understanding structure, on the other hand, is something that (at least at the present time) requires human insight. It is not trivial, and it is difficult. Dot diagrams can help us come to terms with that difficulty.
When the movie director Gus Van Sant needed to introduce the lead character, Will Hunting (played by Matt Damon), in the hit 1997 film Good Will Hunting, and to establish in one shot that the hero was an uneducated (actually, self-educated) mathematical genius, our first encounter with Will showed him drawing a dot diagram on a blackboard in an MIT corridor.
You can be sure that when an experienced movie director like Gus Van Sant selects an establishing shot for the lead character, he does so with considerable care, on the advice of an expert. By showing Will writing a network of dots on a blackboard, Van Sant was right on the button in terms of portraying the kind of thing that professional mathematicians do all the time.
The one bit of license Van Sant took was that the diagram we saw Matt Damon writing was not the solution to a problem that had taken an MIT math professor two years to solve. (Unless MIT math professors are a lot less smart than we are led to believe!) It was a real solution to a real math problem, all right. I am pretty sure it was chosen because it fitted nicely on one blackboard and looked good on the screen. It absolutely conveyed the kind of (dotty) activity that mathematicians do all the time—the kind of (dotty) thing I did in my early post-Ph.D. years when I was working with Prof Jensen’s morasses.
But it’s actually a problem that anyone who has learned how to think mathematically should be able to solve in at most a few hours. Numberphile has an excellent video explaining the problem.
So, New York Times story parent, I hope you reconsider your decision to take your daughter out of school to teach her the way you were taught. The kind of mathematics you were taught was indeed required in times past. But not any more. The world has changed dramatically as far as mathematics is concerned. As with many other aspects of our lives, we have built machines to handle the more routine, procedural stuff, thereby putting a premium on the one thing where humans vastly outperform computers: creative thinking.
Those dot diagrams are all about creative thinking. A computer can process numbers, millions of them faster than a human can write just one. But it cannot make sense of those dot diagrams. Because it does not know what any particular array of dots means! And it has no way to figure it out. (Unless a human tells it.)
Next month I’ll look further into the distinction between old-style procedural mathematics and the 21st-century need for mathematical thinking. In particular, I’ll look at an excellent recent book, Jordan Ellenberg’s How Not to be Wrong.
The book’s title is significant, since it recognizes that the vast majority of real-world mathematical problems do not have a unique right answer, and that the real power of mathematical thinking is making sure you are not wrong. (The book’s subtitle is “The power of mathematical thinking.”)
I’ll also look at a new mathematics video game that also focuses on mathematical thinking, this time, school-room Euclidean geometry. It’s called DragonBox Elements.
You might want to check out both.
Sunday, June 1, 2014
Déjà vu all over again: Fibonacci and Steve Jobs — Part 2
This month’s column is the second of a two-part video presentation of a public address I gave recently at Princeton, where I have been spending this semester as a Visiting Professor.
The talk was based on my 2011 e-book Leonardo and Steve, which itself was a supplement to my print book The Man of Numbers, published the same year.
Both the e-book and my presentation show how Jobs’s introduction of the Macintosh computer in 1984 was an almost exact replay of Leonardo of Pisa’s (Fibonacci) 13th Century introduction to Europe of Hindu-Arabic arithmetic.
Part 1 appeared last month.
Thursday, May 1, 2014
Déjà vu all over again: Fibonacci and Steve Jobs
This month’s column is the first of a two-part video presentation of a public address I gave recently at Princeton, where I have been spending this semester as a Visiting Professor.
The talk was based on my 2011 e-book Leonardo and Steve, which itself was a supplement to my print book The Man of Numbers, published the same year.
Both the e-book and my presentation show how Jobs’s introduction of the Macintosh computer in 1984 was an almost exact replay of Leonardo of Pisa’s (Fibonacci's) 13th Century introduction to Europe of Hindu-Arabic arithmetic.
Tuesday, April 1, 2014
What good is math and why do we teach it?
This month’s column comes in lecture format. It’s a narrated videostream of the presentation file that accompanied the featured address I made recently at the MidSchoolMath National Conference, held in Santa Fe, NM, on March 27-29. It lasts just under 30 minutes, including two embedded videos.
In the talk, I step back from the (now largely metaphorical) blackboard and take a broader look at why we and our students are there in the first place.
Download the video here.
Saturday, March 1, 2014
How Mountain Biking Can Provide the Key to the Eureka Moment
Because this blog post covers both mountain biking and proving theorems, it is being simultaneously published in Devlin’s more wide-ranging blog profkeithdevlin.org.
In my post last month, I described my efforts to ride a particularly difficult stretch of a local mountain bike trail in the hills just west of Palo Alto. As promised, I will now draw from that experience a number of lessons for solving difficult mathematical problems.
Most of them will be familiar to anyone who has read George Polya’s classic book How to Solve It. But my main conclusion may come as a surprise unless you have watched movies such as Top Gun or Field of Dreams, or follow professional sports at the Olympic level.
Here goes, step-by-step, or rather
pedal-stroke-by-pedal-stroke. (I am assuming you have recently read my last
post.)
BIKE: Though bikers with extremely strong leg muscles can
make the Alpine Road ByPass Trail ascent by brute force, I can't. So my first
step, spread over several rides, was to break the main problem—get up an
insanely steep, root strewn, loose-dirt climb—into smaller, simpler problems,
and solve those one at a time.
MATH: Breaking a large problem into a series of smaller ones is a
technique all mathematicians learn early in their careers. Those
subproblems may still be hard and require considerable effort and several
attempts, but in many cases you find you can make progress on at least some of
them. The trick is to make each subproblem sufficiently small that it requires
just one idea or one technique to solve it.
In particular, when you break the overall problem down
sufficiently, you usually find that each smaller subproblem resembles another
problem you, or someone else, has already solved.
When you have managed to solve the subproblems, you are left
with the task of assembling all those subproblem solutions into a single whole.
This is frequently not easy, and in many cases turns out to be a much harder
challenge in its own right than any of the subproblem solutions, perhaps
requiring modification to the subproblems or to the method you used to solve
them.
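The same decompose-solve-assemble pattern is easy to see in code. Here is a minimal sketch, using merge sort purely as a stand-in for Polya's strategy: split the problem, solve the small pieces, then do the genuinely tricky work of combining them.

```python
def merge_sort(items):
    """Sort a list Polya-style: split the problem, solve the pieces, recombine."""
    if len(items) <= 1:                   # a subproblem small enough for one idea
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # solve the two smaller subproblems...
    right = merge_sort(items[mid:])
    return merge(left, right)             # ...then assemble them (the hard part)

def merge(left, right):
    """Combine two already-solved subproblems into a solution of the whole."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```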
BIKE: Sometimes there are several different lines you can
follow to overcome a particular obstacle, starting and ending at the same
positions but requiring different combinations of skills, strengths, and
agility. (See
my description last month of how I managed to negotiate the steepest section
and avoid being thrown off course—or off the bike—by that troublesome
tree-root nipple.)
MATH: Each subproblem takes you from a particular starting
point to a particular end-point, but there may be several different approaches
to accomplish that subtask. In many cases, other mathematicians have solved
similar problems and you can copy their approach.
BIKE: Sometimes, the approach you adopt to get you past one
obstacle leaves you unable to negotiate the next, and you have to find a
different way to handle the first one.
MATH: Ditto.
BIKE: Eventually, perhaps after many attempts, you figure
out how to negotiate each individual segment of the climb. Getting to this
stage is, I think, a bit harder in mountain biking than in math. With a math
problem, you usually can work on each subproblem one at a time, in any order.
In mountain biking, because of the need to maintain forward (i.e., upward)
momentum, you have to build your overall solution up in a cumulative fashion—vertically!
But the distinction is not as great as it might at first appear. In both cases, the step from having solved each individual subproblem in isolation to finding a solution for the overall problem is a mysterious one that perhaps cannot be appreciated by someone who has not experienced it. This is where things get interesting.
Having had the experience of solving difficult (for me)
problems in both mathematics and mountain biking, I see considerable
similarities between the two. In both
cases, the subconscious mind plays a major role—which is, I presume, why
they seem mysterious. This is where this two-part blog post is heading.
BIKE: I ended my previous post by promising to
"look at the role of the subconscious in being able to put together a series of mastered steps in order to solve a big problem. For a very curious thing happened after I took the photos to illustrate this post. I walked back down to collect my bike from…where I'd left it, and rode up to continue my ride.
It took me four attempts to complete that initial climb!
And therein lies one of the biggest secrets of being able to solve a difficult math problem."
BOTH: How does the human mind make a breakthrough? How are
we able to do something that we have not only never done before, but failed
many times in attempts to do so? And why does the breakthrough always seem to occur
when we are not consciously trying
to solve the problem?
The first thing to note is that we never experience the
process of making that breakthrough. Rather, what we experience, i.e., what we
are conscious of, is having just made
the breakthrough!
The sensation we have is a combined one of both elation and surprise. Followed almost
immediately by a feeling that it wasn’t
so difficult after all!
What are we to make of this strange process?
Clearly, I cannot provide a definitive, concrete answer to
that question. No one can. It’s a mystery. But it is possible to make a number of relevant observations,
together with some reasonable, informed speculations. (What follows is a
continuation of sorts of the thread I developed in my 2000 book The Math Gene.)
The first observation is that the human brain is the result of millions of years of survival-driven natural selection. That made it supremely efficient at (rapidly) solving problems that threaten survival. Most of that survival activity is handled by a small, almond-shaped area of the brain called the amygdala, working in close conjunction with the body’s nervous system and motor control system.
In contrast to the speed at which our amygdala operates, the much more recently developed neo-cortex that supports our conscious thought, our speech, and our “rational reasoning” functions at a comparatively glacial speed, following well-developed channels of mental activity—channels that can be built up by repetitive training.
Because we have conscious access to our neo-cortical thought
processes, we tend to regard them as “logical,” often dismissing the actions of
the amygdala as providing (“mere,” “animal-like”) “instinctive reactions.” But
that misses the point that, because that “instinctive reaction organ” has
evolved to ensure its owner’s survival in a highly complex and ever changing
environment, it does in fact operate in an extremely logical fashion, honed by
generations of natural selection pressure to be in sync with its owner’s environment.
Which leads me to this.
Do you want to identify that part of the brain that makes
major scientific (and mountain biking) breakthroughs?
I nominate the amygdala—the “reptilian brain” as it is
sometimes called to reflect its evolutionary origin.
I should acknowledge that I am not the first person to make
this suggestion. Well, for mathematical breakthroughs, maybe I am. But in
sports and the creative arts, it has long been recognized that the key to truly
great performance is to essentially shut down the neo-cortex and let the
subconscious activities of the amygdala take over.
Taking this as a working hypothesis for mathematical (or
mountain biking) problem solving, we can readily see why those moments of great
breakthrough come only after a long period of preparation, where we keep
working away—in conscious fashion—at trying to solve the problem or perform
the action, seemingly without making any progress.
We can see too why, when the breakthrough (or the great
performance) comes, it does so instantly and surprisingly, when we are not actively trying to achieve the goal, leaving our
conscious selves as mere after-the-fact observers of the outcome.
For what that long period of struggle does is build a
cognitive environment in which our reptilian brain—living inside and being
connected to all of that deliberate, conscious activity the whole time—can
make the key connections required to put everything together. In other words,
investing all of that time and effort in that initial struggle raises the internal,
cognitive stakes to a level where the amygdala can do its stuff.
Okay, I’ve been playing fast and loose with the metaphors
and the anthropomorphization here. We’re
really talking about biological systems, simply operating the way natural
selection equipped them. But my goal is not to put together a scientific
analysis, rather to try to figure out how to improve our ability to solve novel
problems. My primary aim is not to be “right” (though knowledge and insight are
always nice to have), but to be able to improve performance.
Let’s return to that tricky stretch of the ByPass section on
the Alpine Road trail. What am I consciously focusing on when I make a
successful ascent?
BIKE: If you have read my earlier account, you will know
that the difficult section comes in three parts. What I do is this. As I
approach each segment, I consciously think about, and fix my eyes on, the
end-point of that segment—where I will be after I have negotiated the
difficulties on the way. And I keep my eyes and attention focused on that
goal-point until I reach it. For the whole of the maneuver, I have no conscious
awareness of the actual ground I am cycling over, or of my bike. It’s total
focus on where I want to end up, and nothing else.
So who—or what—is controlling the bike? The mathematical control problem involved in getting a person-on-a-bike up a steep,
irregular, dirt trail is far greater than that required to auto-fly a jet
fighter. The calculations and the speed with which they would have to be
performed are orders of magnitude beyond the capability of the relatively slow
neuronal firings in the neocortex. There is only one organ we know of that
could perform this task. And that’s the amygdala, working in conjunction with
the nervous system and the body’s motor control mechanism in a super-fast
constant feedback loop. All the neo-cortex and its conscious thought has to do
is avoid getting in the way!
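To make the idea of a fast feedback loop concrete, here is a toy proportional controller. It bears no resemblance to the real neural machinery; it only illustrates the sense-correct-repeat structure, with invented numbers.

```python
# Toy proportional feedback loop: nudge a value toward a target by repeatedly
# sensing the error and applying a partial correction.
target_lean = 0.0    # desired lean angle of the bike, in degrees
lean = 8.0           # current lean after bouncing off a root
gain = 0.5           # fraction of the error corrected on each cycle

for cycle in range(10):
    error = target_lean - lean      # sense
    lean += gain * error            # correct
    print(f"cycle {cycle}: lean = {lean:+.2f} degrees")
# The error shrinks geometrically; a rider's real loop runs many times a second.
```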
These days, in the case of Alpine Road, now that I have “solved” the problem, the only thing my conscious neo-cortex has to do on each occasion is switch my focus from the goal of one segment to the goal of the next. If anything interferes with my attention at one of those key transition moments, my climb is over—and I stop or fall.
What used to be the hard parts are now “done for me” by
unconscious circuits in my brain.
MATH: In my case at least, what I just wrote about mountain
biking accords perfectly with my experiences in making (personal) mathematical
problem-solving breakthroughs.
It is by stepping back from trying to solve the problem by putting together everything I know and
have learned in my attempts, and instead simply focusing on the problem
itself—what it is I am trying to show—that I suddenly find that I have the
solution.
It’s not that I arrive at the solution when I am not
thinking about the problem. Some mathematicians have expressed their
breakthrough moments that way, but I strongly suspect that is not totally true.
When a mathematician has been trying to solve a problem for some months or
years, that problem is always with them. It becomes part of their existence.
There is not a single waking moment when that problem is not “on their mind.”
What they mean, I believe, and what I am sure is the case
for me, is that the breakthrough comes when the problem is not the focus of our
thoughts. We really are thinking about something else, often some mundane
detail of life, or enjoying a marvelous view. (Google “Stephen Smale beaches of
Rio” for a famous example.)
This thesis does, of course, explain why the process of
walking up the ByPass Trail and taking photographs of all the tricky points made
it impossible for me to complete the climb. True, I did succeed at the fourth
attempt. But I am sure that was not because the first three were “practice.”
Heavens, I’d long ago mastered the maneuvers required. It was because it took
three failed attempts before I managed to erase the effects of focusing on the
details to capture those images.
The same is true, I suggest, for solving a difficult
mathematical problem. All of those techniques Polya describes in his book, some
of which I list above, are essential to prepare the way for solving the
problem. But the solution will come only when you forget about all those
details, and just focus on the prize.
This may seem a wild suggestion, but in some respects it may not be entirely new. There is much in common between what I described above and the highly successful teaching method of R. L. Moore. For sure, you have to do a fair amount of translation from his language to mine, but Moore used to demand that his students not clutter their minds by learning stuff, but rather take each problem as it came and try to solve it by pure reasoning, not giving up until they found the solution.
In terms of training future mathematicians, what these
considerations imply, of course, is that there is mileage to be had from
adopting some of the techniques used by coaches and instructors to produce
great performances in sports, in the arts, in the military, and in chess.
Sweating the small stuff will make you good. But if you want
to be great, you have to go beyond that—you have to forget the small stuff
and keep your eye on the prize.
And if you are successful, be sure to give full credit for
that Fields Medal or that AMS Prize where it is rightly due: dedicate it to
your amygdala. It will deserve it.
Saturday, February 1, 2014
Want to learn how to prove a theorem? Go for a mountain bike ride
Because this blog post covers both mountain biking and proving theorems, it is being simultaneously published in Devlin’s more wide-ranging blog profkeithdevlin.org.
Mountain biking is big in the San
Francisco Bay Area, where I live. (In its present day form, using
specially built bicycles with suspension, the sport/pastime was invented a few
miles north in Marin County in the late 1970s.) Though there are hundreds of
trails in the open space preserves that spread over the hills to the west of
Stanford, there are just a handful of access trails that allow you to start and
finish your ride in Palo Alto. Of those, by far the most popular is Alpine
Road.
My mountain biking buddies and I
ascend Alpine Road roughly once a week in the mountain biking season (which in
California is usually around nine or ten months long). In this post, I'll
describe my own long struggle, stretching over many months, to master one
particularly difficult stretch of the climb, where many riders get off and walk
their bikes.
[SPOILER: If your interest in
mathematics is not matched by an obsession with bike riding, bear with me. My
entire account is actually about how to set about solving a difficult math
problem, particularly proving a theorem. I'll draw the two threads together in
a subsequent post, since it will take me into consideration of how the brain
works when it does mathematics. For now, I'll leave the drawing of those
conclusions as an exercise for the reader! So when you read mountain biking,
think math.]
Alpine Road used to take cars all
the way from Palo Alto to Skyline Boulevard at the summit of the Coastal Range,
but the upper part fell into disrepair in the late 1960s, and the
two-and-a-half-mile stretch from just west of Portola Valley to where it meets
the paved Page Mill Road just short of Skyline is now a dirt trail,
much frequented by hikers and mountain bikers.
[Photo: Alpine Road. The trail is washed out just round the bend.]
A few years ago, a storm washed out
a short section of the trail about half a mile up, and the local authority
constructed a bypass trail. About a quarter of a mile long, it is steep,
narrow, twisted, and a constant staircase of tree roots protruding from the
dirt floor. A brutal climb going up and a thrilling (beginners might say
terrifying) descent on the way back. Mountain bike heaven.
There is one particularly tricky
section right at the start. This is where you can develop the key abilities you
need to be able to prove mathematical theorems.
So you have a choice. Read
Polya's classic book, or
get a mountain bike and find your own version of the Alpine Road ByPass Trail.
(Better still: do both!)
When I first encountered Alpine
Road Dirt a few years ago, it took me many rides before I managed to get up the
first short, steep section of the ByPass Trail.
It starts innocently enough—because you cannot see what awaits just around that sharp left-hand turn.
After you have made the turn, you
are greeted with a short narrow downhill. You will need it to gain as much
momentum as you can for what follows.
[Photo: The short, narrow descent.]
I've seen bikers with extremely
strong leg muscles who can plod their way up the wall that comes next, but I
can't do it that way. I learned how to get up it by using my
problem-solving/theorem-proving skills.
The first thing was to break the
main problem—get up the insanely steep, root strewn, loose-dirt climb—into
smaller, simpler problems, and solve those one at a time. Classic Polya.
But it's Polya with a twist—and
by "twist" I am not referring to the sharp triple-S bend in the
climb. The twist in this case is that the penalty for failure is physical, not
emotional as in mathematics. I fell off my bike a lot. The climb is insanely
steep. So steep that unless you bend really low, with your chin almost touching
your handlebar, your front wheel will lift off the ground. That gives rise to
an unpleasant feeling of panic that is perhaps not unlike the one that many students
encounter when faced with having to prove a theorem for the first time.
[Photo: If you are not careful, your front wheel will lift off the ground.]
The photo above shows the first
difficult stretch. Though this first sub-problem is steep, there is a fairly
clear line to follow to the right that misses those roots, though at the very
least its steepness will slow you down, and on many occasions will result in an
ungainly, rapid dismount. And losing momentum is the last thing you want, since
the really hard part is further up ahead, near the top in the picture.
Also, do you see that rain- and
tire-worn groove that curves round to the right just over half way up—just
beyond that big root coming in from the left? It is actually deeper and
narrower than it looks in the photo, so unless you stay right in the middle of
the groove you will be thrown off line, and your ascent will be over. (Click on
the photo to enlarge it and you should be able to make out what I mean about
the groove. Staying in the groove can be tricky at times.)
Still, despite difficulties in the
execution, eventually, with repeated practice, I got to the point of
being able to negotiate this initial stretch and still have some forward
momentum. I could get up on muscle memory. What was once a series of
challenging problems, each dependent on the previous ones, was now a single
mastered skill.
[Remember, I don't have super-strong leg muscles. I am primarily a road bike rider. I can ride for six hours at a 16-18 mph pace, covering up to 100 miles or more. But to climb a steep hill I have to get off the saddle and stand on the pedals, using my body weight, not leg power. Unfortunately, if you take your weight off the saddle on a mountain bike on a steep dirt climb, your rear wheel will start to spin and you come to a stop, which on a steep hill means jump off quickly or fall. So I have to use a problem-solving approach.]
Once I'd mastered the first
sub-problem, I could address the next. This one was much harder. See that area
at the top of the photo above where the trail curves right and then left? Here
is what it looks like up close.
[Photo: The crux of the climb/problem. Now it is really steep.]
(Again, click on the photo to get a
good look. This is the mountain bike equivalent of being asked to solve a
complex math problem with many variables.)
Though the tire tracks might
suggest following a line to the left, I suspect they are left by riders coming
down. Coming out of that narrow, right-curving groove I pointed out earlier, it
would take an extremely strong rider to follow the left-hand line. No one I
know does it that way. An average rider (which I am) has to follow a zig-zag
line that cuts down the slope a bit.
Like most riders I have seen—and
for a while I did watch my more experienced buddies negotiate this slope to get
some clues—I start this part of the climb by aiming my bike between the two
roots, over at the right-hand side of the trail. (Bottom right of picture.)
The next question is, do you go
left of that little tree root nipple, sticking up all on its own, or do you
skirt it to the right? (If you enlarge the photo you will see that you most
definitely do not want either wheel to hit it.)
The wear-marks in the dirt show
that many riders make a sharp left after passing between those two roots at the
start, and steer left of the root protrusion. That's very tempting, as
the slope is notably less (initially). I tried that at first, but with
infrequent success. Most often, my left-bearing momentum carried me into
that obstacle course of tree roots over to the left, and though I sometimes
managed to recover and swing out to skirt to the left of that really big
root, more often than not I was not able to swing back right and avoid running
into that tree!
The underlying problem with that
line was that thin looking root at the base of the tree. Even with the above
photo blown up to full size, you can't really tell how tricky an obstacle it
presents at that stage in the climb. Here is a closer view.
[Photo: The obstacle course of tree roots that awaits the rider who bears left.]
If you enlarge this photo, you can probably appreciate how that final, thin root can be a problem if you are out of strength and momentum. Though the slope eases considerably at that point, I—like many riders I have seen—was on many occasions simply unable to make it either over the root or around it on one side or the other—though all three options would clearly be possible with fresh legs. And on the few occasions I did make it, I felt I just got lucky—I had not mastered it. I had got the right answer, but I had not really solved the problem. So close, so often. But, as in mathematics, close is not good enough.
After realizing I did not have the leg strength to master the left-of-the-nipple path, I switched to taking the right-hand line. Though the slope was considerably steeper (that is very clear from the blown-up photo), the tire-worn dirt showed that many riders chose that option.
Several failed attempts and one or
two lucky successes convinced me that the trick was to steer to the right of
the nipple and then bear left around it, but keep as close to it as possible
without the rear wheel hitting it, and then head for the gap between the tree
roots over at the right.
After that, a fairly clear
left-bearing line on very gently sloping terrain takes you round to the right
to what appears to be a crest. (It turns out to be an inflection point rather
than a maximum, but let's bask for a while in the success we have had so far.)
Here is our brief basking point.
[Photo: The inflection point. One more detail to resolve.]
As we oh-so-briefly catch our
breath and "coast" round the final, right-hand bend and see the
summit ahead, we come—very suddenly—to one final obstacle.
[Photo: The summit of the climb.]
At the root of the
problem (sorry!) is the fact that the right-hand turn is actually sharper than
the previous photo indicates, almost a switchback. Moreover, the slope
kicks up as you enter the turn. So you might not be able to gain sufficient
momentum to carry you over one or both of those tree roots on the left that you
find your bike heading towards. And in my case, I found I often did not have
any muscle strength left to carry me over them by brute force.
What worked for me is making an
even tighter turn that takes me to the right of the roots, with my right
shoulder narrowly missing that protruding tree trunk. A fine-tuned approach
that replaces one problem (power up and get over those roots) with another one
initially more difficult (slow down and make the tight turn even tighter).
And there we are. That final little
root poking up near the summit is easily skirted. The problem is solved.
To be sure, the rest of the ByPass
Trail still presents several other difficult challenges, a number of which took
me several attempts before I achieved mastery. Taken as a whole, the entire
ByPass is a hard climb, and many riders walk the entire quarter mile. But
nothing is as difficult as that initial stretch. I was able to ride the rest
long before I solved the problem of the first 100 feet. Which made it all the
sweeter when I finally did really crack that wall.
Now I (usually) breeze up it,
wondering why I found it so difficult for so long.
Usually? In my next post, I'll use
this story to talk about strategies for solving difficult mathematical
problems. In particular, I'll look at the role of the subconscious in being
able to put together a series of mastered steps in order to solve a big
problem. For a very curious thing happened after I took the photos to
illustrate this post. I walked back down to collect my bike from the ByPass
sign where I'd left it, and rode up to continue my ride.
It took me four attempts to
complete that initial climb!
And therein lies one of the biggest
secrets of being able to solve a difficult math problem.