The pie chart in the above tweet jumped out of the page when it appeared in my twitter feed
on September 14. My initial shock at seeing the figure 1% attached to a region of the pie chart
that was evidently almost 25% of the total area of the disk did not last long, of course, since the
accompanying text made it clear what the diagram was intended to convey. The 1% label
referred to the section of the population being discussed, whereas the pie-chart indicated the
share of taxes paid by that group. Indeed, the image was an animated GIF; when I clicked on it,
the region labeled “1%” shrank, culminating with the chart on the right in the image shown above.
But here’s the thing. Even after I had figured out what the chart was intended to convey, I still
found it confusing. I wondered if a lay-reader, someone who is not a professional
mathematician, would manage to parse out the intended meaning. It was not long before I
found out. The image below shows one of the tweets that appeared in response less than an hour later.
As I had suspected, a common reaction was to dismiss the chart as yet another example of a
bad data visualization created by an innumerate graphics designer. Indeed, that had been my
initial reaction. But this particular example is more interesting. Yes, it is a bad graphic, for the
simple reason that it does not convey the intended message. But not because of the
illustrator’s innumeracy. In fact, numerically, it appears to be as accurate as you can get with a
pie chart. The before and after charts do seem to have regions whose areas correspond to the
actual data on the tax-payer population.
This example was too good to pass up as an educational tool: asking a class to discuss what the
chart is intended to show could lead to a number of good insights into how mathematics can
help us understand the world, while at the same time having the potential to mislead. I was
tempted to write about it in my October post, but wondered if I should delay a couple of
months to avoid an example that was at the heart of a current, somewhat acrimonious party-political debate. As it turned out, the September 30 death of the game-show host Monty Hall
resolved the issue for me—I had to write about that—and then November presented another
“must do” story (the use of mathematics in election gerrymandering). So this month, with the
political background and the tax votes now a matter of historical record, I have my first real
opportunity to run this story.
The two-month delay brought home to me just how problematic this particular graphic is. Even
knowing in advance what the issue is, I still found I had to concentrate to “see” the chart as
conveying the message intended. That “1%” label continued to clash with the relative area of
the labeled region.
It’s a bit like those college psychology-class graphics that show two digits in different font sizes,
and ask you to point to the digit that represents the bigger integer. If the font sizes clash with
the sizes of the integers, you take measurably longer to identify the correct one.
For me, the really big take-home lesson from the tax-proposal graphic is the power of two
different mathematical representations of proportions: pie charts and numerical percentages.
Each, on its own, is instant. In the case of the pie chart, the representation draws on the innate
human cognitive ability to judge relative areas in simple, highly symmetrical figures like circular
disks or rectangles. With percentages, there is some initial learning required—you have to
understand percentages—but once you have done that, you know instantly what is meant by
figures such as “5%” or “75%.”
But how do you get that understanding of the meaning of numerical percentages? For most of
us (I suspect all of us), it comes from being presented (as children) with area examples like pie
charts and subdivided rectangles. This sets us up to be confused, bigly, by examples where
those two representations are used in the same graphic but with the percentage representing
something other than the area of the segment (or what that area is intended to represent).
The message, then, from this particular example—or at least the message I got from it—is that
powerful graphics are like any powerful tool: their power for good depends on using them
wisely; if used incorrectly, they can confuse and mislead. And make no mistake about it,
numbers are incredibly powerful tools. Their invention alone is by far the greatest
mathematical invention in human history. That’s why, in virtually every nation in the world, math is the
only mandated school subject apart from the native language.
American courts have never appeared to be very receptive to mathematical arguments,
in large part, some (including me) have assumed, because many judges do not feel
confident evaluating mathematical reasoning and, in the case of jury trials, no doubt
because they worry that unscrupulous, math-savvy lawyers could use formulas and
statistics to fool jury members. There certainly have been some egregious examples of
this, particularly when bogus probability arguments have been presented. Indeed, one
classic misuse of conditional probability is now known as the “prosecutor’s fallacy."
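The prosecutor's fallacy is the confusion of P(evidence | innocent) with P(innocent | evidence). A toy Bayesian calculation makes the gap vivid; all the numbers below (match rate, population size) are invented purely for illustration:

```python
# Illustrative (made-up) numbers: a forensic test that matches
# 1 in 10,000 innocent people, in a suspect pool of 1,000,000.
match_prob_if_innocent = 1 / 10_000   # P(match | innocent)
population = 1_000_000
guilty = 1                            # assume exactly one true culprit

# Expected number of innocent people who match, versus the culprit.
innocent_matches = (population - guilty) * match_prob_if_innocent
guilty_matches = guilty * 1.0         # the culprit always matches

# The fallacy: quoting P(match | innocent) = 0.01% as if it were
# P(innocent | match). Bayes' rule gives the correct figure:
p_guilty_given_match = guilty_matches / (guilty_matches + innocent_matches)

print(f"P(match | innocent) = {match_prob_if_innocent:.4%}")
print(f"P(guilty | match)   = {p_guilty_given_match:.1%}")  # about 1%, not 99.99%
```

With these numbers, a "1 in 10,000" match rate leaves roughly a hundred innocent matches in the pool, so the probability of guilt given only the match is about 1%.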
Another example where the courts have trouble with probability is in cases involving
DNA profiling, particularly Cold Hit cases, where a DNA profile match is the only hard
evidence against a suspect. I myself have been asked to provide expert testimony in
some such cases, and I wrote about the issue in this column in September and October
In both kinds of case, the courts have good reason to proceed with caution. The
prosecutor’s fallacy is an easy one to fall into, and with Cold Hit DNA identification there
is a real conflict between frequentist and Bayesian probability calculations. In neither
case, however, should the courts try to avoid the issue. When evidence is presented,
the court needs to have as accurate an assessment as possible as to its reliability or
veracity. That frequently has to be in the form of a probability estimate.
Now the courts are facing another mathematical conundrum. And this time, the case
has landed before the US Supreme Court. It is a case that reaches down to the very
foundation of our democratic system: How we conduct our elections. Not how we use
vote counts to determine winners, although that is also mathematically contentious, as I
wrote about in this column in November of 2000, just before the Bush v Gore Presidential
Election outcome ended up before the Supreme Court. Rather, the issue before the
Court this time is how states are divided up into electoral districts for state elections.
How a state carves up voters into state electoral districts can have a huge impact on the
outcome. In six states, Alaska, Arizona, California, Idaho, Montana, and Washington,
the apportioning is done by independent redistricting commissions. This is generally
regarded—at least by those who have studied the issue—as the least problematic
approach. In seven other states, Arkansas, Colorado, Hawaii, Missouri, New Jersey,
Ohio, and Pennsylvania, politician commissions draw state legislative district maps. In
the remaining 37 states, the state legislatures themselves are responsible for state
legislative redistricting. And that is where the current problem arises.
There is, of course, a powerful temptation for the party in power to redraw the electoral
district maps to favor their candidates in the next election. And indeed, in the states
where the legislatures draw the maps, both major political parties have engaged in that
practice. One of the first times this occurred was in 1812, when Massachusetts
governor Elbridge Gerry redrew district boundaries to help his party in an upcoming
senate election. A journalist at the Boston Gazette observed that one of the contrived districts in Gerry’s new map looked like a giant salamander, and gave such partisan redistricting a name, combining Gerry and mander to create the new word gerrymander.
Though Gerry lost his job over his sleight-of-hand, his redistricting did enable his party
to take over the state senate. And the name stuck.
Illegality of partisan gerrymandering is generally taken to stem from the 14th
Amendment, since it deprives the smaller party of the equal protection of the laws, but it
has also been argued to be, in addition, a 1st Amendment issue—namely, an
apportionment that has the purpose and effect of burdening a group of voters’ representational rights.
In 1986, the Supreme Court issued a ruling that partisan gerrymandering, if extreme
enough, is unconstitutional, but it has yet to throw out a single redistricting map. In large
part, the Supreme Court’s inclination to stay out of the redistricting issue is based on a
recognition that both parties do it, and over time, any injustices cancel out, at least
numerically. Historically, this was, generally speaking, true. Attempts to gerrymander
have tended to favor both parties to roughly the same extent. But in 2012, things took a
dramatic turn with a re-districting process carried out in Wisconsin.
That year, the recently elected Republican state legislature released a re-districting map
generated using a sophisticated mathematical algorithm running on a powerful
computer. And that map was in an altogether new category. It effectively guaranteed
Republican majorities for the foreseeable future. The Democrat opposition cried foul, a
Federal District Court agreed with them, and a few months ago the case found its way
to the Supreme Court.
That the Republicans come across as the bad actors in this particular case is likely just
an accident of timing; they happened to come to power at the very time when political
parties were becoming aware of what could be done with sophisticated algorithms. If
history is any guide, either one of the two main parties would have tried to exploit the
latest technology sooner or later. In any event, with mathematics at the heart of the new
gerrymandering technique, the only way to counter it may be with the aid of equally sophisticated mathematics.
The most common technique used to gerrymander a district is called “packing and
cracking." In packing, you cram as many of the opposing party’s voters as possible into
a small number of “their” districts where they will win with many more votes than
necessary. In cracking, you spread the opposing party’s voters across as many of “your”
districts as possible so there are not enough votes in any one of those districts to ever win a seat.
A form of packing and cracking arises naturally when better-educated liberal-leaning
voters move into cities and form a majority, leaving those in rural areas outnumbered
by less-educated, more conservative-leaning voters. (This is thought to be one of the
factors that has led to the increasing polarization in American politics.) Solving that
problem is, of course, a political one for society as a whole, though mathematics can be
of assistance by helping to provide good statistical data. Not so with partisan
gerrymandering, where mathematics has now created a problem that had not arisen
before, for which mathematics may of necessity be part of the solution.
When Republicans won control of Wisconsin in 2010, they used a sophisticated
computer algorithm to draw a redistricting map that on the surface appeared fair—no
salamander-shaped districts—but in fact was guaranteed to yield a Republican majority
even if voter preferences shifted significantly. Under the new map, in the 2012 election,
Republican candidates won 48 percent of the vote, but 60 of the state’s 99 legislative
seats. The Democrats’ 51 percent that year translated into only 39 seats. Two years
later, when the Republicans won the same share of the vote, they ended up with 63
seats—a 24-seat differential.
Recognizing what they saw as a misuse of mathematics to undermine the basic
principles of American democracy, a number of mathematicians around the country
were motivated to look for ways to rectify the situation. There are really two issues to be
addressed. One is to draw fair maps—a kind of “positive gerrymandering.” The other is
to provide reliable evidence to show that a particular map has been intentionally drawn
to favor one party over another, if such occurs, and moreover to do so in a way that the
courts can understand and accept. Neither issue is easy to solve, and without
mathematics, both are almost certainly impossible.
For the first issue, a 2016 Supreme Court ruling gave a hint about what kind of fairness
measure it might look kindly on: one that captures the notion of “partisan symmetry,”
where each party has an equal opportunity to convert its votes into seats. The
Wisconsin case now presents the Supreme Court with the second issue.
When, last year, a Federal District Court in Wisconsin threw out the new districting map,
they cited both the 1st and 14th Amendments. It was beyond doubt, the court held, that
the new maps were “designed to make it more difficult for Democrats, compared to
Republicans, to translate their votes into seats.” The court rejected the Republican
lawmakers’ claim that the discrepancy between vote share and legislative seats was
due simply to political geography. The Republicans had argued that Democratic voters
are concentrated in urban areas, so their votes have an impact on fewer races, while
Republicans are spread out across the state. But, while that is true, geography alone
does not explain why the Wisconsin maps are so skewed.
So, how do you tell if a district is gerrymandered? One way, which has been around for
some time, is to look at the geographical profile. The gerrymandering score, G, is
defined by: G = gP/A, where
g: the district’s boundary length, minus natural boundaries (like coastlines and rivers)
P: the district’s total perimeter
A: the district’s area
The higher the score, the wilder the apportionment is as a geographic region, and
hence the more likely it is to have been gerrymandered.
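The score is straightforward to compute. In the sketch below the boundary lengths and areas are invented for illustration; the only point is that a contorted drawn boundary drives G up:

```python
def gerrymander_score(g: float, perimeter: float, area: float) -> float:
    """G = g*P/A, as defined in the text: g is the district's boundary
    length minus natural boundaries, P its total perimeter, A its area."""
    return g * perimeter / area

# A compact square district, 10 km on a side, with no natural boundaries
# (so g equals the full perimeter):
compact = gerrymander_score(g=40, perimeter=40, area=100)

# A contorted district of the same area whose drawn boundary
# wanders for 200 km:
contorted = gerrymander_score(g=200, perimeter=200, area=100)

print(compact, contorted)   # 16.0 400.0
```

The compact district scores 16; the salamander scores 400, flagging it as a likely gerrymander.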
That approach is sufficiently simple and sensible to be acceptable to both society and
the courts, but unfortunately does not achieve the desired aim of fairness. And, more to
the point in the Wisconsin case, use of sophisticated computer algorithms can draw
maps that have a low gerrymandering score and yet are wildly partisan.
The Wisconsin Republicans’ algorithm searched through thousands of possible maps
looking for one that would look reasonable according to existing criteria, but would
favor Republicans no matter what the election day voting profile might look like. As
such, it would be a statistical outlier. To find evidence to counter that kind of approach,
you have to look at the results the districting produces when different voting profiles are
fed into it.
One promising way to identify gerrymandering is with a simple mathematical formula
suggested in 2015, called the “efficiency gap." It was the use of this measure that
caused, at least in part, the Wisconsin map to be struck down by the court. It is a simple
idea—and as I noted, simplicity is an important criterion, if it is to stand a chance of
being accepted by society and the courts.
You can think of a single elector’s vote as being “wasted” if it is cast in a district where
their candidate loses or it is cast in a district where their candidate would have won
there anyway. The efficiency gap measures those “wasted” votes. For each district, you
total up the number of votes the winning candidate receives in excess of what it would
have taken to elect them in that district, and you total up the number of votes the losing
candidate receives. Those are the two parties’ “wasted votes” for that district.
You then calculate the difference between those “wasted-vote” totals for each of the two
parties, and divide the answer by the total number of votes in the state. This yields a
single percentage figure: the efficiency gap. If that works out to be greater than 7%,
the system’s developers suggest, the districting is unfair.
By way of an example, let’s see what the efficiency gap will tell us about the last
Congressional election. In particular, consider Maryland’s 6 th Congressional district,
which was won by the Democrats. It requires 159K votes to win. In the last election,
there were 186K Democrat votes, so 186K – 159K = 26K Democrat votes were
“wasted,” and 133K Republican votes, all of which were “wasted.”
In Maryland as a whole, there were 510K Democrat votes “wasted” and 789K
Republican votes “wasted.” So, statewide, there was a net “waste” of 789K – 510K =
279K Republican votes.
There were 2,598K votes cast in total. So the efficiency gap is 279K/2,598K = 10.7% in
favor of the Democrats.
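The whole computation is short enough to code directly. This is a sketch of the wasted-votes definition given above; the tie-breaking convention for "votes needed to win" (a bare majority plus one) and the toy district numbers are my own assumptions:

```python
def efficiency_gap(districts):
    """districts: list of (dem_votes, rep_votes) pairs, one per district.
    Returns the efficiency gap as a fraction of all votes cast: positive
    means Democrats wasted more votes (the map favors Republicans),
    negative means the map favors Democrats."""
    dem_wasted = rep_wasted = total = 0
    for dem, rep in districts:
        need = (dem + rep) // 2 + 1      # bare majority (one convention)
        if dem > rep:
            dem_wasted += dem - need     # winner's surplus votes
            rep_wasted += rep            # every losing vote is wasted
        else:
            rep_wasted += rep - need
            dem_wasted += dem
        total += dem + rep
    return (dem_wasted - rep_wasted) / total

# A hypothetical five-district map, packed and cracked against Democrats
# (vote counts in thousands):
toy = [(45, 55), (45, 55), (45, 55), (45, 55), (90, 10)]
print(f"efficiency gap: {efficiency_gap(toy):.1%}")   # 38.6%
```

A gap of 38.6% is far above the 7% threshold the measure's developers propose, so this toy map would be flagged as unfair.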
I should note, however, that the gerrymandering problem is currently viewed as far more
of a concern in state elections than in congressional races. Last year, two social scientists published the results they obtained using computer simulations to measure
the extent of intentional gerrymandering in congressional district maps across most of
the 50 states. They found that on the national level, it mostly canceled out between the
parties. So banning only intentional gerrymandering would likely have little effect on the
partisan balance of the U.S. House of Representatives. The efficiency gap did,
however, play a significant role in the Wisconsin court’s decision.
Another approach, developed by a team at Duke University, takes aim at the main idea
behind the Wisconsin redistricting algorithm—searching through many thousands of
possible maps looking for ones that met various goals set by the creators, any one of
which would, of necessity, be a statistical outlier. To identify a map that has been
obtained in this way, you subject it to many thousands of random tweaks. If the map is
indeed an outlier, the vast majority of tweaks will yield a fairly unremarkable map. So,
you compare the actual map with all those thousands of seemingly almost identical, and
apparently reasonable, variations you have generated from it. If the actual map
produces significantly different election results from all the others, when presented with
a range of different statewide voting profiles, you can conclude that it is indeed an
“outlier” — a map that could only have been chosen to deliberately subvert the will of the voters.
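The Duke-style outlier test can be sketched in miniature. Everything below is a heavily simplified toy, not the team's actual method: a "map" is reduced to a list of district-level partisan vote shares, a "tweak" just shifts a small slice of voters between two randomly chosen districts, and the comparison is over a range of statewide swings:

```python
import random

random.seed(42)

def seats(leans, swing):
    """Seats won by party A when each district's A-share is lean + swing."""
    return sum(1 for lean in leans if lean + swing > 0.5)

def tweak(leans, step=0.01):
    """Shift a small share of voters between two random districts."""
    leans = list(leans)
    i, j = random.sample(range(len(leans)), 2)
    leans[i] += step
    leans[j] -= step
    return leans

# A hypothetical 9-district map engineered so party A wins 6 seats with
# only ~42% of the statewide vote, across a range of statewide swings:
# party B is cracked (narrow 53% A districts) and packed (20% A districts).
engineered = [0.53] * 6 + [0.20] * 3

swings = [s / 100 for s in range(-2, 3)]   # statewide swings of +/- 2 points
actual = [seats(engineered, s) for s in swings]

# Generate an ensemble of lightly tweaked variants and compare outcomes.
ensemble = []
for _ in range(1000):
    variant = engineered
    for _ in range(20):
        variant = tweak(variant)
    ensemble.append([seats(variant, s) for s in swings])

stable = sum(1 for e in ensemble if e == actual)
print(f"actual seat profile {actual}; "
      f"{stable} of 1000 tweaked variants reproduce it")
```

If the engineered map's seat profile is rarely reproduced by its own near-identical variants, that is statistical evidence the map was selected, not drawn innocently.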
And this is where we—and the Supreme Court—are now. We have a problem for our
democracy created using mathematics. Mathematicians are looking for mathematical ways
to solve it, and there are already two candidate “partisan gerrymandering tests” in the
arena. Historically, the Supreme Court has proven resistant to allowing math into the
courtroom. But this time, it looks like they may have no choice. At least as long as state
legislatures continue to draw the districting maps. Maybe the very threat of having to
deal with mathematical formulas and algorithms will persuade the Supreme Court to
recommend that Congress legislate to require all states to use independent
commissions to draw the districting maps. Legislation under pain of math. We will know soon enough.
Monty Hall with a contestant in Let's Make a Deal.
The news that American TV personality Monty Hall died recently (The New York Times, September 30) caused two groups of people to sit up and take note. One group, by far the larger, was
American fans of television game shows in the 1960s and 70s, who tuned in each week to his
show “Let’s Make a Deal.” The other group includes lovers of mathematics the world over, most of
whom, I assume, have never seen the show.
I, and by definition all readers of this column, are in that second category. As it happens, I have
seen a key snippet of one episode of the show, which a television documentary film producer
procured to use in a mathematics program we were making about probability theory. Our
interest, of course, was not the game show itself, but the famous — indeed infamous —
“Monty Hall Problem” it let loose on an unsuspecting world.
To recap, at a certain point in the show, Monty would offer one of the audience participants
the opportunity to select one of three doors on the stage. Behind one, he told them, was a
valuable prize, such as a car; behind each of the other two was a booby prize, say a goat. The
contestant chose one door. Sometimes, that was the end of the exchange, and Monty would
open the door to reveal what the contestant had won. But on other occasions, after the
contestant had chosen a door, Monty would open one of the two unselected doors to reveal a
booby prize, and then give them the opportunity to switch their selection. (Monty could always
do this since he knew exactly which door the prize was hidden behind.)
So, for example, if the contestant first selects Door 2, Monty might open Door 1 to reveal a
goat, and then ask if the contestant wanted to switch their choice from Door 2 to Door 3. The
mathematical question here is, does it make any difference if the contestant switches their
selection from Door 2 to Door 3? The answer, which on first meeting this puzzler surprises
many people, is that the contestant doubles their chance of winning by switching. The
probability goes up from an original 1/3 of Door 2 being the right guess, to 2/3 that the prize is
behind Door 3.
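Skeptical readers can check the 1/3-versus-2/3 claim by simulation. The sketch below uses the standard idealization of the game: Monty always knows where the prize is, always opens a goat door the contestant did not pick, and always offers the switch:

```python
import random

def monty_trial(switch: bool) -> bool:
    """Play one round of the idealized game; return True if the
    contestant wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the pick nor the prize.
    opened = random.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

random.seed(1)
trials = 100_000
stay = sum(monty_trial(False) for _ in range(trials)) / trials
swap = sum(monty_trial(True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")   # roughly 0.333 vs 0.667
```

Over 100,000 trials the staying contestant wins about a third of the time and the switching contestant about two thirds, exactly as the analysis predicts.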
I have discussed this problem in Devlin’s Angle on at least two occasions, the most recent being
December 2005, and have presented it in a number of articles elsewhere, including national
newspapers. That on each occasion I have been deluged with mail saying my solution is
obviously false was never a surprise, since the problem is famous precisely because it presents
the unwary with a trap. That, after all, is why I, and other mathematics expositors, use it! What
continues to amaze me is how unreasonably resistant many people are to stepping back and
trying to figure out where they went wrong in asserting that switching doors cannot possibly
make any difference. For such reflection is the very essence of learning.
Wrapping your mind around the initially startling information that switching the doors doubles
the probability of winning is akin to our ancestors coming to terms with the facts that the Earth
is not flat or that the Sun does not move around the Earth. In all cases, we have to examine
how it can be that what our eyes or experience seem to tell us is misleading. Only then can we
accept the rock-solid evidence that science or mathematics provides.
Some initial resistance is good, to be sure. We should always be skeptical. But for us
and society to continue to advance, we have to be prepared to let go of our original belief when
the evidence to the contrary becomes overwhelming.
The Monty Hall problem is unusual (though by no means unique) in being simple to state and
initially surprising, yet once you have understood where your initial error lies, the simple
correct answer is blindingly obvious, and you will never again fall into the same trap you did on the first encounter. Many issues in life are much less clear-cut.
BTW, if you have never encountered the problem before, I will tell you it is not a trick question.
It is entirely a mathematical puzzle, and the correct mathematics is simple and straightforward.
You just have to pay careful attention to the information you are actually given, and not remain
locked in the mindset of what you initially think it says. Along the way, you may realize you
have misunderstood the notion of probability. (Some people maintain that probabilities cannot
change, a false understanding that most likely results from first encountering the notion in
terms of the empirical study of rolling dice and selecting colored beans from jars.) So reflection
on the Monty Hall Problem can provide a valuable lesson in coming to understand the hugely
important concept of mathematical probability.
As it happens, Hall’s death comes at a time when, for those of us in the United States, the
system of evidence-based, rational inquiry which made the nation a scientific, technological,
and financial superpower is coming under dangerous assault, with significant resources being
put into a sustained attempt to deny that there are such things as scientific facts. For scientific
facts provide a great leveler, favoring no one person or one particular group, and are thus to
some, a threat.
Carl Sagan saw this coming. In his 1995 book The Demon-Haunted World, he wrote:
“I have a foreboding of an America in my children’s or my grandchildren’s time — when
the United States is a service and information economy; when nearly all the key
manufacturing industries have slipped away to other countries; when awesome
technological powers are in the hands of a very few, and no one representing the public
interest can even grasp the issues; when the people have lost the ability to set their
own agendas or knowledgeably question those in authority; when, clutching our
crystals and nervously consulting our horoscopes, our critical faculties in decline,
unable to distinguish between what feels good and what’s true, we slide, almost
without noticing, back into superstition and darkness. ...”
Good scientists, such as Sagan, are not just skilled at understanding what is, they can
sometimes extrapolate rationally to make uncannily accurate predictions of what the future
might bring. It is chilling, but now a possibility that cannot be ignored, that a decade from now,
I could be imprisoned for writing the above words. Today, the probability that will happen is
surely extremely low, albeit nonzero. But that probability could change. As mathematicians, we
have a clear responsibility to do all we can to ensure that Sagan’s words do not describe the
world in which our children and grandchildren live.
Keith Devlin and Jonathan Borwein talk to host Robert Krulwich on stage at the World Science Festival in 2011.
At the end of this week I fly to Australia to speak and participate in the Jonathan Borwein
Commemorative Conference in Newcastle, NSW, Borwein’s home from 2009 onwards, when he
moved to the Southern hemisphere after spending most of his career at various Canadian
universities. Born in Scotland in 1951, Jonathan passed away in August last year, leaving behind
an extensive collection of mathematical results and writings, as well as a long list of service
activities to the mathematical community. [For a quick overview, read the brief obituary
written by his long-time research collaborator David Bailey in their joint blog Math Drudge. For
more details, check out his Wikipedia entry.]
Jonathan’s (I cannot call him by anything but the name I always used for him) career path and
mine crossed on a number of occasions, with both of us being highly active in mathematical
outreach activities and both of us taking an early interest in the use of computers in
mathematics. Over the years we became good friends, though we worked together on a project
only once, co-authoring an expository book on experimental mathematics, titled The Computer as Crucible, published in 2008.
Most mathematicians, myself included, would credit Jonathan as the father of experimental
mathematics as a recognized discipline. In the first chapter of our joint book, we defined
experimental mathematics as “the use of a computer to run computations—sometimes no
more than trial-and-error tests—to look for patterns, to identify particular numbers and
sequences, to gather evidence in support of specific mathematical assertions that may
themselves arise by computational means, including search.”
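A toy instance of that definition, invented here purely to illustrate the workflow (compute, spot a pattern, conjecture, then go prove it): summing the first n odd numbers and noticing what comes out.

```python
# Experimental mathematics in miniature: run a computation, look for a
# pattern, and formulate a conjecture worth proving.
partial_sums = []
total = 0
for n in range(1, 11):
    total += 2 * n - 1          # add the n-th odd number
    partial_sums.append(total)

print(partial_sums)             # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]

# The pattern leaps out: every partial sum is a perfect square.
conjecture = all(s == n * n for n, s in enumerate(partial_sums, start=1))
print("sum of first n odd numbers == n^2 for n <= 10:", conjecture)
```

The computer supplies the evidence; the theorem (the sum of the first n odd numbers is n²) still wants a rigorous proof, which is exactly the division of labor Jonathan had in mind.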
The goal of such work was to gather information and gain insight that would eventually give rise
to the formulation and rigorous proof of a theorem. Or rather, I should say, that was Jonathan’s
goal. He saw the computer, and computer-based technologies, as providing new tools to
formulate and prove mathematical results. And since he gets to define what “experimental
mathematics” is, that is definitive. But that is where our two interests diverged significantly.
In my case, the rapidly growing ubiquity of ever more powerful and faster computers led to an
interest in what I initially called “soft mathematics” (see my 1998 book Goodbye Descartes) and
subsequently referred to as “mathematical thinking,” which I explored in a number of articles
and books. The idea of mathematical thinking is to use a mathematical approach, and often
mathematical notations, to gather information and gain insight about a task in a domain that
enables improved performance. [A seminal, and to my mind validating, example of that way of
working was thrust my way shortly after September 11, 2001, when I was asked to join a team
tasked with improving defense intelligence analysis.]
Note that the same phrase “gather information and gain insight” occurs in both the definition
of experimental mathematics and that of mathematical thinking. In both cases, the process is
designed to lead to a specific outcome. What differs is the nature of that outcome. (See my
2001 book InfoSense, to get the general idea of how mathematical thinking works, though I
wrote that book before my Department of Defense work, and before I adopted the term “mathematical thinking.”)
It was our two very different perspectives on the deliberative blending of mathematics and
computers that made our book The Computer as Crucible such a fascinating project for the two of us.
But that book was not the first time our research interests brought us together. In 1988, the
American Mathematical Society introduced a new section of its ten-issues-a-year Notices, sent
out to all members, called “Computers and Mathematics,” the purpose of which was both
informational and advocacy.
Though computers were originally invented by mathematicians to perform various numerical
calculations, professional mathematicians were, by and large, much slower at making use of
computers in their work and their teaching than scientists and engineers. The one exception
was the development of a number of software systems for the preparation of mathematical
manuscripts, which mathematicians took to like ducks to water.
In the case of research, mathematicians’ lack of interest in computers was perfectly
understandable—computers offered little, if any, benefit. (Jonathan was one of a very small
number of exceptions, and his approach was initially highly controversial, and occasionally
derided.) But the writing was on the wall—or rather on the computer screen—when it came to
university teaching. Computers were clearly going to have a major impact in mathematics teaching.
The “Computers and Mathematics” section of the AMS Notices was intended to be a change
agent. It was originally edited by the Stanford mathematician Jon Barwise, who took care of it
from the first issue in the May/June 1988 Notices, to February 1991, and then by me until we
retired the section in December 1994. It is significant that 1988 was the year Stephen Wolfram
released his mathematical software package Mathematica. And in 1992, the first issue of the
new research journal Experimental Mathematics was published.
Over its six-and-a-half-year run, the column published 59 feature articles, 19 editorial essays,
and 115 reviews of mathematical software packages — 31 features, 11 editorials, and 41
reviews under Barwise, 28 features, 8 editorials, and 74 reviews under me. [The Notices
website has a complete index.] One of the feature articles published under my watch was
“Some Observations of Computer Aided Analysis,” by Jonathan Borwein and his brother Peter,
which appeared in October 1992. Editing that article was my first real introduction to
something called “experimental mathematics.” For the majority of mathematicians, reading it
was their introduction.
From then on, it was clear to both of us that our view of “doing mathematics” had one feature
in common: we both believed that for some problems it could be productive to engage in
mathematical work that involved significant interaction with a computer. Neither of us was by
any means the first to recognize that. We may, however, have been among the first to conceive
of such activity as constituting a discipline in its own right, and each to erect a shingle to
advertise what we were doing. In Jonathan’s case, he was advancing mathematical knowledge;
for me it was about utilizing mathematical thinking to improve how we handle messy, real-world problems. In both cases, we were engaging in mental work that could not have been
done before powerful, networked computers became available.
It’s hard to adjust to Jonathan no longer being among us. But his legacy will long outlast us all. I
am looking forward to re-living much of that legacy in Australia in a few days' time.
Exactly 30 years ago, I and my family arrived in the U.S. from the U.K. to take up a one-year visiting position in the mathematics department at Stanford University. (We landed on July 28, 1987.) That one year was subsequently extended to two, and in the end we never returned to the U.K. A very attractive offer of a newly endowed chair in mathematics at Colby College in Maine provided the pull. But equally significant was a push from the U.K.
The late 1980s were a bad time for universities in Britain, as Prime Minister Margaret Thatcher launched a full-scale assault on higher education, motivated in part by a false understanding of what universities do, and in part by personal vindictiveness stemming from her being criticized by academics for her poor performance as Minister for Education some years earlier. My own university, Lancaster, where I had been a regular faculty member since 1977, had been a source of some of the most vocal criticisms of the then Minister Thatcher, and accordingly was dealt a particularly heavy funding hit when Prime Minister Thatcher started to wield her axe. A newly appointed vice chancellor (president), with a reputation for tough leadership as a dean, was hired from the United States to steer the university through the troubled waters ahead.
One of the first decisions the new vice chancellor made was to cut the mathematics department faculty by roughly 50%, from around 28 to 14. (I forget the actual numbers.) The problem he faced in achieving that goal was that in the British system at the time, once a new Lecturer (= Assistant Professor) had passed a three-year probationary period, they had tenure for life. The only way to achieve a 50% cut in faculty was to force out anyone who could be “persuaded” to go. That boiled down to putting pressure on those whose reputation was sufficiently good for them to secure a position elsewhere. (So, a strategy of “prune from the top,” arguably more productive in the garden than a university.)
In my case, the new vice chancellor made it clear to me soon after his arrival that my prospects of career advancement at Lancaster were low, and I could expect ever increasing teaching loads that would hamper my research, and lack of financial support to attend conferences. As a research mathematician early in my career, with my work going well and my reputation starting to grow, that prospect was ominous. Though I was not sure whether he would ever actually follow through with his threat, it seemed prudent to start thinking in terms of a move, possibly one that involved leaving the U.K.
Then, just as all of this was going on, out of the blue I got the invitation from Stanford. (I had started working on a project that aligned well with a group at Stanford who had just set up a new research center to work on the same issues. As a result, I had gotten to know some of them, mostly by way of an experimental new way to communicate called “e-mail,” which universities were just starting to use.)
In my meeting with the vice chancellor to request permission to accept the offer and discuss the arrangements, I was told in no uncertain terms that I would be wise not to return after my year in California came to an end. The writing was on the wall. Lancaster wanted me gone. In addition, other departmental colleagues were also looking at opportunities elsewhere, so even if I were to return to Lancaster after my year at Stanford, it might well be to a department that had lost several of its more productive mathematicians. (It would have been. The vice chancellor achieved his 50% departmental reduction in little more than two years.)
Yes, these events were all so long ago, in a different country. So why am I bringing the story up now? The answer is that, as is frequently observed, history can provide cautionary lessons for what may happen in the future.
Those of us in mathematics are deeply aware of the hugely significant role the subject plays in the modern world, and have seen with every generation of students how learning mathematics can open so many career doors. We also know sufficient mathematics to appreciate the enormous impact on society that new mathematical discoveries can have—albeit in many cases years or decades later. To us, it is inconceivable that a university—an institution having the sole purpose of advancing and passing on new knowledge for the good of society—would ever make a conscious decision to cut down (especially from the top), or eliminate, a mathematics department.
But to people outside the universities, things can look different. Indeed, as I discovered during my time as an academic dean (in the U.S.), the need for mathematics departments engaged in research is often not recognized by faculty in other departments. Everyone recognizes the need for each new generation of students to be given some basic mathematics instruction, of course. But mathematics research? That’s a much harder sell. In fact, it is an extremely hard sell. Eliminating the research mathematicians in a department and viewing it as having a solely instructional role can seem like an attractive way to achieve financial savings. But it can come at a considerable cost to the overall academic/educational environment. Not least because of the message conveyed to the students.
As things are, students typically graduate from high school thinking of mathematics as a toolbox of formulas and procedures for solving certain kinds of problems. But at university level, they should come to understand it as a particular way of thinking. To that end, they should be exposed to an environment where tasks can be approached on their own terms, with mathematicians being one of any number of groups of experts who can bring a particular way of thinking that may, or may not, be effective.
The educational importance of having an active mathematics research group in a university is particularly important in today’s world. As I noted in an article in The Huffington Post in January, pretty well all the formulas and procedures that for many centuries have constituted the heart of a university mathematics degree have now been automated and are freely available on sites such as Wolfram Alpha. Applying an implemented, standard mathematical procedure to solve, say, a differential equation, is now in the same category as using a calculator to add up a column of numbers. Just enter the data correctly and the machine will do the rest.
In particular, a physicist or an engineer (say) at a university can, for the most part, carry out their work without the need for specialist mathematical input. (That was always largely the case. It is even more so today.) But one of the functions of a university is to provide a community of experts who are able to make progress when the available canned procedures do not quite fit the task at hand. The advance of technology does not eliminate the need for creative, human expertise. It simply shifts the locus of where such expertise is required. Part of a university education is being part of a community where that reliance on human expertise is part of the daily activities; a community where all the domain specialists are experts in their domains, and able to go beyond the routine.
It is easy to think of education as taking place in a classroom. But that’s just not what goes on. What you find in classrooms is instruction, maybe involving some limited discussion. Education and learning occur primarily by way of interpersonal interaction in a community. That’s why we have universities, and why students, and often their parents, pay to attend them. It’s why “online universities” and MOOCs have not replaced universities, and to my mind never will. The richer and more varied the community, the better the education.
Lest I have given the impression that my focus is on topline research universities, stocked with award winning academic superstars, let me end by observing that nothing I have said refers to level of achievement. Rather it is all about the attitude of mind and working practices of the faculty. As long as the mathematics faculty love mathematics, and enjoy doing it, and are able to bring their knowledge to bear on a new task or problem, they contribute something of real value to the environment in which the students learn. It’s a human thing.
A university that decides to downgrade a particular discipline to do little more than provide basic instruction is diminishing its students' educational experience, and is no longer a bona fide university. (It may well, of course, continue to provide a valuable service. The university, my focus in this essay, is just one form of educational institution among many.)
The great mathematician Carl Friedrich Gauss is frequently quoted as saying “What we need are
notions, not notations.” [In “About the proof of Wilson's theorem,” Disquisitiones
Arithmeticae (1801), Article 76.]
While most mathematicians would agree that Gauss was correct in pointing out that concepts,
not symbol manipulation, are at the heart of mathematics, his words do have to be properly
interpreted. While a notation does not matter, a representation can make a huge difference. The
distinction is that developing or selecting a representation for a particular mathematical concept
(or notion) involves deciding which features of the concept to capture.
For example, the form of the ten digits 0, 1, … , 9 does not matter (as long as they are readily
distinguishable), but the usefulness of the Hindu-Arabic number system is that it embodies
base-10 place-value representation of whole numbers. Moreover, it does so in a way that makes both
learning and using Hindu-Arabic arithmetic efficient.
Likewise, the choice of 10 as the base is optimal for a species that has highly manipulable hands
with ten digits. Although base-10 arithmetic eventually became the standard, other systems
were used in different societies, but they too evolved from the use of the hands and sometimes
the feet for counting: base-12 (where finger-counting used the three segments of each of the four
fingers) and base-20 (where both fingers and toes were used). Base-12 arithmetic and base-20
arithmetic both remained in regular use in the monetary system in the UK when I was a child
growing up there, with 12 pennies giving one shilling and 20 shillings one pound. And several
languages continue to carry reminders of earlier use of both bases — English uses phrases such
as “three score and ten” to mean 70 (= 3×20 + 10), and French articulates 85 as “quatre-vingt-cinq”
(4×20 + 5).
Another number system we continue to use today is base-60, used in measuring time (seconds
and minutes) and in circular measurement (degrees in a circle). Presumably the use of 60 as a
base came from combining the finger-and-toe bases 10, 12, and 20; being the least common
multiple of all three, 60 allows each of them to be used as most convenient.
These different base-number representation systems all capture features that make them useful to
humans. Analogously, digital computers are designed to use binary arithmetic (base 2), because
that aligns naturally with the two states of an electronic gate (open or closed, on or off).
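All of these systems rest on the same repeated-division idea. As a quick illustration (this helper function is my own, not something from the text), here is how a number is expressed in an arbitrary base:

```python
def to_base(n, base):
    """Represent a non-negative integer as a list of digits in the given base,
    most significant digit first, by repeated division with remainder."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append(r)
    return digits[::-1]

# 70 in base 20 is "three score and ten": 3 twenties plus 10
print(to_base(70, 20))  # [3, 10]
# 85 in base 20, as in French "quatre-vingt-cinq": 4 twenties plus 5
print(to_base(85, 20))  # [4, 5]
```

The same routine with `base=2` produces the binary representation a digital computer uses, and with `base=60` the sexagesimal digits of Babylonian-style time and angle measurement.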
In contrast, the shapes of the Hindu-Arabic numerals are an example of a superfluous feature of
the representation. The fact that it is possible to draw the numerals in a fashion whereby each
digit has the corresponding number of angles, like this
may be a historical echo of the evolution of the symbols, but whether or not that is the case (and
frankly I find it fanciful), it is of no significance in terms of their use—the form of the numerals
is very much in Gauss’s “unimportant notations” bucket.
On the other hand, the huge difference a representation system can make in mathematics is
indicated by the revolutionary change in human life that was brought about by the switch from
Roman numerals and abacus-board calculation to Hindu-Arabic arithmetic in thirteenth-century
Europe, as I described in my 2011 book The Man of Numbers.
Of course, there is a sense in which representations do not matter to mathematics. There is a
legitimate way to understand Gauss’s remark as a complete dismissal of how we represent
mathematics on a page. The notations we use provide mental gateways to the abstract notions of
mathematics that live in our minds. The notions themselves transcend any notations we use to
denote them. That may, in fact, have been how Gauss intended his reply to be taken.
But when we shift our attention from mathematics as a body of eternal, abstract structure
occupying a Platonic realm, to an activity carried out by people, then it is clear that notations
(i.e., a representation system) are important. In the early days of Category Theory, some
mathematicians dismissed it as “abstract nonsense” or “mere diagram chasing”, but as most of us
discovered when we made a serious attempt to get into the subject, “tracing the arrows” in a
commutative diagram can be a powerful way to approach and understand a complex structure.
[Google “the snake lemma”. Even better, watch actress Jill Clayburgh explain it to a graduate
math class in an early scene from the 1980 movie It’s My Turn.]
A well-developed mathematical diagram can also be particularly powerful in trying to
understand complex real-world phenomena. In fact, I would argue that the use of mathematical
representations as a tool for highlighting hidden abstract structure to help us understand and
operate in our world is one of mathematics’ most significant roles in society, a use that tends to
get overlooked, given our present day focus on mathematics as a tool for “getting answers.”
Getting an answer is frequently the end of a process of thought; gaining new insight and
understanding is the start of a new mental journey.
A particularly well-known example of such use is the Feynman diagram, a simple
visualization to help physicists understand the complex behavior of subatomic particles,
introduced by the American physicist Richard Feynman in 1948.
A more recent example that has proved useful in linguistics, philosophy, and the social sciences
is the “completion diagram” developed by the American mathematician Jon Barwise in
collaboration with the philosopher John Perry in the early 1980s, initially to
understand information flow.
A discussion of one use of this diagram can be found in a survey article I wrote in the volume
Handbook of the History of Logic, Volume 7, edited by Dov Gabbay and John Woods (Elsevier,
2008, pp.601-664), a manuscript version of which can be found on my Stanford homepage. That
particular application is essentially the original one for which the diagram was introduced, but
the diagram itself turned out to be applicable in many domains, including improving
workplace productivity, intelligence analysis, battlefield command, and mathematics education.
(I worked on some of those applications myself; some links to publications are on my Stanford homepage.)
To be particularly effective, a representation needs to be simple and easy to master. In the case of
a representational diagram, like the Commutative Diagrams of Category Theory, the Feynman
Diagram in physics, and the Completion Diagram in social science and information systems
development, the representation itself is frequently so simple that it is easy for domain experts to
dismiss them as little more than decoration. (For instance, the main critics of Category Theory in
its early days were world famous algebraists.) But the mental clarity such diagrams can bring to
a complex domain can be highly significant, both for the expert and the learner.
In the case of the Completion Diagram, I was a member of the team at Stanford that led the
efforts to develop an understanding of information that could be fruitful in the development of
information technologies. We had many long discussions about the most effective way to view
the domain. That simple looking diagram emerged from a number of attempts (over a great many
months) as being the most effective.
Given that personal involvement, you would have thought I would be careful not to dismiss a
novel representation I thought was too simple and obvious to be important. But no. When you
understand something deeply, and have done so for many years, you easily forget how hard it
can be for a beginning learner. That’s why, when the MAA’s own James Tanton told me about
his “Exploding Dots” idea some months ago, my initial reaction was “That sounds cute,” but I did
not stop and reflect on what it might mean for early (and not so early) mathematics education.
To me, and I assume to any professional mathematician, it sounds like the method simply adds a
visual element on paper (or a board) to the mental image of abstract number concepts we already
have in our minds. In fact, that is exactly what it does. But that’s the point! “Exploding Dots”
does nothing for the expert. But for the learner, it can be huge. It does nothing for the expert
because it represents on a page what the expert has in their mind. But that is why it can be so
effective in assisting a learner arrive at that level of understanding! All it took to convince me
was to watch Tanton’s lecture video on Vimeo. Like Tanton, and I suspect almost all other
mathematicians, I needed many years of struggle to go beyond the formal symbol manipulation
of the classical algorithms of arithmetic (developed to enable people to carry out calculations
efficiently and accurately in the days before we had machines to do it for us) until I had created
the mental representation that the exploding dots process captures so brilliantly. Many learners
subjected to the classical teaching approach never reach that level of understanding; for them,
basic arithmetic remains forever a collection of incomprehensible symbolic incantations.
Yes, I was right in my original assumption that there is nothing new in exploding dots. But I was
also wrong in concluding that there was nothing new. There is no contradiction here.
Mathematically, there is nothing new; it’s stuff that goes back to the first centuries of the First
Millennium—the underlying idea for place-value arithmetic. Educationally, however, it’s a big
deal. A very big deal. Educationally explosive, in fact. Check it out!
Many math instructors use clickers in their larger lecture classes, and can cite numerous studies
to show that they lead to more student attention and better learning. A recent research paper
on clicker use devotes a page-long introductory section to a review of some of that literature.
(Shapiro et al., Computers & Education 111 (2017), 44–59.) But the paper—by clicker
aficionados, I should stress—is not all good news. In fact, its main new finding is that when
clickers are used in what may be the most common way, they actually have a negative effect on
student learning. This finding was sufficiently startling that EdSurge put out a feature article on
the paper on May 25, which is how I learned of the result.
The most common (I believe) use of clickers is to provide students with frequent quiz questions
to check that they are retaining important facts. (The early MOOCs, including my own, used
simple, machine-graded quizzes embedded in the video lectures to achieve the same result.)
And a lot of that research I just alluded to showed that the clickers achieve that goal.
So too does the latest study. All of which is fine and dandy if the main goal of the course is
retention of facts. Where things get messy is when it comes to conceptual understanding of the
material—a goal that almost all mathematicians agree is crucial.
In the new study, the researchers looked at two versions of a course (physics, not
mathematics), one fact-focused, the other more conceptual and problem solving. In each
course, they gave one group fact-based clicker questions and a second group clicker questions
that concentrated on conceptual understanding in addition to retention of basic facts.
As the researchers expected, both kinds of questions resulted in improved performance on
fact-based questions on a test administered at the end.
Neither kind of question led to improved performance on problem-based test questions that
required conceptual understanding.
The researchers expressed surprise that the students who were given the conceptual clicker
questions did not show improved performance on conceptual questions. But that was not
the big surprise. The big surprise was, wait for it: students who were given only fact-based
clicker questions actually performed worse on conceptual, problem-solving questions.
To those of us who are by nature heavy on the conceptual understanding, not showing
improvement as a result of enforced fact-retention comes as no big surprise. But a negative
effect! That’s news.
By way of explanation, the researchers suggest that the fact-based clicker questions focus the
student’s attention on retention of what are, of course, surface features, and do so to the
detriment of acquiring the deeper understanding required to solve problems.
If this conclusion is correct—and it certainly seems eminently reasonable—the message is clear.
Use clickers, but do so with questions that focus on conceptual understanding, not retention of
facts. The authors also recommend class discussions of the concepts being tested by the clicker
questions, again something that comes naturally to us concepts-matter folks.
I would expect the new finding to have implications for game-based math learning, which
regular readers will know is something I have been working on for some years now. The games I
have been developing are entirely problem-solving challenges that require deep understanding,
and university studies have shown they achieve the goal of better problem-solving skills. (See
the December 4, 2015 Devlin’s Angle post.) The majority of math learning games, in contrast,
focus on retention of basic facts. Based on the new clickers study, I would hypothesize that,
even if a game were built on math concepts (many are not), unless the gameplay involves
active, problem-solving engagement with those concepts, the result could be, not just no
conceptual learning, but a drop in performance on a problem solving test.
Both clickers and video games set up a feedback cycle that can quickly become addictive. With
both technologies, regular positive feedback leads to improvement in what the clicker
questions or game challenges ask for. Potentially more pernicious, however, is that positive
feedback can result in students thinking they are doing just fine overall—and hence have no
need to wrestle more deeply with the material. And that sets them up for failure once they
have to go beneath the surface facts they have retained. Thinking you are winning all the time
seduces you to ease off, and as a result is the path to eventual failure. If you want success, the
best diet is a series of challenges—that is to say, challenges in coming to grips with the essence
of the material to be learned—where you experience some successes, some failures from which
you can recover, and the occasional crash-and-burn to prevent over-confidence.
That’s not just the secret to learning math. It’s the secret to success in almost any walk of life.
My May post is more than a little late. The initial delay was caused by a mountain of other
deadlines. When I did finally start to come up for air, there just did not seem to be any suitable
math stories floating around to riff off, but I did not have enough time to dig around for one.
That this has happened so rarely in the twenty years I have been writing Devlin’s Angle (and
various other outlets going back to the early 1980s in the UK) speaks volumes against
the claim you sometimes hear that nothing much happens in the world of mathematics. There
is always stuff going on.
Be that as it may, when I woke up this morning and went online, two fascinating stories were
waiting for me. What’s more, they are connected – at least, that’s how I saw them.
First, my Stanford colleague Professor Jo Boaler sent out a group email pointing to a New York
Times article that quoted her, and which, she noted, she helped the author to write. Titled "No
Such Thing as a Math Person," it summarizes the consensus among informed math educators
that mathematical ability is a spectrum. Just like any other human ability. What is more, the
basic math of the K-8 system is well within the capacity of the vast majority of people. Not easy
to master, to be sure; but definitely within most people’s ability. It may be defensible to apply
terms such as “gifted and talented” to higher mathematics (though I will come back to that
momentarily), but basic math is almost entirely a matter of wanting to master it and being
willing to put in the effort. People who say otherwise are either (1) education suppliers trying to
sell products, (2) children who for whatever reason simply do not want to learn and find it
reassuring to convince themselves they just don’t have the gift, or (3) mums and dads who
want to use the term as a parental boast or an excuse.
With many parents, and not a few teachers, having convinced themselves of the “Math Gift
Myth,” attempts over the past several decades to change that mindset have met with
considerable resistance. If you have such a mindset, it is easy to see what happens in the
educational world around you as confirming it. For instance, one teacher commented on The
New York Times article:
“Excuse me? I'm a teacher and I refute your assertion. I have seen countless individuals who
have problems with math – and some never get it. The same goes for English. But, unless
you've spent years in the classroom, it takes years to fully accept that observation. The article's
writer is a doctor, not a teacher; accomplishment in one field does not necessarily translate
readily to another.”
Others were quick to push back against that comment, with one pointing out that her final
remark surely argues in favor of everyone in the education world keeping up with the latest
scientific research in learning. We are all liable to seek confirmation of our initial biases. And
both teachers and parents are in powerful positions to pass on those biases to a new
generation of math learners.
And so to that second story I came across. Hemant Mehta is a former National Board Certified
high school math teacher in the suburbs of Chicago, where he taught for seven years, who is
arguably best known for his blog The Friendly Atheist. His post on May 22 was titled "Years Later,
the Mother Who 'Audited' an Evolution Exhibit Reflects on the Viral Response." Knowing
Mehta’s work (for the record, I have also been interviewed by him on his education-related
podcast), that title hooked me at first glance. I could not resist diving in.
As with The New York Times article I led off with, Mehta’s post is brief and to the point, so I
won’t attempt to summarize it here. Like Mehta, as an experienced educator I know that it
requires real effort, and courage, to take apart one’s beliefs and assumptions when faced with
contrary evidence, and then to reason oneself to a new understanding. So I side with him in not
in any way trying to diminish the individual who made the two videos he comments on. What
we can do, is use her videos to observe how difficult it can be to make that leap from
interpreting seemingly nonsensical and mutually contradictory evidence from within our
(current!) belief system, to seeing it from a new viewpoint from which it all makes perfect
sense – to rise above the trees to view the forest, if you will. The video lady cannot do that, and
assumes no one else can either.
Finally, what about my claim that post K-12 mathematics may be beyond the reach of many
individuals’ innate capacity for progression along that spectrum I referred to? Of course, it
depends on what you mean by “many”. Leaving that aside, however, if someone, for whatever
reason, develops a passionate interest in mathematics, how far can they go? I don’t know.
Based on a sample size of one, me, we can go further than we think. I look at the achievements
of mathematicians such as Andrew Wiles or Terence Tao and have the same sense that they
are from a different species as the keen-amateur-cyclist me has when I see the likes of
Tour de France winner Chris Froome or World Champion Peter Sagan climb mountains at twice
the speed I can sustain.
Yet, on a number of occasions where I failed to solve a mathematics problem I had been
working on for months and sometimes years, when someone else did solve it, my first reaction
was, “Oh no, I was so close. If only I had tried just a tiny bit harder!” Not always, to be sure. Not
infrequently, I was convinced I would never have found the solution. But I got within a
hairsbreadth on enough occasions to realize that with more effort I could have done better
than I did. (I have the same experience with cycling, but there I do not have a particular desire
to aim for the top.)
In other words, all my experience in mathematics tells me I do not have an absolute ability
limit. Nor, I am sure, do you. Mathematical proficiency is indeed a spectrum. We can all do
better – if we want to. That, surely, is the message we educators should be telling our students,
be they in the K-8 classroom or the postgraduate seminar room.
Gifted and talented? Time to recognize that as an educational equivalent of the Flat Earth
Belief. Sure, we are surrounded by seemingly overwhelming daily experience that the world is
flat. But it isn’t. And once you accept that, guess what? From a new perspective, you start to
see supporting evidence for the Earth being spherical.
The first reviews of my new book Finding Fibonacci have just come out, and I
have started doing promotional activities to try to raise awareness. As I expected, one of the first reviews I saw featured a picture of the Nautilus shell (no
connection to Fibonacci or the Golden Ratio), and media interviewers have
inevitably tried to direct the conversation towards the many fanciful—but for the
most part totally bogus—claims about how the Golden Ratio (and hence the
Fibonacci sequence) are related to human aesthetics, and can be found in a
wide variety of real-world objects besides the Nautilus shell. [Note: the Fibonacci
sequence absolutely is mathematically related to the Golden Ratio. That’s one of
the few golden ratio claims that is valid! There is no evidence Fibonacci knew of it, however.]
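That one valid connection is easy to verify numerically: the ratios of consecutive Fibonacci numbers converge to the Golden Ratio φ = (1 + √5)/2 ≈ 1.618. A quick sketch (my own illustrative code, not anything from the book):

```python
def fib_ratios(n):
    """Return the first n ratios of consecutive Fibonacci numbers,
    which converge to the Golden Ratio phi = (1 + sqrt(5)) / 2."""
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b
        ratios.append(b / a)
    return ratios

phi = (1 + 5 ** 0.5) / 2  # 1.6180339887...
print(fib_ratios(5))       # [2.0, 1.5, 1.666..., 1.6, 1.625]
print(abs(fib_ratios(30)[-1] - phi) < 1e-10)  # True
```

The convergence is fast: the error roughly squares its reciprocal growth with each step, so thirty terms already agree with φ to ten decimal places.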
For some reason, once a number has been given names like “Golden Ratio” and
“Divine Ratio”, millions of otherwise sane, rational human beings seem willing to
accept claims based on no evidence whatsoever, and cling to those beliefs in the
face of a steady barrage of contrary evidence going back to 1992, when the
University of Maine mathematician George Markowsky published a seventeen-page
paper titled "Misconceptions about the Golden Ratio" in the MAA’s
College Mathematics Journal, Vol. 23, No. 1 (Jan. 1992), pp. 2-19.
I first entered the fray with a Devlin’s Angle post in June 2004 titled "Good Stories,
Pity They’re Not True" [the MAA archive is not currently accessible], and then
again in May 2007 with "The Myth That Will Not Go Away" [ditto].
Those two posts gave rise to a number of articles in which I was quoted, one of
the most recent being "The Golden Ratio: Design’s Biggest Myth," by John
Brownlee, which appeared in Fast Company Design on April 13, 2015.
In 2011, the Museum of Mathematics in New York City invited me to give a public
lecture titled “Fibonacci and the Golden Ratio Exposed: Common Myths and Fascinating Truths,” the recording of which was at the time (and I think still is) the
most commented-on MoMath lecture video on YouTube, largely due to the many
Internet trolls the post attracted—an observation that I find very telling as to the
kinds of people who hitch their belief system to one particular ratio that does not
quite work out to be 1.6 (or any other rational number for that matter), and for
which the majority of instances of those beliefs are supported by not one shred of
evidence. (File along with UFOs, Flat Earth, Moon Landing Hoax, Climate
Change Denial, and all the rest.)
Needless to say, having been at the golden ratio debunking game for many years
now, I have learned to expect I’ll have to field questions about it. Even in a media
interview about a book that not only flatly refutes all the fanciful stuff, but also lays out
the history showing that the medieval mathematician known today as Fibonacci
left no evidence he had the slightest interest in the sequence now named after
him, nor had any idea it had several cute properties. Rather, he simply included
among the hundreds of arithmetic problems in his seminal book Liber abbaci,
published in 1202, an ancient one about a fictitious rabbit population, the solution
of which is that sequence.
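The rabbit problem itself amounts to a one-line recurrence: newborn pairs take a month to mature, and every mature pair produces one new pair each month. A small Python sketch of that model (the names and the exact starting convention are mine; Leonardo of course stated it in words):

```python
# Leonardo's rabbit problem: newborn pairs mature in one month, and each
# mature pair produces a new pair every month. The monthly totals trace
# out the Fibonacci sequence.
def rabbit_pairs(months):
    mature, newborn = 0, 1        # begin with a single newborn pair
    totals = []
    for _ in range(months):
        mature, newborn = mature + newborn, mature
        totals.append(mature + newborn)
    return totals

print(rabbit_pairs(12))  # monthly totals for the first year
```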
What I have always found intriguing is the question, how did this urban legend
begin? It turns out to be a relatively recent phenomenon. The culprit is a German
psychologist and author called Adolf Zeising. In 1855, he published a book titled: A New Theory of the proportions of the human body, developed from a basic
morphological law which stayed hitherto unknown, and which permeates the
whole nature and art, accompanied by a complete summary of the prevailing systems.
This book, which today would likely be classified as “New Age,” is where the
claim first appears that the proportions of the human body are based on the
Golden Ratio. For example, taking the height from a person's navel to their toes
and dividing it by the person's total height yields the Golden Ratio. So, he claims,
does dividing the height of the face by its width.
From here Zeising leaped to make a connection between these human-centered
proportions and ancient and Renaissance architecture. Not such an
unreasonable jump, perhaps, but it was, and is, pure speculation. After Zeising,
the Golden Ratio Thing just took off.
Enough! I can’t bring myself to continue. I need a stiff drink.
[Photo caption: Devlin makes a pilgrimage to Pisa to see the statue of Leonardo Fibonacci in 2002.]
In 1983, I did something that would turn out to have a significant influence on the
direction my career would take. Frustrated by the lack of coverage of mathematics in
the weekly science section of my newspaper of choice, The Guardian, I wrote a short
article about mathematics and sent it to the science editor. A few days later, the editor
phoned me to explain why he could not publish it. “But,” he said, “I like your style.
You seem to have a real knack for explaining difficult ideas in a way ordinary people
can understand.” He encouraged me to try again, and my second attempt was
published in the newspaper on May 12, 1983. Several more pieces also made it into
print over the next few months, eliciting some appreciative letters to the editor. As a
result, when The Guardian launched a weekly, personal computing page later that year,
it included my new, twice-monthly column MicroMaths. The column ran without
interruption until 1989, when my two-year visit to Stanford University in California turned
into a permanent move to the US.
Before long, a major publisher contracted me to publish a collection of my MicroMaths
articles, which I did, and following that Penguin asked me to write a more substantial
book on mathematics for a general audience. That book, Mathematics: The New Golden Age, was first published in 1987, the year I moved to America.
In addition to writing for a general audience, I began to give lectures to lay audiences,
and started to make occasional appearances on radio and television. From 1991 to
1997, I edited MAA FOCUS, the monthly magazine of the Mathematical Association of
America, and since January 1996 I have written this monthly Devlin’s Angle column. In
1994, I also became the NPR Math Guy, as I describe in my latest article in the
Each new step I took into the world of “science outreach” brought me further pleasure,
as more and more people came up to me after a talk or wrote or emailed me after
reading an article I had written or hearing me on the radio. They would tell me they
found my words inspiring, challenging, thought-provoking, or enjoyable. Parents,
teachers, housewives, business people, and retired people would thank me for
awakening in them an interest and a new appreciation of a subject they had long ago
given up as being either dull and boring or else beyond their understanding. I came to
realize that I was touching people’s lives, opening their eyes to the marvelous world of mathematics.
None of this was planned. I had become a “mathematics expositor” by accident. Only
after I realized I had been born with a talent that others appreciated—and which by all
appearances is fairly rare—did I start to work on developing and improving my “gift.”
In taking mathematical ideas developed by others and explaining them in a way that the
layperson can understand, I was following in the footsteps of others who had also made
efforts to organize and communicate mathematical ideas to people outside the
discipline. Among that very tiny subgroup of mathematics communicators, the two who I
regarded as the greatest and most influential mathematical expositors of all time are
Euclid and Leonardo Fibonacci. Each wrote a mammoth book that influenced the way
mathematics developed, and with it society as a whole.
Euclid’s classic work Elements presented ancient Greek geometry and number theory in
such a well-organized and understandable way that even today some instructors use it
as a textbook. It is not known if any of the results or proofs Euclid describes in the book
are his, although it is reasonable to assume that some are, maybe even many. What
makes Elements such a great and hugely influential work, however, is the way Euclid
organized and presented the material. He made such a good job of it that his text has
formed the basis of school geometry teaching ever since. Present day high school
geometry texts still follow Elements fairly closely, and translations of the original remain in print.
With geometry being an obligatory part of the school mathematics curriculum until a few
years ago, most people have been exposed to Euclid’s teaching during their childhood,
and many recognize his name and that of his great book. In contrast, Leonardo of Pisa
(aka Fibonacci) and his book Liber abbaci are much less well known. Yet their impact
on present-day life is far greater. Liber abbaci was the first comprehensive book on
modern practical arithmetic in the western world. While few of us ever use geometry,
people all over the world make daily use of the methods of arithmetic that Leonardo
described in Liber abbaci.
In contrast to the widespread availability of Euclid’s Elements, the only
version of Leonardo’s Liber abbaci we can read today is a second edition he completed
in 1228, not his original 1202 text. Moreover, there is just one translation from the
original Latin, in English, published as recently as 2002.
But for all its rarity, Liber abbaci is an impressive work. Although its great fame rests on
its treatment of Hindu-Arabic arithmetic, it is a mathematically solid book that covers not
just arithmetic, but the beginnings of algebra and some applied mathematics, all firmly
based on the theoretical foundations of Euclid’s mathematics.
After completing the first edition of Liber abbaci, Leonardo wrote several other
mathematics books, his writing making him something of a celebrity throughout
Italy—on one occasion he was summoned to an audience with the Emperor Frederick
II. Yet very little was written about his life.
In 2001, I decided to embark on a quest to try to collect together what little was known
about him and bring his story to a wider audience. My motivation? I saw in Leonardo
someone who, like me, devoted a lot of time and effort trying to make the mathematics
of the day accessible to the world at large. (That activity is known today as “mathematical outreach,” and very few mathematicians engage in it.) He was the giant whose footsteps I
had been following.
I was not at all sure I could succeed. Over the years, I had built up a good reputation as
an expositor of mathematics, but a book on Leonardo would be something new. I would
have to become something of an archival scholar, trying to make sense of thirteenth-century Latin manuscripts. I was definitely stepping outside my comfort zone.
The dearth of hard information about Leonardo in the historical record meant that a
traditional biography was impossible—which is probably why no medieval historian had
written one. To tell my story, I would have to rely heavily on the mathematical thread
that connects today’s world to that of Leonardo—an approach unique to mathematics,
made possible by the timeless nature of the discipline. Even so, it would be a stretch.
In the end, I got lucky. Very lucky. And not just once, but several times. As a result of all
that good fortune, when my historical account The Man of Numbers: Fibonacci’s Arithmetic Revolution was published in 2011, I was able to compensate for the
unavoidable paucity of information about Leonardo’s life with the first-ever account of
the seminal discovery showing that my medieval role-model expositor had indeed
played the pivotal role in creating the modern world that most historians had long assumed.
With my Leonardo project such a new and unfamiliar genre, I decided from the start to
keep a diary of my progress. Not just my findings, but also my experiences, the project's
highs and lows, the false starts and disappointments, the tragedies and unexpected
turns, the immense thrill of holding in my hands seminal manuscripts written in the
thirteenth and fourteenth centuries, and one or two truly hilarious episodes. I also
encountered, and made diary entries capturing my interactions with, a number of
remarkable individuals who, each for their own reasons, had become fascinated by
Fibonacci—the Yale professor who traced modern finance back to Fibonacci, the Italian
historian who made the crucial archival discovery that brought together all the threads of
Fibonacci's astonishing story, and the remarkable widow of the man who died shortly
after completing the world’s first, and only, modern language translation of Liber abbaci,
who went to heroic lengths to rescue his manuscript and see it safely into print.
After I had finished the Man of Numbers, I decided that one day I would take my diary
and turn it into a book, telling the story of that small group of people (myself included)
who had turned an interest in Leonardo into a passion, and worked long and hard to
ensure that Leonardo Fibonacci of Pisa will forever be regarded as among the very
greatest people to have ever lived. Just as The Man of Numbers was an account of the
writing of Liber abbaci, so too Finding Fibonacci is an account of the writing of The Man
of Numbers. [So it is a book about a book about a book. As Andrew Wiles once
famously said, “I’ll stop there.”]
Two years ago, there was a sudden, viral spike in online discussion of the Ramanujan identity
1 + 2 + 3 + 4 + 5 + . . . = –1/12
This identity had been lying around in the mathematical literature since the famous
Indian mathematician Srinivasa Ramanujan included it in one of his notebooks in the early
Twentieth Century, a curiosity to be tossed out to undergraduate mathematics students
in their first course on complex analysis (which was my first exposure to it), and
apparently a result that physicists made actual (and reliable) use of.
The sudden explosion of interest was the result of a video posted online by Australian
video journalist Brady Haran on his excellent Numberphile YouTube channel. In it,
British mathematician and mathematical outreach activist James Grime moderates as
his physicist countrymen Tony Padilla and Ed Copeland of the University of Nottingham
explain their “physicists’ proof” of the identity.
In the video, Padilla and Copeland manipulate infinite series with the gay abandon
physicists are wont to display (their intuitions about physics tend to keep them out of
trouble), eventually coming up with the sum of the natural numbers on the left of the
equality sign and –1/12 on the right.
Euler was good at doing that kind of thing too, so mathematicians are hesitant to trash
it, rather noting that it “lacks rigor” and warning that it would be dangerous in the hands
of a lesser mortal than Euler.
In any event, when it went live on January 9, 2014, the video and the result (which to
most people was new) exploded into the mathematically-curious public consciousness,
rapidly garnering hundreds of thousands of hits. (It is currently approaching 5 million in
total.) By February 3, interest was high enough for The New York Times to run a
substantial story about the “result”, taking advantage of the presence in town of
Berkeley mathematician Ed Frenkel, who was there to promote his new book Love and
Math, to fill in the details.
Before long, mathematicians whose careers depended on the powerful mathematical
technique known as analytic continuation were weighing in, castigating the two
Nottingham academics for misleading the public with their symbolic sleight-of-hand, and
trying to set the record straight. One of the best of those corrective attempts was
another Numberphile video, published on March 18, 2014, in which Frenkel gives a
superb summary of what is really going on.
A year after the initial flare-up, on January 11, 2015, Haran published a blogpost
summarizing the entire episode, with hyperlinks to the main posts. It was quite a story.
[[ASIDE: The next few paragraphs may become a bit too much for casual readers, but
my discussion culminates with a link to a really cool video, so keep going. Of course,
you could just jump straight to the video, now you know it’s coming, but without some
preparation, you will soon get lost in that as well! The video is my reason for writing this post.]]
For readers unfamiliar with the mathematical background to what does, on the face of it,
seem like a completely nonsensical result (and that is the MAA audience I am aiming this
essay at: principally, undergraduate readers and those not steeped in university-level
math), it should be said that, as expressed, Ramanujan’s identity is nonsense. But not
because of the -1/12 on the right of the equals sign. Rather, the issue lies in those three
dots on the left. Not even a mathematician can add up infinitely many numbers.
What you can do is, under certain circumstances, assign a meaning to an expression
X_1 + X_2 + X_3 + X_4 + …
where the X_N are numbers and the dots indicate that the pattern continues for ever.
Such expressions are called infinite series.
For instance, undergraduate mathematics students (and many high school students)
learn that, provided X is a real number whose absolute value is less than 1, the infinite series
1 + X + X^2 + X^3 + X^4 + …
can be assigned the value 1/(1 – X). Yes, I meant to write “can be assigned”. Since the
rules of real arithmetic do not extend to the vague notion of an “infinite sum”, this has to
be defined. Since we are into the realm of definition here, in a sense you can define it to
be whatever you want. But if you want the result to be meaningful and useful (useful in,
say, engineering or physics, to say nothing of the rest of mathematics), you had better
define it in a way that is consistent with that “rest of mathematics.” In this case, you
have only one option for your definition. A simple mathematical argument (but not the
one you can find all over the web that involves multiplying the terms in the series by X,
shifting along, and subtracting—the rigorous argument is a bit more complicated than
that, and a whole lot deeper conceptually) shows that the value has to be 1/(1 – X).
So now we have the identity
(*) 1 + X + X^2 + X^3 + X^4 + … = 1/(1 – X)
which is valid (by definition) whenever X has absolute value less than 1. (That absolute
value requirement comes in because of that “bit more complicated” aspect of the
rigorous argument to derive the identity that I just mentioned.)
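The identity (*) is easy to check numerically for any value inside the unit interval. A quick Python sketch (the function name is mine):

```python
# Partial sums of 1 + X + X^2 + ... approach 1/(1 - X) when |X| < 1.
def geometric_partial_sum(x, n_terms):
    return sum(x**k for k in range(n_terms))

x = 0.5
print(geometric_partial_sum(x, 50))  # very close to ...
print(1 / (1 - x))                   # ... 2.0
```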
“What happens if you put in a value of X that does not have absolute value less than 1?”
you might ask. Clearly, you cannot put X = 1, since then the right-hand side becomes
1/0, which is totally and absolutely forbidden (except when it isn’t, which happens a lot
in physics). But apart from that one case, it is a fair question. For instance, if you put X =
2, the identity (*) becomes
1 + 2 + 4 + 8 + 16 + … = 1/(1 – 2) = 1/(–1) = –1
So you could, if you wanted, make the identity (*) the definition for what the infinite sum
1 + X + X^2 + X^3 + X^4 + …
means for any X other than X = 1. Your definition would be consistent with the value
you get whenever you use the rigorous argument to compute the value of the infinite
series for any X with absolute value less than 1, but would have the “benefit” of being
defined for all values of X apart from one, let us call it a “pole”, at X = 1.
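You can see numerically what this extended definition buys, and what it deliberately ignores: at X = 2 the partial sums race off to infinity, while the formula 1/(1 – X) calmly reports –1. A small Python sketch:

```python
# At X = 2 the partial sums of 1 + X + X^2 + ... diverge, yet the
# continued formula 1/(1 - X) still assigns a finite value.
x = 2
partial_sums = [sum(x**k for k in range(n)) for n in range(1, 8)]
print(partial_sums)  # [1, 3, 7, 15, 31, 63, 127] -- growing without bound
print(1 / (1 - x))   # -1.0, the value the continuation assigns
```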
This is the idea of analytic continuation, the concept that lies behind Ramanujan’s
identity. But to get that concept, you need to go from the real numbers to the complex numbers.
In particular, there is a fundamental theorem about differentiable functions (the accurate
term in this context is analytic functions) of a single complex variable that says that if
any such function has value zero everywhere on a nonempty disk in the complex
plane, no matter how small the diameter of that disk, then the function is zero
everywhere. In other words, there can be no smooth “hills” sitting in the middle of flat
plains, or even one small flat clearing in the middle of a “hilly” landscape—the quotes
are because we are beyond simple visualization here.
An immediate consequence of this theorem is that if you pull the same continuation
stunt as I just did for the series of integer powers, where I extended the valid formula (*)
for the sum when X is in the open unit interval to the entire real line apart from one pole
at 1, but this time do it for analytic functions of a complex variable, then if you get an
answer at all (i.e., a formula), it will be unique. (Well, no, the formula you get need not
be unique, rather the function it describes will be.)
In other words, if you can find a formula that describes how to compute the values of a
certain expression for a disk of complex numbers (the equivalent of an interval of the
real line), and if you can find another formula that works for all complex numbers and
agrees with your original formula on that disk, then your new formula tells you the right
way to calculate your function for any complex number. All this subject to the
requirement that the functions have to be analytic. Hence the term “analytic continuation.”
For a bit more detail on this, check out the Wikipedia explanation or the one on Wolfram MathWorld. If you find those explanations are beyond you right now, just remember that
this is not magic and it is not a mystery. It is mathematics. The thing you need to bear in
mind is that the complex numbers are very, very regular. Their two-dimensional
structure ties everything down as far as analytic functions are concerned. This is why
results about the integers such as Fermat’s Last Theorem are frequently proved using
methods of Analytic Number Theory, which views the integers as just special kinds of
complex numbers, and makes use of the techniques of complex analysis.
Now we are coming to that video. When I was a student, way, way back in the 1960s,
my knowledge of analytic continuation followed the general path I just outlined. I was
able to follow all the technical steps, and I convinced myself the results were true. But I
never was able to visualize, in any remotely useful sense, what was going on.
In particular, when our class came to study the (famous) Riemann zeta function, which
begins with the following definition for real numbers S bigger than 1:
(**) Zeta(S) = 1 + 1/2^S + 1/3^S + 1/4^S + 1/5^S + …
I had no reliable mental image to help me understand what was going on. For integers
S greater than 1, I knew what the series meant, I knew that it summed (converged) to a
finite answer, and I could follow the computation of some answers, such as Euler’s
Zeta(2) = π^2/6
(You get another expression involving π for S = 4, namely π^4/90.)
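Convergence of the series (**) for S greater than 1 is easy to verify numerically, though at S = 2 it is famously slow. A Python sketch (names are mine):

```python
# Partial sums of the zeta series at S = 2 creep up toward pi^2/6.
from math import pi

def zeta_partial(s, n_terms):
    return sum(1 / n**s for n in range(1, n_terms + 1))

print(zeta_partial(2, 100_000))  # about 1.64492..., still short by ~1e-5
print(pi**2 / 6)                 # 1.6449340668...
```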
It turns out that the above definition (**) will give you an analytic function if you plug in
any complex number for S for which the real part is bigger than 1. That means you have
an analytic function that is rigorously defined everywhere on the complex plane to the
right of the line x = 1.
By some deft manipulation of formulas, it’s possible to come up with an analytic
continuation of the function defined above to one defined for all complex numbers
except for a pole at S = 1. By that basic fact I mentioned above, that continuation is
unique. Any value it gives you can be taken as the right answer.
In particular, if you plug in S = –1, you get
Zeta(–1) = –1/12
That equation is totally rigorous, meaningful, and accurate.
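For readers who want a hands-on taste of where that value comes from without the full complex-analysis machinery, there is a standard back-door route via the alternating series eta(S) = 1 – 1/2^S + 1/3^S – …, which satisfies eta(S) = (1 – 2^(1–S))·Zeta(S). At S = –1, eta(–1) = 1 – 2 + 3 – 4 + … can be tamed by Abel summation: multiply the nth term by x^n and let x approach 1, giving 1/4. The Python sketch below is a numerical illustration of that route, not a proof, and not the analytic continuation itself:

```python
# Abel-sum eta(-1) = 1 - 2 + 3 - 4 + ... by evaluating
# sum (-1)^(n+1) * n * x^n for x just below 1; the limit is 1/4.
def abel_eta_minus_one(x, n_terms):
    return sum((-1) ** (n + 1) * n * x**n for n in range(1, n_terms + 1))

eta = abel_eta_minus_one(0.999, 50_000)  # very close to 0.25
zeta_minus_one = eta / (1 - 2**2)        # divide by (1 - 2^(1-S)) at S = -1
print(zeta_minus_one)                    # close to -1/12 = -0.0833...
```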
Now comes the tempting, but wrong, part that is not rigorous. If you plug in S = –1 in the
original infinite series, you get
1 + 1/2^(–1) + 1/3^(–1) + 1/4^(–1) + 1/5^(–1) + …
which is just
1 + 2 + 3 + 4 + 5 + …
and it seems you have shown that
1 + 2 + 3 + 4 + 5 + . . . = –1/12
The point is, though, you can’t plug S = –1 into that infinite series formula (**). That
formula is not valid (i.e., it has no meaning) unless S > 1.
So the only way to interpret Ramanujan’s identity is to say that there is a unique analytic
function, Zeta(S), defined on the complex plane (apart from a pole at the real number 1), which
for all real numbers S greater than 1 has the same values as the infinite series (**), and which
for S = –1 gives the value Zeta(–1) = –1/12.
Or, to put it another way, more fanciful but less accurate, if the sum of all the natural
numbers were to suddenly find it had a finite answer, that answer could only be –1/12.
As I said, when I learned all this stuff, I had no good mental images. But now, thanks to
modern technology, and the creative talent of a young (recent) Stanford mathematics
graduate called Grant Sanderson, I can finally see what for most of my career has been
opaque. On December 9, he uploaded this video onto YouTube.
It is one of the most remarkable mathematics videos I have ever seen. Had it been
available in the 1960s, my undergraduate experience in my complex analysis class
would have been so much richer for it. Not easier, of that I am certain. But things that
seemed so mysterious to me would have been far clearer. Not least, I would not have
been so frustrated at being unable to understand how Riemann, based on hardly any
numerical data, was able to formulate his famous hypothesis, finding a proof of which is
agreed by most professional mathematicians to be the most important unsolved
problem in the field.
When you see (in the video) what looks awfully like a gravitational field, pulling the
zeros of the Zeta function towards the line x = 1/2, and you know that it is the only such
gravitational field there is, and recognize its symmetry, you have to conclude that the
universe could not tolerate anything other than all the zeros being on that line.
Having said that, it would be really interesting if that turned out not to be the
case. Nothing is certain in mathematics until we have a rigorous proof.
Meanwhile, do check out some of Grant’s other videos. There are some real gems!