Thursday, December 29, 2011

156: The Ultimate Answer

I celebrate my 42nd birthday this week.  This is an important milestone, since in his classic Hitchhiker's series of science fiction novels, the late Douglas Adams described 42 as the ultimate answer to life, the universe, and everything.  This was figured out by a massive computer, Deep Thought, after 7.5 million years of calculation.  Once they heard the answer, people realized that maybe they should have figured out what the question was first.  But we can ask a more pertinent question in real life:  why did Adams settle on 42 as the answer in his books?  It somehow seems to have a nice ring to it, but what makes 42 a good "ultimate answer"?

Intrinsically, 42 does have a few things going for it.  One fun fact is that you can build a 3x3x3 magic cube with each number from 1 to 27 appearing exactly once, and every row, column, and diagonal summing to 42.  42 is also the 5th Catalan Number, which means that 42 is the number of triangulations of a heptagon, or the number of ways you can cut a heptagon into triangles using straight lines between its corners.  (You may recall that a heptagon is a 7-sided polygon.)  It's a Harshad number, an integer divisible by the sum of its digits.  There are also many more obscure properties of this number, too numerous to list here.
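
If you'd like to double-check a couple of these claims, here's a minimal Python sketch of my own; it uses the standard closed form for Catalan numbers, C(n) = binomial(2n, n)/(n+1), plus the digit-sum test for Harshad numbers.

    from math import comb

    def catalan(n):
        # C(n) = binomial(2n, n) / (n + 1)
        return comb(2 * n, n) // (n + 1)

    print(catalan(5))                      # 42: triangulations of a convex heptagon
    print(42 % sum(int(d) for d in "42"))  # 0: divisible by its digit sum, so Harshad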

Science provides some more cool applications of this number.  In 1268, English philosopher Roger Bacon calculated the geometric properties of rainbows, and discovered that the summit of a rainbow cannot appear more than 42 degrees above the horizon.  More recently, in 1966, mathematician Paul Cooper calculated that if you bore a frictionless hole all the way through the earth and try to travel to China by just jumping in and letting gravity do the work, the trip would take you about 42 minutes.  Surprisingly, this calculation works even if your hole doesn't pass through the center of the earth:  the reduction in gravitational force and distance traveled exactly balance each other out.
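
For the curious, here's a rough check of Cooper's number, a sketch under the usual simplifying assumption of a uniform-density Earth:  inside such a sphere, gravity is proportional to your distance from the center, so the fall is simple harmonic motion and a one-way trip is half a period.

    from math import pi, sqrt

    R = 6.371e6   # mean radius of the Earth, in meters
    g = 9.81      # surface gravity, in m/s^2

    trip_seconds = pi * sqrt(R / g)  # half the period of the oscillation
    print(trip_seconds / 60)         # roughly 42.2 minutes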

The number 42 also appears in various places in religion and culture.  It's unlucky in Japan, since its digits read aloud sound like the words for "unto death".  In ancient Egypt, there were 42 principles of Ma'at, or religious laws.  In the Christian book of Revelation, the Beast will hold dominion over the Earth for 42 months.  The Jewish Talmud references the "42-lettered name of God", and perhaps most relevant, Kabbalistic tradition makes the related claim that this number was somehow used by God to create the universe.

The Wikipedia article also points out that Lewis Carroll, popular 19th-century author of lighthearted math-influenced works, used the number 42 numerous times:  42 illustrations in "Alice in Wonderland", Rule 42 in the same book (requiring all persons more than a mile high to leave the court), 42 boxes in "The Hunting of the Snark", and a few other places.

One more intriguing possibility is that a semi-serious guidebook from the 1970s, "The Hitchhiker's Guide to Europe", the source of Adams's title, mentioned that travelers to the UK looking for family roots were likely to find the answers disappointing.  And that comment was on page 42.

So, was one of these the reason Adams chose 42 as the ultimate answer?  These possibilities, mostly included on the Wikipedia page linked in the show notes, all have some level of plausibility.  That page also includes various other references to 42, too numerous to include here.  But these just scratch the surface-- while doing online research for this podcast, I found that author Peter Gill wrote a whole book on the topic, which I haven't yet read.

On the other hand, most of these references to the number are what I would describe as, well, very miscellaneous at best.  There's no overarching theme that really convinces me that 42 is a good candidate for the answer to the universe.  I would bet that with a little research, I could find an equal number of properties and references for any other two-digit number you might select.  Think about it:  in any instance where a number less than 100 appears in history, religion, or even some mathematical grouping, there's roughly a 1% chance that it's the number you want!

So my inclination is to guess that Adams was playing a massive joke on the public, and really did choose the number randomly, just to see what interpretations his readers would come up with.  Since the author is no longer with us, we never will know for sure.

And this has been your math mutation for today.

  • Review of Gill book
  • 42 at Wikipedia
  • 153: Cracking Pythagoras


    A few days ago I was browsing the infotainment site cracked.com, and saw an amusing article.  It was titled "6 Famous Firsts That You Learned In History Class (And Are Total BS)."  One of these 'famous firsts' was Pythagoras's discovery of the Pythagorean Theorem.  As you probably recall, this is the theorem that if the legs of a right triangle are A and B, and the hypotenuse is C, then A squared plus B squared equals C squared.  Pythagoras is known for having first proven this theorem around 550 BC.  Is it really the case that this is total BS?

    Actually, I wasn't too surprised to see this theorem on such a list.  This theorem is critical for measuring distances and amounts of materials in construction, real estate boundaries, and many other real cases where life doesn't always provide you with straight lines.  Because of this, it was discovered long before Pythagoras-- ancient Babylonians had been using it for at least 1000 years by Pythagoras's time, and there are also references to it in ancient India, China, and Egypt.  Cracked.com even included a photo of an ancient Babylonian clay tablet inscribed with Pythagorean triples, sets of integers such that A squared plus B squared equals C squared.

    But there's a huge difference between having observed the theorem experimentally and actually knowing for sure that it will always be true.  The history of math and science is full of examples of supposedly true facts that were later overturned; just ask your doctor if he has any leeches in stock.  If you read your high school math textbook a little more carefully, you'll see that Pythagoras is credited not with discovering the theorem, but with *proving* it.  So if you really want to claim it is illegitimate to cite Pythagoras, you need to supply an earlier proof of the theorem.  The Cracked guys are on their toes though-- they have done this as well, pointing to an earlier Chinese text known as the Chou Pei Suan Ching.

    This book does indeed contain a diagram which seems to illustrate a common geometric proof of the theorem.  You can see the picture if you follow the links in the show notes.  The way it works is that you put four copies of the triangle together to form a large square, in such a way that each side contains one of each of the A and B legs of the original triangle.  The result is a large square whose sides are each of length (A+B), with a smaller square of side C in its center.  The total area of the large square is (A+B), the quantity, squared, or A squared + 2AB + B squared.  The four triangles are each of area AB/2, using the standard formula for the area of a right triangle, so together their areas add up to 2AB.  And the remaining square in the middle has sides each of length C, with total area C squared.  So we have A squared + 2AB + B squared equals C squared + 2AB-- or A squared + B squared equals C squared!  Does this illustration thus establish that the Chinese proved the theorem before Pythagoras?  Cracked seems to think so.
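
    For those who prefer to see that algebra written out in one place, here is the same derivation in symbols (a restatement of the argument above, not a transcription of the Chinese diagram):

    \begin{align*}
    (A+B)^2 &= 4\cdot\tfrac{AB}{2} + C^2 \\
    A^2 + 2AB + B^2 &= 2AB + C^2 \\
    A^2 + B^2 &= C^2
    \end{align*}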

    But not so fast-- there are a few problems with the Chinese illustration.  Most glaringly, it has grid markings that show it is addressing a particular case, of 3-4-5 right triangles, with no clear evidence that it was thought of as a general proof.  There is even some dispute as to whether the diagram was actually part of the original book, or transcribed by later commentators.  While the book was begun as early as 1046 BC, annotations and additions continued until 220 A.D.  And the book is not part of any kind of general document created to supply axiomatic proofs of mathematical discoveries; it's a collection of 246 specific problems encountered by the Duke of Zhou and his astrologer.

    But most importantly, we need to keep in mind that the Pythagorean Theorem is not taught as a standalone discovery, a single isolated contribution of the Greeks.  It was one of the beginnings of a systematic approach to mathematics, of not just being satisfied with observations, but of proving hypotheses based on fundamental assumptions, that culminated in Euclid's Elements.  Whether earlier societies knew some of the facts discussed by Greek mathematicians, or even came up with a few isolated proofs, is beside the point.  And if you want to unseat the Greeks as founders of modern mathematics, you need to point out systematic efforts to prove theorems on the basis of fundamental axioms, not grab a few random factoids out of other societies' texts.   I love cracked.com, but I think they truly are cracked on this one.

    And this has been your math mutation for today.

  • Cracked article
  • Chou Pei Suan Ching at Wikipedia
  • Pythagoream Theorem at Wikipedia
  • Another Pythagorean Theorem article
  • 150: A Podcast About Nothing

    Before we start, I thought it might be nice to highlight a few ways you can show your support for Math Mutation, since I received a recent query about donations.  I don't actually accept money, as I like the idea of total independence.  But if you find the power of the podcast to be so overwhelming as to loosen your wallet, please donate some money to your favorite charity in honor of Math Mutation, and send me an email about it.  I also love hearing directly from fans, either in the form of an email to erik(e r i k)@mathmutation.com, or by posting a review to iTunes.  And don't forget to sign up as a fan on Facebook.

    Anyway, you may recall that in the last episode, I asked for topic suggestions for Episode 150.  While I received a few unrelated emails from listeners, nobody was brave enough to actually suggest a topic.  So, being in a glass-half-full kind of mood, I decided to treat the lack of suggestions as a suggestion in itself, and do a topic that has been on my back burner for a while:  the history of the number zero.

    To start with, we should clarify a few things about zero.  In one sense, it is a simple representation of nothing.  But perhaps even more importantly, it plays the critical role of a placeholder in our positional number system.  For example, how do you know you are listening to episode 150 and not episode 15?  It's because of that 0 in the ones place, which pushes the 5 into the tens place and the 1 into the hundreds place.  This seems like an obvious idea now, but that wasn't always the case.  You may recall that the Roman numeral system was basically non-positional:  with a few minor exceptions, you essentially wrote a bunch of symbols down in any order, added their values, and got a total.  Systems like Roman numerals quickly grew cumbersome in the face of large numbers, and led to the confusing situation of many possible representations for the same number.

    One of the earliest positional number systems was used by the Babylonians, well-established by the second millennium B.C.  Their system was base-60 instead of base-10, and we still hear echoes of it today when we measure time or angles.  Initially, they did not have a symbol for zero, which meant that written numbers were inherently ambiguous:  you could not tell whether you had written 61 or 3601, 60 squared plus 1, because there was no symbol marking any unused powers of 60 in the number.  By the first millennium B.C., the system had been enhanced by several authors to use placeholder symbols such as a pair of wedges, but these were only used internally between two digits:  trailing zeros in the lower places could only be identified by context.  There must have been a lot of arguments with the waiter over the check in ancient Babylonian restaurants.
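
    To make the ambiguity concrete, here's a small Python sketch (a modern illustration, of course, not how the Babylonians wrote) that evaluates a list of digits in a given base; without a zero placeholder, 61 and 3601 would be recorded with the same pair of marks.

    def from_digits(digits, base):
        # Interpret a digit list, most significant first, in the given base.
        value = 0
        for d in digits:
            value = value * base + d
        return value

    print(from_digits([1, 1], 60))     # 61:   1*60 + 1
    print(from_digits([1, 0, 1], 60))  # 3601: 1*3600 + 0*60 + 1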

    Strangely, even when used by a few mathematicians, this placeholder concept did not take hold very quickly.   Despite all their advances in mathematics, the Greeks didn't develop a true positional number system.  This may have been partially due to the Euclidean emphasis on geometry, where numbers were respected mainly for their usefulness when talking about drawn figures.  A few Greek and Roman astronomers, such as Ptolemy in 130 A.D., used the Babylonian system enhanced by a placeholder zero when recording their observations, but this was still considered an esoteric usage. 

    We should also mention that the native Olmecs and Mayans of the Americas independently developed a mixed base-20 and base-18 positional system as early as the 1st century B.C., which also included a true zero placeholder symbol.  Their Long Count calendar basically counted the days from 3114 B.C., when Raised-up-Sky-Lord caused three stones to be set by associated gods at Lying-Down-Sky, First-Three-Stone-Place.  While their numbers were mostly base-20, the second digit from the end rolled over whenever it hit 18 rather than 20.  The mixed base is kind of strange, until you think about the fact that 20x18 is 360, very close to the number of days in an actual year.

    Most sources seem to agree that the widespread use of a true zero originated in India between 500 and 700 A.D.  In Brahmagupta's treatise "The Opening Of The Universe", he laid out a number of rules for mathematical operations on numbers including zero.  Some of his rules are very familiar to us today, such as adding zero to a negative number gives you a negative, and adding zero to a positive number gives you a positive.  But oddly, he tried to define division by zero, claiming that zero over zero equals zero.  Today we see that this is clearly wrong:  if a calculation results in 0/0, you need more context to figure out a reasonable interpretation.  For example, look at the function y = x/x.  You can see that this is 1 for all nonzero values-- so shouldn't it also be 1 for x=0, thus showing that 0/0 = 1?  But on the other hand, look at y = 2x/x.  With the same reasoning, we find that 0/0 equals two!
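
    You can watch those two functions pull apart numerically; this little sketch just plugs in ever-smaller values of x:

    for x in [0.1, 0.01, 0.001, 0.0001]:
        print(x / x, (2 * x) / x)  # x/x stays at 1 while 2x/x stays at 2, all the way down

    Both forms become 0/0 exactly at x = 0, which is why no single value can sensibly be assigned there.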

    The Hindu-Arabic number system, a positional base-10 system with zero, was originally brought into Spain in the 11th century by the Moors.  Apparently it had spread into common usage among merchants in that civilization.   It was popularized in Christian Europe by Leonardo of Pisa, or Fibonacci, in 1202.  Even then it did not exactly take the continent by storm:  while mathematicians embraced it, merchants continued using the non-positional Roman numeral system for several more centuries before it slowly died out in the face of a superior competitor.  Except, of course, for the critical tasks of expressing motion picture copyright dates, or naming Popes.

    As usual, I've barely scratched the surface here.  Whole books have been written about the number zero, such as Robert Kaplan's highly regarded "The Nothing That Is".  And you can also find some excellent online articles linked in the show notes. But hopefully this podcast has given you a non-zero number of things to think about.

    And this has been your math mutation for today.

  • Robert Kaplan's book
  • Zero at Wikipedia
  • Another online history of zero
  • 149: Robot Planet

    Before we start, I've observed that we're rapidly approaching episode 150, one of those big round numbers that is supposed to be significant or something.  Having trouble coming up with an appropriately weighty topic, I thought I'd throw the problem out to you listeners:  what topics do YOU think would be worthy of episode 150?  Please email erik (e r i k) @ mathmutation.com with your answer.  If you suggest the topic I use, you will win the honor of international fame and fortune as I mention you on the podcast.  Now on to today's topic.

    In a recent hilarious episode of the TV cartoon Futurama, the robot Bender acquired an attachment that allowed him to make 2 duplicates of himself, each 60% of his size, consuming an equal mass of nearby matter in the process for their raw materials.  When the professor asked him to do some work, he was lazy, so he activated this machine to make 2 smaller replicas of himself to do it.  But then these replicas each contained a smaller copy of the duplicating machine as well, and they were just as lazy as the original, so they could make 2 copies each of themselves.  You can see where this is going:  since all the Benders were equally lazy, they kept duplicating, potentially on to infinity.  The alarmed professor flashed an equation on the screen, and everyone gasped.  "The equation is divergent", he explained, which meant that soon replicating Benders would consume all the matter on Earth!  What was he talking about?

    Let's review the concept of convergent and divergent series.  Suppose you are adding together an infinite series of smaller and smaller numbers:  say, 1/2 + 1/3 + 1/4 + ...  There are two things that could happen.  Either the series is divergent, in which case the sum approaches infinity, or it's convergent, in which case the total is some finite number.  At first, it might seem counter-intuitive that the sum of an infinite series could ever do anything but diverge to infinity.  But here's a simple counterexample:  look at the decimal number .99999..., with an infinite number of nines after the decimal point.  I think we would all agree at a minimum that this is a finite value, less than or equal to 1.  (It's actually precisely equal to 1, but we'll leave that nuance for another podcast.)  In any case, if you look at each digit of the number separately, you can see it is just an infinite series:  the first 9 after the decimal represents 9/10, the 2nd represents 9/100, and so on.  So .99999... is the same as the infinite series 9/10 + 9/100 + 9/1000 + ..., and we know that despite the infinite number of terms, the sum never gets past 1.  Similarly, if the total mass of the infinite Benders converges to a small finite number, the Earth is not doomed after all.
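
    Here's a quick sketch of that idea in Python, summing the first fifteen terms of 9/10 + 9/100 + ...; the partial sums creep toward 1 but never pass it.

    total = 0.0
    for k in range(1, 16):
        total += 9 / 10**k   # after each step, total is 0.9, 0.99, 0.999, ...
    print(total)             # a long string of nines, still bounded above by 1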

    So, let's take another look at the problem of the replicating Benders.  Rather than kilograms, let's simplify our calculations and measure mass in Bender-masses, or Bs, where the mass of the original Bender is 1B.  What is the mass of the two mini-Benders he creates?  First we have to define what '60% of the size' means:  I think it would be logical to assume we are reducing the measure to .6 times the original in each of the 3 dimensions: length, width, and height.  Assuming the mass is proportional to the volume, this means that the mass of each mini-Bender is .6*.6*.6 Bs, or .216 times the mass of the original Bender.  So, the total mass of the two mini-Benders is 2*.216 = .432 Bs.  Similarly, the Nth generation of Benders should have mass of (.432)^N Bs, since its mass is .432 times that of the previous generation.  And the sum we're dealing with is (1 + .432 + .432^2 + ...).  Good news-- this series converges!  The easiest way to see that is to notice that .432 is less than 1/2, so the Nth term is never greater than (1/2)^N, and those terms form a well-known convergent series that adds up to 2.  (A quick way to prove this is to look at the base-2 number .1111...)  So, no matter how many replicas there are, the total mass will be less than 2 Benders, and our planet is safe.
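
    Since this is a geometric series with ratio .432, we can even get the exact total from the closed form 1/(1-r); here's a two-line sketch:

    r = 2 * 0.6**3         # each generation's mass ratio: two copies at .216 each
    print(r, 1 / (1 - r))  # about .432 and about 1.76 Bender-masses, comfortably under 2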

    Unfortunately, the plot of the episode is dependent on the fact that the Bender-masses will add up to a divergent series guaranteed to consume the earth.  On the cartoon, the sum the Professor flashes is a series whose nth term equalled 2^n * (1 / ((2^n) * (n+1))) Bender-masses.  You can see here that the 2^n terms cancel out in the top and bottom, making this effectively the sum of 1/(n+1), or 1/2 + 1/3 + 1/4 + ..., a well-known divergent series.  If accurate, this would indeed show that all the mass on Earth would ultimately be consumed by the replicating Benders.  But where does this equation come from?  If the smaller Benders were 60% of the mass instead of 60% of the size in each dimension, then each generation would be 1.2x the mass of the previous, a constantly growing series which obviously diverges, but doesn't match the Professor's series either.  I have the feeling the writers are just messing with us, and came up with an arbitrary divergent series to advance the plot.  Or maybe the professor messed up the equation; after all, he is often portrayed as a bit senile.  On the other hand, I could be the senile one-- tell me if you manage to find something I missed, and think of a good justification for the series in this episode.
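
    And to see the contrast, here's the Professor's side of the argument:  the partial sums of 1/2 + 1/3 + 1/4 + ... grow without bound, though agonizingly slowly, roughly like the natural log of the number of terms.

    total = 0.0
    for n in range(1, 1_000_001):
        total += 1 / (n + 1)
    print(total)  # about 13.4 after a million terms, and still climbing forever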

    And this has been your math mutation for today.

  • The Basel Problem at Wikipedia
  • Convergent Series at Wikipedia
  • Online discussion including screenshot of the Professor's equation
  • 147: Elemental Obsession


    Recently I was reading Theodore Gray's colorful coffee-table book/ebook "The Elements", where he shows an illustration of each element and a few trivia bits about their physical and chemical properties.  I was surprised to see a few passing references to the ease or difficulty of acquiring samples for "collectors".  Are there really people who collect chemical elements as a hobby?  A bit of quick web searching showed that this was no joke-- there is a noticeable element-collecting community out there, who take their hobby quite seriously.  The goal of an element collector is to acquire samples of every element in the Periodic Table.
    I know what you're probably thinking now.  "Sounds interesting, but this is a math podcast, not a chemistry podcast!"  True, but if you think about it, I believe this interest is a kind of mathematical obsession.  After all, the beauty of the Periodic Table is the way that a simple mathematical pattern has been used to understand and predict the occurrence of the basic elements that make up our universe.

    Let's review the basic concepts of the periodic table.  As you may recall, each row of the table represents an electron shell, which you can think of as a layer of orbiting electrons around the nucleus of an atom.  Each column represents a configuration of 'valence electrons', the outermost electrons that are most important in determining chemical reactions.  Once a shell is full of electrons, the next row begins filling in a new shell with a similar pattern of electrons-- thus the pattern of valence electrons repeats in the next row, which is what makes the periodic table periodic.  The electrons in each shell must fit into a finite set of known orbit patterns, but the inner shells have fewer orbits than outer ones, which is why there are noticeable gaps in the upper rows, representing the inner shells.  Once the basic pattern is established, the elements go in the table in order of increasing number of protons, or atomic number, and the families with similar sets of valence electrons fall neatly into the columns.  There are a few other complications which you can read about in detail on the Wikipedia page in the show notes, and modern quantum mechanics shows that the solar-system-like vision of simple orbiting particles isn't quite right.  But for this podcast, the important point is the mathematical elegance of having a table filled in using simple patterns, and being able to connect this to reality by neatly checking off each box as you acquire samples of each of the elements.

    But for people who actually want to take up element collecting, there are a few minor issues they have to deal with.  An obvious one is simple chemical safety:  numerous elements are poisonous in their pure form.  It sure looks tempting to open that vial of mercury and actually feel the bizarre sensation of a liquid metal rolling around your hand-- but that is not medically advisable, to say the least, as mercury is extremely poisonous and can be absorbed through the skin.  Many other elements are relatively safe in solid chunks but can be extremely dangerous in powdered form, which is unfortunately often the easiest to acquire.  One Russian company used to sell cheap samples of many elements by mail order, including a glass vial of powdered beryllium, which could be fatal if it broke during shipping and someone inhaled the scattered powder.  Any element collector has to think carefully about the form and storage requirements for each element.

    And then there are the radioactive elements.  Aside from the obvious dangers of radiation poisoning and the need for careful storage, there is the question of how to acquire them in the first place.  Some that have real-life applications are available in tiny amounts in commercial products:  for example, in the 1990's, teenager David Hahn, nicknamed the "radioactive boy scout", ordered decommissioned smoke detectors by the dozens to extract the tiny clumps of americium.  There are also some historical sources for some radioactive elements that were discovered before their dangers were known: in the early 20th century, radium paint was used to create glow-in-the-dark clocks and watches.  Hahn got his nice radium sample when he found an old clock in an antique shop with a vial of spare paint to refresh the dials.   Hahn's success in acquiring radioactive elements eventually led to a national security incident, and the need for a federal cleanup of his backyard.  And if you look at the scary online photo of Hahn today, covered in open sores from the effects of radiation, your admiration for his element-collecting skills might be dampened somewhat.

    Another issue is that such a collection can never be truly complete.  As you get to the heaviest elements, they are not only highly radioactive, but decay almost instantaneously when formed.  The highest-numbered element actually synthesized so far is number 118, ununoctium, but only four atoms of this element have ever been known to exist.  All were synthesized in scientific labs and detected only indirectly.  Good luck collecting that!

    Anyway, if you're a fan of this podcast, you probably realize that you can appreciate the periodic table from a poster or chart without actually holding the elements.  In the show notes you'll find a link to Theodore Gray's book and the beautiful associated poster, which I highly recommend.  The existence of physical objects is just a minor corollary of this mathematically elegant table, and the underlying quantum mechanical laws that make it possible; why sully it by obsessively acquiring a bunch of rocks, liquids, and powders that could potentially kill you anyway?
      
    And this has been your Math Mutation for today.
  • Element Collecting at Wikipedia
  • Another Element Collecting Page
  • Page showing how to collect americium from smoke detectors
  • Periodic Table at Wikipedia
  • David Hahn at Wikipedia
  • Theodore Gray's book
  • 145: Why Johnny Couldn't Add

    It's school board election season again here in Oregon, and while I'm not running this time, it got me thinking about educational issues.  In particular, one topic I've been meaning to cover for a while is New Math.  Those of you old enough to have learned elementary school math in the 1960's or early 1970's, or a little younger but geeky enough (like me) to have browsed math textbooks in used bookstores in the later 1970s, will probably recognize the term.  New Math was a revolutionary change in math education, spurred by a reaction to the Soviets beating the US into space.  Naturally, the solution to that dilemma was to have a team of academic math professors, led by Ed Begle from Yale, come up with a new curriculum totally divorced from any experience educating young children.  What could go wrong?

    Let's look at what the New Math was.  Basically, before this movement, elementary school math consisted of lots and lots of drilling of arithmetic problems.  While this wasn't very exciting, it did result in most children getting a solid 'number sense' and becoming very comfortable with doing basic addition, subtraction, multiplication, and division without calculators.  The theory behind the New Math was that children were unsatisfied with this because they wanted to understand the real logical foundations of what they were doing.  Thus concepts were introduced like the difference between numbers and the written symbols known as numerals, set theory, converting numbers between different bases, modular arithmetic, and similar areas.

    In the vast majority of cases, the net effect of all the time spent on these advanced concepts at the expense of gaining basic arithmetic competency and number sense was that kids could mimic some advanced mathematical terms, but were severely lacking in the ability to do everyday calculations.  Tom Lehrer famously made fun of this situation in his song 'New Math':

    [excerpt]

    We should point out that some of these ideas had some level of usefulness as illustrations.  For example, suppose you want to illustrate that 4+3 equals 3+4.  If you have a set of four elements, drawn as a circle around 4 dots, and combine it with a set of 3 elements, you can easily see that the order doesn't matter.  And look at how we deal with multi-digit numbers:  for example, 123 is equal to 1 times 100, plus 2 times 10, plus 3 times 1-- each place is a new power of the base of your number system, in this case 10.  You were probably doing 'carrying' and 'borrowing' when adding and subtracting multi-digit numbers for a long time before you understood that it was this place-based system that allowed you to do this-- and maybe learning to represent numbers in other bases would make this clearer.
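
    As a small illustration of that place-value idea, here's a sketch of my own that peels a number into its digits in any base, the same operation New Math asked kids to do by hand:

    def to_digits(n, base):
        digits = []
        while n > 0:
            digits.append(n % base)  # the current "ones place" in this base
            n //= base               # shift everything down one place
        return digits[::-1] or [0]

    print(to_digits(123, 10))  # [1, 2, 3]
    print(to_digits(123, 8))   # [1, 7, 3], since 123 = 1*64 + 7*8 + 3*1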

    Some studies did show that talented teachers who fully understood these concepts, and used them appropriately as part of an arithmetic class, could indeed enable kids to better understand basic arithmetic.  They would have to be carefully guided to show how, for example, set theory led to the actual operations like addition and subtraction, or how the algorithms for working with multi-digit numbers originated in our base 10 number system.   But the vast majority of teachers didn't understand these advanced concepts much better than the kids, and instead turned them into a new set of random stuff to be memorized.  Rather than concrete and useful numerical operations, students were now engaging in rote repetition of complex ideas that were simply not useful to them at their level of mathematical sophistication.

    An online article at 'Straight Dope' illustrates the change in education nicely with an example problem.
    Traditional math: A logger sells a truckload of lumber for $100. His cost of production is $80. What is his profit?
    New Math: A logger exchanges a set L of lumber for a set M of money. The cardinality of set M is 100 and each element is worth $1.    (a) make 100 dots representing the elements of the set M    (b) The set C representing costs of production contains 20 fewer points than set M. Represent the set C as a subset of the set M.   (c) What is the cardinality of the set P of profits?
    OK, there is probably a bit of exaggeration in this example, as I doubt kids actually were asked to draw sets of size 100.  But you get the general flavor.  After phrasing this basic profit problem in terms of set theory, has anything of value really been added, or has a simple, concrete concept been obscured?

    The death knell of New Math was sounded in 1973, when Morris Kline published his famous book "Why Johnny Can't Add".  Kline rightfully pointed out that "abstraction is not the first stage but the last stage in a mathematical development".  People need to understand numbers in a concrete way first, to the point where they have a natural instinct for them, and then maybe it's worth talking about abstractions like set theory and alternate bases.  The vast majority of kids are not little Bertrand Russells and Ludwig Wittgensteins, demanding a full axiomatic justification of what they are learning.  Repeated arithmetic drills may not be glamorous, but they get the job done.
    I'd like to say that after New Math, teaching in the U.S. returned to an emphasis on sensible, back-to-basics methods that actually work.  In some parts of the country this was true.  But in other areas, New Math has been followed by various further educational fads such as calculator mania, radical multiculturalism, "Discovery Math", and "New New Math".  You may recall that back in episode 70, I discussed my frustration that a fast food cashier could not figure out 2+2+1 is less than 7 in her head.  And I'm afraid that the current talk in the news about our need to improve math & science education will result in the creation of yet more new fads instead of a return to common sense.  But at least I'll never lack for topics to make fun of in this podcast.

    And this has been your math mutation for today.


  • New Math at Wikipedia
  • Another article
  • One more article
  • 143: The Math Of Mutations

    Before we start, I'd like to thank listener Mark Mabee, who pointed out an error in my last podcast.  I mentioned that 990 Hz was too high for humans to hear, but typical humans can actually hear up to 20,000 Hz.  Oops.  Luckily it didn't affect the main point of the podcast.

    Now on to this week's topic.  Recently my wife Ann and daughter Sonia were visiting the Newport Aquarium on the Oregon coast, and came across a large tank of crabs.   Recognizing her mom's favorite food, Sonia remarked, "They look tasty!  But don't tell them I said so."  Ann and the other spectators began laughing.  But they were laughing for somewhat different reasons.  I think most of the visitors were amused by Sonia's implication that the crabs could be offended.  Ann, however, recognized that the crabs in the tank were spider crabs, with long legs and roundish shells, while everyone knows that the tastiest crabs are the oblong-shelled Oregon Dungeness crabs.

    This got me thinking about the many different types of crab shells.  How is it that from the same basic type of creature, so many different forms could evolve?  How did random genetic mutation continually rewrite the complex blueprints of the crab shell, so each cell knew exactly the right place to grow to complete these complex shapes? 

    Actually, about 100 years ago, British mathematical biologist D'Arcy Thompson investigated this issue.  He asked the question:  is there a simple mathematical relationship between the shapes of different species of related animals?  He began looking at groups of creatures such as crabs, fish, and primates, and made an amazing discovery.  By drawing a coordinate grid over the shapes and making simple mathematical transformations, he could change one species into another.

    For example, he started with a fish species called Argyropelecus.  He then showed that by applying simple transformations to each (x,y) coordinate, where x'=ax+by, and y'=cx+dy, he could distort the fish into other species known as Sternoptyx or Scarus.  Basically this transformation is the equivalent of stretching the fish at different rates in the x and y directions, and then tilting one axis with respect to the other.  This change, called an affine transformation, is a subset of what is known as "rubber sheet" transformations; more complex conformal transformations, which you may be familiar with from flattened maps of the Earth, allow curved aspects and lead to a greater variety of possible forms.  The show notes contain a link to a nice web tool that lets you stretch a fish of your own, mimicking some of Thompson's thought experiments.   Thompson showed with a series of diagrams how the shapes of fish, shells of various crab species, and skulls of apes and humans were mathematically related to each other.
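
    If you want to play with the idea without the web tool, here's a minimal sketch of the same kind of transformation; the coefficients and the rectangle standing in for a fish outline are made up for illustration.

    def affine(points, a, b, c, d):
        # Apply x' = a*x + b*y, y' = c*x + d*y to each point.
        return [(a * x + b * y, c * x + d * y) for (x, y) in points]

    outline = [(0, 0), (2, 0), (2, 1), (0, 1)]  # a stand-in "fish"
    print(affine(outline, 1.5, 0.3, 0.0, 1.0))  # stretched in x and sheared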

    What does this all mean?  Well, for one thing, it makes the process of evolution much easier to understand.  When we talk about stretching, slanting, or compressing a part of a living creature, what we're really talking about is increasing or reducing the rate of growth of various parts.  So what Thompson showed was that to produce the various forms we see in nature, it's not necessary for some mutation to radically change a detailed blueprint, but simply to tweak the uniform mathematical formula that governs the rates of cell growth in a creature.  This seems like a much more plausible avenue for a random mutation to create a new creature, as opposed to the total rewrite that would be necessary if DNA was working like a literal blueprint of a building.

    Thompson went on to discover many other amazing mathematical relationships in biology, such as the relationship of the Fibonacci Sequence to spiral structures and the relation between jellyfish forms and random dispersal of viscous fluids, that we might discuss in future podcasts.  I'm not sure if he ever found the true formula for which crab is tastiest though.

    And this has been your Math Mutation for today.

  • Thompson at Wikipedia
  • Thompson tribute website
  • Fish transformation demo tool
  • Interesting online article. See Thompson's crab diagrams on p.56 .
  • 141: The Right Way To Procrastinate

    Do you ever feel exhausted from your math, science, and engineering work and want to take a relaxing break with a nice comic book?  I've found just the right one for Math Mutation listeners.  It's a graphic novel called "Logicomix", by Apostolos Doxiadis and Christos Papadimitriou, about the life of Bertrand Russell and his lifelong quest to build solid foundations for mathematics.  As you may recall, Russell was an influential philosopher and mathematician who lived from 1872 to 1970, considered one of the founders of analytic philosophy.

    In his younger years, Russell was a dedicated mathematician, but continuously disturbed by what he considered the shaky foundations of mathematics.  In college, for example, he strongly criticized a professor for teaching calculus using infinitesimals, infinitely small values, an infinite number of which add up to a finite value.  And he did have a point; as we discussed in podcast 65, when taught without using the more rigorous concept of limits, calculus does have a quasi-mystical quality.

    More importantly, Russell went on to discover an inherent paradox in set theory, which came to be known as Russell's Paradox.  If you are allowed freedom in defining sets, define the set S as the set of all sets that do not contain themselves.  Does set S contain itself?  To put it more concretely, suppose I tell you that I listen to precisely those podcasts whose hosts don't listen to their own podcasts.  Do I listen to Math Mutation?  If I don't, then I do, and if I do, then I don't!  Fellow mathematician Gottlob Frege thought Russell's paradox so important that he delayed publication of his new book on set theory, hastily adding an appendix explaining the paradox.  In the comic, he angrily demanded that the printer destroy the printing plates and all copies of his book, though I couldn't find other references to this incident online.

    Russell then went on to work with Alfred North Whitehead on the Principia Mathematica, a massive work that was supposed to finally establish solid logical foundations for mathematics from the ground up, based on set theory augmented with 'types' to solve Russell's Paradox.  Numerous times over the decade they worked on it, Russell would find some basic flaw and insist on starting over from scratch, driving his partner nuts.  After this project, they never collaborated again.  Finally they reached an acceptable version of their first volume, which took over 300 pages just to prove 1+1=2, and had to pay to self-publish it because the publishers estimated that so few people would actually read it that it would result in a net loss financially.  In the end, it became a highly influential work, found on the shelves of every respectable math library.  Though I am still a bit skeptical that more people actually read the whole thing than the publishers had originally estimated. 

    Ironically, just as Russell had smugly dismantled the beliefs of his predecessors, successors soon arose who were just as much a pain in Russell's rear.  As we discussed back in podcast 24, "A Mathematical Nuclear Bomb", Godel eventually proved that no matter how rigorous Russell was, he could never put together a perfect mathematical system from the ground up--  any sufficiently complex mathematical system is guaranteed to be either inconsistent, having internal contradictions, or incomplete, having true statements that are unprovable.  Russell's own student Wittgenstein further challenged even the basic concepts of mathematics being applicable to life, claiming all existential propositions are meaningless.  When Russell gave as an example the statement "There is no hippopotamus in the room at present", Wittgenstein held his ground, claiming he could not judge its truth or falsehood.  I think Russell may have been in the right on that one.  But regardless, Russell concluded that Wittgenstein was a genius, and after working with him, decided he could never again do foundational work in mathematics or philosophy.

    The story in Logicomix is framed by Russell talking to a group of American college students on the eve of World War II, who have insisted that he use his logic to show conclusively that entering the war would be irrational.  After narrating his autobiography in the flashbacks that take up most of the comic, he refuses to give a definitive answer.  Instead, he uses his experiences to show that you can't always get all the answers from mathematics and logic, and discusses the various propositions claimed by both sides of the issue.  Needless to say, the students were a bit disappointed.

    While quite informative and entertaining, the graphic novel does have its flaws.  I was a little disappointed to read in the afterword that many of the amusing encounters depicted between Russell and other mathematicians and philosophers did not actually happen-- while Russell encountered their ideas, his interactions with the people were apparently not as colorful as required for a comic.  I also was disappointed that the book does not talk at all about Russell's later life:  it ends before World War II, though he was prominent in philosophy and politics until his death in 1970.  For example, David Horowitz's memoir "Radical Son" includes some sad scenes of Russell being used and manipulated by young leftists in the 1960s, and I really would have liked to see some of those incidents discussed in the context of this story.  I'm also not sure I fully buy the level of logical modesty and open-mindedness assumed by Russell at the end of the comic, since he did go on to spend the next three decades advocating strongly for political causes.  But still, I really enjoyed this book, and would highly recommend it to others interested in math or philosophy.

    And this has been your math mutation for today.

  • Bertrand Russell at Wikipedia
  • Logicomix at Wikipedia
  • Horowitz excerpt featuring Russell
  • 138: You Can't Fool Bill Gates

    Suppose I were to offer to play a game with you:  from a set of three six-sided dice labelled A, B, and C, we each choose a die.  We then have ten rounds in which we each roll our die, and the higher number wins.  Furthermore, the dice obey the following property:  die A beats die B 2/3 of the time, and die B beats die C 2/3 of the time.  And since I'm such a nice guy, I'll let you choose your die first.  According to a popular anecdote, Warren Buffett challenged Bill Gates to just such a game.  What do you think Bill did?

    Intuitively, you might think this is easy.  If die A is better than die B, and die B is better than die C, then you should just choose die A, and you will kick my butt.  However, there is one piece of information you don't have:  die C will also beat die A 2/3 of the time!  So no matter which die you choose, I can then choose one which will probably beat it.   At first, this might seem very odd.  We are used to situations that obey the transitive law:  if A > B, and B > C, then A > C as well.  But there is no reason an arbitrary mathematical operation, such as "has a 2/3 probability of winning against", should obey a transitive law.  A set of three dice with circular victory odds, where die A is likely to beat die B, die B likely beats die C, and die C likely beats die A, are known as "non-transitive dice".

    This is not merely theoretical-- it's pretty easy to construct a set of non-transitive dice.  One simple example consists of the following:  let die A contain the numbers 2, 4, and 9; die B contains 1, 6, and 8; and die C contains 3, 5, and 7.  We're repeating each number twice on the dice, so each roll is only selecting among 3 numbers.  The probabilities are easy to calculate:  if we roll two of the dice, there are 3 x 3 or 9 possible combinations.  By enumerating the possible results, we see that A beats B 5/9 of the time, B beats C 5/9 of the time, and C beats A 5/9 of the time.  For example, the five winning combinations of A vs B are (2,1),(4,1),(9,1),(9,6), and (9,8), and the losing ones are (2,6),(2,8), (4,6), and (4,8).
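
    If you'd rather not enumerate by hand, here's a short sketch that counts the winning face pairs for each matchup of those dice:

    from itertools import product

    A, B, C = [2, 4, 9], [1, 6, 8], [3, 5, 7]

    def wins(x, y):
        # Count the face pairs, out of 9, where die x beats die y.
        return sum(a > b for a, b in product(x, y))

    print(wins(A, B), wins(B, C), wins(C, A))  # 5 5 5: a perfect circle of victories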

    Doing the calculations shows that these numbers are indeed correct.  But why does this situation make sense?  Why don't the probabilities of victory obey a transitive law?  With simple numbers, A>B and B>C implies that A>C.  Why doesn't this work for probability?  The cheap, though accurate, answer is to say we just did the calculations, so trust them & stop bugging me.  But that does seem a bit unsatisfying.  I was actually surprised how difficult it was to find an answer to this complaint on the web:  many pages describe examples and calculate the probabilities, but nobody seems to attempt a good common-sense explanation.  So here's my best attempt:

    I think the intuition is that there are numerous low results which sometimes win locally, but are not great overall.  For example, a 2 on die A will sometimes win against die B, which has a 1, but always lose against die C, where all numbers are 3 or higher.  So the set of winning cases of A vs B does not directly overlap the winners in B vs C.  The high numbers on die A make up for some of the low numbers.  Since you can't accumulate your good results and carry them forward, the fact that A beats B most of the time cannot directly be rolled forward into saying that A beats anything B can beat.  Thus, a transitive law cannot be applied.  I know, a little messy to explain, but I think if you try to enumerate all the cases in the example I mentioned before it will kind of make sense.  If you can come up with a nicer explanation, please email erik@mathmutation.com to tell me about it!

    Oh, and to complete the anecdote in my opening:  it turns out that Bill Gates was a little too smart to be fooled by the nontransitive dice.  After he looked at the numbers on them, he agreed to play the game-- as long as Warren Buffett chose first.  Hopefully, next time you are challenged at your local gathering of billionaires, you will be just as clever.

    And this has been your math mutation for today.

  • Nontransitive Dice at Wikipedia
  • Another Article on Nontransitive Dice
  • Site that sells nontransitive dice
  • 137: A What What?

    In the last episode, I briefly mentioned the tool known as a slide rule.  But after that podcast, I realized that there are a significant number of potential listeners who have never seen a slide rule, and are not sure how one works.  I even found fellow engineers who were not quite sure on the concept.  What was this tool that was so essential for science and engineering half a century ago, but now is virtually forgotten?  Today we're going to find out.

    The slide rule was invented in the early 1600s, soon after Scottish mathematician and theologian John Napier popularized the definition of a logarithm.  Napier was an odd character, devoting time to both mathematics and theology, and was thought of as a sorcerer and magician by many of his contemporaries.  One story about him suggests that he liked to convince people that he had a magic rooster, who would tell him whether a criminal suspect was innocent or guilty after the suspect petted him.  Actually, Napier coated the rooster with soot, and identified the guilty suspect by looking to see who had no soot on his hands, and thus had just pretended to pet it.

    But in many ways, the power of the logarithm seemed just as magical.  You may recall from high school that a logarithm, or log for short, is just a fancy way of describing an exponent.  For example, since 2 to the 3rd power is 8,  the log (base 2) of 8 is 3.  Since 2 to the 5th power is 32, the log (base 2) of 32 is 5.  What makes this useful is that, if we hold the base constant, the log of (a times b) is log a + log b.  So as long as we have tables that can tell us the logs and inverse-logs of all the numbers involved, we can transform multiplication problems into addition problems!  So let's suppose we want to multiply 8 * 32.  We can take the log of 8, which is 3, and the log of 32, which is 5, and add them together.  The result is 8, so we just need to find the number whose log (base 2) is 8.  This number is 256, which as we expected, is the product of 8 and 32.
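
    Here's the 8 times 32 example as a couple of lines of Python, the same add-the-logs trick a slide rule performs mechanically:

    from math import log2

    print(log2(8), log2(32))          # 3.0 and 5.0
    print(2 ** (log2(8) + log2(32)))  # 2^8 = 256.0, the product of 8 and 32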

    The slide rule takes advantage of this fact by providing two rulers, labelled according to a logarithmic scale, that can be slid relative to each other.  So, for example, assume we're using base 2 logs again.  The leftmost edge of the ruler is marked with a 1, since 2 to the 0 power is 1.  Then at one inch, we see the number 2, since 2 to the 1st power is 2.  At two inches, we see 4, since 2 to the 2nd power is 4, etc.  So to multiply 8 times 32, we would first find the 8 marking on the bottom ruler, which would be 3 inches over, though we don't need to know that fact as we are just looking for the 8.  We would align the top ruler's 1 mark to that.  Then on the top ruler we would look for the 32 marker, which would be 5 inches over.  Now what number on the bottom ruler would be under this 32 marker?  Well, we started at 3 inches on the bottom ruler, and moved another 5 inches-- so we are at 8 inches on the bottom ruler, the exact spot where we see the number whose log is 8.  So the number displayed under the 32 marker is 256, and we have successfully done our multiplication.  Division can be done similarly, through the reverse process.

    Of course, there are many additional subtleties to the slide rule as it evolved over the centuries.  To use one effectively, you usually have to normalize your values to the scale on the ruler, typically multiplying or dividing all values by some power of 10 before you start, and remembering to do the proper transformations on your answer.  Over the years slide rules were enhanced with additional scales to ease calculations of roots and 1/x values; sines and cosines; and logs and exponents of numbers.  You can see more details on these topics on the web pages linked in the show notes.

    As late as the mid-1970s, a slide rule was an indispensable tool for scientists and engineers:  it allowed them to perform common mathematical calculations much more quickly than they could on paper.  But then came the development of electronic calculators, which could replicate anything a slide rule could do and required less thinking from the user.  For a while there was a period when slide rules and calculators coexisted, and the older engineers would make fun of the young whippersnappers who would bring out a newfangled calculator for problems they could easily solve in seconds with a slide rule.  But as calculators got cheaper and less bulky, younger engineers got the last laugh, and the slide rule became little more than a historical artifact.

    Oddly, they have not gone away completely though:  when researching this podcast, I found several hobbyist sites on the web posted by collectors of various forms of slide rules.  And there is even an organization known as the Oughtred Society, named after another of the early slide rule pioneers, which to this day still publishes a journal on slide-rule-related topics, has conferences several times a year for slide rule enthusiasts to get together and calculate, and hosts an international slide rule championship for high school students.  Amusingly, slide rules for team practice are apparently so hard to come by that the International Slide Rule Museum sponsors a Slide Rule Loaner Program to encourage potential participants.  You can find these sites in the show notes as well.

    And this has been your math mutation for today.
  • Slide Rules at Wikipedia
  • John Napier at Wikipedia
  • Intro to Slide Rules
  • Oughtred Society
  • International Slide Rule Museum
  • 136: Are Coincidences Just Coincidences?

    In past episodes of this podcast, I've poked fun at the human tendency to try to spot deep significance in coincidences.  We know from statistics that unlikely things are very likely to happen on occasion, due to all the millions of events occurring in the world.  For example, someone in the near future will listen to a Math Mutation podcast only days after dreaming they would listen to a Math Mutation podcast.  Does this mean they had psychic powers and predicted it?  Or does it just mean that if they are the kind of person who would listen to math podcasts, chances are very high that they would both have dreams featuring podcasts, and come across this one in iTunes?   Hopefully listeners to this podcast would choose the second explanation, but you never know.  Somehow we are hard-wired to notice coincidences and act on them.  And one point that I have not given enough podcast time to is that, on many occasions, this tendency to notice coincidences and patterns is actually a *good* thing.  In fact, noticing such patterns and questioning whether there might be an underlying cause has been a key basis of nearly all progress in math and science.  Let's look at a few examples.

    Most of us learned about the periodic table of the elements in high school.  This is a table that lists the basic chemical elements in rows, according to their atomic number, and has the miraculous-seeming property that elements in each column engage in similar types of chemical reactions.  For example, the leftmost column is mainly highly reactive or 'alkali' metals, while the rightmost column is noble gases.  In modern times, we know this is mainly due to the elements in each column having a similarly built outer electron shell.  But how was this table discovered?  It began when early chemists, such as Dobereiner in the early 1800s, saw that many elements could be grouped into triads based on their properties.  They also noticed that in each triad, when arranged by atomic weight, the middle element was roughly the average weight of the first and the third.  Could this pattern have just been a coincidence?  The thought that this lucky coincidence might have deeper significance began the path to the periodic table and modern chemistry.

    Another example is a geometric coincidence whose explanation is well-known today: the shapes of the continents.  Ever since the writings of Ortelius in the late 1500s, people had noticed that the continents kind of look like they should fit together, most notably in the cases of Africa and South America.  They wondered if there was some deeper explanation, perhaps the continents being torn apart by earthquakes or volcanoes, and then drifting away.  Many later scientists expanded on the concept, most notably Alfred Wegener in 1912.  For a long time this was considered a fringe theory-- even in the 1950s, papers were published debunking the idea.  But when the theory of plate tectonics was developed in the 1960s, the mechanism was finally understood, and the idea of continental drift was proven correct.

    There are also other examples of numerical patterns that truly are just coincidences, but can be harnessed and used effectively in science and technology.  For example, if you square pi, the ratio between the circumference and diameter of a circle, you find that pi squared is very close to 10.  This made it very convenient to design slide rules with scales based on pi instead of the square root of 10, enabling precise engineering calculations while still maintaining a relationship to our base-10 number system.  Another example that is common in the computer world is the fact that 2 to the 10th power is very close to 1000, enabling the slightly inaccurate but very convenient use of the metric prefixes kilo-, mega-, giga-, etc. for computer memory, which comes in multiples of powers of 2, instead of having to generate all-new terminology.
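
    Both of those near-misses take only a moment to confirm; here's a tiny sketch:

    from math import pi

    print(pi ** 2)  # 9.8696..., just shy of 10
    print(2 ** 10)  # 1024, close enough to 1000 to borrow the prefix "kilo"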

    And while we're on this topic, we also shouldn't neglect examples of seeming patterns or coincidences that led otherwise brilliant scientists down dead ends.  One classic example is the one I mentioned back in podcast 26.  Kepler, one of the founders of modern astronomy, thought he had hit on something when he noticed that the planetary orbits could be inscribed approximately within giant versions of the five regular polyhedra.  He spent a lot of time and energy on this theory, which today we consider just as silly as psychic dream phenomena.  But by continually looking for coincidences and patterns in the sky, he made enough real scientific discoveries that today we forgive him.

    So, while continuing to make fun of New Agers who derive all sorts of significance from patterns that are mere statistical coincidences, keep this in mind:  the very same human mental foibles that cause us to fall for these things are the ones that have enabled our amazing progress in the sciences and mathematics.

    And this has been your math mutation for today.

  • Mathematical Coincidence at Wikipedia
  • Periodic Table at Wikipedia
  • Continental Drift at Wikipedia
  • Kepler at Wikipedia
  • 131: The Perfect Podcast

        In case you've ever wondered whether God exists, a simple answer can be found in mathematics.  Here it is:  God is defined as the perfect being.  One of the properties contained in the definition of "perfect" is existence.  Therefore, to avoid logical contradiction, God must exist.  This argument, known as an 'ontological argument' due to its attempt to use logic and reason alone to prove God's existence, was originally proposed by Anselm of Canterbury in the 11th century, and repeated in various forms by later philosophers, including famous mathematician Rene Descartes. 
        So, what do you think of this argument?  Something about it seems a bit unsatisfying, though it can be a little tricky to put your finger on it at first.  It's pretty easy to spoof it though.  Have you ever thought about a perfect island, with light breezes, coconut trees, and beautiful beaches on which cute seals frolic?  Well, by this argument, it must exist too.  The world must in fact be full of perfect stuff that all exists, by this argument.  Some perfect podcaster must be recording the perfect August 1st 2010 math podcast sometime today, in fact.  I'm pretty sure this isn't it though; sorry to disappoint you! 
        Extending this argument to the mathematical domain further exposes its absurdity:  let's talk about the perfect 32-sided die, which we will define as a 3-d object which has 32 regular polygons for its faces, and in which all angles and faces are congruent.  Since we have defined this as perfect, it must exist.  But we know, as discussed in episode 18, that such an object cannot exist, as there are only five regular polyhedra, and a 32-sider is not one of them!  What's going on?
        You have probably spotted the key problem by now, pointed out most clearly by Immanuel Kant in 1781.  When we define a theoretical object and state a number of properties it has, this definition is necessarily conditional:  we are stating that IF the object exists, it has all the stated properties.  We cannot cause an object to exist by just adding existence to its definition:  this is a circular argument.  And if other aspects of its definition prevent an object from existing, these cannot be undone merely by adding existence to its definition.  All we are saying is that IF the object exists, then it exists, which doesn't really add much to the discussion.
        If you look at the Wikipedia page on ontological arguments, you will find a number of convoluted reformulations by modern philosophers attempting to resurrect Anselm's ontological argument, but they all look to me like fundamentally the same argument in more complicated words.  Ultimately, no matter how fancy the philosophers get, we can't cause something to exist by adding the existence attribute to its definition.  When I can sit on my perfect island beach rolling 32-sided dice as I play the perfect edition of Dungeons & Dragons with frolicking seals, maybe I'll start to be convinced by this argument.  Until then, I'm afraid the question of the existence of divine beings is probably beyond the domain of mathematics.
        And this has been your math mutation for today.

  • Ontological Argument at Wikipedia

    Wednesday, December 28, 2011

    130: Epsilons And Deltas

        Back when I was doing student teaching in a public high school, I asked the teacher I was working with how to handle the lesson on epsilon-delta proofs.  He told me not to bother, as they always skipped that one.  If you remember these things from calculus class, it's somewhat understandable:  to the vast majority of students, an epsilon-delta proof is some arcane mechanical procedure they are forced to memorize and execute, without ever understanding the point.  And skipping them is also kind of justifiable in an introductory class, since the basic concepts of calculus existed for several centuries before anyone reached the level of rigor where this stuff was defined.  This lack of rigor actually led to a bit of controversy, as I pointed out back in podcast 65, where I talked about philosopher Bishop Berkeley's claims that calculus required as much faith as Christianity.  In the 19th century, Bolzano and Cauchy closed this hole, providing rigorous definitions of limits and continuity, including what we now know as the "epsilon-delta proof".  At its heart, the epsilon-delta concept is a simple formal statement about what we mean by limits and continuity, and I think understanding it can really improve your intuition about many basic mathematical concepts.
        To start with, let's ask the question:  what does it mean for something to be continuous?  Let's make the problem more concrete by looking at a volume slider on your iPod.  For the sake of discussion, let's assume this is a small bar on the screen that can slide up to 100mm from the bottom.  In the bottom position, the sound is off, and the volume is 0 decibels.  In the top position, the sound is fully on, for a volume of 100 decibels.  Naturally, this position is reserved for listening to Math Mutation podcasts.  What does it mean to say that this volume control is continuous?  Well, as you slide the lever from the bottom to the top, you expect the sound to get gradually louder as it rises.  If some position caused a sudden jump to 100db and back down, or sliding past some other position caused it to suddenly drop to a much quieter level, you would say that it's not continuous.
        The confusing part comes if you point at some spot on the control, say the exact center, and ask "Is this control continuous here?"  It's hard to say if the control is continuous at that point without wiggling it around a little-- after all, when it's just sitting at the middle and playing at 50 db, you don't know how the overall control behaves.   This was one of the sticking points of early calculus:  fundamentally, it tried to talk about motion or change at individual points, while every individual point is inherently static.  But if you wiggle it a little, you can quickly see that moving it one way  will get you a little more than 50db, and wiggling the other way will get you a little less, with the change being related to how much you move the lever.  We need to somehow make use of this fact to clearly define continuity.
        To do this a little more rigorously, let's say you know that it's hard to get the volume lever at a precise point, but want to guarantee an error within 1db of that 50% mark, so you want the volume to be between 49 and 51 db.  We should be able to identify a range on the volume slider, in this case a distance of 1mm on either side of that center point, such that as long as you are at least that close to the center, your error will be in that 1db range you want.
        In other words, we have said that if the slider is truly continuous, it should be the case that for any arbitrary point on the slider, if we want our error to be within some designated range, then we can find a distance such that if we're at least that close to our target point, we'll be within that error range.  And this is precisely the epsilon-delta definition:  the Greek letter epsilon can be thought of as specifying the desired error range, and delta represents the distance we're allowed to stray from the target point.
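        If you'd like to play with this numerically, here's a small Python sketch.  It models the ideal slider as the linear function f(x) = x, mapping millimeters directly to decibels-- an assumption for illustration, not something from the episode-- and samples points to check that staying within delta of the target keeps the error under epsilon.

            import random

            def f(x):
                # Idealized volume control: position in mm -> volume in dB.
                # (Assumed linear for this sketch.)
                return x

            def respects_epsilon_delta(f, a, epsilon, delta, trials=10000):
                # Sample points strictly within delta of a and check that
                # the error stays below epsilon.  (Sampling can't prove
                # continuity, but it can expose a counterexample.)
                for _ in range(trials):
                    x = a + random.uniform(-delta, delta)
                    if not abs(x - a) < delta:
                        continue
                    if abs(f(x) - f(a)) >= epsilon:
                        return False
                return True

            # Volume within 1 dB of the 50 dB center?  delta = 1 mm works.
            print(respects_epsilon_delta(f, a=50, epsilon=1, delta=1))  # True
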
        In calculus class, this probably sounded a lot more complicated.  I'm not sure why, when trying to describe basic concepts, many textbooks devolve into a muddle of arcane symbols.  I think math stuff sounds a lot more impressive when you throw in lots of Greek letters, even if this is not optimal for the reader's comprehension.   Sadly, I have seen many high school math teachers who never quite grasped this concept either.  It's really just a precise statement that to discuss continuity and limits, we need to directly relate expected error to distance from each point.  A basic intuition about this error vs distance, or epsilon vs delta, concept is invaluable in many areas of modern mathematics.
        And this has been your math mutation for today. 

  • Epsilon-Delta Definition at Wikipedia
  • Calculus at Wikipedia
  • 127: Garden Of The Mind

        Ironically, just a week after I mentioned him in this podcast, Martin Gardner passed away at the age of 95.  Anyone familiar with his work would not be surprised that I consider him one of my major influences-- in fact, Math Mutation is very much an attempt to recapture the same sense of fun and whimsy in mathematics that Gardner spread for a quarter century in his Mathematical Games column in Scientific American. 
        Gardner was born in 1914 in Oklahoma, studied philosophy in college, and eventually settled down in a writing career by the early 1950s.  Initially, he specialized in children's literature:  he was an editor of Humpty Dumpty magazine, and wrote some articles on paper-folding puzzles for children.  This interest led him to explore "hexaflexagons", complex folded-paper structures that had captured the imagination of a small group of elite math and physics students at Princeton, which included a young Richard Feynman.  His article on the topic in Scientific American led to an invitation to publish a monthly column, which ended up lasting 25 years, until he retired from the column in 1981.
        I occasionally wonder if I'm truly qualified to create a podcast like this, not having a Ph.D. in mathematics.  But I feel better about that issue after reading that Gardner never even made it through a college math class, and this didn't stop him from authoring one of the most successful and influential math columns of the 20th century.  His column opened the eyes of a generation of aspiring mathematicians, scientists, and engineers to concepts like the paintings of M.C. Escher, John Conway's cellular automaton 'Game of Life', tangrams, planar tilings, and fractals.  He claimed that his lack of formal math education actually made him more effective as a writer, since he knew that by the time he could write about a topic, he had boiled it down to concepts even he could understand.
        And his interests were not limited to mathematics.  He also released an annotated edition of Alice in Wonderland, "The Annotated Alice", where he analyzed issues like the origin of the Cheshire Cat and whether Carroll was realistically describing British weather.  He also dabbled in fiction, writing a sequel to "The Wizard of Oz", and in philosophy, in his book "The Whys of a Philosophical Scrivener". 
        But perhaps his most famous non-mathematical achievement is his role in launching the modern skeptical movement, debunking claims of pseudoscience and mysticism, starting back in 1952 with his book "In the Name of Science".  He had a talent for presenting straightforward descriptions of the bizarre ideas he was debunking, and letting the proponents of pseudoscience make themselves look ridiculous.  For example, in "In the Name of Science", he included an essay on Wilhelm Reich's bizarre theory of "Orgone Energy", a newly discovered type of sexual energy that is responsible for turning the sky blue.  Reich's followers angrily accused Gardner of publishing an unfair and libelous article-- until Gardner revealed that before publishing it, he had sent it to Reich himself for approval, without revealing that it was to appear in a book on pseudoscience.  Reich fully approved of the article, aside from some minor corrections, and even complimented Gardner's understanding of his theories.
        Eventually Gardner attracted such a following that in 1996 a biennial conference was created, "Gathering for Gardner", for fans to get together and discuss his influence on their lives.  Nine have been held so far.  The most recent included influential mathematicians John Conway and Stephen Wolfram.  And in the playful spirit of Gardner's columns, each conference includes a debate on the good or evil nature of the conference number.  At this 9th conference, arguments in favor of the number 9 involved its role in the candles on a Jewish menorah, the lives of a cat, and its presence on the jersey of a quarterback in the Super Bowl.  The anti-9 faction brought up the number of circles of Hell, the 9-headed beast guarding Hades, and the recent loss in status of our supposed 9th planet.
        I'd be willing to bet that the Gathering for Gardner will continue for decades, and his influence will still be felt for many years to come.
        And this has been your math mutation for today.

  • Martin Gardner at Wikipedia
  • Gardner obituary
  • 126: Tic Tac What?

        Recently my daughter Sonia has become obsessed with the game of tic-tac-toe.  You know, the classic diversion where two players take turns marking squares in a 3x3 grid, one with Xs and the other with Os, and try to get 3 in a row.  The number of possible games seems large:  since there are 9 possible squares to be chosen first, then 8 for the move after that, and so on, and the game has to last at least 5 turns for someone to make three marks and win, there are at least 9*8*7*6*5, or 15,120 possible games, and significantly more if you take into account the possibility of longer games.  According to Wikipedia, the total comes to 255,168.  But on the other hand, the game board is very symmetrical, and if you start playing you realize the game quickly falls into a small number of standard patterns; Wikipedia calculates there are only 138 unique outcomes.  Everyone listening to this podcast has probably realized long ago that the key to playing is to try to create a situation where you have two possible threes-in-a-row, so your opponent will be unable to block-- but any player can always prevent their opponent from laying such a trap, so the optimal strategy will always lead to a tie.  Is there anything more to say about this game?
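        If you're skeptical of those Wikipedia numbers, here's a short brute-force Python sketch that enumerates every possible game, stopping as soon as someone wins or the board fills:

            # Count all distinct tic-tac-toe games (move sequences), where
            # a game ends the moment someone gets three in a row.
            LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
                     (0,3,6), (1,4,7), (2,5,8),   # columns
                     (0,4,8), (2,4,6)]            # diagonals

            def winner(board):
                for a, b, c in LINES:
                    if board[a] != ' ' and board[a] == board[b] == board[c]:
                        return board[a]
                return None

            def count_games(board, player):
                if winner(board) or ' ' not in board:
                    return 1   # game over: a win or a full-board draw
                total = 0
                for i in range(9):
                    if board[i] == ' ':
                        board[i] = player
                        total += count_games(board, 'O' if player == 'X' else 'X')
                        board[i] = ' '
                return total

            print(count_games([' '] * 9, 'X'))   # prints 255168
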
        Recently Cambridge University Press has been re-releasing the many tomes of Mathematical Games essays written by Martin Gardner for Scientific American, and I was happy to see that the first volume contains an article on this topic.  One surprising point Gardner makes is that while it is true that an optimal opponent can always force a tie, what if you assume your opponent is not quite optimal?  For example, suppose you are playing someone who opens with a 'side move', putting an X in the middle bottom square, rather than the center or a corner.  Normally you have a choice of several squares to place your O in that will prevent a trap and eventually force a tie.  But what if you choose a seemingly silly move, and put your O in the upper right corner?  This leaves the X player a chance to set up an inevitable victory-- but only if he chooses the lower right.  Anywhere else, and he leaves you at worst with a tie; and there are several choices which will let you lay a trap for him, and win.  So if you take a chance that your opponent will mess up, you can give yourself a shot at victory, where otherwise you would at best have a tie!  This lesson probably applies to other parts of life somehow as well.
        Another amusing discussion in Gardner's essay is the many variants of tic-tac-toe that have arisen over the years.  Since ancient times, in cultures as diverse as Greece, China, and Rome, a version was played that became known in England as "Three Men's Morris".  In this variant, once 6 moves have been made, rather than placing additional marks in the last three squares, players take turns shifting an existing mark from its current square to an adjacent one.  Apparently in Book III of Ovid's "Art of Love", he advises women to master this game in order to be more popular among men.  So, if your love life has been having trouble lately, you better get out that tic-tac-toe board and start practicing!
        Some additional variants include the obvious extensions to 4x4 or 5x5 boards.  Naturally, these make the counter-moving versions of the game much more interesting, as your choices are much less restricted.  Other variants are a bit more creative:  one called "toetacktick" is the same game as standard tic-tac-toe, except that the first person to get 3 in a row *loses*.  There are also 3-d variants, where you effectively have a 3x3x3 cubic board.  Surprisingly, the 3x3x3 board makes a very bad game, since unlike the 2-d version, the first player has an optimal strategy that will always lead to victory.  A 4x4x4 board is a little better, allowing for more variety, though in 1988 a complex computer program was developed that can always win this version as well.
        Of course, there is no reason to limit yourself to three dimensions just because of pesky little properties of our universe, is there?  Another variation takes place on a four-dimensional hypercube.  To play it, you need to project it down to a set of three-dimensional or two-dimensional images, in order to diagram what moves you are making on the imagined four-dimensional object.  It can also get a little tricky to reverse your projection and remember which squares are adjacent to which, in order to figure out who won.  Gardner draws a nice diagram accompanying his essay, but I think most non-mathematicians would find this variant a bit challenging.  What makes it worse is that the size of the hypercube needs to be at least 5x5x5x5 to rule out simple strategies that would enable an inevitable victory for the first player.
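        One way to get a feel for why these games need ever-larger boards is to count winning lines.  A well-known counting result-- not from Gardner's essay, but standard in the literature-- says that a board k cells wide in n dimensions has ((k+2)^n - k^n)/2 winning lines; here's a quick sketch:

            # Winning lines in n-dimensional tic-tac-toe on a k-wide board,
            # via the standard formula ((k+2)^n - k^n) / 2.
            def winning_lines(k, n):
                return ((k + 2) ** n - k ** n) // 2

            print(winning_lines(3, 2))   # 8    ordinary tic-tac-toe
            print(winning_lines(3, 3))   # 49   the 3x3x3 game
            print(winning_lines(4, 3))   # 76   the 4x4x4 game
            print(winning_lines(5, 4))   # 888  the 5x5x5x5 hypercube game
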
        Wikipedia also points out some more interesting variants.  One game that can be proven equivalent to tic-tac-toe is the following:  Take turns saying a number between 1 and 9 out loud.  You cannot repeat a number that has already been used.  If you say three numbers that total 15, you win!  You can see this is equivalent by drawing a magic square, a 3x3 square containing the numbers from 1 to 9 such that every 3 in a row totals 15-- in effect, each player is stating the number in the square of their next tic-tac-toe move.  More bizarrely, Quantum Tic Tac Toe combines the standard game with principles of quantum mechanics, where particles may be in a probabilistic "superposition" of multiple locations:  each turn you label multiple squares, and each square can have multiple 'spooky marks'.  In certain cases, your quantum marks from multiple squares collapse into a single "classical" mark on one square, and you win if you cause the collapses in such a way as to get three in a row.  The Wikipedia page in the show notes has a link to a Java applet so you can try this game if you want. 
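        Here's a small Python sketch verifying that equivalence: the triples of distinct numbers from 1 through 9 that sum to 15 turn out to be exactly the eight winning lines of a 3x3 magic square.  (This particular arrangement is the classic Lo Shu square.)

            from itertools import combinations

            # A 3x3 magic square: every row, column, and diagonal sums to 15.
            magic = [[2, 7, 6],
                     [9, 5, 1],
                     [4, 3, 8]]

            # All triples of distinct numbers 1-9 that sum to 15...
            triples = [set(c) for c in combinations(range(1, 10), 3)
                       if sum(c) == 15]

            # ...and the 8 winning lines of the square.
            lines  = [set(row) for row in magic]
            lines += [set(col) for col in zip(*magic)]
            lines += [{magic[i][i] for i in range(3)},
                      {magic[i][2 - i] for i in range(3)}]

            print(len(triples))                       # 8
            print(all(t in lines for t in triples))   # True
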
        Before I get around to teaching Sonia all these variants, though, I should probably try to teach her to complete the row of three Os and win the game when I leave her an obvious opening. 
        And this has been your math mutation for today.



  • Tic Tac Toe at Wikipedia
  • Quantum Tic Tac Toe at Wikipedia
  • Martin Gardner at Wikipedia
  • 124: Everything Is A Rubber Donut

        We've all heard the classical stories of how the premodern people
    really knew the earth was round.  For example, as a ship sailed away,
    you could see its mast receding from view around the curve of the
    earth, and Magellan's circumnavigation of the globe would seem to have
    settled the question.  But is a round earth the only possible
    explanation for these phenomena?  Actually, if you think about it,
    there are many possible forms the Earth could have taken.  For
    example, suppose our planet were a large torus, or donut-shape.
    People would have observed local curvature as they watched departing
    ships, and again it would have been possible to travel west for a
    while and return from the east.  There are a few laws of physics that
    tend to produce round planets, but in this podcast we don't care about
    such trivial details, we're just thinking about the mathematical
    possibilities.  Thinking about such alternate possibilities is what
    led mathematicians to pose the famous problem known as the Poincare
    Conjecture, which was just resolved a few years ago.
        Now suppose our planet had been a torus.  Would there have been
    ways to distinguish that situation from living on a sphere?  Actually,
    there are various fundamental differences between a torus and a
    sphere.  For example, if you circumnavigate the sphere, your path
    partitions it into two halves, and anyone crossing from north of your
    path to the south of it must cross your path at some point.  On the
    torus, if you were 'circumnavigating' the short way into the
    donut-hole and back, your path would not bisect it, and another
    traveller could make it to the other side without crossing your path.  A
    similar test you could do is to stretch a long rubber band along your
    path, and tie it together when you return to your starting point:
    when you are done, can you slide it along the surface and eventually
    contract it to an arbitrarily small size?  On a sphere, you will
    always be able to, but on a torus, if you traveled into the hole and
    back, you will never be able to fully contract the rubber band.  The
    branch of mathematics that studies basic properties like these of
    surfaces is known as topology.
        In topology, mathematicians study the essential features of surfaces
    that do not vary when they are stretched, or "continuously deformed".
    So you can think of a surface as a giant sheet of rubber:  tearing or
    gluing is out of bounds, but you can distort it all you want.  In more
precise terms, two surfaces are homeomorphic, or topologically
equivalent, if there is a continuous 1-1 mapping between them whose
inverse is also continuous.  The classic example is that a coffee cup
is topologically
    equivalent to a donut: both are continuous surfaces with a single
    hole.  You can imagine creating non-equivalent surfaces by adding
    extra 'handles' to a sphere, or punching additional holes in a donut.
    Add one handle to a sphere and you have a travel sphere convenient to
    take to the airport, but with a little stretching it's also equivalent
    to a donut.  With two handles you have something equivalent to a kind
    of figure-8 donut with two holes, and so on.
        A surprising result of 19th-century mathematics was that if you
    look at any closed, compact surface that can exist in 3-D space-- that
    is, without any infinite protrusions or sharp edges-- it is guaranteed
    to be homeomorphic to a sphere with a number of handles, or
    equivalently, to a donut with some number of holes.  So no matter how
    crazy a surface you think you can construct in a 3-D world, in some
    sense it is equivalent to a stretched n-holed donut.  This result
    first appeared in a paper in 1888, though it wasn't rigorously proven
    until the 20th century.
        Now where does the Poincare conjecture fit into all this?  Well,
    first we need to extend our vision by a dimension, and think about
    discussing three-dimensional surfaces in four-dimensional space.  Not
    very easy to visualize, due to our daily lives occurring in our lame
    3-D universe, but the basic concepts are the same.  The question
Poincare asked is essentially this:  just as any closed, compact 2-D
surface without a hole is homeomorphic to a sphere, is any such 3-D
surface in 4-space equivalent to a hypersphere?  I'm
glossing over a few details here, but that's the basic concept.  It
seems like a relatively simple question, but it took our best minds
over a century to solve.
        You may recall how back in episode 12 I talked about the fact that
    this conjecture has now been proven, and about the odd decision of
    Grigori Perelman, the eccentric Russian genius who solved the problem,
    to refuse the Fields Medal.  At the time, I wimped out of trying to
    describe the theorem itself-- but since then I read an excellent book
    on the topic, by Donal O'Shea, which gave me enough basics to attempt
    this podcast.  If you're still confused, an entirely likely
    possibility, I highly recommend taking a look at that book, which is
    linked in the show notes.
        And this has been your math mutation for today.

  • O'Shea book
  • Good intro topology article by E.C. Zeeman
  • Surfaces at Wikipedia
  • Poincare Conjecture at Wikipedia
  • 121: Why Left Is Right And Right Is Left

        Recently I was sitting in front of my 3-year-old daughter, Sonia,
    and helping her to put on her shoes.  I held one up and asked, "Which
    foot does this go on?"  She enthusiastically responded, "The right
    one!"  Then she added "And this is my right hand!", holding up a hand
    that was indeed her right hand.  For a moment I basked in the glow of
    my daughter's impressive intelligence.  But then that bubble was
    deflated as she added, "And that's Daddy's right hand!", pointing to
    my hand which was directly in front of her right hand.  But since I
    was facing her, the hand she pointed to was actually my left hand.  I
    tried to correct her, but she seemed upset.  "Why is right and left
    different for me and you, Daddy?"  I tried to dismiss the question,
    but she insisted.  I started to articulate an answer... but realized
    that I didn't really have a good one.  Her up and down were the same
    as my up and down-- why were our rights and lefts different?
        After thinking about it for a while, and reading a nice online
    article by someone named Eric Schmidt, I was able to clarify my
    thoughts and come up with an answer.   Left and right are simply a
    different type of direction than up and down.  Up and down are clearly
    defined in relation to some reference point, usually the center of the
    earth:  anything going towards that is going down, and anything going
    away from that is going up.  If you stand in the center of a crowded
    Math Mutation fan club party and ask everyone to point up, they will
    all point the same direction.  But left and right are relative
    directions:  if you ask everyone at the same crowded party to point
    to their right, some will point north, others will point east, etc. 
    You need to define both an 'up' and a 'front' first, and only then can
    you define left and right, in relation to those two directions.  So if
    two people are in a room facing different directions, they have a
    different front, which naturally changes their definition of the
    relative directions known as left and right.
        Related to this issue is the question of why, when you look in a
    mirror, your mirror image has its left and right sides reversed.  In
    one sense, we can say that this statement is false: the image does
    *not* have its left and right sides reversed.  When you are looking in
    the mirror, there is only one sentient being in the room, you, and
    your 'up' and 'front' are used to define which way is left and right.
    If you hold up your right arm, then the image is holding up its right
    arm-- where right is defined according to your orientation.   Unless
    you are part of an Alice in Wonderland novel, the image is not a real
    creature, just a set of light waves that happen to be bouncing around
    the room.  If you choose to anthropomorphize the mirror image,
    imagining that there is an actual being in there just like you, then
    you are subconsciously turning around the concept of 'front', since
    your mirror image is facing out of the mirror.  Then, since right and
    left are only relative directions defined in terms of a front, you can
    label the hand that is on your right side to be the image-creature's
    left hand.  But you're really comparing apples to oranges here:  once
    you change the conceptual orientation, you expect the relative
    directions of right and left to be different.
        Unfortunately, all these nice explanations were lost on Sonia.  By
    the time I was ready to answer her question, she was no longer
    interested in the concepts of relative and absolute directions, and
    had moved on to play with her toy tweety-bird.
        And this has been your math mutation for today.

  • Schmidt Article

    117: A Question Of Assumptions

        We commonly view mathematics as a process of starting with simple
    axioms, based on commonsense notions of what the universe must be
    like, and then building up from them to theorems that show their
    various consequences.  You are probably familiar with the most
    classical example of this, Euclidean geometry, where simple notions
    about points and lines build up into surprising conclusions like the
    existence of only five regular polyhedra.  You may also recall from
    earlier podcasts the concept of Non-Euclidean geometry.  By
    slightly modifying Euclid's basic assumption known as the Parallel
    Postulate, which specifies that only one parallel can be drawn to a
    line through an external point, we are able to come up with different
    geometries that are just as self-consistent, but don't happen to
    describe our typical notions of the universe.  But despite having
    originally begun as intellectual exercises, sometimes these
    non-Euclidean geometries turn out to have real applications.  In fact,
    one of the surprises of Einstein's theory of relativity was that our
    universe is not truly Euclidean, and one of these alternate geometries
    is actually a better description.
        Of course, there are many other mathematical models of our
    universe besides Euclidean geometry.  One of the most important is
    what modern physicists call the "Standard Model", a set of
    descriptions of elementary particles and their interactions, along with
    19 related constants, that seems to be an excellent description of the
    behavior of subatomic particles in our universe.  The details are a
    little complex to describe in a podcast, but there's a link in the
    show notes if you want to delve into them in more depth.  Like the
    parallel postulate of Euclidean geometry, the many seemingly arbitrary
    constants of the Standard Model have been disturbing to physicists.
    Is there some reason the numbers have to work out exactly this way?
    Some subscribe to the "anthropic principle", the idea that there
    probably are many universes with many different values for these
    fundamental constants, but the ones we observe in our universe are the
ones that could enable the set of phenomena that lead to intelligent
    life-- otherwise we wouldn't be here to observe them.  Kind of like
    the old paradox about the tree falling in the woods:  if a physical
    constant assumes a value and there's no one there to see it, does it
    make a sound?
        This has led to some interesting lines of speculation in recent
    years:  could there be alternative values for some of these
    fundamental constants that might also lead to life?  Martin Rees of
    the University of Cambridge has theorized that there are many
    inhabitable 'islands' of life-supporting physical laws in the
    multiverse.  Since string theory supports the existence of 10^500
    different universes, this doesn't seem that implausible.  To make this
more complete, a trio of physicists named Roni Harnik, Graham Kribs,
    and Gilad Perez attempted to analyze a universe with one specific
    change:  turning off the 'weak nuclear force', one fundamental force
    of the standard model. 
        The 'weakless universe' they describe is different in some basic
    ways from our own.  The primary nuclear reactions that fuel our stars,
    hydrogen fusing into helium, cannot happen, but with slightly more
    deuterium in its starting state, other types of star-fueling reactions
    could occur.  A type of supernova would still be possible, a
    critical factor since these are what synthesize and disperse the
    heavier elements needed for life, though few elements heavier than
    iron would be likely to appear.  Stars would be much smaller and
    shorter-lived, but some about 2% the size of our sun could survive for
    the billions of years needed to evolve life.    Since the stars would
    be small and cool, planets would have to orbit very close to them.
    The plate tectonics of these planets would be much calmer than
    Earth's, since much of our volcanic activity is ultimately fueled by
    the decay of heavy elements deep within the planet.  To inhabitants of
    weakless planets, their sun would appear gigantic in the sky, but the
    night sky would be nearly empty due to distant stars being so dim.  
    And this is not the only alternate universe.  Anthony Aguirre of
    the University of California discovered another possibly
    life-supporting universe by varying a different constant, the number
    of photons per baryon, and a recent issue of Scientific American
    discusses other possibilities resulting from assuming a different mass
    for quarks.  So, is there an answer to the original question, of why we
    have our particular constants in our universe?  Maybe the anthropic
    principle is still the answer, and we are residents of a small subset
    of universes lucky enough to have life-supporting constants.  Maybe
    there is something fundamental that has not been discovered yet, and
    our laws of physics really are the only ones possible.  To some
    extent, the question is in the realm of pure mathematics, since
    however much fun it is to speculate about them, nobody knows how we
    could ever observe one of these alternate universes in any case.
        And this has been your math mutation for today.

  • Non-Euclidean Geometry at Wikipedia
  • Weakless Universe at Wikipedia
  • Standard Model at Wikipedia
  • Scientific American article on alternate universes
  • New Scientist article on alternate universes