Sunday, March 22, 2015

206: Deceptive Digits

Audio Link

Imagine that you are a crooked corporate manager, and are trying to convince your large financial firm's customers that they own a set of continually growing stocks, when in fact you blew the whole thing investing in math podcasts over a decade ago. You carefully create artificial monthly statements indicating made-up balances and profits, choosing numbers where each digit 1-9 appears as the leading digit about 1/9th of the time, so everything looks random just like real balances would. You are then shocked when the cops come and arrest you, telling you that the distribution of these leading digits is a key piece of evidence. In fact, due to a bizarre but accurate mathematical rule known as Benford's Law, the first digit should have been 1 about 30% of the time, with probabilities trailing off until 9s only appear about 5% of the time. How could this be? Could the random processes of reality actually favor some digits over others?

This surprising mathematical law was first discovered by American astronomer Simon Newcomb back in 1881, in a pre-automation era when performing advanced computations efficiently required a small book listing tables of logarithms. Newcomb noticed that in his logarithm book, the earlier pages, which covered numbers starting with 1, were much more worn than later ones. In 1938, physicist Frank Benford investigated this in more detail, which is why he got to put his name on the law. He looked at tens of thousands of numbers from data sets as diverse as the surface areas of rivers, a large set of molecular weights, 104 physical constants, and all the numbers he could gather from an issue of Reader's Digest. He found the results remarkably consistent: a 1 would be the leading digit about 30% of the time, followed by 2 at about 18%, and gradually trailing down to about 5% each for 8 and 9.

While counterintuitive at first, Benford's Law actually makes a lot of sense if you look at a piece of logarithmic graph paper. You probably saw this kind of paper in high school physics class: it has a large interval between 1 and 2, with shrinking intervals as you get up to 9, and then the interval grows again to represent the beginning of the next order of magnitude. The idea is that this scale can represent both very small and very large values on the same graph, by having the same amount of space represent much larger intervals as the order of magnitude grows. It effectively transforms exponential intervals into linear ones. If you can generate a data set that tends to vary evenly across orders of magnitude, its values will land at random locations on this log scale-- which means the probability of landing in the 1-2 interval is much larger than that of the 2-3 interval, the 3-4 interval, and so on.
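
In fact, the widths of those intervals on the log scale give the exact Benford probabilities: the chance of leading digit d is log base 10 of (1 + 1/d). Here is a minimal Python sketch (my own illustration, not from the episode) that computes the whole distribution:

import math

# Benford's Law: the probability of leading digit d equals the width
# of the interval [d, d+1) on a base-10 logarithmic scale.
for d in range(1, 10):
    p = math.log10(1 + 1/d)
    print(f"leading digit {d}: {100*p:.1f}%")

# Output runs 30.1% for 1, 17.6% for 2, ... down to 4.6% for 9.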

Now, you are probably thinking of the next logical question: why would a data set vary smoothly across several orders of magnitude? Actually, there are some very natural ways this could happen. One way is if you are choosing a bunch of totally arbitrary numbers generated from diverse sources, as in the Reader's Digest example, or the set of assorted physical constants. Another simple explanation is exponential growth. Take a look, for example, at the powers of 2: 2, 4, 8, 16, 32, 64, 128, etc. You can see that for each count of digits, you only pass through a few values before jumping to the next order of magnitude. And whenever doubling carries you to a new number of digits, the result has just crossed a power of 10, so it must begin with a 1. If you write out the first 20 or so powers of 2 and tally the first digits, you will see that we are already not too far off from Benford's Law, with 1s appearing most commonly in the lead.
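
You can check that tally with a couple of lines of Python (again a sketch of my own, not from the episode):

from collections import Counter

# Tally the leading digits of the first 20 powers of 2.
counts = Counter(str(2 ** n)[0] for n in range(1, 21))
for digit in sorted(counts):
    print(f"leading digit {digit}: {counts[digit]} times")

# 1 leads 6 times out of 20 -- already close to Benford's 30%.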

Sets of arbitrarily occurring human or natural data that can span multiple orders of magnitude also tend to share this Benford distribution. The key is that you need to choose a data set that does have this kind of span, encompassing both very small and very large examples. If you look at populations of towns in England, ranging from the tiniest hamlet to London, you will see that they obey Benford's law. However, if you define "small town" as a town with 100-999 residents, creating a category that is restricted to three-digit numbers only, this phenomenon will go away, and the leading digits will likely show a roughly equal distribution.

The most intriguing part of Benford's law is the fact that it leads to several powerful real-life applications. As we alluded to in the intro to this topic, Benford's Law is legally admissible in cases of accounting fraud, and can often be used to ensnare foolish fraudsters who haven't had the foresight to listen to Math Mutation. (Or who are listening too slowly and haven't reached this episode yet.) A link in the show notes goes to an article that demonstrates fraud in several bankrupt U.S. municipalities based on their reported data not conforming to Benford's law. It has also been claimed that this law reveals fraud in Iran's 2009 election data, and in the economic data Greece used to enter the Eurozone. It has even been proposed that this could be a good test for detecting scientific fraud in published papers. Naturally, however, once someone knows about Benford's law they can use it to generate fake data that obeys it, so compliance with this law doesn't prove the absence of fraud.
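
To make the fraud test concrete, here is a hedged Python sketch of how an auditor might score a batch of reported figures against Benford's distribution with a simple chi-square statistic (the function name and the whole-dollar assumption are my own; a real audit would use far more data and a proper significance test):

import math
from collections import Counter

def benford_chi_square(amounts):
    # Assumes positive integer amounts, e.g. balances in whole dollars.
    n = len(amounts)
    counts = Counter(str(a)[0] for a in amounts)
    chi2 = 0.0
    for d in "123456789":
        expected = n * math.log10(1 + 1 / int(d))
        observed = counts.get(d, 0)
        chi2 += (observed - expected) ** 2 / expected
    return chi2  # large values suggest a poor fit to Benford's distribution

A crooked manager's uniformly distributed leading digits would produce a conspicuously large score here, while genuine balances spanning several orders of magnitude would not.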

So, next time you are looking at a large data set in an accounting table, scientific article, or newspaper story, take a close look at the first digits of all the numbers. If you don't see the digits appearing in the proportions identified by Benford, you may very well be seeing a set of made-up numbers.

And this has been your math mutation for today.

 


References:

Sunday, March 1, 2015

205: The Converse of a CEO

Audio Link

Ever since I was a small child, I aspired to grow up to become a great Rectangle.    When I was only six years old, my father took me to meet one of the leading Rectangles of New Jersey, and I will always remember his advice:  "Be sure to have four sides and four angles."   All through my teenage years, I worked on developing my four sides and four angles, as I read similar advice in numerous glossy magazines aimed at Rectangle fans.     In high school, my guidance counselor showed me many nice pamphlets with profiles of famous Rectangles who had ridden their four sides and four angles to success.   Finally, soon after I turned 18, I took a shot at realizing my dream, lining up for many hours to audition for a spot on the popular TV show "American Rectangle".    But when I made it up onto the stage, I was mortified to be met by a chorus of laughter, and ended up as one of the foolish dorks that Simon Cowell makes fun of on the failed-auditions episode.    With all my years of effort, I had not become a Rectangle, but a mere Trapezoid.

OK, that anecdote might be slightly absurd, but think for a moment about the premise.   Suppose you want to become successful in some difficult profession or task.   A natural inclination is to find others who have succeeded at it, and ask them for advice.   If you find something that a large proportion of those successful people claim to have done, then you conclude that following those actions will lead you to success.     Most of us don't actually aspire to become geometric shapes, but you can probably think of many miscellaneous pieces of advice you have heard in this area:   practicing many hours, waking up early every day, choosing an appropriate college major, etc.    I started reflecting on this concept after looking at a nice career planning tool aimed at high school students, which lets them select professions they are interested in, and then read about attributes and advice from those successful in them.

Unfortunately, this kind of advice-seeking from the successful is actually acting out a basic mathematical fallacy.    In simple logic terms, an implication statement "A implies B" is logically different from its converse, "B implies A".   Neither statement logically follows from the other:   "A implies B" does not mean that "B implies A".   When we look at the case of rectangles, this seems fairly easy to understand:   the condition A of having four sides and four angles does NOT imply the consequent B, that the object is a rectangle.   By observing that all rectangles have these characteristics, we are learning the opposite:   being a rectangle implies that the object has four sides and four angles.   This is important to recognize because there may be infinitely many non-rectangle objects that meet this condition, and actual rectangles might represent only a small portion of the possibilities.     If we wanted to isolate conditions that imply something is a rectangle, we would need to look at both rectangles and non-rectangles, to identify uniquely rectangular conditions, such as having four right angles.    Once we have a set of properties that pertain only to rectangles and not to non-rectangles, then we might be able to come up with an intelligent set of preconditions.
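
For the logically inclined, a few lines of Python (my own illustration) can enumerate every truth assignment and show the implication and its converse coming apart:

# "A implies B" is false only in the one case where A is true and B is false.
def implies(p, q):
    return (not p) or q

for A in (False, True):
    for B in (False, True):
        print(f"A={A!s:5} B={B!s:5}  A->B: {implies(A, B)!s:5}  B->A: {implies(B, A)!s:5}")

# The rows (A=False, B=True) and (A=True, B=False) disagree between the
# two columns, so neither statement follows from the other.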

Sadly, real life does not always offer us geometric shapes.   When we substitute a real aspiration people might have, too many try to infer the keys to success just from looking at the successful.      Without thinking through this basic logical fallacy about a statement and its converse-- "A implies B" does not mean "B implies A"--  many people waste lots of time and money following paths where their likelihood of success is minimal.     A common case among today's generation of middle class kids is the hopeful young writer who decides to major in English.   An aspiring writer might see that many successful writers have degrees in English, without taking the time to note that the proportion of English majors who become successful writers is infinitesimally small.    The statement "If you are a successful writer today, you probably have a college degree in English" does not imply "if you earn a degree in English, you will probably become a successful writer."       In contrast, if they look at computer engineering, they might see a similar profile among the most successful-- but will also find that, unlike in English, a huge majority of computer engineering majors do end up with a well-paying job in that field upon graduation.    So in that case, the implication really does work both ways-- but this has to be verified separately, since the statement and its converse are independent.
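
To see the asymmetry in rough numbers, here is a hypothetical back-of-envelope in Python (every figure below is invented purely for illustration, not real labor statistics):

# Suppose 60% of the 1,000 currently successful writers majored in English,
# while 4,000,000 people hold English degrees.  Then:
successful_writers = 1_000
english_degree_holders = 4_000_000
p_english_given_success = 0.6

successful_english_writers = successful_writers * p_english_given_success
p_success_given_english = successful_english_writers / english_degree_holders
print(f"P(success | English degree) = {p_success_given_english:.4%}")   # 0.0150%

The conditional probability in one direction can be large while its converse is vanishingly small.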

Even famous business consultants are subject to this fallacy.   Have you heard of the influential 1980s business book "In Search of Excellence", where the authors closely looked at a set of successful companies to find out what characteristics they were built upon?      That became one of the all-time best-selling business books, and many leaders followed its sweeping conclusions, hoping to someday make their companies as successful as NCR, Wang, or Data General.     But some have criticized the basic premise of this research for this same basic flaw:  trying to determine the conditions of success by looking only at the successful will inherently get you the wrong direction of implication.   It may enable you to find a set of preconditions that the successful all share, while those same preconditions are met by endless numbers of failed companies.   You really need to study both success and failure to find conditions that uniquely imply success.

So, when you or your children are thinking about their future, look carefully at all the available information, not just at instances of success.   Always keep in mind that a logical statement "A implies B" is truly distinct from its converse "B implies A", and take this into account in your decision making.
And this has been your math mutation for today.




References:


Sunday, January 18, 2015

204: What Happened To Grigori Perelman?

Audio Link

Before we start, I'd like to thank listeners katenmkate and EdB, who recently posted nice reviews on iTunes. I'd also like to welcome our many new listeners-- from the hits on the Facebook page, I'm guessing a bunch of you out there just got new smartphones for Xmas and started listening to podcasts. Remember, posting good reviews on iTunes helps spread the word about Math Mutation, as well as motivating me to get to work on the next episode.

Anyway, on to today's topic. We often think of mathematical history as something that happened far in the past, rather than something that is still going on. This is understandable to some degree, as until you get to the most advanced level of college math classes, you generally are learning about discoveries and theorems proven centuries ago. But even since this podcast began in 2007, the mathematical world has not stood still. In particular, way back in episode 12, we discussed the strange case of Grigori Perelman, the Russian genius who had refused the Fields Medal, widely viewed as math's equivalent of the Nobel Prize. Perelman is still alive, and his saga has just continued to get more bizarre.

As you may recall, Grigori Perelman was the first person to solve one of the Clay Institute's celebrated "Millennium Problems", a set of major problems identified by leading mathematicians in the year 2000 as key challenges for the 21st century. Just two years later, Perelman posted a series of internet articles containing a proof of the Poincare Conjecture, a millennium problem involving the shapes of certain multidimensional spaces. But because he had posted it on the internet instead of in a refereed journal, there was some confusion about when or how he would qualify for the prize. And amid this controversy, a group of Chinese mathematicians published a journal article claiming they had completed the proof, apparently claiming credit for themselves for solving this problem. The confusion was compounded by the fact that so few mathematicians in the world could fully understand the proof to begin with. Apparently all this bickering left a bitter taste in Perelman's mouth, and even though he was selected to receive the Fields Medal, he refused it, quit professional mathematics altogether, and moved back to Russia to quietly live with his mother.

That was pretty much where things stood at the time we discussed Perelman in podcast 12. My curiosity about his fate was revived a few months ago when I read Masha Gessen's excellent biography of Perelman, "Perfect Rigor: A Genius and the Mathematical Breakthrough of the Century". It gives a great overview of Perelman's early life, where he became a superstar in Russian math competitions but still had to contend with Soviet anti-semitism when moving on to the university level. It also continues a little beyond the events of 2006, describing a somewhat happy postscript: eventually the competing group of Chinese mathematicians retitled their paper "Hamilton–Perelman's Proof of the Poincaré Conjecture and the Geometrization Conjecture", explicitly removing any attempt to claim credit for the proof, and recasting their contribution as merely providing a more readable explanation of Perelman's proof. Sadly, this did not cause Perelman to rejoin the mathematical community: he has continued to live in poverty and seclusion with his mother, remaining retired from mathematics and refusing any kind of interviews with the media.

As you would expect, this reclusiveness just served to pique the curiosity of the world media, and there were many attempts to get him to give interviews or return to public life. Even when researching her biography, Masha Gessen was unable to get an interview. In 2010, the Clay Institute finally decided to officially award him the million-dollar prize for solving the Poincare Conjecture. There had been some concern that his refusal to publish in a traditional journal would disqualify him for the prize, but the Institute seemed willing to modify the rules in this case. Still, Perelman refused to accept the prize or rejoin the mathematical community. He claimed that this was partially because he thought Richard Hamilton, another mathematician whose work he had built upon for the proof, was just as deserving as he was. He also said that "the main reason is my disagreement with the organized mathematical community. I don't like their decisions, I consider them unjust." Responding to a persistent reporter through the closed door of his apartment, he later clarified that he didn't want "to be on display like an animal in a zoo." Even more paradoxically, he added "I'm not a hero of mathematics. I'm not even that successful." Perhaps he just holds himself and everyone else to impossibly high standards.

Meanwhile, Perelman's elusiveness to the media has continued. In 2011 a Russian studio filmed a documentary about him, again without cooperation or participation from Perelman himself. A Russian journalist named Alexander Zabrovsky claimed later that year to have successfully interviewed Perelman and published a report, but experienced analysts, including biographer Masha Gessen, poked that report full of holes, pointing out various unlikely statements and contradictions. One critic provided the amusing summary "All those thoughts about nanotechnologies and the ideas of filling hollowness look like rabbi's thoughts about pork flavor properties." A more believable 2012 article by journalist Brett Forrest describes a brief, and rather unenlightening, conversation he was able to have with Perelman after staking out his apartment for several days and finally catching him while the mathematician and his mother were out for a walk.

Probably the most intriguing possibility here is that Perelman has not actually abandoned mathematics, but has merely abandoned the organized research community, and is using his seclusion to quietly work on the problems that truly interest him. Fellow mathematician Yakov Eliashberg claimed in 2007 that Perelman had privately confided that he was working on some new problems, but did not yet have any results worth reporting. Meanwhile, Perelman continues to ignore the world around him, as he and his mother quietly live in their small apartment in St Petersburg, Russia. Something tells me that this is not quite the end of the Perelman story, or of his contributions to mathematics.

And this has been your math mutation for today.

 

References:

Saturday, December 27, 2014

203: Big Numbers Upside Down

Audio Link

When it comes to understanding big numbers, our universe just isn't very cooperative.  Of course, this statement depends a bit on your definition of the word "big".   The age of the universe is a barely noticeable 14 billion years, or 1.4 times 10 to the 10th power.   The radius of the observable universe is estimated as 46 billion light years, around 4.4 times 10 to the 26th power meters.  The observable universe is estimated to contain a number of atoms equal to about 10 to the 80th power, or a 1 followed by 80 zeroes.   Now you might say that some of these numbers are pretty big, by your judgement.   But still, these seem pretty pathetic to me, with none of their exponents even containing exponents.   It's fairly easy to write down a number that's larger than any of these without much effort, and we have discussed such numbers in several previous podcasts.  While it's easy to come up with mathematical definitions of numbers much larger than these, is there some way we can relate even larger numbers to physical realities?   Internet author Robert Munafo has a great web page up, linked in the show notes, with all kinds of examples of significant large numbers.
   
There are some borderline examples of large numbers that result from various forms of games and amusements.   For example, the number of possible chess games is estimated as 10 to the 10 to the 50th power.   Similarly, if playing the "four 4s" game on a calculator, trying to get the largest number you can with four 4s, you can reach 4 to the 4 to the 4 to the 4th power, which works out to about 10 to the (8 times 10 to the 153rd power).  It can be argued, however, that numbers that result from games, artificial exercises created by humans for their amusement, really should not count as physical numbers.   These might more accurately be considered another form of mathematical construct.
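
If you want to check that four 4s claim without a galaxy-sized calculator, logarithms do the trick. Here is a minimal Python sketch (my own, not from Munafo's page):

import math

# 4^(4^(4^4)) is far too big to write out, but logs reveal its size.
tower_exponent = 4 ** (4 ** 4)   # 4^256: an exact Python int, about 1.34 x 10^154
log10_value = tower_exponent * math.log10(4)
print(f"4^4^4^4 is about 10^({log10_value:.2e})")   # roughly 10^(8.07 x 10^153)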
   
At a more physical level, some scientists have come up with some pretty wild sounding numbers based on assumptions about what goes on in the multiverse, beyond what humans could directly observe, even in theory.   These are extremely speculative, of course, and largely border on science fiction, though based at some level in modern physics.  For example, one estimate is that there are likely 10 to the 10 to the 82nd power universes existing in our multiverse, though this calculation varies widely depending on initial assumptions.   In an even stranger calculation, physicist Max Tegmark has estimated that if the universe is infinite and random, then there is likely another identical copy of our observable universe within 10 to the 10 to the 115th power meters.   Munafo's page contains many more examples of such estimates from physics.
   
My favorite class of these large "physical" numbers is the use of probabilities, as discussed by Richard Crandall in his classic 1997 Scientific American article (linked in the show notes).   There are many things that can physically happen whose infinitesimal odds dwarf the numbers involved in any physical measurement we can make of the universe.   Naturally, due to their infinitesimal probabilities, these things are almost certain never to actually happen, so some might argue that they are just as theoretical as artificial mathematical constructions.  But I still find them a bit more satisfying.  For example, a parrot placed in front of a typewriter for a year would have odds of only about 1 in 10 to the 3 millionth power of pecking out a classic Sherlock Holmes novel.   Taking on an even more unlikely event, what is the probability that a full beer can on a flat, motionless table will suddenly flip on its side due to random quantum fluctuations sometime in the next year?  Crandall estimates this as 1 in 10 to the 10 to the 33rd.   In the same neighborhood is the chance of a mouse surviving a week on the surface of the sun, due to random fluctuations that locally create a comfortable temperature and atmosphere:  1 in 10 to the 10 to the 42nd power.  Similarly, your odds of suddenly being randomly and spontaneously teleported to Mars are 10 to the 10 to the 51st power to 1.   Sorry, Edgar Rice Burroughs.
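
As a crude back-of-envelope in the spirit of Crandall's parrot (the 44-key typewriter and 500,000-character novel here are my own assumptions; his published figure rests on different ones, including a year's worth of repeated attempts):

import math

keys = 44         # assumed keys on the typewriter
chars = 500_000   # assumed length of the novel in characters
log10_odds = chars * math.log10(keys)   # every keystroke must be right: p = (1/44)^500000
print(f"odds of one perfect attempt: about 1 in 10^{log10_odds:,.0f}")

# About 1 in 10^822,000 under these assumptions -- the same double-exponential
# flavor as the quoted 1 in 10^3,000,000.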
   
So, it looks like tiny probabilities might be the best way to envision the vastness of truly large numbers, and escape from the limitations of our universe's puny 10 to the 80th power number of atoms.  If you aren't spontaneously teleported to Mars, maybe you can think of even more cool examples of large numbers involved in tiny probabilities that apply to our physical world.
   
And this has been your Math Mutation for today.




References:

Sunday, November 23, 2014

202: Psychochronometry

Audio Link

Before we start, I'd like to thank listener Stefan Novak, who made a donation to Operation Gratitude in honor of Math Mutation.  Remember, you can get your name mentioned too, by donating to your favorite charity and sending me an email about it!

Now, on to today's topic.  I recently celebrated my 45th birthday.  It seems like the years are zipping by now-- it feels like just yesterday when I was learning to podcast, and my 3rd grader was the baby in the cover photo.   This actually ties in well with the fact that I've recently been reading "Thinking in Numbers", the latest book by Daniel Tammet.   You may recall that Tammet, who I've featured in several previous episodes, is known as the "Rosetta Stone" of autistic savants, as he combines Rain Man-like mathematical talents with the social skills to live a relatively normal life, and write accessible popular books on how his mind works.    This latest book is actually a collection of loosely autobiographical essays about various mathematical topics.   One I found especially interesting was the discussion of how our perceptions of time change as we age.
     
I think most of us believe that when we were young, time just seemed longer.   The 365 days between one birthday and the next were an inconceivably vast stretch of time when you were 9 or 10, while at the age of 45, it does not seem nearly as long.   Tammet points out that there is a pretty simple way to explain this using mathematics:  when you are younger, any given amount of time simply represents a much larger proportion of your life.   When you are 10, the next year you experience is equal to 10% of your previous life, which is a pretty large chunk.   At my age, the next year will only be 1/45th of my life, or about 2.2%, which is much less noticeable.   So it stands to reason that as we get older, each year will prove less and less significant.   This observation did not actually originate with Tammet-- it was first pointed out by the 19th century philosopher Paul Janet, a professor at the Sorbonne in France.
   
Following up on the topic, I found a nice article online by an author named James Kenney, which I have linked in the show notes.  He mentions that there is a term for this analysis of why time seems to pass by at different rates:  "psychochronometry".   Extending the concept of time being experienced proportionally, he points out that we should think of years like a musical scale:  in music, every time we move up one octave in pitch, we are doubling the frequency.   Similarly, we should think of our lives as divided into "octaves", with each octave perceived as roughly equivalent in subjective time to the previous one.   So the times from ages 1 to 2, 2 to 4, 4 to 8, 8 to 16, 16 to 32, and 32 to 64 are each an octave, and each is experienced as roughly equivalent by the average human.
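
The arithmetic behind that claim is easy to sketch in Python (my own formulation, not Kenney's): if each moment is felt in proportion to 1 over your current age, then the subjective length of the span from age a to age b integrates to the natural log of b/a, which is identical for every doubling.

import math

def subjective_length(a, b):
    """Perceived duration from age a to age b if each year is weighted by 1/age."""
    return math.log(b / a)

for a, b in [(1, 2), (2, 4), (4, 8), (8, 16), (16, 32), (32, 64)]:
    print(f"ages {a:2d} to {b:2d}: {subjective_length(a, b):.3f}")

# Every octave prints ln(2) = 0.693 -- equal subjective lengths.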
   
This outlook is a bit on the bleak side though:  it makes me uneasy to reflect on the fact that, barring any truly extraordinary medical advances in the next decade or two, I'm already well into the second-to-last octave of my life.  Am I really speeding down a highway to old age with my foot stuck on the accelerator, and time zipping by faster and faster?   Is there anything I can do to make it feel like I have more time left?   Fortunately, a little research on the web reveals that there are other theories of the passage of time, which offer a little more hope.
   
In particular, I like the "perceptual theory", the idea that our perception of time is in proportion to the amount of new things we have perceived during a time interval.  When you are a child, nearly everything is new, and you are constantly learning about the world.   As we reach adulthood, we tend to settle down and get into routines, and learning or experiencing something truly new becomes increasingly rare.   Under this theory, the lack of new experiences is what makes time go by too quickly.  And this means there *is* something you can do about it-- if you feel like things are getting repetitive, try to arrange your life so that you continue to have new experiences.
   
There are many common ways to address this problem:  travel, change your job,  get married, have a child, or strive for the pinnacle of human achievement and start a podcast.  If time or money are short, there are also simple ways to add new experiences without major changes in your life.  My strong interest in imaginary and virtual worlds has been an endless source of mirth to my wife.  I attend a weekly Dungeons and Dragons game, avidly follow the Fables graphic novels, exercise by jogging through random cities in Wii Streets U, and love exploring electronic realms within video games like Skyrim or Assassins Creed.  You may argue that the unreality of these worlds makes them less of an "experience" than other things I could be doing-- but I think it's hard to dispute the fact that these do add moments to my life that are fundamentally different from my day-to-day routine.   One might argue that a better way to gain new experiences is to spend more time travelling and go to real places, but personally I would sacrifice 100 years of life if it meant I would never have to deal with airport security again, or have to spend 6 hours scrunched into an airplane seat designed for dwarven contortionists.
   
So, will my varied virtual experiences lengthen my perceived life, or am I ultimately doomed by Janet's math?   Find me in 50 years, and maybe I'll have a good answer.  Or maybe not-- time will be passing too quickly by then for me to pay attention to silly questions.
   
And this has been your math mutation for today.
     
References:



Thursday, October 23, 2014

201: A Heap Of Seagulls


Audio Link

Before we start, I'd like to thank listener RobocopMustang, who wrote another nice review on iTunes.  Remember, you too can get your name mentioned on the podcast, by either writing an iTunes review, or sending a donation to your favorite charity in honor of Math Mutation and emailing me about it.

Anyway, on to today's topic.  Recently I was thinking about various classic mathematical and philosophical paradoxes we have discussed.   I was surprised to notice that we have not yet gotten to one of the most well-known classical paradoxes, the Heap Paradox.   This is another of the many paradoxes described in ancient Greece, originally credited to Eubulides of Miletus, a pupil of Euclid of Megara (the philosopher, not the famous geometer).
   
The Heap Paradox, also known to snootier intellectuals as the Sorites Paradox (from soros, the Greek word for heap), goes like this.   We all agree we can recognize the concept of a heap of sand:  if we see a heap, we can look at the pile of sand and say "that's a heap!".   We all agree that removing one grain of sand from a heap does not make it a non-heap, so we can easily remove one grain, knowing we still have a heap.  But if we keep doing this for thousands of iterations, eventually we will be down to 1 grain of sand.  Is that a heap?  I think we would agree the answer is no.  But how did we get from having a heap to having a non-heap, when each step consisted of an operation that preserved heap-ness?
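
In code, the paradoxical induction looks something like this (a toy sketch of my own):

# The sorites induction: start with an undisputed heap, and repeatedly apply
# the premise that removing one grain preserves heapness.
grains = 10_000
is_heap = True
while grains > 1:
    grains -= 1
    # by the premise, is_heap remains True at every step

print(grains, is_heap)   # prints "1 True" -- a single grain is officially a heap?!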
   
One reason this paradox is so interesting is that it applies to a lot of real-life situations.   We can come up with a similar paradox if describing a tall person, and continually subtracting inches.   Subtracting a single inch from a tall person would not make him non-tall, would it?   But if we do it repeatedly, at some point he has to get short, before disappearing altogether.   Similarly, we can take away a dollar from Bill Gates without endangering his status of "rich", but there must be some level where, if enough people (probably antitrust lawyers) do it enough times, he would no longer be rich.   We can do the same thing with pretty much any adjective that admits some ambiguity in the boundaries of its definition.
   
Surprisingly, the idea of clearly defining animal species is also subject to this paradox, as Richard Dawkins has pointed out.  We tend to think of animal species as discrete and clearly divided, but that's just not the case.  The best example from the animal kingdom may be the concept of "Ring Species".   These are species of animals that consist of a number of neighboring populations, forming a ring.   At one point on the ring are two seemingly distinct species.  But if you start at one of them, it can interbreed with a neighbor to its right, and that neighbor can interbreed with the next, and so on... until it reaches all the way around, forming a continuous set of interbreeding pairs between the two distinct species.  
   
For example, in Great Britain there are two species of herring gulls, the European and the Lesser Black-Backed, which do not interbreed.   But the European Herring Gull can breed with the American Herring Gull to its west, which can breed with the East Siberian Herring Gull, whose western members can breed with the Heuglin's Gull, which can breed with the Lesser Black-Backed Gull, which was seemingly a distinct species from the European gull we started with.   So, are we discussing several distinct gull species, or is this just a huge heap of gulls of one species?   It's a paradox.
  
Getting back to the core heap concept, there are a number of classic resolutions to the dilemma.   The most obvious is to just draw an arbitrary boundary:  for example, 500 grains of sand or more is a heap, and anything fewer is a non-heap.  This seems a bit unsatisfying though.   A more complicated version of this method mentioned on the Wikipedia page is known as "hysteresis", allowing an asymmetric variation in the definition, kind of like how your home air conditioner works.  When subtracting from the heap, it may lose its heapness at a threshold like 500.  But when adding grains, it doesn't gain the heap property again until it has 700.  I'm not convinced this version adds much philosophically though, unless your energy company is billing you each time you redefine your heap.
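
Here is a minimal Python sketch of that hysteresis idea (the 500 and 700 thresholds are the arbitrary ones from the example above):

class SandPile:
    """Heapness with hysteresis: lost below 500 grains, regained only at 700."""
    def __init__(self, grains):
        self.grains = grains
        self.is_heap = grains >= 700

    def add_grain(self):
        self.grains += 1
        if self.grains >= 700:   # regained only at the higher threshold
            self.is_heap = True

    def remove_grain(self):
        self.grains -= 1
        if self.grains < 500:    # lost at the lower threshold
            self.is_heap = False

A pile hovering between 500 and 699 grains simply keeps whatever status it last had, just as a thermostat with a dead band avoids rapidly toggling your furnace.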
   
A better method is to use multivalued logic, where we say that any pile has some degree of heapness which continuously varies:  over some threshold it is 100%, then as we reduce the size the percentage of heapness gradually goes down, reaching 0 at one grain.   A variant of this is to say that you must poll all the observers, and average their judgement of whether or not it's a heap, to decide whether your pile is worthy of the definition.
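
A sketch of that multivalued idea in the same spirit (the one-grain and 1000-grain endpoints are my own arbitrary choices):

def heapness(grains):
    """Degree of heapness, from 0.0 at one grain up to 1.0 at 1000 grains or more."""
    if grains <= 1:
        return 0.0
    if grains >= 1000:
        return 1.0
    return (grains - 1) / 999

Instead of asking "is it a heap?", we ask "how much of a heap is it?", and the paradoxical boundary dissolves into a smooth ramp.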
   
If you're a little more cynical, there is the nihilistic approach, where you basically unask the question:  simply declare it out-of-bounds to discuss any concept that is not well-defined with clean boundaries.   Thus, we would say the real problem is the use of the word "heap", which is not precise enough to admit philosophical discussion.   There are also a couple of more involved philosophical resolutions discussed in online sources, which seem a bit technical to me, but you can find at the links in the show notes.
   
Ultimately, this paradox is pointing out the problem of living in a world where we like things to have discrete definitions, always either having or not having a property we ascribe to it.  It is almost always the case that there are shades of grey, that our clean, discrete points may reach each other by a continuous incremental path, and thus not be as distinct as we think. 
  
And this has been your math mutation for today.


References:

Sunday, September 28, 2014

200: Answering Common Listener Questions

Audio Link

Wow, I can't believe we've made it to 200 episodes.  Thanks everyone for sticking with me all this time, or at least for discovering this podcast and not immediately deleting it.   Actually, if we're being technical, this is the 201st episode, since I started at number 0.   But we all suffer from the common human fascination with big round numbers, so I think reaching number 200 is still something to celebrate.  

Finding a sufficiently momentous topic for this episode has been a challenge.  Wimping out somewhat, I think a good use of it is to answer a number of listener questions I have received by email over the past 7 years in which I've been podcasting.   Of course I have tried to send individual answers to each of you who has emailed me-- please continue emailing me at erik (e-r-i-k) at mathmutation.com-- but on the theory that each emailer represents a large number of listeners who are too busy or lazy to email, they are probably worth answering here.

1.  Who listens to this podcast?   According to my ISP, I've been getting about a thousand downloads per week on average.   Oddly, a slight majority seem to be from China, a country from which I've never received a listener email, as far as I can tell.   Chinese listeners, please email me to say hi!   Or perhaps the Communist spies there have determined that my podcast is of strategic importance to the United States and needs to be monitored.  If that's the case, I'll look forward to an elevated status under our new overlords after the invasion.  Assuming, that is, that they don't connect me to any of my non-podcast political writings, and toss me into the laogai instead.

2.  How is this podcast funded?    Well, you can probably guess from the average level of audio quality that I'm not doing this from a professional studio; just a decent laptop microphone, plus some cheap/shareware utilities including the Podcast RSS Buddy and the Audacity sound editor, along with a cheap server account at 1 and 1 Internet.  So I actually don't spend a noticeable amount on the podcast.  That's why rather than asking for donations, I ask that if you like the podcast enough to motivate you, you donate to your favorite charity in honor of Math Mutation and email me.
On a side note, I have fantasized about trying to amp up the quality and frequency and make this podcast a profitable venture.   Many of us small podcasters were inspired a few years ago when Brian Dunning of the Skeptoid podcast quit his day job and announced he was podcasting full time.   However, that dream died somewhat when it was revealed earlier this year that Brian's lifestyle was partially funded by some kind of internet fraud, and he was sentenced to a jail term.

3.  Why don't you release episodes more often, and/or record longer episodes?  First of all, thanks for the vote of confidence, and I'm glad you're enjoying the podcast enough to want more!  During my first year or two of Math Mutation, I had lots of great ideas in the back of my mind, so coming up with topics & preparing episodes was pretty easy.   But now I'm at a point where I've cleared the backlog in my brain, and I have to think pretty hard to come up with cool topics, and spend a nontrivial amount of time researching each one before I can talk about it.  This is also combined with many non-podcast responsibilities in my daily life, including a wife and daughter who somehow like to hang out with me, and an elected position on the local school board, at the 4th largest district in Oregon.   So I'm afraid I won't be able to increase the pace anytime soon.  Perhaps in a few years, after I've been tarred, feathered, and removed from public office, and my daughter becomes a teenager and hates me, I'll have a bit more podcasting time though.

4.  Can you help me solve this insanely difficult math problem:  (insert problem here)?   I've received a number of queries of this form.   I'm flattered that my podcasting persona has led you to believe I'm a mathematical genius of some kind, but to clarify, I would put myself more in the category of an interested hobbyist, nowhere near the level of a professional mathematician.   I did earn a B.A. in math many years ago, but my M.S. is in computer science, and I work as an engineer, using and developing software that applies known mathematical techniques to practical issues in chip design at Intel.   If you're a math or science major or graduate student at a decent college, and have a problem that is challenging for you, it's probably way over my head!   So if you're one of the numerous people who sent me a question of this kind and didn't get a good answer, don't think that I'm withholding my brilliant insights; you've probably just left me totally baffled.   And you're probably way more likely to solve it than I am anyway.

5. What other podcasts do you listen to?   To start with, I don't listen to other math podcasts.  This is partially because I'm afraid I'll be intimidated at how much more professional they are.   But mostly I'm worried that I'll subconsciously remember them and accidentally repeat the same topic in my own podcast, as humans are prone to do.    I do avidly listen to podcasts in other genres though.   As you might suspect from some of my topics, I'm a big fan of the world of "science skepticism" podcasts, such as Skeptoid, QuackCast, Skeptic's Guide to the Universe, and Oh No Ross and Carrie.   Those are always fun, although occasionally a bit pretentious in their claims to teach other people how to think.   I'm also a bit of a history buff, really enjoying Robin Pierson's "History of Byzantium", Harris and Reilly's "Life of Caesar", and the eclectic "History According to Bob".   Rounding out my playlist is the odd Australian comedy/culture podcast "Sunday Night Safran", where a Catholic priest and a Jewish atheist have a weekly debate on cultural issues.

Anyway, I think those are probably the most common questions I have received from listeners.   I always love to hear from you though, so don't hesitate to email me if you have more ideas, questions, or requests for the podcasts.   If I receive enough emails, I might not wait until episode 400 before doing another Q&A. 

And this has been your math mutation for today.

References: