Friday, September 30, 2016

223: Think With Both Your Brains

Audio Link

One of the most basic questions in mathematics is: how do you solve problems in general? This is traditionally why students tremble in terror at “story problems”— instead of being asked to mimic well-known algorithms, as in the majority of their school exercises, suddenly they are in a situation where they are not presented with a clear path to the answer. Yet problem solving is one of the most critical skills you can learn in mathematics classes, and many of us, especially those in science and engineering fields, spend a lifetime continuing to sharpen our skills in this area. Even in non-math-based professions, people often encounter dilemmas where the solution is not obvious. So I think it’s worth taking a look at ways to improve our problem-solving abilities in general. And surprisingly, modern neuroscience can provide us some strange methods to try when simple linear reasoning fails us.

Probably the most famous book on this topic is “How To Solve It” by the late Stanford math professor George Pólya. Pólya lays out a general 4-step process for approaching any problem, in a book full of useful examples from basic areas of algebra and geometry. First, you need to understand the problem: what information is given, and what are the unknowns, the goals, and the restrictions that apply? Second, find a way to connect the data and the unknowns, in order to plan your approach. If this is not obvious, look for related problems, or a smaller subset of the problem that you can solve. Third, carry out your plan, taking care to show that each step is correct. And finally, examine the solution: is there a way to independently check the result, or use it for other problems?

While Pólya’s method is very useful, something about it seems a bit too simple. After all, if it is easy to understand a problem, plan the solution, and carry it out, why are there so many unsolved problems out there? Why hasn’t someone definitively solved each of the Millennium Prize Problems, like the P=NP question we discussed in podcast 13, and taken the million-dollar prize? I think one key is that a lot of problems require a flash of intuition, or a conceptual leap that is very difficult to arrive at by linear reasoning. And that’s where the neuroscience comes in. Recently I’ve been reading an intriguing book by Andy Hunt called “Pragmatic Thinking and Learning”, which offers a number of strategies for stimulating your mind to solve problems in different ways.

As you’ve probably heard somewhere, many modern scientists believe our brains exhibit two main modes of thought. Commonly these are called “left brain” and “right brain”, but Hunt points out that the strict connection with the brain hemispheres isn’t quite right, so he suggests the terms “L-mode” and “R-mode”, with the L standing for “linear” and the R standing for “rich”. You can think of the two modes as being the two CPUs of a multiprocessing computer system, potentially working in parallel at all times. Your L-mode brain excels at analytic, linear thinking, and is the primary user of methods like Pólya’s. Your R-mode brain is what you typically exercise in artistic or creative endeavors. R-mode, while trickier to interact with due to its nonverbal nature, can also provide intuition, synthesis, and holistic thinking— it probably won’t come up with a mathematical proof, but it can lead you to discover the conceptual leap you need to get past a roadblock in one. But how can we effectively interact with our R-mode, or stimulate its activity, in order to leverage its power? Hunt suggests a variety of basic techniques for getting a dormant R-mode active and more involved.
One simple method is to try to use different senses than usual, in a way that engages your artistic side.  While thinking about a problem with your L-mode, do some minor creative action with your hands that exercises your R-mode, such as making shapes with a paper clip, doodling, or putting together Legos.   In one amusing example, Hunt describes a case where a team designing a complex computer program decided to get up and “role-play” each of the functional units, and soon had a variety of new insights about the system.    

Another method Hunt suggests comes from the domain of computer science, but is likely applicable to many other fields:  “Pair Programming”.   The idea here is that one programmer is actually typing a computer program on the screen, inherently an L-mode activity, while the other is sitting next to him, observing, and making suggestions.   Because the second programmer doesn’t have to worry about the L-mode task of entering the precise sequence of commands, he is free to use his R-mode to take a holistic look, and come up with intuitive suggestions about the overall method. 

A third method that can be surprisingly effective is known as “image streaming”.   After thinking about a problem for a while, try to close your eyes and visualize images related to it for ten minutes or so.   For each image you can think of, first try to imagine it visually, then describe out loud how it appears to all five of your senses.   This one sounds a bit silly at first— and I would suggest you don’t try it in an open cubicle with your co-workers watching— but can be a very powerful way to engage your R-mode.   

A fourth suggestion is called the “morning pages” technique:  when you wake up every morning, immediately write at least three pages on whatever topic comes to mind.   Don’t censor what you write, or try to revise and make it perfect, just let the information flow.   Because it’s the first thing in the morning, you’re getting an unguarded brain dump, while your R-mode dreams and unconscious thoughts are still fresh in your mind.    If you were working on a hard problem the day before, your R-mode may naturally have provided new insights during the night that you now want to capture.    As Hunt summarizes, “You haven’t yet raised all the defenses and adapted to the limited world of reality”.

These ideas are just a small subset of known techniques for leveraging your lesser-used R-mode— if you want to maximize your ability to use your whole mind for problem solving, I would highly recommend that you check out his book, linked in the show notes.    I’ll be interested to hear from any of you who successfully use some of Hunt’s odder-sounding techniques to solve difficult problems.    On the other hand, if you think everything I’ve said today sounds crazy, that’s probably just your L-mode brain over-exercising its linear, logical influence. 

And this has been your math mutation for today.


Monday, August 22, 2016

222: Fractal Expressionism

Audio Link

If you watch enough TV, you probably remember an old sitcom plot where the characters are at a viewing of abstract expressionist art, and somehow a 3-year-old’s paint scribblings get mixed in with the famous works. Most of the characters, clueless about art, pretend to like the ‘bad’ painting as much as the real paintings, trusting that whatever is on display must be officially blessed as good by the important people. However, one wise art aficionado spots the fake, pointing out how it is obviously garbage compared to all the real art in the room. Thus the many pseudo-intellectuals in the audience get affirmation that their professed fandom of “officially” respected art has a valid basis. I had always considered this kind of plot a mere fantasy, until I read about physicist Richard Taylor’s apparent success in showing that Jackson Pollock’s most famous paintings actually involve mathematical objects called fractals, and that this analysis can be used to distinguish Pollock artworks from lesser efforts.

Before we talk about Taylor’s work, let’s review the idea of fractals, which we have discussed in some earlier podcasts. A simple definition of a fractal is a structure with a pattern that exhibits infinite self-similarity. A popular example is the Koch snowflake. You can create this shape by drawing an equilateral triangle, then drawing a smaller equilateral triangle on the middle third of each side, and repeating the process on each outer edge of the resulting figure. You will end up with a kind of snowflake shape, with the fun property that if you zoom in on any local region, it will look like a partial copy of the same snowflake shape. Other fractals may have a random or varying element in the self-similarity, which makes them useful for creating realistic-looking mountain ranges or coastlines. The degree of self-similarity in a fractal is measured by something called the “fractal dimension”.
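To make the snowflake's strangeness concrete, here is a short Python sketch (my own illustration, not from the podcast) tallying the construction: each refinement step replaces every segment with four segments one-third as long, so the perimeter grows without bound while the shape stays bounded, and the self-similarity dimension comes out to log 4 / log 3.

```python
import math

def koch_stats(n):
    """Segment count and total perimeter of a Koch snowflake after n
    refinement steps, starting from a triangle with unit-length sides."""
    segments = 3 * 4 ** n        # each step turns 1 segment into 4
    seg_len = (1 / 3) ** n       # each new segment is 1/3 as long
    return segments, segments * seg_len

for n in range(4):
    segs, perim = koch_stats(n)
    print(n, segs, round(perim, 3))   # perimeters: 3.0, 4.0, 5.333, 7.111

# Self-similarity dimension: 4 copies at scale 1/3,
# so d = log 4 / log 3 -- between a curve (1) and a surface (2).
print(round(math.log(4) / math.log(3), 2))   # 1.26
```

The perimeter grows by a factor of 4/3 each step, yet the whole figure always fits inside a fixed circle, which is part of what makes fractal dimension a more useful measure than length.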

Taylor’s insight was that Pollock’s paintings might actually be representing fractal patterns. This idea has some intuitive appeal: perhaps the abstract expressionists were a kind of savant, creating deep mathematical structures that most people could understand on an intuitive level but not verbalize. Taylor created a computer program that would overlay a grid on a painting and look for repeating patterns, reporting the fractal dimension resulting from the analysis. After examining a large sample of these, his research team announced that Pollock’s paintings really are fractals, tending to almost always fall within a particular range of fractal dimensions. They also claimed that these patterns could be used with high accuracy to distinguish Pollock paintings from forgeries. Taylor even claimed at one point that, due to the various changes in technique over Pollock’s career, he could date any Pollock painting to within a year based on its fractal dimension. Abstract art critics and fans all over the world felt vindicated, and Taylor became the toast of the artistic community.
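Taylor's actual software isn't available to inspect, but the standard way to estimate a fractal dimension from an image, matching his description of overlaying grids, is box counting. Here's a minimal sketch in Python using numpy (the grid sizes and the test images are my own arbitrary choices, not Taylor's):

```python
import numpy as np

def box_counting_dimension(image, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary image:
    count how many s-by-s grid boxes contain any "painted" pixels,
    then fit log(count) against log(1/s); the slope is the dimension."""
    counts = []
    for s in sizes:
        h, w = image.shape
        trimmed = image[:h - h % s, :w - w % s]   # make the grid divide evenly
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(boxes.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is 2-dimensional, a straight line 1-dimensional.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
print(round(box_counting_dimension(square), 2))   # 2.0
print(round(box_counting_dimension(line), 2))     # 1.0
```

Note how many free choices even this toy version involves — which box sizes to use, how to binarize the paint, how to fit the line — which is relevant to the skepticism discussed below.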

However, the story doesn’t end there. When first reading about Taylor’s work, something seemed a little fishy to me— it reminded me a bit of the overblown fandom of the Golden Ratio, which we discussed back in episode 185. You may recall that in that episode, I pointed out that any ratio in nature roughly close to 3:5 could be interpreted as an example of the Golden Ratio, by carefully choosing your points of measurement and level of accuracy. Similarly, it seems to me that the level of fine-tuning required for Taylor’s type of computer analysis would make it inherently suspect. Taylor isn’t claiming an indisputable point-by-point self-similarity, as in an image of the Koch snowflake. He must be inferring some kind of approximate self-similarity, with a level of approximation and tuning that is built into the computer programs he uses. Furthermore, there is no way his experimental process can be truly double-blind: all Pollock paintings are matters of public record, and I suspect everyone involved in his study was a Pollock fan to some degree to begin with. I’m sure most Pollock paintings exhibit some kinds of patterns, and with the right definitions and approximations, just about any kind of pattern can be loosely interpreted as a fractal. With all this knowledge available as they were creating their program, I’m sure Taylor’s team was able to generate something finely tuned to Pollock’s style, even if they were not conscious of this built-in bias.

Of course, many people in the math and physics world also found Taylor’s analysis suspicious. A team of skeptics, led by the well-known physicist Lawrence Krauss, developed their own fractal-detecting program and tried to repeat Taylor’s analysis. They found that the fractal dimensions they measured were useless for identifying Pollock paintings. Several actual paintings were missed, while lame sketches of random patterns by lab staff, such as a series of stars on a sheet of paper that could have been drawn by a child, were given Pollock-like measurements. When issuing their report, this team claimed to have conclusively proven that fractal analysis is completely useless for distinguishing Pollocks from forgeries. In some sense, this may not be such a bad result for art fans— as Krauss’ collaborator Katherine Jones-Smith stated, "I think it is more appealing that Pollock's work cannot be reduced to a set of numbers with a certain mean and certain standard deviation.”

So, are Pollock paintings actually describable as fractals, or not?   The jury still seems to be out.   Krauss’s team claimed that they had definitively disproven this idea.   However, Taylor responded that this was merely an issue of them having used a much less sophisticated computer program.   Active research is still continuing in this area, as shown by a 2015 paper that combines Taylor’s method with several other mathematical techniques, and claims a 93% accuracy in identifying Pollocks.   My inclination is that we should still look at this entire area with a healthy skepticism, due to the inability to produce a truly double-blind study when famous artworks are involved.    But there are likely some underlying patterns in abstract expressionist art, at least in the better paintings, which may be a key to why some people find them enjoyable.   So lie back, turn on your John Cage music, and start staring at those Pollocks.

And this has been your math mutation for today.


Thursday, June 30, 2016

221: The Trouble With Glass Beads

Audio Link

Hi everyone— before we start, just wanted to remind you again that the Math Mutation book is out!   To order, you can follow the link at or just search for it on Amazon.    Please consider posting a review on Amazon if you like it; I’m still waiting for somebody to post one.  And if you ever pass through the Portland, Oregon metro area, I’ll be happy to autograph your copy!   Now on to today’s topic.

I just finished reading the controversial 2006 bestseller “The Trouble With Physics”, by physicist Lee Smolin. Smolin is an accomplished physicist who has issued a blistering critique of his own field, claiming that ever since the Standard Model of particle physics was developed in the 1970s, the entire field has simply failed to make significant progress in understanding the fundamental laws of the universe. He largely blames the widespread focus on string theory, the attempt to unify physics that is based on interpreting subatomic particles as tiny multidimensional strings. In a 2015 followup interview linked in the show notes, he described most of the problems he pointed out as still being highly relevant. As I was reading Smolin’s book, I couldn’t help but be reminded of a classic novel I read years ago, Hermann Hesse’s “The Glass Bead Game”.

“The Glass Bead Game” describes a world in which its wisest class of professors inhabit a unique, isolated institution, where they spend all their time studying an intricate game that involves moving glass beads around a board.   They have decided that they do not need to directly study most concrete real-world subjects, because all knowledge in the world can theoretically be captured through a mathematical mapping to patterns of glass beads.     This isn’t quite as absurd an idea as it may have seemed in Hesse’s day:  you may recall several earlier podcasts where I mentioned Conway’s Game of Life.   This game is played on an infinite two-dimensional board full of cells, which can be either on or off— you can imagine this being marked with glass beads— and a simple set of rules determines which adjacent cells will be on or off during the next time unit.   Amazingly, this simple game has been proven to be computationally universal, which means that any modern computer can be simulated by some pattern of beads.    Of course, it would be much more efficient to build the computer directly, rather than dealing with the bead-based simulation, which is where Hesse’s concept breaks down.
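Incidentally, for readers who haven't seen it, the rules of Conway's Game of Life fit in just a few lines of code. Here's a minimal sparse-set sketch in Python (my own illustration, not anything from Hesse's novel):

```python
from collections import Counter

def life_step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells ("beads")."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3 neighbors.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 0), (1, 0), (2, 0)}
print(sorted(life_step(blinker)))                  # [(1, -1), (1, 0), (1, 1)]
print(life_step(life_step(blinker)) == blinker)    # True
```

That such a tiny rule set can simulate an arbitrary computer is exactly why Hesse's premise is less absurd than it first appears.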

Anyway, Hesse won the Nobel Prize in Literature a few years after publishing the novel, and it can be interpreted on many levels; I’m sure his fans will beat me up for my grossly oversimplified summary. But one basic interpretation of the Game is as a critique of the academia of Hesse’s time: focusing on their own abstract research, Europe’s class of professors were far too detached from the real world, to the point of losing all relevance to the day-to-day lives of the people around them. Even though all of life could be represented by the glass beads, the converse was not true: many manipulations of the beads would tell you absolutely nothing useful or interesting about life. Yet the professors spent all their time studying the beads while ignoring the world around them. You would think that physics professors would be the least vulnerable to this kind of critique, though, since their field forces them to constantly keep in touch with physical reality.

But this is where Smolin’s thesis comes in. String theory originally became popular when it was seen as a promising way to potentially unite relativity and quantum mechanics, but that first burst of interest was decades ago. Since then, thousands of physicists have graduated with Ph.D.s and spent their careers working out various details and implications of string theory, but the theory is far from complete. In fact, the mathematics is so complex that a complete theory seems to be beyond reach, and due to various parameters that cannot be independently derived, it’s probably more correct to describe string theory as a huge family of theories rather than as one unifying theory. In some of our other podcasts we’ve touched on bizarre but fun ideas implied by these theories, such as the universe having 11 dimensions, and our existence actually being on the surface of a multidimensional membrane.

Yet despite all this activity, Smolin points out that string theory has never come close to experimental verification. All the academic work in this area has essentially amounted to glass bead games, working out complex mathematical relationships with no clear, experimentally testable connection to reality. This is a major contrast with past revolutions in physics: even though some results of early 20th-century physics seemed bizarre, they ultimately were subject to experimental verification. Einstein’s theory of relativity, for example, predicted astronomical phenomena such as the bending of light and modifications to the classic Doppler effect. As a result, observations such as the 1938 Ives-Stilwell experiment were able to provide solid confirming evidence, and relativistic effects are now in daily use by our GPS systems. Nobody has yet come up with an observable effect that we can use to test string theory, at least with any technology that currently seems within humanity’s reach.

So, according to Smolin, what has caused academic physics to waste such a massive amount of time and resources on a field that seems mainly to be a mathematical game? As Smolin summarized in a 2015 followup interview, “The problems are rooted in the way the career and funding structures of the academy reward me-too science, lack of courage, entrenchment of failed research programs, legacy building, empire building, narrowness, defensive strategies and groupthink.” These issues are largely a consequence of the modern sociology of academia. Starting in the mid 20th century, the idea of being a professor transitioned from something rare and unique to a standardized and regimented profession. The influence of the senior professors grew, and there was increasing pressure on new entrants to conform to their existing theories, while at the same time needing to “publish or perish”— that is, to quickly publish multiple articles to prove their worth.

Related to this thesis, Smolin divides academics into two classes:  “technicians” and “seers”.  The current type of organization favors technicians, smart but compliant Ph.D.s who can cleverly extend existing theories, over seers, brilliant minds with truly original concepts.   While technicians are necessary to complete the work of science, it is seers who lead the way and discover or invent new paradigms.   The classic prototype of a seer is Albert Einstein, who initially could not get an academic job, but explored revolutionary ideas while working as a patent clerk. Smolin points out that to create revolutionary new theories, seers often have to spend several years working out the basic concepts before they can generate publishable results, and this tends to prevent their academic success.   He mentions numerous modern seers he has identified, most of whom have had to develop their ideas apart from standard academic environments.    Smolin argues that to restore progress in physics, academia needs to find a way to encourage and reward seers as well as technicians.

I don’t know enough about modern physics to accurately critique Smolin’s comments on string theory, but having spent some time in grad school in another field, I can easily believe his points about technicians and seers.   I think the leaders of every academic field need to look closely at Smolin’s critique and their tendency towards conformity and subservience.   They need to make sure they are providing a way for truly original thinkers to make fundamental changes to their fields when needed, not simply retreating from the real world into a series of comfortable and well-defined glass bead games.

And this has been your math mutation for today.


Saturday, May 28, 2016

220: Cognitive BSes

Audio Link

Hi everyone— before we start, just wanted to remind you, the Math Mutation book is out!   To order, you can follow the link at or just search for it on Amazon.    And if you ever pass through the Portland, Oregon metro area, I’ll be happy to autograph your copy.   If you like it, posting a positive review on Amazon would be really helpful.  Now on to today’s topic.

As you may recall, one of the topics we covered in some previous podcasts, and in the Math Mutation book, is the idea of “Cognitive Biases”.   These are well-known ways in which the human brain has a natural instinct to think in ways that violate basic laws of logic and mathematics.   One classic example is the Anchoring bias:  if asked a question that has a quantitative answer, you will tend to give an estimate close to numbers you recently heard.   For example, suppose I arrange separate discussions with two people to estimate how many listeners Math Mutation has.   With the first one, I start by asking “Does Math Mutation have more than 100 listeners, or fewer than 100?”.   But with the second one, I open with “Does Math Mutation have more than 1 million listeners, or fewer than 1 million?”   If I then ask both of them to estimate the total number of listeners, the first will probably come up with a much smaller estimate than the second, even though neither has any objective information to justify a particular number.   

After reading the chapter in my book, my old Princeton classmate Tim Chow pointed out that calling this a “Cognitive Bias” might not be justified.    Sure, the listener technically has no information to support the larger number in the second case— but in cases where we are talking to another human being, we trust them to provide relevant information.   This includes both direct statements of facts, and implications that might not be directly stated.   If I ask you whether Math Mutation has more or fewer than 100 listeners, I am implicitly communicating the information that the 100 number is pretty close, even though I have not rigorously declared this to be a relevant fact.   So if this number isn’t close, I have essentially misled you with false information— the fact that you trusted me and used the wrong number is my fault, not some flaw in your mental logic.    Thus, this “Cognitive Bias” is really a social manipulation.

Now, if you’re familiar with the literature on this topic, you might point out an interesting experiment that seems to refute this. In this experiment, subjects saw a roulette wheel spin, then were asked what percentage of United Nations member countries are in Africa. Even though there was no logical reason for them to suspect the roulette wheel had advanced knowledge of geopolitics, their answers were still biased towards the results they saw on the wheel. Many similar experiments have been carried out. Doesn’t this provide irrefutable proof that this really is a cognitive bias?

Not so fast. This is a very artificial situation. Maybe when asked to guess a number about which they have absolutely no idea, people just grab any arbitrary number they can think of, which will tend to be one they saw recently. They aren’t following some flawed cognitive process; they just don’t have any reason to pick any particular number. Again, this doesn’t really indicate a mathematical flaw in their reasoning— they don’t think the number they picked has a particular logical justification. Not knowing an answer, they just defaulted to whatever was at the top of their heads.

Most of the other well-established Cognitive Biases are open to similar criticisms. Another example is the Conjunction Fallacy: suppose I tell you that Joe is a Princeton mathematics graduate and chess champion, and then ask you to choose the more likely of two statements. 1. “Joe is now a physics professor.” 2. “Joe is now a physics professor and head of the local Math Mutation fan club.” You will likely choose option #2, since it seems like this kind of guy should be a Math Mutation fan. But on reflection, option 2 can be no more likely than option 1, as it takes the same basic fact and adds an additional, more restrictive, condition. But again, there is information being communicated between the lines: if I give you those two choices, you probably interpret #1 as implicitly stating that Joe is NOT the head of the local Math Mutation fan club. I didn’t say that, but the additional choice in the second option made this a very reasonable inference. Once again, it can be seen as more of a social manipulation, where I leveraged typical communication conventions to imply something without actually stating it, and the implication is not strictly justified by mathematical logic.
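The arithmetic behind that point is worth making explicit: adding a condition multiplies in a probability that is at most 1, so the conjunction can never beat either of its conjuncts. A toy Python calculation, with made-up numbers purely for illustration:

```python
# Hypothetical probabilities, invented for illustration only.
p_professor = 0.3          # P(Joe is a physics professor)
p_club_head = 0.1          # P(Joe heads the fan club), assumed independent

p_option_1 = p_professor                 # "professor"
p_option_2 = p_professor * p_club_head   # "professor AND club head"

print(p_option_1, round(p_option_2, 2))  # 0.3 0.03
# The conjunction can never exceed either of its conjuncts:
assert p_option_2 <= p_option_1
```

Even if the two facts were correlated rather than independent, the conclusion holds: P(A and B) is always at most P(A).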

We should point out, though, that even if the so-called “Cognitive Biases” are not truly flaws in the logic of the human brain, they are still important psychological effects to be aware of for many reasons. For example, let’s take a look at some practical applications of the Anchoring bias. It’s well known that when negotiating prices in business, your opening offer can set an anchor that affects the entire discussion. Parties to a business negotiation usually have some level of trust in each other, and taking advantage of this to establish a good anchor is a smart, though slightly manipulative, technique. On another note, suppose you’re trying to get accurate estimates in a survey, or questioning experts on a difficult topic. You need to be careful not to include some kind of number in the question that might unintentionally influence the result. So knowing about Anchoring is still very useful, whether you call it a true cognitive bias or a simple persuasion technique. In general, I still believe the Cognitive Biases are worth studying and raising awareness of, though maybe more as social or linguistic phenomena than as true flaws in the human mind.

And this has been your math mutation for today.


Sunday, May 1, 2016

219: A Portal to the Past

Audio Link

I was sad to hear of the recent passing of Umberto Eco, one of my favorite contemporary novelists.   Due to his long career as a professor of “semiotics”, his novels drew on a wide variety of historical, mathematical, and scientific ideas from throughout the past millennium.    One that I read relatively recently was “The Island of the Day Before”, his 1994 story of a soldier marooned during the 1600s, the historical period during which the governments of Europe were desperately searching for a reliable way to measure longitude.   You may recall our discussion of this multi-century quest back in episode 108.    The key plot twist is that the main character, an Italian soldier named Roberto Della Griva, gets marooned on a ship that is trapped within sight of the International Dateline, and an island which sits beyond it.   

Della Griva attaches a mystical significance to this line, as is implied by the book’s title. He somehow convinces himself that if he could just cross it, it would mean he was traveling back in time. If he managed to swim across the line tomorrow, would that mean that today he could see himself swimming in the distance? Here is one amusing passage from the book:

“Indeed, as he sees it distant not only in space but also (backwards) in time, from this moment on, whenever he mentions that distance, Roberto seems to confuse space and time, and he writes, "The bay, alas, is too yesterday," and ,"How much sea separates me from the day barely ended," and even, "Threatening rainclouds are coming from the Island, whereas today it is already clear . . . .”

While these speculations are rather absurd, this discussion got me curious about the actual history of the International Dateline. It has been recognized since ancient times that the time of day is slightly different as you travel to the east or west, but the idea that you might travel all the way around the world and gain or lose a day only became really conceivable in relatively recent eras of human history. The real history of the Dateline probably starts with Magellan’s circumnavigation of the globe. When the handful of survivors of that three-year voyage arrived home in 1522, they were surprised to discover that despite careful logging of their travels, they had lost a day. Here is a description from one of them:

On Wednesday, the ninth of July [1522], we arrived at one of these islands named Santiago, where we immediately sent the boat ashore to obtain provisions. [...] And we charged our men in the boat that, when they were ashore, they should ask what day it was. They were answered that to the Portuguese it was Thursday, at which they were much amazed, for to us it was Wednesday, and we knew not how we had fallen into error. For every day I, being always in health, had written down each day without any intermission. But, as we were told since, there had been no mistake, for we had always made our voyage westward and had returned to the same place of departure as the sun, wherefore the long voyage had brought the gain of twenty-four hours, as is clearly seen.

As you can see, even though they were caught by surprise at first, the sailors quickly realized the source of the error. Because they were traveling in the same direction as the sun, they had experienced one less day, but each day they had experienced was slightly longer. So there was no actual time travel, just an accounting error. A similar phenomenon was observed later by other circumnavigators, such as the English explorer Francis Drake. This incident also inspired the famous surprise ending of Jules Verne’s “Around the World in 80 Days”. Even though the reasons for the gain or loss of a day upon circumnavigation were well known, I suppose it is vaguely possible that an uneducated sailor like Eco’s character could have attached a more mystical significance to the effect.
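We can check the sailors' bookkeeping with a quick calculation. Assuming, hypothetically, a westward circumnavigation that takes 1082 days as measured at the home port (roughly the length of Magellan's voyage), the crew sees one fewer sunrise, and each shipboard solar day is stretched by just enough to account for the missing day:

```python
home_days = 1082                    # days elapsed back at the home port
ship_days = home_days - 1           # sunrises the westbound crew actually saw

# Each shipboard solar day is slightly longer than 24 hours...
local_day_hours = 24 * home_days / ship_days
print(round((local_day_hours - 24) * 3600, 1))   # roughly 80 extra seconds/day

# ...and the surplus adds up to exactly the "missing" 24 hours.
print(round((local_day_hours - 24) * ship_days, 6))   # 24.0
```

About 80 unnoticed extra seconds per day for three years: no wonder the crew needed the Portuguese to point out the discrepancy.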

But even these odd experiences of sailors were not really common enough to motivate standardization of time zones and an international dateline, until the era of trains came along in the 19th century. Suddenly it was possible to move quickly and continuously between areas with different local times. Finally, in October 1884, representatives from 25 nations met at an international conference in Washington, DC, and came up with the system of time zones we know today, based on longitudinal lines starting from the Greenwich meridian, with times derived by adding hours to or subtracting them from Greenwich Mean Time, or GMT. To increase the chances of universal adoption, it was agreed that local islands and nations could shift the zone boundaries for convenience, which is why we see those squiggly time zone boundaries today instead of simple longitudinal lines. Amusingly, the French seemed to be insulted by the idea of a location in England defining the time zones: until 1911, instead of referring to Greenwich Mean Time, they referred to Paris Mean Time minus nine minutes and 21 seconds, which was equivalent.
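In its idealized form, the 1884 scheme is simple arithmetic: 360 degrees of longitude divided into 24 one-hour zones of 15 degrees each, centered on the Greenwich meridian. A tiny Python sketch (the city longitudes below are approximate, and as the episode notes, real boundaries squiggle for political convenience):

```python
def nominal_utc_offset(longitude_deg):
    """Idealized time-zone offset in hours: each zone spans 15 degrees
    of longitude, centered on the Greenwich meridian (offset 0).
    Real zone boundaries deviate from this for local convenience."""
    return round(longitude_deg / 15)

print(nominal_utc_offset(0))       # Greenwich -> 0
print(nominal_utc_offset(-122.7))  # Portland, Oregon -> -8
print(nominal_utc_offset(151.2))   # Sydney -> 10 (standard time)
```

The dateline itself is just the place where the +12 and -12 zones meet, 180 degrees from Greenwich.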

Unfortunately, since these meridians and zones are all artificial labels, I’m afraid Della Griva’s dream of somehow using them for time travel would never quite pan out.   Some modern writers ponder this idea as well, but I think they are mostly tongue-in-cheek.   For example, travel author Bill Bryson has written:  

“I left Los Angeles on January 3 and arrived in Sydney fourteen hours later on January 5. For me there was no January 4. None at all. Where it went exactly I couldn’t tell you. All I know is that for one twenty-four-hour period in the history of earth, it appears I had no being.   I find it a little uncanny, to say the least. I mean to say, if you were browsing through your ticket folder and you saw a notice that said, ‘Passengers are advised that on some crossings twenty-four-hour loss of existence may occur’…, you would probably get up and make inquiries, grab a sleeve, and say, ‘Excuse me.’”   

But somehow I don’t think Bryson really believes he lost a day of his life due to the shifting of a few time labels.    I would be more concerned about the portion of my existence that is wasted while crammed into a tiny airline seat for half a day.

In a more serious vein, there was a solar eclipse last month that started on March 9 and ended on March 8.  It was actually traveling forward in time the whole way; it achieved its peculiar timeline by crossing the international dateline as it traveled.  Also, you shouldn’t completely lose hope: one form of time travel across the dateline is possible.  Remember that under the theory of relativity, if you travel by airplane at high speed, you really do lose a tiny fraction of a second, as time slows down for you in relation to your friends on the ground.  However, that works across any line, not just an artificial one.
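For the curious, the size of that relativistic effect can be estimated with the standard special-relativistic time-dilation factor.  This is a rough sketch: the cruise speed is an assumed typical value, the flight time matches Bryson’s 14-hour hop, and the gravitational (general-relativistic) effect of altitude, which actually runs the other way, is deliberately ignored:

```python
import math

# Rough estimate of special-relativistic time lost on a long flight.
# Assumptions: 250 m/s cruise speed, 14-hour flight; gravitational
# time dilation at altitude is ignored in this sketch.

c = 299_792_458.0            # speed of light, m/s
v = 250.0                    # assumed typical cruise speed, m/s
flight_seconds = 14 * 3600   # a 14-hour LA-to-Sydney flight

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
seconds_lost = flight_seconds * (1 - 1 / gamma)

print(f"{seconds_lost * 1e9:.1f} nanoseconds")   # on the order of 17 ns
```

So the "time travel" amounts to a few tens of nanoseconds, which puts the lost-day worries in perspective.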

And this has been your math mutation for today.


Saturday, April 30, 2016

The Book Is Out!

Hi everyone-- the Math Mutation book, "Math Mutation Classics:  Exploring Interesting, Fun, and Weird Corners of Mathematics", is now available.  You can order it from Amazon at this link.    A perfect Mother's or Father's Day gift for a geeky parent!

By the way, if you like it, don't forget to post a review on Amazon.   And of course, if you're in the Portland, Oregon metro area, I'll be happy to autograph your copy sometime.


Sunday, March 27, 2016

218: Itching for the I Ching

Audio Link

Recently I was reading a biography of John Cage, the quirky avant-garde 20th-century classical composer who I have mentioned in a few previous podcasts.    One of the most fascinating aspects of Cage’s composing was his attempt to introduce random elements into his music, starting in the 1950s, in order to free himself from preconceived patterns.   He experimented with numerous sources of randomness, including die rolls, ambient noise from the environment, and even imperfections in the paper he was writing on.   But one method that absorbed his interest for a long time was the ancient Chinese book of divination known as the I Ching.    Once Cage discovered the I Ching, it became his main guide in the selection of random numbers.   Some of his compositions required thousands of random numbers to be completed.   As a result, many of his visitors noted that anyone who stepped through Cage’s door was soon drafted into tossing coins for a few hours to generate I Ching trigrams for use in Cage’s music.

The I Ching, or Book of Changes, is said to be one of the world’s oldest books, written around 3000 years ago.   It is based around interpreting the significance of various patterns of whole and broken lines, traditionally determined by tossing yarrow sticks, or by an equivalent method based on tossing coins.   The most fundamental set of patterns generated by the I Ching are the eight trigrams, patterns of three lines, which may each be solid or broken.   Each of the eight trigrams has several possible meanings, such as the mind, the spirit, emotions, or bodily sensations.   Pairs of these trigrams can be combined into one of 64 possible hexagrams, for an even richer set of possible meanings to interpret.    Being a listener of this podcast, you have probably realized by now that the combinations of three or six lines, each of which can be solid or broken, are precisely equivalent to three- or six-digit binary numbers, if you interpret the solid lines as 1s and the broken ones as 0s.   So essentially, the I Ching is a divination system based on random numbers between 0 and 63, expressed in binary, or base-2, notation.     Now I’m sure Chinese scholars will say I’m shortchanging the deep philosophy of the system, since these random divinations are accompanied by thousands of pages of interpretive text.   But it’s undeniable that these numbers are the basis.
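The binary reading described above can be sketched in a few lines of code.  One caveat: line-ordering conventions vary among I Ching traditions, so reading the lines most-significant-first is just an assumption here:

```python
# Minimal sketch of the binary interpretation: solid line = 1,
# broken line = 0, read as a base-2 numeral (ordering is an assumption).

def lines_to_number(lines):
    """Interpret a sequence of 'solid'/'broken' lines as a binary number."""
    value = 0
    for line in lines:
        value = value * 2 + (1 if line == "solid" else 0)
    return value

# The trigram of three solid lines is binary 111 = 7...
print(lines_to_number(("solid", "solid", "solid")))   # 7
# ...and a hexagram of six broken lines is binary 000000 = 0.
print(lines_to_number(("broken",) * 6))               # 0
# Any hexagram maps to a number from 0 to 63:
print(lines_to_number(("solid", "broken", "solid",
                       "broken", "solid", "broken")))  # 101010 in base 2 = 42
```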

Because of this numerical aspect, it’s actually not uncommon among historians of science to credit the ancient Chinese with first coming up with the idea of the binary number system, which is critical to modern computers.   Personally I’m a bit skeptical of this aspect of I Ching studies:  while the ancient book discussed many ways to combine and interpret the trigrams and hexagrams, it wasn’t using them as the basis for a numerical system or for calculations of mathematical significance.   On the other hand, the legendary Gottfried Leibniz, co-inventor of calculus and early designer of ideas for calculating machines, did credit the I Ching with inspiring the idea of binary arithmetic in some 17th-century writings.    I think this may have been largely because there were no other precedents for the idea in Leibniz’s time, though.   Most likely, he was astonished to see some basic ideas of his new base-2 arithmetic system in this ancient text, though he probably would still have developed the binary system had he been unaware of these writings.

As I read more about the I Ching online though, I was surprised to see that its description as a system of binary numbers is actually a bit of an oversimplification.   The reason is that the I Ching describes a complex procedure for generating the lines,  not the simple 0/1 coin toss you would have guessed.   When using the coin method to generate a solid or broken line, you are to toss three coins, with one side of each coin considered the “yin” side and the other the “yang” side.   Each yin toss has a value of 2, while each yang toss has a value of 3.   You then add the values together, to get a total between 6 and 9.   A 6 or 8 is a broken line, while a 7 or 9 is a solid line.    But there is more to it:  the less probable 6 or 9 values indicate that their line is “moving”, while the 7 or 8 lines are “stable”.   While the symbolic trigrams or hexagrams are still drawn showing only their solidness or brokenness, you need to note which lines are moving and which are stable, as this can make a major difference in the results of your divination.    Thus, one might say that the I Ching is really a base-4 divination system rather than binary.   In some of his writings, John Cage actually claimed to be using these stable and moving aspects to guide his randomly generated music.
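The three-coin procedure is simple enough to enumerate exactly.  Here is a minimal sketch of the scheme just described, counting all eight equally likely coin outcomes:

```python
from itertools import product
from fractions import Fraction

# Sketch of the three-coin method: each coin counts 2 (yin) or 3 (yang),
# and the total of 6-9 picks one of four line types.

def line_type(total):
    solid = total in (7, 9)      # 7 or 9 -> solid, 6 or 8 -> broken
    moving = total in (6, 9)     # the rarer 6 and 9 are "moving" lines
    return ("solid" if solid else "broken") + (" moving" if moving else " stable")

# Enumerate all 8 equally likely three-coin outcomes exactly.
counts = {}
for coins in product((2, 3), repeat=3):
    t = line_type(sum(coins))
    counts[t] = counts.get(t, 0) + 1

for t, n in sorted(counts.items()):
    print(t, Fraction(n, 8))
# broken moving 1/8, broken stable 3/8, solid moving 1/8, solid stable 3/8
```

Note that the coin method is perfectly symmetric: solid and broken lines are equally likely, and so are the two moving varieties, at 1/8 each.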

But on top of the base-4 complication, there is yet one more mathematical wrinkle.   While the totally random methods such as tossing sticks and coins are the most commonly used, one online scholar notes that the I Ching describes another, more complex, method for generating the next 6/7/8/9 line number based on the current one, using a series of mathematical calculations.    These calculations are actually pseudo-random, similar to the linear congruential generator algorithms used by modern computers.  This means that the results are deterministic, though hard enough to predict that they appear random.   Furthermore, according to this online analysis, the official I Ching algorithm is somewhat biased:  while solid and broken lines are equally likely, the 9 is much more probable than the 6, meaning that solid lines are significantly more likely to be “moving” than broken ones.   I’m sure New Age mystics would say there is some deep meaning in this, and that Yang is more mobile than Yin, or something like that.   Being a bit more of a cynic, I would lean towards the interpretation that the ancient Chinese were just not mathematically advanced enough to notice the problem.
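The exact algorithm from that online analysis isn’t reproduced here, but the bias it describes matches the widely published probabilities for the traditional yarrow-stalk procedure, which this sketch assumes for illustration:

```python
from fractions import Fraction

# Sketch of the bias using the commonly quoted yarrow-stalk line
# probabilities (an assumed standard approximation, not the exact
# algorithm from the analysis cited above).

yarrow = {6: Fraction(1, 16),   # moving broken
          7: Fraction(5, 16),   # stable solid
          8: Fraction(7, 16),   # stable broken
          9: Fraction(3, 16)}   # moving solid

solid = yarrow[7] + yarrow[9]
broken = yarrow[6] + yarrow[8]
print(solid, broken)            # 1/2 1/2 -- solid vs broken stays fair
print(yarrow[9] / yarrow[6])    # 3 -- a moving solid line is 3x as likely
```

So while the overall solid/broken split remains even, the "moving" property is heavily skewed toward Yang, exactly the asymmetry the analysis complains about.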

Anyway, I’m not sure how all this was supposed to lead to John Cage generating better music:  while I really enjoy reading about his bizarre random methods, trying to listen to the resulting music for more than a few seconds at a time is not a very pleasant experience.   It’s also amusing that Cage put so much energy into generating numbers using I Ching methods, when he could have bought books of pre-generated random numbers, which were available for engineering and cryptographic applications for decades before the advent of modern computers, and saved a lot of time.   But I wonder if Cage’s avant-garde admirers would have claimed to like his music as much, if he told them the source was the Rand Corporation rather than ancient Chinese mysticism.

And this has been your math mutation for today.