Sunday, June 25, 2017

231: A New Dimension in Housing

You may recall several earlier podcasts in which we discussed the works of Buckminster Fuller, the visionary futurist and architect of the mid-20th century.   He is best known for popularizing the geodesic dome, and has been memorialized in the name of the chemical “fullerene”, an arrangement of 60 carbon atoms with a similar structure.    He also was a popular, if somewhat eccentric, speaker on the idea of “Spaceship Earth”, offering various feel-good prescriptions for enabling the future survival of the human race.   A critical component of his philosophy was a new “synergetic” geometry based on 60-degree rather than 90-degree angles, which he believed would surely lead to new ways of looking at the world.    But recently I’ve been reading a new biography of Fuller’s early years, “Becoming Bucky Fuller” by Loretta Lorance, and was surprised to learn that despite his own later descriptions of his early life, Fuller did not originally set out to be a futurist or visionary, or to save the world through a philosophical revolution.   Originally, he was simply trying to start a successful company, hoping to follow in the footsteps of industrialists like Henry Ford.

Fuller’s first significant job in the 1920s, after leaving the army, was with his father-in-law marketing the “Stockade System”, a clever design of reusable wood-composite blocks for construction purposes.   The idea was to standardize construction on a precision-manufactured type of block, with holes in each block that could be lined up to pour in large quantities of concrete to add structural stability.    This could improve construction efficiency and reduce the cost of buildings.   This company eventually failed, but it gave Fuller a more ambitious idea.   Why not try to mass-produce full houses, rather than the component bricks?    He compared the concepts to Ford’s mass production of cars, in contrast with the cumbersome process of getting a house built.  “What would happen if a person, seeking to purchase an automobile, had to hire a designer, then send the plans out for bid, then show them to the bank, and then have them approved by the town council, all before work on the vehicle could begin?”   Cars would be far more expensive, and would only be affordable to the wealthiest citizens.   By mass-producing houses like cars, private homes would be within reach at much lower income levels.

Fuller attempted several versions of the design of this house, always thinking about the practicalities of mass production.  However, his inclination to suspend an overall circular or hexagonal design from a central, cylindrical metal mast was very unusual for the time, and probably appeared to most people like something out of science fiction.   He also concentrated as much on the philosophy of his new houses as he did on the actual design, marketing it in a pamphlet called “4D Timelock”, implicitly linking his project with new scientific developments regarding the fourth dimension.   There were several reasons why Fuller considered his new designs to be four-dimensional.  First, there was the push for temporal efficiency of construction, to a greater degree than in past architecture, incorporating the dimension of Time.   Then, there was the supposed built-in longevity of his designs, due to the use of superior materials and techniques, again incorporating the idea of Time from the beginning.   Finally, he claimed that using advanced geometric concepts, like radiating spheres and trigonometry, was a key component that integrated all dimensions.   I find that last claim a bit odd, since those concepts are clearly part of three-dimensional geometry, but I was unable to locate an online copy of Fuller’s original pamphlet to check the claim in more detail.

Fuller’s attempts to get investors to actually fund this new concept were, unfortunately, not very successful.   He initially made the mistake of trying to unveil it at a national architects’ convention— one dominated by a fear of mass-production and a movement among architects to make sure that every design remained custom and unique.    He then sent out many copies of his “4D Timelock” to potential investors, but while he received some fascinated replies, he got very little money.   Thinking the “4D” concept might be scaring some people off, he changed the marketing name to “Dymaxion”, combining “dynamism”, “maximum”, and “ions”.    His first big break came when he got permission to display a model at the Marshall Field department store in Chicago, where the public was intrigued by the bizarre design.

Word-of-mouth led to other opportunities for him to display and talk about his ideas.  He exhibited and spoke about the Dymaxion house throughout the 1930s, as well as working on other related projects.   Along the way, he had to start describing his ideas as potential houses of the future— because despite his popularity, he had failed to attract enough investment to mass-manufacture the actual house anytime soon.    But the model’s bizarre appearance, and the rhetoric that connected it with the fourth dimension, dovetailed nicely with this conception.   Connected with this futurism was the appealing idea that these new houses could enable social progress:  by making housing less expensive and delivering it efficiently, a huge proportion of humanity could be lifted out of poverty and provided with practical homes of their own.   As Lorance puts it, “As time progressed the issue changed from the specific house to the possibilities the house represented”.

In other words, Fuller’s tours promoting the Dymaxion House launched his reputation as a futurist and visionary thinker, and established his popularity as a public speaker.   This gave him the freedom and success to later explore many other radical ideas, such as his geodesic domes, which led to well-deserved worldwide fame.   Ironically, we would probably not remember him nearly as well today if his first proposals had actually succeeded in attracting investors, and he had simply become the founder and CEO of a practical manufactured-home provider.

And this has been your math mutation for today.


Saturday, May 27, 2017

230: Just Say The Dog Ate It

Audio Link

In the last few podcasts, you may recall that we’ve been discussing the Collatz Conjecture, a famous unsolved problem that’s very easy to state.   Just take any positive integer, and repeatedly perform the following operation:  if it’s odd, triple it and add 1, but if it’s even, divide it by two.   The conjecture is:  with this process, will you always end up eventually back at 1?   While the conjecture has been tested and is true for a huge range of values, no mathematician has yet been able to prove that it will always be true.   Based on that, I was surprised at a result I got when googling references on this conjecture for my last podcast.   It was an article by a Kentucky professor named Benjamin Braun, in which he talked about using the Collatz Conjecture as a homework problem for an undergraduate math class.

Now, when hearing this, you may be reminded of the strange story of statistician George Dantzig, which we have discussed in a previous podcast.   While a graduate student, Dantzig arrived late to class one day, and copied down some problems he saw on the board, assuming they were the day’s homework.   He thought the homework was harder than usual, but finished it in a couple of days, and handed it in.   Then he discovered that these had actually been famous unsolved problems, and because he thought they were ordinary homework, he had solved them; one of the solutions eventually served as his Ph.D. thesis!   This story became a staple of “power of positive thinking” lectures throughout the 20th century.

So, does Braun hope there is another Dantzig lurking out there in his classes, who could solve the Collatz Conjecture if approaching it with the right attitude?   That would be nice, but it’s a long shot.   Actually, Braun believes that unsolved problems can have significant educational value as homework.   Too many students, even after completing many college math classes, share the misperception that math is all about using known formulas and procedures to get answers, and are never in a position to explore a truly interesting problem.    When challenged with the Collatz Conjecture, students need to experiment with various hypotheses without knowing which ones will be true, and can then proceed to develop and prove some interesting insights.   For example, maybe some students will write a program to graph the number of steps it takes for various numbers to get down to 1.   This might lead them to hypothesize and prove an intermediate theorem, such as a relationship between a number’s base-2 logarithm and the minimum number of Collatz steps to 1.   In the end, they probably won’t solve the famous conjecture, but they will learn a lot along the way.
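As a quick sketch of the kind of experiment Braun describes (my own illustration, not code from his article), a student might start with something like this:

```python
import math

def stopping_time(n):
    """Count the Collatz steps needed to get from n down to 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# One hypothesis a student might notice and then prove: a power of 2
# reaches 1 in exactly log2(n) steps, since every step just halves it,
# and no number can reach 1 in fewer steps than its base-2 logarithm.
for k in range(1, 11):
    assert stopping_time(2 ** k) == k == int(math.log2(2 ** k))
print("hypothesis holds for 2^1 through 2^10")
```

Plotting stopping_time over a range of starting values then reproduces the scattered, hard-to-predict graph that makes the conjecture so stubborn.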

Braun describes three main benefits of this style of assignment:
  1. “Students are forced to depart from the answer-getting mentality of mathematics.”   As we have just discussed, they probably won’t completely solve the problem.
  2. “Students are forced to redefine success in learning as making sense and increasing depth of understanding.”    Since they are free from the pressure to find a solution, they can relax and concentrate on what they are learning.
  3. “Students can work in a context where failure is normal.”    As they examine the problem, students may come up with various hypotheses, and it’s fine if some of them are wrong.  As Braun describes it, they will understand the “pervasive normality of small mistakes in the day-to-day lives of mathematicians and scientists”.

Naturally, you might wonder how students react to being given an unsolvable problem.   Apparently this varies a bit:  Braun mentions that while some students love the exercise, others are inclined to feel frustration and defeat.   But I think there is little doubt that this kind of exercise is a rarity in an undergraduate curriculum.  It’s a great path for students to understand that math isn’t all about replicating known algorithms, formulas, and proofs, and that there is still a huge unknown mathematical universe out there to explore.

And this has been your math mutation for today.


Sunday, March 26, 2017

229: When Numbers Change

Audio Link

In the last episode we discussed the Collatz Conjecture, a simple yet unsolved problem in elementary number theory.   During the podcast, I mentioned that it had been checked for specific values up to 2 to the 60th power, and no counterexample has been found.   Now, for common day-to-day purposes, you might say we can take the conjecture as true, since we’re unlikely to deal in real life with any number that large, unless perhaps we are doing advanced work in physics or chemistry.    Most likely I won’t be filling my car with 2 to the 61st gallons of gasoline and needing to rely on its obscure mathematical properties.   But beyond that, you might wonder if it’s possible at all for the conjecture to be false— after all, why would regular numbers start behaving differently once we get above a certain threshold?  We need to be careful though.   In fact, there are various mathematical conjectures that have been shown to be true up to some immensely large bound, but then suddenly fail.   Today we’ll discuss a few examples.

Actually, if you think about it a bit, it’s not too hard to construct artificial cases of a conjecture that’s true up to some large number, then fails afterwards.   Here’s an easy one:  “All numbers are less than 10 to the 60th power.”   I dare you, try any number less than 10 to the 60th, and you’ll see this theorem seems miraculously true!  But just try a single larger number, and it will fail.     OK, you may consider that one to be cheating, since of course it’s possible to do this by naming a specific limit.   But here’s another artificial case that doesn’t mention a specific limit:   for any number N, that number does not represent the ASCII-encoded text of a Math Mutation podcast.   You may recall that modern computers represent any text document as a long series of numbers, which can be mashed together to represent one large number.   If the shortest episode of this podcast has, say, 500 characters in it, with each character represented by 8 binary digits, then the smallest counterexample is around 2 to the 4000th power.     Try any smaller number, and it will obey the theorem.   We should point out that even if the artificial nature of these two examples makes them unsatisfying, they are still valid as existence proofs, showing that it is possible for a conjecture to hold true for a huge range of numbers and then fail.
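To make the encoding concrete, here is a small sketch of the idea (the helper name and sample strings are my own, purely for illustration):

```python
def text_to_number(text):
    """Mash a document's 8-bit ASCII codes together into one big integer."""
    return int.from_bytes(text.encode("ascii"), byteorder="big")

# 'H' is 72 (0x48) and 'i' is 105 (0x69), so "Hi" becomes 0x4869:
print(text_to_number("Hi"))  # 18537

# A 500-character document occupies 500 * 8 = 4000 bits, so its
# number is on the order of 2 to the 4000th power:
print(text_to_number("x" * 500).bit_length())  # 3999 bits
```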

The world of mathematics, however, provides no shortage of “real” conjectures, some created by brilliant mathematicians who just happened to extrapolate incorrectly on a handful of topics.   Probably the most famous is one by Pierre de Fermat, the creator of Fermat’s Last Theorem.   I’m sure you remember this Theorem, where in the 1600s, Fermat conjectured that there are no whole number solutions to a^n + b^n = c^n for n greater than two.   That one turned out to be true for all possible numbers, as proven by Andrew Wiles in the 1990s.   But Fermat also came up with many other conjectures.   (He actually had a somewhat pretentious habit of writing down his conjectures along with comments that he had a proof, but not writing down the proof; that’s probably a story for another podcast though.)   Anyway, Fermat examined a set of numbers of the form 2^(2^n) + 1, which came to be known as Fermat numbers.   The first five of these are 3, 5, 17, 257, and 65537, which he noticed are all prime.   So he hypothesized that all Fermat Numbers are prime.  It seemed like a pretty good guess, though due to the rapid exponential growth it was hard to check for too many values.   But within 70 years after Fermat’s death, Leonhard Euler found a factorization for the 5th Fermat number, which is up in the 4-billions range, showing that it was not prime.  This was actually a pretty impressive accomplishment in a pre-computer age.
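Euler’s feat is easy to replay today; a few lines of Python (a sketch using the factor 641 that he found) confirm both the pattern and its failure:

```python
def fermat(n):
    """The nth Fermat number, 2^(2^n) + 1."""
    return 2 ** (2 ** n) + 1

print([fermat(n) for n in range(5)])  # [3, 5, 17, 257, 65537] -- all prime

# Euler's factorization of the 5th Fermat number:
f5 = fermat(5)
print(f5)          # 4294967297, up in the 4-billions range
print(f5 % 641)    # 0, so 641 divides it -- not prime after all
print(f5 // 641)   # 6700417, the cofactor
```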

Fermat got his posthumous revenge though.   One of the longest-standing serious conjectures that has ended up being disproved was an analog of Fermat’s Last Theorem that Euler proposed in 1769, Euler’s sum-of-powers conjecture.   This basically set up a family of equations related to Fermat’s famous theorem, which Euler thought would all be unsolvable with whole numbers.  One example is a^5 + b^5 + c^5 + d^5 = e^5.    This actually stood for nearly two centuries, until finally in 1966 a brute-force search with modern computer technology enabled L. J. Lander and T. R. Parkin to find a solution:  27^5 + 84^5 + 110^5 + 133^5 = 144^5.      Those numbers may not seem that large, but remember that when combining five 3-digit numbers, you have a truly immense number of possibilities, around 10 to the 15th power, again virtually unsearchable in a pre-computer era.
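What took Lander and Parkin a serious computer search in 1966 is now a one-line check:

```python
# Verify the 1966 Lander-Parkin counterexample to Euler's conjecture:
lhs = 27**5 + 84**5 + 110**5 + 133**5
rhs = 144**5
print(lhs == rhs, lhs)  # True 61917364224
```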

Anyway, in the show notes you can find links to various other cases where a conjecture was thought to be true, and held true for millions of examples or beyond— but then in the end, was discovered to be incorrect.     We shouldn’t feel bad about these:  that’s how all math and science advances, by trying to interpolate new truths about the universe from what has been discovered so far.  Sometimes we are right, and sometimes even the best of us are wrong.   So even though we’ve tested the Collatz Conjecture for a massive range of possible values, there still could be a hidden counterexample lurking somewhere in the far reaches of exponential values, waiting to catch us by surprise a few centuries down the road.

And this has been your math mutation for today.


Monday, February 20, 2017

228: So Easy It's Hard

Audio Link

Let’s try an experiment.  Think of a positive whole number.   Any number will do.   Now, follow this simple rule:  if the number is even, divide it by two.   If it’s odd, multiply by 3 and add 1.   Repeat this process until your resulting number is 1.    So, for example, suppose we start with 5.   We multiply by 3 and add 1, to get 16.   Then, following the same rule, we divide by 2 to get 8.  Then we divide by 2 to get 4, and divide by 2 again to get 2, then 1.   If you try this with a few numbers, you’ll see that although you may go up and down a few times, you always seem to end up at 1.  But are you always guaranteed to arrive at 1, no matter what number you started with?   

Believe it or not, this simple question has not been solved.  It’s a famous open problem of mathematics, known as the Collatz Conjecture, or the “3n+1 problem”.     If we define the stopping time as the number of steps to get to 1, this conjecture can be stated as follows:  all positive whole numbers have a finite Collatz stopping time.   Despite being simple enough to explain to an elementary school student, this problem has defied the efforts of mathematicians and hobbyists for nearly a century.  The late quirky mathematician Paul Erdos once offered a $500 bounty for anyone who solves this problem, but this vast fortune has not yet been claimed.

By experimenting manually with a few numbers, you can easily convince yourself that the conjecture is true— it seems like you really do always end up back at 1, no matter where you started.   Yet your path to get there can vary wildly.   If you start with a power of 2, you can see that you’ll dive straight back to 1.   Some well-positioned odd numbers are almost as easy:  for example, if you start with 85, you’ll then jump to 256, which is a power of 2, and head straight back from there to 1.  On the other hand, if you start with the seemingly innocent number 27, you will find the total stopping time is 111 steps, during which you visit numbers as high as 9232.  The Wikipedia page has some nice graphs showing how the stopping time varies:  its maximum value seems to slightly increase as the starting numbers increase, but there is no simple pattern that can be established to prove the conjecture.    Computers have experimentally shown that the conjecture holds for numbers up to 2^60, but of course that does not prove that it will remain true forever.
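If you’d like to check these numbers yourself, here is a minimal sketch that tracks both the stopping time and the peak value of a trajectory:

```python
def collatz_stats(n):
    """Return (stopping time, highest value visited) for the trajectory of n."""
    steps, peak = 0, n
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
        peak = max(peak, n)
    return steps, peak

print(collatz_stats(85))  # (9, 256): one step to 256, then straight down
print(collatz_stats(27))  # (111, 9232)
```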

This Collatz stopping time function can also be seen as an example of chaos, a case where a very slight change in initial conditions can cause a dramatic difference in the result.   Why is it that starting with 26 will enable you to finish in a mere 10 steps, while increasing to 27 takes 111 steps, and then many higher numbers have far fewer steps?   It’s a good example to keep in mind when someone claims they have made accurate predictions about some iterative physical system using computer models.  Can they make the case that their model is somehow simpler than the Collatz process, of either halving or tripling and incrementing a single number at a time?   If not, what makes them think their modeling is less chaotic than the Collatz problem, or that their initial conditions are so accurate that they have ruled out chaos effects?     

As with many unsolved problems, this problem is also attractive to many slightly self-deluded amateurs, who every few years publish an article or make an online post claiming to have proven it.   One simple way to test any proposed proof is to see if it also applies to very similar problems for which the conjecture is false.  For example, instead of multiplying odd numbers by 3, multiply them by 5 before adding 1, turning the question into the “5n+1 problem”.   In that case, if we start with the number 13, the sequence is 13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13.   Since we got back to the number we started with, this means we repeat forever, without ever getting back to 1!   Thus, any attempted proof of the Collatz 3n+1 problem would also have to include some built-in reason why it doesn’t apply to the 5n+1 version.    Why is the number 3 so special compared to 5?   Well, if I could answer that, I would be riding away in a limo paid for by Erdos’s $500.
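You can watch the 5n+1 cycle happen directly; this short sketch follows the trajectory of 13 until it repeats:

```python
def step_5n1(n):
    """One move in the 5n+1 variant of the Collatz process."""
    return 5 * n + 1 if n % 2 else n // 2

seq = [13]
while len(seq) == 1 or seq[-1] != 13:
    seq.append(step_5n1(seq[-1]))
print(seq)  # [13, 66, 33, 166, 83, 416, 208, 104, 52, 26, 13] -- a loop!
```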

Still, even if you don’t expect to solve it, it is kind of fun to play with example values and look for patterns.   The way that this simple formula can seem to cause numbers to wander away from you, circle around and tease you temptingly, or race straight down to 1, can seem almost lifelike at times.   An intriguing online abstract claims to describe the problem as “an ecological process of competing organisms”, made of 1s in bit strings.  (Sadly, the full paper for that one is hidden behind a paywall, so I wasn’t able to read it.)     But I think my favorite summary of the problem is the one in the XKCD web comic:   “The Collatz Conjecture states that if you pick a number, and if it’s even divide it by two and if it’s odd multiply by three and add one, and you repeat this procedure long enough, eventually your friends will stop calling to see if you want to hang out.”

And this has been your math mutation for today.


Sunday, January 29, 2017

227: Heads In The Clouds

Audio Link

A few days ago I was flipping channels on the TV, and saw a few minutes of one of the horrible movie adaptations of Jonathan Swift’s classic 1726 satirical novel, Gulliver’s Travels.  When you hear that title, you probably think of a man tied up on the beach, captured by an army of tiny Lilliputians.   Most adaptations of the novel focus on that nation of tiny people, which actually comprised only the first section of the Travels.   Although mainly a political satire, even that part of the book had an influence on modern mathematics and computer science—  the Lilliputians were fighting a war over which end of an egg to crack first, the Big Endians vs the Little Endians.   We now use those terms to describe whether the highest or lowest byte comes first in each multi-byte ‘word’ of a computer’s memory.   But that’s not our main topic today.   What I want to talk about is one of Gulliver’s later voyages, which directly satirized the mathematics and science of Swift’s day:    the voyage to Laputa.
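As a quick aside for fellow programmers, the two byte orders are easy to see with Python’s standard struct module:

```python
import struct

# Pack the 32-bit value 0x01020304 in both byte orders:
print(struct.pack(">I", 0x01020304))  # b'\x01\x02\x03\x04' -- big-endian
print(struct.pack("<I", 0x01020304))  # b'\x04\x03\x02\x01' -- little-endian
```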

Laputa was a city built on a floating island, populated by a highly educated race of men who spent all day contemplating advanced ideas of music, mathematics and science, almost completely disconnected from any practical matters.   These were a group of people who literally had their “heads in the clouds”.   According to at least one website, this metaphor was already in use by the 1600s, so Swift may have had it in mind when designing this city.  In fact, Laputans are always so deep in thought that they must hire an assistant to alert them when they need to interact with the real world.   As Swift described it:  

It seems the minds of these people are so taken up with intense speculations, that they neither can speak, nor attend to the discourses of others, without being roused by some external taction upon the organs of speech and hearing; for which reason, those persons who are able to afford it always keep a flapper … in their family, as one of their domestics; nor ever walk abroad, or make visits, without him.  And the business of this officer is, when two, three, or more persons are in company, gently to strike … the mouth of him who is to speak, and the right ear of him or them to whom the speaker addresses himself.  This flapper is likewise employed diligently to attend his master in his walks, and upon occasion to give him a soft flap on his eyes; because he is always so wrapped up in cogitation, that he is in manifest danger of falling down every precipice, and bouncing his head against every post; and in the streets, of justling others, or being justled himself into the kennel.   

Somehow all this contemplation never translates in practice to real world usefulness.  Gulliver admires the elaborate care with which their tailor measures every detail of his body, but the clothes then delivered are ill-fitting.  Their houses are all poorly put together, with walls at odd angles, because their precise geometric instructions are too refined for the uneducated servants who end up having to do the actual building.    They serve their food carefully cut into various geometrical shapes or the forms of musical instruments, with no regard for whether that is an appropriate or useful presentation for actual consumption.   He summarizes the situation with “I have not seen a more clumsy, awkward, and unhandy people, nor so slow and perplexed in their conceptions upon all other subjects, except those of mathematics and music.”

A question you might now be asking is:  why does Swift seem so hostile to advanced science and mathematics, which by our time have resulted in amazing improvements to human comfort, productivity, and lifespan?   We need to keep in mind that back in the early 1700s, it was not at all obvious that all the effort spent by the elites on pursuing advanced studies of mathematics and science was actually leading anywhere.   One of the few practical abilities the Laputans did have was the power to lower their floating city and crush rebellious townspeople on the ground, perhaps a hint at the worry that new science was too often used to develop instruments of war rather than advance humanity.  Here is one of Swift’s comments on the unfulfilled promises of scientific leaders:  “All the fruits of the earth shall come to maturity at whatever season we think fit to choose, and increase a hundred fold more than they do at present; with innumerable other happy proposals.  The only inconvenience is, that none of these projects are yet brought to perfection; and in the mean time, the whole country lies miserably waste, the houses in ruins, and the people without food or clothes.”   In fact, these promises were largely fulfilled by the sciences in the 20th century— too bad Swift never lived to meet Norman Borlaug and see the massive agricultural productivity increases of his Green Revolution.

Swift also was particularly sensitive to the suspicious claims that scientific understanding of mathematical laws governing the natural world would somehow enable a corresponding scientific and mathematical reorganization of society to benefit mankind.   He’s actually pretty explicit about this, stepping back from the satire to address the real world directly at one point:  

But what I chiefly admired, and thought altogether unaccountable, was the strong disposition I observed in them towards news and politics, perpetually inquiring into public affairs, giving their judgments in matters of state, and passionately disputing every inch of a party opinion.  I have indeed observed the same disposition among most of the mathematicians I have known in Europe, although I could never discover the least analogy between the two sciences; unless those people suppose, that because the smallest circle has as many degrees as the largest, therefore the regulation and management of the world require no more abilities than the handling and turning of a globe; 

Here I think Swift was on to something, when we consider that the major mass-murdering totalitarian movements of the 20th century all had intellectuals at their core who believed they needed to scientifically re-engineer society.    On the other hand, since I’m actually an engineer who now serves in elected political office, I should probably stop the podcast at this point before getting myself into trouble.   

If you’re a fellow math geek and haven’t read Gulliver’s voyage to Laputa, I think you’ll really enjoy it.   Since it’s so old it’s out of copyright, you can follow a link in the show notes and read it for free at Project Gutenberg.
And this has been your math mutation for today.


Thursday, December 29, 2016

226: See You Next Year

Audio Link

Before we start, I’d like to thank listener Maurizio Codogno, who published a nice review of the Math Mutation book at   Bizarrely enough, he wrote his review in Italian, but thanks to the magic of Google Translate, that doesn’t stop other listeners from reading it!   Remember that if you like the podcast and/or book, I really do appreciate a nice review at iTunes, Amazon, or Goodreads.

Now, on to today’s topic.   Recently I’ve been reading a collection of essays, short biographical recollections, and text-based art experiments by the radical 20th-century composer John Cage, titled “A Year From Monday”.    You may recall Cage as someone I’ve mentioned in several podcasts,  as he composed (if you can call it that) the silent music piece “4 minutes 33 seconds”, plus numerous musical pieces generated based on complicated formulas involving random numbers.   As usual with Cage, his entries in this book are sprinkled with many instances of weirdness for the sake of weirdness, woven in with a bit of celebrity name-dropping.   But it’s worth reading for the bizarre humor and occasional surprising insight.

Cage also used the essays in the book as lyrics for one of his strangest music pieces, “Indeterminacy”.   In Indeterminacy, he read each short story at a speed designed to fill a constant interval, while randomly-determined music played in the background.   Due to the timing needs of the music, pieces would sometimes be read very quickly, to fit in a lot of words, or very slowly to fill the available time.   There also was a random ordering to the stories.   As Cage described it, “My intention in putting the stories together in an unplanned way was to suggest that all things – stories, incidental sounds from the environment, and, by extension, beings – are related, and that this complexity is more evident when it is not oversimplified by an idea of relationship in one person’s mind.”   I actually bought the 2-CD set a number of years back, but found listening to it a rather frustrating experience, as you can probably guess.   As often happens with Cage, the idea of the piece is a lot more fun than the actual end result.

Getting back to the book, one of the aspects that I find most amusing is the conundrum represented in its title, “A Year From Monday”.   Apparently Cage was having fun with a group of old friends, and they decided they wanted to get together again.  One of them suggested that they would all meet at a favorite spot in Mexico “a year from Monday”, and they all agreed to the proposal, without further clarification.   Cage liked the idea, as it appealed to, as he described it, “my interests in ambiguity and my interest in non-measurement”.  After leaving, however, Cage started wondering, when exactly did they agree to meet?

As you’ve probably already figured out, the phrase “a year from Monday” is rather ambiguous.   How do we define such an interval?   The easiest method would be to assume that they just meant the same date next year:  if we assume that the discussion occurred on Monday, June 2nd, for example, then they would meet next year on June 2nd.   But this will not be a Monday, since the number of days in a year is not divisible by 7— is this a problem?   Normally, when we talk about an interval starting on a day of the week, we expect to meet on that same day again:  for example, when scheduling a monthly meeting in a tool like Microsoft Outlook, we usually select options like “the first Monday of every month” or “the second Monday of every month”.   So perhaps it would be more reasonable to assume that the plan actually meant to meet on the first Monday of June next year.

There is also the question of how to handle the possible case of a leap year.   If the intervening February had an extra day, would they have to meet a day later than originally planned, more like a year from Tuesday?   On the other hand, maybe our slavish devotion to human-created calendars is part of the problem.   If an objectively-measured solar year was intended, this is about 365.25 days, so it might make more sense to meet 365 days later, but delay the meeting time by 6 hours in order to make the interval precisely one year.
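Python’s datetime module makes it easy to play with these interpretations; here is a sketch (using 2014, a year in which June 2nd actually fell on a Monday):

```python
from datetime import date, timedelta

start = date(2014, 6, 2)
print(start.strftime("%A"))       # Monday

# Interpretation 1: the same calendar date next year
same_date = start.replace(year=2015)
print(same_date.strftime("%A"))   # Tuesday, since 365 isn't divisible by 7

# Interpretation 2: the first Monday of that month next year
d = date(2015, 6, 1)
while d.weekday() != 0:           # Monday is weekday 0
    d += timedelta(days=1)
print(d)                          # 2015-06-01
```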

While Cage claimed that he enjoyed the ambiguity, he also had a conflicting tendency to carefully plan and measure musical pieces like an engineer, as shown by the precise time intervals used in Indeterminacy.    Thus, eventually he realized that he had to figure out when exactly he was going to Mexico.   When he tried to confirm his plans with the other attendees, he soon realized that due to the ambiguous phrasing, most of them had not actually taken the rendezvous seriously.  In fact, some had made firm plans which would prevent them from meeting in Mexico anywhere near the proposed date.   With that, Cage decided to give up his attempts at planning, realizing that some problems don’t have a clear answer, and to simply rely on Fate.  As he phrased it, “We don’t have to make plans to be together.  (Last July, Merce Cunningham and I ran into Bucky Fuller in the airport outside of Madrid.)  Circumstances do it for us.” 
And this has been your math mutation for today.


Tuesday, November 29, 2016

225: A Crazy Kind of Computer

Audio Link

Before we start, just a quick reminder:  the Math Mutation book would make a perfect holiday present for the math geeks in your life!   You can find it on Amazon, or follow the link at .   And if you do have the book, we could use a few more good reviews on Amazon.   Thanks!   And now, on to today’s topic.

Recently I learned about a cool trick that can enable very rapid computation of seemingly difficult mathematical operations.  This method, known as stochastic computing, makes clever use of the laws of probability to dramatically cut down the amount of logic, essentially the number of transistors, needed to compute elementary functions.   In particular, a multiplication operation, which takes hundreds or thousands of transistors on a conventional computer, can be performed by a stochastic computer with a single AND gate.   Today we’ll look at how such an amazing simplification of modern computing tasks is possible.
First, let’s review the basic elements of standard computation, as realized in the typical design of a modern computer.   At a logical level, a computer is essentially built out of millions of instances of three simple gate types, each of which takes one or two single-bit inputs, which can be either 0 or 1.  These are known as AND, OR, and NOT gates.   An AND gate returns a 1 if both its inputs are 1, an OR gate returns a 1 if at least one of its inputs is 1, and a NOT gate transforms a single bit from 0 to 1 or vice versa.   An operation like multiplication would occur by representing your pair of numbers in binary, as a set of 0s and 1s, and running the bits through many AND, OR, and NOT gates, more or less replicating the kind of long multiplication you did in elementary school, with a few optimizations thrown in.    That’s why it takes so many gates to implement the multiplier in a conventional computer.
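To make the gate-level picture above concrete, here is a minimal sketch of the three primitive gates as Python functions, plus a half adder to show how quickly gate counts grow: even adding two single bits already consumes several gates once the XOR is built from the primitives.  (The function names and construction here are my own illustration, not from the episode.)

```python
# The three primitive gates, modeled as functions on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR built from the primitives: (a AND NOT b) OR (NOT a AND b).
def XOR(a, b): return OR(AND(a, NOT(b)), AND(NOT(a), b))

# A half adder: the smallest building block of binary arithmetic.
# Returns (sum bit, carry bit); a full multiplier chains many of these.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): binary 1 + 1 = 10
```

Chaining half and full adders into a complete long-multiplication circuit is what drives the gate count into the hundreds or thousands mentioned above.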

So, with this being said, how can we reduce the multiplication operation down to a single AND gate?   The key is that instead of representing our numbers in binary, we send a stream of bits down each input to the gate, such that at any moment the probability of that input being 1 is equal to one of the numbers we are multiplying.   So for example, if we want to multiply 0.8 times 0.4, we would send a stream where 1s appear with 80% probability down the first input, and one with 40% 1s down the second input.   The output of the AND gate would then be a stream whose proportion of 1s is 0.8 times 0.4, or 0.32.
The reason this works is due to a basic law of probability:   the probability of two independent events both happening is equal to the product of their probabilities.   For example, if I tell you there is an 80% (or 0.8) chance I will record a good podcast next year, and a 40% (or 0.4) chance I will record a bad podcast next year, the chance I will record both a good and bad podcast is 0.8 times 0.4, or 0.32.  Thus there is a 32% chance I will do both.    The fact that this ANDing of two probabilistic events is equivalent to multiplying the probabilities is the key to making a stochastic computer work.   
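This single-AND-gate multiplier is easy to simulate.  The sketch below is my own illustration of the idea: the random seed and stream length are arbitrary choices, and a longer stream simply tightens the estimate.

```python
import random

random.seed(1)       # arbitrary seed, for reproducibility
N = 1_000_000        # stream length; longer streams give a more precise answer

# Two independent bit streams encoding 0.8 and 0.4 as probabilities of a 1.
a = [1 if random.random() < 0.8 else 0 for _ in range(N)]
b = [1 if random.random() < 0.4 else 0 for _ in range(N)]

# The entire "multiplier" is one AND gate, applied bit by bit.
product = [x & y for x, y in zip(a, b)]

estimate = sum(product) / N
print(estimate)  # close to 0.8 * 0.4 = 0.32
```

The fraction of 1s in the output stream converges on 0.32 precisely because each output bit is 1 only when two independent events, each with its own probability, happen at once.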
Now if you’re listening carefully, you may have noticed a few holes in my description of this miraculous form of computation.   You may recall from elementary probability that there is one major limitation to this probability-multiplying trick:  the two probabilities must be *independent*.  So suppose you want your stochastic computer to find the square of 0.8.   Can you just connect your 80%-probability wire to both inputs of the AND gate, and expect an output result that is 0.8 times 0.8 (or 0.64) ones?   No— the output will actually just replicate the input value of 0.8, since at any given time, it will be 1 if and only if the single input stream has a value of 1.   Think about my real-life example again:  if I tell you there’s an 80% chance I’ll record a good podcast next year, what’s the chance I’ll record a good podcast AND I’ll record a good podcast?   Stating it redundantly doesn’t change the probability, it’s still 80%.   To compute a square operation in a stochastic computer, I need to ensure I have two *independent* bit streams that each represent the number I’m squaring.   So I need two separately-generated streams, each with that 80% probability, and can’t take the shortcut of connecting the same input twice.   If performing a series of computations in a stochastic computer, a designer needs to be very careful to take correlations into account at each stage.  
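The correlation pitfall can be demonstrated directly in simulation.  In this sketch (again my own illustration, with arbitrary seed and stream length), wiring the same stream to both AND inputs just reproduces 0.8, while two independently generated streams give the correct square of 0.64.

```python
import random

random.seed(2)       # arbitrary seed, for reproducibility
N = 1_000_000

s = [1 if random.random() < 0.8 else 0 for _ in range(N)]

# Wrong: connect the same stream to both AND inputs.
# Since x AND x is just x, the output merely copies the input.
wrong = sum(x & x for x in s) / N
print(wrong)   # about 0.8, not 0.64

# Right: generate a second, independent 0.8 stream for the other input.
s2 = [1 if random.random() < 0.8 else 0 for _ in range(N)]
right = sum(x & y for x, y in zip(s, s2)) / N
print(right)   # about 0.8 * 0.8 = 0.64
```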
To look at another challenge of this computing style, let’s ask another basic question:  why doesn’t everyone implement their computers this way?  It’s not a question of technology— stochastic computing was first suggested in the 1950s, and such devices were actually constructed starting in the late 1960s.   Most importantly, generating independent bit streams with the probabilities equal to all the inputs of your computing problem, and then decoding the resulting output stream to find the probability of a 1 in your results, are both very complex computing tasks in themselves.   So I kind of cheated when I said the multiplication was done solely with the single AND gate.   Processing the inputs and outputs requires major designs with thousands (or more) of logic gates, quickly wiping out the benefit of the simplified multiplication.    This isn’t a total barrier to the method though.   The key is to find cases where we are able to do a lot of these probabilistic operations in relation to the number of input bit streams we need to create.    This can even be an advantage in some ways:  unlike a conventional computation, if a stochastic computer spends extra time performing the same computation and observing the output, it can increase the precision of the result.   But these issues do mean that finding good applications for the method can be rather tricky, as complex input and output generation must be balanced against the major simplification of certain operations.
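The precision-versus-time tradeoff mentioned above can also be seen in simulation: running the same stochastic multiplication with longer bit streams shrinks the typical error, roughly in proportion to one over the square root of the stream length.  This sketch is my own illustration; the stream lengths and trial count are arbitrary.

```python
import random
import statistics

random.seed(3)  # arbitrary seed, for reproducibility

def stochastic_product(n):
    """Estimate 0.8 * 0.4 by pushing n bit-pairs through one AND gate."""
    hits = sum((random.random() < 0.8) and (random.random() < 0.4)
               for _ in range(n))
    return hits / n

# Average the error over several trials at each stream length.
for n in (100, 10_000, 100_000):
    errors = [abs(stochastic_product(n) - 0.32) for _ in range(20)]
    print(n, statistics.mean(errors))  # error shrinks as n grows
```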

Although stochastic computers were first constructed almost a half-century ago, they were then mostly abandoned for several decades.   With the breakneck pace of advances in conventional computing, there just wasn’t that much interest in exploring such exotic methods.    But with the recent slowdown of Moore’s Law, the traditional doubling of computer performance every 18 months or so, there has been a renewed interest in alternate computing models.   Certain specific applications, such as low-density parity check codes and image processing, are well-suited to the stochastic style of computing, and are current subjects of ongoing research.    I’m sure that in coming years, we’ll hear about more clever solutions to common problems that can be better performed by this unusual computing method.

And this has been your math mutation for today.