Sunday, July 6, 2014

197: Four Dimensional Greek Warships

Audio Link

Today we're going to discuss the famous Ship of Theseus paradox.   This paradox, known at least since ancient Greek times, involves the ship that the famous hero Theseus sailed from Crete to Athens.   When the ship returned, the local shipwrights noticed that one of its boards was starting to rot, so they replaced it.   Because the ship was so famous, rather than eventually scrapping it as it got old, they continued over the years replacing any broken or decaying boards with new ones.    Eventually 100% of the wood on the ship had been replaced:  not a single plank remained that had been present on the original voyage from Crete.   At this point, was this still the same ship from Theseus's voyage, or should this be considered a new ship that had been constructed in the port of Athens?    Many years later, Thomas Hobbes made this paradox even more confusing by adding one more issue:  suppose someone had painstakingly gathered the removed boards and used them to construct a second, complete though not-very-sturdy, ship.  Would this new ship have a better claim to be the true Ship of Theseus?

This paradox has been described with several variations over the years.    During the Enlightenment, John Locke described essentially the same paradox based on patching a sock.    Jules Verne came up with a clever version where an old man took a much younger wife, who after being widowed many years later took on a much younger husband, and the pattern repeated for hundreds of years.   Was this "the same" marriage centuries down the line?    In modern times, you're most likely to encounter this paradox in terms of rock bands, who seem to be in a perpetual state of warfare over the band name after some original members leave.   For many years I was disappointed that the last Velvet Underground album, "Squeeze", was out of print; it was considered to not be a true VU album by most fans, since none of the original members were left by the time it was recorded, but I was still curious to hear it.   Eventually iTunes made that album available, and after listening to it once, I have to take the side that the vessel in Athens' harbor should have been burnt as kindling.

But more seriously, how do we resolve this paradox?   After all the planks had been replaced, is Theseus's ship the one made out of the new planks, the one constructed from the old planks, or neither?   I think the most satisfying solution I have heard is based on the concept of "four dimensionalism".   The idea here is that our problem stems from the naive definition of Theseus's ship as an object at some point in three dimensional space.   We need to think of  the Ship of Theseus as the union of a continuous set of objects in four-dimensional spacetime, accounting for not just the three dimensions of the physical object but also the points in the fourth dimension of time.   Each 'slice' of the Ship of Theseus consists of a three-dimensional ship at a particular point in time, and the Ship of Theseus is a union of all these slices.  

In this view, the Ship of Theseus consists of the original ship on the day it returned to port, plus the ship in that port with one plank replaced a week later, and so on.    We need to be clear about what we are defining as Theseus's ship at each point in time.   Thus gradually changing out the planks doesn't make the ship a different ship, since we defined the Ship of Theseus to be the one that is continually getting repaired over some interval in four-dimensional spacetime.     Note that it is also possible to instead define the Ship of Theseus to be the sum of the original planks at each given point in time.    With this alternative but still valid definition, Theseus's oddly defined 'ship' will start out as the original ship upon its return from Crete, but then at many future time points consist of a partial ship plus a pile of wood in a junkyard somewhere, until Hobbes completes his duplicate.
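To make the slice picture concrete, here's a toy Python sketch.   The plank names and the one-plank-per-slice repair schedule are invented for illustration: no plank survives from the first slice to the last, yet every adjacent pair of slices overlaps heavily, which is exactly what makes the four-dimensional "worm" a single continuous object.

```python
# Toy four-dimensionalist model: each time slice is the set of planks
# making up the ship at that moment.  Plank names and the repair
# schedule are invented for illustration.
repaired_ship = {
    0: {"p1", "p2", "p3"},       # original hull on its return from Crete
    1: {"new1", "p2", "p3"},     # one rotten plank swapped out
    2: {"new1", "new2", "p3"},
    3: {"new1", "new2", "new3"}  # no original wood remains
}

# No plank survives from the first slice to the last...
assert repaired_ship[3] & repaired_ship[0] == set()

# ...yet each slice shares most of its planks with the next, so the
# union of the slices forms one continuous spacetime "worm".
for t in range(3):
    assert len(repaired_ship[t] & repaired_ship[t + 1]) == 2

print("continuous worm, zero surviving planks")
```

Hobbes's reconstructed ship would just be a different worm through the same spacetime: one whose later slices track the set of original planks instead.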

Of course, there are simpler ways to resolve the paradox as well.   One might argue that the real problem is just the vague and muddy definition of the Ship of Theseus, which the paradox assumes as an implicit notion but never clearly states.   (I wonder if the "E prime" techniques from the last episode, where you avoid the word 'is' in favor of more specific verbs, could have helped here.)    If we had clearly defined the Ship from the outset, in terms of its planks, its deed, its captain, or something more tangible, we would never hit a seeming paradox.    A nice metaphor might come from some U.S. gun control laws, where a component known as the "lower receiver" is considered to contain the identity of the firearm:  the weapon legally remains the same when any other part is replaced, but becomes a different one if you change the receiver.

Anyway, next time you stop in at the local mechanic and they replace a tire or air filter, think about whether Theseus would let you tell your spouse that you brought home a new car.  

And this has been your math mutation for today.







References:

Sunday, June 15, 2014

196: When 'Is' Isn't

Audio Link


Back in episode 192, we talked about Lojban, a language intended to fix the various flaws that make most natural languages illogical. But aside from the Lojban effort, others have attempted simpler solutions to improve the logic in natural languages. Followers of the cultlike philosophical movement known as General Semantics have found a way to supposedly improve the English language by transforming it into a new language called "E prime". They base E prime on the observation that poor and ambiguous usage of the verb "to be" causes many logical problems. This verb can hide the assumption of a logical hypothesis that needs to be better grounded in the user's previous definitions, and can create an illusion of a factual observation when in fact a writer merely states an opinion or guess. Thus, by completely disallowing all forms of the verb "to be", we can speak more logically and consistently, and add clarity and rigor to our thought processes. This change to the language can also make statements and discussions less dogmatic, and some theorize that it could reduce strife and conflict in human societies. The name "E prime" comes from the equation "E prime equals (capital) E - (lowercase) e", where capital E represents the full English language, and lowercase e represents the verb "to be".

To get an idea of the problems E prime attempts to solve, let's look at some example ambiguous or illogical statements in English that would no longer be allowed. Suppose you want to say "Erik is a podcaster". Do you mean that Erik makes his living at podcasting? That Erik records podcasts as a hobby? That Erik spends his weekends partying at a fraternity nicknamed "the podcasters"? You could mean any or none of these things. Another example comes from a classic Shakespeare quote, "To be or not to be, that is the question." We all know what that means, referring to Hamlet's struggles to decide whether or not to commit suicide. But we only know that due to lots of context from studying the play in school, or from hearing others talk about it. Wouldn't it be better if Shakespeare had said, "To live or to die, I ask myself this." While you may dispute the poetic value of this rephrasing, I think you'll agree that it communicates the idea clearly, even to someone who doesn't have any context about Shakespeare or Hamlet.

This ambiguity in language comes from the fact that the verb "to be" can have many different meanings in English. It can signify identity, class membership, predication, existence, location, or mathematical equality, or serve as an auxiliary verb. E prime advocates offer some common suggestions for making each of these types of statements clearer. For example, statements of identity can usually be rephrased to spell out the roles or credentials being asserted, as in replacing "he is the landlord" with "he owns the building and manages it." In statements about mathematics, using the precise term "equals" to replace "is" makes discussions much more logically sound, and helps to distinguish statements of equations from definitions of terms.

Some have claimed that while all these complaints of ambiguity are valid, eliminating a verb entirely from the language goes a bit too far, and the issues should be solved by improving general discipline in language use. E prime advocates tend to admit that this might make sense in theory, but in real life people just have too much difficulty applying that kind of discipline to their writing. For example, one web author writes, "After seven years of experience with this technique, I must agree with Dr. Kellogg (who even speaks in E-Prime) that, to work effectively, E-Prime requires the total elimination of be forms, since we use them addictively, even compulsively, as their subliminal residuum even in third drafts attests. On a recent foray into cyberspace, for instance, I found a Web Page featuring four sentences 'rewritten' in E-Prime--two of them containing be forms!"
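Given how hard the author above found it to purge be-forms by hand, it's tempting to automate the check.   Here's a small Python sketch of such a checker; the regular expression is a heuristic I made up for this example, not a real grammar, and it will miss some contractions and unusual edge cases.

```python
import re

# Rough E-Prime checker: flags common forms of "to be", including
# contractions.  This word list is a heuristic for illustration,
# not an exhaustive grammar.
BE_FORMS = re.compile(
    r"\b(am|is|are|was|were|be|been|being|isn't|aren't|wasn't|weren't|"
    r"i'm|you're|he's|she's|it's|we're|they're)\b",
    re.IGNORECASE,
)

def be_forms(text):
    """Return the list of be-verb forms found in text."""
    return BE_FORMS.findall(text)

print(be_forms("Erik is a podcaster."))          # ['is']
print(be_forms("Erik records silly podcasts."))  # []
print(be_forms("To be or not to be"))            # ['be', 'be']
```

Note that a checker like this can't tell "it's" meaning "it is" from the possessive-looking uses people argue about, which hints at why even third drafts keep sneaking be-forms past human editors.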

Naturally, as we could expect with a movement for such a major change, some members of the General Semantics movement have opposed the idea of E prime as well. They point out that eliminating all forms of a particular verb can only make writing poorer and less interesting, by reducing a writer's options. They also point out that the forbidden verb communicates unambiguously in many common situations: for example, if someone asks "What color is that rose", the answer "That rose is red" contains no ambiguity or cause for confusion. They also point out that if we have the goal of eliminating logically unsupported inferences, malicious writers can always sneak them in using other verbs. They could change, for example, the non-e-prime "Erik's podcasts are silly" to the fully acceptable "Erik records silly podcasts", keeping the implied accusation of silliness without using the word "are". If you look at the article linked in the show notes, you'll also find numerous obscure objections to e-prime that have meaning mostly to General Semantics devotees.

So, should we all try to eliminate the verb "to be" from our writing, in order to make it logically clearer, more rigorous, and less dogmatic? Judging by what I see on the web, a small but dedicated E prime community seems to still exist, as a subset of the still-continuing General Semantics movement. I think the adoption of some aspects of their philosophy by the somewhat creepy Scientology movement didn't do them any favors. But as with many radical ideas, I think the concepts of E prime may contain a kernel of truth. Several teachers of English and composition claim online that while they do not enforce the strictness of E prime for general usage, they recommend it to their students as a way to improve discipline and clarity in their writing. After reading a few articles on it, I do feel motivated to try to review my writing for overuse of unsupported "is" statements. If you listen carefully, you may already have noticed that I attempted to deliver this episode in E prime, with the only uses of the verb "to be" occurring in cases where I quote it rather than use it. Do my thinking and speaking seem clearer than usual, or just more awkward? Only you listeners know for sure.

And this has been your math mutation for today.



References:

Sunday, May 18, 2014

195: Time Reversed Worlds

Audio Link

Recently I read Martin Gardner's classic book "The New Ambidextrous Universe", on various forms of symmetry found in modern physics.   One of the most amusing ideas discussed in this book is the idea of time-reversed worlds.   Is the direction of time truly fixed, as it seems to be from our point of view?   Or could there be parts of our universe where time runs in reverse from how we observe it, so our future is their past, and vice versa?    Naturally, this idea has been explored by many science fiction writers over the past century.   I think my favorite use of the idea was by Kurt Vonnegut in his novel Slaughterhouse-Five.   Let me quote a bit from Vonnegut's description of how a time-reversed observer would describe the bombing of Dresden during World War II:  

"The formation flew backwards over a German city that was in flames. The bombers opened their bomb bay doors, exerted a miraculous magnetism which shrunk the fires, gathered them into cylindrical steel containers, and lifted the containers into the bellies of the planes. The containers were stored neatly in racks. ... When the bombers got back to their base, the steel cylinders were taken from the racks and shipped back to the United States of America, where factories were operating night and day, dismantling the cylinders, separating the dangerous contents into minerals. Touchingly, it was mainly women who did this work. The minerals were then shipped to specialists in remote areas. It was their business to put them into the ground, to hide them cleverly, so they would never hurt anybody ever again."

More seriously, we might ask the question, how could a time-reversed world be possible?   One idea comes from the concept of antimatter.     As you may recall from physics, most fundamental particles have a corresponding antiparticle with opposite charge.   For example, an electron has a negative charge, and its antiparticle the positron has a positive charge.   When a particle and an antiparticle collide, they annihilate each other in a burst of energy, creating a new photon, or light particle.     Also, sometimes it is possible for a photon to spontaneously generate a particle-antiparticle pair.    In 1948 the Nobel-prize-winning physicist Richard Feynman began creating diagrams,   known as Feynman diagrams, to illustrate these basic interactions.

While viewing the mathematical symmetry in these diagrams, Feynman had a curious insight.   Perhaps antiparticles could simply be particles travelling backwards in time.    For example, a typical interaction in a Feynman diagram might show an electron and positron colliding, generating a short-lived photon, and the photon then creating an electron-positron pair.   But you could also interpret this as the first electron suddenly reversing direction, emitting a photon as it turns backwards in time to become a positron.   Then a short time later, a positron travelling backwards in time collides with the photon, and reverses direction to become a forward-moving electron.   The resulting picture is exactly the same.    This is a bit easier to understand if you see the diagrams visually; take a look at the Wikipedia link in the show notes to see what I mean.

If antimatter is just matter travelling backwards in time, could there be entire solar systems and worlds somewhere out there in the universe that are made of antiparticles, and thus experiencing time backwards from our point of view?   It's initially challenging to tell directly if a distant solar system is matter or antimatter, since the photon is its own antiparticle, and thus an antimatter solar system would look just like a matter one through a telescope.   However, modern physicists have realized that interstellar space does contain some atoms, approximately one per cubic meter.   This doesn't sound like a lot, but is enough that any matter/antimatter region boundary should be detectable due to the particles annihilating each other and generating gamma ray bursts.   Heavier particles also occasionally arrive in cosmic rays, giving us another place to look for evidence of antimatter regions.    The physicists looking for these phenomena are now pretty sure that there are no antimatter regions in the observable universe.     However, there could still be such regions in areas too far away for us to observe.

Of course, there are other objections to the possibility of time reversal in our universe, such as the second law of thermodynamics, whose ever-increasing entropy seems to point time monotonically in one direction.   And if some time-reversed antimatter aliens stopped by to visit, it would be a rather unpleasant encounter, with them getting annihilated particle by particle as soon as they arrived.    We might try to communicate with them by radio, but it would be a rather confusing conversation, as the only questions they could answer would be ones that we hadn't transmitted yet.   But perhaps by programming long-lasting computers to send messages sometime in the future, we could at least have a rudimentary discussion and become aware of the basics of each other's existence.

And this has been your math mutation for today.

References:

Sunday, April 13, 2014

194: Voyages Through Animalspace

Audio Link

Before we start, I'd like to thank listeners WhWaldo and DLove21 for posting some more great reviews to iTunes.   Thanks guys!    And to you other listeners out there, don't forget, you too could be mentioned in an upcoming podcast by posting a nice review.   

Anyway, on to today's topic.   Yesterday I was at the Oregon Zoo with my daughter, and we saw lots of cute and not-so-cute animals, including a tortoise, lizards, tigers, sea otters, and a chimpanzee.   It's always amazed me that such a variety of animals could evolve on our planet, and through a variety of mutations some primal forms have led to all these diverse and dissimilar creatures.   For a long time I found this hard to grasp, until I read Richard Dawkins' famous book "The Ancestor's Tale".    In one chapter of the book, he described evolution as a grand mathematical journey through a special kind of multidimensional space.   Somehow this geometric view of evolution made it seem more real, and more sensible to me than it had ever been before, so I thought I would go ahead and share it with you.

What do we mean by a journey through a geometric space?   Let's start by talking about a journey through an ordinary three-dimensional space.   Think of a 3-D graph you might set up in a tool like Excel, showing your location in your house in terms of length, width, and height relative to the front door.   So a dot at coordinates (0,0,0) might indicate that you are at the front door, while (10,10,10) might show that you are in your computer room a short distance away on the second floor.     Suppose you ask the question:  is it possible to get from the front door to the computer room?   The answer is yes if you can draw a continuous path in your graph from coordinates (0,0,0) to (10,10,10), in which every point along the way is physically reachable.    If your computer room is unreachable- say your wife encased it in steel walls on all sides to keep you from playing so many video games-- this is represented by impassible blacked-out regions in your graph, preventing you from drawing this continuous path.

Now let's look at how we can model animal evolution as a graph.   Think about several characteristics of animals, such as fuzziness, size, and strength.    You could draw points on a 3-D graph showing where some similar animals fall in these dimensions.   Perhaps your house cat would be close to the graph's origin, while a Siberian tiger would be represented by a point further out.    Let's ask the question:  can we travel on a continuous path, where motion is due to genetic mutations resulting in a living creature slightly different on one of our three dimensions, from the housecat point to the tiger point?      It's pretty easy to imagine mutations that make an animal slightly larger, stronger, or fuzzier.     Nobody would seriously propose, for example, that there is a blacked-out region somewhere between the cat and tiger where, after a certain size, there is no way a creature with that specification could be alive.    So we can easily imagine that the cat and the tiger are related.

Looking at just these three dimensions is obviously a massive simplification, as there are thousands of dimensions along which an animal can be described:   diet type, eyesight, hearing, and many other things you probably can't even think of if you're not a professional vet.   So the three dimensions we are limited to in our sad dimensionally poor existence, at least from our perception, are not sufficient to describe a creature.   But the core concept remains:  any animal can be thought of as a point in a large multidimensional graph.   Graphs with more than three dimensions can be easily modeled with modern computer systems, though we can't physically look at more than three in a single figure.   If you want to figure out if some animal can have an evolutionary relationship to another animal, you just need to ask:  can you conceive a continuous path from one to the other in this gigantic space?   It doesn't matter if the path is incredibly long- evolution has millions of years to work with.  

The most challenging part is that there are lots of blacked-out regions on this graph, representing non-viable monstrosities:   the point with the size of a housecat and the bite strength of a tiger, for example, can probably never be reached, though if you try to pet my cat while he's washing himself you may get pretty close.    As Dawkins points out, "in the multidimensional landscape of all possible animals, living creatures are islands of viability separated from other islands by gigantic oceans of grotesque deformity.   Starting from any one island, you can evolve away one step at a time, here inching out a leg, there shaving the tip of a horn, or darkening a feather."    So the islands that Dawkins describes in his graph can be thought of as connected by thick sandbars, showing paths from one to the other where the intermediate creatures are reasonable.    The journey from a T. Rex to a chicken may seem incredible, but I don't find it that hard  to imagine a very long continuous series of changes that trace this journey in this strange type of space:  changes in size, gradual transformation of arms to wings, hardening of teeth into a beak, etc.

There's actually one more detail of this space that makes evolution slightly easier to believe than it might sound at first.   We've been talking about continuous paths, but that is an oversimplification.   Every genetic change is actually a tiny discrete 'jump' from one point to another, so the paths do not have to be fully continuous.   So, for example, the jump from total blindness to  light-sensitive spots, then to recessed spots filled with fluid, and so on to a full eye may seem to have many discontinuities, but that's okay, as long as none of the discontinuities is large enough that it can't be jumped by a small genetic mutation.   So some thin blacked-out regions of this graph may not be insurmountable.   There are of course some discontinuities that can't be jumped-- a bird with a petroleum-based jet propulsion system might be plottable here, but it would require such a massive set of changes at once that it's probably effectively impossible.   The blacked-out regions between our superbird and the chicken are likely just too thick to allow an evolutionary jump.
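The island-and-sandbar picture can be sketched as a graph search.   In this toy Python model, all coordinates and viability data are invented for illustration:  viable creatures are cells in a two-dimensional (size, bite strength) grid, a mutation can jump at most one step in each dimension, and a breadth-first search asks whether an evolutionary path exists.

```python
from collections import deque

# Toy "animalspace": cells are (size, bite_strength) points, and only
# the cells listed here correspond to viable creatures.  All data is
# invented for illustration.
viable = {
    (0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 3),  # a "sandbar" of viable forms
    (0, 5),                                          # an isolated island
}

def reachable(start, goal):
    """BFS over viable cells; each mutation moves at most one step per dimension."""
    frontier, seen = deque([start]), {start}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            return True
        x, y = cell
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if nxt in viable and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False

print(reachable((0, 0), (5, 3)))  # housecat -> tiger along the sandbar: True
print(reachable((0, 0), (0, 5)))  # isolated form, gap too wide to jump: False
```

The "thin" blacked-out regions from the eye example correspond to letting the per-step jump be slightly larger than one cell; the jet-propelled superbird corresponds to a gap no allowed step size could ever cross.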

Anyway, maybe this only helps for math geeks, but I found Dawkins' spatial explanation a really intuitive way to think about evolution.  Next time you play with your cat or dog, remember that he's not just a pet, he's a unique point in a massive multidimensional space.

And this has been your math mutation for today.







References:

Sunday, March 23, 2014

193: Nonrandom Randomness

Audio Link




Recently my wife got a bit annoyed with me as we drove to a restaurant on our Saturday date night. The problem was that, as usual, I plugged my iPhone into my car's radio to play music for our drive, telling it to shuffle the playlist and select random songs. On this particular night, my iPhone decided to play four David Bowie songs in a row. Now I should admit that Bowie does take up a nontrivial proportion of my usual playlist, about 160 or so out of the 1000 songs on the list. But it was still pretty surprising that we heard nothing but Bowie on the drive; my wife thought I had set the iPhone on Bowie-only just to drive her nuts. Is such a streak a reasonable result for a truly random song shuffle?

Well, to answer this, we should think about the probability-- does it make sense that every once in a while, we would experience a streak like this? It's kind of similar to the "birthday paradox", which you may have heard me mention before. Suppose you are at a party with a bunch of friends, and ask them all their birthdays, looking to see if any two share the same one. You would think that the probability of two people with the same birthday would be pretty low, since if you ask a random person their birthday, the chance of them sharing your birthday is only 1 in 365. But actually, the probability reaches 50 percent as soon as you have 23 people at the party. This seems pretty counterintuitive at first. But think about the number of pairs you have with 23 people: the total number of possible pairs of people is 23 times 22 over 2, or 253. When you look at it this way, the number of pairs seems in the right ballpark to have a decent chance of a shared birthday.

The actual calculation is a bit more complex. An easy way to look at it is by analyzing the party attendees one at a time, and calculating the chances that we do NOT have two people sharing the same birthday. We want to calculate, for each person, the chance that they do not share a birthday with any of the previously analyzed visitors. P1, the probability that there are no shared dates yet after looking at the first person, is 1, since there are no people before him. P2 is 364/365, the chance that visitor #2 does not have the same birthday as visitor #1. P3 is 363/365, the chance that visitor #3 doesn't have his birthday on either of the two days seen so far. And so on. The final probability that nobody has shared birthdays is P1*P2*P3*..., up to the number of party attendees. You can see the full probability calculation for this situation at the Wikipedia page linked in the show notes. The ultimate result, as I mentioned before, is that if you have 23 attendees, the probability is only about 49% that there are no shared birthdays.
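The product P1*P2*P3*... described above is easy to compute directly. This short Python sketch confirms that the chance of a shared birthday first crosses 50% at exactly 23 people:

```python
def p_no_shared_birthday(n):
    """Probability that n partygoers all have distinct birthdays (365-day year)."""
    p = 1.0
    for k in range(n):
        p *= (365 - k) / 365   # the (k+1)-th person avoids all k earlier birthdays
    return p

# The crossover happens at 23 people, matching the episode's figure.
for n in (10, 22, 23, 50):
    print(f"{n} people: P(shared birthday) = {1 - p_no_shared_birthday(n):.3f}")
```

Running this shows the shared-birthday probability at about 0.476 for 22 people and about 0.507 for 23, i.e. the "no shared birthdays" probability drops to roughly 49% at 23 attendees.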

So, now that we understand the birthday paradox, or at least you're willing to entertain the notion if it hasn't fully sunk in, what does that have to do with shuffled songs? Well, as one author points out at HowStuffWorks.com, you can think about shared artists for songs as something like shared birthdays. My playlist has way fewer artists represented than the number of days in a year, and I have been playing them over and over on many car trips. In the particular case of Bowie, we can see that the odds are better than average, as he represents about 1/6th of my typical playlist. Thus any time a Bowie song plays, there is roughly a 1 in 6 cubed, or 1 in 216, chance it starts a streak of four. And I've gone on a lot more than 216 car rides in my life. So it's not only not unusual, but expected, for me to see regular Bowie streaks. And that doesn't count streaks by other musicians as well, who have slightly lower odds but also are expected to occasionally appear several times in a row.
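We can also check the streak argument by simulation. In this Python sketch the 160-out-of-1000 Bowie proportion comes from the episode, but the 8-song car ride length is my own assumption for illustration; the simulation estimates how often a single ride contains four Bowie songs in a row.

```python
import random

# Playlist proportions from the episode: 160 Bowie songs out of 1000.
PLAYLIST = ["bowie"] * 160 + ["other"] * 840

def ride_has_streak(ride_len=8, streak=4):
    """Shuffle-sample one car ride and check for a run of 4 Bowie songs.
    (The 8-song ride length is an assumption, not from the episode.)"""
    ride = random.sample(PLAYLIST, ride_len)
    run = best = 0
    for song in ride:
        run = run + 1 if song == "bowie" else 0
        best = max(best, run)
    return best >= streak

trials = 100_000
hits = sum(ride_has_streak() for _ in range(trials))
print(f"rides containing a 4-song Bowie streak: {hits / trials:.2%}")
```

The simulated rate comes out around a few tenths of a percent per ride, consistent with the rough 1-in-216 argument: rare on any given night, but expected to happen over hundreds of car trips.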

We also need to keep in mind the human predisposition to look for patterns in randomness. Back when song shuffling first became available on music players, it was a known problem that people would often randomly get the same song twice in a row or only a few songs apart, and then assume something must be wrong with their device. And of course, once people experience this once, they will suffer from a confirmation bias, looking for instances where the same song is repeated and concluding that these verify the supposed technical glitch. Something about our brains just isn't hard-wired to understand or accept the coincidences inherent in randomness. One simple solution was biased random selection, where the device can purposely avoid playing the same artist or song twice in a row based on user settings. Another change that helps is that current iPods and similar devices shuffle the music like a deck of cards, creating a full random ordering of all the songs in the playlist, rather than randomizing after each song. This inherently prevents repeats until the user chooses to re-shuffle their list.
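The difference between the two strategies is easy to see in code. This Python sketch contrasts picking each song independently, where repeats are likely, with shuffling the whole playlist up front like a deck of cards (Python's random.shuffle uses the Fisher-Yates algorithm, producing a uniform random permutation):

```python
import random

songs = [f"song{i:02d}" for i in range(10)]

# Naive approach: choose every song independently at random --
# the same song can easily come up twice in a row.
naive_ride = [random.choice(songs) for _ in range(10)]

# "Deck of cards" approach: shuffle the whole playlist once and play it
# through, so no song repeats until the next reshuffle.
deck = songs[:]
random.shuffle(deck)   # Fisher-Yates shuffle under the hood

print("naive picks:  ", naive_ride)   # repeats likely
print("shuffled deck:", deck)         # each song exactly once
```

With 10 independent picks from 10 songs, at least one repeat appears almost every time (the chance of all-distinct is 10!/10^10, well under 0.1%), which is exactly the behavior that convinced early users their players were broken.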

To see an extreme case of our human predisposition towards finding patterns, try flipping four coins, writing down the results, and asking a friend if they appear random. No matter what pattern you get, it will probably look nonrandom to your friend! If you get 3 or 4 of the same result, such as Heads Tail Heads Heads, it will certainly seem like the coin was biased. If you have 2 of each result, there is no way to avoid having it look like a pattern: either a repeated pair like HTHT, or a symmetric pair like HTTH. You have to really think about it to convince your brain of the randomness of such a set of coin flips.

So, when dealing with birthday sharing at parties, coin flips, music shuffling, or annoyed spouses, remember that sometimes truly random results can seem nonrandom, and try to take a step back and really think about the processes and probabilities involved.

And this has been your math mutation for today.




References:

Sunday, February 23, 2014

192: A Logical Language

Audio Link

If you're a speaker of English, which is pretty likely since you're listening to this podcast, you may have found yourself occasionally frustrated by its arbitrary nature, and the difficulties and ambiguities this sometimes causes.    Why are there so many ways of spelling "their" there?   Why should you have to twist your tongue if you want to sell sea shells by the seashore?   And if you talk about a little girls' school, why should listeners be confused about whether the school or the girls are little?    It may not surprise you to learn that the desire for a better-defined and mathematically sound human language has been around for a long time.   In fact, it has been over 50 years since Dr. James Cooke Brown first defined the language Loglan, a new human language based on the mathematical concepts of the predicate calculus.    Later iterations of the language, after an internal political struggle against Dr. Brown by language enthusiasts, were renamed Lojban.   In theory, Lojban, unlike English and other natural languages, is claimed to be minimal, regular, and unambiguous.

How do they define Lojban as such a clean language?    First they made a careful choice of phonemes, basic sounds, chosen from among the ones most common in a variety of world languages.    Each distinct-sounding phoneme is connected to uniquely defined symbols, removing any possible confusion about how to pronounce a given word:  a word's sound is completely determined by how it is spelled.   Then they defined a set of around 1,350 phonetically-spelled basic root words using these phonemes, being careful to not create homonyms or synonyms that could lead to confusion.   The number of letters in a word and its consonant-vowel pattern determine what type of word it is:  for example, a two-letter word with a consonant followed by a vowel is a simple operator, while five-letter words are what is known as "predicates".    Replacing many aspects of parts of speech such as nouns and verbs from traditional languages, the formation of sentences is based around the predicates, which are in many ways analogous to the logic predicates of mathematics.   For example, the predicate "tavla" means "x1 talks to x2 about x3 in language x4", with x1, x2, x3, and x4 being slots that may be filled by other Lojban words.  

To get a better idea of how this works, let's look at a specific example.   In the opening I alluded to the sentence "That's a little girls' school", which is ambiguous in English:  is it a school for little girls, or a little school for girls?    In Lojban, if it is the school that is little, the translation is "Ta cmalu nixli bo ckule".   The predicate "cmalu" defines something being small.   "Nixli" means "girl", and "ckule" means "school".   The connector "bo" groups its two adjacent words together, just like enclosing them in parentheses in a mathematical equation, showing that we are talking about a school for girls, and it is that whole thing which is small.   Alternatively, if we said "Ta cmalu bo nixli ckule", the virtual parentheses would be around "cmalu" for is-a-small and "nixli" for girl, showing that what is small is the girls, not the school.   If there were no "bo" at all, there is a deterministic order-of-operations just like in a mathematical equation:   the leftmost choice of words is always grouped together.   So "Ta cmalu bo nixli ckule" and "Ta cmalu nixli ckule" are equivalent.   Pretty simple, right?   Well, maybe not, but after you stare at it for a while it kind of makes sense.   And it does eliminate an ambiguity we have in English, at least for this case.
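Since "bo" and the default left-grouping behave just like parenthesization rules, we can sketch them as a tiny parser. This toy Python function handles only flat modifier strings like the episode's examples, not real Lojban grammar:

```python
# Toy grouping for modifier strings like "cmalu nixli ckule":
# default left-grouping, with "bo" binding its two neighbors first.
# A sketch for this episode's examples only, not real Lojban grammar.
def group(words):
    """Return a nested-tuple grouping of a list of Lojban-style modifiers."""
    out, i = [], 0
    while i < len(words):
        # bind "X bo Y" into a single unit first, like parentheses
        if i + 2 < len(words) and words[i + 1] == "bo":
            out.append((words[i], words[i + 2]))
            i += 3
        else:
            out.append(words[i])
            i += 1
    tree = out[0]             # then group whatever remains from the left
    for term in out[1:]:
        tree = (tree, term)
    return tree

print(group(["cmalu", "nixli", "bo", "ckule"]))  # ('cmalu', ('nixli', 'ckule'))
print(group(["cmalu", "bo", "nixli", "ckule"]))  # (('cmalu', 'nixli'), 'ckule')
print(group(["cmalu", "nixli", "ckule"]))        # same grouping as the previous line
```

The first result reads as "small (girl-school)" and the second as "(small-girl) school", while the third shows the no-"bo" default collapsing to the same left-grouped tree as the second, just as described above.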

The adherents of these logical languages claim many potential benefits from learning them.   They were originally developed to test the Sapir-Whorf Hypothesis, which claims that a person's primary language can determine how they think.   Whatever the merits of this idea, I have a hard time seeing Lojban as a valid testing tool, unless a child were raised with it as their primary language without learning any natural languages-- and that would be rather cruel to the child, I think!    But many other virtues are claimed.    Since the language is fully logical, it should facilitate precise engineering and technical specifications, a goal I can sympathize with, since I regularly deal at work with the challenges of interpreting plain-English design specs.   It is also claimed as a building block toward Artificial Intelligence, since its logical nature should make it easier to teach to a computer than natural languages.    It is also promoted as a culturally neutral international language, though it has fallen far short of other choices like Esperanto in popularity.     And its adherents also enjoy it as a "linguistic toy", helping them research aspects of language in the course of building an artificial one.

At this point, I should add that I actually have a bit of a personal perspective on the viability of this kind of approach to technical specs.   It is claimed that the logical and precise nature of Lojban means that if engineers would just learn it, all our specs would be clear and unambiguous, leading to great increases in engineering productivity.    But I work in the area of Formal Verification, where we try to verify chip designs, often having to convert plain-English specs into logically precise formats.    For many years there were various verification languages proposed for use in specifications, many offering minimal, highly logical, well-defined semantics.   But the ones that caught on in the engineering community and became de facto standards were not the elegantly designed minimal ones, but the ones that were most flexible and added features corresponding more closely to the way humans think about the designs.    So I'm a little skeptical of the idea that engineers would willingly replace English with a language like Lojban in order to gain more logical precision.

In any case, I think the biggest failure of Lojban has been that not enough people are willing to learn it.    Perhaps the human brain's language areas are just not hard-wired in a way that naturally supports the predicate calculus.     Even the lojban.org page states "At any given time, there are at least 50 to 100 active participants...   A number of them can hold a real-time conversation in the language."   So out of 50-100 people who are paying attention, only a subset can actually speak it?   By comparison, Esperanto, an artificial international language designed by idealists in the late 19th century, has tens of thousands of speakers, and an estimated thousand who learned the language natively from birth.  And even Klingon, an artificial language invented for "Star Trek" and of no practical use to anybody, is rumored to have more fluent speakers than Lojban.

So, if you want to learn a cool way to think differently about language and make it more mathematically precise, go ahead and visit the Lojban institute online and start your lessons.   But if you're hoping to make your engineering specifications more precise, communicate with your neighbors, or bring about world peace, you're out of luck.   So remember to teach your children English as well.

And this has been your math mutation for today.

References:

Sunday, February 2, 2014

191: Liking The Lottery

Audio Link

If you're the kind of person who listens to math podcasts, you've probably heard the often-repeated statement that a government-run lottery is a "tax on stupidity", due to the fact that stupid people are likely to waste their money on something that has a negative expected value.   But is the case really that open-and-shut?    Does the negative expected value of a lottery automatically make it not worth playing, and would this situation reverse if the expected value ventured into the realm of the positive?

Let's start by reviewing the concept of expected value.   At a basic level, this is the sum of the possible values of your lottery ticket, each multiplied by the probability of getting that value.   As a simple example, suppose you have a local lottery where you pay 1 dollar to guess a number from 1 to 10, and if you're right, you get 6 dollars back.   Your expected value is 9/10 times -1, since in 9 out of 10 cases you lose your dollar, plus 1/10 times 5, for a total of -40 cents of expected value.    This represents the likely average return per round if you play hundreds of rounds of this lottery.   Expected value calculations for real-life lotteries are similar, except that they deal with very tiny probabilities and values in the millions.     Real-life lotteries almost always have negative expected values; for example, one link in the show notes calculates the expected value of a recent Powerball ticket at about -$1.58.   Actually, for real-life lotteries there are some complicating factors that reduce it further:   you have to account for possibly splitting the jackpot with someone else who guessed the same numbers, and also the hefty chunk of taxes that Uncle Sam will take out of your winnings, but let's simplify this discussion by ignoring those factors.
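The calculation above is simple enough to spell out in a few lines of code.   Here's a minimal sketch in Python of the toy lottery from this episode (the function name and the probability/payoff list are just my own framing of the arithmetic):

```python
def expected_value(outcomes):
    """Expected value: the sum of (probability * payoff) over all outcomes."""
    return sum(p * v for p, v in outcomes)

# The toy lottery: pay $1 to guess a number from 1 to 10;
# a correct guess pays $6 back, for a net gain of $5.
toy_lottery = [
    (9 / 10, -1),  # wrong guess: lose the $1 stake
    (1 / 10, +5),  # right guess: $6 back, minus the $1 stake
]

print(expected_value(toy_lottery))  # -0.4: an average loss of 40 cents per play
```

A real Powerball calculation would use the same formula, just with a much longer list of outcomes and far tinier probabilities.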

Now here's the critical question:  suppose after many weeks of a growing Powerball jackpot, which happens sometimes if there is no winner, the pot grows to the hundreds of millions, and the expected value crosses over into positive range.   Is it now a more rational decision to play the lottery?   I would argue no:  you are still much more likely to be struck by an asteroid or lightning, die from a bee sting,  or suffer a plane crash than to win.   The expected value calculation really only kicks in if you are buying millions of tickets, in which case you can use it to figure out if your massive bet is likely to be profitable.   In the show notes you can find a link to an amusing article about an investment group that actually did try to buy all the tickets to a Virginia lottery one year, but was a bit hosed by the fact that not all the tickets could be printed in time.

Another way to see the limited usefulness of the expected value is to think about a slightly odd lottery, suggested in a blog post by statistician Alan Salzberg:  suppose you could spend all your savings for a 1 in 1000 chance to win 10 billion dollars.      If you have less than 10 million dollars in the bank, this game actually has a positive expected value.    Would you play it?   I think 99.9% of people would consider playing such a game insane.   When you can only play it once, you need to think about things other than the statistical average of thousands of trials.   What is the likely net effect on your life if you play it once?   Chances are overwhelming that this lottery would leave you penniless.
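Salzberg's numbers are easy to verify:  with savings S, the expected value works out to (1/1000) x 10 billion - S, or 10 million dollars minus S, which is positive exactly when S is under 10 million.   A quick sketch in Python (using exact fractions to sidestep floating-point rounding; the function name is my own):

```python
from fractions import Fraction

def salzberg_ev(savings):
    """EV of staking all of `savings` on a 1-in-1000 shot at $10 billion."""
    p_win = Fraction(1, 1000)
    jackpot = 10**10
    # Win: gain the jackpot minus your stake.  Lose: forfeit the stake.
    return p_win * (jackpot - savings) + (1 - p_win) * (-savings)

print(salzberg_ev(100_000))     # 9900000: EV of nearly $10 million
print(salzberg_ev(10**7))       # 0: the break-even point at $10 million in savings
print(salzberg_ev(2 * 10**7))   # -10000000: negative EV above that
```

Of course, the whole point of the thought experiment is that this gloriously positive expected value tells you almost nothing useful about a game you can only play once.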

So, if the expected value calculation doesn't make sense, how do we figure out if playing the lottery is rational?   I think the key factor is the cost to you of spending the price of the lottery ticket.  Assuming you are doing OK economically, spending a couple of dollars every week can probably be considered effectively zero cost:  you are likely to casually spend more than that on potato chips from vending machines, lattes at Starbucks, etc.    For this near-zero cost, what value do you get?   There is the thrill of scratching off the numbers or watching the drawing, and having that infinitesimal but nonzero chance of becoming an instant millionaire; given the low cost, maybe that alone makes it worthwhile.   You also know that your ticket cost has contributed to your state government; your feelings on that may vary, but even if you're a hardcore libertarian, there may be at least a few government services you are OK with funding.   And you're probably happier giving money voluntarily than being required to by law, which would be the effect if nobody played the lottery and income or sales taxes had to be raised instead.

Now I'm not saying that lotteries are always a good idea:  the arguments I just made are predicated on the lottery ticket being effectively zero cost to you.   If it is not-- if the 2 dollars per week would make a real difference in your life, or you are spending more money than you can afford to lose-- you need to realize that the chances of winning are so infinitesimal that this is really not a wise expenditure.    It's always kind of sad to see blue-collar-looking people pump what seems like hundreds of quarters into Oregon Lottery video poker machines in bars.   Saving up that money annually to buy asteroid insurance would be more likely to benefit their families in the long run.

But overall, does playing the lottery mean you're stupid?   This looks to me like an area where many people blindly apply a mathematical formula without really thinking about what it means.    Assuming the cost of a lottery ticket is effectively zero when compared to your income, it looks to me like the answer is no, playing the lottery may be perfectly rational.

And this has been your math mutation for today.

References: