
Posts Tagged ‘math’

“I tried to start a business, but it failed. It must be Obama’s fault.”

People make this sort of argument all the time. They use their own experience (i.e. one data point) to make some generalization about POTUS or Congress or the governor or whatever. But such arguments are, most of the time, complete bullshit.

I’ll go out on a limb: if your business failed, it’s probably your fault.

The fact of the matter is, it’s hard to succeed in business. After six years or so, over 50% of new businesses have failed. [See here for where I got my hard numbers, most of which come from the U.S. Bureau of Labor Statistics.] How do you know, if your business failed, that it was x, y, or z’s fault and not your own? Would your business really have survived, even under a perfect utopia of your imagining?

First of all, the makeup of the “current administration” (and by that I mean POTUS and Congress and the governor and your local city council and so on) doesn’t matter jack squat when it comes to business survival rates. Check out this graph:

[Graph: new-business survival rates by year, across administrations]

There’s no noticeable difference between Clinton, Bush, and Obama here. 20% to 25% of all new businesses fail in their first year, no matter who’s in charge. (This is a little hard to see from the graph directly, as the x-axis is shifted in a strange way.  I used the hard data here.)  After a decade, roughly two-thirds have failed. You can blame POTUS if you like, or alternatively, blame whoever is controlling Congress, but either way you’re slipping into the No True Scotsman fallacy: “Sure, 25% of businesses fail in their first year, but I am different… I would have succeeded had it not been for x, y, or z. I am not bad at business; I would have succeeded if only…” You get the idea. (By the way, and anecdotally, if I were playing a blame game, I’d be more likely to blame a governor, because in my experience state policies tend to affect businesses more than federal policies.)

Now, I’ll admit that certain laws (passed by Congress) or policies (enacted by POTUS) can hurt businesses in specific instances. If you start the Acme Widget corporation, and there’s a huge Widget Tax enacted, your business might fail, and you wouldn’t be remiss in blaming Congress or POTUS. But I’m more interested in generalizations: given that your business failed, can we estimate (if at all?) whether or not it was your fault, or someone else’s fault?

Here’s where Bayes’ Theorem comes to the rescue. First, let’s make some definitions. Let

P(good) = Probability that a random person is good at business;

P(suck) = Probability that a random person sucks at business;

P(fail|good) = Probability that your business fails in year 1, given that you’re good at business;

P(fail|suck) = Probability that your business fails in year 1, given that you suck at business;

P(suck|fail) = Probability that you suck at business, given that your business failed in year 1.

Bayes’ Theorem can help us calculate that final quantity, given assumptions about the previous four. Let’s start with a pretty arbitrary guess:

P(good) = 20%

P(suck) = 80%

What’s my justification for saying that 4/5 people suck at business? Well, the graph above seems to be approaching 20% asymptotically. Only 20% of businesses survive “for the long haul”. And 20% “feels” right: most people aren’t good at business, but there’re still millions of people who are.

Now, let’s say 1000 people start a business. Based on the assumptions above, 800 of them suck at business, whereas 200 of them are good at it. We’ve already mentioned that (say) 75% of the businesses will survive their first year, and 25% will fail: 750 vs. 250. Let’s further assume that

P(fail|suck) = 30%.

I pulled that percentage out of a hat, but it seems reasonable to assume that if 25% of all businesses fail in their first year, then more than 25% of businesses run by sucky managers will fail. But not too much more: 75% of businesses still succeed in their first year, no matter who is running the show. With P(fail|suck) fixed, we’re forced to concede that

P(fail|good) = 5%.

Bad luck, that. Some 5% of businesses run by good managers will still fail in their first year. These are the people who can rightly blame the administration. (Where did the 5% come from? Well, 30% * 800 + 5% * 200 = 250 failed businesses.)

Now: on to Bayes’ Theorem! (For a discussion of how to use this handy tool, see here). We find that

P(suck|fail) = [P(fail|suck) P(suck)]/[ P(fail|suck) P(suck) + P(fail|good) P(good)]

P(suck|fail) = [(0.3) (0.8)]/[ (0.3) (0.8) + (0.05) (0.2)] = 0.96

Translated into English, this means that if your business fails in its first year, we can conclude that it was probably your fault. There’s about a 96% chance that you suck at business.

This is a fascinating result…and you’ll get similar results no matter what assumptions you make, as long as they are reasonable and consistent. For one thing, you’ll always get that P(suck|fail) > P(suck). If I take a person off the street, they have about an 80% chance of sucking at business. But if I take a failed business owner, someone whose business failed in its first year, then I have some extra data, and their chance of sucking at business has gone up to 96%. (There’s a 4% chance that they are good at business but got unlucky.)  A person whose business fails has no right to complain about their failure. It was probably their fault.
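If you want to play with the assumptions yourself, the arithmetic is a few lines of Python. Here’s a minimal sketch (the priors and likelihoods are the made-up numbers from above; swap in your own):

def p_suck_given_fail(p_suck, p_fail_suck, p_fail_good):
    """Posterior P(suck|fail) via Bayes' theorem."""
    p_good = 1 - p_suck
    p_fail = p_fail_suck * p_suck + p_fail_good * p_good  # total probability of failing
    return p_fail_suck * p_suck / p_fail

# The post's numbers: P(suck) = 0.8, P(fail|suck) = 0.3, P(fail|good) = 0.05
print(p_suck_given_fail(0.8, 0.3, 0.05))  # 0.96

# Sensitivity check: any assumptions with P(fail|suck) > P(fail|good)
# push the posterior above the 80% prior.
for p_fs, p_fg in [(0.30, 0.05), (0.28, 0.13), (0.26, 0.21)]:
    print(p_fs, p_fg, round(p_suck_given_fail(0.8, p_fs, p_fg), 3))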

There are two lessons here. One, stop whining. Two, remember Bayes’ Theorem!

[Portrait: Thomas Bayes]

I suck at business, too.


Read Full Post »

Anyone who studies physics or mathematics has encountered the following conundrum:

How do you distinguish 18th-century French mathematicians with surnames beginning with an “L”? (I call these E.C.F.M.W.S.B.W.A.L.’s)

For example, you might recall that an E.C.F.M.W.S.B.W.A.L. invented the calculus of variations, some time around 1760.  Was it Legendre?  Lagrange? Laplace?  Or maybe you remember that an E.C.F.M.W.S.B.W.A.L. was the father of probability theory, and worked on the Buffon needle problem.  Was that Laplace?  Legendre? Lagrange?

So as a public service, I’ve sorted this out for you.  Below, I discuss these three great mathematicians, and try to distinguish them in your mind.

Lagrange: perhaps the best mathematician of the 1700’s.

Lagrange is the oldest of the E.C.F.M.W.S.B.W.A.L.’s, born in 1736.  Some call him the greatest mathematician of the century, although I might give that title to Euler.  In any case, he’s responsible for a host of discoveries: he pretty much invented an entire branch of mathematics, the calculus of variations; he used this tool to reformulate classical mechanics (think L = T – V), making it suitable for non-Cartesian coordinates, such as polar; he invented Lagrange multipliers, an elegant way to handle constraints in optimization and variational problems; and he introduced the f(x), f′(x), f″(x), … notation for derivatives.

His greatest work was the Mécanique analytique; all of the above achievements are found in this book.  Hamilton described the work as “a scientific poem,” for its elegance is astounding.

[Portrait: Lagrange]

Lagrange was rigorous and abstract: he bragged that the Mécanique analytique did not have a single diagram.  To Lagrange, math was an art; the aesthetics of a theory took precedence over utility.

Laplace: the “applied” mathematician

Laplace was seven years younger than Lagrange, born in 1749.  He also is associated with classical mechanics, but unlike Lagrange, he did not reformulate the field per se.  Rather, he took Newtonian mechanics to its “apex” with his work Mécanique céleste.  This work is brilliant, but it’s also clunky and difficult.  It analyzes the orbits of all known bodies in the solar system, and concludes that there is no need of God to keep the whole mess going.  In fact, Napoleon supposedly asked why Laplace didn’t mention God in the Mécanique céleste.  Laplace reportedly replied, “I have no need for that hypothesis.”

[Portrait: Laplace]

Laplace didn’t place as much emphasis on “beauty” in mathematics.  To him, math was just a tool.  Not surprisingly, he contributed to the “applied” field of probability theory; in fact, he’s arguably the founder of probability theory as we know it today.

Legendre: the elliptic integral guy

Although highly regarded in his day, Legendre (b. 1752) is really a tier below the first two guys.  Basically, he worked out how to do some elliptic integrals, and he introduced the Legendre transformation, which is used in many branches of physics.  For example, you can go back and forth between the Hamiltonian and Lagrangian formulations of classical mechanics by means of a Legendre transformation.  Also, such transformations are ubiquitous in thermodynamics (think U → H → A → G).
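(A quick reminder of how that works, in case it’s rusty: a Legendre transformation trades a function of one variable for a function of the conjugate slope.  Schematically, G(p) = px – F(x), with p = dF/dx.  The thermodynamic chain above is just this move applied to U repeatedly: H = U + pV, A = U – TS, and G = U + pV – TS.)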

Legendre is also known for the portrait debacle.  Only a single known image of Legendre exists, and that image is not flattering:

[The only known portrait of Legendre]

Every other supposed portrait of Legendre is actually the picture of some obscure politician, because of a mistake which has propagated forward for 200 years.

In summary:

Lagrange: the beauty of math; reformulated mechanics in the Mécanique analytique

Laplace: math as a tool; Newtonian mechanics reaches its zenith in Mécanique céleste; probability theory

Legendre: the creepy looking elliptic integral guy

Note: I have not mentioned Lavoisier (b. 1743) because he was a chemist.  But if you really need him:

Lavoisier: a chemist who was guillotined in the French Revolution.

[Note added Dec. 4, 2014]  I could have included L’Hopital (French, died 1704) but all he did was write a textbook.  Laguerre was French, but he was born in 1834;  Lebesgue was French, but he was born in 1875.

Read Full Post »

As promised, the solutions…

1.   681472 [Um, didn’t we answer this one earlier?]

2.   3927.27272… seconds This represents the amount of time it takes the minute hand of a clock to lap the hour hand.  The minute hand laps the hour hand 11 times every 12 hours, so the lap time is 12/11 hours = 43,200/11 seconds ≈ 3927.27 seconds.  For example, the hands coincide at midnight; they next coincide 3927.27272 seconds later, or at about 1:05:27 AM.

3.   23.14069… This is just e^π.

4.   2.1656 x 10^185 This is how many cubic Planck lengths fit in the observable universe…basically, if our universe were a 3D computer, this is how many pixels you’d need.

5.   1.03 light year/year^2 This is the acceleration due to gravity g, in non-standard units.  It has the following interpretation: if you ignored relativity and accelerated at a rate of 1 g (reasonable for a starship), after a year you’d have reached the speed of light.

6.   133956 This is the number of possible ordered pairs of birthdays, since 133956 = 366^2.  If everyone on Earth had a significant other, there would be over 26,000 couples with the exact same two birthdays as you and your other.

7.   About 19.5 million people The number of people on Earth who share your birthday.

8.   0.739085… This is called the “Dottie number”…an irrational number that solves the equation cos x = x.  (It’s also what you get by pressing the cosine button on a calculator over and over; see the sketch after this list.)

9.   1.72048 m^2 The area of a regular pentagon with sides of 1 m.

10.   0.004295 % This is the percentage of Earth’s history for which Homo sapiens has been around.
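Incidentally, the Dottie number in #8 is easy to compute, because cos is a contraction near its fixed point: just iterate.  A minimal Python sketch:

import math

# Fixed-point iteration for cos(x) = x (the "Dottie number").
# Iterating cos converges to the fixed point from any real start.
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)  # 0.7390851332151607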

Read Full Post »

Many Worlds Puzzle #3

Today there are really 10 puzzles. Can you figure out the significance of each number below? I’ve answered the first to get you started.

1.   681472

This number has a prime factorization of 2^9 x 11^3, which means that it equals 88^3. There are 88 keys on a piano…so one obvious interpretation is that 681472 is the number of possible three-note permutations that could start any piece on a piano (not counting rests, and ignoring duration). I wonder how many of those permutations have ever actually been played over the years? (If you want to check the factorization yourself, there’s a quick sketch at the end of this post.)

2.   3927.27272… seconds

3.   23.14069…

4.   2.1656 x 10^185

5.   1.03 light year/year^2

6.   133956

7.   About 19.5 million people

8.   0.739085…

9.   1.72048 m^2

10.   0.004295 %

Because many of these problems are challenging, I will post hints in a week or so.
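In the meantime, checking the factorization in #1 is a one-liner if you have sympy handy.  A minimal sketch:

from sympy import factorint

print(factorint(681472))  # {2: 9, 11: 3}, i.e. 681472 = 2^9 x 11^3 = 88^3
print(88**3)              # 681472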

Read Full Post »

Up until Oct. 7, 2013, my modest blog averaged about 18 hits per day.  Then this happened:

[Chart: daily blog hits showing the viral spike]

A post of mine, the 9 kinds of physics seminar, had gone viral.  I was shocked, to say the least.

I spent some time investigating what happened.  The original post went out on a Thursday, Oct. 3.  Nothing much happened, other than a few likes from the usual suspects (thank you, John Zande!).  I did share the post with Facebook friends, who include not a few physicists.  (Note: I don’t normally share my blog posts to Facebook.)  Then on Monday, Oct. 7, the roof caved in.

It started in India.  Monday morning, I had over 800 hits from India.  My initial thought was that I was bugged somehow.  But soon, hits started pouring in from around the world, especially the USA.

And then it kept going.

On Tuesday, Oct. 8, the Physics Today Facebook page shared the post, where (as of today) 451 more people have shared it, and 188,000 people have liked it.  (Interesting question: my blog has only had 130,000 views.  Are there really that many people who like Facebook posts without even clicking on the link?)

The viral spike peaked on Wed., Oct. 9.  I had discovered by then that my post had been re-blogged and re-tweeted numerous times, by other physicists around the world.  If you Google “The 9 kinds of physics seminar” you can see some of the tweets for yourself.

Why did the post go viral?  Who knows.  I’m not a sociologist.  I think it was a good post, but that’s not the whole story.  More importantly, the post was funny, and it resonated with a certain segment of the population.  If I knew how to make another post go viral, I’d do so, and soon be a millionaire.

What’s fascinating to me, though, as a math nerd, is to examine how the virality played out mathematically.  Here’s how it looked for October:

[Chart 1: daily hits for October]

I don’t know anything, really, about viral cyberspace, but this graph totally matches my intuition.  Note that for the last few days, the hits have been around 400/day, still much greater than the pre-viral era.

After the spike, is the decay exponential?  I’m not a statistician (maybe Nate Silver could help me out?) but I do know how to use Excel.  Hence:

[Chart 2: the post-peak decay, with exponential fit]

The decay constant is 0.495, corresponding to a half-life of 1.4 days.  So after the peak, the number of hits/day was reduced by 1/2 every 1.4 days.
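The conversion is the same one you’d use for radioactive decay: half-life = ln 2 / decay constant, so ln 2 / 0.495 ≈ 1.4 days.  And fitting an exponential is just fitting a line to log(hits).  Here’s a minimal numpy sketch; the daily hit counts below are made-up stand-ins, not my actual stats:

import numpy as np

# Hypothetical post-peak daily hits (stand-ins for the real stats).
days = np.arange(7)
hits = np.array([50000, 30500, 18600, 11300, 6900, 4200, 2600])

# Fit log(hits) = intercept - lam * t, i.e. exponential decay.
slope, intercept = np.polyfit(days, np.log(hits), 1)
lam = -slope
print(f"decay constant = {lam:.3f}/day, half-life = {np.log(2)/lam:.1f} days")

For the fake data above, this prints a decay constant near 0.49/day, i.e. a half-life of about 1.4 days.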

This trend didn’t continue, however.  Let’s extend the graph to include most of October:

[Chart 3: the decay over most of October, with exponential fit]

Over this longer time span, the decay constant of 0.281 corresponds to a half-life of 2.5 days.  The half-life is increasing with time.  You can see this by noticing that the first week’s data points fall below the exponential fit line.  It’s as if you had a radioactive material whose half-life keeps growing: the number of decays per day still falls, but it falls more and more slowly than any single exponential would predict.  (Calculus teachers: cue discussion about first vs. second derivatives.)

Maybe this graph will help:

[Chart 4: the long-term decay, with exponential fit]

The long-term decay rate seems to be 0.1937, corresponding to a half-life of 3.6 days.  At this rate, you would expect the blog hits to approach pre-viral levels by mid-November.  I doubt that will happen, since the whole experience generated quite a few new blog followers; but in any case, the graph should level off quite soon.  What the new plateau level will be, I don’t know.

Where’s Nate Silver when you need him?

**********************************************************************************

If you enjoyed this post, you may also enjoy my book Why Is There Anything? which is available for the Kindle on Amazon.com.

[Image: Sargasso Nova]

I am also currently collaborating on a multi-volume novel of speculative hard science fiction and futuristic deep-space horror called Sargasso Nova.  Publication of the first installment will be January 2015; further details will be released on Facebook, Twitter, or via email: SargassoNova (at) gmail.com.

Read Full Post »

In my continuing effort to present cutting-edge research, I present here my findings on the 9 kinds of physics undergrad.

First, let’s look at a scatter plot of Ability vs. Effort for a little more than 100 students.  (This data was taken over a span of five years at a major university which will remain unnamed.  Even though it’s Wake Forest University.)

[Scatter plot: Ability vs. Effort for physics undergrads, à la the H-R diagram]

Student ability is normalized so that 1 is equivalent to the 100th percentile, 0 is the 50th percentile, and –1 is the 0th percentile.  [This matches the work of I. Emlion and A. Prilfül, 2007]  Ability scores below –0.5 are not shown (such students more properly belong on the Business Major H-R diagram).

On the x-axis is student effort, given as a spectral effort class [this follows B. Ess, 2010]:

O-class: Obscene

B-class: Beyond awful

A-class: Awful

F-class: Faulty

G-class: Good

K-class: Killer

M-class: Maximal

As you can see, most students fall onto the Main Sequence.

[Diagram: the Typical student]

The Typical student (effort class G, 50th percentile) has a good amount of effort, and is about average in ability.  They will graduate with a physics degree and eventually end up in sales or marketing with a tech firm somewhere in California.

[Diagram: the Giant student]

The Giant student (effort class K, 75th percentile) has a killer amount of effort and is above average in ability.  Expect them to switch to engineering for graduate school.

[Diagram: the Smug Know-it-all student]

The Smug Know-it-all student (effort class O, 100th percentile) is of genius-level intellect but puts forth an obscenely small amount of effort.  They will either win the Nobel prize or end up homeless in Corpus Christi.

[Diagram: the Headed to grad school student]

The Headed to grad school student (effort class B, 75th percentile) is beyond awful when it comes to work, and spends most of his/her time playing MMORPG’s.  However, they score well on GRE’s and typically go to physics graduate schools, where to survive they will travel to the right (off the main sequence).

[Diagram: the Headed to industry student]

The Headed to industry student (effort class F, 55th percentile) is slightly above average but has a faulty work ethic.  This will change once they start putting in 60-hour weeks at that job in Durham, NC.

[Diagram: the Hard working math-phobe student]

The Hard working math-phobe student (effort class M, 30th percentile) is earnest in their desire to do well in physics.  However, their math skills are sub-par.  For example, they say “derivatize” instead of “take the derivative”.  Destination: a local school board near you.

[Diagram: the Supergiant student]

The Supergiant student (effort class K, 100th percentile) is only rumored to exist.  I think she now teaches at MIT.

[Diagram: the Frat boy student]

The Frat boy student (effort class O, 50th percentile) is about average, but skips almost every class.  Their half-life as a physics student is less than one semester.  They will eventually make three times the salary that you do.

[Diagram: the White dwarf student]

The White dwarf student (effort class B, 30th percentile) is below average in ability and beyond awful when it comes to putting forth even a modicum of effort.  Why they don’t switch to another major is anyone’s guess.

***********************************************************************************

If you enjoyed this post, you may also enjoy my book Why Is There Anything? which is available for the Kindle on Amazon.com.  The book is weighty and philosophical, but my sense of humor is still there!

***********************************************************************************

[Image: Sargasso Nova]

I am also currently collaborating on a multi-volume novel of speculative hard science fiction and futuristic deep-space horror called Sargasso Nova.  My partner in this project is Craig Varian – an incredibly talented visual artist (panthan.com) and musician whose dark ambient / experimental musical project 400 Lonely Things released Tonight of the Living Dead to modest critical acclaim a few years back.  Publication of the first installment will be January 2015; further details will be released on our Facebook page, Twitter feed, or via email: SargassoNova (at) gmail.com.

Read Full Post »

[Photo: a prescribed burn in a Pinus nigra stand in Portugal]

Run for your lives!

A few days ago I heard a story on NPR about wildfires in Yosemite.  It turns out that something like 360 square miles of forest have burned.  Being a math geek, I immediately took the (approximate) square root of 360 in my head:

360 ≈ 19 x 19

I did this without really even thinking about it; I did it in order to be able to visualize the size of the Yosemite blaze.  I now had a picture in my head of a square, 19 miles by 19 miles.  A burning square.  That’s how big the conflagration was.  And the mental math was important because I have no intuition at all about square units.

[Disclaimer for my readers not in the USA: I use the S.I. units (m/kg/s) in my physics research, but in American culture units like miles, inches, gallons, etc. are still endemic.  Sorry about that.]

Quick: how many square feet is a baseball diamond?  If you’re like me, absolutely nothing comes to mind.

I do know that a baseball diamond is a 90 ft. x 90 ft. square.  So that’s the answer: 8100 sq. ft. (752.5 m²).  The problem is that, somehow, psychologically, 90 ft. x 90 ft. seems much smaller than 8100 sq. ft., even though they are the same.

The county I live in, Jackson County, NC, is 494 sq. mi. (1,279 km²).  Somehow, this seems big to me.  But in order to better visualize this area, take a square root: the county is like a 22 mile x 22 mile square (36 km x 36 km).  In those terms, the county seems puny (although it is still bigger than Andorra).  The area of Jackson County is less than 1% of the total area of the state of North Carolina.

What about the Yosemite fire?  360/494 = 73%.  So that fire is about three-fourths the size of the puny county that I live in.  A big fire, sure, but not apocalyptic.

The problem that all of this illustrates is one of scaling.  Most of my students know that 1 m = 100 cm.  However, very few know (initially) that 1 m² ≠ 100 cm².  Instead, 1 m² = 10,000 cm².  That’s because a square meter is a 100 cm x 100 cm square.

This fact leads people’s intuitions wildly astray.  Suppose I double the length and width of an American football field.  The area goes up by a factor of 4.  What was approximately 1 acre has become 4 acres.  Suppose I switch from a 10-inch pizza, which feeds 2, to a 20-inch pizza.  That pizza feeds 8.

It gets even stranger if you imagine the switch from length to volume.  Michelangelo’s David is almost 17 ft. tall.  Assume David was 5’8’’ (68 inches).  Then the statue represents a scaling factor of 3 in terms of length (3 x 68 = 204 in. = 17 ft.)  Imagine a real-life David, 17 ft. tall.  How much would he weigh?  If the life-size David is 160 pounds, the 17 ft. David would be 160 x 3³ = 160 x 27 = 4,320 pounds.  To most people, this seems very strange.

[Photo: Michelangelo’s David]

He weighs 4320 pounds. If he weren’t made of stone, that is.
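The general rule: scale every length by a factor s, and areas go up by s², volumes (and hence weights) by s³.  A quick Python sketch using the post’s own numbers:

# Lengths scale by s; areas by s**2; volumes (and weights) by s**3.

def scaled_area(area, s):
    return area * s**2

def scaled_weight(weight, s):
    return weight * s**3

print(scaled_area(1, 2))      # double a ~1-acre football field: 4 acres
print(scaled_area(2, 2))      # a 10-inch pizza feeds 2; a 20-inch feeds 8
print(scaled_weight(160, 3))  # a 160-lb David scaled 3x: 4320 lb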

But back to my original idea: I had mentioned that I had no intuition about square units.  I don’t think many people do.  What intuition I do have is based on experience, and comparing unknowns to knowns.  500 sq. miles is about the size of the county I live in.  An acre is about a football field.  1000 sq. ft. is about the area of a small house.  500 sq. in. is about the area of a modest flat screen TV.  100 fm² (a barn) is about the cross-sectional area of a uranium nucleus.  A hectare is about 2.5 football fields stuck together.  And so on.  I’m sure you have your own internal mnemonics to help you gauge area, or volume.

If not, just remember: you can also do the square root in your head.  So if that guy on NPR says there’s a fire that’s 100,000 sq. miles in area, you can visualize

100,000 ≈ 316 x 316

and since this is very similar to the size of Colorado (380 miles x 280 miles) you can start kissing your loved ones and planning for the apocalypse.

Read Full Post »

I am going to argue that the Zimmerman verdict (for the shooting of Trayvon Martin) was the correct one.  You will either agree with me or you will not.  And then I will argue that either way, it doesn’t matter in the slightest to most people’s lives.

Let me just say, before you dismiss this post entirely because of some preconceived notion about my politics, that I am very liberal on social issues.  (As I’ve mentioned in the past, I literally don’t have an opinion on many complicated economic issues.)  I’m strongly supportive of privacy rights, voting rights, women’s rights, LGBT rights, and animal rights.  I think the idea of building a giant wall to keep out every illegal alien is absurd.  I am for the legalization of marijuana and for the decriminalization of other drug use in general.  I think man-made global warming is a self-evident fact.  I think big monolithic corporations, in the long term, have a negative effect on the happiness of the masses, because they operate as entities without conscience, self-awareness, or humanity.

But when the Zimmerman verdict came back on July 13 as not guilty, I wasn’t surprised.  I wasn’t even outraged.  I just sort of shrugged and moved on.

Granted: there is still racism in this country.  I will even argue that there are often two disparate systems of justice in the U.S.: one for whites, and one for non-whites.  But still…what does that have to do with the Zimmerman verdict?

Scenario 1.  Let’s suppose that George Zimmerman is a total card-carrying KKK racist.  He may be, he may not be…I don’t have any evidence one way or another.  And most of you don’t, either.  But let’s just suppose he is.  Let’s say he follows Trayvon Martin looking for trouble; hoping for a confrontation; hoping to scare the boy.  A scuffle ensues and Martin is shot.

Is that murder?

I’m not a lawyer, but it doesn’t sound like murder.  Manslaughter seems a better fit.

Scenario 2.  Let’s be more realistic.  Let’s assume Zimmerman is a racist, but not the frothing-at-the-mouth kind.  He just feels uncomfortable having a black guy in his neighborhood.  However, if you asked him, he’d claim to not be a racist, claim to have black friends, and try to seem like a reasonable guy.

He follows Martin, hoping to scare him off, but not actively hoping for a fight; he genuinely wants to keep the peace.  If Martin gets scared, well that’s OK: he’s got no business being in this part of town.  A scuffle ensues and Martin is shot.

Is that murder, or even manslaughter?

Again, I don’t think so.  In this case, if Zimmerman is guilty of something, it’s…I don’t know…reckless endangerment?  Putting himself and another in a situation where only bad things could happen?

I didn’t follow the trial all that closely, but I will say that some people who followed the trial even less than I did were outraged at the verdict.  I can understand this, on some level; if a travesty occurs (the shooting of Trayvon Martin was certainly a travesty) then people want justice; they may even want revenge.  If Zimmerman wasn’t to blame, then who is?  Saying “the system” or “society” or “endemic racial profiling” are the root causes of Martin’s death isn’t satisfying, because you can’t put those nouns behind bars and throw away the key and feel good about yourself.  If no one gets blamed, then how does Trayvon Martin get justice?

Here are four ways Trayvon Martin could have gotten justice, or may still get justice:

  1. Florida’s inane stand-your-ground law gets repealed.  That would be justice.
  2. Community watch volunteers stop carrying guns and instead call trained police professionals if they see suspicious behavior.  That would be justice.
  3. Politicians stop listening to NRA lobbyists, and start listening to common sense: that would be justice.
  4. Zimmerman admits what he did was horribly bad judgment; pleads guilty to reckless endangerment; then performs 300 hours of community service as a sort of penance.  (In the long run this outcome would have been better for Zimmerman than the not guilty verdict, because I suspect Zimmerman may be a pariah for the rest of his life.  A little bit of remorse would have gone a long way.)

Anyway, all things considered, I think the jury did what 99% of juries would have done in this case, which was let Zimmerman go free.  The prosecution did not succeed in proving their case.  In retrospect, I think that going for a murder charge was ill-advised and entirely political; they should have aimed a little lower.  Going for manslaughter from the start had a much better chance of success.  Putting Zimmerman away for life on a murder rap isn’t justice; it’s revenge.

OK then.  Feel free to agree, or rabidly disagree.

It doesn’t matter.

The Zimmerman case was just one case.  One case, out of thousands of criminal cases in the U.S. every year.  That is, the Zimmerman trial was just one data point.

I’ve talked about this before.  You can’t really draw any conclusions about anything from one data point.  And yet, people do it all the time.  It’s a fallacy that probably has a name, but the name eludes me.  But to most people, it’s not a fallacy.  It has the weight of proof.

“I don’t believe in global warming.”  [Katrina devastates New Orleans]  “Wait, now I do!”

“I don’t think M. Night Shyamalan is a good director.”  [Watches The Sixth Sense] “Wait, now I do!”

“I don’t think racial profiling is a real thing.”  [Martin gets shot and his Skittles spill to the ground]  “Wait, now I do!”

I hope all three of these arguments are equally absurd to you.  If not, I think you lack the scientific mindset.  Now, don’t get me wrong: I think global warming is real, and I think racial profiling is real.  It’s just that you can’t make the case for those things with only one data point.  (Indeed, the case of M. Night Shyamalan shows that one data point can lead you horribly astray: after the wonderful The Sixth Sense, Shyamalan has directed six turkeys in a row.)

I do think that racism still pervades the country.  I do think that whites get a different kind of justice than non-whites in our judicial system.  I do think that our country is obsessed…in an unhealthy way…with small metal devices whose sole purpose is to kill other human beings.  But I don’t believe any of these things solely because of a single data point.  You have to look at the big picture, look at the data in aggregate.  A preponderance of evidence is required to separate fact from fiction, truth from rumor, knowledge from urban legend.  As much as politicians love to bring up pithy examples, tell symbolic anecdotes, those examples and anecdotes are really rather meaningless.  Give me the data or go home.

And that is why the Zimmerman verdict is really rather meaningless.  Not to the family of Trayvon Martin, of course; I feel for them and am very sorry for their loss.  But as to what the trial says, in a larger context, about our society in general?  It says nothing.  A single data point says nothing.  It cannot say anything; that’s a simple mathematical fact.  It takes at least two points to make a line.

If you want to know how prevalent racism is, or how two “separate-but-equal” judicial systems pervade the U.S., or even whether putting guns in the hands of rent-a-cops endangers citizens, look at the data.  Data, plural.  Give Nate Silver a call.  Don’t argue by colorful anecdote.  And if you don’t have the hard data, at least have the courage to admit to yourself that what you believe is based on nothing.

That’s what I believe.  And yes, it’s really just based on nothing.  But I’m OK with that, because somewhere, hunched over a desk, Nate Silver is crunching all the numbers, and he’s still not a witch.

Read Full Post »

When I was young, I once looked at a box of cereal and had an epiphany.  “Why is that cereal there?”  A universe of unfathomable complexity, with 100,000,000,000 galaxies, each with 100,000,000,000 stars, making 10,000,000,000,000,000,000,000 possible solar systems with planets around them—all that, and I’m sitting across from a box of Vanilly Crunch?

[Image: a box of Vanilly Crunch]

Since that existential crisis, I’ve always wondered why there was something instead of nothing.  Why isn’t the universe just one big empty set?  “Emptiness” and “nothingness” have always seemed so perfect to me, so symmetric, that our very existence seems at once both arbitrary and ugly.  And no theologian or philosopher ever gave me an answer I thought was satisfying.  For a while, I thought physicists were on the right track: Hawking and Mlodinow, for example, in The Grand Design, describe how universes can spontaneously appear (from nothing) according to the laws of quantum mechanics.

I have no problem with quantum mechanics: it is arguably the most successful theory devised by mankind.  And I agree that particles can spontaneously create themselves out of a vacuum.  But here’s where I think Hawking and Mlodinow are wrong: the rules of physics themselves do not constitute “nothing”.  The rules are something.  “Nothing” to me implies no space, no time, no Platonic forms, no rules, no physics, no quantum mechanics, no cereal at my breakfast table.  Why isn’t the universe like that?  And if the universe were like that, how could our current universe create itself without any rules for creation?

But wait—don’t look so smug, theologians.  Saying that an omnipotent God created the universe doesn’t help in any way.  That just passes the buck; shifts the stack by one.  For even if you could prove to me that a God existed, I would still feel a sense of existential befuddlement.  Why does God herself exist?  Nothingness still seems more plausible.

Heidegger called “why is there anything?” the fundamental question of philosophy.  Being a physicist, and consequently being full of confidence and hubris, I set out to answer the question myself.  I’d love to blog my conclusions, but the argument runs about 50,000 words…longer than The Great Gatsby.  Luckily for you, however, my book Why Is There Anything? is now available for the Kindle on Amazon.com:

[Book cover: Why Is There Anything?]

You can download the book here.

You might wonder if my belief in the many-worlds interpretation (MWI) of quantum mechanics affected my thinking on this matter.  Well, the opposite is true.  In my journey to answer the question “why is there anything?” I became convinced of MWI, in part because of the ability of MWI to partially answer the ultimate question.  My book Why Is There Anything? is a sort of chronicle of my intellectual journey, one that I hope you will find entertaining, enlightening, and challenging.

Read Full Post »

[Image: George McFly]

“I am your probability density”

In an earlier post I discussed my philosophy of teaching special relativity.  My main idea was that physics professors should keep the “weird stuff” at bay, and start with non-controversial statements; once students are on board, you can push under the grass and show them the seething Lynchian bugs beneath.

Well, what about quantum mechanics?  Does the same philosophy apply?

My answer is yes, of course.  Don’t start with Schrödinger’s cat.  Don’t mention the Heisenberg uncertainty principle, or wave collapse, or the EPR experiment, or Bell’s theorem, or the double slit experiment, or quantum teleportation, or many worlds, or Einstein’s dice.  Start with the problems of physics, circa 1900, and how those problems were gradually solved.  In working out how physicists were gradually led to quantum mechanics, students will build up the same mental framework for understanding quantum mechanics.  At least, that’s how it works in theory.

Now, my perspective is from the point of view of a professor who teaches only undergraduates.  I only get to teach quantum mechanics once a year: in a course called Modern Physics, which is sort of a survey course of 20th century physics.  (If I were to teach quantum mechanics to graduate students, my approach would be different; I’d probably start with linear algebra and the eigenvalue problem, but that’s a post for another day.)  As it is, my approach is historical, and it seems to work just fine.  I talk about the evidence for quantized matter (i.e. atoms), such as Dalton’s law of multiple proportions, Faraday’s demonstration in 1833 that charge is quantized, Thomson’s experiment, Millikan’s experiment, and so on.  Then I explain the ultraviolet catastrophe, show how Planck was able to “fix” the problem by quantizing energy, and how Einstein “solved” the problematic photoelectric effect with a Planckian argument.  Next is the Compton effect, then the Bohr model and an explanation of the Balmer rule for hydrogen spectra…

We’re not doing quantum mechanics yet.  We’re just setting the stage; teaching the student all the physics that a physicist would know up until, say, 1925.  The big breakthrough from about 1825-1925 is that things are quantized.  Things come in lumps.  Matter is quantized.  Energy is quantized.

The big breakthrough of 1925-1935 is, strangely, the opposite: things are waves.  Matter is waves.  Energy is waves.  Everything is a wave.

So then, quantum mechanics.  You should explain what a wave is (something that is periodic in both space and time, simultaneously).  Here, you will need to teach a little math: partial derivatives, dispersion relations, etc.  And then comes the most important step of all: you will show what happens when two (classical!) wave functions are “averaged”:

ψ₁ = cos(k₁x – ω₁t)

ψ₂ = cos(k₂x – ω₂t)

Ψ(x,t) = (1/2) cos(k₁x – ω₁t) + (1/2) cos(k₂x – ω₂t)

Ψ(x,t) = cos(Δk·x – Δω·t) · cos(k̄·x – ω̄·t)

where Δk ≡ (k₁ – k₂)/2, k̄ ≡ (k₁ + k₂)/2, and similarly for Δω and ω̄.

[The skipped algebra is just the sum-to-product identity cos A + cos B = 2 cos((A – B)/2) cos((A + B)/2), applied with A = k₁x – ω₁t and B = k₂x – ω₂t.]

This entirely classical result is crucial to understanding quantum mechanics. In words, I would say this: “Real-life waves are usually combinations of waves of different frequencies or wavelengths.  But such ‘combination waves’ can be written simply as the product of two wave functions: one which represents ‘large-scale’ or global oscillations (i.e. cos(Δk·x – Δω·t)) and one which represents ‘small-scale’ or local oscillations (i.e. cos(k̄·x – ω̄·t)).”

This way of looking at wave functions (remember, we haven’t introduced Schrödinger’s equation yet, nor should we!) makes it much easier to introduce the concept of group velocity vs. phase velocity: group velocity is just the speed of the large-scale wave groups, whereas phase velocity is the speed of an individual wave peak.  They are not necessarily the same.

It is also easy at this point to show that if you combine more and more wave functions, you get something that looks more and more like a wave “packet”.  In the limit as the number of wave functions goes to infinity, the packet becomes localized in space.  And then it’s simple to introduce the classical uncertainty principle: Δk·Δx > ½.  It’s not simple to prove, but it’s simple to make plausible.  And that’s all we want at this point.
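This, too, is easy to demo numerically: superpose a bunch of cosines with wavenumbers clustered around some k̄ and watch the sum localize.  A minimal numpy sketch (all the numbers are arbitrary):

import numpy as np

# Superpose 50 cosines with wavenumbers clustered around k_bar = 10.
x = np.linspace(-20, 20, 2001)
ks = np.linspace(9, 11, 50)
packet = np.mean([np.cos(k * x) for k in ks], axis=0)

# The sum is large near x = 0 and small far away: a localized packet.
print(abs(packet[1000]))               # at x = 0: amplitude 1.0
print(np.max(np.abs(packet[x > 10])))  # far from center: ~0.1 or less

Add more wavenumbers over a wider spread, and the packet gets narrower: that’s Δk·Δx > ½ in action.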

We’re still not doing quantum mechanics, but we’re almost there.  Instead, we’ve shown how waves behave, and how uncertainty is inherent in anything with a wave-like nature.  Of course now is the time to strike, while the iron is hot.

What if matter is really made from waves?  What would be the consequences of that?  [Enter de Broglie, stage right]  One immediately gets the Heisenberg relations (really, this is like one line of algebra at the most, starting from the de Broglie relations) and suddenly you’re doing quantum mechanics!  The advantage of this approach is that “uncertainty” seems completely natural, just a consequence of being wave-like.
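That “one line of algebra,” explicitly: take the classical wave relation Δk·Δx > ½, multiply both sides by ħ, and use de Broglie’s p = ħk to get Δp·Δx > ħ/2.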

And whence Schrödinger’s equation?  I make no attempt to “prove” it in any rigorous way in an undergraduate course.  Instead, I just make it eminently plausible, by performing the following trick.  First, introduce complex variables, and how to write wave functions in terms of them.  Next, make it clear that a partial derivative with respect to x or t can be “re-written” in terms of multiplication:

∂ψ/∂x  →  ik ψ

∂ψ/∂t  →  –iω ψ

Then “proving” Schrödinger’s equation in a non-rigorous way takes 4 lines of simple algebra:

E = p²/2m

E ψ = (p²/2m) ψ

Now use the de Broglie relations E = ħω and p = ħk…

ħω ψ = (ħ²k²/2m) ψ

iħ(∂ψ/∂t) = (–ħ²/2m) ∂²ψ/∂x²
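If you want to check the consistency of all this, sympy will do the bookkeeping: a plane wave with the dispersion ω = ħk²/2m satisfies the equation identically.  A quick sketch:

import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)             # free-particle dispersion from E = p^2/2m
psi = sp.exp(sp.I * (k * x - omega * t))  # plane wave

lhs = sp.I * hbar * sp.diff(psi, t)            # i*hbar dψ/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)  # -(hbar^2/2m) d²ψ/dx²
print(sp.simplify(lhs - rhs))                  # 0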

There’s time enough for weirdness later.  Right now, armed with the Schrödinger equation, the student will have their hands full doing infinite well problems, learning about superposition, arguing about probability densities.  As George McFly said, “I am your density.”  And as Schrödinger said, probably apocryphally, “Don’t mention my cat till you see the whites of their eyes.”

Read Full Post »

Older Posts »