Posts Tagged ‘Physics’

I thought I would re-post this excellent discussion of the many-worlds interpretation by David Yerle:

Why I Believe in the Many-Worlds Interpretation

I agree with him 100%, and he says it better than I ever could.  The crux of the argument is this: it depends on the book you’re reading, but as a practical matter there are typically 4 postulates of quantum mechanics (about the primacy of the wave function, Schrödinger’s equation, measurements being Hermitian operators, and wave function collapse).  Many-worlds is what you get when you reject the unmotivated “wave function collapse” postulate.  It is a simpler theory in terms of axioms, so it obeys Occam’s razor.  If multiple universes bother you, think of how much it bothered people in the 1600s to contemplate multiple suns (much less multiple galaxies!)


Read Full Post »

In my continuing effort to present cutting-edge research, I present here my findings on the 9 kinds of physics undergrad.

First, let’s look at a scatter plot of Ability vs. Effort for a little more than 100 students.  (This data was taken over a span of five years at a major university which will remain unnamed.  Even though it’s Wake Forest University.)

[Figure: H-R diagram of Ability vs. Effort]

Student ability is normalized so that 1 is equivalent to the 100th percentile, 0 is the 50th percentile, and –1 is the 0th percentile.  [This matches the work of I. Emlion and A. Prilfül, 2007]  Ability scores below –0.5 are not shown (such students more properly belong on the Business Major H-R diagram).

On the x-axis is student effort, given as a spectral effort class [this follows B. Ess, 2010]:

O-class: Obscene

B-class: Beyond awful

A-class: Awful

F-class: Faulty

G-class: Good

K-class: Killer

M-class: Maximal

As you can see, most students fall onto the Main Sequence.

[Figure: the Typical student on the H-R diagram]

The Typical student (effort class G, 50th percentile) has a good amount of effort, and is about average in ability.  They will graduate with a physics degree and eventually end up in sales or marketing with a tech firm somewhere in California.

[Figure: the Giant student on the H-R diagram]

The Giant student (effort class K, 75th percentile) has a killer amount of effort and is above average in ability.  Expect them to switch to engineering for graduate school.

[Figure: the Smug Know-it-all student on the H-R diagram]

The Smug Know-it-all student (effort class O, 100th percentile) is of genius-level intellect but puts forth an obscenely small amount of effort.  They will either win the Nobel prize or end up homeless in Corpus Christi.

[Figure: the Headed to grad school student on the H-R diagram]

The Headed to grad school student (effort class B, 75th percentile) is beyond awful when it comes to work, and spends most of his/her time playing MMORPGs.  However, they score well on the GRE and typically go to physics graduate schools, where to survive they will travel to the right (off the main sequence).

[Figure: the Headed to industry student on the H-R diagram]

The Headed to industry student (effort class F, 55th percentile) is slightly above average but has a faulty work ethic.  This will change once they start putting in 60-hour weeks at that job in Durham, NC.

[Figure: the Hard working math-phobe on the H-R diagram]

The Hard working math-phobe student (effort class M, 30th percentile) is earnest in their desire to do well in physics.  However, their math skills are sub-par.  For example, they say “derivatize” instead of “take the derivative”.  Destination: a local school board near you.

[Figure: the Supergiant student on the H-R diagram]

The Supergiant student (effort class K, 100th percentile) is only rumored to exist.  I think she now teaches at MIT.

[Figure: the Frat boy student on the H-R diagram]

The Frat boy student (effort class O, 50th percentile) is about average, but skips almost every class.  Their half-life as a physics student is less than one semester.  They will eventually make three times the salary that you do.

[Figure: the White dwarf student on the H-R diagram]

The White dwarf student (effort class B, 30th percentile) is below average in ability and beyond awful when it comes to putting forth even a modicum of effort.  Why they don’t switch to another major is anyone’s guess.


If you enjoyed this post, you may also enjoy my book Why Is There Anything? which is available for the Kindle on Amazon.com.  The book is weighty and philosophical, but my sense of humor is still there!



I am also currently collaborating on a multi-volume novel of speculative hard science fiction and futuristic deep-space horror called Sargasso Nova.  My partner in this project is Craig Varian – an incredibly talented visual artist (panthan.com) and musician whose dark ambient / experimental musical project 400 Lonely Things released Tonight of the Living Dead to modest critical acclaim a few years back.  Publication of the first installment will be January 2015; further details will be released on our Facebook page, Twitter feed, or via email: SargassoNova (at) gmail.com.

Read Full Post »

As a public service, I hereby present my findings on physics seminars in convenient graph form.  In each case, you will see the Understanding of an Audience Member (assumed to be a run-of-the-mill PhD physicist) graphed as a function of Time Elapsed during the seminar.  All talks are normalized to be of length 1 hour, although this might not be the case in reality.


The “Typical” talk starts innocently enough: there are a few slides introducing the topic, and the speaker will talk clearly and generally about a field of physics you’re not really familiar with.  Somewhere around the 15-minute mark, though, the wheels will come off the bus.  Without you realizing it, the speaker will have crossed an invisible threshold and you will lose the thread entirely.  Your understanding by the end of the talk will rarely recover past 10%.


The “Ideal” is what physicists strive for in a seminar talk.  You have to start off easy, and only gradually ramp up the difficulty level.  Never let any PhD in the audience fall below 50%.  You do want their understanding to fall below 100%, though, since that makes you look smarter and justifies the work you’ve done.  It’s always good to end with a few easy slides, bringing the audience up to 80%, say, since this tricks the audience into thinking they’ve learned something.


The “Unprepared Theorist” is a talk to avoid if you can.  The theorist starts on slide 1 with a mass of jumbled equations, and the audience never climbs over 10% the entire time.  There may very well be another theorist who understands the whole talk, but interestingly their understanding never climbs above 10% either because they’re not paying attention to the speaker’s mumbling.


The “Unprepared Experimentalist” is only superficially better.  Baseline understanding is often a little higher (because it’s experimental physics) but still rarely exceeds 25%.  Also, the standard deviation is much higher, and so (unlike the theorist) the experimentalist will quite often take you into 0% territory.  The flip side is that there is often a slide or two that make perfect sense, such as “Here’s a picture of our laboratory facilities in Tennessee.”


You have to root for undergraduates who are willing to give a seminar in front of the faculty and grad student sharks.  That’s why the “Well-meaning Undergrad” isn’t a bad talk to attend.  Because the material is so easy, a PhD physicist in the audience will stay near 100% for most of the talk.  However, there is almost always a 10-20 minute stretch in the middle somewhere when the poor undergrad is in over his/her head.  For example, their adviser may have told them to “briefly discuss renormalization group theory as it applies to your project” and gosh darn it, they try.  This is a typical case of what Gary Larson referred to as “physics floundering”.  In any case, if they’re a good student (and they usually are) they will press on and regain the thread before the end.


The “Guest From Another Department” is an unusual talk.  Let’s say a mathematician from one building over decides to talk to the physics department about manifold theory.  Invariably, an audience member will gradually lose understanding and, before reaching 0%, will start to daydream or doodle.  Technically, the understanding variable U has entered the complex plane.  Most of the time, the imaginary part of U goes back to zero right before the end and the guest speaker ends on a high note.


The “Nobel Prize Winner” is a talk to attend only for name-dropping purposes.  For example, you might want to be able to say (as I do) that “I saw Hans Bethe give a talk a year before he died.”  The talk itself is mostly forgettable; it starts off well but approaches 0% almost linearly.  By the end you’ll wonder why you didn’t just go to the Aquarium instead.


The “Poetry” physics seminar is a rare beast.  Only Feynman is known to have given such talks regularly.  The talk starts off confusingly, and you may only understand 10% of what is being said, but gradually the light will come on in your head and you’ll “get it” more and more.  By the end, you’ll understand everything, and you’ll get the sense that the speaker has solved a difficult Sudoku problem before your eyes.  Good poetry often works this way; hence the name.


The less said about “The Politician”, the better.  The hallmark of such a talk is that the relationship between understanding and time isn’t even a function.  After the talk, no one will even agree about what the talk was about, or how good the talk was.  Administrators specialize in this.


Read Full Post »

Formula snobs

I am a formula snob.

We all know about grammar snobs: the ones who complain bitterly about people using who instead of whom.  Many people know how to use whom correctly; only grammar snobs care about it.  I gave up the whom fight long ago (let’s just let whom die) but I am a grammar snob when it comes to certain words.  For example, ‘til is not a word, as I have discussed before.

However, I am almost always a formula snob.

Consider this formula from the text I’m currently using in freshman physics:

x = v₀ t + ½ a t².


Robin Thicke, c. 2012

To me, looking at this equation is like watching Miley Cyrus twerk with Beetlejuice.  I would much, much rather the equation looked like this:

Δx = v₀ Δt + ½ a Δt².

The difference between these two formulas is profound.  To understand the difference, we need to talk about positions, clock readings, and intervals.

A position is just a number associated with some “distance” reference point.  We use the variable x to denote positions.  For example, I can place a meter stick in front of me, and an ant crawling in front of the meter stick can be at the position x=5 cm, x=17 cm, and so on.

A clock reading is just a number associated with some “time” reference point.  We use the variable t to denote clock readings.  For example, I can start my stopwatch, and events can happen at clock readings t=0 s, t=15 s, and so on.

Here’s the thing: physics doesn’t care about positions and clock readings.  Positions and clock readings are, basically, arbitrary.  A football run from the 10 yard line to the 15 yard line is a 5 yard run; going from the 25 to the 30 is also a 5 yard run.  The physics is the same…I’ve just shifted the coordinate axes.  If I watch a movie from 8pm to 10pm (say, a Matt Damon movie) then I’ve used up 2 hours; the same thing goes for a movie from 9:30pm to 11:30pm.  Because a position x and a clock reading t ultimately depend on a choice for coordinate axes, the actual values of x and t are of little (physical) importance.

Suppose someone asks me how far I can throw a football.  My reply is “I threw a football and it landed on the 40 yard line!”  That’s obviously not very helpful.  A single x value is about as useful as Kim Kardashian at a barn raising.


Can you pass that hammer, Kim?

Or suppose someone asks, “How long was that movie?” and my response is “it started at 8pm.”  Again, this doesn’t say much.  Physics, like honey badger, doesn’t care about clock readings.

Most physical problems require two positions, or two clock readings, to say anything useful about the world.  This is where the concept of interval comes in.  Let’s suppose we have a variable Ω.  This variable can stand for anything: space, time, energy, momentum, or the ratio of the number of bad Keanu Reeves movies to the number of good (in this last case, Ω is precisely 18.)  We define an interval this way:

ΔΩ = Ω_f – Ω_i

So defined, ΔΩ represents the change in quantity Ω.  It is the difference between two numbers.  So Δx = x_f – x_i is the displacement (how far an object has traveled) and Δt = t_f – t_i is the duration (how long something takes to happen).
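To make the definition concrete, here is a minimal Python sketch of an interval (my own illustration; the numbers come from the football and movie examples above):

```python
# A trivial helper illustrating the interval definition ΔΩ = Ω_f - Ω_i.
def interval(final, initial):
    """Change in any quantity Ω: the difference between two readings."""
    return final - initial

# Displacement: a run from the 10- to the 15-yard line...
run1 = interval(15, 10)      # 5 yards
# ...is physically identical to a run from the 25 to the 30.
run2 = interval(30, 25)      # also 5 yards

# Duration: a movie from 8 pm (20:00) to 10 pm (22:00).
movie = interval(22.0, 20.0) # 2 hours
```

The point of the trivial function is that only differences appear; shifting both readings by the same amount changes nothing.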

Honey badger doesn’t care.

When evaluating how good a football rush was, you need to know where the player started and where he stopped.  You need two positions.  You need Δx.  Similarly, to evaluate how long a movie is, you need the starting and the stopping times.  You need two clock readings.  You need Δt.

I’ll say it again: most kinematics problems are concerned with Δx and Δt, not x and t.  So it’s natural for a physicist to prefer formulas in terms of intervals (Δx = v₀ Δt + ½ a Δt²) instead of positions/clock readings (x = v₀ t + ½ a t²).

But, you may ask, is the latter formula wrong?

Technically, no.  But the author of the textbook has made a choice of coordinate systems without telling the reader.  To see this, consider my (preferred) formula again:

Δx = v₀ Δt + ½ a Δt².

The formula says, in English, that if you want to calculate how far something travels (Δx), you need to know the object’s initial speed v₀, its acceleration a, and the duration of its travel (Δt).

From the definition of an interval, this can be rewritten as

x_f – x_i = v₀ (t_f – t_i) + ½ a (t_f – t_i)².

This formula explicitly shows that two positions and two clock readings are required.

At this point, you can simplify the formula if you make two arbitrary choices: let x_i = 0, and let t_i = 0.  Then, of course, you get the (horrid) expression

x = v₀ t + ½ a t².

I find this horrid because (1) it hides the fact that a particular choice of coordinate system was made; (2) it over-emphasizes the importance of positions/clock readings and undervalues intervals; and (3) it ignores common sense.  Not every run in football starts at the end zone (i.e. x = 0).  Not every movie starts at noon (i.e. t = 0).  The world is messier than that, and we should strive to have formulas that are as general as possible.  My formula is always true (as long as a is constant).  The horrid formula is only true some of the time.  That is enough of a reason, in my mind, to be a formula snob.
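As a sanity check, here is a short Python sketch (my own illustration, not from any textbook) showing that the interval formula works for any starting clock reading, while the horrid version only works after arbitrarily resetting the clock:

```python
# General formula: Δx = v0·Δt + ½·a·Δt², valid whenever a is constant.
def displacement(v0, a, t_i, t_f):
    dt = t_f - t_i
    return v0 * dt + 0.5 * a * dt**2

# A run timed from t_i = 3 s to t_f = 5 s, with v0 = 4 m/s and a = 2 m/s²:
dx = displacement(v0=4.0, a=2.0, t_i=3.0, t_f=5.0)  # 4·2 + ½·2·2² = 12 m

# The special case x = v0·t + ½·a·t² reproduces this only after the
# arbitrary choice t_i = 0 (and x_i = 0):
x = displacement(v0=4.0, a=2.0, t_i=0.0, t_f=2.0)   # same Δt, same 12 m
```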


A formula snob?

Bonus exercise: show that the product

Ω_Keanu Reeves × Ω_Matt Damon ≈ 3.0

has stayed roughly constant for the past 15 years.

Read Full Post »

When I was in elementary school, at some indeterminate age, I made a model of the atom with pipe cleaners and Styrofoam balls.  It probably looked something like this:


These models are about as accurate as depicting the Taj Mahal as a decrepit hovel:


The Taj Mahal, built from 1632–1653.

Sure, the atom has a nucleus; this nucleus has protons and (usually) neutrons.  And electrons “orbit” the atom (although quantum mechanics tells us that this “orbit” is a much more nebulous concept than Bohr would have us believe).  But—and here’s the main problem with 5th grade Styrofoam ball models—the scale is completely, totally, massively wrong.

Let’s do a simple calculation.  A typical atomic radius is on the order of 0.1 nm.  A typical nuclear radius is about 10 fm.  What is the ratio of these two lengths?

About 10,000 to 1.

This bears repeating.  A nucleus is something like 10,000 times smaller than an atom, by length.  By volume, it’s even more dramatic:

A nucleus is 1,000,000,000,000 times smaller than an atom, by volume.

You don’t really get that impression from the Styrofoam ball model, do you?

A typical football stadium has a radius of maybe 120 m.  One ten-thousandth of this is 1.2 cm, about the size of a pea.  To get a sense of what an atom really looks like, place a pea at the center of the field in the middle of a football stadium.  Then imagine, at the outskirts of the stadium, a few no-see-um gnats (biting midges, of the family Ceratopogonidae).  These bugs represent the electrons.  The atom is the bugs and the pea.  That’s it.  The rest of the atom is empty space.
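The arithmetic behind the pea-in-a-stadium picture is easy to check; this Python snippet just redoes the ratios from the numbers above:

```python
import math

atomic_radius = 0.1e-9    # 0.1 nm, in meters
nuclear_radius = 10e-15   # 10 fm, in meters

length_ratio = atomic_radius / nuclear_radius  # about 10,000 to 1
volume_ratio = length_ratio ** 3               # about 10^12

stadium_radius = 120.0                         # meters
pea_radius = stadium_radius / length_ratio     # about 0.012 m: pea-sized
```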


Another way to think about it is this:

In terms of volume, a nucleus is only 0.0000000001% of the volume of the atom.

That means, for those of you scoring at home, that 99.9999999999% of an atom is nothing.

That is, you are mostly nothing.  So am I.  So is Matt Damon.

So the next time you’d like to help your kids make a model of the atom, just forget it.  Whatever model you make will be about as accurate as the physics in The Core.  I’d recommend instead getting some nice casu marzu, having a strong red wine, and watching True Grit.  You’ll thank me for it.


There’s flies in this, too.

Read Full Post »


I’ve had the following conversation at least a few dozen times:

“So where do you work?”

Me: “I’m a professor over at the university.”

“What do you teach?”

Me: “Physics.”

“Physics?  Yikes!  Physics is hard.”

Mathematics and chemistry folks that I know get similar responses.  The unspoken assumption is, why would you want to study something so difficult?

Well, why wouldn’t you?

This leads me to my main point.  When asked why I decided to study physics in the first place, my response is usually “Because physics is hard.”  To me, that’s a sufficient reason.  Not necessary, but sufficient.  I can’t imagine having a job that wasn’t mentally challenging.  Well, unless they paid me enough.

My first exposure to physics (not just science but physics) was in high school, 10th grade I think, when I read a copy of The Dancing Wu Li Masters.  Today I know this book is full of new age nonsense, Deepak Chopra-esque mumbo jumbo, but of course I couldn’t know that at the time.  All I could see at the age of 15 was this great bizarre world of quantum weirdness, and what’s more, people were still investigating it.  There was work to be done.  Any copy of Bulfinch’s Mythology, or any religious text for that matter, was full of similar bizarre weirdness, but those fields of study seemed static and dead.  But quantum mechanics?  You mean people get paid to think about this shit, and study it in a laboratory?  Count me in!

I was lucky enough to recognize at the time that I didn’t yet have the toolkit for thinking about these kinds of things.  Without a working understanding of calculus, without following the trajectory of physics history into the early 20th century, without seeing the careful, subtle arguments of the physics greats, one can’t really get a handle on quantum mechanics at all.  I wish I had a dollar for every time I met someone who claimed to know “all about” quantum mechanics because they watched a Nova episode about Schrödinger’s cat.  But sorry, quantum mechanics is primarily (arguably entirely) a mathematical theory and as such there are no shortcuts to understanding.  Read as many Brian Greene books as you like…read my book, while you’re at it…but all that can really do is whet your appetite for more advanced study.

That’s what happened to me.  I read a new age book filled with nonsense, but that had enough physics to get me interested.  I wanted to learn more than the author; I wanted to be able to tell him where he was wrong.  (I can certainly do this now.)  And I stuck with physics because it’s maddeningly difficult.

Don’t be afraid of learning difficult things.  Study physics.  Take up quilting.  Learn to play the violin.  Learn how to fix a boat.  Read a book about the Crimean war.  Invent a recipe for Baked Alaska.

If it’s not difficult, then why are you bothering with it?

Read Full Post »

Every Halloween, a steady stream of trick-or-treaters visits my house.  They’re looking for handouts, of course: Snickers, Pixy Stix, Reese’s, jawbreakers.  A simple mathematical model of the children’s visits can help explain the Doppler effect.


Let’s say the children are dressed as zombies (zombies are all the rage).  Let’s further assume that the children act like zombies—not from premature candy consumption and subsequent hyperglycemia—but from a desire for greater verisimilitude.  Thus, we make the following assumptions:

  • The children move at a uniform speed v.
  • The children are separated by a uniform distance λ.

With what frequency f do the children visit my house?

It’s obvious that a greater speed v means that that children visit more often.  So:

f ∝ v,

meaning that frequency is proportional to speed.  Further, it seems clear that a larger distance λ between children means fewer visits, so that

f ∝ 1/λ.

It is logical to take these proportionalities and combine them, giving

f = v/λ.

In the study of waves, this is the fundamental relation between frequency and wavelength.  We see that in this analogy, the zombie children are meant to represent successive peaks of a wave, and the distance between the children represents wavelength.

(I haven’t defined what a “wave” actually is.  If you’re curious, a wave is something that is periodic in both space and time.  The space periodicity of the zombie kids is codified by the number λ, since if you travel a distance λ in space, you get another zombie kid just like the first.  The time periodicity is codified by the period T = 1/f, since if you’re at the house and you wait for time T, an identical zombie kid will show up at your door.)

So, what about the Doppler effect?


Sheldon models the Doppler effect.

We now imagine that I live in a mobile home.  (I don’t, actually, but I was injured in a tornado in 2011, so I guess I am an “honorary” mobile home denizen.)  What is the effect of me driving the mobile home either towards or away from the zombie kids?

Suppose I drive towards the kids at a gentle 1 m/s.  If they are walking at 2 m/s towards me, our relative speed is 3 m/s.  The kids show up at my doorstep more frequently.

If instead I drive away, the kids aren’t visiting as often, since our relative speed is now just 1 m/s.  In fact if I drive away at 2 m/s or greater, the kids never catch me and f drops to zero.
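The mobile-home arithmetic can be written as a tiny Python model (the function and the 10 m spacing are my own illustrative assumptions; the speeds are the ones above):

```python
def visit_frequency(v_kids, v_home, spacing):
    """Visits per second: relative approach speed over spacing λ.
    v_home > 0 means driving toward the kids; f can't go negative."""
    v_rel = v_kids + v_home
    return max(v_rel, 0.0) / spacing

spacing = 10.0   # assumed λ: 10 m between zombie kids

f_parked = visit_frequency(2.0, 0.0, spacing)   # 0.2 visits/s
f_toward = visit_frequency(2.0, 1.0, spacing)   # 0.3 visits/s: higher "pitch"
f_away   = visit_frequency(2.0, -2.0, spacing)  # 0.0: they never catch me
```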

The same thing happens with sound.  Regions of less dense/more dense air propagate from a source to a receiver (presumably, your ears).  Each “pulse” is analogous to a zombie kid, and the frequency with which the pulses jostle your eardrums is interpreted by your brain as a pitch.  Higher frequency, higher pitch.  Now, in air the speed of sound is roughly constant, so the f that you hear depends upon one thing: the wavelength.  The more separation between pulses, the lower the pitch you hear.

You can guess what comes next.  If you run away from a sound source, the pulses can’t hit your eardrums as often; you hear a lower pitch.  Conversely, running towards a sound source makes the pitch sound higher.  The f has increased.

Of course it’s not just the house (the receiver) that can move; the source of the sound (i.e. the zombie kids) can move as well.  I can implement this idea next October 31 by installing a moving walkway outside my house.  If the kids walk at 2 m/s on my moving walkway, but the walkway itself is set at 1 m/s towards me, then the kids are actually moving at 3 m/s relative to me and will visit more often.  (A moving sound source such as a siren sounds higher in pitch if moving towards you.)  I could likewise set the walkway to move away from me, and have the kids visit less often or not at all.  (A siren traveling away from you sounds lower in pitch.)

This analogy is not perfect by any means.  For one thing, the zombie children (who are actual, flesh-and-blood material objects) represent wave maxima, which are mathematical abstractions.  In the case of sound, if I shout and you hear it, that does not mean that any actual physical object traveled from me to you.  A series of wave pulses traveled, sure; but no individual atoms or molecules went all the way across the room.  In a typical wave, energy travels from A to B but matter does not.

If you have trouble seeing how this can be, recall the wave (also known as the Mexican wave) that appears in large sports stadiums.  Wikipedia says it best: “The result is a wave of standing spectators that travels through the crowd, even though individual spectators never move away from their seats.”  Similarly sound can travel from me to you, even though the individual oxygens and nitrogens don’t really move that far.

Which brings up another idea of mine, which I’d like to patent.  Waves (in football stadiums) are always transverse: people raise their arms and then lower them, in a direction perpendicular to the motion of the wave itself.  But if we really wanted to model sound with a stadium wave, we should instruct the audience to move their arms from side to side.  This would set up a longitudinal wave.  I think it would look a little different.

Next time you’re directing festivities for a crowd of 10,000+, get them to do a longitudinal wave.  You’ll thank me for it.  And if you have kids of an appropriate age, dress them as zombies this Halloween.  At my house, zombie trick-or-treaters get the best candy.

Read Full Post »

When I was young, I once looked at a box of cereal and had an epiphany.  “Why is that cereal there?”  A universe of unfathomable complexity, with 100,000,000,000 galaxies, each with 100,000,000,000 stars, making 10,000,000,000,000,000,000,000 possible solar systems with planets around them—all that, and I’m sitting across from a box of Vanilly Crunch?


Since that existential crisis, I’ve always wondered why there was something instead of nothing.  Why isn’t the universe just one big empty set?  “Emptiness” and “nothingness” have always seemed so perfect to me, so symmetric, that our very existence seems at once both arbitrary and ugly.  And no theologian or philosopher ever gave me an answer I thought was satisfying.  For a while, I thought physicists were on the right track: Hawking and Mlodinow, for example, in The Grand Design, describe how universes can spontaneously appear (from nothing) according to the laws of quantum mechanics.

I have no problem with quantum mechanics: it is arguably the most successful theory devised by mankind.  And I agree that particles can spontaneously create themselves out of a vacuum.  But here’s where I think Hawking and Mlodinow are wrong: the rules of physics themselves do not constitute “nothing”.  The rules are something.  “Nothing” to me implies no space, no time, no Platonic forms, no rules, no physics, no quantum mechanics, no cereal at my breakfast table.  Why isn’t the universe like that?  And if the universe were like that, how could our current universe create itself without any rules for creation?

But wait—don’t look so smug, theologians.  Saying that an omnipotent God created the universe doesn’t help in any way.  That just passes the buck; shifts the stack by one.  For even if you could prove to me that a God existed, I would still feel a sense of existential befuddlement.  Why does God herself exist?  Nothingness still seems more plausible.

Heidegger called “why is there anything?” the fundamental question of philosophy.  Being a physicist, and consequently being full of confidence and hubris, I set out to answer the question myself.  I’d love to blog my conclusions, but the argument runs about 50,000 words…longer than The Great Gatsby.  Luckily for you, however, my book Why Is There Anything? is now available for the Kindle on Amazon.com:

[Image: book cover]

You can download the book here.

You might wonder if my belief in the many-worlds interpretation (MWI) of quantum mechanics affected my thinking on this matter.  Well, the opposite is true.  In my journey to answer the question “why is there anything?” I became convinced of MWI, in part because of the ability of MWI to partially answer the ultimate question.  My book Why Is There Anything? is a sort of chronicle of my intellectual journey, one that I hope you will find entertaining, enlightening, and challenging.

Read Full Post »


An image from Zynga’s Mafia Wars, obviously.

One way to say it is…

Let’s say we have a bunch of squirrels congregating underneath a tree.  We can startle the squirrels, and they jump up into the branches of the tree.  But these are ground-loving squirrels, so they very quickly jump back down—sometimes to a lower branch, sometimes all the way down to the ground.

An interesting fact: the squirrels scream as they fall.  This isn’t because of fear, so there’s no need to alert PETA; they’re just emitting squeals of delight as they plummet.  And the frequency of their squeals (and hence the musical note produced) depends upon how far they fall.  A larger drop, a higher frequency.

My goal is to have a whole bunch of squirrels scream with the exact same frequency, all at once.  How can this be done?

Clearly, I need a bunch of squirrels all on the same branch, and I have to hope they all jump off that branch all at the same time.  But this is trickier than it sounds.

It turns out that our tree, a northern elm (Ne for short) has lots of branches, most of them slippery.  In most cases a squirrel will jump off such a branch almost immediately.  However—and this is lucky for us—the 8th branch from the bottom is not-so-slippery.  Squirrels actually like to hang out on this branch for a little while.  Squirrels, being squirrels, do succumb to peer pressure, though, so when one squirrel eventually jumps to the next-lowest branch, the rest of the squirrels follow suit—all screaming in unison—producing a nice, loud, resonant scream of 4.7 × 10¹⁴ Hz.

Here’s the problem: when we initially scare the squirrels into the tree, they don’t all jump up to that 8th branch.  Why would they?  They jump up at random, and only a fraction land on the branch we want.  Even if a sizeable number then jump down all at once, producing the desired sound, it is drowned out by all the other screams and squeals of all the other jumping squirrels on all the other branches.  This isn’t what we want.

But maybe we can be super clever.  Let’s get another tree, let’s say a hemlock (He for short), and place it next to the Ne tree.  Why hemlock?  The cool thing about hemlock is that there’s basically only one branch that squirrels can reach (the other branches are just too high).  And here’s the luckiest coincidence of all: this single branch of the He tree is at almost the exact same height as the 8th branch of the Ne tree (the branch that the squirrels kind of like).

So here’s what we do.  We scare a bunch of squirrels beneath the He tree, and they all jump up to that lone branch (they don’t have a choice.)  We then slide the He tree next to the Ne tree.  The naturally curious squirrels climb over to the Ne tree, because the branches are at the same height.  And guess what—the conditions are now just right for our squirrels-screaming-in-unison trick!  We have a large population of squirrels on a not-so-slippery branch, and when the peer pressure kicks in—eeeeeeeekkkk!

Another way to say it is…

Let’s say we have a bunch of electrons in atoms in a gas.  We can excite the electrons with an electric field, and they jump up into higher atomic energy levels.  But being electrons, they very quickly jump back down—sometimes to a lower energy level, sometimes all the way down to the ground state.

An interesting fact: the electrons emit photons as they fall.  And the frequency of these photons depends upon how far they fall.  A larger drop, a higher frequency.
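The rule connecting drop size to frequency is just Planck’s relation, E = hf.  Here’s a quick back-of-the-envelope check (my own sketch, using rounded constants):

```python
# Photon energy/frequency/wavelength bookkeeping for an electron "fall".
h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)

f = 4.7e14                  # Hz, the frequency quoted in this post
E = h * f                   # energy of the emitted photon (J)
wavelength = c / f          # corresponding wavelength (m)

print(f"E = {E/1.602e-19:.2f} eV")     # E = 1.94 eV
print(f"lambda = {wavelength*1e9:.0f} nm")  # lambda = 638 nm
```

(The rounded 4.7 × 10¹⁴ Hz lands near 638 nm; the exact He-Ne line is 632.8 nm.)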

My goal is to have a whole bunch of photons emitted with the exact same frequency, all at once and in phase.  How can this be done?

Clearly, I need a bunch of electrons all on the same energy level, and I have to hope they all jump off that level all at the same time.  But this is trickier than it sounds.

It turns out that our gas, neon (Ne for short) has lots of energy levels, most of them “slippery”.  In most cases an electron will jump off such a level almost immediately.  However—and this is lucky for us—the 8th branch from the bottom (the 5s energy level) is metastable.  Electrons actually like to hang out on this level for a little while.  Electrons, being electrons, do succumb to peer pressure, though, so when one electron eventually jumps to the next-lowest level, the rest of the electrons follow suit—in a process called stimulated emission—producing a nice, intense, in-phase cascade of photons with f = 4.7 × 10¹⁴ Hz (about 633 nm, a bright red).

Here’s the problem: when we initially excite the electrons in Ne, they don’t all jump up to that 5s level.  Why would they?  They jump up at random, and only a fraction land on the level we want.  Even if a sizeable number then jump down all at once, producing the desired lasing frequency, it is drowned out by all the other photons emitted by all the other jumping electrons on all the other levels.  This isn’t what we want.

But maybe we can be super clever.  Let’s get another gas, let’s say helium (He for short), and place it in the same container as the Ne.  Why helium?  The cool thing about helium is that there’s basically only one energy level that electrons can reach (the 1s2s level).  And here’s the luckiest coincidence of all: this single energy level of He has almost exactly the same energy as the 5s state of Ne.

So here’s what we do.  We excite a bunch of electrons in He, and they all jump up to the 1s2s state (they don’t have a choice).  We then mix the He with Ne.  Through collisions, many of the electrons in the 1s2s state of He are transferred to the 5s state of Ne.  And guess what—the conditions are now just right for our cascading-electrons trick!  We have a large population of electrons in a metastable state (a population inversion), and when stimulated emission kicks in—we get coherent laser light!
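To see why a slow-emptying (metastable) upper level plus a fast-emptying lower level guarantees an inversion, here’s a toy rate-equation sketch.  The rates are invented purely for illustration; they are nothing like real Ne numbers:

```python
# Toy rate equations for the metastable trick (made-up rates, chosen
# only so the fast/slow contrast is visible -- not real Ne values).
pump = 100.0   # electrons/s fed into the upper (metastable) level via He collisions
slow = 1.0     # 1/s, decay rate OUT of the metastable upper level
fast = 50.0    # 1/s, decay rate out of the lower level (it empties quickly)

N_upper = N_lower = 0.0
dt = 0.001
for _ in range(20000):  # simple Euler integration out to steady state (20 s)
    dN_upper = pump - slow * N_upper
    dN_lower = slow * N_upper - fast * N_lower
    N_upper += dN_upper * dt
    N_lower += dN_lower * dt

print(N_upper > N_lower)  # True: a population inversion
```

At steady state N_upper → pump/slow = 100 while N_lower → only 2, because the lower level drains 50 times faster than the upper one fills it.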

Whether or not those electrons raid your bird feeders for sunflower seeds is another issue entirely.

[Note: my book Why Is There Anything? is now available for download on the Kindle!]

Read Full Post »

Imagine that there’s an ice cream truck parked near a school.  It’s 3pm; class is dismissed and a steady stream of kids approach the truck.  For simplicity, let’s stipulate the following rules:

1. If a kid has enough money, he or she will buy an ice cream cone;

2. A kid will only buy one ice cream cone;

3. A kid will buy the most expensive ice cream cone he or she can.

After the kids buy ice cream (or not), they continue on their way with whatever money they have left in their pockets.

My question is this: by seeing how much money kids have left, what can you determine about the prices of ice cream cones?  (Astute observers will note that this toy model has many similarities to the Franck-Hertz experiment of 1914, but we’re not there yet.)

Let’s look first at some hypothetical data.  Here we have plotted L (money left over in a kid’s pocket) as a function of M (how much money a kid started with) for a school with 11 kids:

Franck 1

There doesn’t seem to be a pattern.  Maybe we don’t have enough data?  Let’s increase the number of kids to 42:

Franck 2

The sawtooth form of the function is characteristic of problems of this type, and can be understood intuitively.  First of all, notice that the function is linear to begin with, with a slope of exactly one; this means that below a certain threshold value of M, a kid can’t afford any ice cream, so he/she ends up with the same amount of money he/she started with.  Eventually, if M ≥ $1, a kid can buy a cone; L then plummets because the kid has spent a dollar.

It seems obvious from the graph, then, that ice cream cones are priced at $1, $2, and $3; beyond that we don’t have any data so can’t draw more conclusions.
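The rules are easy to simulate.  Here’s a minimal sketch of one kid at the truck, using the $1/$2/$3 prices we just inferred:

```python
# One kid at the truck: buy the most expensive affordable cone (rules 1-3).
def leftover(money, prices=(1.00, 2.00, 3.00)):
    """Return the money left after buying at most one cone."""
    affordable = [p for p in prices if p <= money]
    return round(money - max(affordable), 2) if affordable else money

print(leftover(0.75))  # 0.75 -- can't afford any cone
print(leftover(1.50))  # 0.5  -- buys the $1 cone
print(leftover(3.25))  # 0.25 -- buys the $3 cone
```

Sweeping `money` from $0 to $4 in small steps reproduces the sawtooth in the graph above.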

Now, if we relax the stipulation that kids only buy one cone, then our data is ambiguous.  Maybe the cones are priced $1/$2/$3, or maybe cones are always just $1, and kids are buying more than one cone if they can.  We can’t distinguish between these cases; that’s why I put the original stipulation there to begin with.

Let’s try a more challenging graph:

Franck 3

This graph is much harder to interpret.  The first peak is much bigger than the others; there is also a very tiny peak on the right-hand side.  It helps if you know where the jumps are: L jumps down to zero whenever M is equal to $0.67, $1.02, $1.34, $1.69, $2.01, and $2.04.  If you want to work out the ice cream cone prices for yourself, feel free.  I’ll wait.

The solution?  A little trial and error will give you two different prices for ice cream cones: A = $0.67 and B = $1.02.  The jumps then occur at A, B, 2A, A+B, 3A, and 2B.  (Notice that jumps at 2A and A+B mean kids here must be buying more than one cone, say from several trucks along the route, just as an electron can excite more than one atom in succession.)  If we were doing physics, we’d make a prediction: we would expect the next jump to occur at $2.36, which is B+2A.  Observing this would support our hypothesis.  But if the next jump occurred at $2.22, say, then we’d have to revise our theory and posit a new ice cream C priced at $2.22.
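The trial and error can be automated: given candidate prices, list every exact total a kid could spend (allowing repeat purchases) and compare against the observed drops.  A sketch of mine, using the A and B values above:

```python
# Given candidate cone prices, list where the drops in L(M) should occur:
# every total a kid can spend exactly (sums of cone prices, repeats allowed).
from itertools import product

def jump_points(prices, max_money):
    totals = set()
    max_counts = [int(max_money // p) for p in prices]
    for counts in product(*(range(m + 1) for m in max_counts)):
        total = sum(c * p for c, p in zip(counts, prices))
        if 0 < total <= max_money:
            totals.add(round(total, 2))
    return sorted(totals)

print(jump_points([0.67, 1.02], 2.10))
# [0.67, 1.02, 1.34, 1.69, 2.01, 2.04] -- matches the observed drops
```

The same routine predicts the next drop past $2.10: rerun it with a larger `max_money` and $2.36 (B+2A) appears.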

What does any of this have to do with physics?

I use this example when I introduce the Franck-Hertz experiment to my students.  This experiment was first performed in 1914 (an auspicious year!) and provided support for Bohr’s idea that atoms have specific (quantized) energy levels.  Electrons are accelerated and shot towards mercury atoms (in a vapor).  The electrons may then give energy to the atoms (exciting them) or they may not, bouncing off without loss of kinetic energy.  We look at how much energy the electrons have to start with, and how much they end up with, and thereby deduce the energy levels of the mercury atoms.

How is that possible?  In terms of the analogy, make the following transformations:

kids –> electrons

ice cream truck –> Hg atoms

kids buying ice cream at certain prices –> electrons giving certain amount of energy to Hg atoms

prices of ice cream cones –> energy levels of Hg

M (initial amount of money) –> V (proportional to initial kinetic energy of electron)

L (leftover money) –> I (proportional to final kinetic energy of electron)

If energy levels of an atom are truly quantized, then we would expect a graph of I vs. V to look like our graphs above, with an increasing sawtooth pattern.  Where the drops occur will then tell us the specific energy levels of the Hg atom.  (Incidentally, why do we use V and I?  Well, in the actual experiment the easiest way to measure initial kinetic energy is by measuring the voltage used to accelerate the electrons; the easiest way to measure final kinetic energy is to measure the current of the electrons after they have passed through the Hg vapor.)

How did the experiment go?  Here are the results:


It should be obvious that atomic energy levels have specific “prices”, and that there’s a minimum amount of energy that an electron must have in order to “buy” an atomic transition (exciting the atom at the expense of the electron’s kinetic energy).  It remains for the experimenter to do some elementary unit conversions, to translate I and V into final and initial kinetic energy.
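One such conversion, using the textbook Franck–Hertz number for mercury (the current drops come roughly every 4.9 V, a figure I’m supplying here, not one quoted above): a 4.9 eV “price” corresponds to the ultraviolet photon Hg emits when it relaxes.

```python
# Convert the classic Franck-Hertz drop spacing (~4.9 V for Hg, i.e. a
# 4.9 eV energy level) into the wavelength of the emitted photon.
h = 6.626e-34      # Planck's constant (J*s)
c = 2.998e8        # speed of light (m/s)
e = 1.602e-19      # J per eV

E = 4.9 * e                        # transition energy in joules
wavelength = h * c / E             # m
print(f"{wavelength*1e9:.0f} nm")  # 253 nm -- the mercury UV line
```

That 253.7 nm ultraviolet glow was in fact observed coming from the vapor, a nice independent check on Bohr’s picture.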

[Note for advanced students and/or physicists: it is interesting to ponder the following questions, which highlight the fact that the ice cream analogy is not perfect: (1) why is the I vs. V graph not linear at low voltage? (2) After “spending” kinetic energy to produce the first excited state, why does the electron’s energy not drop all the way down to zero?]

Read Full Post »
