## A game with Pi

Here’s a little game you can play with any irrational number; I took π as an example. Start with π = 3.14159…, split off the whole-number part, take the reciprocal of what’s left over, and repeat. This builds up the expansion

π = 3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + …))))

You just learned about an important math concept/process called **continued fraction** expansions.

With it, you can get very precise rational number approximations for any irrational number to whatever degree of error tolerance you wish.

As an example, if you truncate the expansion at the point where the 292 appears (so you omit the “1 over 292” part) you get the rational number 355/113, which approximates π to 6 decimal places. (Better than 22/7.)

You can do the same thing for other irrational numbers like the square root of 2 or 3. You get their own sequences of whole numbers.

**Exercise**: for the square root of 2, show that the sequence you get is

1, 2, 2, 2, 2, …

(all 2’s after the 1). For the square root of 3 the continued fraction sequence is

1, 1, 2, 1, 2, 1, 2, 1, 2, …

(so it starts with 1 and then the pair “1, 2” repeats periodically forever).
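The game is easy to play by machine, too. Here’s a short Python sketch (the function names are my own, and floating-point precision limits you to the first handful of terms):

```python
from fractions import Fraction
from math import pi, sqrt

def continued_fraction(x, terms):
    """First `terms` whole numbers of x's continued fraction: repeatedly
    split off the whole part, then take the reciprocal of what's left."""
    seq = []
    for _ in range(terms):
        whole = int(x)
        seq.append(whole)
        x = 1 / (x - whole)   # reciprocal of the fractional part
    return seq

def convergent(seq):
    """Collapse a continued-fraction prefix back into a single fraction."""
    value = Fraction(seq[-1])
    for a in reversed(seq[:-1]):
        value = a + 1 / value
    return value

print(continued_fraction(pi, 5))       # [3, 7, 15, 1, 292]
print(continued_fraction(sqrt(2), 6))  # [1, 2, 2, 2, 2, 2]
print(continued_fraction(sqrt(3), 7))  # [1, 1, 2, 1, 2, 1, 2]
print(convergent([3, 7, 15, 1]))       # 355/113
```

Truncating π’s expansion just before the 292 recovers exactly the 355/113 approximation mentioned above.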

## Matter and Antimatter don’t always annihilate

It is often said that when matter and antimatter come into contact they’ll annihilate each other, usually with the release of powerful energy (photons).

Though in essence true, the statement is not exactly correct (and so can be misleading).

For example, if a proton comes into contact with a positron they will not annihilate. (If you recall, the positron is the antiparticle of the electron.) But if a positron comes into contact with an electron then, yes, they will annihilate, yielding photons (two of them, to conserve momentum). (They may not annihilate instantaneously: for the minutest moment they can combine to form positronium, a bound state in which they dance together around their common center of mass, and then they annihilate into photons.)

The annihilation occurs between particles that are *conjugate* to each other, that is, particles of the same type but “opposite.” So you could have a whole bunch of protons come into contact with the antiparticles of other, non-proton particles, and there will be no mutual annihilation between the protons and these other antiparticles.

Another example. The meson particles are represented in the quark model by a quark-antiquark pair, like this: p q̄. Here p and q could be any of the 6 known quarks, and q̄ stands for the antiquark of q. If we go by the loose logic that “matter and antimatter annihilate,” then no mesons could exist, since p and q̄ would instantly destroy one another.

For example, the pion π⁺ has quark content u d̄: an up quark u and the antiparticle of the down quark. They don’t annihilate even though they’re together (in some mysterious fashion!) for a short while before the pion decays into other particles. For example, it is possible to have the decay reaction

π⁺ → μ⁺ + ν_μ

(which is not the same as annihilation into photons) of the pion into a muon and a neutrino.

Now if we consider quarkonium, i.e. a quark and its own antiquark together, such as for instance c c̄ (charmonium) or b b̄ (bottomonium), then they do annihilate. But before they do so, they’re together in a bound system, giving life to particles for a very, very short while (a minuscule fraction of a second). They have a small chance to form such a particle before they annihilate. It is indeed amazing to think how such Lilliputian-time reactions are part of how the world is structured. Simply awesome! 😉

PS. The word “annihilate” usually applies when photons are the result of the interaction, not simply when a particle decays into other particles.

Sources:

(1) Bruce Schumm, Deep Down Things. See Chapter 5, “Patterns in Nature,” of this wonderful book. 🙂

(2) David Griffiths, Introduction to Elementary Particles. See Chapter 2. This is an excellent textbook but much more advanced with lots of Mathematics!

## Comparing huge numbers

Comparing huge numbers is often not easy, since you practically cannot write them out to compare their digits. (By ‘compare’ here we mean telling which number is greater or smaller.)

Notation: recall that N! stands for “N factorial,” which is defined to be the product of all positive whole numbers (starting with 1) up to and including N. (E.g., 5! = 1·2·3·4·5 = 120.) And as usual, M^{n} stands for M raised to the power of n.

Here are a few examples of huge numbers (which we won’t bother writing out!) that aren’t so easy to compare. In each pair one number is larger, it’s just not clear which. I don’t have a general technique, except maybe attacking them in an *ad hoc* manner.

In each case, which of the two numbers is larger?

(1) (58!)^{2} and 100!

(2) (281!)^{2} and 500!

(3) (555!)^{2} and 1000!

(4) 500! and 10^{1134}

(5) 399! + 400! and 401!

(6) 8^{200} and 9^{189}

(The last two of these are probably easiest.)
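One ad hoc trick settles most of these: compare natural logarithms instead of the numbers themselves, since ln is an increasing function. A Python sketch using the standard library’s `math.lgamma` (which gives ln(n!) as lgamma(n + 1)):

```python
from math import lgamma, log

def ln_factorial(n):
    """Natural logarithm of n!, via the log-gamma function."""
    return lgamma(n + 1)

# (1) (58!)^2 vs 100!  --  compare 2*ln(58!) with ln(100!)
print(2 * ln_factorial(58) > ln_factorial(100))    # False: 100! wins
# (2) (281!)^2 vs 500!
print(2 * ln_factorial(281) > ln_factorial(500))   # True: (281!)^2 wins
# (4) 500! vs 10^1134  --  compare ln(500!) with 1134*ln(10)
print(ln_factorial(500) > 1134 * log(10))          # True: 500! wins
# (6) 8^200 vs 9^189  --  compare 200*ln(8) with 189*ln(9)
print(200 * log(8) > 189 * log(9))                 # True: 8^200 wins
```

Number (5) needs no computation at all: 399! + 400! = 399!·(1 + 400) = 401·399!, while 401! = 401·400·399!, so 401! is larger by a factor of 400.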

Have fun!

## Escher Math

You’ve all seen those Escher drawings that seem to make sense locally but, from a global, larger scale, do not, or ones that are just downright strange. Well, it’s still creative art and it’s fun looking at them. They make you think in ways you probably didn’t. That’s Art!

Now I’ve been thinking if you can have similar things in math (or even physics). How about Escher math or Escher algebra?

Here’s a simple one I came up with, and see if you can ‘figure’ it out! 😉

**(5 + {4 – 7)2 + 5}3**.

LOL! 🙂

How about Escher Logic!? Wonder what that would be like. Is it associative / commutative? Escher proof?

Okay, so now … what’s your Escher?

Have a great day!

## Richard Feynman on Erwin Schrödinger

I thought it would be interesting to see what the great Nobel Laureate physicist Richard Feynman said about Erwin Schrödinger’s attempts to discover the famous Schrödinger equation in quantum mechanics (see the quote below). It has been my experience in reading physics that this sort of “heuristic” reasoning is part of doing physics. It is a very creative (sometimes not logical!) art with mathematics in attempting to understand the physical world. Dirac did it too when he obtained his Dirac equation for the electron by pretending he could take the square root of the Klein-Gordon operator (which is second order in time). Creativity is a very big part of physics.

“When Schrödinger first wrote it down [his equation], he gave a kind of derivation based on some heuristic arguments and some brilliant intuitive guesses. Some of the arguments he used were even false, but that does not matter; the only important thing is that the ultimate equation gives a correct description of nature.” — Richard P. Feynman (Source: The Feynman Lectures on Physics, Vol. III, Chapter 16, 1965.)

## Principles of quantum theory

One beautiful summer morning I spent a couple hours in a park reflecting on what I know about quantum mechanics and thought to sketch it out from memory. (A good brain exercise to recapture things you learned and admire.) This note is an edited summary of my handwritten draft (without too much math).

Being a big subject, I will stick to some basic ideas (or principles) of quantum theory that may be worth noting.

Two key concepts are that of a ‘state’ and that of an ‘observable’.

The former describes the state of the system under study. The observable is a thing we measure. So for example, an electron can be in the ground state of an atom, which means that it is in the ‘orbital’ of lowest energy. Then there are other states that it can be in at higher energies.

The observable is a quantity, distinct from a state, that we measure: for example a particle’s energy, mass, position, momentum, velocity, charge, spin, angular momentum, etc.

QM gives us principles/interpretations for how states and observables can be mathematically described and how they relate to one another. So here is the first principle.

**Principle 1. The state of a system is described by a function (or vector) ψ. The probability density associated with it is given by |ψ|².**

This vector is usually a mathematical function of space and time (sometimes momentum) variables.

For example, f(x) = exp(-x^2) is one such function. You can also have wave examples such as g(x) = exp(-x^2) sin(x), which looks like a localized wave (a packet) that captures both being a particle (localized) and a wave (due to the wave nature of sin(x)). This wave nature of the function allows it to interfere constructively or destructively with other similar functions, so you can have interference! In actual QM these wavefunctions involve more variables than one x variable, but I used one variable to illustrate.
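Here’s a tiny numerical illustration of that interference (my own toy example, one x variable and unnormalized functions): the density of a superposition is not the sum of the individual densities, and the difference is exactly the cross (“interference”) term.

```python
from math import exp, sin, cos

def psi1(x):
    return exp(-x**2) * sin(x)   # a localized wave packet

def psi2(x):
    return exp(-x**2) * cos(x)   # a second packet, out of phase

x = 0.7
density_of_sum = (psi1(x) + psi2(x))**2      # |psi1 + psi2|^2
sum_of_densities = psi1(x)**2 + psi2(x)**2   # |psi1|^2 + |psi2|^2
cross_term = 2 * psi1(x) * psi2(x)           # the interference term
print(density_of_sum - sum_of_densities)     # equals the cross term
```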

**Principle 2. Each measurable quantity (called an ‘observable’) in an experiment is represented by a matrix A. (A Hermitian matrix or operator.)**

For example, energy is represented by the Hamiltonian matrix H, which gives the energy of a system under study. The system could be the hydrogen atom. In many or most situations, the Hamiltonian is the sum of the kinetic energy plus the potential energy (H = K.E. + V).

For simplicity, I will treat a measurable quantity and its associated matrix on equal footing.

From matrix algebra, a matrix is a rectangular array of numbers, like, say, a square array of 3 by 3 numbers.

Turns out you can multiply such things and do some algebra with them.

Two basic facts about these matrices are:

(1) they generally do not have the commutative property (so AB and BA aren’t always equal), unlike real or complex numbers,

(2) each matrix A comes with ‘magic’ numbers associated to it called the eigenvalues of A.

For example the diagonal matrix

diag(1, 4, −3)

(with the entries 1, 4, −3 down its main diagonal and zeros everywhere above and below it, which is why it’s called diagonal) has eigenvalues 1, 4, −3. (When a matrix is not diagonal we have a procedure for finding them. Study Matrix or Linear Algebra!)
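If you have NumPy handy, you can check this, and let the machine run the eigenvalue procedure for a non-diagonal matrix too (a small sketch with made-up matrices):

```python
import numpy as np

# A diagonal matrix: its eigenvalues sit right on the main diagonal.
D = np.diag([1.0, 4.0, -3.0])
print(sorted(np.linalg.eigvals(D)))   # [-3.0, 1.0, 4.0]

# For a non-diagonal Hermitian (here real symmetric) matrix,
# eigvalsh runs the numerical eigenvalue procedure for us.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvalsh(A))          # eigenvalues 1 and 3
```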

**Principle 3. The possible measured values of a quantity are the eigenvalues of its matrix.**

For example, the possible energy levels of an electron in the hydrogen atom are eigenvalues of the associated Hamiltonian matrix!

**Principle 4. When you measure a quantity A while the system is in the state ψ, the system ‘collapses’ into an eigenstate f of the matrix A.**

Therefore the system makes a transition from state **ψ** to state **f** (when A is measured).

So mathematically we write

**Af = af**

which means that **f** is an eigenstate (or eigenvector) of **A** with eigenvalue **a**.

So if **A** represents energy, then **a** would be the energy measured when the system is in state **f**.

For a general state **ψ** we cannot say that **Aψ = aψ**. This eigenvalue equation is only true for eigenstates, not general states.

**Principle 5. Each state ψ of the system can be expressed as a superposition sum of the eigenstates of the measurable quantity (or matrix) A.**

So if **f, g, h, …** are the eigenstates of **A**, then any other state **ψ** of the system can be expressed as a superposition (or linear combination) of them:

**ψ = bf + cg + dh + …**

where b, c, d, … are (complex) numbers. Further, **|c|^2** is the probability that **ψ** will ‘collapse’ into the eigenstate **g** when a measurement of **A** is performed.

These principles illustrate the *indeterministic* nature of quantum theory, because when measurement of **A** is made, the system can collapse into any one of its many eigenstates (of the matrix **A**) with various probabilities. So even if you had the ‘exact same’ setup initially there is no guarantee that you would see your system state change into the same state each time. That’s non-causality! (Quite unlike Newtonian mechanics.)

**Principle 6. (Follow-up to Principles 4 and 5.) When a measurement of A in the state ψ is performed, the probability that the system will collapse into the eigenstate vector φ is the squared magnitude of the dot product of ψ and φ.**

The latter dot product is usually written using the Dirac notation as **⟨φ|ψ⟩**, so the probability is **|⟨φ|ψ⟩|^2**. In the notation above, taking φ to be the eigenstate g, this is the same as **|c|^2**.

Besides the basic eigenvalues of A, there’s also its ‘average’ value, or expectation value, in a given state. That’s like taking the weighted average of tests in a class, with weights assigned to each eigenstate based on the superposition (as in the weights b, c, d, … in the above superposition for ψ). So we have:

**Principle 7. The expected or average value of quantity A in the state described by ψ is <ψ|A|ψ>.**

In our notation above, where **ψ = bf + cg + dh + …**, this expected value is

**⟨ψ|A|ψ⟩** = |b|^2 times (eigenvalue of f) + |c|^2 times (eigenvalue of g) + …

which you can see is the weighted average of the possible outcomes of A (namely the eigenvalues), each weighted according to its corresponding probability **|b|^2, |c|^2, …**

In other words, if you carry out measurements of A many, many times and calculate the average of the values you get, this is the value you approach.
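As a small numerical sketch of Principles 5-7 (the eigenvalues and coefficients below are made-up illustrative numbers): given the coefficients b, c, d of ψ in the eigenbasis of A, the probabilities are the |coefficient|^2 and the expectation value is their weighted average.

```python
import numpy as np

eigenvalues = np.array([1.0, 4.0, -3.0])   # possible outcomes of measuring A
coeffs = np.array([3.0, 2.0, 1.0j])        # b, c, d (complex numbers allowed)
coeffs = coeffs / np.linalg.norm(coeffs)   # normalize so probabilities sum to 1

probabilities = np.abs(coeffs) ** 2        # |b|^2, |c|^2, |d|^2
expectation = float(np.sum(probabilities * eigenvalues))  # <psi|A|psi>

print(probabilities.sum())   # sums to 1 (up to rounding)
print(expectation)           # weighted average of the eigenvalues
```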

**Principle 8. There are some key complementary measurement observables. (Classic example: the Heisenberg relation QP − PQ = iħ.)**

This means that if you have two quantities P and Q that you could measure, measuring P first and then Q will not give the same result as measuring Q first and then P. (In Newton’s mechanics, you could, at least in theory, measure P and Q simultaneously to any precision, in any order.)

For example, position and momentum are complementary in this respect, which is what leads to the Heisenberg Uncertainty Principle: you cannot measure both the position and momentum of a subatomic particle with arbitrary precision. I.e., there will be an inherent uncertainty in the measurements. Trying to be very accurate with one means you lose accuracy with the other all the more.

From Principle 8 you can show that if you have an eigenstate of the position observable Q, it will not be an eigenstate for P but will be a superposition for P.

So ‘collapsed’ states could still be superpositions! (Specifically, a collapsed state for Q will be a superposed, uncollapsed state for P.)
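A concrete finite-dimensional illustration of Principle 8 (using spin observables, the Pauli matrices, standing in for P and Q, since true position and momentum operators need infinite-dimensional matrices):

```python
import numpy as np

# Pauli matrices: spin observables along the x, y, z axes (up to hbar/2).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# Order of measurement matters: XY and YX differ.
commutator = sigma_x @ sigma_y - sigma_y @ sigma_x
print(np.allclose(commutator, 2j * sigma_z))   # True: XY - YX = 2i*Z, not 0
```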

That’s enough for now. There are of course other principles (and some of the above are interlinked), like the Schrödinger equation or the Dirac equation, which tell us what law the state ψ must obey, but I shall leave them out. The above should give an idea of the fundamental principles on which the theory is based.

Have fun,

Samuel Prime

## The 21 cm line of hydrogen in Radio Astronomy

This was a wonderful discovery back in the 1950s that gave Radio Astronomy a good push forward. It also helped in mapping out our Milky Way galaxy (which we really can’t see very well!).

It arose from a feature of quantum field theory, specifically from the hyperfine structure of hydrogen. (I’ll try to explain.)

You know that the hydrogen atom consists of a single proton at its central nucleus and a single electron moving around it somehow in certain specific quantized orbits. It cannot just circle around in any orbit.

That was one of Niels Bohr’s major contributions to our understanding of the atom. In fact this year we’re celebrating the 100th anniversary of his model of the atom (his major papers written in 1913). Some articles in the June issue of Nature magazine are in honor of Bohr’s work.

Normally the electron circles in the lowest orbit associated with the lowest energy state – usually called the ground state (the one with n = 1).

It is known that protons and electrons are particles that have “spin”. (Their half-integer spin is why they are classified as ‘fermions’.) It’s as if they behave like spinning tops. (The Earth and Milky Way are spinning too!)

The spin can be in one direction (say ‘up’) or in the other direction (which we label ‘down’). (Where ‘up’ and ‘down’ point depends on the coordinates we choose, but let’s not worry about that.)

When scientists looked at the spectrum of hydrogen more closely they saw that even while the electron can be in the same ground state – and with definite smallest energy – it can have slightly different energies that are very very close to one another. That’s what is meant by “hyperfine structure” — meaning that the usual energy levels of hydrogen are basically correct except that there are ever so slight deviations from the normal energy levels.

It was discovered by means of quantum field theory that this difference in ground-state energies arises when the electron and proton switch from spinning in the same direction to spinning in opposite directions (or vice versa).

When they spin in the same direction, the hydrogen atom has slightly more energy than when they spin in opposite directions.

And the difference between them?

The difference in these energies corresponds to an electromagnetic wave with a wavelength of about 21 cm. And that falls in the radio band of the electromagnetic spectrum.
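You can check the arithmetic yourself. The measured hyperfine transition frequency is about 1420.4 MHz; dividing the speed of light by it gives the wavelength (a quick sketch):

```python
h = 6.62607015e-34     # Planck constant, in J*s
c = 2.99792458e8       # speed of light, in m/s
f = 1420.405751e6      # hydrogen hyperfine frequency, in Hz (measured)

wavelength_cm = 100 * c / f              # wavelength in centimeters
energy_eV = h * f / 1.602176634e-19      # photon energy in electron-volts

print(round(wavelength_cm, 1))   # 21.1  -- the famous 21 cm line
print(energy_eV)                 # about 5.9e-6 eV: a tiny energy split
```

That photon energy, a few millionths of an electron-volt, shows just how “hyperfine” the splitting is compared with the electron-volt-scale spacing of the ordinary energy levels.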

So when the hydrogen atom shows an emission or absorption line at that wavelength, it means that the electron and proton have switched between parallel and opposite spins. When the switch happens you see electromagnetic radiation either emitted or absorbed.

It does not happen too often, but when you have a huge number of hydrogen atoms — as you would in hydrogen clouds in our galaxy — it will invariably happen and can be measured.

Now it’s a really nice thing that our galaxy contains several hydrogen clouds. So by measuring the Doppler shift in the spectrum of hydrogen — at the 21 cm line! — you can measure the velocities of these clouds in relation to our location near the sun.

These velocity distributions are used, together with other techniques, to map out the hydrogen clouds and locate the spiral arms they fall into.

That work (lots of hard work!) showed astronomers that our Milky Way does indeed have arms, just as we see in some other galaxies, such as the spiral galaxy NGC 1232.

The one UNKNOWN about the structure of our Milky Way is that we don’t know whether it has 2 or 4 arms.

**References:**

[1] University Astronomy, by Pasachoff and Kutner.

[2] Astronomy (The Evolving Universe), by Michael Zeilik.

(These are excellent sources, by the way.)