## Largest Triangle inside a Curvilinear Triangle

In the diagram shown we have part of a circle of radius $R$ whose center is at the point $(R, R)$ and which is tangent to the x and y axes — though the graph is drawn for R = 2, we want to work with general R.

Our focus is on the region under the circle above the x-axis. The question is: what is the maximum area that a triangle inside this region can have?

It may occur to you that there is a reasonable 'quick' answer, but the point of the problem is to reason it out carefully, so that you more or less have a proof that you do indeed get a maximum area. Since the region is concave, the vertices of a triangle cannot be placed arbitrarily: one vertex cannot be too close to the far right while another is close to the far top left, or else the triangle would not fit fully inside the region.
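If you'd like to experiment numerically before (or after) reasoning it out, here is a minimal Python sketch, assuming the region is the part of the first-quadrant square $[0, R] \times [0, R]$ that lies outside the circle of radius R centered at (R, R). The edge-sampling check matters precisely because the region is concave: checking the three vertices alone isn't enough.

```python
import numpy as np

R = 2.0  # the diagram's radius; the functions below take any R > 0

def in_region(x, y, R=R, eps=1e-9):
    """True if (x, y) lies in the corner region bounded by the two axes
    and the arc of the circle of radius R centered at (R, R)."""
    return (x >= -eps and y >= -eps and x <= R + eps and y <= R + eps
            and (x - R)**2 + (y - R)**2 >= R**2 - eps)

def triangle_in_region(p, q, r, samples=200):
    """Because the region is concave, sample points along each edge,
    not just the vertices."""
    t = np.linspace(0.0, 1.0, samples)
    for a, b in ((p, q), (q, r), (r, p)):
        xs = a[0] + t * (b[0] - a[0])
        ys = a[1] + t * (b[1] - a[1])
        if not all(in_region(x, y) for x, y in zip(xs, ys)):
            return False
    return True

def area(p, q, r):
    """Area of a triangle via the cross-product formula."""
    return 0.5 * abs((q[0]-p[0])*(r[1]-p[1]) - (r[0]-p[0])*(q[1]-p[1]))
```

For example, the triangle with vertices (0, 0), (1, 0), (0, 1) passes the test, while (0, 0), (2, 0), (0, 2) fails it — its hypotenuse cuts through the circle.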

Have fun!

## Two Little Circles

In the diagram shown we see a big circle of radius R that is tangent to both the x and y axes. What is the radius of the little circle in its southwestern corner? Express its radius in terms of R. (The little circle is also required to be tangent to the axes. The diagram is drawn for a circle of radius 2, but we're working with a general radius R.)

After having solved this problem, the next question will be this:

You can easily see that we can repeat the process by looking at the southwestern circles for ever and ever, because there will always be a gap left over, and you get smaller and smaller circles with smaller and smaller radii. What if you add up all their radii? Does the sum converge? If so, to what number do the radii add up, in terms of the original radius R? If the sum doesn't converge, why not?
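Once you've worked the answer out by hand, a few lines of Python can check the sum numerically. Fair warning: the tangency equation in the comments is the key geometric fact (any circle tangent to both axes has its center at (r, r)), so it's a partial spoiler.

```python
import math

R = 2.0  # the diagram's radius; the shrink ratio below is the same for any R

# A circle of radius r tangent to both axes has center (r, r).  Two such
# circles, radii R and r < R, are externally tangent when the distance
# between their centers equals the sum of the radii:
#     sqrt(2) * (R - r) = R + r
# Solving for r gives the shrink factor applied at every step:
ratio = (math.sqrt(2) - 1) / (math.sqrt(2) + 1)   # = 3 - 2*sqrt(2), about 0.17

# Sum the radii of the infinitely many little circles numerically.
total, r = 0.0, R
for _ in range(60):      # 60 terms is far beyond double precision anyway
    r *= ratio           # radius of the next southwestern circle
    total += r

print(total)             # compare with your closed-form answer
```

Since each radius is a fixed fraction of the previous one, this is a geometric series, which is why the sum converges.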

## Einstein summation convention

Suppose you have a list of n numbers $a_1, a_2, \ldots, a_n$.

Their sum is often shorthanded using the Sigma notation like this:

$$\sum_{k=1}^{n} a_k,$$

which is read "the sum of $a_k$ from k = 1 to k = n." This letter k that varies from 1 to n is called an 'index'.

**Vectors**. You can think of a vector as an ordered list of numbers

$$A = (A_1, A_2, \ldots, A_n).$$

If you have two vectors $A$ and $B$, their **dot product** is defined by multiplying their respective coordinates and adding the results:

$$A \cdot B = A_1 B_1 + A_2 B_2 + \cdots + A_n B_n.$$

Using our summation notation, we can abbreviate this to

$$A \cdot B = \sum_{k=1}^{n} A_k B_k.$$
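In code, the Sigma notation translates almost word for word into a loop (the two sample vectors here are just made-up numbers for illustration):

```python
# Dot product written out the long way, for two 3-component vectors.
A = [1, 2, 3]
B = [4, 5, 6]

# "Sum of A_k * B_k from k = 0 to k = n-1" (Python indices start at 0).
dot = sum(A[k] * B[k] for k in range(len(A)))
print(dot)   # 1*4 + 2*5 + 3*6 = 32
```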

While working through his general theory of relativity, Einstein noticed that whenever he was adding things like this, the same index k was repeated! (You can see the k appearing once in $A_k$ and again in $B_k$.) So he thought: well, in that case maybe we don't need a Sigma notation! So remove it! The fact that we have a repeated index in a product expression would mean that a Sigma summation is implicitly understood. (Just don't forget! And don't eat fatty foods that can make you forget!)

With this idea, the Einstein summation convention would have us write the above dot product of vectors simply as

$$A \cdot B = A_k B_k.$$
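NumPy's `einsum` function implements exactly this convention: you write down the index letters, any repeated index is summed over, and the Sigma disappears. A small sketch with made-up 4-component vectors:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0, 4.0])
B = np.array([5.0, 6.0, 7.0, 8.0])

# Einstein's rule in code: the repeated index k is summed over, and since
# no index survives on the right of '->', the result is a single number.
dot = np.einsum('k,k->', A, B)

assert dot == np.dot(A, B)   # same as the ordinary dot product
print(dot)
```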

In his theory's notation, it's understood that the index k here would vary from 1 to 4, for the four-dimensional space-time he was working with. That's Einstein's index notation, where 1, 2, 3 are the indices for the space coordinates (i.e., $x_1 = x$, $x_2 = y$, $x_3 = z$), and k = 4 is for time (i.e., $x_4 = t$). One could also write the space-time coordinates using the vector $(x, y, z, t)$, where t is for time.

(Some authors have k go from 0 to 3 instead, with k=0 corresponding to time and the others to space coordinates.)

I used 'k' because it's not gonna scare anyone, but Einstein actually used Greek letters like $\mu$ and $\nu$ instead of the k. The convention is that Greek index letters range over all 4 space-time coordinates, and Latin indices (like k, j, m, etc.) range over the space coordinates only. So if we use $\mu$ instead of k, the dot product of the two vectors would be

$$A_\mu B_\mu.$$

So if we write $A_\mu B_\mu$ it means we understand that we're summing these over the 4 indices of space-time. And if we write $A_k B_k$ it means that we're summing these over the 3 indices of space only. More specifically,

$$A_\mu B_\mu = A_1 B_1 + A_2 B_2 + A_3 B_3 + A_4 B_4$$

and

$$A_k B_k = A_1 B_1 + A_2 B_2 + A_3 B_3.$$
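The Greek-versus-Latin distinction is just a difference in the range of the loop. A tiny sketch with made-up components (Python lists are 0-based, so index 3 plays the role of k = 4, the time slot):

```python
# Space-time vectors with components ordered (x, y, z, t).
A = [1, 2, 3, 10]
B = [4, 5, 6, 20]

greek_sum = sum(A[i] * B[i] for i in range(4))   # A_mu B_mu: all of space-time
latin_sum = sum(A[i] * B[i] for i in range(3))   # A_k  B_k:  space only

print(greek_sum, latin_sum)   # they differ by exactly the time term A_4 * B_4
```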

There is one thing that I left out of this because I didn't want to complicate the introduction and thereby scare readers! (I already may have! Shucks!) And that is: when you take the dot product of two vectors in Relativity, their indices are supposed to be such that one index is a subscript ('at the bottom') and the other repeated index is a superscript ('at the top'). So instead of writing our dot product as $A_\mu B_\mu$, it is written as

$$A^\mu B_\mu.$$

(This gets us into *covariant* vectors, ones written with subscripts, and *contravariant* vectors, ones written with superscripts. But that is another topic!)

How about we promote ourselves to Tensors? Fear not, let's just treat it as a game with symbols! Well, tensors are just like vectors except that they can involve more than one index. For example, a vector such as the one above was written $A^\mu$, so it involves one index $\mu$. What if you have two indices? Well, in that case we have a **matrix**, which we can write $A^{\mu\nu}$. (Here, the two indices are sitting side by side like good friends and aren't being multiplied! There's an imaginary comma that's supposed to separate them, but it's not conventional to insert one.)

The most important tensor in Relativity Theory is what is called the metric tensor, written $g_{\mu\nu}$. It describes the distance structure (metric = distance) on a curved space-time. So much of the rest of the geometry of space, like its curvature, how to differentiate vector fields, the curved motion of light and particles, the shortest path in curved space between two points, etc., comes from this metric tensor $g_{\mu\nu}$.
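To make this concrete, here is the metric tensor of *flat* space-time being used to measure a (made-up) displacement, with `einsum` doing the double sum. Note the sign convention and units (c = 1) are one common choice among several, so treat this purely as an illustration:

```python
import numpy as np

# Flat space-time's metric tensor in the (+, +, +, -) sign convention,
# with the coordinate ordering (x, y, z, t) used in the text.
g = np.diag([1.0, 1.0, 1.0, -1.0])

dx = np.array([1.0, 2.0, 2.0, 4.0])   # a sample displacement (dx, dy, dz, dt)

# ds^2 = g_{mu nu} dx^mu dx^nu: both mu and nu are repeated, so both are summed.
ds2 = np.einsum('mn,m,n->', g, dx, dx)
print(ds2)   # 1 + 4 + 4 - 16 = -7
```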

The Einstein 'gravitational tensor' is one such tensor and is written $G_{\mu\nu}$. Tensors like these are called rank 2 tensors because they involve two different indices. Another good example of a rank 2 tensor is the energy-momentum tensor, often written $T_{\mu\nu}$. This tensor encodes the energy and matter distribution in space-time, which dictates its geometry — the geometry (and curvature) being encoded in the Einstein tensor $G_{\mu\nu}$. (If you've read this far, you're really getting into Relativity! And I'm very proud of you!)

You could have tensors with 3, 4 or more indices, and the indices could be mixed subscripts and superscripts, like for example $D^{\mu}{}_{\nu\rho}$ and $F^{\sigma\tau}{}_{\lambda}$.

If you have tensors like this, with more than 1 or 2 indices, you can still form their dot products. For example, for tensors D and F, you can take any lower index of D (say you take $\nu$) and set it equal to an upper index of F — and add! So we get a new tensor when we do this dot product! You get

$$D^{\mu}{}_{\nu\rho}\, F^{\nu\tau}{}_{\lambda} = \sum_{\nu=1}^{4} D^{\mu}{}_{\nu\rho}\, F^{\nu\tau}{}_{\lambda},$$

where it is understood that since the index $\nu$ is repeated, you are summing over that index (from 1 to 4), as I've written out on the right-hand side. Notice that the indices that remain are $\mu$, $\rho$, $\tau$, $\lambda$. So this dot product gives rise to yet another tensor with these indices — let's give it the letter C:

$$C^{\mu\tau}{}_{\rho\lambda} = D^{\mu}{}_{\nu\rho}\, F^{\nu\tau}{}_{\lambda}.$$

This process where you pick two indices from tensors and add their products along that index is called '**contraction**' — even though it came out of the simple idea of a dot product. Notice that in general, when you contract tensors, the result is not a number but in fact another tensor. This process of contraction is very important in relativity and geometry, yet it's based on a simple idea, extended to more complicated objects like tensors. (In fact, you can call the original dot product of two vectors a contraction too, except that it would be a number in this case.)
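Contraction, too, is a one-liner with `einsum`. The sketch below contracts two random rank-3 tensors over one shared index and checks the answer against the sum written out the long way (the index letters in the subscript string are just stand-ins for the Greek letters above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two rank-3 tensors over 4-dimensional space-time, as in the text:
D = rng.standard_normal((4, 4, 4))   # indices (mu, nu, rho)
F = rng.standard_normal((4, 4, 4))   # indices (nu, tau, lam)

# Contract over the repeated index nu; the surviving indices
# (mu, rho, tau, lam) label a new rank-4 tensor C.
C = np.einsum('mnr,nts->mrts', D, F)

# The same contraction with the sum over nu written out explicitly.
C_loop = np.zeros((4, 4, 4, 4))
for m, r, t, s in np.ndindex(4, 4, 4, 4):
    C_loop[m, r, t, s] = sum(D[m, n, r] * F[n, t, s] for n in range(4))

assert np.allclose(C, C_loop)
```

Notice that the result has rank 4: contraction of a rank-3 tensor with a rank-3 tensor eats one index from each, leaving 3 + 3 − 2 = 4 free indices.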

Thank you!

## Escher Math

You've all seen those Escher drawings that seem to make sense locally but, from a global, larger scale, do not — or ones that are just downright strange. Well, it's still creative art, and it's fun looking at them. They make you think in ways you probably didn't. That's Art!

Now I've been thinking about whether you can have similar things in math (or even physics). How about Escher math or Escher algebra?

Here’s a simple one I came up with, and see if you can ‘figure’ it out! 😉

**(5 + {4 − 7)² + 5}³**.

LOL! 🙂

How about Escher Logic!? Wonder what that would be like. Is it associative / commutative? Escher proof?

Okay, so now … what’s your Escher?

Have a great day!