Monday, February 22, 2010

The Underlying Vector Space

Since it is possible to represent physical quantities with vectors, we would like to connect our algebra to such a vector space. The algebra already has all of the linear properties that the vector space has. The one thing the vector space has that our algebra does not yet have is a dot product.

The dot product of a vector space introduces a "norm": a particular scalar value that is characteristic of each element of the vector space. We have, up to now, defined two characteristic scalar values for elements of the algebra. We will use the value of the determinant to represent the "norm", or the "length", of an element.

So, how are we going to go about associating this algebra with a vector space?

First, we will assign a set of basis elements to the algebra. These basis elements will be denoted as a bold lowercase e. Our algebra represents a multi-dimensional linear space, and so we will need to distinguish the basis elements that correspond to each dimension. This is done by a subscripted index variable, as is standard in most relativistic notation - i.e. e_μ.

Every element of the algebra can now be considered as a linear combination of these basis elements. The scalars that multiply the basis elements are called the components. The components are generally represented by lowercase italicized symbols with superscripted index variables - i.e. a^μ.

The index variables indicate the particular dimension we are interested in. We will use standard relativistic conventions with these index variables. First, if an upper and a lower index match, then we sum over all values of the index. Second, if the index can represent any of the space-time dimensions, then it is represented by a lowercase Greek letter. If the index represents only one of the spatial dimensions, then it is represented by a lowercase Latin letter.

Using the linear combination of basis elements, as well as the index conventions, we can represent an element A in terms of its components a^μ like this:

A = a^μ e_μ

We expect that the components of a particular element of the algebra should be the same as the components of the corresponding quantity in the associated vector space.
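
To make this concrete, here is a small numerical sketch. It assumes, purely for illustration, that the algebra is realized as the 2x2 complex matrices, with basis e_0 = the identity and e_1, e_2, e_3 = the Pauli matrices; the post does not fix a particular matrix representation, so treat that choice (and the component values) as hypothetical.

```python
import numpy as np

# Assumed 2x2 matrix realization of the algebra (for illustration only):
# e_0 = identity, e_1..e_3 = Pauli matrices.
e = [np.eye(2, dtype=complex),                      # e_0
     np.array([[0, 1], [1, 0]], dtype=complex),     # e_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),  # e_2
     np.array([[1, 0], [0, -1]], dtype=complex)]    # e_3

# Components a^mu (arbitrary example values).
a = np.array([3.0, 1.0, -2.0, 0.5])

# A = a^mu e_mu: the repeated upper/lower index mu is summed over all four values.
A = sum(a[mu] * e[mu] for mu in range(4))
print(A)
```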

Now, let's consider what might happen if we try to associate the determinant of an element A (the product of A with its algebraic conjugate Ā) with the dot product of the components of A.

A Ā = a^μ a^ν g_μν

Here g is the metric of the associated vector space. We can expand the left hand side in terms of the components of A.

A Ā = a^μ e_μ a^ν ē_ν

Since A Ā is already a scalar, we can take the scalar part without changing anything. Because the product of components a^μ a^ν is symmetric in μ and ν, only the symmetrized product of basis elements contributes:

A Ā = a^μ a^ν (1/2)(e_μ ē_ν + e_ν ē_μ)
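
Continuing the assumed 2x2 sketch from above, where the algebraic conjugate of a matrix is taken to be its adjugate (Ā = Tr(A)·1 - A for 2x2 matrices, so that A Ā = det(A)·1), here is a quick check that A Ā really is a scalar (a multiple of the identity) equal to the determinant:

```python
import numpy as np

e = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def conj(M):
    # Algebraic conjugate of a 2x2 matrix, taken here to be its adjugate,
    # so that M @ conj(M) = det(M) * identity.
    return np.trace(M) * np.eye(2) - M

a = np.array([3.0, 1.0, -2.0, 0.5])
A = sum(a[mu] * e[mu] for mu in range(4))

print(A @ conj(A))       # a multiple of the identity - "already a scalar"
print(np.linalg.det(A))  # the same number appears on the diagonal
```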

We can now form a relationship between the products of the basis elements of the algebra and the metric of the associated vector space:

(1/2)(e_μ ē_ν + e_ν ē_μ) = g_μν

This relationship is the fundamental identity which allows us to define multiplication of the basis elements in such a way that we are assured that the components of an element of the algebra are identical to the components of the corresponding vector in the associated vector space. This relationship also DEFINES the multiplication rules between the basis elements. Thus, the metric is what defines the multiplication rule for the algebra. Since the metric is a non-trivial aspect of physics, we have a clue here that the multiplication rule for the algebra actually carries substantial physics with it.
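
Under the same assumed 2x2 realization, the fundamental identity can be checked directly: tabulating (1/2)(e_μ ē_ν + e_ν ē_μ) over all pairs of indices, each combination comes out as a multiple of the identity, and the coefficients form the Minkowski metric diag(+1, -1, -1, -1). In other words, with that particular choice of basis the multiplication rule hands back the space-time metric.

```python
import numpy as np

e = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def conj(M):
    # Algebraic conjugate (adjugate) of a 2x2 matrix.
    return np.trace(M) * np.eye(2) - M

# g_mu_nu = scalar coefficient of (1/2)(e_mu conj(e_nu) + e_nu conj(e_mu)).
g = np.zeros((4, 4))
for mu in range(4):
    for nu in range(4):
        sym = 0.5 * (e[mu] @ conj(e[nu]) + e[nu] @ conj(e[mu]))
        g[mu, nu] = sym[0, 0].real  # each combination is a multiple of the identity

print(g)  # diag(+1, -1, -1, -1) for this choice of basis
```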

This relationship between the basis elements and the metric is very similar to what is called the "Fundamental Identity of a Clifford Algebra". The only difference is that the Clifford algebra version does not contain any algebraic conjugates.

Of course, my version is better :)

The dot product between two different elements is defined as

<A B̄>_S = a^μ b^ν g_μν

If you would like to generalize this to a "dot product" that can be associated with any square matrices, then it would have the following form:

(1/n) Tr(A Adjoint(B)) = a^μ b^ν g_μν

Here, n is the dimension of the matrix.
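
As a last check in the same assumed 2x2 realization (where the algebraic conjugate is the adjugate, so Adjoint(B) is B̄ and n = 2), the trace form of the dot product can be compared directly against a^μ b^ν g_μν using the metric recovered above:

```python
import numpy as np

e = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def conj(M):
    # Algebraic conjugate (adjugate) of a 2x2 matrix.
    return np.trace(M) * np.eye(2) - M

g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric recovered from the basis products

a = np.array([3.0, 1.0, -2.0, 0.5])   # components a^mu
b = np.array([0.5, 2.0, 1.0, -1.0])   # components b^nu

A = sum(a[mu] * e[mu] for mu in range(4))
B = sum(b[nu] * e[nu] for nu in range(4))

n = 2  # dimension of the matrices
lhs = (1.0 / n) * np.trace(A @ conj(B))  # (1/n) Tr(A Adjoint(B))
rhs = a @ g @ b                          # a^mu b^nu g_mu_nu

print(lhs.real, rhs)  # both equal 3*0.5 - 1*2 - (-2)*1 - 0.5*(-1) = 2.0
```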
