tag:blogger.com,1999:blog-62682199611755887422018-03-06T02:01:43.995-08:00Solving the UniverseEric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.comBlogger17125tag:blogger.com,1999:blog-6268219961175588742.post-5890160617752297152010-07-21T22:43:00.001-07:002010-07-22T15:03:30.180-07:00The "N" Field - part 2To discuss my thoughts on the <em>N</em> field in this section, I will use Hamiltonians rather than Lagrangians. The Lagrangian (<em>L</em>) is related to the Hamiltonian (<em>H</em>) as follows:<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?L=\vec{p}\cdot\vec{v}-H"/></div><br /><br />The Hamiltonian represents the total energy of a system. The Hamiltonian of a free particle is merely the kinetic energy of the particle.<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?H=E=\frac{1}{2}mv^2 = \frac{1}{2}\vec{p}\cdot\vec{v}"/></div><br /><br />There is a very simple trick to find the Hamiltonian of a charged particle under the influence of an electromagnetic field. The extension can be made by replacing the components of the relativistic four momentum with the components of the canonical four momentum.<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?p\rightarrow p+eA"/></div><br /><br />Where <em>A</em> is the electromagnetic four potential. 
In terms of space and time pieces, this is<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?E\rightarrow E+eV"/><br /><img src="http://www.codecogs.com/eq.latex?\vec{p}\rightarrow \vec{p}+e\vec{A}"/></div><br /><br />Making this transformation on the free particle Hamiltonian yields<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?E+eV=\frac{1}{2}\left(\vec{p}+e\vec{A}\right)\cdot\vec{v}"/></div><br /><br />Solving for <em>E</em> gives<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?E=E_{free}+\frac{e}{2}\vec{A}\cdot\vec{v}-eV"/></div><br /><br />So we started with the energy of a free particle, and transformed the four momentum to get the energy of a particle that is subject to the electromagnetic field.<br /><br />Start with a free particle. Extend the momentum. End up with interaction terms.<br /><br />The <em>N</em> field seems to be a similar type of extension - except in this case we aren't extending from four momentum to canonical four momentum, but rather from four potential to "canonical four potential", like this<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?A\rightarrow A+\frac{m}{e}u"/></div><br /><br />Where <em>u</em> is the four velocity of the particle.<br /><br />So, whereas <em>F</em> is defined as<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?F=c\partial\overline{A}"/></div><br /><br />We can define <em>N</em> as<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?N=c\partial\left(\overline{A+\frac{m}{e}u}\right)"/></div><br /><br />So the point I am driving at here is this: on one hand we can derive the Hamiltonian of a particle in an electric field by starting with a free Hamiltonian, and extending the four momentum to the canonical momentum. 
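As a numeric sanity check on this "extend the momentum, get interaction terms" recipe, here is a small Python sketch using the textbook nonrelativistic charged-particle Hamiltonian H = |p - qA|²/2m + qV (the p → p + eA convention above corresponds to q = -e in that form; all numbers below are illustrative assumptions, not values from this post):

```python
# Sketch: the charged-particle Hamiltonian is the free-particle kinetic
# energy with the canonical substitution applied, plus a qV term.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kinetic(p, m):
    # free-particle energy |p|^2 / (2m)
    return dot(p, p) / (2 * m)

def hamiltonian(p, q, A, V, m):
    # H = |p - qA|^2 / (2m) + qV, with p the canonical momentum
    return kinetic([pi - q * Ai for pi, Ai in zip(p, A)], m) + q * V

m, q = 2.0, -1.0          # illustrative mass and charge
p = [1.0, -2.0, 0.5]

# potentials off: the free Hamiltonian comes back unchanged
assert hamiltonian(p, q, [0.0, 0.0, 0.0], 0.0, m) == kinetic(p, m)

# potentials on: the difference is exactly the interaction terms
A, V = [0.3, 0.0, -0.1], 2.0
extra = hamiltonian(p, q, A, V, m) - kinetic(p, m)
expected = -q * dot(p, A) / m + q ** 2 * dot(A, A) / (2 * m) + q * V
assert abs(extra - expected) < 1e-12
```

Start with a free expression, substitute, and every new term involves the potentials: exactly the pattern being extended below.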
So, what might happen if we were to start with the "free" Maxwell equations (free here means sourceless) and extend the four potential to the "canonical four potential"? My guess is:<br /><br />Start with a sourceless field. Extend the potential. End up with source terms.<br /><br />The free Maxwell equations are written as<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{F}=0"/></div><br /><br />Extending <em>A</em> to the canonical form converts <em>F</em> into <em>N</em> and the equation becomes<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{N}=0"/></div><br /><br />If the free vs. interaction Hamiltonian analogy holds true here, then this homogeneous equation may represent the inhomogeneous Maxwell equations.<br /><br />Splitting this up into field terms and source terms we have<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{F}+\frac{mc}{e}\partial\overline{\partial\overline{u}}=0"/><br /><img src="http://www.codecogs.com/eq.latex?\partial\overline{F}=-\frac{mc}{e}\partial\overline{\partial\overline{u}}"/></div><br /><br />The empirical form of the inhomogeneous Maxwell equations with source terms is given in terms of currents as<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{\left[F\right]_V}=\frac{1}{\epsilon}\overline{j}"/></div><br /><br />We can make the correlation<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\overline{j}=-\frac{m}{e}\sqrt{\frac{\epsilon}{\mu}}\partial\left[\,\overline{\partial\overline{u}}\,\right]_V"/></div><br /><br />Of course, as stated previously, when the empirical current shows up as a source term, this represents approximating local disturbances in the potential wave as point disturbances, i.e. delta function sources. 
If we do not make this approximation, then <em>j</em> is always zero, and the local disturbances manifest themselves through the derivatives of the scalar EM field. Thus, if we do not make the approximation then we have<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{u}\partial=0"/></div><br /><br />Or, more importantly, taking into account the fact that the four vector potential also obeys this equation, we can make a homogeneous wave equation from the four momentum.<br /><br /><div style="text-align:center"><img src="http://www.codecogs.com/eq.latex?\partial\overline{p}\partial=0"/></div>Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-2361166848072502072010-06-26T22:23:00.000-07:002010-06-26T23:28:08.574-07:00The "N" FieldI have recently been exposed to a terrific idea, proposed by my friend Bill Polson, while enjoying a nice breakfast overlooking the coast.<br /><br />He expressed a few ideas about Lagrangians that I think are rather revolutionary.<br /><br />We usually think of an action integral as the integral of a scalar Lagrangian function along a parametrized path, with time as the parameter. The path that minimizes the result of this integral is the path that a particle takes.<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?S=\int_{t_0}^{t_f}L\,dt"/></div><br /><br />Bill Polson introduced the idea of a vector valued Lagrangian. This is something I've done as well, but Bill does something rather special with his vector Lagrangian. He forms the action integral out of a line integral on a closed loop. The first branch of the loop represents the path the particle might take, while the returning branch contains a slight variation. 
He is then able to make a correlation, via Stokes' theorem, between the abstract idea of minimizing action, and the geometric properties of the vector Lagrangian field.<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?S_{BP}=\oint_C\vec{L}\cdot\vec{n}\,dt=\int_A\nabla\times\vec{L}\,d\sigma=\int_A N\,d\sigma"/></div><br /><br />Bill introduces a vector field he calls <em>N</em>, which should contain vector and pseudo-vector portions. The action is stationary when integrated over the "area" contained inside of the closed path. He defined <em>N</em> as the "curl" of the vector Lagrangian field, but he meant the generalization of the curl to 4 dimensions, of course.<br /><br />Now, I think this idea is rockin' as is. However, Bill went further and figured out what <em>N</em> needed to be in order to reproduce the Lagrangian of a charged particle in an EM field.<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?N=F+\nabla\times u"/></div><br /><br />Again, the curl here is a generalization of the 3D curl of a vector field into 4 dimensions. Also, there are several factors such as <em>c</em>, charge and mass that Bill neglected for convenience.<br /><br />Using the Clifford Algebra notation, this is how I would define <em>N</em><br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?\partial\overline{L} = N = F + \frac{mc}{e}\partial\overline{u}"/></div><br /><br />Here, I have inserted the mass, charge, and speed of light constants where they are appropriate. If I replace <em>F</em> with its definition in terms of the four-vector potential <em>A</em>, this becomes<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?N=c\partial\overline{\left(A+\frac{m}{e}u\right)}=\frac{c}{e}\partial\overline{\left(eA + mu\right)}"/></div><br /><br />The quantity under the conjugation bar is the canonical momentum <em>p</em> of a particle in an EM field. 
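The Stokes'-theorem step above can be illustrated numerically in two dimensions: the circulation of a vector field around a small loop, divided by the enclosed area, approaches the curl at the loop's center. The field below is my own arbitrary choice for the sketch, not Bill's Lagrangian field:

```python
# Sketch: circulation around a small square loop / area -> curl.
# The field L(x, y) = (-y, x*y) is an illustrative choice.
def L_field(x, y):
    return (-y, x * y)

def curl_z(x, y):
    # analytic curl: dLy/dx - dLx/dy = y + 1
    return y + 1.0

def circulation(x0, y0, h):
    # counterclockwise line integral around a square of side h,
    # centered at (x0, y0), using the midpoint of each edge
    s = h / 2.0
    total = 0.0
    Lx, _ = L_field(x0, y0 - s); total += Lx * h   # bottom, going right
    _, Ly = L_field(x0 + s, y0); total += Ly * h   # right side, going up
    Lx, _ = L_field(x0, y0 + s); total -= Lx * h   # top, going left
    _, Ly = L_field(x0 - s, y0); total -= Ly * h   # left side, going down
    return total

x0, y0, h = 0.3, -0.2, 1e-3
assert abs(circulation(x0, y0, h) / h ** 2 - curl_z(x0, y0)) < 1e-9
```

For this polynomial field the midpoint rule is exact, so circulation/area matches the curl to floating-point precision; for a general field the agreement improves as the loop shrinks, which is the content of the theorem being invoked.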
Using this we can build an analogy. As <em>p</em> is to <em>A</em>, <em>N</em> is to <em>F</em>.<br /><br />Now we have<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?N=\frac{c}{e}\partial\overline{p}=\partial\overline{L}"/></div><br /><br />Or, just<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?L=\frac{c}{e}p"/></div><br /><br />The action integral <em>S<sub>BP</sub></em> expressed in terms of Clifford Algebra notation is now<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?S_{BP}=\oint_C\frac{c}{e}\left<p\overline{\frac{\partial x}{\partial t}}\right>_S\,dt"/></div><br /><br />Now, let's compare this with my version of the action integral, which is an integral of a scalar Lagrangian density function over a volume.<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?S_{EB}=\int_V\left<p\overline{dx}\right>_S"/></div><br /><br />Thus, according to this definition of action, my vector valued Lagrangian field is<br /><br /><div style="text-align:center;"><img src="http://www.codecogs.com/eq.latex?L=p"/></div><br /><br />I don't define an <em>N</em> field, but my vector Lagrangian is the same as Bill's apart from a constant factor.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-19043172970988843322010-03-04T21:10:00.000-08:002010-03-05T09:08:35.820-08:00The Scalar FieldIn the previous post, we saw that a homogeneous paravector wave provided an exact description of Maxwell's equations, except there is an added scalar field, which is on the same footing as the electric and magnetic fields. 
This scalar field does not appear in standard electromagnetic theory, and we are going to find out why.<br /><br />To begin, we will define the electro-magnetic bi-paravector <strong>F</strong> in terms of the potential <strong>A</strong><br /><br /><div style="text-align: center;font-family:trebuchet ms;"><strong>A</strong> = ( V/c, <strong>A</strong> )<br /><br /><strong>F</strong> = c<strong>∂<span style="border-top: 2px solid black;">A</span></strong><br /><br /><strong>F</strong> = ( ∂V/∂t + c<strong>∇</strong>•<strong>A</strong>, - ∂<strong>A</strong>/∂t - <strong>∇</strong>V + ic (<strong>∇</strong> × <strong>A</strong>) )</div><br />We can make the following definitions for the fields<br /><br /><div face="trebuchet ms" style="text-align: center;"><strong>E</strong> = -∂<strong>A</strong>/∂t - <strong>∇</strong>V<br /><br /><strong>B</strong> = <strong>∇</strong> × <strong>A</strong><br /><br />φ = (1/c) ∂V/∂t + <strong>∇</strong>•<strong>A</strong></div><br /><br />In terms of these fields the electro-magnetic bi-paravector is given by<br /><br /><div face="trebuchet ms" style="text-align: center;"><strong>F</strong> = ( cφ, <strong>E</strong> + ic<strong>B</strong> )</div><br />If you want to recover the orthodox version of the electro-magnetic field tensor, then just take the vector portion of this, ignoring the scalar portion. For now, we are not going to ignore the scalar portion, so we can see what physical implications it has.<br /><br />To begin with, we see that the scalar field is a Lorentz invariant. It is the same in all reference frames.<br /><br />The vector fields are gauge invariant. 
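These definitions are straightforward to probe numerically. The sketch below uses units with c = 1 and a plane-wave potential of my own choosing, computes <strong>E</strong>, <strong>B</strong>, and φ by central finite differences, and confirms that this potential, which happens to satisfy the Lorenz condition, gives φ = 0:

```python
import math

H = 1e-5  # finite-difference step; units with c = 1 (my assumption)

def V(t, x, y, z):
    return 0.0

def A(t, x, y, z):
    # plane wave moving along +x, polarized along y (illustrative choice)
    return (0.0, math.sin(t - x), 0.0)

def d(f, p, i):
    # central difference of f with respect to coordinate i of p = (t,x,y,z)
    a = list(p); a[i] += H; hi = f(*a)
    a[i] -= 2 * H; lo = f(*a)
    return (hi - lo) / (2 * H)

def Ak(k):
    # component k of A as a standalone function of (t, x, y, z)
    return lambda *a: A(*a)[k]

def fields(p):
    E = tuple(-d(Ak(k), p, 0) - d(V, p, 1 + k) for k in range(3))
    B = (d(Ak(2), p, 2) - d(Ak(1), p, 3),     # curl of A
         d(Ak(0), p, 3) - d(Ak(2), p, 1),
         d(Ak(1), p, 1) - d(Ak(0), p, 2))
    phi = d(V, p, 0) + sum(d(Ak(k), p, 1 + k) for k in range(3))
    return E, B, phi

p = (0.7, 0.2, 0.0, 0.0)
E, B, phi = fields(p)
c0 = math.cos(0.7 - 0.2)
assert abs(E[1] + c0) < 1e-6   # E = (0, -cos(t - x), 0)
assert abs(B[2] + c0) < 1e-6   # B = (0, 0, -cos(t - x))
assert abs(phi) < 1e-9         # Lorenz condition holds, so phi = 0
```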
We can express the classical concept of a gauge transformation as follows<br /><br /><div face="trebuchet ms" style="text-align: center;"><strong>A</strong>' → <strong>A</strong> + <strong>∂</strong>Χ<br /><br /><strong>F</strong>' → <strong>F</strong> + c<strong>∂<span style="border-top: 2px solid black;">∂</span><strong>Χ</strong></strong></div><br />Now, if we want to get full gauge invariance, we should expect the second term to vanish. You can show that, since <span style="font-family:trebuchet ms;">Χ</span> is a scalar, the vector portion of the second term vanishes identically, therefore the <strong style="font-family: trebuchet ms;">E</strong> and <strong style="font-family: trebuchet ms;">B</strong> fields are trivially gauge invariant.<br /><br />We can achieve gauge invariance of the scalar field only in the case where the scalar <span style="font-family:trebuchet ms;">Χ</span> belongs to a restricted set of functions which obey the condition<br /><br /><div face="trebuchet ms" style="text-align: center;"><strong>∂<span style="border-top: 2px solid black;">∂</span></strong>Χ = 0</div><br />This expression reduces to the scalar homogeneous wave equation. 
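The restricted set is easy to exhibit: in units with c = 1, any smooth function of (t - x) satisfies the scalar homogeneous wave equation, so it qualifies as one of these restricted gauge functions. A quick finite-difference check (the particular Χ is an illustrative choice of mine):

```python
import math

H = 1e-4  # step for the second differences; units with c = 1

def chi(t, x):
    # any smooth function of (t - x) solves the 1-D wave equation
    return math.exp(math.sin(t - x))

def second(f, t, x, wrt):
    # centered second difference in t or in x
    if wrt == 't':
        return (f(t + H, x) - 2.0 * f(t, x) + f(t - H, x)) / H ** 2
    return (f(t, x + H) - 2.0 * f(t, x) + f(t, x - H)) / H ** 2

t, x = 0.3, 1.1
wave = second(chi, t, x, 't') - second(chi, t, x, 'x')
assert abs(wave) < 1e-5  # d2chi/dt2 - d2chi/dx2 = 0
```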
The idea of a restricted gauge invariance provides for some non-trivial gauge transformations, which end up being valid, even in the context of the traditional, unrestricted, gauge principle.<br /><br />For instance, consider a paravector <strong>Ψ</strong>, which is not the gradient of a scalar, but which does satisfy the expression<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>∂<span style="border-top: 2px solid black;">Ψ</span></strong> = 0</div><br />Such a paravector can be used as a gauge function, in the sense that<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>A</strong>' → <strong>A</strong> + <strong>Ψ</strong><br /><br /><strong>F</strong>' → <strong>F</strong></div><br />Such a gauge function ends up being non-trivially gauge invariant for the vector fields. This means that the vector fields are gauge invariant, but it is due to the form of the gauge function, not because of mathematical identities.<br /><br />We will broaden the concept of gauge invariance to include the non-trivial gauge functions as well. The higher principle that we used to introduce the non-trivial gauge functions requires that we restrict the traditional scalar gauge functions to only include solutions to the homogeneous wave equation. 
This refinement resides within the limitations of traditional electro-magnetic theory, though it may require us to rethink the physical meaning behind various gauge transformations.<br /><br />For instance, consider the Lorenz gauge<br /><br /><div style="text-align:center;font-family:trebuchet ms;">(1/c) ∂V/∂t + <strong>∇</strong>•<strong>A</strong> = 0</div><br />If you remember the definition of the scalar field, you see that applying the Lorenz gauge is the same as making the statement<br /><br /><div style="text-align:center;font-family:trebuchet ms;">φ = 0</div><br />Since the derivatives of the scalar field are equated with the source terms, we see that setting this field to zero, or any constant for that matter, has the physical meaning of having no source terms. Luckily, this is precisely the situation in which the Lorenz gauge is employed.<br /><br />We will continue discussing the scalar field in the next post, where we will ask the question, "what force does the scalar field apply to a particle?"Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-55369734817021742952010-03-04T11:27:00.000-08:002010-03-04T21:08:31.232-08:00The Wave Equation<span style="color: rgb(0, 102, 0);"><em>Note: I usually prefer to use the nabla symbol (∇) to represent the vector portion of the gradient. If you are using IE and can see the nabla symbol as an upside down triangle rather than a box, then you are lucky :(<br /><br />I suggest using firefox, or another browser. Sorry microsoft, I love ya, but I need my math notation.</em><br /></span><br />We are going to construct the homogeneous wave equation, for a paravector wavefunction. The reason for doing this is because I have a hunch that it may be possible to show that this homogeneous wave equation is a natural result of the definition of the derivative with respect to a paravector variable, but so far this hunch has only the status of a hypothesis. 
Therefore, for the sake of my hypothesis we will proceed on this path.<br /><br />To generate the wave equation, we will operate on a paravector two times with the gradient operator. Begin by letting the gradient operate on an arbitrary paravector that we will call <strong><span style="font-family:trebuchet ms;">Ψ</span></strong>. Remember that this must be done in a specific way in order to maintain relativistic significance.<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet ms;"><strong>Θ</strong> = <strong>∂<span style="border-top: 2px solid black;">Ψ</span></strong></span></div><br />This quantity is a bi-paravector. Let's act on this bi-paravector with another gradient operation in a relativistically correct way, and set the result to zero, for a homogeneous wave.<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet ms;"><strong>Θ∂</strong> = <strong>∂<span style="border-top: 2px solid black;">Ψ</span>∂</strong> = 0</span></div><br />This is the expression for a homogeneous paravector wave. 
Let's see if this wave equation resembles anything that we have any physical intuition for.<br /><br />Let's express the gradient and the wavefunction in terms of their scalar and vector parts<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet ms;"><strong>∂</strong> = ( (1/c) ∂/∂t, -<strong>∇</strong> )<br /><br /><strong>Ψ</strong> = ( ψ, <strong>ψ</strong> )</span></div><br />When we act with the gradient operator, we treat it as if we were multiplying it, in the following manner<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet ms;"><strong>∂<span style="border-top: 2px solid black;">Ψ</span></strong> = ( (1/c) ∂/∂t, -<strong>∇</strong> )( ψ, -<strong>ψ</strong> )<br /><br />= ( (1/c) ∂ψ/∂t + <strong>∇•ψ</strong>, -(1/c) ∂<strong>ψ</strong>/∂t - <strong>∇</strong>ψ + i<strong>∇ </strong>× <strong>ψ</strong> )</span></div><br />For the sake of brevity, rather than writing out all of these terms, we will just consider that the resulting bi-paravector has three parts: a scalar part, a real vector part, and an imaginary vector part.<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet ms;"><strong>Θ</strong> = ( θ, <strong>θ</strong><sub>R</sub> + i<strong>θ</strong><sub>I</sub> )<br /><br />θ = (1/c) ∂ψ/∂t + <strong>∇</strong>•<strong>ψ</strong> </span></div><div style="text-align: center;font-family:trebuchet ms;"><br /><span style="font-family:trebuchet ms;"><strong>θ</strong><sub>R</sub> = -(1/c) ∂<strong>ψ</strong>/∂t - <strong>∇</strong>ψ</span></div><div style="text-align: center;font-family:trebuchet ms;"><br /><span style="font-family:trebuchet ms;"><strong>θ</strong><sub>I</sub> = <strong>∇</strong> × <strong>ψ</strong></span></div><br />Now we apply the second gradient to the bi-paravector<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-family:trebuchet 
ms;"><strong>Θ∂</strong> = ( θ, <strong>θ</strong><sub>R</sub> + i<strong>θ</strong><sub>I</sub> )( (1/c) ∂/∂t, -<strong>∇</strong> ) = 0<br /><br />( (1/c) ∂θ/∂t - <strong>∇</strong>•<strong>θ</strong><sub>R</sub> - i<strong>∇</strong>•<strong>θ</strong><sub>I</sub>, (1/c)∂<strong>θ</strong><sub>R</sub>/∂t + (i/c) ∂<strong>θ</strong><sub>I</sub>/∂t - <strong>∇</strong>θ + i<strong>∇</strong> × <strong>θ</strong><sub>R</sub> - <strong>∇</strong> × <strong>θ</strong><sub>I</sub> ) = 0</span></div><br />Now, this is a long and hairy equation, which involves four pieces: a real scalar, an imaginary scalar, a real vector and an imaginary vector. Physically speaking, there is a scalar term, a pseudo-scalar term, a vector term, and a pseudo-vector term. If the result of this expression is truly zero, then each of these four pieces must independently equal zero as well. We will set them to zero, and rearrange them slightly<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>∇</strong>•<strong>θ</strong><sub>R</sub> = (1/c) ∂θ/∂t<br /><br /><strong>∇</strong>•<strong>θ</strong><sub>I</sub> = 0<br /><br /><strong>∇</strong> × <strong>θ</strong><sub>R</sub> = - (1/c) ∂<strong>θ</strong><sub>I</sub>/∂t<br /><br /><strong>∇</strong> × <strong>θ</strong><sub>I</sub> = -<strong>∇</strong>θ + (1/c) ∂<strong>θ</strong><sub>R</sub>/∂t</div><br />Can you see it yet? The first time I saw this, I about peed my pants.<br /><br />These equations resemble the Maxwell equations in their differential form. 
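These four equations can also be checked numerically for a concrete wavefunction. The sketch below (units with c = 1 and a plane-wave Ψ, both my assumptions for illustration) builds θ, θ<sub>R</sub>, and θ<sub>I</sub> by central finite differences and verifies three of the equations; the remaining one checks the same way:

```python
import math

H = 1e-4  # finite-difference step; units with c = 1 (my assumption)

def psi0(t, x, y, z):   # scalar part of Psi
    return 0.0

def psi(t, x, y, z):    # vector part of Psi: a plane wave along +x
    return (0.0, math.sin(t - x), 0.0)

def d(f, p, i):
    a = list(p); a[i] += H; hi = f(*a)
    a[i] -= 2 * H; lo = f(*a)
    return (hi - lo) / (2 * H)

def pk(k):
    return lambda *a: psi(*a)[k]

def theta(*p):      # scalar piece: d(psi0)/dt + div(psi)
    return d(psi0, p, 0) + sum(d(pk(k), p, 1 + k) for k in range(3))

def theta_R(*p):    # real vector piece: -d(psi)/dt - grad(psi0)
    return tuple(-d(pk(k), p, 0) - d(psi0, p, 1 + k) for k in range(3))

def theta_I(*p):    # imaginary vector piece: curl(psi)
    return (d(pk(2), p, 2) - d(pk(1), p, 3),
            d(pk(0), p, 3) - d(pk(2), p, 1),
            d(pk(1), p, 1) - d(pk(0), p, 2))

p = (0.4, 0.1, 0.0, 0.0)

# div(theta_I) = 0  (the "no monopole" equation)
div_I = sum(d(lambda *a, k=k: theta_I(*a)[k], p, 1 + k) for k in range(3))
assert abs(div_I) < 1e-4

# div(theta_R) = d(theta)/dt  (the Gauss-law analogue)
div_R = sum(d(lambda *a, k=k: theta_R(*a)[k], p, 1 + k) for k in range(3))
assert abs(div_R - d(theta, p, 0)) < 1e-4

# z-component of curl(theta_R) = -d(theta_I)/dt  (the Faraday analogue)
curl_R_z = d(lambda *a: theta_R(*a)[1], p, 1) - d(lambda *a: theta_R(*a)[0], p, 2)
dtI_z = d(lambda *a: theta_I(*a)[2], p, 0)
assert abs(curl_R_z + dtI_z) < 1e-4
```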
We can make the following substitutions in order to get the equations closer in form to Maxwell's equations<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>θ</strong><sub>R</sub> → (1/c)<strong>E</strong><br /><br /><strong>θ</strong><sub>I</sub> → <strong>B</strong></div><br />These substitutions constrain the wave function Ψ to take on the role of the electro-magnetic four-potential<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>Ψ</strong> → <strong>A</strong> = ( V/c, <strong>A</strong> )</div><br /><br />Using these assignments of our wave variables, we can express the wave equation in a form that is very similar to the Maxwell equations.<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>∇</strong>•<strong>E</strong> = ∂θ/∂t<br /><br /><strong>∇</strong>•<strong>B</strong> = 0<br /><br /><strong>∇</strong> × <strong>E</strong> = - ∂<strong>B</strong>/∂t<br /><br /><strong>∇</strong> × <strong>B</strong> = -<strong>∇</strong>θ + (1/c<sup>2</sup>) ∂<strong>E</strong>/∂t</div><br />Now, other than the discovery that the Maxwell equations are a manifestation of a paravector wave equation, there are a few other interesting tidbits you should take note of.<br /><br />First, there is no magnetic monopole, and there is no way to introduce one without messing up the covariance of the expression.<br /><br />Second, the source terms are represented by the derivatives of a scalar field. What is this scalar field?<br /><br />Third, it is the <em>homogeneous</em> wave equation which represents the electro-magnetic field <em>with</em> sources. 
We usually introduce inhomogeneous terms in order to represent sources.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-73004608270949815452010-03-04T10:19:00.000-08:002010-03-04T16:55:18.061-08:00Gradient OperatorI think some of the physics that is encoded in the algebra manifests itself mainly in the derivatives. For instance, the true meat of the complex algebra is the analytic functions. Can we come up with some type of space-time version of an analytic function? I've made a few attempts, and I've seen other attempts, but in general I'm just not satisfied. There is no parallel that is as clean and clear as the analytic functions of a complex variable. The troubles with derivatives in the algebraic representation of space-time all boil down to non-commutativity. I'll keep searching.<br /><br />In the meantime, let's discuss a derivative operator, which I believe is a valid operator, though I wish it would arise in a cleaner fashion. Since I am not satisfied with any particular derivation of this operator, I will just introduce it without derivation: the algebraic gradient<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><span style="font-family:trebuchet ms;"><strong>∂</strong> = ∂<sup>μ</sup><strong>e</strong><sub>μ</sub></span></div><br />Here the notation <span style="font-family:trebuchet ms;">∂<sup>μ</sup></span> makes reference to the contra-variant (raised index) partial derivative with respect to the μ<sup>th</sup> coordinate. 
Remember that a standard partial derivative is defined with the index lowered.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><span style="font-family:trebuchet ms;">∂<sub>μ</sub> = ∂/∂x<sup>μ</sup></span></div><br />If we use the Minkowski metric to raise the index, we must change the sign on the spatial terms of the gradient.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>∂</strong> = (1/c) ∂/∂t <strong>e</strong><sub>0</sub> - ∂/∂x <strong>e</strong><sub>1</sub> - ∂/∂y <strong>e</strong><sub>2</sub> - ∂/∂z <strong>e</strong><sub>3</sub></div><br />Now, in tensor algebras, we use index balancing in order to construct relativistically significant quantities. In this algebra we don't deal with the component indices much, but we still need to be concerned with making relativistically significant quantities.<br /><br />We can use the fact that the gradient transforms like a para-vector. Thus, if we want the gradient to act on a paravector, it needs to do so like this<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>∂<span style="BORDER-TOP: black 2px solid">A</span></strong></div><br />Which results in a biparavector.<br /><br />If the gradient acts on a biparavector it must do so like this<br /><br /><div style="TEXT-ALIGN: center" face="trebuchet ms"><span style="font-family:trebuchet ms;"><strong>B∂</strong> or <strong><span style="BORDER-TOP: black 2px solid">B</span>∂</strong></span></div><br />Which results in a paravector. Note that the gradient acting from the right is a distinct operation due to non-commutativity.<br /><br />And if the gradient acts on a spinor, it must do so like this<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>∂<span style="BORDER-TOP: black 2px solid">S</span></strong><sup>†</sup></div><br />which results in another spinor. 
I only know that this expression is relativistically significant, but since I haven't yet seen this operation in action, I have no physical intuition as to its significance.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-22066934110013772112010-02-26T11:03:00.000-08:002010-03-03T13:55:53.926-08:00The Classical SpinorIn the previous post we discussed the different relativistically significant quantities. We will now use these quantities to quantify physical parameters.<br /><br />Consider a particle that is travelling on a path with arbitrary acceleration. At any point on the path, it is possible to find a co-moving reference frame, or a frame where the particle appears to be instantaneously at rest. We will call this the "rest frame", even though we must continually change this frame as the particle progresses.<br /><br />The term "rest" in relativity does not mean "motionless", for even if the particle is not moving in space, it is moving in time. Strictly speaking it is moving in time at the speed of light. In the rest frame of the particle, the velocity is always along the time direction. Thus, we can represent the velocity of the particle with a scalar.<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>u</strong> = c</div><br />If we want to transform this velocity from the rest frame back to the frame where we are observing the particle, we need to apply a Lorentz transformation.<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>u</strong>' = <strong>LuL</strong><sup>†</sup> = c<strong>LL</strong><sup>†</sup></div><br />As we have previously seen, a para-vector such as <strong><span style="font-family:trebuchet ms;">u'</span></strong> can be a composition of spinors. Thus, we can consider the spinor <span style="font-family:trebuchet ms;"><strong>L</strong></span> to characterize the velocity of the particle. 
As part of his development of the <em><a href="http://en.wikipedia.org/wiki/Algebra_of_physical_space">Algebra of Physical Space</a></em>, W. E. Baylis chooses to assign a special name to the spinor <span style="font-family:trebuchet ms;"><strong>L</strong></span>. He calls it the Classical Spinor, or Classical Eigen-Spinor, and denotes it <span style="font-family:trebuchet ms;"><strong>Λ</strong></span>. Baylis uses units where c = 1, but we aren't going to do that here. Therefore we need to include this factor in the definition of the classical spinor.<br /><div style="text-align: center; font-family: trebuchet ms;"><strong>Λ</strong> = (c<sup>1/2</sup>) <strong>L</strong></div><br />This is called the classical spinor, because it is the spinor which is representative of the classical trajectory of the particle, and so in some respect representative of the particle itself. Using the classical spinor has many advantages. For instance, we not only can determine the velocity of the particle, but we can also determine the spatial orientation of the coordinate system that the particle resides in.<br /><br />We previously saw that the Lorentz transformation can be given in an exponential form, in terms of the generators of the transformation. We can likewise construct the classical spinor from generators, which are functions of the proper time of the particle τ. For instance, we can express a particle that is spinning at a constant rate as<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>Λ</strong> = (c<sup>1/2</sup>) e<sup>(1/2) <strong>ω</strong><em>τ</em></sup></div><br /><br />In this case, the generator of the transformation is the vector <strong>ω</strong><em>τ</em>. We call the vector <strong>ω</strong> the spatial rotation rate, and require that it is purely imaginary. The classical spinor can describe a particle with quantum-like spin, without the need of any quantum postulates. 
For instance, the classical spinor changes sign upon a full rotation, and requires two full rotations before the sign is restored. In this sense, a particle represented by a classical eigenspinor can be associated with a spin 1/2 particle, like an electron.<br /><br />If we want a particle that can accelerate as well as spin, then we allow the rotation rate to have real as well as imaginary parts. If the rotation rate has arbitrary real and imaginary parts, we call it the space-time rotation rate, and designate it as <strong>Ω</strong>.<br /><br />A classical eigenspinor with a constant space-time rotation rate represents a particle that is spinning and accelerating at a constant rate.<br /><div style="text-align: center; font-family: trebuchet ms;"><strong>Λ</strong> = (c<sup>1/2</sup>) e<sup>(1/2) <strong>Ω</strong><em>τ</em></sup></div><br />We can take a derivative with respect to τ in order to obtain what Baylis calls the equation of motion for the classical spinor.<br /><div style="text-align: center; font-family: trebuchet ms;">d<strong>Λ</strong>/d<em>τ</em> = (1/2) <strong>ΩΛ</strong></div><br />This equation ends up being valid, even when the space-time rotation rate is not constant.<br /><br />So what are the possible values that <strong>Ω</strong> can take? Knowing that the magnitude of the 4-velocity of a particle is constant, we know that the magnitude of the classical spinor is also constant. This constraint helps us determine the restriction on <strong>Ω</strong>.<br /><br /><div style="text-align: center; font-family: trebuchet ms;"><strong>Λ</strong><strong><span style="border-top: 2px solid black;">Λ</span></strong> = c e<sup>(1/2) (<strong>Ω</strong> + <strong><span style="border-top: 2px solid black;">Ω</span></strong>) <em>τ</em></sup></div><br />In order for the magnitude of <strong>Λ</strong> to be constant, we need the exponential factor to reduce to unity. This will happen if the quantity in the exponent involving <strong>Ω</strong> is always zero. 
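Both of these claims (the sign flip after one full rotation, and the equation of motion) can be verified numerically in the familiar 2 × 2 Pauli-matrix representation. The representation, units with c = 1, and the particular Ω below are all my own choices for the sketch:

```python
import cmath

def mmul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mscale(s, A):
    return [[s * A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=40):
    # matrix exponential by power series (fine for these small norms)
    out = [[1, 0], [0, 1]]
    term = [[1, 0], [0, 1]]
    for n in range(1, terms):
        term = mscale(1.0 / n, mmul(term, A))
        out = madd(out, term)
    return out

s1 = [[0, 1], [1, 0]]     # Pauli matrices standing in for e1 and e3
s3 = [[1, 0], [0, -1]]

# 1) purely imaginary rotation rate: one full turn flips the sign
def rot(angle):  # spatial rotation spinor about e3
    return expm(mscale(-0.5j * angle, s3))

pi = cmath.pi
assert abs(rot(2 * pi)[0][0] + 1) < 1e-9  # Lambda picks up a minus sign
assert abs(rot(4 * pi)[0][0] - 1) < 1e-9  # restored after two turns

# 2) equation of motion dLambda/dtau = (1/2) Omega Lambda for a
#    constant mixed boost + rotation rate (illustrative numbers)
Omega = madd(mscale(0.3, s1), mscale(0.2j, s3))
def Lam(tau):
    return expm(mscale(0.5 * tau, Omega))

h, tau = 1e-5, 0.7
lhs = mscale(1.0 / (2 * h), madd(Lam(tau + h), mscale(-1, Lam(tau - h))))
rhs = mscale(0.5, mmul(Omega, Lam(tau)))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
assert err < 1e-8
```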
This quantity in the exponent just happens to represent the scalar portion of <strong>Ω</strong>.<br /><br />There are two ways of interpreting this result. We may say that the physical quantity that we associate with <strong>Ω</strong> cannot have a scalar portion, or we can say that the physical quantity <em>can</em> have a scalar portion, but the scalar portion does not contribute to <strong>Λ</strong>. Though Baylis uses the first interpretation, I personally prefer the second. If you want to take the second interpretation, then you need to modify the expression for the equation of motion of <strong>Λ</strong>.<br /><br /><br /><div style="text-align: center; font-family: trebuchet ms;">d<strong>Λ</strong>/d<em>τ</em> = (1/2) <<strong>Ω</strong>><sub><em>V</em></sub><strong>Λ</strong></div><br />The reason I prefer this interpretation is that it introduces a symmetry: we can freely change the scalar portion of <strong>Ω</strong> without changing the resulting trajectory of the particle. This symmetry should have physical consequences, as all symmetries do. If these physical consequences can be justified, then this form of the equation will be justified.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-66114027575493548412010-02-23T09:12:00.000-08:002010-02-23T10:26:26.203-08:00Para-Vectors, Bipara-Vectors, and SpinorsWe can think of a para-vector as the linear combination of a real scalar with a real vector. The para-vector is the equivalent of the four-vector in tensor-based formalisms. If we were using tensor language, the four-vector would have been defined in terms of its transformation properties. Likewise, we can define a para-vector from its transformation properties.<br /><br />For instance, I can say that an object is a para-vector without making reference to the state of the components, but merely by noting the way that it transforms.
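The eigenspinor equation of motion above is easy to check numerically. The sketch below assumes (for illustration only; the post does not introduce it) the standard 2x2 Pauli-matrix representation of the algebra, with e0 the identity, e1 and e3 Pauli matrices, and the algebraic conjugate realized as the 2x2 matrix adjugate. The particular Ω used is an arbitrary scalar-free test value.

```python
import numpy as np

# Assumed Pauli-matrix representation (not part of the original post).
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bar(M):
    # Algebraic conjugate: for M = m0 + m.sigma, bar(M) = m0 - m.sigma,
    # which for 2x2 matrices is the adjugate tr(M)I - M.
    return np.trace(M) * np.eye(2) - M

def expm(M, terms=40):
    # Small self-contained matrix exponential via a Taylor series.
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A scalar-free space-time rotation rate: the real part generates a boost,
# the imaginary part a rotation (the coefficients are arbitrary test values).
Omega = 0.3 * e1 + 0.7j * e3

def Lam(tau):
    # The classical eigenspinor Lambda(tau) = e^{Omega tau / 2}, taking c = 1.
    return expm(0.5 * Omega * tau)

# Central-difference check of dLambda/dtau = (1/2) Omega Lambda at tau = 0.8.
tau, h = 0.8, 1e-5
dLam = (Lam(tau + h) - Lam(tau - h)) / (2 * h)
```

Because this Ω has no scalar part, the magnitude Λ bar(Λ) stays fixed at 1, exactly as the constraint in the post requires.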
Para-vectors transform like this:<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = <strong>LAL</strong><sup>†</sup></div><br />As previously stated, a bipara-vector is the product of 2 para-vectors. However, relativity dictates that any physically significant quantity is covariant, or able to maintain the same form after a Lorentz transformation. Consider what will happen if we merely multiply two para-vectors together<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>C</strong> = <strong>AB</strong></div><br />Now let's apply a Lorentz transformation to the equation<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>C</strong>' = <strong>LAL</strong><sup>†</sup><strong>LBL</strong><sup>†</sup></div><br />Due to the <span style="font-family:trebuchet ms;"><strong>L<sup>†</sup>L</strong></span> factor that ends up getting sandwiched in between <span style="font-family:trebuchet ms;"><strong>A</strong></span> and <span style="font-family:trebuchet ms;"><strong>B</strong></span>, we see that <span style="font-family:trebuchet ms;"><strong>C</strong></span> is not a covariant quantity - it changes its form after the transformation.
However, consider the definition<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>C</strong> = <strong>A<span style="BORDER-TOP: black 2px solid">B</span></strong><br /><br /><strong>C</strong>' = <strong>LAL</strong><sup>†</sup><span style="BORDER-TOP: black 2px solid"><strong>L</strong></span><sup>†</sup> <span style="BORDER-TOP: black 2px solid"><strong>B</strong></span> <span style="BORDER-TOP: black 2px solid"><strong>L</strong></span><br /><br /><strong>C</strong>' = <strong>LA</strong><span style="BORDER-TOP: black 2px solid"><strong>B</strong></span> <span style="BORDER-TOP: black 2px solid"><strong>L</strong></span><br /><br /><strong>C</strong>' = <strong>LC</strong><span style="BORDER-TOP: black 2px solid"><strong>L</strong></span></div><br />In the third line we use the fact that <span style="font-family:trebuchet ms;"><strong>L<span style="BORDER-TOP: black 2px solid">L</span></strong> = 1</span>. This allows the sandwich factor to be "dissolved", and allows <span style="font-family:trebuchet ms;"><strong>C'</strong></span> to take the same form as <span style="font-family:trebuchet ms;"><strong>C</strong></span>. Thus, according to this definition, <span style="font-family:trebuchet ms;"><strong>C</strong></span> is a covariant quantity. We have also derived the transformation rule for a bipara-vector. We can use this transformation law as the definition of a bipara-vector.<br /><br />Note that in both cases, the para-vector transforms to a para-vector, and the bipara-vector transforms to a bipara-vector. Both transformation laws also preserve the "group" property of the transformation.
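The covariance argument above can be verified numerically. The sketch below assumes (for illustration only) the 2x2 Pauli-matrix representation, with the algebraic conjugate as the matrix adjugate and the Hermitian conjugate as the matrix dagger; the boost rapidity and para-vector components are arbitrary test values.

```python
import numpy as np

e3 = np.array([[1, 0], [0, -1]], dtype=complex)

def bar(M):
    # Algebraic conjugate = 2x2 adjugate: tr(M)I - M.
    return np.trace(M) * np.eye(2) - M

# A boost along e3 with rapidity theta: L = cosh(theta/2) + sinh(theta/2) e3.
theta = 0.9
L = np.cosh(theta / 2) * np.eye(2) + np.sinh(theta / 2) * e3

# Two real (Hermitian) para-vectors with arbitrary test components.
A = np.array([[2.0, 0.3 - 0.4j], [0.3 + 0.4j, 1.0]])
B = np.array([[1.5, -0.2 + 0.1j], [-0.2 - 0.1j, 0.8]])

Ap = L @ A @ L.conj().T   # A' = L A L-dagger
Bp = L @ B @ L.conj().T   # B' = L B L-dagger

C = A @ bar(B)            # the covariant definition C = A bar(B)
Cp = Ap @ bar(Bp)
```

The first assertion below is the condition L bar(L) = 1; the second confirms C' = L C bar(L); the third shows the naive product AB fails to transform this way.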
In other words, if I were to perform 2 transformations in a row, the result would be equivalent to some single overall transformation.<br /><br />If we must have a transformation law in order for a quantity to be physically significant, we ask: if <span style="font-family:trebuchet ms;"><strong>L</strong></span> itself is physically significant, is it a para-vector, or a bipara-vector? In order to answer this, consider the action of two sequential transformations<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = <strong>L</strong><sub>2</sub><strong>LAL</strong><sup>†</sup><strong>L</strong><sub>2</sub><sup>†</sup><br /><br /><strong>B</strong>' = <strong>L</strong><sub>2</sub><strong>LB</strong><span style="BORDER-TOP: black 2px solid"><strong>L</strong></span> <span style="BORDER-TOP: black 2px solid"><strong>L</strong></span><sub>2</sub></div><br />In both of these cases <span style="font-family:trebuchet ms;"><strong>L</strong></span> acts before <span style="font-family:trebuchet ms;"><strong>L</strong><sub>2</sub></span>. This set of sequential transformations would have been equivalent to a single transformation by a composite <span style="font-family:trebuchet ms;"><strong>L'</strong></span>.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>L</strong>' = <strong>L</strong><sub>2</sub><strong>L</strong></div><br />We can consider here that <span style="font-family:trebuchet ms;"><strong>L</strong></span> has been transformed by <span style="font-family:trebuchet ms;"><strong>L</strong><sub>2</sub></span>. If we consider this to be a valid transformation law, then we see that <span style="font-family:trebuchet ms;"><strong>L</strong></span> does not transform like a para-vector, or a bipara-vector. We call quantities that transform in this way "Spinors".
Spinors transform with only a single application of the transformation like this:<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>S</strong>' = <strong>LS</strong></div><br />Spinors behave like basic building blocks of relativistically significant quantities. For instance, we can use two spinors to construct a para-vector.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong> = <strong>S</strong><sub>1</sub><strong>S</strong><sub>2</sub><sup>†</sup></div><br />This quantity obeys the transformation law of a para-vector.<br /><br />We can also use two spinors to construct a bipara-vector<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>B</strong> = <strong>S</strong><sub>1</sub><span style="BORDER-TOP: black 2px solid"><strong>S</strong></span><sub>2</sub></div><br />This quantity obeys the transformation law of a bipara-vector.<br /><br />To recap, here are our 3 relativistically significant quantities<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>S</strong>' = <strong>LS</strong> ; spinor<br /><br /><strong>A</strong>' = <strong>LAL</strong><sup>†</sup> ; para-vector<br /><br /><strong>B</strong>' = <strong>LB<span style="BORDER-TOP: black 2px solid">L</span></strong> ; bipara-vector</div>Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-80695699735521433262010-02-22T14:42:00.000-08:002010-02-22T15:46:43.113-08:00Lorentz TransformationsNow we begin to introduce a little physics. (The form of the multiplication law is already hiding a bit of physics.)<br /><br />We are going to derive a Lorentz transformation. 
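The three transformation laws recapped above fit together: a para-vector built as S1 S2-dagger and a bipara-vector built as S1 bar(S2) each inherit the correct law when every spinor transforms as S' = LS. A numerical sketch, again assuming (for illustration) the 2x2 Pauli-matrix representation with random test spinors:

```python
import numpy as np

rng = np.random.default_rng(0)

def bar(M):
    # Algebraic conjugate = 2x2 adjugate: tr(M)I - M.
    return np.trace(M) * np.eye(2) - M

def rand_c():
    # A random 2x2 complex matrix used as a test spinor.
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

S1, S2 = rand_c(), rand_c()

# Any L with L bar(L) = 1 (i.e. det L = 1) works; normalize a random matrix.
M = rand_c()
L = M / np.sqrt(np.linalg.det(M))

A = S1 @ S2.conj().T   # para-vector-like composite  S1 S2-dagger
B = S1 @ bar(S2)       # bipara-vector-like composite S1 bar(S2)

S1p, S2p = L @ S1, L @ S2   # spinor law S' = LS
```

The assertions confirm that the composites obey A' = L A L-dagger and B' = L B bar(L), with no extra structure needed.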
To be specific, we are going to derive a proper Lorentz transformation; in other words, the transformation can include boosts and rotations, but not inversions.<br /><br />We may expect that a general transformation can be obtained by the following rule<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = <strong>S A R</strong></div><br />We require two elements, <span style="font-family:trebuchet ms;"><strong>S</strong></span> and <span style="font-family:trebuchet ms;"><strong>R</strong></span>, since multiplication on the left is a distinct operation from multiplication on the right.<br /><br />Using this general form, we first require that the transformation does not change the real-ness or imaginary-ness of <span style="font-family:trebuchet ms;"><strong>A</strong></span>. We can do this by assuming that <span style="font-family:trebuchet ms;"><strong>A</strong></span> is real, and setting <span style="font-family:trebuchet ms;"><strong>A</strong></span>' equal to the Hermitian conjugate of <span style="font-family:trebuchet ms;"><strong>A</strong></span>'. This condition is stated as<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>R</strong><sup>†</sup> <strong>A</strong><sup>†</sup> <strong>S</strong><sup>†</sup> = <strong>S A R</strong></div><br />Since <span style="font-family:trebuchet ms;"><strong>A</strong></span> is entirely real, this condition can only be satisfied if<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>R</strong> = <strong>S</strong><sup>†</sup></div><br />Thus, the form of the proper Lorentz transformation must be<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = <strong>S A S</strong><sup>†</sup></div><br />The other condition on a Lorentz transformation is that it leaves the space-time interval invariant. The space-time interval is the length of the displacement in space-time.
We know how to find such a length - we use a dot product, which has previously been defined for our algebra. In other words, the dot product defined for our algebra must be left invariant by a Lorentz transformation.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><<strong>A</strong>'<span style="BORDER-TOP: black 2px solid"><strong>B</strong>'</span>><sub><em>S</em></sub> = <<strong>A</strong><span style="BORDER-TOP: black 2px solid"><strong>B</strong></span>><sub><em>S</em></sub></div><br />We can expand the left hand side of the equation as follows<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(1/2) (<strong>S A S</strong><sup>†</sup><span style="BORDER-TOP: black 2px solid"><strong>S</strong></span><sup>†</sup> <span style="BORDER-TOP: black 2px solid"><strong>B</strong></span> <span style="BORDER-TOP: black 2px solid"><strong>S</strong></span> + <strong>S B S</strong><sup>†</sup><span style="BORDER-TOP: black 2px solid"><strong>S</strong></span><sup>†</sup> <span style="BORDER-TOP: black 2px solid"><strong>A</strong></span> <span style="BORDER-TOP: black 2px solid"><strong>S</strong></span>) = <strong>S</strong> <span style="BORDER-TOP: black 2px solid"><strong>S</strong></span>(<span style="BORDER-TOP: black 2px solid"><strong>S</strong></span> <strong>S</strong>)<sup>†</sup><<strong>A</strong><span style="BORDER-TOP: black 2px solid"><strong>B</strong></span>><sub><em>S</em></sub></div><br />Thus the dot product remains invariant only if the factor involving <span style="font-family:trebuchet ms;"><strong>S</strong></span> is equal to 1. For a proper Lorentz transformation, this is accomplished if<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>S <span style="BORDER-TOP: black 2px solid">S</span></strong> = 1</div><br />We give the symbol <span style="font-family:trebuchet ms;"><strong>L</strong></span> to such a quantity that satisfies these conditions.
The components of <span style="font-family:trebuchet ms;"><strong>L</strong></span> can be parametrized in terms of a direction <span style="font-family:trebuchet ms;">N</span> and an angle <span style="font-family:trebuchet ms;"><em>θ</em></span>.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>L</strong> = ( cosh(<em>θ</em>/2), N sinh(<em>θ</em>/2) )</div><br />In this parametrization <span style="font-family:trebuchet ms;"><em>θ</em></span> is allowed to be real, imaginary, or complex. If <span style="font-family:trebuchet ms;"><em>θ</em></span> is purely real then it represents the rapidity of a Lorentz boost in the direction of <span style="font-family:trebuchet ms;">N</span>. If <span style="font-family:trebuchet ms;"><em>θ</em></span> is purely imaginary then it represents the rotation angle of a spatial rotation around an axis <span style="font-family:trebuchet ms;">N</span>. If <span style="font-family:trebuchet ms;"><em>θ</em></span> is complex, then there is a combination of boost and rotation, in a screw-like motion.<br /><br />We can also use an exponential form to describe <span style="font-family:trebuchet ms;"><strong>L</strong></span>. For instance, a general Lorentz transformation can be written as<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>L</strong> = e<sup>(1/2) (Γ + iΘ)</sup></div><br />Here <span style="font-family:trebuchet ms;">Γ</span> and <span style="font-family:trebuchet ms;">Θ</span> are pure vectors that are entirely real, and they represent the 6 generators of Lorentz transformations. Using this form, we can represent the Lorentz transformation as<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = e<sup>(1/2) (Γ + iΘ)</sup> <strong>A</strong> e<sup>(1/2) (Γ - iΘ)</sup></div><br />This form is reminiscent of the group theoretic operator approach that is used in quantum mechanics. 
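The exponential form of L above can be checked numerically: a scalar-free generator automatically gives L bar(L) = 1, and the resulting transformation leaves the dot product invariant. A sketch assuming (for illustration only) the 2x2 Pauli-matrix representation, with arbitrary test generator components:

```python
import numpy as np

# Assumed representation: e_k -> Pauli matrices sigma_k.
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(v):
    # v . sigma for a real 3-vector v.
    return sum(c * s for c, s in zip(v, sig))

def bar(M):
    return np.trace(M) * np.eye(2) - M   # algebraic conjugate (adjugate)

def scal(M):
    return np.trace(M) / 2               # scalar part <M>_S

def expm(M, terms=40):
    # Self-contained matrix exponential via a Taylor series.
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

G = vec([0.4, 0.0, 0.2])     # boost generator Gamma (test values)
Th = vec([0.0, 0.5, 0.1])    # rotation generator Theta (test values)
L = expm(0.5 * (G + 1j * Th))

# Two real para-vectors A = (a0, A), B = (b0, B) with test components.
A = 1.2 * np.eye(2) + vec([0.3, -0.1, 0.7])
B = 0.6 * np.eye(2) + vec([-0.2, 0.4, 0.1])
Ap, Bp = L @ A @ L.conj().T, L @ B @ L.conj().T
```

Because the generator (Γ + iΘ)/2 is traceless (scalar-free), L bar(L) = 1 holds exactly, and the scalar part of A' bar(B') matches that of A bar(B).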
This will not be the last time we are reminded of quantum mechanics...Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-74579924270261728742010-02-22T14:05:00.000-08:002010-02-22T14:41:45.186-08:00(Scalar, Vector) NotationWe have already determined how the basis elements of the algebra multiply with each other. However, if each element can have up to 8 components, then the product of two of these elements could have up to 64 terms. This is not very friendly.<br /><br />In this post we will present the algebraic multiplication law in terms of familiar vector operations such as dot and cross products.<br /><br />We can represent an arbitrary element of the algebra as a linear combination of 4 pieces, using (scalar, vector) notation<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong> = (<em>a</em> + i<em>b,</em> C + iD)</div><br />Here the scalar portions are designated by lowercase italicized symbols, while the vector portions are uppercase non-italicized symbols. These 4 quantities represent the scalar, pseudo-scalar, vector, and pseudo-vector quantities.<br /><br />We will first consider the multiplication of elements that are purely real.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong> = (<em>a, </em>A)<br /><strong>B</strong> = (<em>b,</em> B)</div><br />The multiplication of two such elements results in 16 terms. These terms can be written using the vector dot and cross products as<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>AB</strong> = (<em>ab</em> + A ∙ B, <em>a</em>B + <em>b</em>A + iA × B)</div><br />The right hand side of this equation contains 3 distinct portions, a scalar, a vector, and a pseudo-vector term.
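The (scalar, vector) product rule above can be verified against direct multiplication. The sketch below assumes (for illustration only) the 2x2 Pauli-matrix representation, where an element (a, A) becomes the matrix a + A.sigma; the components are arbitrary test values.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def element(s, v):
    # (scalar, vector) -> the matrix s + v.sigma (v may be complex).
    return s * np.eye(2) + sum(c * p for c, p in zip(v, sig))

a, A = 1.5, np.array([0.2, -0.7, 0.4])
b, B = -0.3, np.array([0.6, 0.1, -0.5])

# Left side: the direct algebraic (matrix) product.
lhs = element(a, A) @ element(b, B)

# Right side: the (scalar, vector) formula with dot and cross products:
# AB = (ab + A.B, aB + bA + i A x B)
rhs = element(a * b + A @ B, a * B + b * A + 1j * np.cross(A, B))
```

The two sides agree term by term, which is the Pauli identity (A.sigma)(B.sigma) = A.B + i(A x B).sigma in disguise.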
Now let <span style="font-family:trebuchet ms;"><strong>A</strong></span> and <span style="font-family:trebuchet ms;"><strong>B</strong></span> represent general elements. This can be done in the following way<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong> = <strong>C</strong> + i<strong>D</strong><br /><strong>B</strong> = <strong>E</strong> + i<strong>F</strong><br /><strong>AB</strong> = <strong>CE</strong> - <strong>DF</strong> + i<strong>CF</strong> + i<strong>DE</strong></div><br /><br />Here <span style="font-family:trebuchet ms;"><strong>C</strong></span>, <span style="font-family:trebuchet ms;"><strong>D</strong></span>, <span style="font-family:trebuchet ms;"><strong>E</strong></span>, and <span style="font-family:trebuchet ms;"><strong>F</strong></span> are all purely real algebraic elements.<br /><br />If an algebraic element is purely real, it is called a para-vector. Some examples of para-vectors are position, or momentum<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>x</strong> = (c<em>t</em>, X)<br /><strong>p</strong> = (<em>E</em>/c, P)</div><br />We have applied the appropriate factor of c in both cases, in order for the scalar portion to have the correct scale with respect to the vector portion - relativistically speaking. We see that we have associated known relativistic four vectors with algebraic para-vectors.<br /><br />The multiplication of two para-vectors results in a bipara-vector. Such a quantity usually corresponds to a multi-indexed tensor in standard relativistic treatments.
The conjugate operations allow us to split any element of the algebra into 4 sub-categories:<br /><ul><li>Real Scalar</li><li>Imaginary Scalar</li><li>Real Vector</li><li>Imaginary Vector</li></ul><p>We have already determined the physical significance of the scalar vs. vector categories - we use this to encode the space/time split. Can we determine a physical significance for the real and imaginary split as well?</p><p>These categories are defined by their behavior when one of the two conjugates is applied. For instance, objects in the scalar category are unaffected by the algebraic conjugate, and objects in the real category are unaffected by the Hermitian conjugate.</p><p>How do these categories behave when both conjugates are applied at the same time?</p><ul><li>Real Scalar - unchanged</li><li>Imaginary Scalar - flips sign</li><li>Real Vector - flips sign</li><li>Imaginary Vector - unchanged</li></ul><p>This behavior can be explained if we state that applying both conjugates simultaneously has the physical meaning of spatial inversion. We should already know that there are 4 different types of quantities that behave differently under spatial inversion.</p><ul><li>Scalars - unchanged under spatial inversion</li><li>Pseudo-Scalars - flip sign under spatial inversion</li><li>Vectors - flip sign under spatial inversion</li><li>Pseudo-Vectors - unchanged under spatial inversion</li></ul><p>If we assign the physical meaning of spatial inversion to the action of both conjugates applied at the same time, then we can determine an actual physical meaning for the imaginary portion of the algebra - namely, purely imaginary quantities are pseudo-quantities.</p><p>For instance, we might represent both time and volume with a real number.
However, we would expect volume to change sign upon spatial inversion, whereas time should not.</p><p>Likewise, the electric field should change sign upon a spatial inversion, but the magnetic field should not.</p><p>We have known about pseudo-quantities for a very long time. However, it is the usual practice to place these pseudo-quantities in the same 4 dimensional space as the non-pseudo-quantities, with the stipulation that "you can't add vectors with pseudo-vectors". In other words, a physics equation cannot contain both vector and pseudo-vector pieces.</p><p>The reason for this is that these pieces are linearly independent. The introduction of the imaginary unit in the algebra helps us distinguish pseudo-quantities, and also enforces the linear independence of these quantities.</p><p>In 3 dimensions, the primary example of a pseudo-vector is the cross product<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">C = A × B</div><br />and the example of a pseudo-scalar is the triple product<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">D = A ∙ (B × C)</div><br />With this identification of the physical meaning behind the real/imaginary split, we can safely state that the algebra represents an 8 dimensional space, without needing to postulate any weird parallel universes. Rather we merely provide 4 dimensions for normal quantities, and 4 linearly independent dimensions for pseudo-quantities.<br /><br />So if we are using an 8 dimensional space, how come we are using a 4 dimensional metric? The answer is that the "pseudo" half, or imaginary half, of the space uses a metric that is implied, and which can be determined. The pseudo metric is the same as the non-pseudo metric, except for an overall sign change. Remember that we chose the (1, -1, -1, -1) metric. The pseudo metric is then (-1, 1, 1, 1).<br /><br />Most texts on relativity state that the overall sign of the metric is unimportant.
We see here that the sign of the metric distinguishes the pseudo space from the non-pseudo space. So would it have made a difference if we had chosen the (-1, 1, 1, 1) metric to begin with? The answer is no. I could still choose the time component to correspond with the scalar basis element, but I would then have to choose <span style="font-family:trebuchet ms;">i</span> as the scalar basis element. I would still be able to derive the rest of the algebra. The only difference would be the assignment of <span style="font-family:trebuchet ms;">i</span> to pseudo quantities.<br /><br />In other words, the choice of metric is related to a duality principle, which has much more physical significance than a mere sign convention. In fact, the duality principle that allows us to have a choice of metrics is the same duality principle which allows us to compose a dual electromagnetic field.<br /><br />In order to find the dual representation of any algebraic element, we need merely multiply by <span style="font-family:trebuchet ms;">-i</span>. This operation is equivalent to the Hodge dual used in other representations. The only problem with this dual operator is that it doesn't exactly <em>undo</em> itself. Meaning, two applications of <span style="font-family:trebuchet ms;">-i</span> result in an overall sign change.<br /><br />I personally like to use the following for a type of dual operator, since it undoes itself nicely:<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong>' = i<strong>A</strong><sup>†</sup><br /><strong>A</strong>'' = i<strong>A</strong>'<sup>†</sup> = i(i<strong>A</strong><sup>†</sup>)<sup>†</sup> = <strong>A</strong></div><br />This duality is a very profound part of nature, or at least of our representation of nature.
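The contrast between the two dual operators above is easy to demonstrate. A minimal sketch, assuming (for illustration) a 2x2 matrix representation of an algebraic element, with the Hermitian conjugate as the matrix dagger:

```python
import numpy as np

rng = np.random.default_rng(1)
# A random 2x2 complex matrix standing in for a general algebraic element.
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def hodge(M):
    # The Hodge-like dual: multiply by -i.
    return -1j * M

def dual(M):
    # The alternative dual A' = i A-dagger, which is an involution.
    return 1j * M.conj().T

twice_hodge = hodge(hodge(A))   # two applications give -A, not A
twice_dual = dual(dual(A))      # two applications give A back exactly
```

The assertions show the sign-change problem with the -i dual and the clean round trip of the i A-dagger dual.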
It seems like an easy thing to overlook, and therefore it probably contains a bounty of rich truths - hidden in plain sight.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-65616753487928782010-02-22T13:05:00.000-08:002010-02-22T13:27:41.167-08:00The Hermitian ConjugateNow that we have complex scalars to work with, we need to determine what the imaginary portion of the algebra physically represents.<br /><br />To do this, we will introduce the Hermitian conjugate. The Hermitian conjugate is designated by a superscripted dagger, like <span style="font-family:trebuchet ms;"><strong>A</strong><sup>†</sup></span>.<br /><br />When the Hermitian conjugate acts on a scalar quantity, it has the effect of a standard complex conjugate.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><em>a</em><sup>†</sup> = <em>a</em><sup>*</sup></div><br />When the Hermitian conjugate acts on a product of algebraic elements, it reverses the order of the elements, similar to the action of the algebraic conjugate<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(<strong>AB</strong>)<sup>†</sup> = <strong>B</strong><sup>†</sup> <strong>A</strong><sup>†</sup></div><br /><br />If the Hermitian conjugate does nothing when it acts on an element, that element is called "Hermitian" or real. Anti-Hermitian, or imaginary, elements change sign when acted on by the Hermitian conjugate. The basis elements are assumed to be real elements.
Thus, in order to take the Hermitian conjugate of a given element, we merely take the complex conjugate of the components<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>A</strong><sup>†</sup> = (<em>a</em><sup>μ</sup>)<sup>*</sup><strong>e</strong><sub>μ</sub></div><br /><br />Using the algebraic conjugate we were able to split an algebraic element into a scalar and vector portion, which we physically identify with the time and space portions. We can make a similar split using the Hermitian conjugate into real and imaginary portions.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><<strong>A</strong>><sub><em>R</em></sub> = (1/2) (<strong>A</strong> + <strong>A</strong><sup>†</sup>)<br /><<strong>A</strong>><sub><em>I</em></sub> = (1/2) (<strong>A</strong> - <strong>A</strong><sup>†</sup>)</div><br /><br />When using basis elements that are derived from the Minkowski metric, we see that we can change the sign on the <span style="font-family:trebuchet ms;">i</span>'s merely by reversing the order of all products of basis elements. For this reason, the Hermitian conjugate is also sometimes referred to as the "Reversal operator". In other words, if we can represent any element as the product of a set of real elements, the Reversal operator merely reverses the order of all of the factors - which ends up having the same effect as the Hermitian conjugate.
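The Hermitian-conjugate split defined above can be checked directly. A sketch assuming (for illustration only) a 2x2 matrix representation, where the Hermitian conjugate is the matrix dagger and a random matrix stands in for a general element:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_c():
    # A random 2x2 complex matrix standing in for a general element.
    return rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

A, B = rand_c(), rand_c()

real_A = 0.5 * (A + A.conj().T)   # <A>_R : unchanged by the dagger
imag_A = 0.5 * (A - A.conj().T)   # <A>_I : flips sign under the dagger
```

The assertions confirm the split reconstructs A, that each piece has the stated symmetry, and that the dagger reverses products: (AB)-dagger = B-dagger A-dagger.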
Using this metric requires that we introduce 4 basis elements. The fundamental identity applied to a 4x4 matrix gives us 16 expressions that will encode how we multiply these 4 basis elements.<br /><br />When we say we want to use the Minkowski metric, we are actually implying that we want to discuss space-time, not just any old vector space which has a Minkowski norm.<br /><br />At this point, we have a decision to make regarding how we will represent space and time in this algebra. Although space and time have the same footing in special relativity, we know from practical experience that there is a physical distinction between space and time. So far, in our algebra, the only thing that we have which might encode this distinction is the ability to separate quantities out into scalar and vector portions.<br /><br />If we decide to associate the scalar portion of an element with the time component, and the vector portion of the element with the spatial components, then we also make a statement about the physical interpretation of the conjugate operation. We are saying that the algebraic conjugate does nothing to the time component since it is a pure scalar, and it flips the sign of the space components, which are pure vectors.<br /><br />If we make the decision to encode the space/time boundary with the scalar/vector boundary, then we can immediately determine the identity of <span style="font-family:trebuchet ms;"><strong>e</strong><sub>0</sub></span>. The basis vector associated with time must be the basis associated with scalars.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>e</strong><sub>0</sub> = 1</div><br />Making this identification automatically solves 7 of the 16 expressions introduced by the fundamental identity.
We now only need to figure out the last 9, which correspond with how the spatial basis elements multiply with each other.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub><em>i</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>j</em></sub></span> = -<strong>e</strong><sub><em>j</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>i</em></sub></span> ; <em>i</em> ≠ <em>j</em></div><br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub><em>i</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>i</em></sub></span> = -1</div><br />Since the algebraic conjugate only applies an overall change of sign to the purely spatial basis elements, we can recast these equations into<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub><em>i</em></sub><strong>e</strong><sub><em>j</em></sub> = - <strong>e</strong><sub><em>j</em></sub><strong>e</strong><sub><em>i</em></sub> ; <em>i</em> ≠ <em>j</em><br /><br /></div><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(<strong>e</strong><sub><em>i</em></sub>)<sup>2</sup> = 1</div><br />The first equation states that if we swap the order of any 2 spatial basis elements, we need to change the sign of the product. The second equation states that any basis element multiplied by itself is equal to 1.<br /><br />As of yet we have not determined how to represent the product of 2 spatial basis elements. Before we can do this, we need to take a look at the product of all 3 distinct spatial basis elements<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub></div><br />We don't really know what this quantity is yet, but we do know that if we change the order of the basis elements, it will change the overall sign of the quantity.
We also know that applying an algebraic conjugate to any individual factor will also change the sign of the product.<br /><br />The first thing we are going to do is apply the algebraic conjugate to this triple product.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub></span> = <span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub>3</sub></span> <span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub>2</sub></span> <span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub>1</sub></span></div><br />Since each of these elements is a pure spatial quantity, the action of the conjugate accumulates to a single minus sign on the product. The application of the conjugate has applied an overall odd permutation to the order of the elements, which results in a second minus sign. Thus we have the identity<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub></span> = <strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub></div><br /><br />In other words, the algebraic conjugate does nothing to the triple product. From our definition, then, the triple product is a scalar.
We might assume that the triple product must therefore be ±1, the only real scalars with unit magnitude. However, consider what happens if we square the triple product<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(<strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub>)(<strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub>) = -<strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub><strong>e</strong><sub>3</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>1</sub> = -<strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>1</sub> = -<strong>e</strong><sub>1</sub><strong>e</strong><sub>1</sub> = -1</div><br />Thus, the triple product is a scalar that gives -1 when squared. No real number can do this, so the triple product must be ±<span style="font-family:trebuchet ms;">i</span>; choosing the positive sign, we can make the identification<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub> = i</div><br />Consider multiplying the triple product on the right by <span style="font-family:trebuchet ms;"><strong>e</strong><sub>3</sub></span><br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub><strong>e</strong><sub>3</sub> = <strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub> = i<strong>e</strong><sub>3</sub></div><br />We can perform a similar trick in order to determine the products of the other spatial basis vectors. 
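As a numerical aside (not part of the original derivation): the 2×2 Pauli matrices are one well-known concrete representation in which these rules can be checked directly. The sketch below assumes that representation, with <strong>e</strong><sub>1</sub>, <strong>e</strong><sub>2</sub>, <strong>e</strong><sub>3</sub> as the three Pauli matrices.

```python
import numpy as np

# One concrete (assumed) representation: the Pauli matrices.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# (e_i)^2 = 1, and distinct spatial elements anticommute
assert np.allclose(e1 @ e1, I2)
assert np.allclose(e1 @ e2, -(e2 @ e1))

# The triple product squares to -1 and plays the role of the imaginary unit
triple = e1 @ e2 @ e3
assert np.allclose(triple @ triple, -I2)
assert np.allclose(triple, 1j * I2)

# e1 e2 = i e3, plus the cyclic permutations found by the same trick
assert np.allclose(e1 @ e2, 1j * e3)
assert np.allclose(e2 @ e3, 1j * e1)
assert np.allclose(e3 @ e1, 1j * e2)
```

In this representation the "imaginary unit" is literally <span style="font-family:trebuchet ms;">i</span> times the identity matrix, which is why it commutes with everything.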
The end result is the following list of multiplication rules<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(<strong>e</strong><sub><em>μ</em></sub>)<sup>2</sup> = 1; all <em>μ</em><br /><strong>e</strong><sub>0</sub><strong>e</strong><sub><em>μ</em></sub> = <strong>e</strong><sub><em>μ</em></sub>; all <em>μ</em><br /><strong>e</strong><sub>1</sub><strong>e</strong><sub>2</sub> = -<strong>e</strong><sub>2</sub><strong>e</strong><sub>1</sub> = i<strong>e</strong><sub>3</sub><br /><strong>e</strong><sub>2</sub><strong>e</strong><sub>3</sub> = -<strong>e</strong><sub>3</sub><strong>e</strong><sub>2</sub> = i<strong>e</strong><sub>1</sub><br /><strong>e</strong><sub>3</sub><strong>e</strong><sub>1</sub> = -<strong>e</strong><sub>1</sub><strong>e</strong><sub>3</sub> = i<strong>e</strong><sub>2</sub></div><br />The good news is that these rules encapsulate the entire multiplication rule for the algebra. Since we are using the Minkowski metric, we have basically built special relativity right into the algebra itself. The bad news is that our scalars are no longer real numbers, rather they are complex numbers. Aside from not knowing up front what physical significance the imaginary unit has, we have also doubled the size of our algebra from 4 dimensions to 8. We still only have 4 basis vectors, but our scalars are now a 2 dimensional sub-algebra. No need to fear - the complexities of complex scalars will be addressed shortly.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-45161582306643138782010-02-22T10:34:00.000-08:002010-02-24T09:03:22.764-08:00The Underlying Vector SpaceSince it is possible to represent physical quantities with vectors, we desire to somehow connect our algebra to this vector space. We can state that the algebra has all of the linear properties that the vector space has. 
The only thing that a vector space has, which our algebra does not have, is a dot product.<br /><br />The dot product of a vector space introduces a "norm" function, by which we can identify a particular scalar value that is characteristic of an element of the vector space. We have, up to now, defined two characteristic scalar values. We will use the value of the determinant to represent the "norm", or the "length", of the element.<br /><br />So, how are we going to go about associating this algebra with a vector space?<br /><br />First, we will assign a set of basis elements to the algebra. These basis elements will be denoted as a bold lower case <span style="font-family:trebuchet ms;"><strong>e</strong></span>. Our algebra represents a multi-dimensional linear space, and so we will need to distinguish the basis elements that correspond to each dimension. This is done by a subscripted index variable, as is standard in most relativistic notation - i.e. <span style="font-family:trebuchet ms;"><strong>e</strong><sub><em>μ</em></sub></span>.<br /><br />Every element of the algebra can now be considered as a linear combination of these basis vectors. The scalars that multiply the basis vectors are called the components. The components are generally represented by lower case italicized symbols with superscripted index variables - i.e. <span style="font-family:trebuchet ms;"><em>a<sup>μ</sup></em></span>.<br /><br />The index variables represent the particular index we are interested in. We will use standard relativistic conventions with these index variables. First, if an upper and lower index match, then we sum over all values of the index. Second, if the index can represent any of the space-time dimensions then it is represented by a lower case Greek letter. 
If the index represents only one of the spatial dimensions, then it is represented by a lowercase Latin letter.<br /><br />Using the linear combination of basis elements, as well as the index conventions, we can represent an element <span style="font-family:trebuchet ms;"><strong>A</strong></span> in terms of its components <span style="font-family:trebuchet ms;"><em>a<sup>μ</sup></em></span> like this:<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>A</strong> = <em>a<sup>μ</sup></em><strong>e</strong><sub><em>μ</em></sub></div><br />We expect that the components of a particular element of the algebra should be the same as the components of the corresponding quantity in the associated vector space.<br /><br />Now, let's consider what might happen if we try to associate the determinant of an element <span style="font-family:trebuchet ms;"><strong>A</strong></span> with the dot product of the components of <span style="font-family:trebuchet ms;"><strong>A</strong></span>.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>A<span style="BORDER-TOP: black 2px solid">A</span></strong> = <em>a<sup>μ</sup>a<sup>ν</sup>g<sub>μν</sub></em></div><br />Here <span style="font-family:trebuchet ms;"><em>g</em></span> is the metric of the associated vector space. 
We can expand the left hand side in terms of the components of <span style="font-family:trebuchet ms;"><strong>A</strong></span>.<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>A<span style="BORDER-TOP: black 2px solid">A</span></strong> = <em>a<sup>μ</sup><strong>e</strong><sub><em>μ</em></sub><span style="BORDER-TOP: black 2px solid"><em>a<sup>ν</sup></em><strong>e</strong><sub><em>ν</em></sub></span></em></div><br />Since <span style="font-family:trebuchet ms;"><strong>A<span style="BORDER-TOP: black 2px solid">A</span></strong></span> is already a scalar, we can take the scalar part without changing anything.<br /><br /><div style="TEXT-ALIGN: center" face="trebuchet ms"><strong>A<span style="BORDER-TOP: black 2px solid">A</span></strong> = <em>a<sup>μ</sup>a<sup>ν</sup></em> (1/2) (<strong>e</strong><sub><em>μ</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>ν</em></sub></span> + <strong>e</strong><sub><em>ν</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>μ</em></sub></span>)</div><br />We can now form a relationship between the products of the basis elements of the algebra and the metric of the associated vector space<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(1/2) (<strong>e</strong><sub><em>μ</em></sub> <span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>ν</em></sub></span> + <strong>e</strong><sub><em>ν</em></sub><span style="BORDER-TOP: black 2px solid"><strong>e</strong><sub><em>μ</em></sub></span>) = <em>g<sub>μν</sub></em></div><br />This relationship is the fundamental identity which allows us to define multiplication of the basis vectors in such a way that we are assured that the components of the element in the algebra are identical to the components of the vectors in the associated vector space. This relationship also DEFINES the multiplication rules between the basis elements. 
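A numerical aside (my own, not in the original post): this fundamental identity can be checked in a concrete matrix representation. The sketch below assumes <strong>e</strong><sub>0</sub> is the 2×2 identity, <strong>e</strong><sub>1..3</sub> are the Pauli matrices, and the algebraic conjugate is the matrix adjugate, tr(<strong>A</strong>)<em>I</em> - <strong>A</strong>, so that <strong>A</strong> times its conjugate is det(<strong>A</strong>) times the identity.

```python
import numpy as np

# Assumed concrete representation: e0 = identity, e1..e3 = Pauli matrices.
e = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def bar(A):
    """Algebraic conjugate: for 2x2 matrices this is the adjugate,
    tr(A) I - A, which satisfies A @ bar(A) = det(A) I."""
    return np.trace(A) * np.eye(2, dtype=complex) - A

g = np.diag([1, -1, -1, -1])  # Minkowski metric

# (1/2)(e_mu bar(e_nu) + e_nu bar(e_mu)) should equal g_mu_nu (times identity)
for mu in range(4):
    for nu in range(4):
        sym = 0.5 * (e[mu] @ bar(e[nu]) + e[nu] @ bar(e[mu]))
        assert np.allclose(sym, g[mu, nu] * np.eye(2))
```

All sixteen symmetrized products come out as scalars equal to the corresponding metric components, which is exactly what the identity demands.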
Thus, the metric is what defines the multiplication rule for the algebra. Since the metric is a non-trivial aspect of physics, we have a clue here that the multiplication rule for the algebra actually carries substantial physics with it.<br /><br />This relationship between basis elements and the metric is very similar to what is called the "Fundamental Identity of a Clifford Algebra". The only difference is that the Clifford Algebra version does not contain any algebraic conjugates.<br /><br />Of course, my version is better :)<br /><br />The dot product between two different elements is defined as<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><<strong>A<span style="BORDER-TOP: black 2px solid">B</span></strong>><sub><em>S</em></sub> = <em>a<sup>μ</sup>b<sup>ν</sup>g<sub>μν</sub></em></div><br />If you would like a generalized "dot product" that can be associated with any square matrices, it would have the following form<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">(1/<em>n</em>) Tr(<strong>A </strong>Adjoint(<strong>B</strong>)) = <em>a<sup>μ</sup>b<sup>ν</sup>g<sub>μν</sub></em></div><br />Here, <span style="font-family:trebuchet ms;"><em>n</em></span> is the dimension of the matrix.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-57761300772322626652010-02-22T08:54:00.000-08:002010-02-22T10:33:32.430-08:00Determinant and TraceSo far we have discovered 2 scalars that can characterize a particular element of the algebra. 
The first is given by<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><strong>A<span style="BORDER-TOP: black 2px solid">A</span></strong> = <em>a</em></div><br />If we were to represent our algebra by square matrices, then this quantity would be the determinant of the matrix which represents <span style="font-family:trebuchet ms;"><strong>A</strong></span>.<br /><br />The second characterizing scalar is called the "scalar part" of <span style="font-family:trebuchet ms;"><strong>A</strong></span>, and is given by<br /><br /><div style="TEXT-ALIGN: center;font-family:trebuchet ms;" ><<strong>A</strong>><sub><em>S</em></sub> = 1/2(<strong>A</strong> + <strong><span style="BORDER-TOP: black 2px solid">A</span></strong>)</div><br />To see how these two scalars relate to each other, we need to introduce the exponential function. Consider that we have an element that is expressed as the exponential of <span style="font-family:trebuchet ms;"><strong>A</strong></span><br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">e<sup><strong>A</strong></sup></div><br />Taking the exponential of an algebraic element that we don't know much about may seem ambiguous. However, the exponential is defined by the convergent infinite sum<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">e<sup><strong>A</strong></sup> = <span style="font-size:130;">∑</span> (1/<em>n</em>!) 
<strong>A</strong><sup><em>n</em></sup></div><br />Now let us determine what happens if we take the algebraic conjugate of <span style="font-family:trebuchet ms;">e<sup><strong>A</strong></sup></span>.<br /><br />First, we know that the conjugate distributes over addition, so it gets applied to each term of the infinite sum separately.<br /><br />Next, we know that the conjugate only affects <span style="font-family:trebuchet ms;"><strong>A</strong><sup><em>n</em></sup></span>, since <span style="font-family:trebuchet ms;">1/<em>n</em>!</span> is a scalar.<br /><br />Finally, we know that the conjugate of <span style="font-family:trebuchet ms;"><strong>A</strong><sup><em>n</em></sup></span> is just the <span style="font-family:trebuchet ms;"><em>n</em></span><sup>th</sup> power of the conjugate of <span style="font-family:trebuchet ms;"><strong>A</strong></span><br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms"><span style="BORDER-TOP: black 2px solid"><strong>A</strong><sup><em>n</em></sup></span> = <span style="BORDER-TOP: black 2px solid"><strong>A</strong></span><sup><em>n</em></sup></div><br />The end result of all of this is<br /><br /><div style="TEXT-ALIGN: center" face="trebuchet ms"><span style="BORDER-TOP: black 2px solid">e<sup><strong>A</strong></sup></span> = e<sup><span style="BORDER-TOP: black 2px solid"><strong>A</strong></span></sup></div><br />Multiplying <span style="font-family:trebuchet ms;">e<sup><strong>A</strong></sup></span> by its conjugate produces a scalar, which we have identified with the determinant of <span style="font-family:trebuchet ms;">e<sup><strong>A</strong></sup></span>.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">e<sup><strong>A</strong></sup> <span style="BORDER-TOP: black 2px solid">e<sup><strong>A</strong></sup></span> = e<sup><strong>A</strong></sup>e<sup><span style="BORDER-TOP: black 2px solid"><strong>A</strong></span></sup> = 
det(e<sup><strong>A</strong></sup>)</div><br />Now, if we multiply the exponentials of two real numbers together, this is the same as exponentiating the sum of the two numbers. For instance<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">e<sup><em>x</em></sup>e<sup><em>y</em></sup> = e<sup><em>x</em>+<em>y</em></sup></div><br />This rule does not apply generally since the algebraic elements do not commute with respect to multiplication. However, the elements <span style="font-family:trebuchet ms;"><strong>A</strong></span> and <span style="BORDER-TOP: black 2px solid;font-family:trebuchet ms;" ><strong>A</strong></span> DO commute, and so the exponential rule CAN be applied in this instance.<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">e<sup><strong>A</strong></sup> <span style="BORDER-TOP: black 2px solid">e<sup><strong>A</strong></sup></span> = e<sup><strong>A</strong>+<span style="BORDER-TOP: black 2px solid"><strong>A</strong></span></sup></div><br />The factor in the exponent is a multiple of the scalar part of <span style="font-family:trebuchet ms;"><strong>A</strong></span> as we have previously defined.<br /><br />Thus we can relate the determinant of <span style="font-family:trebuchet ms;"><strong>A</strong></span> with the scalar portion of <span style="font-family:trebuchet ms;"><strong>A</strong></span> through the exponential function<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">det(e<sup><strong>A</strong></sup>) = e<sup><strong>A</strong></sup> <span style="BORDER-TOP: black 2px solid">e<sup><strong>A</strong></sup></span> = e<sup><strong>A</strong> + <span style="BORDER-TOP: black 2px solid"><strong>A</strong></span></sup> = e<sup>2<<strong>A</strong>><sub><em>S</em></sub></sup></div><br />This equation can be compared to a parallel equation from the determinant theory of square matrices<br /><br /><div style="TEXT-ALIGN: center; FONT-FAMILY: trebuchet ms">det(e<sup><strong>A</strong></sup>) = 
e<sup>Tr(<strong>A</strong>)</sup></div><br />Thus we see that if we were to use square matrices to represent the elements of our algebra, then the "scalar part" is merely a multiple of the trace of the matrix.<br /><br />In the theory of Lie Algebra, a matrix that has a trace of zero is used to represent the generator of a specific transformation. In our algebra, such a generator would be an element whose scalar portion is zero.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-87142933903621079252010-02-21T17:07:00.001-08:002010-02-22T08:47:14.169-08:00Scalars and VectorsThe algebraic conjugate operation affords us the ability to observe a distinction between different kinds of quantities. First, we know from the definition of the conjugate<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-weight: bold;font-family:trebuchet ms;" >A<span style="border-top: 2px solid black;">A</span></span><span style="font-family:trebuchet ms;"> = </span><span style="font-style: italic;font-family:trebuchet ms;" >a</span></div><br />that when the algebraic conjugate acts on a scalar, it does nothing. 
For instance<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="border-top: 2px solid black; font-style: italic;">a</span> = <span style="border-top: 2px solid black; font-weight: bold; padding-top: 5px;">A<span style="border-top: 2px solid black;">A</span></span> = <span style="font-weight: bold;"><span style="border-top: 2px solid black; padding-top: 5px;"><span style="border-top: 2px solid black;">A</span></span> <span style="border-top: 2px solid black;">A</span></span> = <span style="font-weight: bold;">A<span style="border-top: 2px solid black;">A</span></span> = <span style="font-style: italic;">a</span></div><br />We are going to make this another defining property of a scalar - it is invariant to the algebraic conjugate.<br /><br />Now there are several ways to define a scalar in math and physics, and so far I have made reference to none of these. Rather I have so far stated that a scalar is<br /><ul><li>able to commute with respect to multiplication by any element of the algebra</li><li>invariant with respect to application of the algebraic conjugate</li></ul>We will see that these defining properties of a scalar lead to the other definitions that we know so well. We will also see that these 2 defining properties end up being mutually satisfied - at least for the end result of our physical algebra.<br /><br /><span style="font-weight: bold;font-family:trebuchet ms;" >A<span style="border-top: 2px solid black;">A</span></span> is a characteristic scalar associated with <span style="font-weight: bold;font-family:trebuchet ms;" >A</span>. We can find another characteristic scalar. 
Rewrite <span style="font-weight: bold;font-family:trebuchet ms;" >A</span> into terms that have symmetric and anti-symmetric combinations with <span style="font-weight: bold;font-family:trebuchet ms;border-top:2px solid black;" >A</span>.<br /><br /><div style="text-align: center;font-family:trebuchet ms;"><span style="font-weight: bold;">A</span> = 1/2(<span style="font-weight: bold;">A</span> + <span style="border-top: 2px solid black; font-weight: bold;">A</span>) + 1/2(<span style="font-weight: bold;">A</span> - <span style="border-top: 2px solid black; font-weight: bold;">A</span>)</div><br />If you expand the expression on the right hand side, you will see that it reduces to <span style="font-weight: bold;font-family:trebuchet ms;" >A</span>. The expression on the right hand side has 2 terms. The first term is invariant to the algebraic conjugate, and thus we see that the first term is a scalar term. The application of the word "scalar" in this context becomes very similar to the original historical usage introduced by Hamilton.<br /><br />If we remove the scalar part, what are we left with? The algebra represents multi-component objects, and the scalar part only represents single component objects. Thus we can assume that there are several components associated with the remaining portion, if we strip the scalar portion away. <br /><br />If we apply the algebraic conjugate to the second, or anti-symmetric term, it changes sign. This is to be the defining property of a Vector. Again, the term vector has a host of meanings and definitions. The usage as applied here is different from the standard relativistic version of the word, but it is similar to the original usage coined by Hamilton. 
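A numerical aside (not part of the original post): this decomposition is easy to check in a concrete matrix representation. The sketch assumes 2×2 matrices with the algebraic conjugate realized as the adjugate, tr(<strong>A</strong>)<em>I</em> - <strong>A</strong>.

```python
import numpy as np

def bar(A):
    # Assumed 2x2 matrix representation: the conjugate is the adjugate.
    return np.trace(A) * np.eye(2, dtype=complex) - A

# An arbitrary element of the (matrix-represented) algebra
A = np.array([[1 + 2j, 3], [4j, 5]], dtype=complex)

scalar_part = 0.5 * (A + bar(A))
vector_part = 0.5 * (A - bar(A))

# The two parts recombine to A
assert np.allclose(scalar_part + vector_part, A)
# The scalar part is invariant under the conjugate...
assert np.allclose(bar(scalar_part), scalar_part)
# ...while the vector part flips sign
assert np.allclose(bar(vector_part), -vector_part)
# And the scalar part is proportional to the identity: a single component
assert np.allclose(scalar_part, scalar_part[0, 0] * np.eye(2))
```

In this representation the scalar part is literally half the trace times the identity, foreshadowing the trace connection from the previous post.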
We are going to be talking about vectors much more in future posts.<br /><br />Since we know that we can always decompose any element into a scalar and a vector part, we will use a special notation to represent this<br /><br /><div style="text-align: center;"><span style="font-family:trebuchet ms;"><<span style="font-weight: bold;">A</span>><sub><span style="font-style: italic;">S</span></sub> = 1/2(<span style="font-weight: bold;">A</span> + <span style="border-top: 2px solid black; font-weight: bold;">A</span>)</span></div><br /><div style="text-align: center;"><span style="font-family:trebuchet ms;"><<span style="font-weight: bold;">A</span>><sub><span style="font-style: italic;">V</span></sub> = 1/2(<span style="font-weight: bold;">A</span> - <span style="border-top: 2px solid black; font-weight: bold;">A</span>)</span></div><br />And we will use the algebraic conjugate to detect a pure scalar or a pure vector<br /><ul><li>If the conjugate does nothing to the algebraic element, it must be a pure scalar</li><li>If the conjugate flips the sign of the element, it must be a pure vector</li></ul><br />The very naive idea of a scalar and a vector is that a scalar is a single component quantity, and a vector is a multi-component quantity. As the algebra develops we will see that this is a pretty reasonable first approach to these quantities.<br /><br />Most elements in the algebra have a linear combination of both a scalar and a vector part. The word we use to talk about a linear combination of a scalar and a vector is "para-vector". 
The meaning behind a para-vector is closer to the meaning of the relativistic vector.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-23483415964083343312010-02-19T21:54:00.000-08:002010-02-21T17:06:49.712-08:00Algebraic ConjugateIn order to more fully define the operation of division, we need to introduce a special operation called the <span style="font-style: italic;">algebraic conjugate</span>.<br /><br />The algebraic conjugate acts on elements of the algebra. An element that has been acted on by the algebraic conjugate operation is denoted by the symbol with a bar over the top, such as <span style="border-top: 2px solid black; font-weight: bold;font-family:arial;" >A</span>. The algebraic conjugate is defined such that<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> = </span><span style="font-style: italic;font-family:trebuchet ms;" >a</span><br /></div><br />We don't really care yet what the value of <span style="font-style: italic;">a</span> is, since we haven't really defined the conjugate, or the multiplication. What does matter is that <span style="font-style: italic;">a</span> is a scalar. 
Using this definition of the algebraic conjugate, we can relate it to the "division" or inverse operation.<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><sup style="font-family: trebuchet ms;">-1</sup><span style="font-family:trebuchet ms;"> = 1 / </span><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> = (1 /</span><span style="font-weight: bold;font-family:trebuchet ms;" > A</span><span style="font-family:trebuchet ms;">)</span><span style="font-family:trebuchet ms;"> ( </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A </span><span style="font-family:trebuchet ms;">/ </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A </span><span style="font-family:trebuchet ms;">)</span><span style="font-family:trebuchet ms;"> = </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> / </span><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> = (1 / </span><span style="font-style: italic;font-family:trebuchet ms;" >a</span><span style="font-family:trebuchet ms;">) </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span><br /></div><br />Now, <span style="font-style: italic;font-family:trebuchet ms;" >a</span> is a real number, whose inverse is defined. 
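A numerical aside (my own, not in the original post): the inverse-via-conjugate formula can be checked concretely. The sketch below assumes a 2×2 matrix representation in which the conjugate is the adjugate, tr(<strong>A</strong>)<em>I</em> - <strong>A</strong>, so that <em>a</em> comes out as the determinant.

```python
import numpy as np

def bar(A):
    # Assumed 2x2 matrix representation: the conjugate is the adjugate,
    # so A @ bar(A) = det(A) * I, a scalar.
    return np.trace(A) * np.eye(2, dtype=complex) - A

A = np.array([[2, 1j], [1, 3]], dtype=complex)

# A times its conjugate is the characteristic scalar a (times identity)
a = (A @ bar(A))[0, 0]
assert np.allclose(A @ bar(A), a * np.eye(2))
assert np.isclose(a, np.linalg.det(A))

# The inverse is (1/a) times the conjugate
A_inv = bar(A) / a
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv, np.linalg.inv(A))
```

The last two assertions confirm that dividing the conjugate by the scalar really does produce the inverse of <strong>A</strong>.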
This relationship between the inverse, the conjugate, and <span style="font-style: italic;">a</span> is present in an identity from matrix algebra:<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><sup style="font-family: trebuchet ms;">-1</sup><span style="font-family:trebuchet ms;"> = (1 / Det(</span><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;">) ) Adjoint(</span><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;">)</span><br /></div><br />Thus, if we were using matrices to represent our physical quantities, <span style="font-style: italic;">a</span> would be the determinant of the matrix and <span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span> would be the matrix Adjoint.<br /><br />If we make the requirement that the conjugate operation should be linear, then we have the following property<br /><br /><div style="text-align: center;"><span style="border-top: 2px solid black;font-family:trebuchet ms;" ><span style="font-weight: bold;">A</span> + <span style="font-weight: bold;">B</span></span><span style="font-family:trebuchet ms;"> = </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> + </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >B</span><br /></div><br />We can use the relationship between the conjugate and the inverse to show that<br /><br /><div style="text-align: center;"><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >AB</span><span style="font-family:trebuchet ms;"> = </span><span style="border-top: 2px solid black; font-weight: bold;font-family:trebuchet ms;" >B</span><span style="font-family:trebuchet ms;"> </span><span style="border-top: 2px solid black; 
font-weight: bold;font-family:trebuchet ms;" >A</span><br /></div><br />Finally, since we know that taking an inverse twice should do nothing, we can again employ the relationship between the conjugate and the inverse to show that taking the conjugate twice also does nothing.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0tag:blogger.com,1999:blog-6268219961175588742.post-71318050787002780042010-02-19T20:54:00.000-08:002010-03-05T09:21:37.373-08:00Algebraic RepresentationIn physics, we need a way to represent physically meaningful concepts in a quantifiable way. When discussing things like time, or mass, we can choose real numbers to represent these concepts quantitatively. The real numbers form an algebra, which basically means that if you add them or multiply them, the result is also a real number.<br /><br />The principal form of representation for most physical objects is a vector, which does not belong to an algebra. This means that there is no intrinsic product defined so that two vectors can be multiplied by each other to form a vector. This lack of a defined multiplication in our basic representation of physical quantities obscures how these quantities interact with each other.<br /><br />If we want to solve the universe we need to figure out how to represent our physical quantities as algebras rather than vector spaces, so we don't miss out on the added information provided by this multiplication.<br /><br />We will employ a linear algebra for the representation. 
What this means is that although we can decompose a quantity like position into its principal components, the full representation can be achieved by a square matrix, AND this representation is preferred over the component form.<br /><br />An element of our physical algebra will be denoted by a bold symbol, such as <span style="font-weight: bold;font-family:trebuchet ms;" >A</span>.<br /><br />The product of two such elements can be written as<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >AB</span><span style="font-family:trebuchet ms;"> = </span><span style="font-weight: bold;font-family:trebuchet ms;" >C</span><br /></div><br />The omission of the multiplication symbol distinguishes an algebraic multiplication from other types of multiplication such as dot products, or cross products. At this point, looking at an equation such as this is pretty pointless, since we don't know exactly <span style="font-style: italic;">how</span> to multiply the elements together, even if we know the physical meaning of the symbols. We will discover how to multiply the elements of our physical algebra by first examining what types of properties this multiplication is expected to have.<br /><br />In general, the members of an algebra are not expected to commute with respect to multiplication. What this means is<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >AB</span><span style="font-family:trebuchet ms;"> ≠ </span><span style="font-weight: bold;font-family:trebuchet ms;" >BA</span><br /></div><br />generally. In other words, the order of multiplication is usually important.<br /><br />There are usually elements in the algebra that commute with every other element in the algebra. 
We call these special elements <span style="font-style: italic;">scalars</span> and denote them by a non-bold lowercase italicized symbol, such as <span style="font-style: italic;font-family:trebuchet ms;" >a</span>.<br /><br />In other words, we know that for scalars we can always re-arrange the order of multiplication.<br /><br /><div style="text-align: center;"><span style="font-style: italic;font-family:trebuchet ms;" >a</span><span style="font-weight: bold;font-family:trebuchet ms;" >B</span><span style="font-family:trebuchet ms;"> = </span><span style="font-weight: bold;font-family:trebuchet ms;" >B</span><span style="font-style: italic;font-family:trebuchet ms;" >a</span><br /></div><br />always.<br /><br />Common scalar values in physics represent quantities such as time, or mass. We are used to representing these scalar quantities with real numbers, and we will continue to do so. This means that we require the real numbers to be present in our algebraic representation of the universe.<br /><br />Since the real numbers are included in our algebra, we know that we have access to two special real numbers, <span style="font-family:trebuchet ms;">0</span> and <span style="font-family:trebuchet ms;">1</span>. 
These two numbers help us define the operations of subtraction and division.<br /><br />For instance, if we know that<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><span style="font-family:trebuchet ms;"> + </span><span style="font-weight: bold;font-family:trebuchet ms;" >B</span><span style="font-family:trebuchet ms;"> = 0</span><br /></div><br />We can rewrite the sum<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >C</span><span style="font-family:trebuchet ms;"> + </span><span style="font-weight: bold;font-family:trebuchet ms;" >B</span><span style="font-family:trebuchet ms;"> = </span><span style="font-weight: bold;font-family:trebuchet ms;" >C</span><span style="font-family:trebuchet ms;"> - </span><span style="font-weight: bold;font-family:trebuchet ms;" >A</span><br /></div><br />Likewise, if we know that<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >AB</span><span style="font-family:trebuchet ms;"> = 1</span><br /></div><br />We can rewrite the product<br /><br /><div style="text-align: center;"><span style="font-weight: bold;font-family:trebuchet ms;" >CB</span><span style="font-family:trebuchet ms;"> = </span><span style="font-weight: bold;font-family:trebuchet ms;" >CA</span><sup style="font-family: trebuchet ms;">-1</sup><br /></div><br />We cannot strictly call this a "division", since there would be two possible definitions - given that we cannot in general rearrange the order of multiplication.Eric Brownhttp://www.blogger.com/profile/00315121935974620846noreply@blogger.com0
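A short numerical footnote to the post above (my own aside): with square matrices standing in for algebraic elements, which is an assumed representation here, the order-dependence of multiplication, and hence the ambiguity of "division", is easy to exhibit.

```python
import numpy as np

# Two assumed elements, represented as 2x2 matrices
A = np.array([[1, 2], [3, 4]], dtype=float)
B = np.array([[0, 1], [1, 1]], dtype=float)

# The order of multiplication matters
assert not np.allclose(A @ B, B @ A)

# "Dividing B by A" is ambiguous: multiplying by the inverse on the
# left generally differs from multiplying by it on the right.
A_inv = np.linalg.inv(A)
assert not np.allclose(A_inv @ B, B @ A_inv)
```

This is why the text speaks of multiplying by an inverse on a specific side rather than of division outright.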