
The exterior algebra of a vector space.

If $ V $ is a vector space we define a $ k$-linear map to be a map

$\displaystyle \omega \colon V \times \dots \times V \to \mathbb{R},
$

where there are $ k$ copies of $ V $, which is linear in each factor. That is

$\displaystyle \omega(v_1, \dots, v_{i-1}, \alpha v + \beta w, v_{i+1}, \dots, v_k)
= \alpha\, \omega(v_1, \dots, v_{i-1}, v, v_{i+1}, \dots, v_k)
+ \beta\, \omega(v_1, \dots, v_{i-1}, w, v_{i+1}, \dots, v_k).
$
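For example, the usual dot product on $ \mathbb{R}^n$,

$\displaystyle \omega(v, w) = \sum_{i=1}^n v^i w^i,
$

is a $ 2$-linear map, although it is symmetric rather than antisymmetric in its two arguments.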

We define a $ k$-linear map $ \omega$ to be totally antisymmetric if

$\displaystyle \omega(v_1, \dots,v_i, v_{i+1}, \dots, v_k) =
- \omega(v_1, \dots,v_{i+1}, v_i, \dots, v_k)
$

for all vectors $ v_1, \dots, v_k$ and all $ i$. Note that it follows that

$\displaystyle \omega(v_1, \dots, v, v, \dots, v_k) = 0
$

and if $ \pi \in S_k$ is a permutation of $ k$ letters then

$\displaystyle \omega(v_1, v_2, \dots, v_k) = \mathrm{sgn}(\pi)\,
\omega(v_{\pi(1)}, v_{\pi(2)}, \dots, v_{\pi(k)})
$

where $ \mathrm{sgn}(\pi)$ is the sign of the permutation $ \pi$. We denote the vector space of all $ k$-linear, totally antisymmetric maps by $ \Lambda^k(V^*)$ and call its elements $ k$-forms. If $ k=1$ then $ \Lambda^1(V^*)$ is just $ V^*$, the space of all linear functions on $ V $, and if $ k=0$ we make the convention that $ \Lambda^0(V^*)=
\mathbb{R}$. We need to collect some results on the linear algebra of these spaces.
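For example, when $ k=2$ the first of these consequences is immediate: antisymmetry gives $ \omega(v, v) = -\omega(v, v)$ and hence $ \omega(v, v) = 0$. The standard example of a $ k$-linear, totally antisymmetric map is the determinant of a $ k \times k$ matrix, regarded as a function of its $ k$ rows, which is an element of $ \Lambda^k((\mathbb{R}^k)^*)$.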

Assume that $ V $ has dimension $ n$ and that $ v_1, \dots, v_n$ is a basis of $ V $. Let $ \omega$ be a $ k$-form. If $ w_1, \dots, w_k$ are arbitrary vectors we can expand them in the basis as

$\displaystyle w_i = \sum_{j=1}^n w_{ij} v_j.
$

Then we have

$\displaystyle \omega(w_1, \dots, w_k) = \sum_{j_1, \dots, j_k = 1}^n w_{1j_1}
w_{2j_2}\dots w_{kj_k} \omega(v_{j_1}, \dots, v_{j_k})
$

so it follows that $ \omega$ is completely determined by its values on basis vectors. In particular if $ k > n$ then in each term some index $ j_l$ must repeat, so every $ \omega(v_{j_1}, \dots, v_{j_k})$ vanishes and $ \Lambda^k(V^*) = 0$.
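For example, if $ n = 2$ and $ k = 2$ the sum reduces to

$\displaystyle \omega(w_1, w_2) = (w_{11}w_{22} - w_{12}w_{21})\,\omega(v_1, v_2),
$

so a $ 2$-form on a two-dimensional space is determined by the single number $ \omega(v_1, v_2)$.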

If $ \alpha^1$ and $ \alpha^2$ are two linear maps in $ V^*$ then we define an element $ \alpha^1 \wedge \alpha^2 $, called the wedge product of $ \alpha^1$ and $ \alpha^2$, in $ \Lambda^2(V^*)$ by

$\displaystyle \alpha^1 \wedge \alpha^2 (v_1, v_2) = \alpha^1(v_1)\alpha^2(v_2) -
\alpha^1(v_2)\alpha^2(v_1).
$
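For instance, on $ \mathbb{R}^n$ take $ \alpha^1$ and $ \alpha^2$ to be the coordinate functions $ \alpha^i(x) = x^i$, and write $ w_{ij}$ for the $ j$th coordinate of $ w_i$ as before. Then

$\displaystyle \alpha^1 \wedge \alpha^2 (w_1, w_2) = w_{11}w_{22} - w_{12}w_{21},
$

the $ 2 \times 2$ minor formed from the first two coordinates of $ w_1$ and $ w_2$.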

More generally if $ \omega \in \Lambda^p(V^*)$ and $ \rho \in \Lambda^q(V^*) $ we define $ \omega \wedge \rho \in \Lambda^{p+q}(V^*)$ by

$\displaystyle (\omega \wedge \rho)(w_1, \dots, w_{p+q})
= \frac{1}{p!q!} \sum_{\pi \in S_{p+q}} \mathrm{sgn}(\pi)\,
\omega(w_{\pi(1)}, \dots, w_{\pi(p)})\,
\rho(w_{\pi(p+1)}, \dots, w_{\pi(p+q)}).
$
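A quick check with $ p = q = 1$: the only permutations in $ S_2$ are the identity and the transposition, so

$\displaystyle (\omega \wedge \rho)(w_1, w_2) = \omega(w_1)\rho(w_2) - \omega(w_2)\rho(w_1),
$

which recovers the definition above for two elements of $ V^*$.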

Assume that $ \dim(V) = n$. We leave the following proposition as an exercise.

Proposition 5.1   The direct sum

$\displaystyle \Lambda(V^*) = \bigoplus_{k=0}^n \Lambda^k(V^*)
$

with the wedge product is an associative algebra.

We call $ \Lambda(V^*)$ the exterior algebra of $ V^*$. We call an element $ \omega \in \Lambda^k(V^*)$ an element of degree $ k$. Because of associativity we can repeatedly wedge and disregard brackets. In particular we can define the wedge product of $ m$ elements in $ V^*$ and we leave it as an exercise to show that

$\displaystyle (\alpha^1 \wedge \alpha^2\wedge \dots \wedge \alpha^m)(v_1, \dots, v_m) =
\sum_{\pi\in S_m} \mathrm{sgn}(\pi)\,
\alpha^1 (v_{\pi(1)})\, \alpha^2 (v_{\pi(2)}) \dots \alpha^m(v_{\pi(m)})
$

for any vectors $ v_1, \dots, v_m$ in $ V $.
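For example, taking $ m = n$ and the coordinate functions $ \alpha^i(x) = x^i$ on $ \mathbb{R}^n$, and writing $ w_i = \sum_j w_{ij} v_j$ in the standard basis, this formula gives

$\displaystyle (\alpha^1 \wedge \dots \wedge \alpha^n)(w_1, \dots, w_n)
= \sum_{\pi\in S_n} \mathrm{sgn}(\pi)\, w_{\pi(1)1} w_{\pi(2)2} \dots w_{\pi(n)n}
= \det(w_{ij}).
$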

Notice that

$\displaystyle \alpha^1 \wedge \dots \wedge \alpha^i\wedge \alpha^{i+1} \wedge \dots \wedge \alpha^m
= - \alpha^1 \wedge \dots \wedge
\alpha^{i+1}\wedge \alpha^{i} \wedge \dots \wedge \alpha^m
$

and that

$\displaystyle \alpha^1 \wedge \dots \wedge \alpha \wedge \alpha \wedge
\dots \wedge \alpha^m = 0.
$

Still assuming that $ V $ is $ n$-dimensional, choose a basis $ v_1, \dots, v_n$ of $ V $. Define the dual basis $ \alpha^1, \dots, \alpha^n$ of $ V^*$ by

$\displaystyle \alpha^i(v_j) = \delta^i_j
$

for all $ i$ and $ j$, where $ \delta^i_j$ is $ 1$ if $ i = j$ and $ 0$ otherwise. We want to define a basis of $ \Lambda^k(V^*)$. Define elements of $ \Lambda^k(V^*)$ by choosing $ k$ numbers $ i_1, \dots, i_k$ between $ 1$ and $ n$ and considering

$\displaystyle \alpha^{i_1} \wedge \dots \wedge \alpha^{i_k}.
$

As we are trying to form a basis we may as well keep the $ i_j$ distinct and ordered: $ 1\leq i_1 < \dots < i_k\leq n$. We show first that these elements span $ \Lambda^k(V^*)$. Let $ \omega$ be an element of $ \Lambda^k(V^*)$. Notice that

$\displaystyle (\alpha^{i_1} \wedge \dots \wedge \alpha^{i_k})(v_{j_1},\dots, v_{j_k})
$

equals zero unless there is a permutation $ \pi$ such that $ j_l = i_{\pi(l)}$ for all $ l$, in which case it equals $ \mathrm{sgn}(\pi)$. We have already seen that $ \omega$ is completely determined by its values on basis vectors. For any ordered $ k$-tuple $ 1\leq i_1 < \dots < i_k\leq n$ define

$\displaystyle \omega_{i_1\dots i_k}= \omega(v_{i_1}, \dots, v_{i_k})
$

and consider

$\displaystyle \tilde \omega = \sum_{1\leq i_1 < \dots < i_k \leq n}
\omega_{i_1\dots i_k} \alpha^{i_1}\wedge \dots \wedge \alpha^{i_k}.
$

We show that $ \omega =\tilde\omega$. It suffices to apply both sides to the vectors $ (v_{i_1}, \dots, v_{i_k})$ for any $ 1\leq i_1 < \dots < i_k\leq n$ and show that they are equal, but that is clear from the previous discussion. So $ \Lambda^k(V^*)$ is spanned by the elements $ \alpha^{i_1}\wedge \dots \wedge \alpha^{i_k}$. We have

Proposition 5.2   The $ k$-forms $ \alpha^{i_1}\wedge \dots \wedge \alpha^{i_k}$ where $ 1\leq i_1 < \dots < i_k\leq n$ are a basis for $ \Lambda^k(V^*)$.

Proof. We have already seen that these elements span, so it suffices to show that they are linearly independent. We do this by induction on $ n$. If $ n= 1$ the result is clear, as the only non-trivial case is $ k=1$, where it is straightforward. More generally, assume we have a linear relation amongst some of these elements. There has to be an index $ i$ such that $ \alpha^i$ does not occur in all of the wedge products in the relation; otherwise there is only one wedge product in the relation, and that is not possible. Now wedge the whole relation with $ \alpha^i$. The terms containing $ \alpha^i$ disappear and we obtain a relation between the elements constructed in one dimension lower, so by induction that is not possible. $ \qedsymbol$
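It follows that $ \Lambda^k(V^*)$ has dimension $ \binom{n}{k}$, the number of ordered $ k$-tuples $ 1\leq i_1 < \dots < i_k\leq n$. For example when $ n = 3$ the spaces $ \Lambda^0(V^*)$, $ \Lambda^1(V^*)$, $ \Lambda^2(V^*)$ and $ \Lambda^3(V^*)$ have dimensions $ 1$, $ 3$, $ 3$ and $ 1$, a count we will use in Example 5.1 below.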

It is sometimes useful to sum over all $ k$-tuples $ i_1, \dots, i_k$, not just ordered ones. We can do this -- and keep the uniqueness of the coefficients $ \omega_{i_1\dots i_k}$ -- if we demand that they be antisymmetric. That is

$\displaystyle \omega_{j_1\dots j_ij_{i+1} \dots j_k}
= - \omega_{j_1\dots j_{i+1}j_i \dots j_k}.
$

Then we have

$\displaystyle \omega = \sum_{1\leq i_1 < \dots < i_k \leq n}
\omega_{i_1\dots i_k}\, \alpha^{i_1}\wedge \dots \wedge \alpha^{i_k}
= \frac{1}{k!} \sum_{1\leq i_1, \dots, i_k \leq n}
\omega_{i_1\dots i_k}\, \alpha^{i_1}\wedge \dots \wedge \alpha^{i_k}.
$
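To see where the factor of $ 1/k!$ comes from consider the case $ k = 2$: the terms with $ i_1 = i_2$ vanish, and each unordered pair $ i < j$ appears twice in the unrestricted sum with equal contributions, since

$\displaystyle \omega_{ji}\, \alpha^{j}\wedge \alpha^{i}
= (-\omega_{ij})(-\alpha^{i}\wedge \alpha^{j})
= \omega_{ij}\, \alpha^{i}\wedge \alpha^{j}.
$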

We will need one last piece of linear algebra called contraction. Let $ \omega \in \Lambda^k(V^*)$ and $ v \in V$. Then we define a $ (k-1)$-form $ \iota_{v}\omega$, the contraction of $ \omega$ with $ v$, by

$\displaystyle \iota_v(\omega)(v_1, \dots, v_{k-1}) = \omega(v_1, \dots, v_{k-1}, v)
$

where $ v_1, \dots, v_{k-1}$ are any $ k-1$ elements of $ V $.
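For example, if $ \alpha^1, \alpha^2 \in V^*$ then for any $ w \in V$

$\displaystyle \iota_v(\alpha^1 \wedge \alpha^2)(w) = \alpha^1 \wedge \alpha^2 (w, v)
= \alpha^2(v)\, \alpha^1(w) - \alpha^1(v)\, \alpha^2(w)
$

so that $ \iota_v(\alpha^1 \wedge \alpha^2) = \alpha^2(v)\, \alpha^1 - \alpha^1(v)\, \alpha^2$.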

Example 5.1   Consider the vector space $ \mathbb{R}^3$. Then we know that zero-forms and one-forms are just real numbers and linear maps respectively. Notice that in the case of $ \mathbb{R}^3$ we can identify any linear map $ v$ with the vector $ v = (v^1, v^2, v^3)$ where

$\displaystyle v(x) = \sum_{i=1}^3 v^i x^i.
$

Let $ \alpha^i$ be the basis of linear functions defined by $ \alpha^i(x) = x^i$. We have seen that every two-form $ \omega$ on $ \mathbb{R}^3$ has the form

$\displaystyle \omega = \omega_1 \alpha^2 \wedge \alpha^3 +
\omega_2 \alpha^3 \wedge \alpha^1 + \omega_3 \alpha^1 \wedge \alpha^2.
$

Every three-form $ \mu$ takes the form

$\displaystyle \mu = a \alpha^1 \wedge \alpha^2 \wedge \alpha^3.
$

It follows that in $ \mathbb{R}^3$ we can identify three-forms with real numbers by identifying $ \mu$ with $ a$ and we can identify two-forms with vectors by identifying $ \omega$ with $ (\omega_1, \omega_2, \omega_3)$.

It is easy to check that with these identifications the wedge product of two vectors $ v$ and $ w$ is identified with the vector $ v \times w$. In other words the wedge product corresponds to the cross product.
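For example, writing $ v = \sum_i v^i \alpha^i$ and $ w = \sum_j w^j \alpha^j$ as one-forms and expanding the wedge product gives

$\displaystyle v \wedge w = (v^2w^3 - v^3w^2)\, \alpha^2 \wedge \alpha^3
+ (v^3w^1 - v^1w^3)\, \alpha^3 \wedge \alpha^1
+ (v^1w^2 - v^2w^1)\, \alpha^1 \wedge \alpha^2,
$

whose coefficients are exactly the components of the cross product $ v \times w$.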

