The content in today's blog is taken straight from Harold M. Edwards' Galois Theory.

Theorem 1: Fundamental Theorem on Symmetric Polynomials

Every symmetric polynomial in r_1, ..., r_n can be expressed as a polynomial in the elementary symmetric polynomials σ_1, ..., σ_n. In addition, a symmetric polynomial with integer coefficients can be expressed as a polynomial in the elementary symmetric polynomials with integer coefficients.
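Before the proof, here is a quick numeric sanity check of the theorem (my own sketch, not from Edwards). For n = 2, the symmetric polynomial r_1^2 + r_2^2 equals σ_1^2 - 2σ_2, a polynomial in the elementary symmetric polynomials with integer coefficients:

```python
from itertools import combinations
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial: sum of products of all k-subsets
    return sum(prod(c) for c in combinations(rs, k))

r1, r2 = 3, 7
s1 = elem_sym((r1, r2), 1)   # r1 + r2
s2 = elem_sym((r1, r2), 2)   # r1 * r2
# the symmetric polynomial r1^2 + r2^2 rewritten in the sigmas:
assert r1**2 + r2**2 == s1**2 - 2*s2
print("ok")
```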

Proof:

(1) For n=1, the polynomial is x - r_1, so that a_1 = -r_1 and σ_1 = (-1)^1·a_1 = (-1)(-r_1) = r_1.

[See Definition 1, here for definition of symmetric polynomial; see Definition 2, here for definition of the kth elementary symmetric polynomial]

(2) If a_1 is an integer, then r_1 = -a_1 is an integer, and hence σ_1 = r_1 is an integer.

(3) The polynomial in elementary symmetric functions is then x - σ_1, which shows that the theorem is true for n=1.

(4) Let's assume that the theorem has been proved for all symmetric polynomials in up to n-1 variables.

(5) Let G(r_1, r_2, ..., r_n) be a symmetric polynomial in the n variables r_1, r_2, ..., r_n. Recall that the elementary symmetric polynomials σ_1, ..., σ_n arise (up to sign) as the coefficients in the expansion:

(x - r_1)·(x - r_2)·...·(x - r_n) = x^n - σ_1·x^(n-1) + σ_2·x^(n-2) - ... + (-1)^n·σ_n

(6) Collect the terms of G by powers of r_n; that is, let G_0, G_1, ..., G_v be polynomials such that:

G(r_1, r_2, ..., r_n) = G_0 + G_1·r_n + G_2·(r_n)^2 + ... + G_v·(r_n)^v

where:

v is the highest power of r_n that occurs in G, and each G_i involves only r_1, r_2, ..., r_{n-1}.

We can see that G_0 is the part of G that contains no r_n, G_1 collects the coefficients of (r_n)^1, G_2 collects the coefficients of (r_n)^2, and so on up to G_v, the coefficient of the highest power v of r_n.
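As a concrete illustration of this decomposition (my example, not Edwards'): for n = 3, take G = r_1 + r_2 + r_3 + r_1·r_2·r_3. Collecting by powers of r_3 gives G_0 = r_1 + r_2 and G_1 = 1 + r_1·r_2, which we can check at a few sample points:

```python
def G(r1, r2, r3):
    # a symmetric polynomial in three variables
    return r1 + r2 + r3 + r1*r2*r3

def G0(r1, r2):
    # the terms of G containing no r3
    return r1 + r2

def G1(r1, r2):
    # the coefficient of (r3)^1 in G
    return 1 + r1*r2

for r1, r2, r3 in [(1, 2, 3), (5, -4, 2), (0, 7, 11)]:
    assert G(r1, r2, r3) == G0(r1, r2) + G1(r1, r2) * r3
print("decomposition holds")
```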

(7) Now, since G(r_1, r_2, ..., r_n) is a symmetric polynomial, it is unchanged if any two variables r_i and r_j are interchanged. [See Definition 1 here for definition of symmetric polynomial]

(8) In particular, G(r_1, r_2, ..., r_n) is unchanged by any interchange of r_i and r_j with i, j ≤ n-1. Such an interchange leaves r_n, and hence each power (r_n)^i, fixed, so each coefficient G_i is also unchanged.

(9) This means that each G_i is itself a symmetric polynomial in r_1, r_2, ..., r_{n-1}.

(10) By the inductive hypothesis in step #4, each G_i is expressible as a polynomial in the elementary symmetric polynomials in r_1, r_2, ..., r_{n-1}.

(11) Let τ_1, τ_2, ..., τ_{n-1} denote these elementary symmetric polynomials in (n-1) variables such that [See Definition 2 here for definition of elementary symmetric polynomials]:

τ_1 = r_1 + r_2 + ... + r_{n-1}

τ_2 = r_1·r_2 + r_1·r_3 + ... + r_{n-2}·r_{n-1}

...

τ_{n-1} = r_1·r_2·...·r_{n-1}

[NOTE: Each τ_i is the sum of the products over all i-combinations of r_1, ..., r_{n-1}. So, τ_1 is the sum over all 1-combinations, τ_2 is the sum over all 2-combinations, and τ_{n-1} is the sum over all (n-1)-combinations (of which there is only one).]
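Each τ_i can be computed directly as a sum over i-combinations; a minimal sketch, assuming n - 1 = 3 variables (my example values):

```python
from itertools import combinations
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial: sum over all k-combinations
    return sum(prod(c) for c in combinations(rs, k))

r1, r2, r3 = 2, 3, 5   # the n-1 = 3 variables r_1, ..., r_{n-1}
assert elem_sym((r1, r2, r3), 1) == r1 + r2 + r3              # tau_1
assert elem_sym((r1, r2, r3), 2) == r1*r2 + r1*r3 + r2*r3     # tau_2
assert elem_sym((r1, r2, r3), 3) == r1*r2*r3                  # tau_3
print("tau formulas agree")
```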

(12) So, from the inductive hypothesis, if we let G_i(r_1, ..., r_{n-1}) represent the symmetric polynomial G_i, we can see that there exists another polynomial g_i such that:

G_i(r_1, ..., r_{n-1}) = g_i(τ_1, τ_2, ..., τ_{n-1})

(13) Further, from the inductive hypothesis, we know that if G_i has integer coefficients, then so does g_i.

(14) Let σ_1, σ_2, ..., σ_n be the elementary symmetric polynomials in the n variables r_1, ..., r_n.

(15) We can now restate each σ_i in terms of the τ_i since:

σ_1 = r_1 + r_2 + ... + r_n = τ_1 + r_n

σ_2 = r_1·r_2 + r_1·r_3 + ... + r_{n-1}·r_n = τ_2 + r_n·τ_1

[Note: r_n·τ_1 is the sum of all 2-combinations that include r_n]

σ_3 = r_1·r_2·r_3 + r_1·r_2·r_4 + ... + r_{n-2}·r_{n-1}·r_n = τ_3 + r_n·τ_2

[Note: r_n·τ_2 is the sum of all 3-combinations that include r_n]

σ_4 = τ_4 + r_n·τ_3

...

σ_n = r_1·r_2·...·r_n = 0 + r_n·τ_{n-1}

[Note: the main idea here is that σ_i is the sum over all i-combinations of the n variables, while τ_i is the sum over all i-combinations of the first n-1 variables. Each equation simply splits the i-combinations into those that exclude r_n (giving τ_i) and those that include it (giving r_n·τ_{i-1}).]
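The split in step #15 is easy to check numerically; a minimal sketch with n = 4 (my example values, not from the text):

```python
from itertools import combinations
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial of the values in rs
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations(rs, k))

rs = (2, 3, 5, 7)            # n = 4 variables; r_n = 7
front, rn = rs[:-1], rs[-1]
for i in range(1, 5):
    sigma_i = elem_sym(rs, i)
    tau_i   = elem_sym(front, i)      # 0 when i = n: only n-1 variables
    tau_im1 = elem_sym(front, i - 1)
    assert sigma_i == tau_i + rn * tau_im1
print("sigma/tau split verified")
```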

(16) We can now restate the equations in step #15 in terms of τ_i (using the basic algebraic operations of addition/subtraction on both sides of each equation):

τ_1 = σ_1 - r_n

τ_2 = σ_2 - r_n·τ_1 = σ_2 - r_n·(σ_1 - r_n) = σ_2 - r_n·σ_1 + (r_n)^2

τ_3 = σ_3 - r_n·τ_2 = σ_3 - r_n·σ_2 + (r_n)^2·σ_1 - (r_n)^3

...

τ_{n-1} = σ_{n-1} - r_n·τ_{n-2} = σ_{n-1} - r_n·σ_{n-2} + ... + (-1)^{n-1}·(r_n)^{n-1}

(17) Finally, we can use the last equation in step #15 to get:

0 = σ_n - r_n·τ_{n-1} = σ_n - r_n·σ_{n-1} + ... + (-1)^n·(r_n)^n
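Step #17 just says that r_n is a root of x^n - σ_1·x^{n-1} + σ_2·x^{n-2} - ... + (-1)^n·σ_n. A quick numeric check (my own sketch, with n = 4):

```python
from itertools import combinations
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial (1 when k = 0)
    return sum(prod(c) for c in combinations(rs, k)) if k else 1

rs = (2, 3, 5, 7)
n, rn = len(rs), rs[-1]
# 0 = sigma_n - rn*sigma_{n-1} + rn^2*sigma_{n-2} - ... + (-1)^n * rn^n
total = sum((-1)**k * rn**k * elem_sym(rs, n - k) for k in range(n + 1))
assert total == 0
print("r_n is a root, as claimed")
```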

(18) Since step #16 restates all the τ_i in terms of r_n and σ_1, ..., σ_{n-1}, substituting into step #12 turns each G_i, and hence G itself, into a polynomial in σ_1, ..., σ_{n-1} and r_n. That is, there exist polynomials f_0, f_1, ..., f_μ such that:

G(r_1, r_2, ..., r_n) = f_0 + f_1·r_n + f_2·(r_n)^2 + ... + f_μ·(r_n)^μ

where each f_i is a polynomial strictly in terms of σ_1, σ_2, ..., σ_{n-1} and does not include σ_n.

(19) Each f_i has integer coefficients if G does, since:

(a) From step #13, each g_i has integer coefficients if G does.

(b) From step #16, each τ_i is a polynomial with integer coefficients in σ_1, ..., σ_{n-1} and r_n, so substituting the τ_i into the g_i preserves integer coefficients.

(20) From step #17, we get the following:

(r_n)^n = (r_n)^{n-1}·σ_1 - (r_n)^{n-2}·σ_2 + (r_n)^{n-3}·σ_3 - ... + (-1)^{n-1}·σ_n

(21) If μ ≥ n, we can repeatedly apply the identity in step #20 to the expression in step #18, each time replacing (r_n)^n by lower powers of r_n, until the degree in r_n is at most n-1:

G(r_1, r_2, ..., r_n) = f_0(σ) + f_1(σ)·r_n + f_2(σ)·(r_n)^2 + ... + f_{n-1}(σ)·(r_n)^{n-1}

[Note: after these substitutions the f_i may involve σ_n as well; each f_i(σ) is a polynomial in σ_1, ..., σ_n, still with integer coefficients if G has them.]
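The degree reduction in step #21 amounts to rewriting (r_n)^m via step #20 whenever m ≥ n. A toy numeric version of one such rewriting step (my own sketch, with n = 3):

```python
from itertools import combinations
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial (1 when k = 0)
    return sum(prod(c) for c in combinations(rs, k)) if k else 1

rs = (2, 3, 5)               # n = 3; r_n = 5
n, rn = len(rs), rs[-1]
sig = [elem_sym(rs, k) for k in range(n + 1)]

def reduce_power(m):
    """Value of rn**m, rewritten via step #20's identity when m >= n."""
    if m < n:
        return rn**m
    # rn^m = rn^(m-n) * (rn^(n-1)*s1 - rn^(n-2)*s2 + ... + (-1)^(n-1)*sn)
    return rn**(m - n) * sum((-1)**(k - 1) * rn**(n - k) * sig[k]
                             for k in range(1, n + 1))

for m in range(5):
    assert reduce_power(m) == rn**m
print("power reduction verified")
```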

(22) Since interchanging any values r_i and r_j doesn't change the value of G(r_1, r_2, ..., r_n), and doesn't change any of the σ's, it doesn't change the value of f_0(σ) + f_1(σ)·r_n + f_2(σ)·(r_n)^2 + ... + f_{n-1}(σ)·(r_n)^{n-1}.

(23) By interchanging r_n in turn with each of r_1, ..., r_{n-1}, we get from step #21 the following set of n equations:

G(r_1, r_2, ..., r_n) = f_0 + f_1·r_1 + f_2·(r_1)^2 + ... + f_{n-1}·(r_1)^{n-1}

G(r_1, r_2, ..., r_n) = f_0 + f_1·r_2 + f_2·(r_2)^2 + ... + f_{n-1}·(r_2)^{n-1}

...

G(r_1, r_2, ..., r_n) = f_0 + f_1·r_n + f_2·(r_n)^2 + ... + f_{n-1}·(r_n)^{n-1}

(24) We can think of these n equations in the n unknowns f_0, ..., f_{n-1} as a matrix equation whose coefficient matrix is the n x n matrix with entries (r_i)^{j-1}, where i = the row (1 ... n) and j = the column (1 ... n).

(25) The determinant of this matrix, which is a polynomial in r_1, r_2, ..., r_n, is nonzero, since this matrix is the transpose of the Vandermonde matrix [see Definition 1, here] and:

(a) From the properties of determinants, we know that det(V_n) = det(V_n^T) [See Theorem 9, here]

(b) The formula for the determinant of the Vandermonde matrix is (see Theorem 1, here):

det(V_n) = ∏_{1 ≤ i < j ≤ n} (r_j - r_i)

(c) Since the r_i are distinct variables, each factor r_j - r_i is a nonzero polynomial, and so det(V_n) ≠ 0.
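The Vandermonde formula in step #25(b) can be spot-checked numerically; a minimal sketch (my own code, computing the determinant by cofactor expansion rather than any particular library):

```python
from itertools import combinations
from math import prod

def det(m):
    # determinant by cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j] *
               det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

rs = (2, 3, 5, 7)
V = [[r**j for j in range(len(rs))] for r in rs]   # row i = [1, r_i, r_i^2, ...]
vandermonde = prod(rj - ri for ri, rj in combinations(rs, 2))
assert det(V) == vandermonde != 0
print("Vandermonde determinant matches the product formula")
```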

(26) We can think of this set of equations as a matrix multiplication [See here for review of matrix multiplication]: with V = [(r_i)^{j-1}] as above,

V·[f_0(σ), f_1(σ), ..., f_{n-1}(σ)]^T = [G, G, ..., G]^T

(27) But the vector [G, G, ..., G]^T is also the result of multiplying V by the vector [G, 0, 0, ..., 0]^T, since:

(a) The first column of V consists entirely of 1's, because (r_i)^0 = 1.

(b) Therefore V·[G, 0, 0, ..., 0]^T = [G, G, ..., G]^T.

(c) So we can conclude that the following two products are equal:

V·[f_0, f_1, ..., f_{n-1}]^T = V·[G, 0, 0, ..., 0]^T

(28) Since the determinant in step #25 above is nonzero, the matrix V is invertible (see Theorem 4, here), and we can multiply both sides by its inverse to get:

[f_0, f_1, ..., f_{n-1}]^T = [G, 0, 0, ..., 0]^T

(29) From this, we can conclude that:

f_0 = G and f_1 = 0, ..., f_{n-1} = 0. [See Definition 2, here]

(30) Since f_0 = G is a polynomial in the elementary symmetric polynomials σ_1, ..., σ_n (see steps #18 and #21 above), and since f_0 has integer coefficients if G does (see step #19 above), the conclusion follows from mathematical induction (see here for review of mathematical induction).

QED
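To close the loop, here is a worked instance of the full theorem for n = 3 (my example): the symmetric polynomial r_1^2 + r_2^2 + r_3^2 equals σ_1^2 - 2σ_2, with integer coefficients as the theorem promises, at every sample point:

```python
from itertools import combinations, product
from math import prod

def elem_sym(rs, k):
    # k-th elementary symmetric polynomial of the values in rs
    return sum(prod(c) for c in combinations(rs, k))

# check r1^2 + r2^2 + r3^2 == sigma_1^2 - 2*sigma_2 on a grid of points
for rs in product(range(-3, 4), repeat=3):
    s1, s2 = elem_sym(rs, 1), elem_sym(rs, 2)
    assert sum(r**2 for r in rs) == s1**2 - 2*s2
print("power sum expressed in the sigmas")
```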

In my next blog, I will complete the proof by reviewing the formula for the determinant of the Vandermonde Matrix. In a future blog, I will show how this theorem can be used to derive Newton's identities.

References

- Harold M. Edwards, Galois Theory, Springer, 1984.
