
Householder's method

From Wikipedia, the free encyclopedia

In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each of these methods is characterized by the number d, which is known as the order of the method. The algorithm is iterative and has a rate of convergence of d + 1.

These methods are named after the American mathematician Alston Scott Householder.

Method


Householder's method is a numerical algorithm for solving the equation f(x) = 0. In this case, the function f has to be a function of one real variable. The method consists of a sequence of iterations

$$x_{n+1} = x_n + d\,\frac{\left(1/f\right)^{(d-1)}(x_n)}{\left(1/f\right)^{(d)}(x_n)},$$

beginning with an initial guess x0.[1]
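The iteration can be carried out directly from this formula with the derivatives of 1/f obtained symbolically. The following Python sketch is illustrative and not part of the article; the sample function f(x) = x² − 2, the helper name householder, the starting point and the step count are all assumptions made here for demonstration.

    import sympy as sp

    def householder(f_expr, x, x0, d, steps=5):
        """One-variable Householder iteration of order d (illustrative sketch)."""
        g = 1 / f_expr
        num = sp.lambdify(x, sp.diff(g, x, d - 1))   # (1/f)^(d-1)
        den = sp.lambdify(x, sp.diff(g, x, d))       # (1/f)^(d)
        xn = x0
        for _ in range(steps):
            xn = xn + d * num(xn) / den(xn)          # x_{n+1} = x_n + d*(1/f)^(d-1)/(1/f)^(d)
        return xn

    x = sp.symbols("x")
    print(householder(x**2 - 2, x, 1.0, d=1))  # d = 1: Newton's method, tends to sqrt(2)
    print(householder(x**2 - 2, x, 1.0, d=2))  # d = 2: Halley's method
    print(householder(x**2 - 2, x, 1.0, d=3))  # d = 3: fourth-order convergence

Higher orders reuse the same template; only the order of the derivatives of 1/f changes.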

If f is a d + 1 times continuously differentiable function and a is a zero of f but not of its derivative, then, in a neighborhood of a, the iterates xn satisfy:[citation needed]

$$|x_{n+1} - a| \le K\,|x_n - a|^{d+1}, \quad \text{for some } K > 0.$$

This means that the iterates converge to the zero if the initial guess is sufficiently close, and that the convergence has order d + 1 or better. Furthermore, when close enough to a, it commonly is the case that $x_{n+1} - a = C\,(x_n - a)^{d+1} + O\!\big((x_n - a)^{d+2}\big)$ for some $C \neq 0$. In particular,

  • if d + 1 is even and C > 0 then convergence to a will be from values greater than a;
  • if d + 1 is even and C < 0 then convergence to a will be from values less than a;
  • if d + 1 is odd and C > 0 then convergence to a will be from the side where it starts; and
  • if d + 1 is odd and C < 0 then convergence to a will alternate sides.

Despite their order of convergence, these methods are not widely used because the gain in precision is not commensurate with the rise in effort for large d. The Ostrowski index expresses the error reduction in the number of function evaluations instead of the iteration count.[2]

  • For polynomials, the evaluation of the first d derivatives of f at xn using Horner's method has an effort of d + 1 polynomial evaluations. Since n(d + 1) evaluations over n iterations give an error exponent of $(d + 1)^n$, the exponent for one function evaluation is $\sqrt[d+1]{d+1}$, numerically 1.4142, 1.4422, 1.4142, 1.3797 for d = 1, 2, 3, 4, and falling after that. By this criterion, the d = 2 case (Halley's method) is the optimal value of d.
  • For general functions the derivative evaluation using the Taylor arithmetic of automatic differentiation requires the equivalent of (d + 1)(d + 2)/2 function evaluations. One function evaluation thus reduces the error by an exponent of $\sqrt[(d+1)(d+2)/2]{d+1}$, which is $\sqrt[3]{2} \approx 1.2599$ for Newton's method, $\sqrt[6]{3} \approx 1.2009$ for Halley's method, and falling towards 1, or linear convergence, for the higher order methods. (A short numerical check of both efficiency measures follows this list.)
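A few lines of Python make both efficiency measures concrete. This is an illustrative evaluation of the formulas quoted above, not code from any reference; the loop range is an arbitrary choice.

    for d in range(1, 6):
        order = d + 1                          # error exponent gained per iteration
        horner_cost = d + 1                    # polynomial case: Horner evaluations per iteration
        taylor_cost = (d + 1) * (d + 2) // 2   # general case: Taylor-arithmetic cost per iteration
        print(d,
              round(order ** (1.0 / horner_cost), 4),   # (d+1)^(1/(d+1)):        1.4142, 1.4422, 1.4142, ...
              round(order ** (1.0 / taylor_cost), 4))   # (d+1)^(2/((d+1)(d+2))): 1.2599, 1.2009, ...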

Motivation


First approach


Suppose f is analytic in a neighborhood of a and f(a) = 0. Then f has a Taylor series at a and its constant term is zero. Because this constant term is zero, the function f(x) / (x − a) will have a Taylor series at a and, when f ′(a) ≠ 0, its constant term will not be zero. Because that constant term is not zero, it follows that the reciprocal (x − a) / f(x) has a Taylor series at a, which we will write as $\sum_{k=0}^{\infty} c_k\,(x-a)^k$, and its constant term c0 will not be zero. Using that Taylor series we can write

$$\frac{1}{f(x)} = \frac{c_0}{x-a} + c_1 + c_2\,(x-a) + c_3\,(x-a)^2 + \cdots$$

When we compute its d-th derivative, we note that the terms for k = 1, ..., d conveniently vanish:

$$\frac{d^d}{dx^d}\,\frac{1}{f(x)} = \frac{(-1)^d\, d!\, c_0}{(x-a)^{d+1}} + \frac{d^d}{dx^d} \sum_{k=d+1}^{\infty} c_k\,(x-a)^{k-1} = \frac{(-1)^d\, d!\, c_0}{(x-a)^{d+1}} \left(1 + O\!\big((x-a)^{d+1}\big)\right),$$

using big O notation. We thus get that the ratio

$$d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)} = d\,\frac{(-1)^{d-1}\,(d-1)!\,c_0\,(x-a)^{-d}\,\big(1 + O\!\big((x-a)^{d}\big)\big)}{(-1)^{d}\,d!\,c_0\,(x-a)^{-(d+1)}\,\big(1 + O\!\big((x-a)^{d+1}\big)\big)} = (a-x)\,\big(1 + O\!\big((x-a)^{d}\big)\big).$$

If a is the zero of f that is closest to x, then the second factor goes to 1 as d goes to infinity and $x + d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)}$ goes to a.
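The limiting behavior just described can be observed numerically. The sketch below is illustrative and not from the article; the polynomial, the evaluation point x0 = 2 and the range of d are arbitrary choices made here.

    import sympy as sp

    x = sp.symbols("x")
    f = (x - 1) * (x - 4) * (x + 5)   # zeros at 1, 4 and -5; the zero nearest x0 = 2 is a = 1
    g = 1 / f
    x0 = sp.Rational(2)

    for d in range(1, 9):
        correction = d * sp.diff(g, x, d - 1) / sp.diff(g, x, d)
        print(d, sp.N(x0 + correction.subs(x, x0), 15))   # approaches the nearest zero, a = 1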

Second approach


Suppose x = a is a simple root. Then near x = a, (1/f)(x) is a meromorphic function. Suppose we have the Taylor expansion

$$\frac{1}{f(x)} = \sum_{d=0}^{\infty} a_d\,(x-b)^d$$

around a point b that is closer to a than it is to any other zero of f. By König's theorem, we have

$$a - b = \lim_{d \to \infty} \frac{a_{d-1}}{a_d}.$$

Since $a_d = \frac{(1/f)^{(d)}(b)}{d!}$, the ratio $\frac{a_{d-1}}{a_d}$ equals $d\,\frac{(1/f)^{(d-1)}(b)}{(1/f)^{(d)}(b)}$, which is exactly the correction term in Householder's iteration. These suggest that Householder's iteration might be a good convergent iteration. The actual proof of the convergence is also based on these ideas.

The methods of lower order


Householder's method of order 1 is just Newton's method, since

$$x_{n+1} = x_n + 1 \cdot \frac{\left(1/f\right)(x_n)}{\left(1/f\right)^{(1)}(x_n)} = x_n + \frac{1}{f(x_n)} \cdot \left(\frac{-f'(x_n)}{f(x_n)^2}\right)^{-1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

For Householder's method of order 2 one gets Halley's method, since the identities

$$\frac{d}{dx}\left(\frac{1}{f(x)}\right) = -\frac{f'(x)}{f(x)^2}$$

and

$$\frac{d^2}{dx^2}\left(\frac{1}{f(x)}\right) = -\frac{f''(x)}{f(x)^2} + 2\,\frac{f'(x)^2}{f(x)^3}$$

result in

$$x_{n+1} = x_n + 2\,\frac{\left(\frac{1}{f}\right)'(x_n)}{\left(\frac{1}{f}\right)''(x_n)} = x_n - \frac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)} = x_n + \frac{h_n}{1 + \frac{1}{2}\,\frac{f''(x_n)}{f'(x_n)}\,h_n}.$$

In the last line, $h_n = -\frac{f(x_n)}{f'(x_n)}$ is the update of the Newton iteration at the point $x_n$. This line was added to demonstrate where the difference to the simple Newton's method lies.

The third order method is obtained from the identity for the third order derivative of 1/f,

$$\frac{d^3}{dx^3}\left(\frac{1}{f(x)}\right) = -\frac{f'''(x)}{f(x)^2} + 6\,\frac{f'(x)\,f''(x)}{f(x)^3} - 6\,\frac{f'(x)^3}{f(x)^4},$$

and has the formula

$$x_{n+1} = x_n + 3\,\frac{\left(\frac{1}{f}\right)''(x_n)}{\left(\frac{1}{f}\right)'''(x_n)} = x_n - \frac{6\,f(x_n)\,f'(x_n)^2 - 3\,f(x_n)^2\,f''(x_n)}{6\,f'(x_n)^3 - 6\,f(x_n)\,f'(x_n)\,f''(x_n) + f(x_n)^2\,f'''(x_n)},$$
and so on.
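The reductions above can be checked symbolically. The following sympy sketch is illustrative and not part of the article; it forms the update d·(1/f)^(d−1)/(1/f)^(d) for d = 2 and d = 3 and verifies that it agrees with the closed-form Halley and third-order updates.

    import sympy as sp

    x = sp.symbols("x")
    f = sp.Function("f")(x)
    f1, f2, f3 = sp.diff(f, x), sp.diff(f, x, 2), sp.diff(f, x, 3)

    def update(d):
        """The Householder correction d * (1/f)^(d-1) / (1/f)^(d)."""
        return d * sp.diff(1 / f, x, d - 1) / sp.diff(1 / f, x, d)

    halley = -2*f*f1 / (2*f1**2 - f*f2)
    third  = -(6*f*f1**2 - 3*f**2*f2) / (6*f1**3 - 6*f*f1*f2 + f**2*f3)

    print(sp.simplify(update(2) - halley))   # expected: 0
    print(sp.simplify(update(3) - third))    # expected: 0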

Example


The first problem solved by Newton with the Newton-Raphson-Simpson method was the polynomial equation $y^3 - 2y - 5 = 0$. He observed that there should be a solution close to 2. Replacing y = x + 2 transforms the equation into

$$x^3 + 6x^2 + 10x - 1 = 0.$$

The Taylor series of the reciprocal function starts with

$$\frac{1}{x^3 + 6x^2 + 10x - 1} = -1 - 10x - 106x^2 - 1121x^3 - 11856x^4 - 125392x^5 - 1326177x^6 - 14025978x^7 - 148342234x^8 - 1568904385x^9 - 16593123232x^{10} + O(x^{11}).$$
The result of applying Householder's methods of various orders at x = 0 is also obtained by dividing neighboring coefficients of the latter power series. For the first orders one gets the following values after just one iteration step. For example, in the case of the 3rd order,

$$x_1 = 0 + 3\,\frac{(1/f)''(0)}{(1/f)'''(0)} = \frac{106}{1121} = 0.09455842997\ldots$$

d x1
1 0.100000000000000000000000000000000
2 0.094339622641509433962264150943396
3 0.094558429973238180196253345227475
4 0.094551282051282051282051282051282
5 0.094551486538216154140615031261962
6 0.094551481438752142436492263099118
7 0.094551481543746895938379484125812
8 0.094551481542336756233561913325371
9 0.094551481542324837086869382419375
10 0.094551481542326678478801765822985

As one can see, one obtains a little more than d correct decimal places for each order d; these one-step values can also be reproduced with the short computation below. The first one hundred digits of the correct solution are 0.0945514815423265914823865405793029638573061056282391803041285290453121899834836671462672817771577578.
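A short, illustrative sympy computation (not from the article) reproduces the table by expanding 1/(x³ + 6x² + 10x − 1) and dividing neighboring coefficients, since one step of the order-d method from x = 0 gives x1 = a_{d−1}/a_d.

    import sympy as sp

    x = sp.symbols("x")
    f = x**3 + 6*x**2 + 10*x - 1
    N = 11                                   # coefficients a_0, ..., a_10 of 1/f
    poly = sp.series(1/f, x, 0, N).removeO()
    a = sp.Poly(poly, x).all_coeffs()[::-1]  # ascending order: a[0] = -1, a[1] = -10, ...

    for d in range(1, N):
        print(d, sp.N(a[d - 1] / a[d], 33))  # one Householder step of order d from x = 0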

Let us now calculate the iterates for the lowest orders, using the following relations:

1st order (Newton's method): $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$;
2nd order (Halley's method): $x_{n+1} = x_n - \frac{2\,f(x_n)\,f'(x_n)}{2\,f'(x_n)^2 - f(x_n)\,f''(x_n)}$;
3rd order: $x_{n+1} = x_n - \frac{6\,f(x_n)\,f'(x_n)^2 - 3\,f(x_n)^2\,f''(x_n)}{6\,f'(x_n)^3 - 6\,f(x_n)\,f'(x_n)\,f''(x_n) + f(x_n)^2\,f'''(x_n)}$.

The resulting iterates are listed in the following table; a high-precision computation reproducing the first columns follows the table.
x     1st order (Newton)                    2nd order (Halley)                    3rd order                             4th order
x1    0.100000000000000000000000000000000   0.094339622641509433962264150943395   0.094558429973238180196253345227475   0.09455128205128
x2    0.094568121104185218165627782724844   0.094551481540164214717107966227500   0.094551481542326591482567319958483
x3    0.094551481698199302883823703544266   0.094551481542326591482386540579303   0.094551481542326591482386540579303
x4    0.094551481542326591496064847153714   0.094551481542326591482386540579303   0.094551481542326591482386540579303
x5    0.094551481542326591482386540579303
x6    0.094551481542326591482386540579303
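The first rows of the Newton, Halley and 3rd-order columns can be reproduced with high-precision arithmetic. This mpmath sketch is illustrative (the function names, digit count and step count are assumptions made here) and simply applies the closed-form updates listed above to the example polynomial.

    from mpmath import mp, mpf

    mp.dps = 40                               # enough digits to show the convergence pattern

    def f(x):  return x**3 + 6*x**2 + 10*x - 1
    def f1(x): return 3*x**2 + 12*x + 10
    def f2(x): return 6*x + 12
    def f3(x): return 6

    def newton(x):
        return x - f(x) / f1(x)

    def halley(x):
        return x - 2*f(x)*f1(x) / (2*f1(x)**2 - f(x)*f2(x))

    def third(x):
        return x - (6*f(x)*f1(x)**2 - 3*f(x)**2*f2(x)) / (
                    6*f1(x)**3 - 6*f(x)*f1(x)*f2(x) + f(x)**2*f3(x))

    for step, label in [(newton, "1st"), (halley, "2nd"), (third, "3rd")]:
        xn = mpf(0)
        for n in range(1, 5):
            xn = step(xn)
            print(label, "x%d =" % n, xn)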


Derivation


An exact derivation of Householder's methods starts from the Padé approximation of order d + 1 of the function, where the approximant with linear numerator is chosen. Once this has been achieved, the update for the next approximation results from computing the unique zero of the numerator.

The Padé approximation has the form

$$f(x+h) = \frac{a_0 + a_1\,h}{1 + b_1\,h + \cdots + b_{d-1}\,h^{d-1}} + O\!\left(h^{d+1}\right).$$

The rational function has a zero at $h = -\frac{a_0}{a_1}$.

Just as the Taylor polynomial of degree d has d + 1 coefficients that depend on the function f, the Padé approximation also has d + 1 coefficients dependent on f and its derivatives. More precisely, in any Padé approximant, the degrees of the numerator and denominator polynomials have to add to the order of the approximant. Therefore, $b_d = 0$ has to hold.

One could determine the Padé approximant starting from the Taylor polynomial of f using Euclid's algorithm. However, starting from the Taylor polynomial of 1/f is shorter and leads directly to the given formula. Since

$$\frac{1}{f(x+h)} = c_0 + c_1\,h + c_2\,h^2 + \cdots + c_{d-1}\,h^{d-1} + c_d\,h^d + O\!\left(h^{d+1}\right), \qquad c_k = \frac{(1/f)^{(k)}(x)}{k!},$$

has to be equal to the inverse of the desired rational function, we get after multiplying with $a_0 + a_1\,h$ in the power $h^d$ the equation

$$0 = b_d = a_0\,c_d + a_1\,c_{d-1}.$$

Now, solving the last equation for the zero $h = -\frac{a_0}{a_1}$ of the numerator results in

$$h = -\frac{a_0}{a_1} = \frac{c_{d-1}}{c_d} = \frac{(1/f)^{(d-1)}(x)\,/\,(d-1)!}{(1/f)^{(d)}(x)\,/\,d!} = d\,\frac{(1/f)^{(d-1)}(x)}{(1/f)^{(d)}(x)}.$$

This implies the iteration formula

$$x_{n+1} = x_n + d\,\frac{(1/f)^{(d-1)}(x_n)}{(1/f)^{(d)}(x_n)}.$$
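Numerically, this derivation can be checked with mpmath's taylor and pade helpers. The sketch below is illustrative and not from the article; the expansion point x0 = 0.05, the order d = 3 and the reuse of the example polynomial are arbitrary assumptions. It confirms that the zero −a0/a1 of the linear numerator coincides with the update c_{d−1}/c_d = d·(1/f)^(d−1)(x)/(1/f)^(d)(x).

    from mpmath import mp, taylor, pade

    mp.dps = 30
    f  = lambda t: t**3 + 6*t**2 + 10*t - 1    # example polynomial from the previous section
    x0 = 0.05                                  # arbitrary expansion point
    d  = 3

    # Pade approximant of f about x0: linear numerator, denominator of degree d - 1
    p, q = pade(taylor(f, x0, d), 1, d - 1)    # p = [a0, a1], q = [1, b1, ...] (ascending order)
    print(-p[0] / p[1])                        # zero of the numerator, i.e. the update h

    # The same update from the Taylor coefficients c_k of 1/f about x0
    c = taylor(lambda t: 1 / f(t), x0, d)
    print(c[d - 1] / c[d])                     # should agree up to numerical-differentiation error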

Relation to Newton's method


Householder's method applied to the real-valued function f(x) is the same as Newton's method applied to the function g(x):

$$x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)}$$

with

$$g(x) = \left|\left(\frac{1}{f}\right)^{(d-1)}(x)\right|^{-1/d}.$$

In particular, d = 1 gives Newton's method unmodified and d = 2 gives Halley's method.
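For d = 2 this relation can be verified symbolically. The sympy sketch below is illustrative (the absolute value is dropped, which amounts to assuming (1/f)′ > 0) and checks that one Newton step on g equals one Halley step on f.

    import sympy as sp

    x = sp.symbols("x")
    f = sp.Function("f")(x)

    g = sp.diff(1 / f, x) ** sp.Rational(-1, 2)        # g = ((1/f)')^(-1/2), i.e. d = 2
    newton_on_g = x - g / sp.diff(g, x)
    halley_on_f = x - 2*f*sp.diff(f, x) / (2*sp.diff(f, x)**2 - f*sp.diff(f, x, 2))

    print(sp.simplify(newton_on_g - halley_on_f))      # expected: 0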

References

  1. ^ Householder, Alston Scott (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill. p. 169. ISBN 0-07-030465-3.
  2. ^ Ostrowski, A. M. (1966). Solution of Equations and Systems of Equations. Pure and Applied Mathematics. Vol. 9 (Second ed.). New York: Academic Press.