Mathematics 371 -- Numerical Analysis

September 21, 2001

Newton-Raphson Iteration -- ``the up side''

The convergence of Newton-Raphson iteration is often very fast -- and this is a big upside:

> f:=x->exp(x) - x^2 - 2;

f := proc (x) options operator, arrow; exp(x)-x^2-2 end proc

> x[0]:=1.0:

> for i to 5 do x[i]:=x[i-1] - f(x[i-1])/D(f)(x[i-1]); lprint(x[i]); end do:

1.392211192

1.323233192

1.319087329

1.319073677

1.319073676

The number of correct decimal places roughly doubles with each iteration -- the hallmark of quadratic convergence.
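The same run is easy to reproduce outside Maple. Here is a minimal Python sketch of the update x[i] = x[i-1] - f(x[i-1])/f'(x[i-1]), with the derivative coded by hand rather than via Maple's D operator:

```python
import math

def f(x):
    return math.exp(x) - x**2 - 2

def fprime(x):
    # derivative of f, worked out by hand: f'(x) = exp(x) - 2x
    return math.exp(x) - 2*x

x = 1.0
for i in range(5):
    x = x - f(x) / fprime(x)  # Newton-Raphson update
    print(x)
```

In double precision the iterates match the worksheet's 10-digit values, settling at approximately 1.319073676.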

Newton-Raphson Iteration -- ``the down side(s)''

1) The behavior of Newton-Raphson iteration can be hard to predict --

the dependence of the root found on the initial point x[0] can be

quite subtle. Here is an example of that phenomenon. Our f is

> f:=x->x^4-3*x^2+0.5*x+1:

which has four distinct real roots:

> plot(f,-2..2);

[Maple Plot]

If we start iterating from x[0] = 0.0, then we get a sequence converging to the

root in [-2,-1]:

> x[0]:=0.0:

> for i to 7 do x[i]:=x[i-1]-f(x[i-1])/D(f)(x[i-1]); lprint(x[i]); end do:

-2.000000000

-1.794871795

-1.726039364

-1.718312570

-1.718219264

-1.718219250

-1.718219250

If, on the other hand, we start iterating from x[0] = 0.1, then we get a sequence

converging to the root in [1,2].

> x[0]:=0.1:

> for i to 15 do x[i]:=x[i-1]-f(x[i-1])/D(f)(x[i-1]); lprint(x[i]) end do:

10.72604167

8.078920305

6.104732700

4.638752908

3.558480520

2.773382161

2.217308404

1.842947244

1.617259801

1.513191171

1.488514489

1.487160096

1.487156109

1.487156110

1.487156110

Note that in neither case is the root found the closest root to x[0] .
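This sensitivity to the starting point is easy to check numerically. A Python sketch of the two runs above (the helper `newton` is ours, not part of the worksheet):

```python
def f(x):
    return x**4 - 3*x**2 + 0.5*x + 1

def fprime(x):
    # hand-computed derivative: f'(x) = 4x^3 - 6x + 0.5
    return 4*x**3 - 6*x + 0.5

def newton(x, steps):
    # run a fixed number of Newton-Raphson steps from x
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

print(newton(0.0, 20))  # lands on the root near -1.718
print(newton(0.1, 20))  # lands on the root near 1.487
```

Two starting points only 0.1 apart end up at roots on opposite sides of the interval.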

2) Here is a case where Newton-Raphson iteration does not converge at all.

> f:=x->x^3-x-3:

> x[0]:=0.0:

> for i to 12 do x[i]:=x[i-1] - f(x[i-1])/D(f)(x[i-1]); lprint(x[i]) end do:

-3.000000000

-1.961538462

-1.147175962

-.6579373e-2

-3.000389074

-1.961818175

-1.147430228

-.7256246e-2

-3.000473188

-1.961878646

-1.147485193

-.7402501e-2

> plot(f(x),x=-3.5..3.5);

[Maple Plot]

Can you see what is happening?
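The printed values suggest what is going on: the iterates fall into a near-cycle of length 4 -- each pass goes roughly -3, -1.96, -1.15, -0.007 and back again -- and never approach the actual real root (about 1.6717). A Python sketch that exhibits the pattern:

```python
def f(x):
    return x**3 - x - 3

def fprime(x):
    # hand-computed derivative: f'(x) = 3x^2 - 1
    return 3*x**2 - 1

x = 0.0
xs = []
for _ in range(12):
    x = x - f(x) / fprime(x)  # Newton-Raphson update
    xs.append(x)

# every fourth iterate is nearly the same, and no iterate is
# anywhere near the real root of x^3 - x - 3 (approximately 1.6717)
print(xs)
```

In exact arithmetic the cycle would not be exact (note the slow drift in the printed worksheet values), but over these iterations it traps the sequence completely.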

3) Finally, here is a perhaps unexpected case where Newton-Raphson does

not behave well in floating point arithmetic(!). Let's take a new f

and do Newton-Raphson iteration starting from x[0] = 2.0

> f:=x->x^4-2.45*x^3+.5175*x^2+2.446625*x-1.520875:

> x[0]:=2.0:

> for i to 22 do x[i]:=x[i-1]-f(x[i-1])/D(f)(x[i-1]); lprint(x[i]) end do:

1.741116751

1.557291680

1.428371994

1.338996466

1.277649948

1.235880298

1.207615413

1.188575888

1.175792220

1.167228684

1.161501006

1.157673236

1.155120678

1.153417311

1.152264734

1.151510154

1.150898741

1.150515011

1.149342678

1.148983998

1.149134374

1.149134374

So (with the default 10-decimal-digit floating point number system) we get

rather slow convergence to 1.149134374. But if we increase the number of

decimal digits used to 20:

> Digits:=20:

> x[0]:=2.0:

> for i to 30 do x[i]:=x[i-1]-f(x[i-1])/D(f)(x[i-1]); lprint(x[i]) end do:

1.7411167512690355330

1.5572916796083028960

1.4283719952230803627

1.3389964638330840569

1.2776499569903845576

1.2358802925598810602

1.2076154135903282352

1.1885759097830796671

1.1757923808203947156

1.1672287589771235447

1.1615010172085879619

1.1576741322294723081

1.1551191172669674239

1.1534140948411369623

1.1522766643342326095

1.1515180437108183638

1.1510121481218972297

1.1506748183241967239

1.1504499024068353207

1.1502999453955188221

1.1501999682457419020

1.1501333142304329951

1.1500888770730123818

1.1500592517906356800

1.1500395013775189105

1.1500263343451607040

1.1500175562479933339

1.1500117041893631362

1.1500078026720973299

1.1500052016303386478

Now we seem to be converging to something closer to 1.15(!) This is more

accurate -- the exact root is 1.15. In fact f(x) = (x-1.15)^3*(x+1), so 1.15 is a

root of multiplicity 3. Multiple roots slow Newton-Raphson down to linear

convergence, and they make the computed values of f near the root extremely

sensitive to roundoff -- which is why carrying more digits helps here.
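The polynomial above factors as f(x) = (x-1.15)^3*(x+1), so 1.15 is a root of multiplicity 3. For a root of multiplicity m, Newton-Raphson converges only linearly, with asymptotic error ratio (m-1)/m = 2/3 here. One standard remedy, not used in the worksheet, is the modified step x - m*f(x)/f'(x), which restores fast convergence when the multiplicity m is known. A Python sketch of both behaviors:

```python
def f(x):
    return x**4 - 2.45*x**3 + 0.5175*x**2 + 2.446625*x - 1.520875

def fprime(x):
    # hand-computed derivative of f
    return 4*x**3 - 7.35*x**2 + 1.035*x + 2.446625

root = 1.15  # f(x) = (x - 1.15)^3 * (x + 1), a triple root

# Ordinary Newton-Raphson: the error shrinks by only about 2/3 per step.
x = 2.0
errs = []
for _ in range(12):
    x = x - f(x) / fprime(x)
    errs.append(abs(x - root))
ratio = errs[-1] / errs[-2]
print(ratio)  # close to 2/3, the linear rate for a triple root

# Modified Newton for a triple root: step x - 3*f(x)/f'(x).
y = 2.0
for _ in range(5):
    y = y - 3 * f(y) / fprime(y)
print(y)  # close to 1.15 after only a handful of steps
```

Even the modified iteration cannot beat the roundoff floor -- near a triple root the computed f is mostly noise -- but it reaches that floor in a few steps instead of dozens.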