Diego Hernández Jiménez

Welcome to my personal website! Here I share some of my little projects.

Newton method from scratch

An attempt to implement the Newton–Raphson method in Python

Description

While studying calculus I came across this interesting numerical method for approximating the real zeros of a function, that is, for estimating solutions to equations of the form $f(x)=0$. The algorithm can be described as follows (Larson & Edwards, 2013):

Let $f(c)=0$, where $f$ is differentiable on an open interval containing $c$. Then, to approximate $c$, use these steps.

  1. Make an initial estimate $x_1$ that is close to $c$.

  2. Determine a new approximation $x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$.

  3. When $|x_n-x_{n+1}|$ is within the desired accuracy, let $x_{n+1}$ serve as the final approximation. Otherwise, return to Step 2 and calculate a new approximation.
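Before writing any reusable code, the three steps can be sketched by hand for $f(x)=x^2-2$, whose positive zero is $\sqrt{2}$ (the function, the hand-coded derivative, and the tolerance are all just illustrative choices):

```python
# Illustrative sketch of the three steps for f(x) = x**2 - 2,
# whose positive zero is sqrt(2) ≈ 1.41421356.
f = lambda x: x**2 - 2
df = lambda x: 2 * x          # derivative supplied by hand for this sketch

x = 1.0                       # Step 1: initial estimate close to the zero
for _ in range(10):
    x_next = x - f(x) / df(x)    # Step 2: Newton iteration
    if abs(x - x_next) < 1e-6:   # Step 3: desired accuracy reached
        break
    x = x_next

print(x_next)  # ≈ 1.41421356
```

Starting from $x_1=1$, the iterates are $1.5$, $1.41\overline{6}$, $1.414215\ldots$, closing in on $\sqrt{2}$ after only a handful of steps.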

However, this method doesn’t always yield a convergent sequence. This is an important issue: it means that there are functions whose zeros can’t be approximated with this technique, and if we don’t detect those cases, we will end up applying Newton’s method indefinitely without ever reaching a solution. Luckily, it can be shown that a sufficient condition for convergence to a zero of $f$ is that $\left|\frac{f(x)f''(x)}{[f'(x)]^2}\right|<1$ on an open interval containing the zero.
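The condition is easy to evaluate by hand for a concrete case. Sticking with $f(x)=x^2-2$ (derivatives written out manually here, purely for illustration):

```python
# Checking the sufficient condition |f(x)f''(x) / f'(x)**2| < 1
# for f(x) = x**2 - 2 near its positive zero.
f   = lambda x: x**2 - 2
df  = lambda x: 2 * x     # f'(x)
d2f = lambda x: 2.0       # f''(x)

x = 1.0  # a point in an open interval containing sqrt(2)
ratio = abs(f(x) * d2f(x) / df(x)**2)
print(ratio, ratio < 1)  # 0.5 True: convergence is guaranteed here
```

Since $|(-1)(2)/2^2| = 0.5 < 1$, Newton’s method is guaranteed to converge from this starting point.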

Now that we have all the necessary ‘ingredients’ for our function, let’s see how it could be implemented.

Code

  • Iterative formula:

$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$

def iterative_formula(f,x_n):
    
    from scipy.misc import derivative  # removed in SciPy 1.12; needs an older SciPy
    
    # 'dx' is the finite-difference step; the default (dx=1.0) is far
    # too coarse and distorts the approximation of f'(x_n)
    x_n1=x_n-(f(x_n)/derivative(f,x_n,dx=1e-6))

    return x_n1
  • Convergence condition:

$\left|\frac{f(x)f''(x)}{[f'(x)]^2}\right|<1$

def converges(f,x_n):
    
    from scipy.misc import derivative
    
    # parameter 'n' selects the order of the derivative; a larger step
    # 'dx' for the second derivative limits round-off error
    numerator=f(x_n)*derivative(f,x_n,dx=1e-4,n=2)
    denominator=derivative(f,x_n,dx=1e-6)**2
    
    return abs(numerator/denominator)<1
  • Precision:

$|x_n-x_{n+1}|$

def precise_enough(x_n,x_n1,error):
      
    return abs(x_n-x_n1)<=error

Complete algorithm:

def newton_method(f,x_n,error=0.01):
    
    if converges(f,x_n):
        
        x_n1=iterative_formula(f,x_n)

        iters=1 # one iteration already performed above
        while not precise_enough(x_n,x_n1,error):
            x_n=x_n1
            x_n1=iterative_formula(f,x_n)
            iters+=1

        return x_n1,iters
    else:
        print("Newton's method fails to converge")
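To see the whole pipeline in action, here is a self-contained run-through on $f(x)=x^2-2$. It reproduces the helper functions, but with a hand-written central-difference derivative standing in for scipy.misc.derivative (which was removed in SciPy 1.12), so this sketch runs on its own:

```python
# Self-contained sketch of the complete algorithm for f(x) = x**2 - 2.
# A central finite difference stands in for scipy.misc.derivative here.
def derivative(f, x, dx=1e-6, n=1):
    if n == 1:
        return (f(x + dx) - f(x - dx)) / (2 * dx)
    # n == 2: second-order central difference (larger step limits round-off)
    dx = 1e-4
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2

def iterative_formula(f, x_n):
    return x_n - f(x_n) / derivative(f, x_n)

def converges(f, x_n):
    return abs(f(x_n) * derivative(f, x_n, n=2) / derivative(f, x_n)**2) < 1

def precise_enough(x_n, x_n1, error):
    return abs(x_n - x_n1) <= error

def newton_method(f, x_n, error=0.01):
    if converges(f, x_n):
        x_n1 = iterative_formula(f, x_n)
        iters = 1  # one iteration already performed above
        while not precise_enough(x_n, x_n1, error):
            x_n, x_n1 = x_n1, iterative_formula(f, x_n1)
            iters += 1
        return x_n1, iters
    print("Newton's method fails to converge")

root, iters = newton_method(lambda x: x**2 - 2, 1, error=1e-8)
print(root)  # ≈ 1.41421356 (sqrt(2))
```

Passing $x_1=-1$ instead would return the other zero, $-\sqrt{2}$, since the starting estimate decides which zero the sequence converges to.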

The code is available here

Comments

  • I tested the code with several examples (you can check them on my GitHub) and it seems to work fine. However, some examples produce strange results, and in other cases the answer is correct but slightly different from the one obtained with other implementations of the Newton method. I wouldn’t call the program a failure, but it does mean that I’ve written a function that doesn’t capture all the elements needed to solve the problem properly.
  • The function doesn’t return a tuple or a list with all the real zeros, just one solution, because the output depends on the first estimate given as an argument. For instance, if you pass the function $f(x)=x^2-2$ and the estimate $x_1=1$, you get $\approx 1.41$ ($+\sqrt{2}$), but if you pass $x_1=-1$ as the first approximation, you get the other zero, $\approx -1.41$ ($-\sqrt{2}$). This forces the user to have a rough idea of the value of the solutions in advance.