1. Root Finding Methods
1.1. Newton’s method
Newton’s method (also known as the Newton–Raphson method) finds successively better approximations to the roots (or zeroes) of a real-valued function. Starting from an initial guess x0, the iteration is repeated as

\[ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} \]
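As a quick illustration of the update rule, here is a minimal Python sketch; the function name `newton` and the `tol`/`max_iter` stopping parameters are illustrative choices, not part of the method’s definition.

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x_n)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)  # one Newton step
    return x

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356
```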
1.2. Fixed point method
Fixed-point iteration is a method of computing fixed points of iterated functions. More specifically, given a function f defined on the real numbers with real values and given a point x0 in the domain of f, the fixed-point iteration is

\[ x_{n+1}=f(x_{n}),\quad n=0,1,2,\dots \]

Under suitable conditions (for example, when f is a contraction near the fixed point), the sequence converges to a point x satisfying f(x) = x.
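A minimal Python sketch of the iteration; stopping on the change between successive iterates is one common convention, assumed here rather than prescribed by the method.

```python
import math

def fixed_point(f, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example: x = cos(x) has a fixed point near 0.739 (the Dottie number).
print(fixed_point(math.cos, x0=1.0))  # ~0.7390851
```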
1.3. Secant method
The secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. It can be thought of as a finite-difference approximation of Newton’s method.

\[ x_{n}=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}={\frac {x_{n-2}f(x_{n-1})-x_{n-1}f(x_{n-2})}{f(x_{n-1})-f(x_{n-2})}}. \]
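A minimal Python sketch of the recurrence above; note that, unlike Newton’s method, no derivative is needed, but two starting points are.

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Iterate the secant update until |f(x_n)| < tol."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # x_n = x_{n-1} - f(x_{n-1}) (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```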
2. Interpolation techniques
2.1. Hermite Interpolation
Hermite interpolation is a method of interpolating data points as a polynomial function that matches not only the function values but also prescribed derivative values at each point. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences.
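A sketch of the standard divided-difference construction with doubled nodes (where two equal nodes meet, the divided difference is the supplied derivative); the function names here are illustrative.

```python
import math

def hermite_coefficients(xs, ys, dys):
    """Build the divided-difference table with each node doubled;
    return the Newton-form nodes and coefficients."""
    n = len(xs)
    z = [0.0] * (2 * n)
    Q = [[0.0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        z[2 * i] = z[2 * i + 1] = xs[i]
        Q[2 * i][0] = Q[2 * i + 1][0] = ys[i]
        Q[2 * i + 1][1] = dys[i]  # derivative fills the 0/0 slot
        if i > 0:
            Q[2 * i][1] = (Q[2 * i][0] - Q[2 * i - 1][0]) / (z[2 * i] - z[2 * i - 1])
    for j in range(2, 2 * n):
        for i in range(j, 2 * n):
            Q[i][j] = (Q[i][j - 1] - Q[i - 1][j - 1]) / (z[i] - z[i - j])
    return z, [Q[i][i] for i in range(2 * n)]

def eval_newton_form(z, coef, x):
    """Evaluate the Newton-form polynomial by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - z[k]) + coef[k]
    return result

# Example: match sin and cos (value and derivative) at two points.
xs = [0.0, math.pi / 2]
z, c = hermite_coefficients(xs, [math.sin(x) for x in xs], [math.cos(x) for x in xs])
print(eval_newton_form(z, c, 1.0), math.sin(1.0))  # close agreement
```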
2.2. Lagrange Interpolation
Lagrange polynomials are used for polynomial interpolation: the interpolant through k + 1 points (x_j, y_j) is a linear combination of Lagrange basis polynomials,

\[ L(x)=\sum _{j=0}^{k}y_{j}\,\ell _{j}(x),\qquad \ell _{j}(x)=\prod _{0\le m\le k,\;m\neq j}{\frac {x-x_{m}}{x_{j}-x_{m}}}. \]
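A direct Python sketch of the formula above, evaluating the interpolant at a single point (an O(k²) evaluation; fine for illustration, not for large data sets).

```python
def lagrange_eval(xs, ys, x):
    """Evaluate L(x) = sum_j y_j * l_j(x) from the Lagrange basis."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        # Basis polynomial l_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        basis = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += yj * basis
    return total

# Example: three points on y = x^2 reproduce the parabola exactly.
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```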
2.3. Newton’s Interpolation
Newton’s divided-differences method is an algorithm historically used for computing tables of logarithms and trigonometric functions. Divided differences is a recursive division process, and the method can be used to calculate the coefficients of the interpolation polynomial in Newton form.
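A compact Python sketch of the divided-difference table (computed in place, column by column) and the nested evaluation of the resulting Newton-form polynomial.

```python
def divided_differences(xs, ys):
    """Return the Newton-form coefficients [f[x0], f[x0,x1], ...]."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # Overwrite in place, bottom up, with the j-th divided differences.
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolant by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]  # points on y = x^2 + x + 1
print(newton_eval(xs, divided_differences(xs, ys), 1.5))  # 4.75
```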
3. Integration methods
3.1. Euler Method
The Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method.

\[ y_{n+1} = y_{n} + h f(t_{n}, y_{n}) \]
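A minimal Python sketch of the forward Euler step for y' = f(t, y); the fixed step size h and the returned list of (t, y) pairs are illustrative choices.

```python
def euler(f, t0, y0, h, n_steps):
    """Advance y' = f(t, y) from (t0, y0) with fixed step h."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)  # y_{n+1} = y_n + h f(t_n, y_n)
        t = t + h
        history.append((t, y))
    return history

# Example: y' = y, y(0) = 1; the exact solution is e^t.
print(euler(lambda t, y: y, 0.0, 1.0, h=0.1, n_steps=10)[-1])  # ~2.594 vs e ≈ 2.718
```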
3.2. Newton–Cotes Method
Newton–Cotes formulae, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulae for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes.
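As one concrete member of the family, here is a Python sketch of the composite Simpson’s rule (the three-point closed Newton–Cotes formula applied over pairs of subintervals).

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

print(simpson(math.sin, 0.0, math.pi, 10))  # ~2.0, the exact integral
```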
3.3. Predictor–Corrector Method
Predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equations, i.e. to find an unknown function that satisfies a given differential equation. All such algorithms proceed in two steps, illustrated by the sketch after this list:

- The initial “prediction” step starts from a function fitted to the function values and derivative values at a preceding set of points and extrapolates (“anticipates”) this function’s value at a subsequent, new point.
- The next “corrector” step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function’s value at the same subsequent point.
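One simple example of such a pair is Heun’s method, which uses forward Euler as the predictor and the trapezoidal rule as the corrector; the Python names below are illustrative.

```python
def heun(f, t0, y0, h, n_steps):
    """Predictor-corrector pair: Euler predicts, trapezoidal rule corrects."""
    t, y = t0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(t, y)                      # prediction step
        y = y + h * (f(t, y) + f(t + h, y_pred)) / 2  # correction step
        t = t + h
    return y

# Example: y' = y, y(0) = 1 over [0, 1]; exact answer is e ≈ 2.71828.
print(heun(lambda t, y: y, 0.0, 1.0, h=0.1, n_steps=10))
```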
3.4. Trapezoidal method
The trapezoidal rule is a technique for approximating the definite integral. It works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area; in composite form,

\[ \int _{a}^{b}f(x)\,dx\approx \sum _{k=1}^{N}{\frac {f(x_{k-1})+f(x_{k})}{2}}\Delta x_{k} \]
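A Python sketch of the composite rule above, assuming (for simplicity) N equal subintervals so that Δx_k = (b − a)/N throughout.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for k in range(1, n):
        total += f(a + k * h)
    return total * h

print(trapezoid(math.sin, 0.0, math.pi, 1000))  # ~2.0
```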