diff --git a/README.md b/README.md
index c828490..5bedf4d 100644
--- a/README.md
+++ b/README.md
@@ -1,38 +1,39 @@
 # Table of Contents
 
-1. [Root Finding Methods](#org97f8dc1)
-    1. [Newton’s method](#org4ec5a5a)
-    2. [Fixed point method](#orgd92eb51)
-    3. [Secant method](#org5e86b54)
-2. [Interpolation techniques](#org7879a30)
-    1. [Hermite Interpolation](#org01982a3)
-    2. [Lagrange Interpolation](#org1020c9c)
-    3. [Newton’s Interpolation](#orgd08b2ee)
-3. [Integration methods](#orgf7b000b)
-    1. [Euler Method](#orge64619c)
-    2. [Newton–Cotes Method](#orgb51f88e)
-    3. [Predictor–Corrector Method](#org2f8adfb)
-    4. [Trapizoidal method](#org4dbe660)
+1. [Root Finding Methods](#orgefe5c09)
+    1. [Newton’s method](#org7d679d2)
+    2. [Fixed point method](#orgeb54040)
+    3. [Secant method](#org82e886b)
+2. [Interpolation techniques](#orgcd5aecd)
+    1. [Hermite Interpolation](#org564bc6d)
+    2. [Lagrange Interpolation](#org2720a8a)
+    3. [Newton’s Interpolation](#orgc7ba82a)
+3. [Integration methods](#org09c0dc7)
+    1. [Euler Method](#orgb8d1fca)
+    2. [Newton–Cotes Method](#org10d601a)
+    3. [Predictor–Corrector Method](#org555de04)
+    4. [Trapezoidal method](#orgcf2b400)
+
+![img](c:/Users/rjish/notes-orgmode/.attach/45/5c46bb-952b-4978-b48e-554565046442/_20240120_041518num-ana.png)
 
 \![GitHub]()
 \![GitHub forks]()
 \![GitHub Repo stars]()
 \![GitHub watchers]()
-:TOC: :include all
-
-
+
 
 # Root Finding Methods
 
-
+
 
 ## [Newton’s method](https://en.wikipedia.org/wiki/Newton%27s_method)
 
-Newton’s method (also known as the Newton–Raphson method) is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function. The process is repeated as $$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$
+Newton’s method (also known as the Newton–Raphson method) is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function. The process is repeated as
+$$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$
 
-
+
 
 ## [Fixed point method](https://en.wikipedia.org/wiki/Fixed-point_iteration)
 
@@ -40,7 +41,7 @@ Fixed-point iteration is a method of computing fixed points of iterated function
 
 $$ x_{n+1}=f(x_{n}),\,n=0,1,2,\dots$$
 
-
+
 
 ## [Secant method](https://en.wikipedia.org/wiki/Secant_method)
 
@@ -48,38 +49,38 @@ Secant method is a root-finding algorithm that uses a succession of roots of sec
 
 $$ x_{n}=x_{n-1}-f(x_{n-1}){\frac {x_{n-1}-x_{n-2}}{f(x_{n-1})-f(x_{n-2})}}={\frac {x_{n-2}f(x_{n-1})-x_{n-1}f(x_{n-2})}{f(x_{n-1})-f(x_{n-2})}}. $$
 
-
+
 
 # Interpolation techniques
 
-
+
 
 ## Hermite Interpolation
 
 Hermite Interpolation is a method of interpolating data points as a polynomial function. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences.
 
-
+
 
 ## Lagrange Interpolation
 
 Lagrange polynomials are used for polynomial interpolation. See [Wikipedia](https://en.wikipedia.org/wiki/Lagrange_polynomial)
 
-
+
 
 ## Newton’s Interpolation
 
 Newton’s divided differences is an algorithm, historically used for computing tables of logarithms and trigonometric functions. Divided differences is a recursive division process. The method can be used to calculate the coefficients in the interpolation polynomial in the Newton form.
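The root-finding updates and the Lagrange interpolation idea above map almost line for line onto code. The following Python sketch is illustrative only and is not code from this repository; the function names (`newton`, `fixed_point`, `secant`, `lagrange_eval`), the tolerances, and the test problem are assumptions made for the example.

```python
# Illustrative sketch (not from this repository): the root-finding updates and
# Lagrange interpolation described above, in plain Python.

def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / df(x)
    return x


def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration: x_{n+1} = g(x_n), stopped when successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x


def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant iteration: x_n = x_{n-1} - f(x_{n-1}) (x_{n-1} - x_{n-2}) / (f(x_{n-1}) - f(x_{n-2}))."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1


def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total


if __name__ == "__main__":
    f = lambda x: x * x - 2.0          # root at sqrt(2)
    df = lambda x: 2.0 * x
    print(newton(f, df, 1.0))                                # ~1.414213562
    print(secant(f, 1.0, 2.0))                               # ~1.414213562
    print(fixed_point(lambda x: 0.5 * (x + 2.0 / x), 1.0))   # same root via x = g(x)
    print(lagrange_eval([0.0, 1.0, 2.0], [1.0, 3.0, 7.0], 1.5))  # quadratic through 3 points
```

The stopping criteria (residual size for Newton and secant, change between iterates for the fixed-point case) are also assumptions; other tolerance tests are equally common.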
-
+
 
 # Integration methods
 
-
+
 
 ## Euler Method
 
@@ -87,14 +88,14 @@ Euler method (also called forward Euler method) is a first-order numerical proce
 
 $$ y_{n+1} = y_{n} + h f(t_{n} , y_{n}) $$
 
-
+
 
 ## Newton–Cotes Method
 
 Newton–Cotes formulae, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulae for numerical integration (also called quadrature) based on evaluating the integrand at equally spaced points. They are named after Isaac Newton and Roger Cotes.
 
-
+
 
 ## Predictor–Corrector Method
 
@@ -104,7 +105,7 @@ Predictor–Corrector methods belong to a class of algorithms designed to integr
 2. The next, *“corrector”* step refines the initial approximation by using the predicted value of the function and another method to interpolate that unknown function’s value at the same subsequent point.
 
-
+
 
 ## Trapezoidal method
diff --git a/README.org b/README.org
index 7751539..217385e 100644
--- a/README.org
+++ b/README.org
@@ -1,15 +1,19 @@
+:PROPERTIES:
+:ID: 455c46bb-952b-4978-b48e-554565046442
+:END:
 #+TITLE: Numerical-analysis
 #+AUTHOR: Jishnu Rajendran
+[[attachment:_20240120_041518num-ana.png]]
+
+
 ![GitHub](https://img.shields.io/github/license/jishnurajendran/Numerical-analysis?style=for-the-badge)
 ![GitHub forks](https://img.shields.io/github/forks/jishnurajendran/Numerical-analysis?style=for-the-badge)
 ![GitHub Repo stars](https://img.shields.io/github/stars/jishnurajendran/Numerical-analysis?style=for-the-badge)
 ![GitHub watchers](https://img.shields.io/github/watchers/jishnurajendran/Numerical-analysis?style=for-the-badge)
-:PROPERTIES:
-:TOC: :include all
-:END:
 
 * Root Finding Methods
 ** [[https://en.wikipedia.org/wiki/Newton%27s_method][Newton's method]]
-Newton's method (also known as the Newton–Raphson method) is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function. The process is repeated as $$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$
+Newton's method (also known as the Newton–Raphson method) is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function. The process is repeated as
+$$ x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}} $$
 ** [[https://en.wikipedia.org/wiki/Fixed-point_iteration][Fixed point method]]
 Fixed-point iteration is a method of computing fixed points of iterated functions. More specifically, given a function f defined on the real numbers with real values and given a point x0 in the domain of f, the fixed point iteration is
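To make the integration formulas above concrete, here is a minimal Python sketch of a forward Euler step, one predictor–corrector pairing (an Euler predictor followed by a trapezoidal corrector, i.e. Heun's method), and the composite trapezoidal rule, the two-point closed Newton–Cotes rule applied on equal subintervals. It is illustrative only and not code from this repository; the function names (`euler`, `heun`, `trapezoid`) and the test problems are assumptions made for the example.

```python
# Illustrative sketch (not from this repository): forward Euler, a simple
# predictor-corrector step, and the composite trapezoidal rule.
import math


def euler(f, t0, y0, h, n_steps):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
    return y


def heun(f, t0, y0, h, n_steps):
    """Predictor-corrector step: Euler predictor, trapezoidal corrector (Heun's method)."""
    t, y = t0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(t, y)                            # predictor: forward Euler
        y = y + 0.5 * h * (f(t, y) + f(t + h, y_pred))      # corrector: trapezoidal average
        t = t + h
    return y


def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total


if __name__ == "__main__":
    # dy/dt = -y with y(0) = 1; exact value y(1) = exp(-1) ~ 0.3679
    decay = lambda t, y: -y
    print(euler(decay, 0.0, 1.0, 0.1, 10))   # ~0.3487 (first-order error)
    print(heun(decay, 0.0, 1.0, 0.1, 10))    # ~0.3685 (closer, second-order)
    # integral of sin(x) over [0, pi] is exactly 2
    print(trapezoid(math.sin, 0.0, math.pi, 1000))  # ~1.99999...
```

In this pairing the Euler step supplies a first guess for the solution at the next point and the trapezoidal average then corrects it, which matches the two-phase predictor and corrector structure described above.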