
Proof of fixed-point theorem

Hey guys,

I tried to write a proof of Banach's contraction mapping theorem, which is extremely important for fixed-point iteration when numerically solving for the roots of an equation, but I think it even extends to PDEs, where a function that solves the PDE is a fixed point in an infinite-dimensional function space. Do you guys think my proof is rigorous and technically correct?
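Not the proof itself, of course, but here is a minimal sketch of the iteration the theorem justifies (my choice of Python, and of \( g(x) = \cos x \) as the test map, which is a contraction on \([0,1]\) since \( |\sin x| \le \sin 1 < 1 \) there):

import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    # Banach iteration: x_{n+1} = g(x_n); the theorem guarantees
    # convergence to the unique fixed point when g is a contraction
    # on a complete metric space.
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos is a contraction on [0, 1], so the iteration converges to the
# Dottie number ~0.739085 from any starting point in the interval.
print(fixed_point(math.cos, 0.5))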

Cheers,
Quasar.
 

Attachments

  • banach_contraction_mapping_theorem.pdf (94.3 KB)
Thanks for sharing, I didn't know about Aitken's acceleration!

It would be a good exercise for me to show that, for the recursive sequence \( x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right) \), Aitken acceleration has (if I understood correctly) a quadratic rate of convergence, i.e. \( \lim_{n\to\infty} \frac{|(Ax)_n - l|}{|x_n - l|^2} = \mu \), where \( \mu > 0 \).
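A quick way to check that ratio numerically, as a sketch (here \( A \) is the usual Aitken \(\Delta^2\) transform \( (Ax)_n = x_n - \frac{(x_{n+1}-x_n)^2}{x_{n+2}-2x_{n+1}+x_n} \), and \( l = \sqrt{2} \)):

import math

# Fixed-point iteration x_{n+1} = (x_n + 2/x_n)/2, converging to l = sqrt(2).
x = [5.0]
for _ in range(10):
    x.append(0.5*(x[-1] + 2.0/x[-1]))

l = math.sqrt(2.0)
for n in range(len(x) - 2):
    denom = x[n + 2] - 2.0*x[n + 1] + x[n]
    if denom == 0.0 or abs(x[n] - l) < 1e-6:
        break  # stop once rounding noise dominates the differences
    # Aitken's delta-squared transform of (x_n, x_{n+1}, x_{n+2})
    Ax = x[n] - (x[n + 1] - x[n])**2 / denom
    print(n, abs(Ax - l) / abs(x[n] - l)**2)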
 
I recently purchased a copy of this book: An Introduction to Numerical Analysis by Suli and Mayers. It assumes a background of only a first course in analysis, and the presentation of the material is intuitive, rigorous & clear - no hand-waving. I find it to be such a nicely written undergraduate-level intro.
 
To follow up on my earlier post: I'd like to verify that quadratic convergence claim both analytically and experimentally.
 
A nice feature of fixed-point/contraction theorems is that the iteration converges for any initial guess (within the region where the map is a contraction), in contrast to, say, Newton's method.
 
If I remember correctly, that's precisely why one has to run a bracketing routine prior to Newton-Raphson. Geometrically speaking, if \( x_n \) is such that \( f'(x_n) \) is close to zero, then \( x_{n+1} \) can be far away from the root.
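Something like this hybrid, I imagine; a sketch (hypothetical helper name, not production code) with a bisection phase to narrow the bracket, followed by a Newton-Raphson polish:

def bracket_then_newton(f, df, a, b, coarse_tol=1e-3, tol=1e-12, max_iter=50):
    # Bisection phase: shrink the bracket [a, b] (with f(a)*f(b) < 0)
    # until it is narrow enough that Newton starts close to the root.
    fa = f(a)
    while b - a > coarse_tol:
        m = 0.5*(a + b)
        if fa*f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    # Newton-Raphson phase, polishing from the midpoint of the bracket.
    x = 0.5*(a + b)
    for _ in range(max_iter):
        step = f(x)/df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton did not converge")

# Example: f(x) = x^2 - 2 on [0, 2] has a root at sqrt(2).
print(bracket_then_newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 0.0, 2.0))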
 
By the way, do you have a suggestion for a good intro book to C++ (using C++17)? I definitely have to re-learn modern C++, and I want to implement some of the algorithms I learn in my numerics book.

Quantlib implements many of the solvers in C++, but I'd like to make an attempt at writing my own code.

SO has a well-written post on this here. But (1) the answer was first written a while back, and (2) there aren't a lot of books that combine both numerical methods and modern C++.
 
Do you already know C++11? Or even C++03?
BTW, C++17 is only a stepping stone to C++20. TBH I don't think you are ready.

It's like judo

white -> yellow -> orange -> green -> blue -> brown -> black

And where are you, in C++ terms?

You can do all these algos in C. Or Python.
 
Last edited:
If you ask me, I think I'm still a white belt in C++.
I know basic object-oriented programming, but I need to come up to speed on the modern features, for example the C++11 ones you mention.
 
Another way to solve the fixed-point problem

\( x = g(x) \)

is to recast it as a least-squares minimisation problem

\( \min_x \, (x - g(x))^2 \)

and use your favourite optimiser.
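A sketch with scipy's minimize_scalar (my choice of \( g(x) = \cos x \) as the test map):

import math
from scipy.optimize import minimize_scalar

# Fixed point of x = cos(x), recast as a least-squares minimisation.
def residual(x):
    return (x - math.cos(x))**2

result = minimize_scalar(residual, bounds=(0.0, 1.0), method='bounded')
print(result.x)  # ~0.739085, i.e. the fixed point of cos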
 
For example, (1) several roots and (2) no roots at all are both possible.
 
\( f(x) = \frac{1}{2}-\frac{1}{1+M|x - 1.05|} \) has roots \(x_1=1.05-\frac{1}{M}\) and \(x_2=1.05+\frac{1}{M}\). It'll be fun to see the behavior of a 1D solver (or a bracketing routine) when \(1/M \) is near or smaller than the machine epsilon.

[Attached plot: pathological_function.PNG]
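A quick bracketing experiment, as a sketch (the bracket \([1.05, 2]\) is chosen by hand so that \( f \) changes sign on it): for moderate \( M \), brentq recovers \( x_2 = 1.05 + 1/M \), but once \( 1/M \) falls below the machine epsilon relative to 1.05, the computed root degenerates to the nearest floating-point neighbour of 1.05.

from scipy.optimize import brentq

def make_f(M):
    return lambda x: 0.5 - 1.0/(1.0 + M*abs(x - 1.05))

for M in (1.0e3, 1.0e9, 1.0e20):
    f = make_f(M)
    # f(1.05) = -0.5 and f(2) > 0, so [1.05, 2] brackets the root x2.
    root = brentq(f, 1.05, 2.0)
    print(M, root, root - 1.05)  # expect root - 1.05 ~ 1/M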
 
Even for M = 1.0e+9 etc. it will be a challenge for methods that use derivatives. It is like a boundary/internal-layer problem.

SHGO is a global optimiser that finds all the local minima, and hence, in this least-squares form, all the roots.


It's very easy to use.
from scipy.optimize import shgo

# Boundary-layer test problem: minimise (0.5 - 1/(1 + M|x - 1.05|))^2,
# whose global minima are the two roots x = 1.05 -/+ 1/M.
print("boundary layer problem")
M = 1000

def func(x):
    # shgo passes x as a length-1 array
    z = 1.0/(1.0 + M*abs(x[0] - 1.05))
    return (0.5 - z)**2

bnds = [(0.0, 2.0)]  # shgo needs finite bounds for its sampling phase
result = shgo(func, bounds=bnds, sampling_method='sobol')
print("global minimum  ", result.x)
print("success?  ", result.success)
print(result.fun)
print("local minima  ", result.xl)
print("values at local minima  ", result.funl)
 
“The first rule of discovery is to have brains and good luck. The second rule of discovery is to sit tight and wait till you get a bright idea.”

George Pólya
 