## BINARY BISECTION

9 01 2010

INTRO:

The Bisection Method is a well-known root-finding algorithm that appears at the very beginning of almost every book on Numerical Analysis.

The algorithm searches for a root in an interval $[a_{0},b_{0}]$ at whose endpoints the continuous problem function, $f(x)$, takes opposite signs:

$f(a_{0})f(b_{0})<0$

The Bisection Method is based on Bolzano's Intermediate Value Theorem, and it can also provide an alternative proof of it (Reference [1]).

Here the function values themselves do not matter: the algorithm uses no information from $f(x)$ other than its sign changes.

There are excellent pages on the internet that treat this topic (references [1], [2] and [3], for example), and it would be useless to add yet another redundant, and surely worse presented, summary of this numerical method.

I just want to bring out something hidden inside the computer program: a small piece of math buried in the code.

THE LOST FUNCTION:

The core of the Bisection Method is the conditional IF… THEN… statement, where the algorithm chooses in which half of the interval the desired root lies.

This conditional evaluation can be converted into a decision function, let's say $\delta_i$, that gives $0$ if the root is in the left half of the interval and $1$ otherwise.

Now the iterative sequence can be written using this function, $\delta_i$, where:

$m_{i}$ is the midpoint of the search interval:

$\displaystyle m_{i}=\frac{a_{i}+b_{i}}{2}$

$\delta_{i}$ is the decision function, and the $\delta(i,j)$ on the right-hand side is the Kronecker delta:

$\delta_i=\delta(sgn(f(m_{i})),sgn(f(a_{i})))$

Then:

$\delta_{i} = \left\{ \begin{array}{ll} 1 & \mbox{if } sgn(f(m_{i}))=sgn(f(a_{i})) \\\\ 0 & \mbox{if } sgn(f(m_{i}))\neq sgn(f(a_{i})) \end{array}\right\}$

The next search interval must then be:

${[a_{i+1}, b_{i+1}]} = \left\{ \begin{array}{ll} {[a_{i},m_{i}]} & \mbox{if } (\delta_{i}=0) \\\\ {[m_{i},b_{i}]} & \mbox{if } (\delta_{i}=1) \end{array}\right\}$

Then, they can be expressed with the aid of $\delta_{i}$, as:

$a_{i+1}=a_{i}+\delta_{i}\cdot (m_{i}-a_{i})$

$b_{i+1}=b_{i}+(1-\delta_{i})\cdot (m_{i}-b_{i})$
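These update formulas translate directly into code; a minimal sketch (the function name is my own):

```python
def bisect(f, a, b, iterations=60):
    """Bisection written with the decision function delta_i.

    delta = 1 when f(m) and f(a) share a sign (root in the right half),
    delta = 0 otherwise (root in the left half).
    """
    if f(a) * f(b) >= 0:
        raise ValueError("f must change sign on [a, b]")
    for _ in range(iterations):
        m = (a + b) / 2
        delta = 1 if f(m) * f(a) > 0 else 0
        # endpoint updates exactly as in the formulas above
        a = a + delta * (m - a)
        b = b + (1 - delta) * (m - b)
    return (a + b) / 2
```

Note that the IF… THEN… has not disappeared entirely (it hides in the computation of `delta`), but the interval update itself is now pure arithmetic.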

Note:
If $f(x)$ has only a single root in the interval $[a_{0},b_{0}]$, then an alternative version of the decision function can be used:

$\delta_i=\delta(sgn(f(m_{i})),sgn(f(a_{0})))$

with the advantage that it is then not necessary to evaluate $f(a_{i})$ inside the main loop of the algorithm, which improves performance.
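Under that single-root assumption, the sign of $f(a_{0})$ can be computed once, outside the loop; a sketch of this variant (function name my own):

```python
def bisect_single_root(f, a, b, iterations=60):
    """Variant decision function: compare sgn(f(m)) with sgn(f(a0)).

    Valid when f has a single root in [a, b]; f is then evaluated
    only once per iteration instead of twice.
    """
    sign_a0 = f(a) > 0          # sign at the original left endpoint, fixed
    for _ in range(iterations):
        m = (a + b) / 2
        delta = 1 if (f(m) > 0) == sign_a0 else 0
        a += delta * (m - a)
        b += (1 - delta) * (m - b)
    return (a + b) / 2
```

This works because, with a single root, $a_i$ never crosses it, so $sgn(f(a_i))$ equals $sgn(f(a_0))$ throughout.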

LENGTHS RATIO:

We can define the ratio of the lengths of the intervals $[a_{0},a_{i}]$ and $[a_{0},b_{0}]$ as the parameter:

$\displaystyle \theta_{i}=\frac{a_{i}-a_{0}}{b_{0}-a_{0}}$

Expressing $\theta_{i}$ in terms of $\delta_{i}$:

$\displaystyle\theta_{i}=\sum_{k=0}^{i-1} \frac{\delta_{k}}{2^{k+1}}$

The sequence formed by the successive values of the decision function, $\delta_{i}$, matches the binary expansion (the binary digits) of $\theta_{i}$.

Then the midpoint and the endpoints of the interval, in terms of $\theta_{i}$, are:

$a_{i}=a_{0}+(b_{0}-a_{0})\cdot \theta_{i}$

$\displaystyle m_{i}=a_{0}+(b_{0}-a_{0})\cdot \left(\theta_{i}+\frac{1}{2^{i+1}}\right)$

$\displaystyle b_{i}=a_{0}+(b_{0}-a_{0})\cdot \left(\theta_{i}+\frac{1}{2^{i}}\right)$
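These identities are easy to check numerically; the sketch below (names of my own choosing) accumulates $\theta_i$ from the $\delta_k$ and verifies the endpoint formulas at every step:

```python
def bisect_theta(f, a0, b0, iterations=30):
    """Accumulate theta_i = sum_{k<i} delta_k / 2**(k+1) next to the
    endpoints and verify the closed-form identities at each step."""
    a, b, theta = a0, b0, 0.0
    for i in range(iterations):
        m = (a + b) / 2
        delta = 1 if (f(m) > 0) == (f(a) > 0) else 0
        theta += delta / 2 ** (i + 1)
        a += delta * (m - a)
        b += (1 - delta) * (m - b)
        # identities a_i = a0 + (b0-a0)*theta_i and
        # b_i = a0 + (b0-a0)*(theta_i + 1/2**i), checked at index i+1
        assert abs(a - (a0 + (b0 - a0) * theta)) < 1e-9
        assert abs(b - (a0 + (b0 - a0) * (theta + 2.0 ** -(i + 1)))) < 1e-9
    return theta
```

With $a_0=0$ and $b_0=1$ the parameter $\theta_i$ converges to the root itself.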

HACKING THE BISECTION METHOD:

The connection between $\delta_{i}$ and $\theta_{i}$ can be used to turn the Bisection Method into a method for finding the binary expansion of a root of any function that satisfies the method's conditions.

For example if we take:

$f(x)=x^2-x_{0}$, where $x_{0}>0$

with $a_{0}=0$ and $b_{0}=2^{k}$, where $2^{k-1} < \sqrt{x_{0}} < 2^{k}$, then:

$\displaystyle \sqrt{x_{0}}=2^{k}\cdot\lim_{i \to +\infty}{\theta_{i}}$

and the sequence of $\delta_{i}$ gives the binary digits of $\sqrt{x_{0}}$, with the binary point shifted $k$ places.
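As a sketch (function name my own), extracting the digit stream for $\sqrt{x_{0}}$:

```python
def sqrt_bits(x0, n_bits=24):
    """Bisection on f(x) = x**2 - x0 over [0, 2**k], with
    2**(k-1) < sqrt(x0) < 2**k: the delta sequence is the binary
    expansion of sqrt(x0), shifted k places.  Assumes x0 > 1."""
    k = 1
    while 2 ** (2 * k) <= x0:        # smallest k with x0 < 4**k
        k += 1
    a, b = 0.0, float(2 ** k)
    bits = []
    for _ in range(n_bits):
        m = (a + b) / 2
        delta = 1 if (m * m > x0) == (a * a > x0) else 0
        bits.append(delta)
        a += delta * (m - a)
        b += (1 - delta) * (m - b)
    return k, bits
```

For $x_{0}=2$ this yields $k=1$ and the bit stream $1,0,1,1,0,1,0,1,\dots$, i.e. $\sqrt{2}=1.0110101\ldots_2$.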

References:

[1]-Mohamed A. Khamsi, Helmut Knaust – SOSMATH – The Bisection Method
[2]-John H. Mathews – California State Univ. Fullerton – Module for The Bisection Method
[3]-Holistic Numerical Methods Institute – Bisection Method: Nonlinear Equations

## EUCLINACCI – [1]

1 02 2009

Maybe the roughest lesson someone can ever receive is this:

$1+1=2$

The implications of that simple line could fill a whole library. One of them is the so-called Fibonacci sequence:

$\{0,1, 1, 2, 3, 5, 8, 13, 21, 34, 55,...\}$

Defined as:

$\displaystyle f_{n} = \left\{ \begin{array}{lcr} f_{0}=0;\\ f_{1}=1;\\ f_{n}=f_{n-1}+f_{n-2}; n \ge 2 \end{array} \right\}$
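The recurrence translates directly into a few lines of code (a minimal sketch):

```python
def fib(n):
    """f0 = 0, f1 = 1, f_n = f_{n-1} + f_{n-2}."""
    a, b = 0, 1                # (f_0, f_1)
    for _ in range(n):
        a, b = b, a + b        # slide the window one step forward
    return a
```

Here `[fib(n) for n in range(11)]` reproduces the sequence above.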

Tabulating the ratio between two consecutive terms of the sequence, two properties appear at first sight:

1) This ratio has a finite limit.
2) Two consecutive Fibonacci numbers are relatively prime.

$\displaystyle \begin{array}{|r|r|c|} \hline {n} & f_n & f_n/f_{n-1} \\ \hline 0 & 0 & -\\ 1 & 1 & -\\ 2 & 1 & 1.0000 \\ 3 & 2 & 2.0000 \\ 4 & 3 & 1.5000 \\ 5 & 5 & 1.6667 \\ 6 & 8 & 1.6000 \\ 7 & 13 & 1.6250 \\ 8 & 21 & 1.6154 \\ 9 & 34 & 1.6190 \\ 10 & 55 & 1.6176 \\ 11 & 89 & 1.6182 \\ 12 & 144 & 1.6180 \\ 13 & 233 & 1.6181 \\ 14 & 377 & 1.6180 \\ 15 & 610 & 1.6180 \\ 16 & 987 & 1.6180 \\ \hline \end{array}$

To prove the first property, we have:
$\displaystyle I = \lim_{n \to +\infty} \frac{f_n}{f_{n-1}} = \lim_{n \to +\infty} \frac{f_{n-1}+f_{n-2}}{f_{n-1}} = \\ 1 + \lim_{n \to +\infty} \frac{f_{n-2}}{f_{n-1}} = 1 + \lim_{n \to +\infty} \frac{f_{n-1}}{f_n};\\ I=1+\frac{1}{I} \Rightarrow I^{2}-I-1=0$

Its positive solution is the golden ratio:

$\displaystyle I=\frac{1+\sqrt{5}}{2}=\Phi$
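The limit can be checked numerically against the table above (a quick sketch):

```python
# ratio f_n / f_{n-1} for growing n, compared with the golden ratio
a, b = 1, 1                  # f_1, f_2
for _ in range(40):
    a, b = b, a + b
ratio = b / a
phi = (1 + 5 ** 0.5) / 2     # (1 + sqrt(5)) / 2
```

After forty steps the ratio agrees with $\Phi$ to machine precision.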

The proof of this second property is usually given by induction or contradiction [2], but it can also be obtained using the oldest procedure designed to calculate the greatest common divisor, GCD, of two integers, known as the Euclidean Algorithm.
The Euclidean method constructs a decreasing sequence of integers that all share the same GCD:

$(a,b)=(r_0,r_1)=(r_2,r_3)=...=(r_{n-1},r_n)=r_n$

$\left\{\begin{array}{l} a=r_0; \\ b=r_1; \\ r_{i+1}=r_{i-1}-r_i*\bigg \lfloor\frac{r_{i-1}}{r_i}\bigg\rfloor;\\ \end{array}\right\}$

if $r_{i+1}=0$ then $(a,b)=r_i$

where $\lfloor x \rfloor$ is the Floor Function (see [3]).
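A direct transcription of the recurrence, using the identity $r_{i-1} - r_i\lfloor r_{i-1}/r_i\rfloor = r_{i-1} \bmod r_i$:

```python
def gcd(a, b):
    """Euclidean Algorithm: replace (r_{i-1}, r_i) by (r_i, r_{i+1})
    until the remainder vanishes."""
    while b != 0:
        a, b = b, a % b    # a % b == a - b * (a // b) for a, b > 0
    return a
```

For two consecutive Fibonacci numbers, e.g. `gcd(34, 21)`, the result is always 1.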

Proof:

$\displaystyle (f_{n},f_{n-1})=(r_0,r_1)$

$\displaystyle r_2=r_0-r_1*\bigg\lfloor\frac{r_0}{r_1}\bigg\rfloor$

$\displaystyle r_2=f_n-f_{n-1}*\bigg\lfloor\frac{f_n}{f_{n-1}}\bigg\rfloor=f_n-f_{n-1}*\bigg\lfloor\frac{f_{n-1}+f_{n-2}}{f_{n-1}} \bigg\rfloor$

$\displaystyle r_2=f_n-f_{n-1}*\bigg\lfloor 1+\frac{f_{n-2}}{f_{n-1}}\bigg\rfloor =f_n-f_{n-1}*\bigg(1+\bigg\lfloor\frac{f_{n-2}}{f_{n-1}}\bigg\rfloor\bigg)$

$\displaystyle f_{n-2} < f_{n-1} \implies \bigg\lfloor\frac{f_{n-2}}{f_{n-1}}\bigg\rfloor=0$

$\displaystyle r_2=f_n-f_{n-1}=f_{n-2}$

$\displaystyle(f_{n},f_{n-1})=(r_0,r_1)=(r_1,r_2)=(f_{n-1},f_{n-2})=...=(2,1)=1$

The Euclidean Algorithm reproduces the whole Fibonacci sequence in reverse, and proves that two consecutive terms are relatively prime.
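Running Euclid on a pair of consecutive Fibonacci numbers shows the reversed sequence explicitly (a small sketch, names my own):

```python
def euclid_trace(a, b):
    """Return every r_i produced by the Euclidean Algorithm plus the GCD."""
    trace = [a, b]
    while b != 0:
        a, b = b, a % b
        trace.append(b)
    return trace, a

# f_11 = 89 and f_10 = 55: the remainders walk back down the sequence
trace, g = euclid_trace(89, 55)
```

Here `trace` descends through 34, 21, 13, 8, 5, 3, 2, 1 before reaching 0, and the GCD `g` is 1.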

References:

[1]-William J. LeVeque – Fundamentals of Number Theory – Courier Dover Publications, 1996 – ISBN 0486689069
[2]-Consecutive Fibonacci Numbers Relatively Prime – The Math Forum @ Drexel
[3]-Weisstein, Eric W. – "Floor Function." MathWorld – A Wolfram Web Resource – http://mathworld.wolfram.com/FloorFunction.html