9 01 2010


The Bisection Method is a very well-known root-finding algorithm that appears at the very beginning of every book on Numerical Analysis.

The algorithm searches for a root in an interval, [a_{0},b_{0}], at whose endpoints the continuous function, f(x), takes opposite signs:

f(a_{0})\cdot f(b_{0})<0

The Bisection Method is based on Bolzano's Intermediate Value Theorem, and it can also provide an alternative proof of it (reference [1]).

Here the function values themselves do not matter: the algorithm uses no information from f(x) other than its sign changes.

There are excellent pages on the internet that treat this topic (references [1], [2] and [3], for example), and it would be pointless to add yet another, and surely worse-presented, summary of this numerical method.

I just want to bring out something hidden inside the computer program: a small piece of math inside the programming.


The core of the Bisection Method is the conditional IF… THEN… statement, where the algorithm chooses in which half of the interval the desired root lies.

This conditional evaluation can be converted into a decision function, let's say \delta_{i}, that gives 0 if the root lies in the left half of the interval and 1 otherwise.

Now the iterative sequence can be written using this function, \delta_i, where:

m_{i} is the midpoint of the search interval:

\displaystyle m_{i}=\frac{a_{i}+b_{i}}{2}

\delta_{i} is the decision function, and the \delta(i,j) in the second member is Kronecker's delta:



\delta_{i} = \delta\left(sgn(f(m_{i})),\,sgn(f(a_{i}))\right) = \left\{ \begin{array}{ll} 1 & \mbox{if } sgn(f(m_{i}))=sgn(f(a_{i})) \\\\ 0 & \mbox{if } sgn(f(m_{i}))\neq sgn(f(a_{i})) \end{array}\right.

The endpoints of the next search interval must be:

{[a_{i+1}, b_{i+1}]} = \left\{ \begin{array}{ll} {[a_{i},m_{i}]} & \mbox{if } (\delta_{i}=0) \\\\ {[m_{i},b_{i}]} & \mbox{if } (\delta_{i}=1) \end{array}\right.

Then they can be expressed with the aid of \delta_{i} as:

a_{i+1}=a_{i}+\delta_{i}\cdot (m_{i}-a_{i})

b_{i+1}=b_{i}+(1-\delta_{i})\cdot (m_{i}-b_{i})
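These update formulas translate directly into code. A minimal Python sketch (the function name `bisect` and the returned list of decisions are my own choices, not from the post):

```python
import math

def bisect(f, a, b, iterations=50):
    """Bisection driven by the decision function delta_i.

    delta_i = 1 when sgn(f(m_i)) == sgn(f(a_i)) (root in the right half),
    0 otherwise; the endpoint updates mirror the formulas
    a_{i+1} = a_i + delta_i*(m_i - a_i) and b_{i+1} = b_i + (1-delta_i)*(m_i - b_i).
    """
    deltas = []
    for _ in range(iterations):
        m = (a + b) / 2
        # decision function: compare the signs at the midpoint and at a_i
        delta = 1 if math.copysign(1, f(m)) == math.copysign(1, f(a)) else 0
        deltas.append(delta)
        a = a + delta * (m - a)
        b = b + (1 - delta) * (m - b)
    return (a + b) / 2, deltas
```

Note that the branch of the IF… THEN… statement has been replaced by arithmetic on \delta_{i}, exactly as in the two formulas above.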

If f(x) has only a single root in the interval [a_{0},b_{0}], then it is possible to use an alternate version of the decision function:

\delta_{i} = \delta\left(sgn(f(m_{i})),\,sgn(f(a_{0}))\right)

with the advantage that it is then not necessary to evaluate f(a_{i}) in the main loop of the algorithm, which improves performance.
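A sketch of this shortcut in Python (the name `bisect_single_eval` is mine; it assumes f(a_{0}) \neq 0, and it relies on the fact that bisection preserves the sign of f at the left endpoint):

```python
def bisect_single_eval(f, a, b, iterations=60):
    """Bisection that never re-evaluates f at the left endpoint.

    Bisection keeps sgn(f(a_i)) == sgn(f(a_0)), so the sign s0 is
    computed once, and the decision in the loop needs only f(m_i):
    one function evaluation per iteration.
    """
    s0 = f(a) > 0              # sign of f at the initial left endpoint
    for _ in range(iterations):
        m = (a + b) / 2
        if (f(m) > 0) == s0:   # delta_i = 1: root in the right half
            a = m
        else:                  # delta_i = 0: root in the left half
            b = m
    return (a + b) / 2
```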


We can define the parameter \theta_{i} as the ratio between the lengths of the intervals [a_{0},a_{i}] and [a_{0},b_{0}]:

\displaystyle \theta_{i}=\frac{a_{i}-a_{0}}{b_{0}-a_{0}}

Expressed in terms of \delta_{i}, \theta_{i} is:

\displaystyle\theta_{i}=\sum_{k=0}^{i-1} \frac{\delta_{k}}{2^{k+1}}

The sequence formed by the successive values of the decision function, \delta_{i}, coincides with the binary expansion (the binary digits) of \theta_{i}.
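This correspondence is easy to verify numerically: accumulating \delta_{k}/2^{k+1} while bisecting reproduces \theta_{i} computed directly from the endpoints. A small self-contained check in Python (the function name is mine):

```python
def theta_check(f, a0, b0, steps):
    """Run bisection, tracking theta_i both from the deltas and directly."""
    a, b = a0, b0
    theta = 0.0
    for k in range(steps):
        m = (a + b) / 2
        # decision function delta_k
        delta = 1 if (f(m) > 0) == (f(a) > 0) else 0
        theta += delta / 2 ** (k + 1)   # theta_i = sum delta_k / 2^(k+1)
        if delta:
            a = m
        else:
            b = m
    # the accumulated sum should equal (a_i - a_0)/(b_0 - a_0)
    return theta, (a - a0) / (b0 - a0)
```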

Then the midpoint and the endpoints of the interval, in terms of \theta_{i}, are:

a_{i}=a_{0}+(b_{0}-a_{0})\cdot \theta_{i}

\displaystyle m_{i}=a_{0}+(b_{0}-a_{0})\cdot \left(\theta_{i}+\frac{1}{2^{i+1}}\right)

\displaystyle b_{i}=a_{0}+(b_{0}-a_{0})\cdot \left(\theta_{i}+\frac{1}{2^{i}}\right)


The connection between \delta_{i} and \theta_{i} can be used to convert the Bisection Method into a method for finding the binary expansion of a root of any function that satisfies the conditions this method demands.

For example, if we take:

f(x)=x^2-x_{0}, where x_{0}>0

with a_{0}=0 and b_{0}=2^{k}, where 2^{k-1} < \sqrt{x_{0}} < 2^{k}, then:

\displaystyle \sqrt{x_{0}}=2^{k}\cdot\lim_{i \to +\infty}{\theta_{i}}

and the sequence of \delta_{i} gives the binary digits of \sqrt{x_{0}}, with the binary point falling after the k-th digit.
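As a sketch of this "binary bisection" (the function name is my own; it assumes 2^{k-1} < \sqrt{x_{0}} < 2^{k} holds strictly, i.e. x_{0} is not an exact power of 4), the following Python function emits the binary digits of \sqrt{x_{0}} one decision at a time:

```python
def sqrt_binary_digits(x0, n_digits):
    """Binary digits of sqrt(x0) obtained purely from bisection decisions.

    Uses f(x) = x^2 - x0 on [0, 2^k] with 2^(k-1) < sqrt(x0) < 2^k;
    the first k digits form the integer part, the rest are fractional.
    """
    k = 0
    while 4 ** k <= x0:          # find k with 2^(k-1) < sqrt(x0) < 2^k
        k += 1
    a, b = 0.0, float(2 ** k)
    digits = []
    for _ in range(n_digits):
        m = (a + b) / 2
        # single-root shortcut: f(a_i) keeps the sign of f(a_0) < 0,
        # so delta_i depends only on the sign of f(m_i)
        delta = 1 if m * m - x0 < 0 else 0
        digits.append(delta)
        if delta:
            a = m
        else:
            b = m
    return k, digits
```

For x_{0}=2 this yields k=1 and the digit stream 1, 0, 1, 1, 0, 1, 0, 1, …, i.e. \sqrt{2}=1.0110101\ldots in binary.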

[a]-010910-Binary Bisection.nb


[1] Mohamed A. Khamsi, Helmut Knaust – SOSMATH – The Bisection Method
[2] John H. Mathews – California State Univ. Fullerton – Module for The Bisection Method
[3] Holistic Numerical Methods Institute – Bisection Method: Nonlinear Equations