A few days ago I thought of an algorithm to calculate square roots, and in this article I would like to introduce it.
I should mention that this algorithm probably already exists.

First of all, my algorithm can only find roots within a specific range (for example \(0\) to \(1\)), so we first need to scale the number whose root we are looking for down into that range.
To do so we use a simple identity: \(\sqrt{p^2x}=p\sqrt{x}\).
All we have to do is pick a number \(p\) and divide \(x\) by \(p^2\) until its square root is in our range, remembering the number of divisions \(q\). When we have the result, we multiply it by \(p^q\).

For example, if the range is \(0\) to \(1\), \(x=289\) and \(p=2\), we divide \(289\) by \(p^2=4\) a total of \(q=5\) times to get \(0.2822265625\). Then we use the algorithm explained below to find the root, which is \(0.53125\), and return \(0.53125 \cdot 2^5=17\).
And indeed \(\sqrt{289}=17\).
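The reduction step on its own can be sketched like this (a minimal C# sketch; the name `ReduceToUnitRange` and the tuple return type are my own choices, not part of the implementation shown later):

```csharp
// Scale x into [0, 1] by repeatedly dividing by p^2, using the
// identity sqrt(p^2 * x) = p * sqrt(x). Assumes p >= 2.
// Returns the reduced value and the factor p^q that the root of the
// reduced value must be multiplied by afterwards.
static (double Reduced, long Multiplier) ReduceToUnitRange(double x, int p)
{
    long multiplier = 1;
    int p2 = p * p;
    while (x > 1)
    {
        x /= p2;         // the value shrinks by p^2...
        multiplier *= p; // ...so the final root grows by p
    }
    return (x, multiplier);
}
```

For \(x=289\) and \(p=2\) this returns \((0.2822265625, 32)\), matching the example above.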

The Algorithm

The algorithm is very simple and uses the same concept as binary search: it works by repeatedly narrowing the range that must contain the square root.
I use the algorithm in the range \(0\) to \(1\) but it can be used with any range.

We know that \(x\) is between \(0\) and \(1\), therefore its square root’s lower bound is \(a=0\) and upper bound is \(b=1\).
The next step is calculating the average of the bounds \(t=(a+b)/2\).
If \(t^2=x\) we return \(t\). If \(t^2<x\), none of the numbers between \(a\) and \(t\) can be the square root, so we set \(a=t\). Similarly, if \(t^2>x\) we set the upper bound \(b=t\).
We can repeat this step as many times as we want. Each iteration doubles the precision.
If we haven’t found the exact square root after the last iteration, we return \((a+b)/2\), as it is the closest we can get to the actual square root.

The Code

I will demonstrate the whole algorithm in C#. In the following implementation \(p\) and the number of iterations are parameters.

public double Sqrt(double x, int p, int iterations)
{
	if (x < 0)
		throw new ArgumentOutOfRangeException(nameof(x), "Cannot take the square root of a negative number");

	// Reduce x into [0, 1] using the identity sqrt(p^2 * x) = p * sqrt(x).
	// Assumes p >= 2.
	long multiplier = 1;
	int p2 = p * p;
	while (x > 1)
	{
		multiplier *= p;
		x /= p2;
	}

	if (x == 1 || x == 0)
		return multiplier * x;

	// Binary search for sqrt(x) in [0, 1].
	double a = 0;
	double b = 1;

	for (int i = 0; i < iterations; i++)
	{
		double t = (a + b) / 2;
		if (t * t == x)
			return multiplier * t;
		else if (t * t < x)
			a = t;
		else
			b = t;
	}

	return multiplier * (a + b) / 2;
}


The Complexity

Rather than calculating the complexity of \(k\) iterations, I will calculate the complexity of reaching a precision of \(1/n\).

The first part consists of dividing \(x\) by \(p^2\) and multiplying the multiplier by \(p\), which is \(O(1)\) for each iteration.
\(x\) is divided \(\log_{p^2} x\) times, hence this part’s complexity is \(O(\log_{p^2} x)\).
Multiplying the result by the multiplier decreases the precision, so we will take it into consideration later. Finding the value of the multiplier is simple: \(p^{\log_{p^2} x}=p^{(\log_p x)/(\log_p p^2)}=\sqrt{p^{\log_p x}}=\sqrt{x}\).

In the main part each iteration has a complexity of \(O(1)\) and it doubles the precision, as I explained before.
Let \(i\) be the minimal number of iterations needed to reach a precision of \(1/n\).
The interval starts with width \(1\), halves every iteration, and the result is scaled by the multiplier, which we showed equals \(\sqrt{x}\). That means \(1/n=\sqrt{x}/2^i \Rightarrow 2^i=n \sqrt{x} \Rightarrow i=\log_2 (n \sqrt{x})\).
Therefore the total complexity of the main part is \(O(\log_2 (n \sqrt{x}))\).

Summing these two, the total complexity is
\(O(\log_{p^2} x + \log_2 (n \sqrt{x}))
\\=O((\log_p x)/(\log_p p^2) + \log_2 n + \log_2 \sqrt{x})
\\=O(0.5 \log_p x + \log_2 n + 0.5 \log_2 x)
\\=O(\log_2 x + \log_p x + \log_2 n)\)

We know that \(p \ge 2 \Rightarrow \log_p x \le \log_2 x\), therefore the worst-case complexity is \(O(\log_2 x + \log_2 x + \log_2 n)=O(\log_2 x + \log_2 n)=O(\log_2 nx)\).
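As a quick sanity check of these bounds (my numbers, not part of the original analysis): for \(x=289\) and \(p=2\), the reduction runs \(q=5=\lceil\log_4 289\rceil\) times, and reaching a precision of \(1/n=2^{-20}\) takes \(i=\lceil\log_2(2^{20}\sqrt{289})\rceil=\lceil 20+\log_2 17\rceil=25\) iterations, a few dozen constant-time steps in total.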


I don’t know many square root algorithms, so I cannot compare my algorithm’s complexity against others’. That said, my algorithm is still quite efficient.
Moreover, this method seems pretty intuitive to me: it is easy to understand and easy to implement.
It is also very easy to implement an algorithm that calculates square roots with a constant maximum error.
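For instance, such a variant might look like this (a sketch with \(p=2\) fixed; the name `SqrtWithTolerance` and its signature are my own choices, not code from this post). It bisects until the guaranteed error bound drops below `maxError`:

```csharp
// Bisect until the answer is within maxError of the true square root.
// The interval [a, b] always contains sqrt of the reduced value, and its
// width halves each iteration, so the loop always terminates for a
// positive maxError.
static double SqrtWithTolerance(double x, double maxError)
{
    if (x < 0)
        throw new ArgumentOutOfRangeException(nameof(x));

    // Reduction step with p = 2: divide by p^2 = 4, root grows by 2.
    long multiplier = 1;
    while (x > 1)
    {
        x /= 4;
        multiplier *= 2;
    }

    double a = 0, b = 1;
    // The error of returning the midpoint is at most multiplier * (b - a) / 2.
    while (multiplier * (b - a) / 2 > maxError)
    {
        double t = (a + b) / 2;
        if (t * t < x)
            a = t;
        else
            b = t;
    }
    return multiplier * (a + b) / 2;
}
```

Each iteration halves the bound \(\text{multiplier} \cdot (b-a)/2\), so the number of iterations grows only logarithmically as the tolerance shrinks.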

I hope someone will find this article useful.

Update 24/05/2011:
I read about Newton’s method, a method for finding the roots of functions. It turns out to be far easier to implement and much more efficient than my method.
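For comparison, Newton’s method applied to \(f(t)=t^2-x\) gives the update rule \(t_{n+1}=(t_n+x/t_n)/2\). A sketch could look like this (this is the standard textbook iteration, not code from this post):

```csharp
// Newton's method on f(t) = t^2 - x:
//   t <- t - f(t)/f'(t) = (t + x/t) / 2.
// Converges quadratically: the number of correct digits roughly
// doubles each iteration.
static double NewtonSqrt(double x, int iterations = 30)
{
    if (x < 0)
        throw new ArgumentOutOfRangeException(nameof(x));
    if (x == 0)
        return 0;

    double t = x > 1 ? x : 1; // any positive starting guess works
    for (int i = 0; i < iterations; i++)
        t = (t + x / t) / 2;
    return t;
}
```

Because of the quadratic convergence, a few dozen iterations are more than enough for full double precision, compared with one bit of precision per iteration of the bisection approach.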


