Talk:Bisection method



History

Does anyone know when the bisection method was first used? The method seems quite obvious but that doesn't necessarily mean it came before other methods, like secant for example.

Number of iterations for a certain tolerance

I think a little more should be added to this page about the number of iterations of the bisection method needed to get the approximation within a certain tolerance. By modifying Theorem 2.1 from Numerical Analysis, 8th edition, a quick bound on the error can be found. Even though the actual error can be much smaller, it gives a good estimate of how long a function will take to converge by the bisection method. I would like to add info about this. Sc920505 (talk) 13:07, 30 September 2008 (UTC)

Hmm, my textbook gives the formula for the absolute error as (b-a)/2^n, whereas the one here is (b-a)/2^(n+1). I don't know which one is correct... can someone clarify for me?

  • It's (b-a)/2^(n+1). Think about it: let n = 0. If it were (b-a)/2^n, the error would be b-a, which would extend beyond the limits of a and b, a ___domain which we know MUST contain the target. Strange, though, that a textbook had it wrong....
  • Also, I would like to tell readers of this article that bisection can break down if f is non-continuous, not monotonic, or not a function. It is too much of an unnecessary complication for the article, but worth mentioning.
I don't think that f needs to be monotone (why do you think so?), but it indeed needs to be a continuous function. -- Jitse Niesen (talk) 12:11, 13 November 2005 (UTC)
Just put a note that the function must be differentiable along the section being searched for a root. The preceding unsigned comment was added by Mikefadock (talk • contribs).
Please explain why you think it needs to be differentiable. -- Jitse Niesen (talk) 02:20, 3 March 2006 (UTC)
The error is |b-a|/2^n. His text had it right. After 1 subdivision (n=1), the error is at most half the original interval, not 1/4. |b-a| doesn't extend beyond the ___domain we know must contain the target -- it's exactly equal to it. I also added the missing absolute value. 165.189.91.148 20:46, 20 April 2006 (UTC)
It depends on what you take to be the approximation to the root. If you use the midpoint of the interval, the error is |b-a|/2^(n+1); if you use either endpoint, the error is |b-a|/2^n. Which formula is used is not very important, in my opinion. -- Jitse Niesen (talk) 02:26, 21 April 2006 (UTC)
Agreed, and I've edited the section to reflect the "controversy". The problem arises because the method doesn't give a single estimate for the root's ___location, and the concepts of "absolute error" and "length of the interval" are easily confused. The actual formula indeed isn't important, save in Calculus 1 classes where it's fairly often an exam question. Majromax (talk) 22:38, 19 November 2007 (UTC)
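For concreteness, both error bounds discussed in this thread can be turned into an iteration-count estimate. The Python sketch below uses illustrative names of my own; it applies |b-a|/2^(n+1) when the midpoint of the final interval is taken as the root estimate, and |b-a|/2^n when an endpoint is used.

```python
import math

def bisection_iterations(a, b, tol, use_midpoint=True):
    """Smallest n guaranteeing the bisection error is at most tol.

    With the midpoint of the final interval as the estimate, the error
    bound is |b - a| / 2**(n + 1); with an endpoint as the estimate,
    it is |b - a| / 2**n.
    """
    n = math.ceil(math.log2(abs(b - a) / tol))
    return max(n - 1, 0) if use_midpoint else max(n, 0)

# On [1, 2] with tolerance 1e-6 the two conventions differ by one step:
print(bisection_iterations(1.0, 2.0, 1e-6, use_midpoint=False))  # 20
print(bisection_iterations(1.0, 2.0, 1e-6))                      # 19
```

This also shows why the disagreement above is harmless: the two formulas only shift the count by one iteration.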

Pseudocode completion

The pseudocode presented in this article needs to be made more accurate. As of right now, it does not account for the possibility of encountering 0 exactly at xL or xR. A third condition needs to be added.

No. The bisection method will happily converge to a root even if it occurs at an endpoint; it will just converge to the endpoint containing the root. The given pseudocode is proper in that it will consider an interval beginning or ending at 0 to have a sign change. Admittedly, this isn't exactly an efficient approach, but it certainly gives clear code without extraneous conditions. Majromax (talk) 22:38, 19 November 2007 (UTC)
Incorrect. The code checks if it is < 0, so if the left, right, or midpoint is exactly zero, an infinite loop would occur. Added a simple else - no performance hit compared to the normal function. It could still be made more efficient, though. —Preceding unsigned comment added by 207.161.134.144 (talk) 18:25, 4 February 2010 (UTC)
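To make the discussed fix concrete: a bisection routine with the explicit zero checks this thread asks for might look like the following Python sketch (the names are illustrative; this is not the article's code).

```python
def bisect(f, left, right, tol=1e-12, max_iter=200):
    fl, fr = f(left), f(right)
    # Explicit checks so a root landing exactly on an endpoint or a
    # midpoint terminates instead of looping forever.
    if fl == 0.0:
        return left
    if fr == 0.0:
        return right
    if fl * fr > 0:
        raise ValueError("f(left) and f(right) must differ in sign")
    for _ in range(max_iter):
        mid = left + (right - left) / 2
        fm = f(mid)
        if fm == 0.0 or (right - left) / 2 < tol:
            return mid
        if fl * fm < 0:
            right = mid
        else:
            left, fl = mid, fm
    return left + (right - left) / 2

print(bisect(lambda x: x, -1.0, 1.0))         # first midpoint is exactly 0
print(bisect(lambda x: x * x - 2, 0.0, 2.0))  # converges to sqrt(2)
```

The `fm == 0.0` branch is the "third condition" requested above; without it, a strict `< 0` sign test can indeed loop forever when a midpoint is an exact root.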

Visual Basic Code

State which version. Visual Basic .NET is no longer backward compatible with previous versions. —Preceding unsigned comment added by 192.197.54.27 (talk) 14:10, 2 October 2007 (UTC)

Although in general the two are not compatible, this code runs in both VB.net and the older VisualBasic. CRGreathouse (t | c) 21:32, 2 April 2008 (UTC)Reply
Anyway, Visual Basic isn't free; it would be wiser to write the pseudocode in Python or Java. --PhDP (talk) 01:30, 26 November 2008 (UTC)
As long as the section header states 'pseudo-code', I find Visual Basic, Python, and Java all inappropriate. Either change the header, or change the code into something that is more or less language-independent. Also keep in mind that source code is not particularly encouraged on Wikipedia (WP:NOT). --Berland (talk) 11:35, 26 November 2008 (UTC)

enumeration

If f has several roots in the interval [a, b], then the bisection method finds the odd-numbered roots with equal, non-zero probability and the even-numbered roots with zero probability (Corliss 1977).

With respect to which enumeration??? --Philtime (talk) 08:28, 11 May 2008 (UTC) PS: what if there is an infinite and/or uncountable number of roots? --Philtime (talk) 08:28, 11 May 2008 (UTC)

I think by "even-numbered" it means double roots etc. So looking at something like
f(x) = x^3 (x + 1)^2,
which has a double root at -1 and a triple root at 0, it is saying that 0 would almost always be found rather than -1. --Rumping (talk) 17:44, 30 August 2008 (UTC)
I looked through Corliss' article and added some details to the article. The roots are enumerated according to their order. I don't know what happens if there are infinitely many roots; it makes sense to assume they all have probability zero, but perhaps the axioms of probability theory do not work in this case. -- Jitse Niesen (talk) 15:51, 20 September 2008 (UTC)

Corliss' paper is rather poorly worded. He takes a probability space of functions with the property that the distribution of zeros is the same as the distribution of an odd number of independent random variables with uniform distribution. (This implies, for example, that multiple zeros don't happen.) Applying bisection to a random function in that probability space, the probability that the i-th smallest zero is found is 0 if i is even (but this is always true for functions with only simple zeros), and independent of i if i is odd. Corliss' result says nothing at all about the behaviour of bisection on particular functions. Also, I don't know any real-life class of functions which satisfies Corliss' assumptions, so the practical significance is uncertain. I hope my new version of the text is clear enough. McKay (talk) 02:16, 12 June 2009 (UTC)

Convergence

Hi, I have a question about "converges linearly". Since the absolute error of bisection is |b-a|/2^n, the rate of convergence is O(1/2^n), which means the error decreases exponentially. I think it would be more precise to use this information instead of the phrase "converges linearly"! —Preceding unsigned comment added by 132.235.37.166 (talk) 13:24, 3 October 2008 (UTC)

This is called linear convergence in numerical analysis because the number of correct digits in the answer grows at a linear rate. Equivalently, the running time is linear in log(eps). McKay (talk) 06:13, 1 April 2009 (UTC)
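McKay's point can be checked numerically: each halving of the interval adds a fixed log10(2) ≈ 0.301 correct decimal digits, so the digit count grows linearly in n. A quick Python illustration (my own sketch, not the article's code):

```python
import math

a, b = 1.0, 2.0
for n in (10, 20, 30):
    err = (b - a) / 2 ** n      # worst-case error after n bisections
    digits = -math.log10(err)   # roughly the number of correct digits
    print(n, round(digits, 2))  # digits grow by about 3.01 per 10 steps
```

So "exponentially decreasing error" and "linearly growing digit count" describe the same behaviour; numerical analysis names convergence rates after the latter.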

Computation of the midpoint

In the section 'Practical considerations', second point, the passage

Another method relies on the assumption that (right+left)/2 exactly equals left or right if left and right are adjacent non-equal floating-point values. This is true for most hardware, including IEEE arithmetic in the absence of overflow, but can be violated if intermediate expressions are calculated to greater precision than stored variables (depending on computer optimization).

is totally dubious to me. What does this other method actually do? Is it what's implemented in the code below, i.e. using left+(right-left)/2 instead of (left+right)/2? And if so, does it take care of the case when the assumption is true (it sounds like that) or when the assumption is violated (which makes more sense in my opinion)? Furthermore, when is the assumption violated - any example? Final question: if I know that IEEE arithmetic is used, is there any advantage to preferring left+(right-left)/2 over (left+right)/2? I can't see any, whether left and right are very close to each other or not... Ezander (talk) 08:48, 26 November 2010 (UTC)

Ahh, now I see: it was about the stopping criterion, not the computation of the midpoint. Then it makes sense. However, the simpler (hi+lo)/2 has still been changed to lo+(hi-lo)/2. Any reason for this? Ezander (talk) 08:53, 26 November 2010 (UTC)
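For what it's worth, with IEEE doubles the usual argument for lo+(hi-lo)/2 is avoiding overflow when lo and hi are both of huge magnitude; for ordinary intervals the two forms give the same result. The adjacent-floats property quoted above makes a natural stopping criterion, as in this Python sketch (illustrative names, assuming IEEE double arithmetic and a sign change on the interval):

```python
import math

def bisect_to_adjacent(f, lo, hi):
    """Bisect until the computed midpoint equals an endpoint, i.e.
    until lo and hi are adjacent floating-point values."""
    while True:
        mid = lo + (hi - lo) / 2    # avoids overflow for huge |lo|, |hi|
        if mid == lo or mid == hi:  # interval can no longer be split
            return mid
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid

root = bisect_to_adjacent(lambda x: x * x - 2, 1.0, 2.0)
# The result is one of the two doubles bracketing sqrt(2),
# so it is within one ulp of the correctly rounded value:
print(abs(root - math.sqrt(2)) <= math.ulp(root))  # True
```

The loop needs no tolerance parameter at all: when lo and hi are adjacent, lo+(hi-lo)/2 rounds to one of them and the iteration stops, which is exactly the hardware assumption the article's passage describes.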

Language of Code examples

I don't think it is very sensible that the two code examples are in different languages. One is in VB and the other (the source tag states that it's C, which it clearly isn't) is in some hodgepodge of languages. I've seen all that notation in different languages, but never together in one language: assignment (:=) from Pascal-like languages, comments (//) from C++, loop constructs (endwhile, endif) from PHP(?). Very strange...

I'm not a friend of VB. However, the code looks quite clear, nearly like pseudo-code... Maybe the second example should also be changed to VB? Ezander (talk) 09:03, 26 November 2010 (UTC)