{{Short description|Sum of inverse squares of natural numbers}}
{{Pi box}}
[[File:basel_problem_light_analogy.svg|thumb|upright=1.25|The Basel problem is analogous to the total [[apparent magnitude|apparent brightness]] of infinite identical [[point_source#Visible_electromagnetic_radiation_(light)|point light sources]] on the [[number line]] viewed from the origin (top figure), compared to a single light source at position 1 (bottom)]]
The '''Basel problem''' is a problem in [[mathematical analysis]] with relevance to [[number theory]], concerning an infinite sum of inverse squares. It was first posed by [[Pietro Mengoli]] in 1650 and solved by [[Leonhard Euler]] in 1734,<ref>{{citation |last= Ayoub |first= Raymond |title= Euler and the zeta function |journal= Amer. Math. Monthly |volume= 81 |year= 1974 |issue= 10 |pages= 1067–86 |url= https://www.maa.org/programs/maa-awards/writing-awards/euler-and-the-zeta-function |doi= 10.2307/2319041 |jstor= 2319041 |access-date= 2021-01-25 |archive-date= 2019-08-14 |archive-url= https://web.archive.org/web/20190814233022/https://www.maa.org/programs/maa-awards/writing-awards/euler-and-the-zeta-function |url-status= dead }}</ref> and read on 5 December 1735 in [[Russian Academy of Sciences#History|''The Saint Petersburg Academy of Sciences'']].<ref>[https://scholarlycommons.pacific.edu/euler-works/41/ E41 – De summis serierum reciprocarum]</ref> Since the problem had withstood the attacks of the leading [[mathematician]]s of the day, Euler's solution brought him immediate fame when he was twenty-eight. Euler generalised the problem considerably, and his ideas were taken up more than a century later by [[Bernhard Riemann]] in his seminal 1859 paper "[[On the Number of Primes Less Than a Given Magnitude]]", in which he defined his [[Riemann zeta function|zeta function]] and proved its basic properties. The problem is named after the city of [[Basel]], hometown of Euler as well as of the [[Bernoulli family]] who unsuccessfully attacked the problem.
 
The Basel problem asks for the precise [[summation]] of the [[Multiplicative inverse|reciprocals]] of the [[square number|squares]] of the [[natural number]]s, i.e. the precise sum of the [[Series (mathematics)|infinite series]]:
<math display="block">\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \cdots. </math>
 
The sum of the series is approximately equal to 1.644934.<ref>{{Cite OEIS|1=A013661|mode=cs2}}</ref> The Basel problem asks for the ''exact'' sum of this series (in [[closed-form expression|closed form]]), as well as a [[mathematical proof|proof]] that this sum is correct. Euler found the exact sum to be <math display="inline">\frac {\pi^2}{6}</math> and announced this discovery in 1735. His arguments were based on manipulations that were not justified at the time, although he was later proven correct. He produced an accepted proof in 1741.
 
The solution to this problem can be used to estimate the probability that two large [[random number]]s are [[coprime]]. Two random integers in the range from 1 to {{Mvar|n}}, in the limit as {{Mvar|n}} goes to infinity, are relatively prime with a probability that approaches <math display="inline">\frac {6}{\pi^2}</math>, the reciprocal of the solution to the Basel problem.<ref>{{citation|contribution=Chapter 9: Sneaky segments|pages=101–106|title=Circle in a Box|series=MSRI Mathematical Circles Library|first=Sam|last=Vandervelde|publisher=Mathematical Sciences Research Institute and American Mathematical Society|year=2009}}</ref>
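Numerically, <math display="inline">\frac{6}{\pi^2} \approx 0.6079</math>, so a little over 60% of such pairs are coprime.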
 
A generalization of this result is (see the section "A proof using Euler's formula and L'Hôpital's rule" below for a proof):

<math display=block>\sum_{n=-\infty}^\infty \frac{1}{n^2+a^2} = \frac{1}{a^2} + \frac{2}{1^2+a^2} + \frac{2}{2^2+a^2} + \frac{2}{3^2+a^2} + \cdots = \frac{\pi}{a}\coth(\pi a). </math>
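
Letting <math>a \to 0</math> in this identity recovers the Basel sum: the expansion <math display="inline">\coth z = \tfrac{1}{z} + \tfrac{z}{3} - \tfrac{z^3}{45} + \cdots</math> makes the right-hand side <math display="inline">\tfrac{1}{a^2} + \tfrac{\pi^2}{3} + O(a^2)</math>, while the left-hand side equals <math display="inline">\tfrac{1}{a^2} + 2\sum_{n=1}^\infty \tfrac{1}{n^2+a^2}</math>, so comparing the two as <math>a \to 0</math> gives
<math display=block>2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{3}, \qquad\text{that is,}\qquad \sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.</math>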
 
==Euler's approach==
Euler's original derivation of the value <math display="inline">\frac{\pi^2}{6}</math> essentially extended observations about finite [[polynomial]]s and assumed that these same properties hold true for infinite series.
 
Of course, Euler's original reasoning requires justification (100 years later, [[Karl Weierstrass]] proved that Euler's representation of the sine function as an infinite product is valid, by the [[Weierstrass factorization theorem]]), but even without justification, by simply obtaining the correct value, he was able to verify it numerically against partial sums of the series. The agreement he observed gave him sufficient confidence to announce his result to the mathematical community.
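For illustration, the sum of the first thousand terms is <math display="inline">\sum_{n=1}^{1000} \frac{1}{n^2} \approx 1.64393</math>, while <math display="inline">\frac{\pi^2}{6} \approx 1.64493</math>; the tail estimate <math display="inline">\sum_{n>N} \frac{1}{n^2} < \frac{1}{N}</math> shows that the partial sums approach the limit only slowly.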
 
To follow Euler's argument, recall the [[Taylor series]] expansion of the [[trigonometric function|sine function]]
<math display=block> \sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots </math>
Dividing through by {{Mvar|x}} gives
<math display=block> \frac{\sin x}{x} = 1 - \frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + \cdots .</math>
 
The [[Weierstrass factorization theorem]] shows that the right-hand side is the product of linear factors given by its roots, just as for finite polynomials. Euler assumed this as a [[heuristic]] for expanding an infinite degree [[polynomial]] in terms of its roots, but in fact it is not always true for general <math>P(x)</math>.<ref>A priori, since the left-hand side is a [[polynomial]] (of infinite degree), we can write it as a product of its roots as
<math display=block>\begin{align}
\sin(x) & = x (x^2-\pi^2)(x^2-4\pi^2)(x^2-9\pi^2) \cdots \\
& = Ax \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots.
\end{align}</math>
Then since we know from elementary [[calculus]] that <math>\lim_{x \rightarrow 0} \frac{\sin(x)}{x} = 1</math>, we conclude that the leading constant must satisfy <math>A = 1</math>.</ref> This factorization expands the equation into:
<math display=block>\begin{align}
\frac{\sin x}{x} &= \left(1 - \frac{x}{\pi}\right)\left(1 + \frac{x}{\pi}\right)\left(1 - \frac{x}{2\pi}\right)\left(1 + \frac{x}{2\pi}\right)\left(1 - \frac{x}{3\pi}\right)\left(1 + \frac{x}{3\pi}\right) \cdots \\
&= \left(1 - \frac{x^2}{\pi^2}\right)\left(1 - \frac{x^2}{4\pi^2}\right)\left(1 - \frac{x^2}{9\pi^2}\right) \cdots
\end{align}</math>
 
If we formally multiply out this product and collect all the {{math|''x''<sup>2</sup>}} terms (we are allowed to do so because of [[Newton's identities]]), we see by induction that the {{math|''x''<sup>2</sup>}} coefficient of {{math|{{sfrac|sin ''x''|''x''}}}} is <ref>In particular, letting <math>H_n^{(2)} := \sum_{k=1}^n k^{-2}</math> denote a [[generalized harmonic number|generalized second-order harmonic number]], we can easily prove by [[Mathematical induction|induction]] that <math>[x^2] \prod_{k=1}^{n} \left(1-\frac{x^2}{\pi^2}\right) = -\frac{H_n^{(2)}}{\pi^2} \rightarrow -\frac{\zeta(2)}{\pi^2}</math> as <math>n \rightarrow \infty</math>.</ref>
<math display=block> -\left(\frac{1}{\pi^2} + \frac{1}{4\pi^2} + \frac{1}{9\pi^2} + \cdots \right) = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.</math>
 
But from the original infinite series expansion of {{math|{{sfrac|sin ''x''|''x''}}}}, the coefficient of {{math|''x''<sup>2</sup>}} is {{math|−{{sfrac|1|3!}} {{=}} −{{sfrac|1|6}}}}. These two coefficients must be equal; thus,
<math display=block>-\frac{1}{6} = -\frac{1}{\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}.</math>
 
Multiplying both sides of this equation by −{{pi}}<sup>2</sup> gives the sum of the reciprocals of the positive square integers.<ref name="HAVIL-GAMMA">{{citation|last1=Havil|first1=J.|title=Gamma: Exploring Euler's Constant|url=https://archive.org/details/gammaexploringeu00havi_882|url-access=limited|date=2003|publisher=Princeton University Press|___location=Princeton, New Jersey|isbn=0-691-09983-9|pages=[https://archive.org/details/gammaexploringeu00havi_882/page/n60 37]–42 (Chapter 4)}}</ref>
<math display=block>\sum_{n=1}^{\infty}\frac{1}{n^2} = \frac{\pi^2}{6}.</math>
 
===Generalizations of Euler's method using elementary symmetric polynomials===
 
Using formulas obtained from [[elementary symmetric polynomial]]s,<ref>Cf., the formulas for generalized Stirling numbers proved in: {{citation|last1=Schmidt|first1=M. D.|title=Combinatorial Identities for Generalized Stirling Numbers Expanding f-Factorial Functions and the f-Harmonic Numbers|journal=J. Integer Seq.|date=2018|volume=21|issue=Article 18.2.7|url=https://cs.uwaterloo.ca/journals/JIS/VOL21/Schmidt/schmidt18.html}}</ref> this same approach can be used to enumerate formulas for the even-indexed [[zeta constants|even zeta constants]] which have the following known formula expanded by the [[Bernoulli numbers]]:
<math display=block>\zeta(2n) = \frac{(-1)^{n-1} (2\pi)^{2n}}{2 \cdot (2n)!} B_{2n}. </math>
 
For example, let the partial product for <math>\sin(x)</math> expanded as above be defined by <math>\frac{S_n(x)}{x} = \prod\limits_{k=1}^n \left(1 - \frac{x^2}{k^2 \cdot \pi^2}\right)</math>. Then using known [[Newton's identities#Expressing elementary symmetric polynomials in terms of power sums|formulas for elementary symmetric polynomial]]s (a.k.a., Newton's formulas expanded in terms of [[power sum]] identities), we can see (for example) that
<math display=block>
\begin{align}
\left[x^4\right] \frac{S_n(x)}{x} & = \frac{1}{2\pi^4}\left(\left(H_n^{(2)}\right)^2 - H_n^{(4)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad \frac{1}{2\pi^4}\left(\zeta(2)^2-\zeta(4)\right) \\[4pt]
& \qquad \implies \zeta(4) = \frac{\pi^4}{90} = -2\pi^4 \cdot [x^4] \frac{\sin(x)}{x} +\frac{\pi^4}{36} \\[8pt]
\left[x^6\right] \frac{S_n(x)}{x} & = -\frac{1}{6\pi^6}\left(\left(H_n^{(2)}\right)^3 - 3H_n^{(2)} H_n^{(4)} + 2H_n^{(6)}\right) \qquad \xrightarrow{n \rightarrow \infty} \qquad -\frac{1}{6\pi^6}\left(\zeta(2)^3-3\zeta(2)\zeta(4) + 2\zeta(6)\right) \\[4pt]
& \qquad \implies \zeta(6) = \frac{\pi^6}{945} = -3\pi^6 \cdot [x^6] \frac{\sin(x)}{x} + \frac{3}{2}\cdot\frac{\pi^2}{6}\cdot\frac{\pi^4}{90} - \frac{\pi^6}{432},
\end{align}
</math>
 
and so on for subsequent coefficients of <math>[x^{2k}] \frac{S_n(x)}{x}</math>. There are [[Newton's identities#Expressing power sums in terms of elementary symmetric polynomials|other forms of Newton's identities]] expressing the (finite) power sums <math>H_n^{(2k)}</math> in terms of the [[elementary symmetric polynomial]]s, <math>e_i \equiv e_i\left(-\frac{\pi^2}{1^2}, -\frac{\pi^2}{2^2}, -\frac{\pi^2}{3^2}, -\frac{\pi^2}{4^2}, \ldots\right), </math> but we can go a more direct route to expressing non-recursive formulas for <math>\zeta(2k)</math> using the method of [[elementary symmetric polynomial]]s. Namely, we have a recurrence relation between the [[elementary symmetric polynomials]] and the [[Power sum symmetric polynomial|power sum polynomials]] given as on [[Newton's identities#Comparing coefficients in series|this page]] by
<math display=block>(-1)^{k}k e_k(x_1,\ldots,x_n) = \sum_{j=1}^k (-1)^{k-j-1} p_j(x_1,\ldots,x_n)e_{k-j}(x_1,\ldots,x_n),</math>
 
which in our situation equates to the limiting recurrence relation (or [[generating function]] convolution, or [[Cauchy product|product]]) expanded as
<math display=block> \frac{\pi^{2k}}{2}\cdot \frac{(2k) \cdot (-1)^k}{(2k+1)!} = -[x^{2k}] \frac{\sin(\pi x)}{\pi x} \times \sum_{i \geq 1} \zeta(2i) x^i. </math>
 
Then by differentiation and rearrangement of the terms in the previous equation, we obtain that
<math display=block>\zeta(2k) = [x^{2k}]\frac{1}{2}\left(1-\pi x\cot(\pi x)\right). </math>
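
For example, expanding <math display="inline">\pi x\cot(\pi x) = 1 - \frac{(\pi x)^2}{3} - \frac{(\pi x)^4}{45} - \frac{2(\pi x)^6}{945} - \cdots</math> gives
<math display=block>\frac{1}{2}\left(1-\pi x\cot(\pi x)\right) = \frac{\pi^2}{6}x^2 + \frac{\pi^4}{90}x^4 + \frac{\pi^6}{945}x^6 + \cdots,</math>
and reading off the coefficients reproduces <math>\zeta(2)</math>, <math>\zeta(4)</math> and <math>\zeta(6)</math> as computed above.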
 
===Consequences of Euler's proof===
By the above results, we can conclude that <math>\zeta(2k)</math> is ''always'' a [[rational]] multiple of <math>\pi^{2k}</math>. In particular, since <math>\pi</math> and integer powers of it are [[Transcendental number|transcendental]], we can conclude at this point that <math>\zeta(2k)</math> is [[irrational]], and more precisely, [[Transcendental number|transcendental]] for all <math>k \geq 1</math>. By contrast, the properties of the odd-indexed [[zeta constants]], including [[Apéry's constant]] <math>\zeta(3)</math>, are almost completely unknown.
 
==The Riemann zeta function ==
The [[Riemann zeta function]] {{math|''ζ''(''s'')}} is one of the most significant functions in mathematics because of its relationship to the distribution of the [[prime number]]s. The zeta function is defined for any [[complex number]] {{math|''s''}} with real part greater than 1 by the following formula:
<math display=block>\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}.</math>
 
{{anchor|zeta_2}}Taking {{math|''s'' {{=}} 2}}, we see that {{math|''ζ''(2)}} is equal to the sum of the reciprocals of the squares of all positive integers:
<math display=block>\zeta(2) = \sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{1^2} + \frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots = \frac{\pi^2}{6} \approx 1.644934.</math>
 
Convergence can be proven by the [[integral test for convergence#Applications|integral test]], or by the following inequality:
<math display=block>\begin{align}
\sum_{n=1}^N \frac{1}{n^2} & < 1 + \sum_{n=2}^N \frac{1}{n(n-1)} \\
& = 1 + \sum_{n=2}^N \left( \frac{1}{n-1} - \frac{1}{n} \right) \\
& = 1 + 1 - \frac{1}{N} < 2.
\end{align}</math>
 
This gives us the [[upper bound]] 2, and because the infinite sum contains no negative terms, it must converge to a value strictly between 0 and 2. It can be shown that {{math|''ζ''(''s'')}} has a simple expression in terms of the [[Bernoulli number]]s whenever {{math|''s''}} is a positive even integer. With {{math|''s'' {{=}} 2''n''}}:<ref>{{citation|first1=Tsuneo|last1=Arakawa|first2=Tomoyoshi|last2=Ibukiyama|first3=Masanobu|last3=Kaneko|title=Bernoulli Numbers and Zeta Functions|publisher=Springer|date=2014|page=61|isbn=978-4-431-54919-2}}</ref>
<math display=block>\zeta(2n) = \frac{(2\pi)^{2n}(-1)^{n+1}B_{2n}}{2\cdot(2n)!}.</math>
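
For example, with <math>B_2 = \tfrac{1}{6}</math> and <math>B_4 = -\tfrac{1}{30}</math> this formula gives
<math display=block>\zeta(2) = \frac{(2\pi)^{2}}{2\cdot 2!}\cdot\frac{1}{6} = \frac{\pi^2}{6} \qquad\text{and}\qquad \zeta(4) = -\frac{(2\pi)^{4}}{2\cdot 4!}\cdot\left(-\frac{1}{30}\right) = \frac{\pi^4}{90}.</math>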
 
==A proof using Euler's formula and L'Hôpital's rule==
The normalized [[sinc function]] <math>\text{sinc}(x)=\frac{\sin (\pi x)}{\pi x}</math> has a [[Weierstrass factorization theorem|Weierstrass factorization]] representation as an infinite product:
<math display=block>\frac{\sin (\pi x)}{\pi x} = \prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right).</math>
 
The infinite product is [[Analytic function|analytic]], so taking the [[natural logarithm]] of both sides and differentiating yields
<math display=block>\frac{\pi \cos (\pi x)}{\sin (\pi x)}-\frac{1}{x}=-\sum_{n=1}^\infty \frac{2x}{n^2-x^2}</math>
 
(by [[Uniform convergence#To differentiability|uniform convergence]], the interchange of the derivative and infinite series is permissible). After dividing the equation by <math>2x</math> and regrouping one gets
<math display=block>\frac{1}{2x^2}-\frac{\pi \cot (\pi x)}{2x}=\sum_{n=1}^\infty \frac{1}{n^2-x^2}.</math>
 
We make a change of variables (<math>x=-it</math>):
<math display=block>-\frac{1}{2t^2}+\frac{\pi \cot (-\pi it)}{2it}=\sum_{n=1}^\infty \frac{1}{n^2+t^2}.</math>
 
[[Euler's formula]] can be used to deduce that
<math display=block>\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2it}\frac{i\left(e^{2\pi t}+1\right)}{e^{2\pi t}-1}=\frac{\pi}{2t}+\frac{\pi}{t\left(e^{2\pi t} - 1\right)}.</math>
 
or using the corresponding [[hyperbolic function]]:
<math display=block>\frac{\pi \cot (-\pi i t)}{2it}=\frac{\pi}{2t}{i\cot (\pi i t)}=\frac{\pi}{2t}\coth(\pi t).</math>
 
Then
<math display=block>\sum_{n=1}^\infty \frac{1}{n^2+t^2}=\frac{\pi \left(te^{2\pi t}+t\right)-e^{2\pi t}+1}{2\left(t^2 e^{2\pi t}-t^2\right)}=-\frac{1}{2t^2} + \frac{\pi}{2t} \coth(\pi t).</math>
 
Now we take the [[limit (mathematics)|limit]] as <math>t</math> approaches zero. By [[Tannery's theorem]] applied to <math display="inline">\lim_{t\to\infty}\sum_{n=1}^\infty 1/(n^2+1/t^2)</math>, we can [[Interchange of limiting operations|interchange the limit and infinite series]] so that <math display="inline">\lim_{t\to 0}\sum_{n=1}^\infty 1/(n^2+t^2)=\sum_{n=1}^\infty 1/n^2</math>, and by [[L'Hôpital's rule]] thrice:
<math display=block>\begin{align}\sum_{n=1}^\infty \frac{1}{n^2}&=\lim_{t\to 0}\frac{\pi}{4}\frac{2\pi te^{2\pi t}-e^{2\pi t}+1}{\pi t^2 e^{2\pi t} + te^{2\pi t}-t}\\[6pt]
&=\lim_{t\to 0}\frac{\pi^3 te^{2\pi t}}{2\pi \left(\pi t^2 e^{2\pi t}+2te^{2\pi t} \right)+e^{2\pi t}-1}\\[6pt]
&=\lim_{t\to 0}\frac{\pi^2 (2\pi t+1)}{4\pi^2 t^2+12\pi t+6}\\[6pt]
&=\frac{\pi^2}{6}.\end{align}</math>
 
==A proof using Fourier series==
 
Use [[Parseval's identity]] (applied to the function {{math|1=''f''(''x'') = ''x''}}) to obtain
<math display=block>\sum_{n=-\infty}^\infty |c_n|^2 = \frac{1}{2\pi}\int_{-\pi}^\pi x^2 \, dx,</math>
 
where
<math display=block>\begin{align}
c_n &= \frac{1}{2\pi}\int_{-\pi}^\pi x e^{-inx} \, dx \\[4pt]
&= \frac{n\pi \cos(n\pi)-\sin(n\pi)}{\pi n^2} i \\[4pt]
&= \frac{\cos(n\pi)}{n} i \\
&= \frac{(-1)^n}{n} i
\end{align}</math>
 
for {{math|''n'' ≠ 0}}, and {{math|''c''<sub>0</sub> {{=}} 0}}. Thus,
<math display=block>|c_n|^2 = \begin{cases}
\dfrac{1}{n^2}, & \text{for } n \neq 0, \\
0, & \text{for } n = 0,
\end{cases}
</math>
 
and
<math display=block>\sum_{n=-\infty}^\infty |c_n|^2 = 2\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{2\pi} \int_{-\pi}^\pi x^2 \, dx.</math>
 
Therefore,
<math display=block>\sum_{n=1}^\infty \frac{1}{n^2} = \frac{1}{4\pi}\int_{-\pi}^\pi x^2 \, dx = \frac{\pi^2}{6}</math>
 
as required.
 
==Another rigorous proof using Parseval's identity==
 
Given a [[complete orthonormal basis]] in the space <math>L^2_{\operatorname{per}}(0, 1)</math> of [[Lp space#Special cases|L2]] [[periodic function]]s over <math>(0, 1)</math> (i.e., the subspace of [[square-integrable function]]s which are also [[periodic function|periodic]]), denoted by <math>\{e_i\}_{i=-\infty}^{\infty}</math>, [[Parseval's identity]] tells us that
<math display=block>\|x\|^2 = \sum_{i=-\infty}^{\infty} |\langle e_i, x\rangle|^2, </math>
 
where <math>\|x\| := \sqrt{\langle x,x\rangle}</math> is defined in terms of the [[inner product]] on this [[Hilbert space]] given by
<math display=block>\langle f, g\rangle = \int_0^1 f(x) \overline{g(x)} \, dx,\ f,g \in L^2_{\operatorname{per}}(0, 1).</math>
 
We can consider the [[orthonormal basis]] on this space defined by <math>e_k \equiv e_k(\vartheta) := \exp(2\pi\imath k \vartheta)</math> such that <math>\langle e_k,e_j\rangle = \int_0^1 e^{2\pi\imath (k-j) \vartheta} \, d\vartheta = \delta_{k,j}</math>. Then if we take <math>f(\vartheta) := \vartheta</math>, we can compute both that
<math display=block>
\begin{align}
\|f\|^2 & = \int_0^1 \vartheta^2 \, d\vartheta = \frac{1}{3} \\
\langle f, e_k\rangle & = \int_0^1 \vartheta e^{-2\pi\imath k\vartheta} \, d\vartheta = \Biggl\{\begin{array}{ll} \frac{1}{2}, & k = 0 \\ -\frac{1}{2\pi\imath k} & k \neq 0, \end{array}
\end{align}
</math>
 
by [[calculus|elementary calculus]] and [[integration by parts]], respectively. Finally, by [[Parseval's identity]] stated in the form above, we obtain that
<math display=block>
\begin{align}
\|f\|^2 = \frac{1}{3} & = \sum_{\stackrel{k=-\infty}{k \neq 0}}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} = 2 \sum_{k=1}^{\infty} \frac{1}{(2\pi k)^2}+ \frac{1}{4} \\
& \implies \frac{\pi^2}{6} = \frac{2 \pi^2}{3} - \frac{\pi^2}{2} = \zeta(2).
\end{align}
</math>
 
===Generalizations and recurrence relations===
 
Note that by considering higher-order powers of <math>f_j(\vartheta) := \vartheta^j \in L^2_{\operatorname{per}}(0, 1)</math> we can use [[integration by parts]] to extend this method to enumerating formulas for <math>\zeta(2j)</math> when <math>j > 1</math>. In particular, suppose we let
<math display=block>I_{j,k} := \int_0^1 \vartheta^j e^{-2\pi\imath k\vartheta} \, d\vartheta, </math>
 
so that [[integration by parts]] yields the [[recurrence relation]] that
<math display=block>
\begin{align}
I_{j,k} & = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\frac{1}{2\pi\imath \cdot k} + \frac{j}{2\pi\imath \cdot k} I_{j-1,k}, & k \neq 0\end{cases} \\[6pt]
& = \begin{cases} \frac{1}{j+1}, & k=0; \\[4pt] -\sum\limits_{m=1}^{j} \frac{j!}{(j+1-m)!} \cdot \frac{1}{(2\pi\imath \cdot k)^{m}}, & k \neq 0. \end{cases}
\end{align}
</math>
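
For instance, for <math>j = 2</math> and <math>k \neq 0</math>, using <math>I_{0,k} = 0</math> and hence <math>I_{1,k} = -\tfrac{1}{2\pi\imath k}</math>, the recurrence gives
<math display=block>I_{2,k} = -\frac{1}{2\pi\imath k} + \frac{2}{2\pi\imath k}\left(-\frac{1}{2\pi\imath k}\right) = -\frac{1}{2\pi\imath k} - \frac{2}{(2\pi\imath k)^2},</math>
in agreement with the closed form above for <math>j = 2</math>.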
 
Then applying [[Parseval's identity]] as we did for the first case above, together with the linearity of the [[inner product]], yields
<math display=block>
\begin{align}
\|f_j\|^2 = \frac{1}{2j+1} & = 2 \sum_{k \geq 1} I_{j,k} \bar{I}_{j,k} + \frac{1}{(j+1)^2} \\[6pt]
& = 2 \sum_{m=1}^j \sum_{r=1}^j \frac{j!^2}{(j+1-m)! (j+1-r)!} \frac{(-1)^r}{\imath^{m+r}} \frac{\zeta(m+r)}{(2\pi)^{m+r}} + \frac{1}{(j+1)^2}.
\end{align}
</math>
 
==Proof using differentiation under the integral sign==
 
It is possible to prove the result using elementary calculus by applying the [[Leibniz integral rule|differentiation under the integral sign]] technique to an integral due to Freitas:<ref>{{cite arXiv|last1=Freitas|first1=F. L.|title=Solution of the Basel problem using the Feynman integral trick|year=2023|class=math.CA|eprint=2312.04608|mode=cs2}}</ref>
<math display=block>I(\alpha) = \int_0^\infty \ln\left(1+\alpha e^{-x}+e^{-2x}\right)dx.</math>
 
While the [[primitive function]] of the integrand cannot be expressed in terms of elementary functions, by differentiating with respect to <math>\alpha</math> we arrive at
 
<math display=block>\frac{dI}{d\alpha} = \int_0^\infty \frac{e^{-x}}{1+\alpha e^{-x}+e^{-2x}}dx,</math>
which can be integrated by [[Integration by substitution|substituting]] <math>u=e^{-x}</math> and decomposing into [[partial fractions]]. In the range <math>-2\leq\alpha\leq 2</math> the definite integral reduces to
 
<math display=block>\frac{dI}{d\alpha} = \frac{2}{\sqrt{4-\alpha^2}}\left[\arctan\left(\frac{\alpha+2}{\sqrt{4-\alpha^2}}\right)-\arctan\left(\frac{\alpha}{\sqrt{4-\alpha^2}}\right)\right].</math>
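
For instance, at <math>\alpha = 0</math> this expression equals <math display="inline">\arctan(1)-\arctan(0) = \tfrac{\pi}{4}</math>, which agrees with the direct evaluation <math display="inline">\int_0^\infty \frac{e^{-x}}{1+e^{-2x}}dx = \int_0^1 \frac{du}{1+u^2} = \tfrac{\pi}{4}</math> obtained from the substitution <math>u=e^{-x}</math>.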
 
The expression can be simplified using the [[Inverse trigonometric functions#Arctangent addition formula|arctangent addition formula]] and integrated with respect to <math>\alpha</math> by means of [[trigonometric substitution]], resulting in
 
<math display=block>I(\alpha) = -\frac{1}{2}\arccos\left(\frac{\alpha}{2}\right)^2 + c.</math>
 
The [[integration constant]] <math>c</math> can be determined by noticing that two distinct values of <math>I(\alpha)</math> are related by
 
<math display=block>I(2) = 4I(0),</math>
because when calculating <math>I(2)</math> we can [[Factorization|factor]] <math>1+2e^{-x}+e^{-2x} = (1+e^{-x})^2</math> and express it in terms of <math>I(0)</math> using the [[List of logarithmic identities#Logarithm of a power|logarithm of a power identity]] and the [[Integration by substitution|substitution]] <math>u=x/2</math>. This makes it possible to determine <math>c = \frac{\pi^2}{6}</math>, and it follows that
 
<math display=block>I(-2) = 2\int_0^\infty \ln(1-e^{-x})dx = -\frac{\pi^2}{3}.</math>
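
Explicitly, <math display="inline">I(2) = -\tfrac{1}{2}\arccos(1)^2 + c = c</math> and <math display="inline">I(0) = -\tfrac{1}{2}\arccos(0)^2 + c = c - \tfrac{\pi^2}{8}</math>, so <math>I(2)=4I(0)</math> forces <math display="inline">c = \tfrac{\pi^2}{6}</math>, and then <math display="inline">I(-2) = -\tfrac{1}{2}\arccos(-1)^2 + \tfrac{\pi^2}{6} = -\tfrac{\pi^2}{2} + \tfrac{\pi^2}{6} = -\tfrac{\pi^2}{3}</math>, as stated.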
 
This final integral can be evaluated by expanding the natural logarithm into its [[Mercator series|Taylor series]]:
 
<math display=block>\int_0^\infty \ln(1-e^{-x})dx = - \sum_{n=1}^\infty \int_0^\infty \frac{e^{-nx}}{n}dx = -\sum_{n=1}^\infty\frac{1}{n^2}.</math>
 
The last two identities imply
 
<math display=block>\sum_{n=1}^\infty\frac{1}{n^2} = \frac{\pi^2}{6}.</math>
 
==Cauchy's proof==
While most proofs use results from advanced [[mathematics]], such as [[Fourier analysis]], [[complex analysis]], and [[multivariable calculus]], the following does not even require single-variable [[calculus]] (until a single [[limit of a function|limit]] is taken at the end).
 
For a proof using the [[residue theorem]], see [[Residue theorem#Evaluating zeta functions|here]].
 
===History of this proof===
The proof goes back to [[Augustin Louis Cauchy]] (Cours d'Analyse, 1821, Note VIII). In 1954, this proof appeared in the book of [[Akiva Yaglom|Akiva]] and [[Isaak Yaglom]] "Nonelementary Problems in an Elementary Exposition". Later, in 1982, it appeared in the journal ''Eureka'',<ref>{{citation |last1=Ransford |first1=T J |title=An Elementary Proof of <math>\sum_{1}^{\infty}\frac{1}{n^2}=\frac{\pi^2}{6}</math> |journal=Eureka |date=Summer 1982 |volume=42 |issue=1 |pages=3–4|url=https://www.archim.org.uk/eureka/archive/Eureka-42.pdf | archive-url = https://web.archive.org/web/20200610100509/https://www.archim.org.uk/eureka/archive/Eureka-42.pdf | archive-date = June 10, 2020}}</ref> attributed to John Scholes, but Scholes claims he learned the proof from [[Peter Swinnerton-Dyer]], and in any case he maintains the proof was "common knowledge at [[University of Cambridge|Cambridge]] in the late 1960s".<ref>{{citation
| last1 = Aigner | first1 = Martin | author1-link = Martin Aigner
| last2 = Ziegler | first2 = Günter M. | author2-link = Günter M. Ziegler
| edition = 2nd
| isbn = 9783662043158
| page = 32
| publisher = Springer
| title = Proofs from THE BOOK
| url = https://books.google.com/books?id=QETtCAAAQBAJ&pg=PA32
| year = 2001}}; this anecdote is missing from later editions of this book, which replace it with earlier history of the same proof.</ref>
 
===The proof===
[[File:limit circle FbN.jpeg|thumb|The inequality<br>
<math>\tfrac{1}{2}r^2\tan\theta > \tfrac{1}{2}r^2\theta > \tfrac{1}{2}r^2\sin\theta</math><br>
is shown pictorially for any <math>\theta \in (0, \pi/2)</math>. The three terms are the areas of the triangle OAC, the circular sector OAB, and the triangle OAB. Taking reciprocals and squaring gives<br>
<math>\cot^2\theta<\tfrac{1}{\theta^2}<\csc^2\theta</math>.]]
The main idea behind the proof is to bound the partial (finite) sums
<math display=block>\sum_{k=1}^m \frac{1}{k^2} = \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}</math>
 
between two expressions, each of which will tend to {{sfrac|{{pi}}<sup>2</sup>|6}} as {{math|''m''}} approaches infinity. The two expressions are derived from identities involving the [[cotangent]] and [[cosecant]] functions. These identities are in turn derived from [[de Moivre's formula]], and we now turn to establishing these identities.
 
Let {{math|''x''}} be a real number with {{math|0 < ''x'' < {{sfrac|{{pi}}|2}}}}, and let {{math|''n''}} be a positive odd integer. Then from de Moivre's formula and the definition of the cotangent function, we have
<math display=block>\begin{align}
\frac{\cos (nx) + i \sin (nx)}{\sin^n x} &= \frac{(\cos x + i\sin x)^n}{\sin^n x} \\[4pt]
&= \left(\frac{\cos x + i \sin x}{\sin x}\right)^n \\
&= (\cot x + i)^n.
\end{align}</math>
 
From the [[binomial theorem]], we have
<math display=block>\begin{align}
(\cot x + i)^n
= & {n \choose 0} \cot^n x + {n \choose 1} (\cot^{n - 1} x)i + \cdots + {n \choose {n - 1}} (\cot x)i^{n - 1} + {n \choose n} i^n \\[6pt]
= & \Bigg( {n \choose 0} \cot^n x - {n \choose 2} \cot^{n - 2} x \pm \cdots \Bigg) \; + \; i\Bigg( {n \choose 1} \cot^{n-1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).
\end{align}</math>
 
Combining the two equations and equating imaginary parts gives the identity
<math display=block>\frac{\sin (nx)}{\sin^n x} = \Bigg( {n \choose 1} \cot^{n - 1} x - {n \choose 3} \cot^{n - 3} x \pm \cdots \Bigg).</math>
 
We take this identity, fix a positive integer {{math|''m''}}, set {{math|''n'' {{=}} 2''m'' + 1}}, and consider {{math|''x<sub>r</sub>'' {{=}} {{sfrac|''r''{{pi}}|2''m'' + 1}}}} for {{math|''r'' {{=}} 1, 2, ..., ''m''}}. Then {{math|''nx<sub>r</sub>''}} is a multiple of {{pi}} and therefore {{math|sin(''nx<sub>r</sub>'') {{=}} 0}}. So,
<math display=block>0 = {{2m + 1} \choose 1} \cot^{2m} x_r - {{2m + 1} \choose 3} \cot^{2m - 2} x_r \pm \cdots + (-1)^m{{2m + 1} \choose {2m + 1}}</math>
 
for every {{math|''r'' {{=}} 1, 2, ..., ''m''}}. The values {{math|''x<sub>r</sub>'' {{=}} ''x''<sub>1</sub>, ''x''<sub>2</sub>, ..., ''x<sub>m</sub>''}} are distinct numbers in the interval {{math|0 < {{math|''x<sub>r</sub>''}} < {{sfrac|{{pi}}|2}}}}. Since the function {{math|cot<sup>2</sup> ''x''}} is [[Injective function|one-to-one]] on this interval, the numbers {{math|''t<sub>r</sub>'' {{=}} cot<sup>2</sup> ''x<sub>r</sub>''}} are distinct for {{math|''r'' {{=}} 1, 2, ..., ''m''}}. By the above equation, these {{math|''m''}} numbers are the roots of the {{math|''m''}}th degree polynomial
<math display=block>p(t) = {{2m + 1} \choose 1}t^m - {{2m + 1} \choose 3}t^{m - 1} \pm \cdots + (-1)^m{{2m+1} \choose {2m + 1}}.</math>
 
By [[Vieta's formulas]] we can calculate the sum of the roots directly by examining the first two coefficients of the polynomial, and this comparison shows that
<math display=block>\cot ^2 x_1 + \cot ^2 x_2 + \cdots + \cot ^2 x_m = \frac{\binom{2m + 1}3} {\binom{2m + 1}1} = \frac{2m(2m - 1)}6.</math>
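
For example, when <math>m = 1</math> (so <math>n = 3</math>) the polynomial is <math>p(t) = 3t - 1</math>, whose single root is <math display="inline">t_1 = \cot^2\tfrac{\pi}{3} = \tfrac{1}{3}</math>, in agreement with <math display="inline">\tfrac{2m(2m-1)}{6} = \tfrac{1}{3}</math>.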
 
Substituting the [[list of trigonometric identities|identity]] {{math|csc<sup>2</sup> ''x'' {{=}} cot<sup>2</sup> ''x'' + 1}}, we have
<math display=block>\csc ^2 x_1 + \csc ^2 x_2 + \cdots + \csc ^2 x_m = \frac{2m(2m - 1)}6 + m = \frac{2m(2m + 2)}6.</math>
 
Now consider the inequality {{math|cot<sup>2</sup> ''x'' < {{sfrac|1|''x''<sup>2</sup>}} < csc<sup>2</sup> ''x''}} (illustrated geometrically above). If we add up all these inequalities for each of the numbers {{math|''x<sub>r</sub>'' {{=}} {{sfrac|''r''{{pi}}|2''m'' + 1}}}}, and if we use the two identities above, we get
<math display=block>\frac{2m(2m - 1)}6 < \left(\frac{2m + 1}{\pi} \right)^2 + \left(\frac{2m + 1}{2\pi} \right)^2 + \cdots + \left(\frac{2m + 1}{m \pi} \right)^2 < \frac{2m(2m + 2)}6.</math>
 
Multiplying through by {{math|<big><big>(</big></big>{{sfrac|{{pi}}|2''m'' + 1}}<big><big>)</big></big>{{su|p=2}}}}, this becomes
<math display=block>\frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m - 1}{2m + 1}\right) < \frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2} < \frac{\pi ^2}{6}\left(\frac{2m}{2m + 1}\right)\left(\frac{2m + 2}{2m + 1}\right).</math>
 
As {{math|''m''}} approaches infinity, the left and right hand expressions each approach {{sfrac|{{pi}}<sup>2</sup>|6}}, so by the [[squeeze theorem]],
<math display=block>\zeta(2) = \sum_{k=1}^\infty \frac{1}{k^2} =
\lim_{m \to \infty}\left(\frac{1}{1^2} + \frac{1}{2^2} + \cdots + \frac{1}{m^2}\right) = \frac{\pi ^2}{6}</math>
 
and this completes the proof.
 
==Proof assuming Weil's conjecture on Tamagawa numbers==
A proof is also possible assuming [[Weil's conjecture on Tamagawa numbers]].<ref>{{citation|title=Algebraic groups and number theory|translator=Rachel Rowen|publisher=Academic Press|author1=[[Vladimir Platonov]]|author2=Andrei Rapinchuk|year=1994}}</ref> The conjecture asserts for the case of the [[algebraic group]] SL<sub>2</sub>('''R''') that the [[Tamagawa number]] of the group is one. That is, the quotient of the special linear group over the rational [[Adele ring|adeles]] by the special linear group of the rationals (a [[compact set]], because <math>SL_2(\mathbb Q)</math> is a lattice in the adeles) has Tamagawa measure 1:
<math display="block">\tau(SL_2(\mathbb Q)\setminus SL_2(A_{\mathbb Q}))=1.</math>
 
To determine a Tamagawa measure, note that the group <math>SL_2</math> consists of matrices
<math display="block">\begin{bmatrix}x&y\\z&t\end{bmatrix}</math>
with <math>xt-yz=1</math>. An invariant [[volume form]] on the group is
<math display="block">\omega = \frac1x dx\wedge dy\wedge dz.</math>
 
The measure of the quotient is the product of the measures of <math>SL_2(\mathbb Z)\setminus SL_2(\mathbb R)</math> corresponding to the infinite place, and the measures of <math>SL_2(\mathbb Z_p)</math> in each finite place, where <math>\mathbb Z_p</math> is the [[p-adic integers]].
 
For the local factors,
<math display="block">\omega(SL_2(\mathbb Z_p)) = |SL_2(F_p)|\omega(SL_2(\mathbb Z_p,p))</math>
where <math>F_p</math> is the field with <math>p</math> elements, and <math>SL_2(\mathbb Z_p,p)</math> is the [[congruence subgroup]] modulo <math>p</math>. Since each of the coordinates <math>x,y,z</math> map the latter group onto <math>p\mathbb Z_p</math> and <math>\left|\frac1x\right|_p=1</math>, the measure of <math>SL_2(\mathbb Z_p,p)</math> is <math>\mu_p(p\mathbb Z_p)^3=p^{-3}</math>, where <math>\mu_p</math> is the normalized [[Haar measure]] on <math>\mathbb Z_p</math>. Also, a standard computation shows that <math>|SL_2(F_p)|=p(p^2-1)</math>. Putting these together gives <math>\omega(SL_2(\mathbb Z_p))=(1-1/p^2)</math>.
 
At the infinite place, an integral computation over the fundamental ___domain of <math>SL_2(\mathbb Z)</math> shows that <math>\omega(SL_2(\mathbb Z)\setminus SL_2(\mathbb R))=\pi^2/6</math>, and therefore the Weil conjecture finally gives
<math display="block">1 = \frac{\pi^2}6\prod_p \left(1-\frac1{p^2}\right).</math>
On the right-hand side, we recognize the [[Euler product]] for <math>1/\zeta(2)</math>, and so this gives the solution to the Basel problem.
 
This approach shows the connection between (hyperbolic) geometry and arithmetic, and can be inverted to give a proof of the Weil conjecture for the special case of <math>SL_2</math>, contingent on an independent proof that <math>\zeta(2)=\pi^2/6</math>.
 
==Geometric proof==
 
 
The Basel problem can be proved with [[Euclidean geometry]], using the insight that ''the real line can be seen as a circle of infinite radius''. An intuitive, if not completely rigorous, sketch is given here.
 
* Choose an integer <math>N</math>, and take <math>N</math> equally spaced points on a circle with ''circumference'' equal to <math>2N</math>. The radius of the circle is <math>N/\pi</math> and the length of each [[Circular arc|arc]] between two points is <math>2</math>. Call the points <math>P_{1..N}</math>.
* Take another generic point <math>Q</math> on the circle, which will lie at a fraction <math>0 < \alpha < 1</math> of the arc between two consecutive points (say <math>P_1</math> and <math>P_2</math> without loss of generality).
* Draw all the [[Chord (geometry)|chords]] joining <math>Q</math> with each of the <math>P_{1..N}</math> points. Now (this is the key to the proof), compute the ''sum of the inverse squares'' of the lengths of all these chords, call it <math>sisc</math>.
* The proof relies on the notable fact that (for a fixed <math>\alpha</math>), ''the <math>sisc</math> does not depend on <math>N</math>.'' Note that intuitively, as <math>N</math> increases, the number of chords increases, but their length increases too (as the circle gets bigger), so their inverse square decreases.
* In particular, take the case where <math>\alpha = 1/2</math>, meaning that <math>Q</math> is the midpoint of the arc between two consecutive <math>P</math>'s. The <math>sisc</math> can then be found trivially from the case <math>N=1</math>, where there is only one <math>P</math>, and one <math>Q</math> on the opposite side of the circle. Then the chord is the diameter of the circle, of length <math>2/\pi</math>. The <math>sisc</math> is then <math>\pi^2/4</math>.
* When <math>N</math> goes to infinity, the circle approaches the real line. If you set the origin at <math>Q</math>, the points <math>P_{1..N}</math> are positioned at the ''odd'' integer positions (positive and negative), since the arcs have length 1 from <math>Q</math> to <math>P_1</math>, and 2 onward. You hence get this variation of the Basel Problem:
 
<math display="block">
\sum_{z=-\infty}^{\infty} \frac{1}{(2z-1)^2} = \frac{\pi^2}{4}
</math>
 
* From here, you can recover the original formulation with a bit of algebra, as:

<math display="block">
\sum_{n=1}^\infty \frac{1}{n^2} = \sum_{n=1}^\infty \frac{1}{(2n-1)^2} + \sum_{n=1}^\infty \frac{1}{(2n)^2} = \frac{1}{2}\sum_{z=-\infty}^{+\infty} \frac{1}{(2z-1)^2} + \frac{1}{4}\sum_{n=1}^\infty \frac{1}{n^2},
</math>

that is,

<math display="block">
\frac{3}{4}\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{8}
</math>

or

<math display="block">
\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.
</math>

The independence of the <math>sisc</math> from <math>N</math> can be proved easily with Euclidean geometry for the more restrictive case where <math>N</math> is a power of 2, i.e. <math>N = 2^n</math>, which still allows the limiting argument to be applied. The proof proceeds by [[Mathematical induction|induction]] on <math>n</math>, and uses the [[Inverse Pythagorean theorem|Inverse Pythagorean Theorem]], which states that

<math display="block">
\frac{1}{a^2} + \frac{1}{b^2} = \frac{1}{h^2},
</math>

where <math>a</math> and <math>b</math> are the legs and <math>h</math> is the altitude to the hypotenuse of a right triangle.

* In the base case of <math>n=0</math>, there is only 1 chord. In the case of <math>\alpha = 1/2</math>, it corresponds to the diameter and the <math>sisc</math> is <math>\pi^2/4</math> as stated above.
* Now, assume that you have <math>2^n</math> points on a circle with radius <math>2^n/\pi</math> and center <math>O</math>, and <math>2^{n+1}</math> points on a circle with radius <math>2^{n+1}/\pi</math> and center <math>R</math>. The induction step consists in showing that these 2 circles have the same <math>sisc</math> for a given <math>\alpha</math>.
 
* Start by drawing the circles so that they share point <math>Q</math>. Note that <math>R</math> lies on the smaller circle. Then, note that <math>2^{n+1}</math> is always even, and a simple geometric argument shows that you can pick ''pairs'' of opposite points <math>P_1</math> and <math>P_2</math> on the larger circle by joining each pair with a diameter. Furthermore, for each pair, one of the points will be in the "lower" half of the circle (closer to <math>Q</math>) and the other in the "upper" half.
[[File:Induction step 1728668638527.jpg|thumb|The sum of inverse squares of distances of P1 and P2 from Q equals the inverse square distance from P to Q.]]
* The diameter of the bigger circle <math>P_1P_2</math> cuts the smaller circle at <math>R</math> and at another point <math>P</math>. You can then make the following considerations:
** <math>P_1 \widehat{Q} P_2</math> is a right angle, since <math>P_1P_2</math> is a diameter.
** <math>Q \widehat{P} R</math> is a right angle, since <math>QR</math> is a diameter.
** <math>Q \widehat{R} P_2 = Q \widehat{R} P</math> is half of <math>Q \widehat{O} P</math> by the [[Inscribed angle|Inscribed Angle Theorem]].
** Hence, the arc <math>QP</math> is equal to the arc <math>QP_2</math>, again because the radius is half.
** The chord <math>QP</math> is the height of the right triangle <math>QP_1P_2</math>, hence for the Inverse Pythagorean Theorem:
<math display="block">
\frac{1}{\overline{QP}^2} = \frac{1}{\overline{QP_1}^2} + \frac{1}{\overline{QP_2}^2}
</math>
 
 
* Hence for half of the points on the bigger circle (the ones in the lower half) there is a corresponding point on the smaller circle with the same arc distance from <math>Q</math> (since the circumference of the smaller circle is half that of the bigger circle, the last two points closer to <math>R</math> must have arc distance 2 as well). Vice versa, for each of the <math>2^n</math> points on the smaller circle, we can build a pair of points on the bigger circle, and all of these points are equidistant and have the same arc distance from <math>Q</math>.
* Furthermore, the total <math>sisc</math> for the bigger circle is the same as the <math>sisc</math> for the smaller circle, since each pair of points on the bigger circle has the same inverse square sum as the corresponding point on the smaller circle.<ref>{{cite web |url= https://www.math.chalmers.se/~wastlund/Cosmic.pdf |title= Summing Inverse Squares by Euclidean Geometry |author= Johan Wästlund |date= December 8, 2010 |access-date= 2024-10-11 |website= Chalmers University of Technology |publisher= Department of Mathematics, Chalmers University}}</ref>
 
==Other identities==
 
See the special cases of the identities for the [[Riemann zeta function#Representations|Riemann zeta function]] when <math>s = 2.</math> Other notably special identities and representations of this constant appear in the sections below.
 
===Series representations===
The following are series representations of the constant:<ref name="MWZETA2">{{mathworld|title=Riemann Zeta Function \zeta(2)|id=RiemannZetaFunctionZeta2|mode=cs2}}</ref>
 
<math display=block>\begin{align}
\zeta(2) &= 3 \sum_{k=1}^\infty \frac{1}{k^2 \binom{2k}{k}} \\[6pt]
&= \sum_{i=1}^\infty \sum_{j=1}^\infty \frac{(i-1)! (j-1)!}{(i+j)!}.
\end{align}</math>
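
The double series reduces to the Basel sum because, for each fixed <math>i</math>, the inner sum telescopes: writing <math display="inline">\frac{(i-1)!(j-1)!}{(i+j)!} = \frac{(i-1)!}{i}\left(\frac{(j-1)!}{(i+j-1)!} - \frac{j!}{(i+j)!}\right)</math> gives <math display="inline">\sum_{j=1}^\infty \frac{(i-1)!(j-1)!}{(i+j)!} = \frac{(i-1)!}{i \cdot i!} = \frac{1}{i^2}</math>.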
 
===Integral representations===
 
The following are integral representations of <math>\zeta(2)\text{:}</math><ref>{{cite arXiv|last1=Connon|first1=D. F.|title=Some series and integrals involving the Riemann zeta function, binomial coefficients and the harmonic numbers (Volume I)|year=2007|class=math.HO|eprint=0710.4022|mode=cs2}}</ref><ref>{{mathworld|title=Double Integral|id=DoubleIntegral|mode=cs2}}</ref><ref>{{mathworld|title=Hadjicostas's Formula|id=HadjicostassFormula|mode=cs2}}</ref>
<math display=block>\begin{align}
\zeta(2) & = -\int_0^1 \frac{\log x}{1-x} \, dx \\[6pt]
& = \int_0^{\infty} \frac{x}{e^x-1} \, dx \\[6pt]
& = \int_0^1 \frac{(\log x)^2}{(1+x)^2} \, dx \\[6pt]
& = 2 + 2\int_1^{\infty} \frac{\lfloor x \rfloor - x}{x^3} \, dx \\[6pt]
& = \exp\left(2\int_2^{\infty} \frac{\pi(x)}{x(x^2-1)} \, dx\right) \\[6pt]
& = \int_0^1 \int_0^1 \frac{dx \, dy}{1-xy} \\[6pt]
& = \frac{4}{3} \int_0^1 \int_0^1 \frac{dx \, dy}{1-(xy)^2} \\[6pt]
& = \int_0^1 \int_0^1 \frac{1-x}{1-xy} \, dx \, dy + \frac{2}{3}.
\end{align}</math>
 
===Continued fractions===
 
In van der Poorten's classic article chronicling [[Apéry's constant|Apéry's proof of the irrationality of <math>\zeta(3)</math>]],<ref>{{Citation
|first=Alfred
|last=van der Poorten
|author-link=Alfred van der Poorten
|title=A proof that Euler missed ... Apéry's proof of the irrationality of {{math|''ζ''(3)}}
|journal=[[The Mathematical Intelligencer]]
|volume=1
|issue=4
|year=1979
|pages=195–203
|doi=10.1007/BF03028234
|s2cid=121589323
|url=http://www.maths.mq.edu.au/~alf/45.pdf
|url-status=dead
|archive-url=https://web.archive.org/web/20110706114957/http://www.maths.mq.edu.au/~alf/45.pdf
|archive-date=2011-07-06
}}</ref> the author notes as "a red herring" the similarity of a [[simple continued fraction]] for Apéry's constant, and the following one for the Basel constant:
 
<math display=block>\frac{\zeta(2)}{5} = \cfrac{1}{\widetilde{v}_1 - \cfrac{1^4}{\widetilde{v}_2-\cfrac{2^4}{\widetilde{v}_3-\cfrac{3^4}{\widetilde{v}_4-\ddots}}}}, </math>
where <math>\widetilde{v}_n = 11n^2-11n+3 \mapsto \{3,25,69,135,\ldots\}</math>. Another continued fraction of a similar form is:<ref name="Berndt">{{citation |last1=Berndt |first1=Bruce C. |title=Ramanujan's Notebooks: Part II |date=1989 |publisher=Springer-Verlag |isbn=978-0-387-96794-3 |page=150}}</ref>
 
<math display=block>\frac{\zeta(2)}{2} = \cfrac{1}{v_1 - \cfrac{1^4}{v_2-\cfrac{2^4}{v_3-\cfrac{3^4}{v_4-\ddots}}}}, </math>
where <math>v_n = 2n-1 \mapsto \{1,3,5,7,9,\ldots\}</math>.
 
==See also==
*[[List of sums of reciprocals]]
*[[Riemann zeta function]]
*[[Apéry's constant]]
 
==References==
* {{Citation | title=Number Theory: An Approach Through History | first=André | last=Weil | author-link=André Weil | publisher=Springer-Verlag | isbn=0-8176-3141-0 | year=1983}}.
* {{Citation | title=Euler: The Master of Us All | first=William | last=Dunham | author-link=William Dunham (mathematician) | publisher=[[Mathematical Association of America]] | year=1999 | isbn=0-88385-328-0 | url-access=registration | url=https://archive.org/details/eulermasterofusa0000dunh }}.
* {{Citation | title=Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics | first=John | last=Derbyshire | author-link=John Derbyshire | publisher=Joseph Henry Press | isbn=0-309-08549-7 | year=2003 | url-access=registration | url=https://archive.org/details/primeobsessionbe00derb_0 }}.
* {{Citation | title=Riemann's Zeta Function | first=Harold M. | last=Edwards | author-link=Harold Edwards (mathematician) | publisher=Dover | isbn=0-486-41740-9 | year=2001}}.
* [[Leonhard Euler|Euler, Leonhard]] (1741). ''Opera Omnia''. pp.&nbsp;177–186.
 
==Notes==
{{Reflist}}
 
==External links==
* [https://plus.maths.org/content/infinite-series-surprises An infinite series of surprises] by C. J. Sangwin
* [https://fylux.github.io/2017/03/30/Pi/ From ''ζ''(2) to Π], a step-by-step proof
* {{citation |url= http://eulerarchive.maa.org/~euler/docs/translations/E352.pdf |title= Remarques sur un beau rapport entre les series des puissances tant directes que reciproques }}, English translation with notes of Euler's paper by Lucas Willis and Thomas J. Osler
* {{citation |url= http://eulerarchive.maa.org/hedi/HEDI-2003-12.pdf |title= How Euler did it |author= Ed Sandifer}}
* {{citation |url= http://www.personal.psu.edu/jxs23/p25.pdf |title= Beyond Mere Convergence |author= James A. Sellers |date= February 5, 2002 |access-date= 2004-02-27}}
* [http://empslocal.ex.ac.uk/people/staff/rjchapma/etc/zeta2.pdf ''Evaluating'' {{math|''ζ''(2)}}], fourteen proofs compiled by Robin Chapman
* [https://web.archive.org/web/20181130113058/https://giphy.com/gifs/math-visualization-algorithm-xThuW9Pyh8jXvfbrUc Visualization of Euler's factorization of the sine function]
* {{citation |url= http://www.math.chalmers.se/~wastlund/Cosmic.pdf |title= Summing inverse squares by Euclidean geometry |author= Johan Wästlund |date= December 8, 2010}}
** {{YouTube|d-o3eB9sfls|Why is pi here? And why is it squared? A geometric answer to the Basel problem}} (animated proof based on the above)
 
{{DEFAULTSORT:Basel Problem}}
[[Category:Articles containing proofs]]
[[Category:Mathematical problems]]