Inverse function theorem

{{Use dmy dates|date=December 2023}}
{{Calculus}}
In [[real analysis]], a branch of [[mathematics]], the '''inverse function theorem''' is a [[theorem]] that asserts that, if a [[real function]] ''f'' has a [[continuously differentiable function|continuous derivative]] near a point where its derivative is nonzero, then, near this point, ''f'' has an [[inverse function]]. The inverse function is also [[differentiable function|differentiable]], and the ''[[inverse function rule]]'' expresses its derivative as the [[multiplicative inverse]] of the derivative of ''f''.
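
For example, <math>f(x) = x^3</math> has the continuous derivative <math>f'(x) = 3x^2</math>, which is nonzero at <math>x = 1</math>; near this point <math>f</math> therefore has a differentiable inverse, namely <math>f^{-1}(y) = y^{1/3}</math>, and
<math display="block">(f^{-1})'(1) = \tfrac{1}{3}\cdot 1^{-2/3} = \tfrac{1}{3} = \frac{1}{f'(1)}.</math>
At <math>x = 0</math>, where the derivative vanishes, the theorem gives no conclusion: <math>x \mapsto x^3</math> is still invertible there, but its inverse <math>y \mapsto y^{1/3}</math> fails to be differentiable at <math>0</math>.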
 
The theorem applies verbatim to [[complex-valued function]]s of a [[complex number|complex variable]]. It generalizes to functions from ''n''-[[tuples]] (of real or complex numbers) to ''n''-tuples, and to functions between [[vector space]]s of the same finite dimension, by replacing "derivative" with "[[Jacobian matrix]]" and "nonzero derivative" with "nonzero [[Jacobian determinant]]".

If the function of the theorem belongs to a higher [[differentiability class]], the same is true for the inverse function. There are also versions of the inverse function theorem for [[holomorphic function]]s, for differentiable maps between [[manifold]]s, for differentiable functions between [[Banach space]]s, and so forth.
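
For example, the polar-coordinate map <math>F(r, \theta) = (r\cos\theta,\, r\sin\theta)</math> has Jacobian determinant
<math display="block">\det\begin{pmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{pmatrix} = r,</math>
which is nonzero whenever <math>r \neq 0</math>, so <math>F</math> is invertible near every such point; it is not globally injective, however, since <math>F(r, \theta) = F(r, \theta + 2\pi)</math>, which illustrates that the conclusion of the theorem is only local.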
 
The theorem was first established by [[Émile Picard|Picard]] and [[Édouard Goursat|Goursat]] using an iterative scheme: the basic idea is to prove a [[fixed point theorem]] using the [[contraction mapping theorem]].
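
One standard way to set up such a scheme (the details vary between presentations) is to solve <math>f(x) = y</math>, for <math>y</math> close to <math>f(a)</math>, by the Newton-type iteration
<math display="block">x_{n+1} = x_n + f'(a)^{-1}\bigl(y - f(x_n)\bigr);</math>
on a sufficiently small ball around <math>a</math> the map <math>x \mapsto x + f'(a)^{-1}(y - f(x))</math> is a contraction, because its derivative <math>I - f'(a)^{-1}f'(x)</math> has small norm by continuity of <math>f'</math>, and its unique fixed point is <math>f^{-1}(y)</math>.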
 
=== Proof for single-variable functions ===
We want to prove the following: ''Let <math>D \subseteq \R</math> be an open set with <math>x_0 \in D</math>, let <math>f: D \to \R</math> be a continuously differentiable function defined on <math>D</math>, and suppose that <math>f'(x_0) \ne 0</math>. Then there exists an open interval <math>I</math> with <math>x_0 \in I</math> such that <math>f</math> maps <math>I</math> bijectively onto the open interval <math>J = f(I)</math>, and such that the inverse function <math>f^{-1} : J \to I</math> is continuously differentiable, and for any <math>y \in J</math>, if <math>x \in I</math> is such that <math>f(x) = y</math>, then <math>(f^{-1})'(y) = \dfrac{1}{f'(x)}</math>.''
 
We may without loss of generality assume that <math>f'(x_0) > 0</math>. Given that <math>D</math> is an open set and <math>f'</math> is continuous at <math>x_0</math>, there exists <math>r > 0</math> such that <math>(x_0 - r, x_0 + r) \subseteq D</math> and<math display="block">|f'(x) - f'(x_0)| < \dfrac{f'(x_0)}{2} \qquad \text{for all } |x - x_0| < r.</math>
 
In particular,<math display="block">f'(x) > \dfrac{f'(x_0)}{2} >0 \qquad \text{for all } |x - x_0| < r.</math>
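
For instance, with <math>f(x) = \sin x</math> and <math>x_0 = 0</math> one has <math>f'(0) = 1</math>, and the radius <math>r = \pi/3</math> works, since for <math>|x| < \pi/3</math>
<math display="block">|f'(x) - f'(0)| = 1 - \cos x < \tfrac{1}{2} = \frac{f'(0)}{2}, \qquad \text{so } f'(x) = \cos x > \tfrac{1}{2}.</math>
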
To check that <math>g=f^{-1}</math> is C<sup>1</sup> in the multivariable setting, write <math>g(y+k) = x+h</math> so that
<math>f(x+h)=f(x)+k</math>. By the inequalities above, <math>\|h-k\| <\|h\|/2</math>, so the triangle inequality gives <math>\|h\|/2<\|k\| < 2\|h\|</math>.
On the other hand, if <math>A=f^\prime(x)</math>, then <math>\|A-I\|<1/2</math>. Expanding <math>A^{-1} = \textstyle\sum_{n \ge 0} (I-A)^n</math> as a [[geometric series]] in <math>B=I-A</math> gives <math>\|A^{-1}\| \le \tfrac{1}{1-\|B\|} < 2</math>. But then
 
:<math> {\|g(y+k) -g(y) - f^\prime(g(y))^{-1}k \| \over \|k\|} = {\|A^{-1}[f(x+h) - f(x) - f^\prime(x)h] \| \over \|k\|} \le 4{\|f(x+h) - f(x) - f^\prime(x)h\| \over \|h\|}</math>
tends to <math>0</math> as <math>k \to 0</math> (and hence <math>h \to 0</math>), by the differentiability of <math>f</math> at <math>x</math>. Hence <math>g</math> is differentiable with <math>g^\prime(y) = f^\prime(g(y))^{-1}</math>, and <math>g^\prime</math> is continuous because <math>f^\prime</math>, <math>g</math>, and matrix inversion are continuous.
 
=== Over a real closed field ===
The inverse function theorem also holds over a [[real closed field]] ''k'' (or an [[o-minimal structure]]).<ref>Chapter 7, Theorem 2.11 in {{cite book |doi=10.1017/CBO9780511525919 |title=Tame Topology and O-minimal Structures |series=London Mathematical Society Lecture Note Series, no. 248 |year=1998 |last1=van den Dries |first1=Lou |authorlink=Lou van den Dries |isbn=9780521598385 |publisher=Cambridge University Press |___location=Cambridge, New York, and Oakleigh, Victoria}}</ref> Precisely, the theorem holds for a semialgebraic (or definable) map between open subsets of <math>k^n</math> that is continuously differentiable.
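
For instance, the polynomial map <math>f(x, y) = (x + y^2,\, y + x^2)</math> on <math>k^2</math> is semialgebraic and continuously differentiable, and its Jacobian determinant
<math display="block">\det f'(x, y) = \det\begin{pmatrix} 1 & 2y \\ 2x & 1 \end{pmatrix} = 1 - 4xy</math>
is nonzero at the origin, so the theorem yields a semialgebraic (definable) local inverse of <math>f</math> near <math>(0, 0)</math>.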
 
The usual proof of the inverse function theorem uses Banach's fixed point theorem, which relies on the Cauchy completeness of the real numbers. That part of the argument can be replaced by an appeal to the [[extreme value theorem]], which does not need completeness. Explicitly, in {{section link||A_proof_using_the_contraction_mapping_principle}}, Cauchy completeness is used only to establish the inclusion <math>B(0, r/2) \subset f(B(0, r))</math>. Here, we shall directly show <math>B(0, r/4) \subset f(B(0, r))</math> instead (which is enough). Given a point <math>y</math> in <math>B(0, r/4)</math>, consider the function <math>P(x) = |f(x) - y|^2</math> defined on a neighborhood of <math>\overline{B}(0, r)</math>. If <math>P'(x) = 0</math>, then <math>0 = P'(x) = 2[f_1(x) - y_1, \ldots, f_n(x) - y_n]\,f'(x)</math> and so <math>f(x) = y</math>, since <math>f'(x)</math> is invertible. Now, by the extreme value theorem, <math>P</math> attains a minimum at some point <math>x_0</math> of the closed ball <math>\overline{B}(0, r)</math>, and this point can be shown to lie in the open ball <math>B(0, r)</math> using the estimate <math>2^{-1}|x| \le |f(x)|</math>. Since <math>x_0</math> is then an interior minimum, <math>P'(x_0) = 0</math>, so <math>f(x_0) = y</math>, which proves the claimed inclusion. <math>\square</math>
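
To see that the minimum point <math>x_0</math> is indeed interior, note that (with the normalization <math>f(0) = 0</math> used in that argument) one has <math>P(0) = |y|^2 < (r/4)^2</math>, while for <math>|x| = r</math>,
<math display="block">P(x) = |f(x) - y|^2 \ge \bigl(|f(x)| - |y|\bigr)^2 > \left(\frac{r}{2} - \frac{r}{4}\right)^2 = \left(\frac{r}{4}\right)^2,</math>
so <math>P</math> cannot attain its minimum over <math>\overline{B}(0, r)</math> on the boundary sphere.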
 
Alternatively, one can deduce the theorem from the one over the real numbers by [[Tarski's principle]].{{citation needed|date=December 2024}}
 
==See also==