{{Short description|Computing the fixed point of a function}}
'''Fixed-point computation''' refers to the process of computing an exact or approximate [[Fixed point (mathematics)|fixed point]] of a given function.<ref name=":1">{{cite book |doi=10.1007/978-3-642-50327-6 |title=The Computation of Fixed Points and Applications |series=Lecture Notes in Economics and Mathematical Systems |year=1976 |volume=124 |isbn=978-3-540-07685-8 }}</ref>
== Definitions ==
* The '''residual criterion''': given an approximation parameter <math>\varepsilon>0</math>, an '''{{mvar|ε}}-residual fixed-point of''' '''''f''''' is a point ''x'' in ''E<sup>d</sup>'' such that <math>|f(x)-x|\leq \varepsilon</math>, where ''|.|'' denotes the [[maximum norm]]. That is, all ''d'' coordinates of the difference <math>f(x)-x</math> should be at most {{mvar|ε}}.<ref name=":0" />{{Rp|page=4}}
* The '''absolute criterion''': given an approximation parameter <math>\delta>0</math>, a '''δ-absolute fixed-point of''' '''''f''''' is a point ''x'' in ''E<sup>d</sup>'' such that <math>|x-x_0|\leq \delta</math>, where <math>x_0</math> is any fixed-point of ''f''.
* The '''relative criterion''': given an approximation parameter <math>\delta>0</math>, a '''δ-relative fixed-point of''' '''''f''''' is a point ''x'' in ''E<sup>d</sup>'' such that <math>|x-x_0|/|x_0|\leq \delta</math>, where <math>x_0</math> is any fixed-point of ''f''.
For Lipschitz-continuous functions, the absolute criterion is stronger than the residual criterion: if ''f'' is Lipschitz-continuous with constant ''L'', then <math>|x-x_0|\leq \delta</math> implies <math>|f(x)-f(x_0)|\leq L\cdot \delta</math>. Since <math>x_0</math> is a fixed-point of ''f'', this implies <math>|f(x)-x_0|\leq L\cdot \delta</math>, so by the triangle inequality <math>|f(x)-x|\leq |f(x)-x_0|+|x_0-x|\leq (1+L)\cdot \delta</math>. Therefore, a δ-absolute fixed-point is also an {{mvar|ε}}-residual fixed-point with <math>\varepsilon = (1+L)\cdot \delta</math>.
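For illustration, the following Python sketch checks the two criteria for a simple one-dimensional map and verifies the <math>(1+L)\cdot \delta</math> bound. The contraction and the parameters below are arbitrary illustrative choices, not taken from the cited sources.

<syntaxhighlight lang="python">
# A minimal sketch (hypothetical example) of the residual and absolute
# criteria for a one-dimensional Lipschitz map.

def f(x):
    """A contraction on [0, 1] with Lipschitz constant L = 0.5."""
    return 0.5 * x + 0.25      # unique fixed point at x* = 0.5

L = 0.5
x_star = 0.5                   # the exact fixed point of f
delta = 1e-3

def is_residual_fixed_point(f, x, eps):
    """epsilon-residual criterion: |f(x) - x| <= eps (maximum norm; here d = 1)."""
    return abs(f(x) - x) <= eps

def is_absolute_fixed_point(x, x_star, delta):
    """delta-absolute criterion: |x - x_star| <= delta."""
    return abs(x - x_star) <= delta

x = x_star + 0.9 * delta       # a point that is delta-close to the fixed point
assert is_absolute_fixed_point(x, x_star, delta)
# A delta-absolute fixed point is also an epsilon-residual fixed point
# with epsilon = (1 + L) * delta:
assert is_residual_fixed_point(f, x, (1 + L) * delta)
</syntaxhighlight>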
The most basic step of a fixed-point computation algorithm is a '''value query''': given any ''x'' in ''E<sup>d</sup>'', the algorithm is given the value ''f''(''x''). In other words, the function ''f'' is accessible only via such '''evaluation''' queries. The run-time complexity of an algorithm is usually given by the number of required evaluations.
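The query model can be made concrete with a small wrapper that counts evaluations. This is only an illustration of the cost model; the wrapper class and its names are hypothetical, not taken from the cited sources.

<syntaxhighlight lang="python">
# A minimal sketch of the evaluation-query model (hypothetical wrapper):
# an algorithm may access f only through `evaluate`, and its run-time
# complexity is measured by the number of such calls.

class EvaluationOracle:
    def __init__(self, f):
        self._f = f
        self.num_queries = 0       # number of evaluation queries so far

    def evaluate(self, x):
        self.num_queries += 1
        return self._f(x)

# Example: wrap a (hypothetical) function and count one query.
oracle = EvaluationOracle(lambda x: 0.5 * x + 0.25)
value = oracle.evaluate(0.0)       # value == 0.25
print(oracle.num_queries)          # 1
</syntaxhighlight>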
== Contractive functions ==
A Lipschitz-continuous function with constant ''L'' is called '''contractive''' if ''L'' < 1, and '''weakly contractive''' if ''L'' ≤ 1. By the [[Banach fixed-point theorem]], every contractive function from ''E<sup>d</sup>'' to itself has a unique fixed point.
[[File:Fixed point anime.gif|alt=computing a fixed point using function iteration|thumb|Computing a fixed point using function iteration]]
The first algorithm for fixed-point computation was the '''fixed-point iteration''' algorithm: starting from an arbitrary point <math>x_0</math>, repeatedly compute <math>x_{t+1} = f(x_t)</math>. When ''L'' < 1, the sequence converges to the unique fixed point <math>x^*</math>, and <math>|x_t - x^*|\leq L^t\cdot |x_0 - x^*|</math>, so a δ-absolute fixed point is found using <math>O(\log(1/\delta)/\log(1/L))</math> evaluations.
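A minimal Python sketch of fixed-point iteration follows; the stopping rule (the residual criterion), the example function and the tolerance are illustrative choices.

<syntaxhighlight lang="python">
# Fixed-point iteration (a minimal sketch).  The stopping rule is the
# residual criterion |f(x) - x| <= eps; the example function, tolerance
# and iteration cap are illustrative choices.
import math

def fixed_point_iteration(f, x0, eps=1e-9, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        fx = f(x)                  # one evaluation query per iteration
        if abs(fx - x) <= eps:     # eps-residual fixed point found
            return fx
        x = fx
    raise RuntimeError("no eps-residual fixed point found within max_iter")

# Example: cos is contractive on [0, 1] (L = sin(1) < 1).
x = fixed_point_iteration(math.cos, x0=0.5)
print(x)                           # approximately 0.739085
</syntaxhighlight>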
When ''L'' < 1 and ''d'' = 1, the optimal algorithm is the '''Fixed Point Envelope''' (FPE) algorithm of Sikorski and Wozniakowski.<ref name=":5" /> It finds a ''δ''-relative fixed point using <math>O(\log(1/\delta) + \log \log(1/(1-L))) </math> queries, and a ''δ''-absolute fixed point using <math>O(\log(1/\delta)) </math> queries. This is much faster than the fixed-point iteration algorithm.<ref>{{cite book |doi=10.1007/978-1-4615-9552-6_4 |chapter=Fast Algorithms for the Computation of Fixed Points |title=Robustness in Identification and Control |year=1989 |last1=Sikorski |first1=K. |pages=49–58 |isbn=978-1-4615-9554-0 }}</ref>
Shellman and Sikorski<ref>{{cite journal |last1=Shellman |first1=Spencer |last2=Sikorski |first2=K. |title=A Two-Dimensional Bisection Envelope Algorithm for Fixed Points |journal=Journal of Complexity |date=June 2002 |volume=18 |issue=2 |pages=641–659 |doi=10.1006/jcom.2001.0625 }}</ref> presented an algorithm called '''BEFix''' (Bisection Envelope Fixed-point) for computing an {{mvar|ε}}-residual fixed-point of a two-dimensional function with ''L'' ≤ 1, using only <math>2 \lceil\log_2(1/\varepsilon)\rceil+1</math> queries. They later<ref>{{cite journal |last1=Shellman |first1=Spencer |last2=Sikorski |first2=K. |title=Algorithm 825: A deep-cut bisection envelope algorithm for fixed points |journal=ACM Transactions on Mathematical Software |date=September 2003 |volume=29 |issue=3 |pages=309–325 |doi=10.1145/838250.838255 |s2cid=7786886 }}</ref> presented an improvement called '''BEDFix''' (Bisection Envelope Deep-cut Fixed-point), with the same worst-case guarantee but better empirical performance. When ''L'' < 1, BEDFix can also compute a δ-absolute fixed-point using <math>O(\log(1/\varepsilon)+\log(1/(1-L)))</math> queries.
Shellman and Sikorski<ref name=":3" /> presented an algorithm called '''PFix''' for computing an {{mvar|ε}}-residual fixed-point of a ''d''-dimensional function with ''L'' ≤ 1, using <math>O(\log^d(1/\varepsilon))</math> queries. When ''L'' < 1, PFix can be executed with <math>\varepsilon = (1-L)\cdot \delta</math>, and in that case, it computes a δ-absolute fixed-point, using <math>O(\log^d(1/[(1-L)\delta]))</math> queries. It is more efficient than the iteration algorithm when ''L'' is close to 1. The algorithm is recursive: it handles a ''d''-dimensional function by recursive calls on (''d''-1)-dimensional functions.
=== Algorithms for differentiable functions ===
For functions in two or more dimensions, the problem is much more challenging. Shellman and Sikorski<ref name=":3" /> proved that, for any integers ''d'' ≥ 2 and ''L'' > 1, finding a δ-absolute fixed-point of ''d''-dimensional ''L''-Lipschitz functions may require infinitely many evaluations. The proof idea is as follows. For any integer ''T'' > 1, and for any sequence of ''T'' evaluation queries (possibly adaptive), one can construct two functions that are Lipschitz-continuous with constant ''L'', and yield the same answer to all these queries, but one of them has a unique fixed-point at (''x'', 0) and the other has a unique fixed-point at (''x'', 1). Any algorithm using ''T'' evaluations cannot differentiate between these functions, so it cannot find a δ-absolute fixed-point. This is true for any finite integer ''T''.
Several algorithms based on function evaluations have been developed for finding an {{mvar|ε}}-residual fixed-point:
* The first algorithm to approximate a fixed point of a general function was developed by [[Herbert Scarf]] in 1967.<ref>{{cite journal |last1=Scarf |first1=Herbert |title=The Approximation of Fixed Points of a Continuous Mapping |journal=SIAM Journal on Applied Mathematics |date=September 1967 |volume=15 |issue=5 |pages=1328–1343 |doi=10.1137/0115116 }}</ref><ref>H. Scarf found the first algorithmic proof: {{SpringerEOM|title=Brouwer theorem|first=M.I.|last=Voitsekhovskii|isbn=1-4020-0609-8}}.</ref> Scarf's algorithm finds an {{mvar|ε}}-residual fixed-point by finding a fully-labelled "primitive set", in a construction similar to [[Sperner's lemma]].
== Discrete fixed-point computation ==
A '''discrete function''' is a function defined on a subset of <math>\mathbb{Z}^d</math> (the ''d''-dimensional integer grid). There are several [[discrete fixed-point theorem]]s, stating conditions under which a discrete function has a fixed point.
Let ''f'' be a '''direction-preserving''' function from the integer cube <math>\{1, \dots, n\}^d</math> to itself, that is, a function in which the displacements <math>f(x)-x</math> at any two neighboring grid points (points differing by at most 1 in each coordinate) never have opposite signs in the same coordinate; such a function always has a fixed point. Chen and Deng<ref name=":2" /> proved that, for any ''d'' ≥ 2 and ''n'' > 48''d'', computing a fixed point of such a function requires <math>\Theta(n^{d-1})</math> function evaluations.
Chen and Deng<ref>{{cite journal |last1=Chen |first1=Xi |last2=Deng |first2=Xiaotie |title=On the complexity of 2D discrete fixed point problem |journal=Theoretical Computer Science |date=October 2009 |volume=410 |issue=44 |pages=4448–4456 |doi=10.1016/j.tcs.2009.07.052 |s2cid=2831759 }}</ref> define a different discrete-fixed-point problem, which they call '''2D-BROUWER'''. It considers a discrete function ''f'' on <math>\{0,\dots, n\}^2</math> such that, for every ''x'' on the grid, the displacement ''f''(''x'') - ''x'' is either (0, 1), (1, 0) or (-1, -1). The goal is to find a unit square of the grid in which all three displacement labels occur. The function ''f'' must map the square <math>\{0,\dots, n\}^2</math> to itself, so the displacement must be (0, 1) or (1, 0) on the lines ''x'' = 0 and ''y'' = 0; (-1, -1) or (0, 1) on the line ''x'' = ''n''; and (-1, -1) or (1, 0) on the line ''y'' = ''n''. The problem can be reduced to '''2D-SPERNER''' (computing a fully-labeled triangle in a triangulation satisfying the conditions of [[Sperner's lemma]]), and therefore it is [[PPAD-complete]]. This implies that computing an approximate fixed-point is PPAD-complete even for very simple functions.
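The following Python sketch illustrates the format of a 2D-BROUWER instance. The labelling function is a hypothetical construction, and the trichromatic unit square is found by brute force, which is only feasible because the example grid is tiny; the hardness result concerns instances far too large for such a search.

<syntaxhighlight lang="python">
# A minimal sketch of a 2D-BROUWER instance (hypothetical labelling):
# every grid point of {0,...,n}^2 is labelled by its displacement
# f(x) - x, which must be (0,1), (1,0) or (-1,-1); the goal is a unit
# square of the grid whose corners carry all three labels.

n = 8   # grid size (illustrative)

def displacement(x, y):
    """A valid labelling: the displacement keeps every point inside the square."""
    if x == n and y == n:
        return (-1, -1)
    if y == n:
        return (1, 0)    # top edge (except the corner) points right
    if x == n:
        return (0, 1)    # right edge (except the corner) points up
    return (0, 1)        # all other points point up

def find_trichromatic_cell(n):
    """Brute-force search for a unit square containing all three labels."""
    for x in range(n):
        for y in range(n):
            corners = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
            labels = {displacement(*c) for c in corners}
            if len(labels) == 3:
                return (x, y)   # lower-left corner of a trichromatic square
    return None

print(find_trichromatic_cell(n))   # (7, 7) for this labelling
</syntaxhighlight>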
== Relation between fixed-point computation and root-finding algorithms ==
The opposite is not true: finding an approximate root of a general function may be harder than finding an approximate fixed point. In particular, Sikorski<ref>{{cite journal |last1=Sikorski |first1=K. |title=Optimal solution of nonlinear equations satisfying a Lipschitz condition |journal=Numerische Mathematik |date=June 1984 |volume=43 |issue=2 |pages=225–240 |doi=10.1007/BF01390124 |s2cid=120937024 }}</ref> proved that finding an {{mvar|ε}}-root requires <math>\Omega(1/\varepsilon^d)</math> function evaluations. This gives an exponential lower bound even for a one-dimensional function (in contrast, an {{mvar|ε}}-residual fixed-point of a one-dimensional function can be found using <math>O(\log(1/\varepsilon))</math> queries using the [[bisection method]]). Here is a proof sketch.<ref name=":0" />{{Rp|page=35}} Construct a function ''g'' that is slightly larger than {{mvar|ε}} everywhere in ''E<sup>d</sup>'' except in some small cube around some point ''x''<sub>0</sub>, where ''x''<sub>0</sub> is the unique root of ''g''. If ''g'' is [[Lipschitz continuous]] with constant ''L'', then the cube around ''x''<sub>0</sub> can have a side-length of <math>\varepsilon/L</math>. Any algorithm that finds an {{mvar|ε}}-root of ''g'' must check a set of cubes that covers the entire ''E<sup>d</sup>''; the number of such cubes is at least <math>(L/\varepsilon)^d</math>.
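A minimal sketch of the one-dimensional bisection approach mentioned above follows. The example function and tolerance are illustrative choices, and the <math>O(\log(1/\varepsilon))</math> query bound assumes ''f'' is Lipschitz-continuous with constant at most 1.

<syntaxhighlight lang="python">
# Bisection for an eps-residual fixed point of f : [0, 1] -> [0, 1]
# (a minimal sketch).  g(x) = f(x) - x satisfies g(0) >= 0 and g(1) <= 0,
# so bisecting on the sign of g keeps a bracket that contains a true
# fixed point.  When f is Lipschitz-continuous with constant L <= 1,
# O(log(1/eps)) midpoint queries suffice.

def bisection_fixed_point(f, eps=1e-6):
    lo, hi = 0.0, 1.0              # invariant: f(lo) >= lo and f(hi) <= hi
    while True:
        mid = (lo + hi) / 2
        fmid = f(mid)              # one evaluation query per iteration
        if abs(fmid - mid) <= eps:
            return mid             # eps-residual fixed point
        if fmid > mid:
            lo = mid
        else:
            hi = mid

# Example: f(x) = 1 - x/2 maps [0, 1] to itself; its fixed point is 2/3.
print(bisection_fixed_point(lambda x: 1 - x / 2))
</syntaxhighlight>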
However, there are classes of functions for which finding an approximate root is equivalent to finding an approximate fixed point. One example<ref name=":2">{{cite book |doi=10.1145/1060590.1060638 |chapter=On algorithms for discrete and approximate Brouwer fixed points |title=Proceedings of the thirty-seventh annual ACM symposium on Theory of computing |year=2005 |last1=Chen |first1=Xi |last2=Deng |first2=Xiaotie |pages=323–330 |isbn=1581139608 |s2cid=16942881 }}</ref> is the class of functions ''g'' such that <math>g(x)+x</math> maps ''E<sup>d</sup>'' to itself (that is, <math>g(x)+x</math> is in ''E<sup>d</sup>'' for all ''x'' in ''E<sup>d</sup>''). This is because, for every such function, the function <math>f(x) := g(x)+x</math> satisfies the conditions of Brouwer's fixed-point theorem. Clearly, ''x'' is a fixed-point of ''f'' if and only if ''x'' is a root of ''g'', and ''x'' is an {{mvar|ε}}-residual fixed-point of ''f'' if and only if ''x'' is an {{mvar|ε}}-root of ''g''. Chen and Deng<ref name=":2" /> show that the discrete variants of these problems are computationally equivalent: both problems require <math>\Theta(n^{d-1})</math> function evaluations.
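The following minimal sketch illustrates this equivalence; the function <code>g</code> and the tolerance are arbitrary illustrative choices, not taken from the cited sources.

<syntaxhighlight lang="python">
# Reduction sketch: when g(x) + x maps the domain to itself, the fixed
# points of f(x) := g(x) + x are exactly the roots of g, and the
# eps-residual criterion for f coincides with the eps-root criterion for g.

def g(x):                 # hypothetical example with a root at x = 2/3
    return 1 - 1.5 * x

def f(x):                 # f(x) = g(x) + x maps [0, 1] into [0.5, 1]
    return g(x) + x

eps = 1e-6
for x in [0.0, 0.5, 2/3 + 1e-8, 1.0]:
    # f(x) - x equals g(x) (up to floating-point rounding), so the two
    # approximation criteria accept the same points.
    assert (abs(f(x) - x) <= eps) == (abs(g(x)) <= eps)
</syntaxhighlight>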
== Communication complexity ==
Roughgarden and Weinstein<ref>{{cite book |doi=10.1109/FOCS.2016.32 |chapter=On the Communication Complexity of Approximate Fixed Points |title=2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS) |year=2016 |last1=Roughgarden |first1=Tim |last2=Weinstein |first2=Omri |pages=229–238 |isbn=978-1-5090-3933-3 |s2cid=87553 }}</ref> studied the [[communication complexity]] of computing an approximate fixed-point. In their model, there are two agents: one of them knows a function ''f'' and the other knows a function ''g''. Both functions are Lipschitz continuous and satisfy Brouwer's conditions. The goal is to compute an approximate fixed point of the [[composite function]] <math>g\circ f</math>. They show that the deterministic communication complexity is in <math>\Omega(2^d)</math>.
== References ==
{{Reflist}}