{{Short description|Computing the fixed point of a function}}
{{CS1 config|mode=cs1}}
'''Fixed-point computation''' refers to the process of computing an exact or approximate [[Fixed point (mathematics)|fixed point]] of a given function.<ref name=":1">{{cite book |doi=10.1007/978-3-642-50327-6 |title=The Computation of Fixed Points and Applications |series=Lecture Notes in Economics and Mathematical Systems |year=1976 |volume=124 |isbn=978-3-540-07685-8 }}{{page needed|date=April 2023}}</ref> In its most common form, the given function <math>f</math> satisfies the conditions of the [[Brouwer fixed-point theorem]]: that is, <math>f</math> is continuous and maps the unit [[N-cube|''d''-cube]] to itself. The Brouwer fixed-point theorem guarantees that <math>f</math> has a fixed point, but the proof is not [[Constructive proof|constructive]]. Various algorithms have been devised for computing an approximate fixed point. Such algorithms are used in various tasks, such as:

* [[Nash equilibrium computation]],
* [[Market equilibrium computation]],
* [[Dynamic system]] analysis.
 
== Definitions ==
[[File:Fixed point example.svg|alt=an example function with three fixed points|thumb|The graph of an example function with three fixed points]]
The unit interval is denoted by <math>E := [0, 1]</math>, and the unit [[N-cube|''d''-dimensional cube]] is denoted by <math>E^d</math>. A [[continuous function]] <math>f</math> is defined on <math>E^d</math> (from <math>E^d</math> to itself). Often, it is assumed that <math>f</math> is not only continuous but also [[Lipschitz continuous]], that is, for some constant <math>L</math>, <math>|f(x)-f(y)| \leq L\cdot |x-y|</math> for all <math>x,y</math> in <math>E^d</math>.
 
A '''fixed point''' of <math>f</math> is a point <math>x</math> in <math>E^d</math> such that <math>f(x) = x</math>. By the [[Brouwer fixed-point theorem]], any continuous function from <math>E^d</math> to itself has a fixed point. But for general functions, it is impossible to compute a fixed point precisely, since it can be an arbitrary real number. Fixed-point computation algorithms therefore look for ''approximate'' fixed points. Several common approximation criteria are:<ref name=":3">{{cite journal |last1=Shellman |first1=Spencer |last2=Sikorski |first2=K. |title=A recursive algorithm for the infinity-norm fixed point problem |journal=Journal of Complexity |date=December 2003 |volume=19 |issue=6 |pages=799–834 |doi=10.1016/j.jco.2003.06.001 |doi-access=free }}</ref>
 
* The '''residual criterion''': given an approximation parameter <math>\varepsilon>0</math>, an '''{{mvar|ε}}-residual fixed-point of <math>f</math>''' is a point <math>x</math> in <math>E^d</math> such that <math>|f(x)-x|\leq \varepsilon</math>, where <math>|\cdot|</math> denotes the [[maximum norm]]. That is, all <math>d</math> coordinates of the difference <math>f(x)-x</math> should be at most {{mvar|ε}} in absolute value.<ref name=":0" />{{Rp|page=4}}
* The '''absolute criterion''': given an approximation parameter <math>\delta>0</math>, a '''δ-absolute fixed-point of <math>f</math>''' is a point <math>x</math> in <math>E^d</math> such that <math>|x-x_0| \leq \delta</math>, where <math>x_0</math> is any fixed-point of <math>f</math>.
* The '''relative criterion''': given an approximation parameter <math>\delta>0</math>, a '''δ-relative fixed-point of <math>f</math>''' is a point <math>x</math> in <math>E^d</math> such that <math>|x-x_0|/|x_0|\leq \delta</math>, where <math>x_0</math> is any fixed-point of <math>f</math>.
 
For Lipschitz-continuous functions, the absolute criterion is stronger than the residual criterion: if <math>f</math> is Lipschitz-continuous with constant <math>L</math>, then <math>|x-x_0|\leq \delta</math> implies <math>|f(x)-f(x_0)|\leq L\cdot \delta</math>. Since <math>x_0</math> is a fixed-point of <math>f</math>, this implies <math>|f(x)-x_0|\leq L\cdot \delta</math>, so <math>|f(x)-x|\leq (1+L)\cdot \delta</math>. Therefore, a δ-absolute fixed-point is also an {{mvar|ε}}-residual fixed-point with <math>\varepsilon = (1+L)\cdot \delta</math>.
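This implication can be checked numerically. The following sketch is not from the cited sources; the contraction <math>f(x) = \cos(x)/2</math> and all constants are chosen purely for illustration:

```python
# Illustrative sketch (not from the cited sources): for the contraction
# f(x) = cos(x)/2 on [0, 1], which has Lipschitz constant L = 1/2, a point
# within delta of the fixed point x0 has residual at most (1 + L) * delta.
import math

L = 0.5
def f(x):
    return 0.5 * math.cos(x)

# Locate the unique fixed point x0 to high precision by iteration.
x0 = 0.5
for _ in range(200):
    x0 = f(x0)

delta = 1e-3
x = x0 + delta                      # a delta-absolute fixed point
residual = abs(f(x) - x)            # its residual |f(x) - x|
assert residual <= (1 + L) * delta  # delta-absolute implies (1+L)delta-residual
```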
 
The most basic step of a fixed-point computation algorithm is a '''value query''': given any <math>x</math> in <math>E^d</math>, the algorithm is provided with an oracle <math>\tilde{f}</math> to <math>f</math> that returns the value <math>\tilde{f}(x)</math>. The accuracy of the approximate fixed-point depends upon the error in the oracle <math>\tilde{f}(x)</math>.
 
The function <math>f</math> is accessible via '''evaluation''' queries: for any <math>x</math>, the algorithm can evaluate <math>f(x)</math>. The run-time complexity of an algorithm is usually given by the number of required evaluations.
 
== Contractive functions ==
A Lipschitz-continuous function with constant <math>L</math> is called '''[[contractive]]''' if <math>L < 1</math>; it is called '''[[weakly-contractive]]''' if <math>L \le 1</math>. Every contractive function satisfying Brouwer's conditions has a ''unique'' fixed point. Moreover, fixed-point computation for contractive functions is easier than for general functions.
[[File:Fixed point anime.gif|alt=computing a fixed point using function iteration|thumb|Computing a fixed point using function iteration]]
The first algorithm for fixed-point computation was the '''[[fixed-point iteration]]''' algorithm of Banach. [[Banach fixed point theorem|Banach's fixed-point theorem]] implies that, when fixed-point iteration is applied to a contraction mapping, the error after <math>t</math> iterations is in <math>O(L^t)</math>. Therefore, the number of evaluations required for a ''δ''-relative fixed-point is approximately <math>\log_L(\delta) = \log(\delta)/\log(L) = \log(1/\delta)/\log(1/L) </math>. Sikorski and Wozniakowski<ref name=":5">{{cite journal |last1=Sikorski |first1=K |last2=Woźniakowski |first2=H |title=Complexity of fixed points, I |journal=Journal of Complexity |date=December 1987 |volume=3 |issue=4 |pages=388–405 |doi=10.1016/0885-064X(87)90008-2 |doi-access=free }}</ref> showed that Banach's algorithm is optimal when the dimension is large. Specifically, when <math>d\geq \log(1/\delta)/\log(1/L) </math>, the number of evaluations required by ''any'' algorithm for a ''δ''-relative fixed-point is at least 50% of the number of evaluations required by the iteration algorithm. Note that as <math>L</math> approaches 1, the number of evaluations approaches infinity. No finite algorithm can compute a ''δ''-absolute fixed point for all functions with <math>L=1</math>.<ref name=":4">{{cite book |last1=Sikorski |first1=Krzysztof A. |title=Optimal Solution of Nonlinear Equations |date=2001 |publisher=Oxford University Press |isbn=978-0-19-510690-9 }}{{page needed|date=April 2023}}</ref>
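For a contraction, the iteration scheme can be sketched in a few lines. This is an illustrative sketch, not from the cited sources; the example function and tolerance are arbitrary:

```python
# Minimal sketch of Banach's fixed-point iteration for the contraction
# f(x) = cos(x)/2 on [0, 1] (Lipschitz constant L = 1/2). The error shrinks
# by a factor of about L per step, so reaching residual accuracy tol takes
# on the order of log(1/tol)/log(1/L) evaluations.
import math

def fixed_point_iteration(f, x, tol, max_iter=1000):
    """Iterate x <- f(x) until the residual |f(x) - x| is at most tol."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) <= tol:
            return fx
        x = fx
    raise RuntimeError("no convergence within max_iter iterations")

f = lambda x: 0.5 * math.cos(x)
x_star = fixed_point_iteration(f, 0.0, 1e-10)
assert abs(f(x_star) - x_star) <= 1e-10  # a 1e-10-residual fixed point
```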
 
When <math>L < 1</math> and <math>d = 1</math>, the optimal algorithm is the '''Fixed Point Envelope''' (FPE) algorithm of Sikorski and Wozniakowski.<ref name=":5" /> It finds a ''δ''-relative fixed point using <math>O(\log(1/\delta) + \log \log(1/(1-L))) </math> queries, and a ''δ''-absolute fixed point using <math>O(\log(1/\delta)) </math> queries. This is faster than the fixed-point iteration algorithm.<ref>{{cite book |doi=10.1007/978-1-4615-9552-6_4 |chapter=Fast Algorithms for the Computation of Fixed Points |title=Robustness in Identification and Control |year=1989 |last1=Sikorski |first1=K. |pages=49–58 |isbn=978-1-4615-9554-0 }}</ref>
 
When <math>d>1</math> but not too large, and <math>L \le 1</math>, the optimal algorithm is the interior-ellipsoid algorithm (based on the [[ellipsoid method]]).<ref>{{cite journal |last1=Huang |first1=Z |last2=Khachiyan |first2=L |last3=Sikorski |first3=K |title=Approximating Fixed Points of Weakly Contracting Mappings |journal=Journal of Complexity |date=June 1999 |volume=15 |issue=2 |pages=200–213 |doi=10.1006/jcom.1999.0504 |doi-access=free }}</ref> It finds an {{mvar|ε}}-residual fixed-point using <math>O(d\cdot \log(1/\varepsilon)) </math> evaluations. When <math>L<1</math>, it finds a ''δ''-absolute fixed point using <math>O(d\cdot [\log(1/\delta) + \log(1/(1-L))]) </math> evaluations.
 
Shellman and Sikorski<ref>{{cite journal |last1=Shellman |first1=Spencer |last2=Sikorski |first2=K. |title=A Two-Dimensional Bisection Envelope Algorithm for Fixed Points |journal=Journal of Complexity |date=June 2002 |volume=18 |issue=2 |pages=641–659 |doi=10.1006/jcom.2001.0625 |doi-access=free }}</ref> presented an algorithm called '''BEFix''' (Bisection Envelope Fixed-point) for computing an {{mvar|ε}}-residual fixed-point of a two-dimensional function with <math>L \le 1</math>, using only <math>2 \lceil\log_2(1/\varepsilon)\rceil+1</math> queries. They later<ref>{{cite journal |last1=Shellman |first1=Spencer |last2=Sikorski |first2=K. |title=Algorithm 825: A deep-cut bisection envelope algorithm for fixed points |journal=ACM Transactions on Mathematical Software |date=September 2003 |volume=29 |issue=3 |pages=309–325 |doi=10.1145/838250.838255 |s2cid=7786886 }}</ref> presented an improvement called '''BEDFix''' (Bisection Envelope Deep-cut Fixed-point), with the same worst-case guarantee but better empirical performance. When <math>L<1</math>, '''BEDFix''' can also compute a ''δ''-absolute fixed-point using <math>O(\log(1/\varepsilon)+\log(1/(1-L)))</math> queries.
 
Shellman and Sikorski<ref name=":3" /> presented an algorithm called '''PFix''' for computing an {{mvar|ε}}-residual fixed-point of a ''d''-dimensional function with <math>L \le 1</math>, using <math>O(\log^d(1/\varepsilon))</math> queries. When <math>L < 1</math>, '''PFix''' can be executed with <math>\varepsilon = (1-L)\cdot \delta</math>, and in that case, it computes a ''δ''-absolute fixed-point, using <math>O(\log^d(1/[(1-L)\delta]))</math> queries. It is more efficient than the iteration algorithm when <math>L</math> is close to 1. The algorithm is recursive: it handles a ''d''-dimensional function by recursive calls on (''d''-1)-dimensional functions.
 
=== Algorithms for differentiable functions ===
When the function <math>f</math> is differentiable, and the algorithm can evaluate its derivative (not only <math>f</math> itself), [[Newton's method in optimization|Newton's method]] can be used, and it is much faster.<ref>{{cite journal |last1=Kellogg |first1=R. B. |last2=Li |first2=T. Y. |last3=Yorke |first3=J. |title=A Constructive Proof of the Brouwer Fixed-Point Theorem and Computational Results |journal=SIAM Journal on Numerical Analysis |date=September 1976 |volume=13 |issue=4 |pages=473–483 |doi=10.1137/0713041 }}</ref><ref>{{cite journal |last1=Smale |first1=Steve |title=A convergent process of price adjustment and global newton methods |journal=Journal of Mathematical Economics |date=July 1976 |volume=3 |issue=2 |pages=107–120 |doi=10.1016/0304-4068(76)90019-7 }}</ref>
 
== General functions ==
For functions with Lipschitz constant <math>L > 1</math>, computing a fixed-point is much harder.
 
=== One dimension ===
For a 1-dimensional function (''d'' = 1), a ''δ''-absolute fixed-point can be found using <math>O(\log(1/\delta))</math> queries using the [[bisection method]]: start with the interval <math>E := [0, 1]</math>; at each iteration, let <math>x</math> be the center of the current interval, and compute <math>f(x)</math>; if <math>f(x) > x</math>, then recurse on the sub-interval to the right of <math>x</math>; otherwise, recurse on the interval to the left of <math>x</math>. Note that the current interval always contains a fixed point, so after <math>O(\log(1/\delta))</math> queries, any point in the remaining interval is a ''δ''-absolute fixed-point of <math>f</math>. Setting <math>\delta := \varepsilon/(L+1)</math>, where <math>L</math> is the Lipschitz constant, gives an {{mvar|ε}}-residual fixed-point, using <math>O(\log(L/\varepsilon)) = O(\log(L) + \log(1/\varepsilon))</math> queries.<ref name=":0" />
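The bisection scheme just described can be sketched as follows. This is an illustrative sketch under the stated assumptions (a continuous <math>f</math> from [0, 1] to itself); the example function is arbitrary:

```python
# Sketch of the 1-dimensional bisection scheme: the invariant is that
# f(a) >= a and f(b) <= b, so the interval [a, b] always contains a
# fixed point of the continuous function f: [0, 1] -> [0, 1].
def bisection_fixed_point(f, delta):
    a, b = 0.0, 1.0
    while b - a > delta:
        x = 0.5 * (a + b)
        if f(x) > x:
            a = x  # a fixed point lies to the right of x
        else:
            b = x  # a fixed point lies to the left of x (or at x)
    return 0.5 * (a + b)  # within delta of some fixed point

f = lambda x: 1.0 - x / 2  # maps [0, 1] to itself; fixed point at x = 2/3
approx = bisection_fixed_point(f, 1e-6)
assert abs(approx - 2/3) <= 1e-6
```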
 
=== Two or more dimensions ===
For functions in two or more dimensions, the problem is much more challenging. Shellman and Sikorski<ref name=":3" /> proved that for any integers ''d'' ≥ 2 and <math>L > 1</math>, finding a ''δ''-absolute fixed-point of ''d''-dimensional <math>L</math>-Lipschitz functions might require infinitely many evaluations. The proof idea is as follows. For any integer ''T'' > 1 and any sequence of ''T'' evaluation queries (possibly adaptive), one can construct two functions that are Lipschitz-continuous with constant <math>L</math> and yield the same answers to all these queries, but one of them has a unique fixed-point at (''x'', 0) and the other has a unique fixed-point at (''x'', 1). Any algorithm using ''T'' evaluations cannot distinguish between these functions, so it cannot find a ''δ''-absolute fixed-point. This is true for any finite integer ''T''.
 
Several algorithms based on function evaluations have been developed for finding an {{mvar|ε}}-residual fixed-point:
* A later algorithm by [[Harold W. Kuhn|Harold Kuhn]]<ref>{{Cite journal |last=Kuhn |first=Harold W. |date=1968 |title=Simplicial Approximation of Fixed Points |jstor=58762 |journal=Proceedings of the National Academy of Sciences of the United States of America |volume=61 |issue=4 |pages=1238–1242 |doi=10.1073/pnas.61.4.1238 |pmid=16591723 |pmc=225246 |doi-access=free }}</ref> used simplices and simplicial partitions instead of primitive sets.
* Developing the simplicial approach further, Orin Harrison Merrill<ref>{{cite thesis |last1=Merrill |first1=Orin Harrison |date=1972 |title=Applications and Extensions of an Algorithm that Computes Fixed Points of Certain Upper Semi-continuous Point to Set Mappings |id={{NAID|10006142329}} |oclc=570461463 |url=https://www.proquest.com/openview/9bd010ff744833cb3a23ef521046adcb/1 }}</ref> presented the ''restart algorithm''.
* B. Curtis Eaves<ref>{{cite journal |last1=Eaves |first1=B. Curtis |title=Homotopies for computation of fixed points |journal=Mathematical Programming |date=December 1972 |volume=3-3 |issue=1 |pages=1–22 |doi=10.1007/BF01584975 |s2cid=39504380 }}</ref> presented the ''[[Homotopy|homotopy]] algorithm''. The algorithm works by starting with an affine function that approximates <math>f</math>, and gradually deforming it towards <math>f</math> while following the fixed point.
* A book by Michael Todd<ref name=":1" /> surveys various algorithms developed until 1976.
* [[David Gale]]<ref>{{cite journal |first1=David |last1=Gale |year=1979 |title=The Game of Hex and Brouwer Fixed-Point Theorem |journal=The American Mathematical Monthly |volume=86 |issue=10 |pages=818–827 |doi=10.2307/2320146 |jstor=2320146 }}</ref> showed that computing a fixed point of a ''d''-dimensional function (on the unit ''d''-dimensional cube) is equivalent to deciding who is the winner in a ''d''-dimensional game of [[Hex (board game)|Hex]] (a game with ''d'' players, each of whom needs to connect two opposite faces of a ''d''-cube). Given the desired accuracy {{mvar|ε}}:
** Construct a Hex board of size ''k''<sup>''d''</sup>, where <math>k > 1/\varepsilon</math>. Each vertex ''z'' corresponds to a point ''z''/''k'' in the unit ''d''-cube.
** Compute the difference <math>f(z/k) - z/k</math>; note that the difference is a ''d''-vector.
** Label the vertex ''z'' by a label in 1, ..., ''d'', denoting the largest coordinate in the difference vector.
** The resulting labeling corresponds to a possible play of the ''d''-dimensional Hex game among ''d'' players. This game must have a winner, and Gale presents an algorithm for constructing the winning path.
** In the winning path, there must be a point at which ''f''<sub>''i''</sub>(''z''/''k'') - ''z''<sub>''i''</sub>/''k'' is positive, and an adjacent point at which it is negative. This means that there is a fixed point of <math>f</math> between these two points.
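The labeling step of Gale's reduction can be sketched for ''d'' = 2. This is an illustrative sketch, not from Gale's paper; the example map is arbitrary:

```python
# Illustrative sketch (not from Gale's paper): the labeling step of the Hex
# reduction for a 2-dimensional function f on the unit square. Each grid
# vertex z = (i, j) is labeled by the index (1 or 2) of the largest
# coordinate of the difference vector f(z/k) - z/k.
def hex_labels(f, k):
    labels = {}
    for i in range(k + 1):
        for j in range(k + 1):
            x, y = i / k, j / k
            fx, fy = f(x, y)
            dx, dy = fx - x, fy - y
            labels[(i, j)] = 1 if dx >= dy else 2
    return labels

# Example: an affine map of the unit square to itself, fixed point (1/2, 1/3).
f = lambda x, y: ((x + 1) / 3, (y + 1) / 4)
labels = hex_labels(f, 10)
assert set(labels.values()) <= {1, 2}
```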
In the worst case, the number of function evaluations required by all these algorithms is exponential in the binary representation of the accuracy, that is, in <math>\Omega(1/\varepsilon)</math>.
 
 
== Discrete fixed-point computation ==
A '''discrete function''' is a function defined on a subset of <math>\mathbb{Z}^d</math> (the ''d''-dimensional integer grid). There are several [[discrete fixed-point theorem]]s, stating conditions under which a discrete function has a fixed point. For example, the '''Iimura-Murota-Tamura theorem''' states that (in particular) if <math>f</math> is a function from a rectangular subset of <math>\mathbb{Z}^d</math> to itself, and <math>f</math> is ''hypercubic [[Direction-preserving function|direction-preserving]]'', then <math>f</math> has a fixed point.
 
Let <math>f</math> be a direction-preserving function from the integer cube <math>\{1, \dots, n\}^d</math> to itself. Chen and Deng<ref name=":2" /> prove that, for any ''d'' ≥ 2 and ''n'' > 48''d'', computing such a fixed point requires <math>\Theta(n^{d-1})</math> function evaluations.
 
Chen and Deng<ref>{{cite journal |last1=Chen |first1=Xi |last2=Deng |first2=Xiaotie |title=On the complexity of 2D discrete fixed point problem |journal=Theoretical Computer Science |date=October 2009 |volume=410 |issue=44 |pages=4448–4456 |doi=10.1016/j.tcs.2009.07.052 |s2cid=2831759 }}</ref> define a different discrete fixed-point problem, which they call '''2D-BROUWER'''. It considers a discrete function <math>f</math> on <math>\{0,\dots, n\}^2</math> such that, for every ''x'' on the grid, <math>f(x) - x</math> is either (0, 1) or (1, 0) or (-1, -1). The goal is to find a square in the grid in which all three labels occur. The function <math>f</math> must map the square <math>\{0,\dots, n\}^2</math> to itself, so it must map the lines ''x'' = 0 and ''y'' = 0 to either (0, 1) or (1, 0); the line ''x'' = ''n'' to either (-1, -1) or (0, 1); and the line ''y'' = ''n'' to either (-1, -1) or (1, 0). The problem can be reduced to '''2D-SPERNER''' (computing a fully-labeled triangle in a triangulation satisfying the conditions of [[Sperner's lemma]]), and is therefore [[PPAD-complete]]. This implies that computing an approximate fixed-point is PPAD-complete even for very simple functions.
 
== Relation between fixed-point computation and root-finding algorithms ==
Given a function <math>g</math> from <math>E^d</math> to <math>\mathbb{R}</math>, a '''root''' of <math>g</math> is a point ''x'' in <math>E^d</math> such that <math>g(x)=0</math>. An '''{{mvar|ε}}-root''' of <math>g</math> is a point ''x'' in <math>E^d</math> such that <math>g(x)\leq \varepsilon</math>.
 
Fixed-point computation is a special case of root-finding: given a function <math>f</math> on <math>E^d</math>, define <math>g(x) := |f(x)-x|</math>. Then ''x'' is a fixed-point of <math>f</math> if and only if ''x'' is a root of <math>g</math>, and ''x'' is an {{mvar|ε}}-residual fixed-point of <math>f</math> if and only if ''x'' is an {{mvar|ε}}-root of <math>g</math>. Therefore, any [[root-finding algorithm]] (an algorithm that computes an approximate root of a function) can be used to find an approximate fixed-point.
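The reduction amounts to one line of code. The following sketch is not from the cited sources; the example function is arbitrary:

```python
# Illustrative sketch (not from the cited sources): the residual
# g(x) = |f(x) - x| vanishes exactly at fixed points of f, so approximate
# roots of g are exactly epsilon-residual fixed points of f.
def residual(f, x):
    return abs(f(x) - x)

f = lambda x: x * x  # maps [0, 1] to itself; fixed points at x = 0 and x = 1
assert residual(f, 0.0) == 0.0     # exact fixed point <=> exact root
assert residual(f, 1.0) == 0.0
assert residual(f, 0.999) <= 1e-2  # near-fixed point <=> approximate root
```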
 
The opposite is not true: finding an approximate root of a general function may be harder than finding an approximate fixed point. In particular, Sikorski<ref>{{cite journal |last1=Sikorski |first1=K. |title=Optimal solution of nonlinear equations satisfying a Lipschitz condition |journal=Numerische Mathematik |date=June 1984 |volume=43 |issue=2 |pages=225–240 |doi=10.1007/BF01390124 |s2cid=120937024 }}</ref> proved that finding an {{mvar|ε}}-root requires <math>\Omega(1/\varepsilon^d)</math> function evaluations. This gives an exponential lower bound even for a one-dimensional function (in contrast, an {{mvar|ε}}-residual fixed-point of a one-dimensional function can be found using <math>O(\log(1/\varepsilon))</math> queries using the [[bisection method]]). Here is a proof sketch.<ref name=":0" />{{Rp|page=35}} Construct a function <math>g</math> that is slightly larger than {{mvar|ε}} everywhere in <math>E^d</math> except in some small cube around some point ''x''<sub>0</sub>, where ''x''<sub>0</sub> is the unique root of <math>g</math>. If <math>g</math> is [[Lipschitz continuous]] with constant <math>L</math>, then the cube around ''x''<sub>0</sub> can have a side-length of <math>\varepsilon/L</math>. Any algorithm that finds an {{mvar|ε}}-root of <math>g</math> must check a set of cubes that covers the entire <math>E^d</math>; the number of such cubes is at least <math>(L/\varepsilon)^d</math>.
 
However, there are classes of functions for which finding an approximate root is equivalent to finding an approximate fixed point. One example<ref name=":2">{{cite book |doi=10.1145/1060590.1060638 |chapter=On algorithms for discrete and approximate brouwer fixed points |title=Proceedings of the thirty-seventh annual ACM symposium on Theory of computing |year=2005 |last1=Chen |first1=Xi |last2=Deng |first2=Xiaotie |pages=323–330 |isbn=1581139608 |s2cid=16942881 }}</ref> is the class of functions <math>g</math> such that <math>g(x)+x</math> maps <math>E^d</math> to itself (that is, <math>g(x)+x</math> is in <math>E^d</math> for all ''x'' in <math>E^d</math>). This is because, for every such function, the function <math>f(x) := g(x)+x</math> satisfies the conditions of Brouwer's fixed-point theorem. Then ''x'' is a fixed-point of <math>f</math> if and only if ''x'' is a root of <math>g</math>, and ''x'' is an {{mvar|ε}}-residual fixed-point of <math>f</math> if and only if ''x'' is an {{mvar|ε}}-root of <math>g</math>. Chen and Deng<ref name=":2" /> show that the discrete variants of these problems are computationally equivalent: both problems require <math>\Theta(n^{d-1})</math> function evaluations.
 
== Communication complexity ==
Roughgarden and Weinstein<ref>{{cite book |doi=10.1109/FOCS.2016.32 |chapter=On the Communication Complexity of Approximate Fixed Points |title=2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS) |year=2016 |last1=Roughgarden |first1=Tim |last2=Weinstein |first2=Omri |pages=229–238 |isbn=978-1-5090-3933-3 |s2cid=87553 }}</ref> studied the [[communication complexity]] of computing an approximate fixed-point. In their model, there are two agents: one knows a function <math>f</math> and the other knows a function <math>g</math>. Both functions are Lipschitz continuous and satisfy Brouwer's conditions. The goal is to compute an approximate fixed point of the [[composite function]] <math>g\circ f</math>. They show that the deterministic communication complexity is in <math>\Omega(2^d)</math>.
 
== References ==
{{Reflist}}