Chambolle–Pock algorithm: Difference between revisions

| caption2 = Example of application of the Chambolle–Pock algorithm to image reconstruction.
}}
In [[mathematics]], the '''Chambolle–Pock algorithm''' is an [[algorithm]] used to solve [[convex optimization]] problems. It was introduced by Antonin Chambolle and Thomas Pock<ref name=":0">{{Cite journal |last1=Chambolle |first1=Antonin |last2=Pock |first2=Thomas |date=2011-05-01 |title=A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging |url=https://doi.org/10.1007/s10851-010-0251-1 |journal=Journal of Mathematical Imaging and Vision |language=en |volume=40 |issue=1 |pages=120–145 |doi=10.1007/s10851-010-0251-1 |s2cid=207175707 |issn=1573-7683}}</ref> in 2011 and has since become a widely used method in various fields, including [[Digital image processing|image processing]],<ref name=":1" /><ref name=":2" /><ref name=":3" /> [[computer vision]],<ref>{{Cite journal |last1=Pock |first1=Thomas |last2=Cremers |first2=Daniel |last3=Bischof |first3=Horst |last4=Chambolle |first4=Antonin |date=2009 |title=An algorithm for minimizing the Mumford-Shah functional |url=https://ieeexplore.ieee.org/document/5459348/ |journal=2009 IEEE 12th International Conference on Computer Vision |pages=1133–1140 |doi=10.1109/ICCV.2009.5459348 |isbn=978-1-4244-4420-5 |s2cid=15991312}}</ref> and [[signal processing]].<ref>{{Cite journal |date=2014 |title=A Generic Proximal Algorithm for Convex Optimization—Application to Total Variation Minimization |url=https://ieeexplore.ieee.org/document/6810809/ |journal=IEEE Signal Processing Letters |volume=21 |issue=8 |pages=985–989 |doi=10.1109/LSP.2014.2322123 |bibcode=2014ISPL...21..985. |s2cid=8976837 |issn=1070-9908}}</ref>
 
The Chambolle–Pock algorithm is specifically designed to efficiently solve convex optimization problems that involve the minimization of a non-smooth [[Loss function|cost function]] composed of a data fidelity term and a regularization term.<ref name=":0" /> This configuration commonly arises in ill-posed imaging [[Inverse problem|inverse problems]] such as [[image reconstruction]],<ref name=":1">{{Cite journal |last1=Sidky |first1=Emil Y |last2=Jørgensen |first2=Jakob H |last3=Pan |first3=Xiaochuan |date=2012-05-21 |title=Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm |url=https://iopscience.iop.org/article/10.1088/0031-9155/57/10/3065 |journal=Physics in Medicine and Biology |volume=57 |issue=10 |pages=3065–3091 |doi=10.1088/0031-9155/57/10/3065 |issn=0031-9155 |pmc=3370658 |pmid=22538474 |arxiv=1111.5632 |bibcode=2012PMB....57.3065S}}</ref> [[Noise reduction|denoising]],<ref name=":2">{{Cite journal |last1=Fang |first1=Faming |last2=Li |first2=Fang |last3=Zeng |first3=Tieyong |date=2014-03-13 |title=Single Image Dehazing and Denoising: A Fast Variational Approach |url=http://epubs.siam.org/doi/10.1137/130919696 |journal=SIAM Journal on Imaging Sciences |language=en |volume=7 |issue=2 |pages=969–996 |doi=10.1137/130919696 |issn=1936-4954}}</ref> and [[inpainting]].<ref name=":3">{{Cite journal |last1=Allag |first1=A. |last2=Benammar |first2=A. |last3=Drai |first3=R. |last4=Boutkedjirt |first4=T. |date=2019-07-01 |title=Tomographic Image Reconstruction in the Case of Limited Number of X-Ray Projections Using Sinogram Inpainting |url=https://doi.org/10.1134/S1061830919070027 |journal=Russian Journal of Nondestructive Testing |language=en |volume=55 |issue=7 |pages=542–548 |doi=10.1134/S1061830919070027 |s2cid=203437503 |issn=1608-3385}}</ref>
 
The algorithm is based on a primal-dual formulation, which allows for simultaneous updates of the primal and [[Duality (mathematics)|dual]] variables. By employing the [[proximal operator]], the Chambolle–Pock algorithm efficiently handles non-smooth convex regularization terms, such as the [[total variation]], which are common in imaging applications.<ref name=":0" />
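In outline, each iteration performs a proximal ascent step in the dual variable, a proximal descent step in the primal variable, and an over-relaxation of the primal iterate. A minimal Python sketch of this generic scheme (the function and argument names are illustrative, not from the original paper):

```python
import numpy as np

def chambolle_pock(K, K_adj, prox_tau_G, prox_sigma_Fstar,
                   x0, y0, tau, sigma, theta=1.0, n_iter=500):
    """Generic primal-dual iteration for min_x F(Kx) + G(x).

    K, K_adj         : linear operator and its adjoint (callables)
    prox_tau_G       : proximal operator of tau * G
    prox_sigma_Fstar : proximal operator of sigma * F^* (convex conjugate of F)
    """
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()                                  # over-relaxed primal iterate
    for _ in range(n_iter):
        y = prox_sigma_Fstar(y + sigma * K(x_bar))    # dual proximal step
        x_new = prox_tau_G(x - tau * K_adj(y))        # primal proximal step
        x_bar = x_new + theta * (x_new - x)           # over-relaxation
        x = x_new
    return x, y
```

For instance, taking <math>K</math> as the identity, <math>G = 0</math> and <math>F(y) = \tfrac{1}{2}\lVert y - b\rVert^2</math> gives iterates converging to <math>x = b</math>, a simple sanity check of the scheme.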
where <math>F^*</math> and <math>G^*</math> are the [[Convex conjugate|convex conjugates]] of <math>F</math> and <math>G</math>, respectively, and <math>K^*</math> is the adjoint of <math>K</math>.<ref name=":0" />
 
Assume that the primal and the dual problems admit at least one solution <math> (\hat{x}, \hat{y}) \in \mathcal{X}\times \mathcal{Y} </math>, that is, they satisfy<ref name=":4">{{Cite book |last1=Ekeland |first1=Ivar |url=http://epubs.siam.org/doi/book/10.1137/1.9781611971088 |title=Convex Analysis and Variational Problems |last2=Témam |first2=Roger |date=1999 |publisher=Society for Industrial and Applied Mathematics |isbn=978-0-89871-450-0 |page=61 |language=en |doi=10.1137/1.9781611971088}}</ref>
 
<math display="block">
{{nowrap|<math>k\leftarrow k+1</math>}}
'''end do'''
{{algorithm-end}}To ensure the convergence of the algorithm, it suffices to select the step size parameters such that <math>\tau \sigma L^2 < 1</math>, where <math> L = \lVert K \rVert </math>.<ref name=":0" /> Moreover, under these hypotheses it is proved that, for <math>\theta = 1</math>, the algorithm achieves an <math> \mathcal{O}(1/N)</math> rate of convergence for the primal-dual gap,<ref name=":0" /> which [[Yurii Nesterov|Nesterov]] showed to be optimal for the class of known structured convex optimization problems.<ref>{{Cite journal |last=Nesterov |first=Yu. |date=2005-05-01 |title=Smooth minimization of non-smooth functions |url=https://doi.org/10.1007/s10107-004-0552-5 |journal=Mathematical Programming |language=en |volume=103 |issue=1 |pages=127–152 |doi=10.1007/s10107-004-0552-5 |s2cid=2391217 |issn=1436-4646}}</ref>
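The operator norm <math>L = \lVert K \rVert</math> needed by this step-size rule is often not available in closed form; a common heuristic is to estimate it by power iteration on <math>K^* K</math> and then choose step sizes satisfying the bound. A sketch under these assumptions (the function name and the small matrix example are illustrative):

```python
import numpy as np

def estimate_operator_norm(K, K_adj, x_shape, n_iter=50, seed=0):
    """Estimate L = ||K|| via power iteration on K^* K (a standard heuristic)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(x_shape)
    for _ in range(n_iter):
        x = K_adj(K(x))            # apply K^* K
        x = x / np.linalg.norm(x)  # renormalize
    # The largest eigenvalue of K^* K is ||K||^2.
    return float(np.sqrt(np.linalg.norm(K_adj(K(x)))))

# Illustrative operator: a small matrix with known norm ||K|| = 3.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
L = estimate_operator_norm(lambda v: A @ v, lambda v: A.T @ v, (2,))
tau = sigma = 0.99 / L             # then tau * sigma * L**2 = 0.99**2 < 1
```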
 
The semi-implicit Arrow–Hurwicz method<ref>{{cite book |first=H. |last=Uzawa |chapter=Iterative methods for concave programming |editor1-first=K. J. |editor1-last=Arrow |editor2-first=L. |editor2-last=Hurwicz |editor3-first=H. |editor3-last=Uzawa |title=Studies in linear and nonlinear programming |url=https://archive.org/details/studiesinlinearn0000arro |url-access=registration |publisher=Stanford University Press |year=1958 }}</ref> coincides with the particular choice of <math>\theta = 0</math> in the Chambolle–Pock algorithm.<ref name=":0" />
'''end do'''
{{algorithm-end}}
The convergence of the algorithm slows down when <math>L</math>, the norm of the operator <math>K</math>, cannot be estimated easily or may be very large. Choosing proper [[Preconditioner|preconditioners]] <math>T</math> and <math>\Sigma</math>, and modifying the proximal operator with the introduction of the [[Inner product space#Euclidean vector space|induced norm]] through the operators <math>T</math> and <math>\Sigma</math>, ensures the convergence of the preconditioned algorithm.<ref>{{Cite journal |last1=Pock |first1=Thomas |last2=Chambolle |first2=Antonin |title=Diagonal preconditioning for first order primal-dual algorithms in convex optimization |url=https://ieeexplore.ieee.org/document/6126441/ |date=2011-11-06 |journal=2011 International Conference on Computer Vision |pages=1762–1769 |doi=10.1109/ICCV.2011.6126441 |isbn=978-1-4577-1102-2 |s2cid=17485166}}</ref>
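When <math>K</math> is a matrix, the diagonal preconditioners proposed by Pock and Chambolle can be computed from column and row sums of <math>|K|</math>, avoiding any estimate of <math>\lVert K \rVert</math>. A sketch restricted to this matrix setting (the function name is illustrative):

```python
import numpy as np

def diagonal_preconditioners(K, alpha=1.0):
    """Diagonal step sizes in the style of Pock & Chambolle (2011):

        tau_j   = 1 / sum_i |K_ij|**(2 - alpha)   (primal, per column)
        sigma_i = 1 / sum_j |K_ij|**alpha         (dual, per row)

    These replace the scalar steps tau and sigma of the basic algorithm.
    """
    absK = np.abs(np.asarray(K, dtype=float))
    col = (absK ** (2.0 - alpha)).sum(axis=0)   # per-column sums
    row = (absK ** alpha).sum(axis=1)           # per-row sums
    # Guard against all-zero rows/columns of K.
    return 1.0 / np.maximum(col, 1e-12), 1.0 / np.maximum(row, 1e-12)
```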
 
==Application==
| caption2 = Application of the Chambolle–Pock algorithm to the test image with noise.
}}
A typical application of this algorithm is the image denoising framework based on total variation.<ref name=":2" /> It builds on the observation that signals containing excessive and potentially erroneous details exhibit a high total variation, i.e., a large integral of the absolute value of the image gradient.<ref name=":2" /> Following this principle, the method decreases the total variation of the signal while keeping it close to the original signal, effectively removing unwanted details while preserving crucial features such as edges. In the classical two-dimensional discrete setting,<ref>{{Cite journal |last=Chambolle |first=Antonin |date=2004-01-01 |title=An Algorithm for Total Variation Minimization and Applications |url=https://doi.org/10.1023/B:JMIV.0000011325.36760.1e |journal=Journal of Mathematical Imaging and Vision |language=en |volume=20 |issue=1 |pages=89–97 |doi=10.1023/B:JMIV.0000011325.36760.1e |s2cid=207622122 |issn=1573-7683}}</ref> consider <math>\mathcal{X} = \mathbb{R}^{NM}</math>, where an element <math> u\in\mathcal{X} </math> represents an image whose pixel values are arranged on an <math>N\times M</math> Cartesian grid.<ref name=":0" />
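Concretely, the discrete total variation can be computed from forward differences on the pixel grid; a minimal NumPy sketch, assuming Neumann boundary conditions (a common choice):

```python
import numpy as np

def gradient(u):
    """Forward-difference discrete gradient of an N x M image (Neumann boundary)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # vertical differences
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # horizontal differences
    return gx, gy

def total_variation(u):
    """Isotropic total variation: sum of pointwise gradient magnitudes."""
    gx, gy = gradient(u)
    return float(np.sqrt(gx ** 2 + gy ** 2).sum())
```

A constant image has zero total variation, while sharp intensity jumps contribute in proportion to their height and length.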
 
Define the inner product on <math> \mathcal{X} </math> as<ref name=":0" />
\tilde{u}_{i,j}+\tau\lambda g_{i,j}}{1+\tau \lambda}
\end{align}
</math>The image total-variation denoising problem can also be treated with other algorithms,<ref>{{Cite journal |last1=Esser |first1=Ernie |last2=Zhang |first2=Xiaoqun |last3=Chan |first3=Tony F. |date=2010 |title=A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science |url=http://epubs.siam.org/doi/10.1137/09076934X |journal=SIAM Journal on Imaging Sciences |language=en |volume=3 |issue=4 |pages=1015–1046 |doi=10.1137/09076934X |issn=1936-4954}}</ref> such as the '''[[Augmented Lagrangian method#Alternating direction method of multipliers|alternating direction method of multipliers]]''' (ADMM),<ref>{{Cite journal |last1=Lions |first1=P. L. |last2=Mercier |first2=B. |date=1979 |title=Splitting Algorithms for the Sum of Two Nonlinear Operators |url=https://www.jstor.org/stable/2156649 |journal=SIAM Journal on Numerical Analysis |volume=16 |issue=6 |pages=964–979 |doi=10.1137/0716071 |jstor=2156649 |bibcode=1979SJNA...16..964L |issn=0036-1429}}</ref> [[Proximal gradient methods for learning|projected (sub)-gradient]]<ref>{{Cite journal |last1=Beck |first1=Amir |last2=Teboulle |first2=Marc |date=2009 |title=A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems |url=http://epubs.siam.org/doi/10.1137/080716542 |journal=SIAM Journal on Imaging Sciences |language=en |volume=2 |issue=1 |pages=183–202 |doi=10.1137/080716542 |s2cid=3072879 |issn=1936-4954}}</ref> or fast iterative shrinkage thresholding.<ref>{{Cite journal |last=Nesterov |first=Yu.E. |title=A method of solving a convex programming problem with convergence rate <math>O\bigl(\frac1{k^2}\bigr)</math> |url=https://www.mathnet.ru/eng/dan46009 |journal=Dokl. Akad. Nauk SSSR |volume=269 |issue=3 |pages=543–547}}</ref>
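For comparison, a self-contained Chambolle–Pock sketch for the total-variation denoising model <math>\min_u \lVert \nabla u \rVert_1 + \tfrac{\lambda}{2}\lVert u - g \rVert^2</math>, using the forward-difference gradient (whose squared operator norm is bounded by 8, so the default steps satisfy <math>\tau\sigma L^2 < 1</math>); the function name and default parameters are illustrative:

```python
import numpy as np

def tv_denoise(g, lam=8.0, tau=0.25, sigma=0.25, n_iter=200):
    """Chambolle-Pock sketch for min_u ||grad u||_1 + (lam/2) * ||u - g||^2."""
    u = g.copy()
    u_bar = u.copy()
    px = np.zeros_like(g)          # dual variable, vertical component
    py = np.zeros_like(g)          # dual variable, horizontal component
    for _ in range(n_iter):
        # Dual step: gradient ascent, then pointwise projection onto the unit ball.
        gx = np.zeros_like(u_bar)
        gy = np.zeros_like(u_bar)
        gx[:-1, :] = u_bar[1:, :] - u_bar[:-1, :]
        gy[:, :-1] = u_bar[:, 1:] - u_bar[:, :-1]
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm
        py /= norm
        # Primal step: apply K^* (the adjoint of the forward-difference gradient),
        # then the proximal operator v -> (v + tau*lam*g) / (1 + tau*lam).
        Kp = np.zeros_like(u)
        Kp[:-1, :] -= px[:-1, :]
        Kp[1:, :] += px[:-1, :]
        Kp[:, :-1] -= py[:, :-1]
        Kp[:, 1:] += py[:, :-1]
        u_old = u
        u = (u - tau * Kp + tau * lam * g) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old    # over-relaxation with theta = 1
    return u
```

A constant image is a fixed point of the iteration, and for a noisy image the output has reduced total variation, with the trade-off controlled by <math>\lambda</math>.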
 
== Implementation ==