=== Estimation ===
Sequences of ordered images allow the estimation of motion as either instantaneous image velocities or discrete image displacements.<ref name="S. S. Beauchemin, J. L. Barron 1995" /> David J. Fleet and Yair Weiss provide a tutorial introduction to gradient based optical flow.<ref>{{Cite book |title=Handbook of Mathematical Models in Computer Vision |last1=Fleet |first1=David J. |last2=Weiss |first2=Yair |publisher=Springer |year=2006 |isbn=978-0-387-26371-7 |editor-last=Paragios |editor-first=Nikos |pages=237–257 |chapter=Optical Flow Estimation |editor-last2=Chen |editor-first2=Yunmei |editor-last3=Faugeras |editor-first3=Olivier D. |chapter-url=http://www.cs.toronto.edu/~fleet/research/Papers/flowChapter05.pdf}}</ref>
Fleet, along with John L. Barron and Steven Beauchemin, also provides a performance analysis of a number of optical flow techniques, emphasizing the accuracy and density of measurements.<ref>{{Cite journal |last1=Barron |first1=John L. |last2=Fleet |first2=David J. |last3=Beauchemin |first3=Steven |name-list-style=amp |year=1994 |title=Performance of optical flow techniques |url=http://www.cs.toronto.edu/~fleet/research/Papers/ijcv-94.pdf |journal=International Journal of Computer Vision |volume=12 |pages=43–77 |citeseerx=10.1.1.173.481 |doi=10.1007/bf01420984|s2cid=1290100 }}</ref>
Optical flow methods try to calculate the motion between two image frames taken at times <math>t</math> and <math>t+\Delta t</math> at every [[voxel]] position. These methods are called differential because they are based on local [[Taylor series]] approximations of the image signal; that is, they use partial derivatives with respect to the spatial and temporal coordinates.
For a (2D + ''t'')-dimensional case (3D or ''n''-D cases are similar) a voxel at ___location <math>(x,y,t)</math> with intensity <math>I(x,y,t)</math> will have moved by <math>\Delta x</math>, <math>\Delta y</math> and <math>\Delta t</math> between the two image frames, and the following ''brightness constancy constraint'' can be given:
:<math>I(x,y,t) = I(x+\Delta x, y + \Delta y, t + \Delta t)</math>
Assuming the movement to be small, the image constraint at <math>I(x,y,t)</math> can be expanded with a [[Taylor series]] to get:
:<math>I(x+\Delta x,y+\Delta y,t+\Delta t) = I(x,y,t) + \frac{\partial I}{\partial x}\,\Delta x+\frac{\partial I}{\partial y}\,\Delta y+\frac{\partial I}{\partial t} \, \Delta t+{}</math>[[higher-order terms]]
By truncating the higher-order terms (which performs a linearization), it follows that:
:<math>\frac{\partial I}{\partial x}\Delta x+\frac{\partial I}{\partial y}\Delta y+\frac{\partial I}{\partial t}\Delta t = 0</math>
or, dividing by <math>\Delta t</math>,
:<math>\frac{\partial I}{\partial x}\frac{\Delta x}{\Delta t} + \frac{\partial I}{\partial y}\frac{\Delta y}{\Delta t} + \frac{\partial I}{\partial t} \frac{\Delta t}{\Delta t} = 0</math>
which results in
:<math>\frac{\partial I}{\partial x}V_x+\frac{\partial I}{\partial y}V_y+\frac{\partial I}{\partial t} = 0</math>
where <math>V_x,V_y</math> are the <math>x</math> and <math>y</math> components of the velocity or optical flow of <math>I(x,y,t)</math>, and <math>\tfrac{\partial I}{\partial x}</math>, <math>\tfrac{\partial I}{\partial y}</math> and <math>\tfrac{\partial I}{\partial t}</math> are the derivatives of the image at <math>(x,y,t)</math> in the corresponding directions. In the following, the shorthands <math>I_x</math>, <math>I_y</math> and <math>I_t</math> are used for these derivatives.
Thus:
:<math>I_xV_x+I_yV_y=-I_t</math>
or
:<math>\nabla I\cdot\vec{V} = -I_t</math>
This is a single equation in two unknowns (<math>V_x</math> and <math>V_y</math>) and so cannot be solved as it stands. This underdetermination is known as the ''[[Motion perception#The aperture problem|aperture problem]]'' of optical flow algorithms. To find the optical flow, another set of equations is needed, given by some additional constraint; all optical flow methods introduce additional conditions for estimating the actual flow.
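The linearized brightness constancy constraint above can be checked numerically on a synthetic image pair with a known sub-pixel motion. The following sketch (function and variable names are illustrative; NumPy is assumed to be available) estimates <math>I_x</math>, <math>I_y</math> and <math>I_t</math> with finite differences and verifies that the residual of <math>I_x V_x + I_y V_y + I_t</math> is small when the motion is small:

```python
import numpy as np

def brightness_constancy_residual(vx, vy):
    # Synthetic smooth pattern translated by a known flow (vx, vy) per frame.
    x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
    frame = lambda t: np.sin(0.2 * (x - vx * t)) + np.cos(0.3 * (y - vy * t))
    I0, I1 = frame(0.0), frame(1.0)  # frames at t and t + Δt (Δt = 1)

    # Central differences approximate the spatial derivatives I_x, I_y;
    # a forward difference approximates the temporal derivative I_t.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0

    # Residual of the linearized constraint I_x V_x + I_y V_y + I_t = 0,
    # evaluated away from the borders (where np.gradient is one-sided).
    r = Ix * vx + Iy * vy + It
    return np.abs(r[8:-8, 8:-8]).max()

# For small motion the first-order Taylor approximation holds, so the
# residual is tiny compared with the intensity range of the pattern (~2).
print(brightness_constancy_residual(0.2, 0.1) < 0.05)
```

For larger displacements the truncated higher-order terms grow and the residual increases, which is why purely differential methods are usually combined with coarse-to-fine (pyramid) schemes in practice.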
=== Methods for determination ===
*[[Phase correlation]] – inverse of normalized [[cross-power spectrum]]
*Block-based methods – minimizing sum of squared differences or [[sum of absolute differences]], or maximizing normalized [[cross-correlation]]
*Differential methods of estimating optical flow, based on partial derivatives of the image signal and/or the sought flow field and higher-order partial derivatives, such as:
**[[Lucas–Kanade method]] – regarding image patches and an affine model for the flow field<ref name="Zhang2018">{{Cite journal |last1=Zhang |first1=G. |last2=Chanson |first2=H. |author-link2=Hubert Chanson |year=2018 |title=Application of Local Optical Flow Methods to High-Velocity Free-surface Flows: Validation and Application to Stepped Chutes |url=http://staff.civil.uq.edu.au/h.chanson/reprints/Zhang_Chanson_etfs_2018.pdf |journal=Experimental Thermal and Fluid Science |volume=90 |pages=186–199 |doi=10.1016/j.expthermflusci.2017.09.010|bibcode=2018ETFS...90..186Z }}</ref>
**[[Horn–Schunck method]] – optimizing a functional based on residuals from the brightness constancy constraint, and a particular regularization term expressing the expected smoothness of the flow field<ref name="Zhang2018" />
**[[Buxton–Buxton method]] – based on a model of the motion of edges in image sequences<ref>{{Cite book |url=https://books.google.com/books?id=NiQXkMbx-lUC&q=optical-flow+Buxton-and-Buxton&pg=PA107 |title=Visual Cognition |last1=Humphreys |first1=Glyn W. |last2=Bruce |first2=Vicki |author-link2=Vicki Bruce |publisher=Psychology Press |year=1989 |isbn=978-0-86377-124-8}}</ref>
**[[Black–Jepson method]] – coarse optical flow via correlation<ref name="S. S. Beauchemin, J. L. Barron 1995" />
**General [[variational methods]] – a range of modifications/extensions of Horn–Schunck, using other data terms and other smoothness terms.
*Discrete optimization methods – the search space is quantized, and then image matching is addressed through label assignment at every pixel, such that the corresponding deformation minimizes the distance between the source and the target image.<ref>{{Cite journal |url=http://vision.mas.ecp.fr/pub/mian08.pdf |title=Dense Image Registration through MRFs and Efficient Linear Programming |last1=B. Glocker |last2=N. Komodakis |last3=G. Tziritas |last4=N. Navab |last5=N. Paragios |journal=Medical Image Analysis |year=2008}}</ref> The optimal solution is often recovered through [[Max-flow min-cut theorem]] algorithms, linear programming or [[belief propagation]] methods.
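As an illustration of the differential methods above, the core of the Lucas–Kanade approach can be sketched in a few lines: assuming the flow is constant over a small window (a simplification of the affine model mentioned above; names and the window size are illustrative, NumPy assumed), the constraint <math>I_x V_x + I_y V_y = -I_t</math> from every pixel in the window is stacked into an overdetermined linear system and solved by least squares:

```python
import numpy as np

def lucas_kanade_patch(I0, I1, cx, cy, half=7):
    # Finite-difference estimates of the derivatives I_x, I_y, I_t.
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0

    # Stack the per-pixel constraints  [Ix Iy] [Vx Vy]^T = -It
    # over a (2*half+1)^2 window centred at (cx, cy).
    win = np.s_[cy - half:cy + half + 1, cx - half:cx + half + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()

    # Least-squares solution; it is well conditioned only where the window
    # has gradient structure in two directions, i.e. exactly where the
    # aperture problem does not arise.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # estimated (Vx, Vy)

# Synthetic pair translating with known flow (0.3, -0.2) pixels per frame.
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
frame = lambda t: np.sin(0.25 * (x - 0.3 * t)) + np.cos(0.2 * (y + 0.2 * t))
vx, vy = lucas_kanade_patch(frame(0.0), frame(1.0), 32, 32)
```

Global methods such as Horn–Schunck instead couple all pixels through a smoothness term and minimize a single functional over the whole flow field, trading the locality of this window-based estimate for dense, regularized flow.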
Many of these, in addition to the current state-of-the-art algorithms, are evaluated on the Middlebury Benchmark Dataset.<ref>{{Cite journal |last1=Baker |first1=Simon |last2=Scharstein |first2=Daniel |last3=Lewis |first3=J. P. |last4=Roth |first4=Stefan |last5=Black |first5=Michael J. |last6=Szeliski |first6=Richard |date=March 2011 |title=A Database and Evaluation Methodology for Optical Flow |journal=International Journal of Computer Vision |language=en |volume=92 |issue=1 |pages=1–31 |doi=10.1007/s11263-010-0390-2 |s2cid=316800 |issn=0920-5691|doi-access=free }}</ref><ref>{{Cite web |url=http://vision.middlebury.edu/flow/ |title=Optical Flow |last1=Baker |first1=Simon |last2=Scharstein |first2=Daniel |website=vision.middlebury.edu |access-date=2019-10-18 |last3=Lewis |first3=J. P. |last4=Roth |first4=Stefan |last5=Black |first5=Michael J. |last6=Szeliski |first6=Richard}}</ref> Other popular benchmark datasets are [[KITTI]] and [[Sintel]].
== Uses ==