Dynamic programming

{{Disputed|date=November 2015}}
 
In [[computer science]], [[mathematics]], [[management science]], [[economics]], and [[bioinformatics]], '''dynamic programming''' (also known as '''dynamic optimization''') is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions – ideally, using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to subproblems instead of recomputing them is called "[[memoization]]"<!-- NOT "memorization" -->.
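
For illustration, a minimal sketch of memoization in Python using the Fibonacci numbers as the family of subproblems (the function name and the use of Python's <code>functools.lru_cache</code> are illustrative choices, not part of the technique itself):

<syntaxhighlight lang="python">
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially many times;
# memoization stores each result, indexed by its input parameter n, so that
# every distinct subproblem is solved only once.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:                       # base cases F(0) = 0, F(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)  # combine the two smaller subproblems

print(fib(60))  # returns 1548008755920 immediately thanks to the cache
</syntaxhighlight>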
 
Dynamic programming algorithms are often used for [[optimization]]. A dynamic programming algorithm will examine the previously solved subproblems and will combine their solutions to give the best solution for the given problem. In comparison, a [[greedy algorithm]] treats the solution as some sequence of steps and picks the locally optimal choice at each step. Using a greedy algorithm does not guarantee an optimal solution, because picking locally optimal choices may result in a bad global solution, but it is often faster to calculate. Fortunately, some greedy algorithms (such as [[Kruskal's algorithm|Kruskal's]] or [[Prim's algorithm|Prim's]] for [[minimum spanning tree]]s) are proven to lead to the optimal solution.
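
A small sketch of the contrast, on an assumed coin-change instance chosen only for demonstration (denominations {1, 3, 4}, target amount 6): the greedy strategy of always taking the largest coin uses three coins (4 + 1 + 1), while dynamic programming, by combining optimal subproblem solutions, finds two (3 + 3).

<syntaxhighlight lang="python">
def greedy_coins(amount, denominations):
    """Always take the largest coin that fits: a locally optimal choice."""
    coins = 0
    for d in sorted(denominations, reverse=True):
        coins += amount // d
        amount %= d
    return coins

def dp_coins(amount, denominations):
    """best[a] holds the fewest coins needed for amount a; each entry is
    built by combining previously solved subproblems best[a - d]."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        for d in denominations:
            if d <= a:
                best[a] = min(best[a], best[a - d] + 1)
    return best[amount]

print(greedy_coins(6, [1, 3, 4]))  # 3 coins: 4 + 1 + 1
print(dp_coins(6, [1, 3, 4]))      # 2 coins: 3 + 3
</syntaxhighlight>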