: While many algorithms reach an exact solution, [[approximation algorithm]]s seek an approximation that is close to the true solution. The approximation can be reached by either using a deterministic or a random strategy. Such algorithms have practical value for many hard problems. One example of a problem commonly solved by approximation is the [[knapsack problem]]: given a set of items, each with some weight and some value, the goal is to pack the knapsack so as to maximize the total value carried, while the total weight is no more than some fixed number X. So, the solution must consider the weights of items as well as their values.<ref>{{Cite book|url=https://www.springer.com/us/book/9783540402862|title=Knapsack Problems {{!}} Hans Kellerer {{!}} Springer|language=en|isbn=978-3-540-40286-2|publisher=Springer|year=2004|doi=10.1007/978-3-540-24777-7|access-date=September 19, 2017|archive-url=https://web.archive.org/web/20171018181055/https://www.springer.com/us/book/9783540402862|archive-date=October 18, 2017|url-status=live|last1=Kellerer|first1=Hans|last2=Pferschy|first2=Ulrich|last3=Pisinger|first3=David|s2cid=28836720 }}</ref> A greedy approximation for this problem is sketched below.
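As an illustration, the following Python sketch implements the classic greedy 2-approximation for the 0/1 knapsack problem (packing items in order of value-to-weight ratio, then comparing the result with the single most valuable item that fits on its own); the function name and the (value, weight) item representation are choices made for this example, not a canonical interface.

<syntaxhighlight lang="python">
def knapsack_greedy_approx(items, capacity):
    """Greedy 2-approximation for the 0/1 knapsack problem.

    items: list of (value, weight) pairs; capacity: maximum total weight.
    Taking the better of the greedy packing and the single most valuable
    item that fits guarantees at least half of the optimal total value.
    """
    # Pack items in order of value-to-weight ratio, best first.
    by_ratio = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    chosen, total_value, total_weight = [], 0, 0
    for value, weight in by_ratio:
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_value += value
            total_weight += weight

    # Compare against the single most valuable item that fits by itself.
    feasible = [(v, w) for v, w in items if w <= capacity]
    if feasible:
        best_single = max(feasible, key=lambda vw: vw[0])
        if best_single[0] > total_value:
            return best_single[0], [best_single]
    return total_value, chosen


# Greedy picks values 60 and 100 (total 160); the optimum is 220,
# so the result is within the factor-2 guarantee.
print(knapsack_greedy_approx([(60, 10), (100, 20), (120, 30)], 50))
</syntaxhighlight>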
; Quantum algorithm
: [[Quantum algorithm]]s run on a realistic model of [[quantum computation]]. The term is usually used for those algorithms that seem inherently quantum or that use some essential feature of [[quantum computing]] such as [[quantum superposition]] or [[quantum entanglement]].
 
=== By design paradigm ===
| volume = 38| citeseerx = 10.1.1.145.4600| s2cid = 13268711
}}</ref> Whether randomized algorithms with [[P (complexity)|polynomial time complexity]] can be the fastest algorithms for some problems is an open question known as the [[P versus NP problem]]. There are two large classes of such algorithms:
# [[Monte Carlo algorithm]]s return a correct answer with high probability. For example, [[RP (complexity)|RP]] is the subclass of these that run in [[polynomial time]].
# [[Las Vegas algorithm]]s always return the correct answer, but their running time is only probabilistically bounded, e.g. [[Zero-error Probabilistic Polynomial time|ZPP]]; a sketch contrasting the two classes is given below.
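The following Python sketch contrasts the two classes on a toy problem, finding a 1 in an array in which half of the entries are 1; the function names and the array are assumptions made for this illustration.

<syntaxhighlight lang="python">
import random

def find_one_las_vegas(a):
    """Las Vegas: the answer is always correct, but the running time is
    random (expected O(1) probes when half the entries are 1)."""
    while True:
        i = random.randrange(len(a))
        if a[i] == 1:
            return i  # guaranteed correct on return

def contains_one_monte_carlo(a, k=20):
    """Monte Carlo: the running time is bounded (at most k probes), but
    the answer may wrongly be False with probability at most (1/2)**k
    when half the entries are 1 (one-sided error, as in the class RP)."""
    for _ in range(k):
        if a[random.randrange(len(a))] == 1:
            return True   # a 1 was actually seen: certainly correct
    return False          # possibly wrong: we may just have been unlucky

a = [0, 1] * 1000
print(a[find_one_las_vegas(a)])     # always prints 1
print(contains_one_monte_carlo(a))  # True with overwhelming probability
</syntaxhighlight>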
; [[Reduction (complexity)|Reduction of complexity]]
: When a problem shows optimal substructure (meaning the optimal solution to a problem can be constructed from optimal solutions to subproblems) and [[overlapping subproblem]]s (meaning the same subproblems are used to solve many different problem instances), a quicker approach called ''dynamic programming'' avoids recomputing solutions that have already been computed. For example, in the [[Floyd–Warshall algorithm]], the shortest path to a goal from a vertex in a weighted [[graph (discrete mathematics)|graph]] can be found by using the shortest path to the goal from all adjacent vertices. Dynamic programming and [[memoization]] go together. The main difference between dynamic programming and divide and conquer is that subproblems are more or less independent in divide and conquer, whereas subproblems overlap in dynamic programming. The difference between dynamic programming and straightforward recursion is the caching or memoization of recursive calls. When subproblems are independent and there is no repetition, memoization does not help; hence dynamic programming is not a solution for all complex problems. By using memoization or maintaining a [[Mathematical table|table]] of subproblems already solved, dynamic programming reduces the exponential nature of many problems to polynomial complexity, as the sketch below illustrates.
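As a minimal sketch of memoization, the following Python function caches the result of each subproblem, turning the exponential-time naive Fibonacci recursion into a linear-time one.

<syntaxhighlight lang="python">
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion takes exponential time because fib(n-1) and
    fib(n-2) share almost all of their subproblems; caching the
    overlapping subproblems makes the recursion run in linear time."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # instant with memoization; infeasible without it
</syntaxhighlight>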
; The greedy method
: A [[greedy algorithm]] is similar to a dynamic programming algorithm in that it works by examining substructures, in this case not of the problem but of a given solution. Such algorithms start with some solution, which may be given or have been constructed in some way, and improve it by making small modifications. For some problems they find the optimal solution, while for others they stop at [[local optimum|local optima]], that is, at solutions that the algorithm cannot improve but that are not optimal. The most popular use of greedy algorithms is finding the minimum spanning tree, where the optimal solution is attainable with this method. [[Kruskal's algorithm|Kruskal's]], [[Prim's algorithm|Prim's]], and [[Sollin's algorithm|Sollin's]] algorithms are greedy algorithms that solve this optimization problem, and the construction of a [[Huffman coding|Huffman tree]] is another well-known greedy method. A sketch of Kruskal's algorithm is given below.
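For concreteness, here is a minimal Python sketch of Kruskal's algorithm using a simple union-find structure with path compression; the (weight, u, v) edge-list representation of the graph is an assumption made for this example.

<syntaxhighlight lang="python">
def kruskal_mst(num_vertices, edges):
    """Kruskal's greedy algorithm: repeatedly take the cheapest edge
    that does not create a cycle.  edges is a list of (weight, u, v)."""
    parent = list(range(num_vertices))

    def find(x):                        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):  # consider edges cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge joins two different trees
            parent[ru] = rv
            mst.append((weight, u, v))
    return mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3)]
print(kruskal_mst(4, edges))            # [(1, 0, 1), (2, 2, 3), (3, 1, 2)]
</syntaxhighlight>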
; The heuristic method
: In [[optimization problem]]s, [[heuristic algorithm]]s can be used to find a solution close to the optimal solution in cases where finding the optimal solution is impractical. These algorithms get closer and closer to the optimal solution as they progress. In principle, if run for an infinite amount of time, they will find the optimal solution. Their merit is that they can find a solution very close to the optimal one in a relatively short time. Such algorithms include [[local search (optimization)|local search]], [[tabu search]], [[simulated annealing]], and [[genetic algorithm]]s. Some of them, like simulated annealing, are non-deterministic algorithms, while others, like tabu search, are deterministic. When a bound on the error of the non-optimal solution is known, the algorithm is further categorized as an [[approximation algorithm]].
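As an illustrative sketch, the following Python function applies simulated annealing to a toy one-dimensional objective; the cooling schedule, proposal distribution, and function names are simple choices made for this example, not a canonical setting.

<syntaxhighlight lang="python">
import math
import random

def simulated_annealing(f, x0, steps=10000, t0=1.0):
    """Minimize f by simulated annealing: propose a small random move,
    always accept improvements, and accept worsening moves with a
    probability that shrinks as the temperature cools.  This lets the
    search escape local optima early on, then settle down."""
    x, best = x0, x0
    for k in range(1, steps + 1):
        t = t0 / k                          # simple cooling schedule
        candidate = x + random.gauss(0, 0.5)
        delta = f(candidate) - f(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
    return best

# Toy objective with many local minima; global minimum near x = -0.3.
f = lambda x: x * x + 3 * math.sin(5 * x)
print(simulated_annealing(f, x0=10.0))
</syntaxhighlight>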