In computer science, the maximum subarray problem is the task of finding, within a given one-dimensional array of numbers, the contiguous subarray with the largest sum. The problem was first posed by Ulf Grenander of Brown University in 1977, as a simplified model for maximum likelihood estimation of patterns in digitized images. A linear-time algorithm to solve it was found soon afterwards by Jay Kadane of Carnegie Mellon University, and first published by Bentley (1984).
The algorithm consists of a scan through the array values, computing at each position the maximum subarray ending at that position. This subarray is either empty (in which case its sum is zero) or consists of one more element than the maximum subarray ending at the previous position. Thus, the problem can be solved with the following code, expressed here in Python:
def max_subarray(A):
    """Return the largest sum of any contiguous (possibly empty) subarray of A."""
    max_so_far = max_ending_here = 0
    for x in A:
        # Best sum of a subarray ending at this position: either extend the
        # previous best, or start fresh with the empty subarray (sum 0).
        max_ending_here = max(0, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
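In practice one often wants the bounds of a maximizing subarray as well as its sum. Kadane's scan extends naturally by recording where the current run began; the following sketch illustrates one way to do this (the function and variable names are illustrative, not taken from Bentley's text):

```python
def max_subarray_with_bounds(A):
    """Return (best_sum, start, end) where A[start:end] attains best_sum.

    The empty subarray (sum 0) is allowed and is reported as (0, 0, 0).
    """
    max_so_far, best_start, best_end = 0, 0, 0
    max_ending_here, cur_start = 0, 0
    for i, x in enumerate(A):
        if max_ending_here + x > 0:
            max_ending_here += x
        else:
            # A non-positive running sum can never help a later subarray,
            # so restart the current run just after position i.
            max_ending_here, cur_start = 0, i + 1
        if max_ending_here > max_so_far:
            max_so_far, best_start, best_end = max_ending_here, cur_start, i + 1
    return max_so_far, best_start, best_end
```

For example, `max_subarray_with_bounds([-2, 1, -3, 4, -1, 2, 1, -5, 4])` returns `(6, 3, 7)`: the subarray `[4, -1, 2, 1]` has the maximum sum 6.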
Similar problems may be posed for higher-dimensional arrays, but their solutions are more complicated.
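To make the higher-dimensional remark concrete: a common approach for a two-dimensional matrix fixes a pair of rows, collapses the columns between them into a one-dimensional array of column sums, and runs Kadane's scan on that array, giving O(n³) time for an n×n matrix. The sketch below follows that approach; it is an illustration under these assumptions, not a method described in the sources above:

```python
def max_submatrix_sum(M):
    """Maximum sum over all (possibly empty) axis-aligned submatrices of M."""
    if not M or not M[0]:
        return 0
    rows, cols = len(M), len(M[0])
    best = 0
    for top in range(rows):
        col_sums = [0] * cols  # sum of each column over rows top..bottom
        for bottom in range(top, rows):
            for j in range(cols):
                col_sums[j] += M[bottom][j]
            # 1D Kadane scan over the collapsed column sums.
            max_ending_here = 0
            for s in col_sums:
                max_ending_here = max(0, max_ending_here + s)
                best = max(best, max_ending_here)
    return best
```

For example, `max_submatrix_sum([[1, -2], [-3, 4]])` returns 4 (the single entry 4), and `max_submatrix_sum([[1, 2], [3, 4]])` returns 10 (the whole matrix).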
References
- Bentley, Jon (1984), "Programming pearls: algorithm design techniques", Communications of the ACM, 27 (9): 865–873, doi:10.1145/358234.381162.