
Understanding the dynamic programming strategy
Dynamic programming is a strategy proposed in the 1950s by Richard Bellman to optimize certain classes of algorithms. It is based on an intelligent caching mechanism that reuses the results of expensive computations. This caching mechanism is called memoization.
Dynamic programming gives significant performance benefits when the problem we are trying to solve can be divided into subproblems that share repeated calculations. The idea is to perform each such calculation once (the time-consuming step), store the result, and reuse it whenever the same subproblem appears again. This is achieved through memoization, which is especially useful for recursive algorithms that would otherwise evaluate the same inputs many times.
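
For example, a naive recursive computation of the Fibonacci numbers evaluates the same inputs over and over; caching each result turns the exponential recursion into a linear one. The following Python sketch illustrates memoization under that assumption (the names fib and cache are illustrative, not taken from the text):

# A minimal sketch of memoization applied to the Fibonacci numbers.
# The cache dictionary stores results of earlier calls so that each
# input is computed only once; the names fib and cache are illustrative.

def fib(n, cache=None):
    if cache is None:
        cache = {}
    if n in cache:          # reuse a result computed earlier
        return cache[n]
    if n < 2:               # base cases: fib(0) = 0, fib(1) = 1
        result = n
    else:                   # overlapping subproblems share the same cache
        result = fib(n - 1, cache) + fib(n - 2, cache)
    cache[n] = result       # memoize the result before returning
    return result

print(fib(50))              # completes in linear time instead of exponential

Without the cache, fib(50) would trigger billions of redundant calls; with memoization, each value from fib(0) to fib(50) is computed exactly once and then looked up.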