
Commit

update README
joschout committed Aug 26, 2020
1 parent a67dfba commit 6724c56
Showing 1 changed file with 1 addition and 1 deletion: README.md
@@ -131,7 +131,7 @@ Typically, your own class inheriting `AbstractSubmodularFunction` can contain in…

#### Speeding up submodular function evaluation with function value reuse

Submodular optimization typically involves repeatedly evaluating the function to be optimized on different input sets. Each evaluation can be costly: the computational effort often grows with the number of elements in the input set. However, some optimization algorithms (such as Buchbinder's Greedy Search) evaluate the submodular function on a sequence of input sets in which two consecutive sets differ only slightly; often, a single element is added or removed at a time. In such cases, it is sometimes possible to reuse the function value obtained from the previous evaluation and update it with the difference corresponding to the changed set. This difference is called the **marginal increment**. This trick is also used in [Andreas Krause's SFO toolbox](https://las.inf.ethz.ch/sfo/index.html), as described in [Krause (2010)](https://www.jmlr.org/papers/volume11/krause10a/krause10a.pdf):
> Krause, Andreas. "SFO: A toolbox for submodular function optimization." The Journal of Machine Learning Research 11 (2010): 1141-1144.

Using marginal increments to update the previous function value can drastically reduce the computational work compared to re-evaluating the function on the whole input set.
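
To make this concrete, below is a minimal sketch of function value reuse for a toy weighted-coverage function. This is not code from this repository: the class name `WeightedCoverage`, its caching attributes, and the single-element-addition shortcut are illustrative assumptions about how such reuse can be implemented.

```python
# Illustrative sketch only; not part of this repository.
# Shows how the previous function value can be reused when the input set
# grows by a single element, as in Buchbinder-style greedy steps.
from typing import Dict, FrozenSet, Optional, Set


class WeightedCoverage:
    """Toy submodular function: f(S) = total weight of the items covered by
    the elements of S. The previous evaluation is cached so that evaluating
    S + {e} only needs to inspect the items covered by e."""

    def __init__(self, covers: Dict[str, Set[str]], weight: Dict[str, float]):
        self.covers = covers          # element -> set of items it covers
        self.weight = weight          # item -> weight
        self._prev_set: Optional[FrozenSet[str]] = None
        self._prev_covered: Set[str] = set()
        self._prev_value: float = 0.0

    def evaluate(self, input_set: Set[str]) -> float:
        prev = self._prev_set
        if prev is not None and prev <= input_set and len(input_set) == len(prev) + 1:
            # Marginal increment: only the newly added element is inspected,
            # instead of re-walking the whole input set.
            (new_element,) = input_set - prev
            newly_covered = self.covers[new_element] - self._prev_covered
            covered = self._prev_covered | newly_covered
            value = self._prev_value + sum(self.weight[i] for i in newly_covered)
        else:
            # Fall back to a full evaluation.
            covered = set().union(*(self.covers[e] for e in input_set)) if input_set else set()
            value = sum(self.weight[i] for i in covered)
        # Cache this evaluation for reuse by the next call.
        self._prev_set = frozenset(input_set)
        self._prev_covered = covered
        self._prev_value = value
        return value
```

With this sketch, a greedy-style sequence of calls pays the full evaluation cost only once:

```python
f = WeightedCoverage(
    covers={"a": {"x", "y"}, "b": {"y", "z"}},
    weight={"x": 1.0, "y": 2.0, "z": 3.0},
)
f.evaluate({"a"})        # 3.0, full evaluation
f.evaluate({"a", "b"})   # 6.0, reuses 3.0 and only inspects covers["b"]
```

For a coverage-style function, this turns re-evaluation work proportional to the whole input set into work proportional to the single added element, which is where the savings described above come from.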
