
Commit

Renamed AbstractObjectiveFunction to AbstractSubmodularFunction. Linked to Andreas Krause's SFO Matlab toolbox for future reference in the README.

joschout committed Oct 20, 2019
1 parent 77c5fd5 commit e453104
Showing 8 changed files with 23 additions and 23 deletions.
16 changes: 8 additions & 8 deletions README.md
@@ -26,27 +26,27 @@ For a lack of a better name, this repository calls these algorithms:
The following describes how to use this repository in your own implementation.

## The set function
-To use this in your own code, your function to be maximized should be contained in an object of a class inheriting from `AbstractObjectiveFunction`. This class looks as follows:
+To use this in your own code, your function to be maximized should be contained in an object of a class inheriting from `AbstractSubmodularFunction`. This class looks as follows:
``` Python
-class AbstractObjectiveFunction:
+class AbstractSubmodularFunction:
    def evaluate(self, input_set: Set[E]) -> float:
        raise NotImplementedError('Abstract Method')
```
-That is, `AbstractObjectiveFunction` requires its subclasses to implement an `evaluate()` method, taking as input a `Set[E]` and resulting in a `float`. This method should evaluate the set function on the given set, returning the value of the function. This class corresponds to the *'value oracle'*, which should be able to return the value of the function to be maximized for every possible subset of the *ground set*.
+That is, `AbstractSubmodularFunction` requires its subclasses to implement an `evaluate()` method, taking as input a `Set[E]` and resulting in a `float`. This method should evaluate the set function on the given set, returning the value of the function. This class corresponds to the *'value oracle'*, which should be able to return the value of the function to be maximized for every possible subset of the *ground set*.

-Typically, your own class inheriting `AbstractObjectiveFunction` can contain instance variables for parameters required by the objective function.
+Typically, your own class inheriting `AbstractSubmodularFunction` can contain instance variables for parameters required by the objective function.
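
For illustration, here is a minimal sketch of such a subclass. It is not part of this commit: the class name `CoverageFunction` and its `covered_by` parameter are hypothetical. A coverage-style function like this is a classic example of a submodular set function:

``` Python
from typing import Dict, Set, TypeVar

from submodmax.abstract_optimizer import AbstractSubmodularFunction

E = TypeVar('E')


class CoverageFunction(AbstractSubmodularFunction):
    """Hypothetical example: counts how many items the chosen elements cover."""

    def __init__(self, covered_by: Dict[E, Set[int]]):
        # Maps each ground-set element to the set of items it covers.
        self.covered_by: Dict[E, Set[int]] = covered_by

    def evaluate(self, input_set: Set[E]) -> float:
        # The value of a set is the size of the union of what its elements cover.
        covered: Set[int] = set()
        for element in input_set:
            covered |= self.covered_by.get(element, set())
        return float(len(covered))
```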

## The Optimizers
-Every included optimizer inherits the class `AbstractOptimizer`. Each optimizer should be iniitialized with at least two arguments:
+Every included optimizer inherits the class `AbstractOptimizer`. Each optimizer should be initialized with at least two arguments:
1. the objective function to be optimized
2. the ground set of items. The optimizers will search over the power set of this ground set.

The following shows the definition of the `AbstractOptimizer` class:

``` Python
class AbstractOptimizer:
-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug: bool = True):
-        self.objective_function: AbstractObjectiveFunction = objective_function
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug: bool = True):
+        self.objective_function: AbstractSubmodularFunction = objective_function
        self.ground_set: Set[E] = ground_set
        self.debug: bool = debug
```

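As a hedged usage sketch: the `CoverageFunction` above is hypothetical, while `RandomizedDoubleGreedySearch`, its constructor signature, and its `optimize()` method all appear in this commit's diff:

``` Python
from submodmax.randomized_double_greedy_search import RandomizedDoubleGreedySearch

# Toy ground set and coverage structure for the hypothetical CoverageFunction.
ground_set = {'a', 'b', 'c'}
objective = CoverageFunction(covered_by={'a': {1, 2}, 'b': {2, 3}, 'c': {3}})

optimizer = RandomizedDoubleGreedySearch(objective_function=objective,
                                         ground_set=ground_set,
                                         debug=False)
best_subset = optimizer.optimize()  # a Set[E] approximately maximizing the objective
```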
@@ -93,7 +93,7 @@ Some good references for submodular maximization
>
> Buchbinder, N., & Feldman, M. (2019). Submodular Functions Maximization Problems. Handbook of Approximation Algorithms and Metaheuristics, Second Edition, 753–788. https://doi.org/10.1201/9781351236423-42
-Andreas Krause and Carlos Guestrin maintain a [great website about submodular optimization and the submodularity property](https://las.inf.ethz.ch/submodularity/)
+Andreas Krause and Carlos Guestrin maintain a [great website about submodular optimization and the submodularity property](https://las.inf.ethz.ch/submodularity/), linking to their [Matlab/Octave toolbox for Submodular Function Optimization](https://las.inf.ethz.ch/sfo/index.html).

Jan Vondrak hosts the [slides for some great presentations he did about submodular functions on his website.](https://theory.stanford.edu/~jvondrak/presentations.html)

6 changes: 3 additions & 3 deletions submodmax/abstract_optimizer.py
@@ -3,14 +3,14 @@
E = TypeVar('E')


-class AbstractObjectiveFunction:
+class AbstractSubmodularFunction:
    def evaluate(self, input_set: Set[E]) -> float:
        raise NotImplementedError('Abstract Method')


class AbstractOptimizer:
-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug: bool = True):
-        self.objective_function: AbstractObjectiveFunction = objective_function
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug: bool = True):
+        self.objective_function: AbstractSubmodularFunction = objective_function
        self.ground_set: Set[E] = ground_set
        self.debug: bool = debug

4 changes: 2 additions & 2 deletions submodmax/deterministic_double_greedy_search.py
@@ -1,6 +1,6 @@
from typing import Set, TypeVar

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction

E = TypeVar('E')

@@ -23,7 +23,7 @@ class DeterministicDoubleGreedySearch(AbstractOptimizer):
"""

-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug: bool = True):
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug: bool = True):
        super().__init__(objective_function, ground_set, debug)

    def optimize(self) -> Set[E]:
4 changes: 2 additions & 2 deletions submodmax/deterministic_local_search.py
@@ -1,6 +1,6 @@
from typing import Set, Tuple, Optional, TypeVar

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction

E = TypeVar('E')

@@ -27,7 +27,7 @@ class DeterministicLocalSearch(AbstractOptimizer):
FOCS paper: https://people.csail.mit.edu/mirrokni/focs07.pdf (page 4-5)
"""

-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], epsilon: float = 0.05,
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], epsilon: float = 0.05,
                 debug: bool = True):
        super().__init__(objective_function, ground_set, debug)
        self.epsilon: float = epsilon
4 changes: 2 additions & 2 deletions submodmax/deterministic_local_search_pyids.py
@@ -1,6 +1,6 @@
from typing import Set, Tuple, Optional, TypeVar

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction

E = TypeVar('E')

@@ -28,7 +28,7 @@ class DeterministicLocalSearchPyIDS(AbstractOptimizer):
This implementation is largely based on the one from Jiri Filip and Tomas Kliegr included in PyIDS.
"""

-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], epsilon=0.05,
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], epsilon=0.05,
                 debug: bool = True):
        super().__init__(objective_function, ground_set, debug)
        self.epsilon: float = epsilon
4 changes: 2 additions & 2 deletions submodmax/randomized_double_greedy_search.py
@@ -3,7 +3,7 @@

import numpy as np

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction

E = TypeVar('E')

@@ -30,7 +30,7 @@ class RandomizedDoubleGreedySearch(AbstractOptimizer):
"""

-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug: bool = True):
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug: bool = True):
        super().__init__(objective_function, ground_set, debug)

    def optimize(self) -> Set[E]:
4 changes: 2 additions & 2 deletions submodmax/smooth_local_search.py
@@ -4,7 +4,7 @@

import numpy as np

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction
from .random_set import sample_a_set_with_bias_delta_on_A

E = TypeVar('E')
@@ -27,7 +27,7 @@ class SmoothLocalSearch(AbstractOptimizer):
Note: the problem of maximizing a submodular function is NP-hard.
"""
-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug=True):
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug=True):
        super().__init__(objective_function, ground_set, debug)

        self.empty_set: Set[E] = set()
4 changes: 2 additions & 2 deletions submodmax/smooth_local_search_pyids.py
@@ -4,7 +4,7 @@

import numpy as np

-from .abstract_optimizer import AbstractOptimizer, AbstractObjectiveFunction
+from .abstract_optimizer import AbstractOptimizer, AbstractSubmodularFunction
from .random_set import sample_a_set_with_bias_delta_on_A, RandomSetOptimizer

E = TypeVar('E')
@@ -29,7 +29,7 @@ class SmoothLocalSearchPyIDS(AbstractOptimizer):
This implementation is largely based on the one from Jiri Filip and Tomas Kliegr included in PyIDS.
"""

-    def __init__(self, objective_function: AbstractObjectiveFunction, ground_set: Set[E], debug: bool = True):
+    def __init__(self, objective_function: AbstractSubmodularFunction, ground_set: Set[E], debug: bool = True):
        super().__init__(objective_function, ground_set, debug)
        self.rs_optimizer = RandomSetOptimizer(ground_set)

