[pre-commit.ci] pre-commit autoupdate (#555)
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Tim Mensinger <[email protected]>
pre-commit-ci[bot] and timmens authored Jan 21, 2025
1 parent 15ac4c2 · commit 9a40583
Showing 21 changed files with 53 additions and 49 deletions.
10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -80,7 +80,7 @@ repos:
           - --blank
         exclude: src/optimagic/optimization/algo_options.py
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.7.2
+    rev: v0.9.2
     hooks:
       # Run the linter.
       - id: ruff
@@ -97,7 +97,7 @@ repos:
           - pyi
           - jupyter
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.18
+    rev: 0.7.21
     hooks:
       - id: mdformat
         additional_dependencies:
@@ -109,7 +109,7 @@ repos:
           - '88'
         files: (README\.md)
   - repo: https://github.com/executablebooks/mdformat
-    rev: 0.7.18
+    rev: 0.7.21
     hooks:
       - id: mdformat
         additional_dependencies:
@@ -121,7 +121,7 @@ repos:
         files: (docs/.)
         exclude: docs/source/how_to/how_to_specify_algorithm_and_algo_options.md
   - repo: https://github.com/kynan/nbstripout
-    rev: 0.8.0
+    rev: 0.8.1
     hooks:
       - id: nbstripout
         exclude: |
@@ -132,7 +132,7 @@ repos:
         args:
          - --drop-empty-cells
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.13.0
+    rev: v1.14.1
     hooks:
       - id: mypy
         files: src|tests
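Note: the ruff bump from v0.7.2 to v0.9.2 explains most of the pure reformatting in the Python hunks below. Ruff 0.9 stabilized a batch of formatter rules (its 2025 style), including splitting long assert messages instead of the condition, joining implicitly concatenated string literals, and formatting expressions inside f-strings. A minimal sketch of the assert change, reusing a message from this repo but with otherwise made-up code:

```python
import numpy as np

steps = np.array([1e-3, 1e-4])
sequence = np.array([1.0, 2.0])

# Old style (ruff < 0.9): the condition is parenthesized and split.
assert (
    sequence.shape[0] == steps.shape[0]
), "Length of ``steps`` must coincide with length of ``sequence``."

# New style (ruff >= 0.9): the condition stays inline; the message is wrapped.
assert sequence.shape[0] == steps.shape[0], (
    "Length of ``steps`` must coincide with length of ``sequence``."
)
```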
2 changes: 1 addition & 1 deletion .tools/envs/testenv-linux.yml
@@ -23,7 +23,7 @@ dependencies:
   - scipy>=1.2.1 # run, tests
   - sqlalchemy # run, tests
   - seaborn # dev, tests
-  - mypy=1.13 # dev, tests
+  - mypy=1.14.1 # dev, tests
   - pyyaml # dev, tests
   - jinja2 # dev, tests
   - annotated-types # dev, tests
2 changes: 1 addition & 1 deletion .tools/envs/testenv-numpy.yml
@@ -21,7 +21,7 @@ dependencies:
   - scipy>=1.2.1 # run, tests
   - sqlalchemy # run, tests
   - seaborn # dev, tests
-  - mypy=1.13 # dev, tests
+  - mypy=1.14.1 # dev, tests
   - pyyaml # dev, tests
   - jinja2 # dev, tests
   - annotated-types # dev, tests
2 changes: 1 addition & 1 deletion .tools/envs/testenv-others.yml
@@ -21,7 +21,7 @@ dependencies:
   - scipy>=1.2.1 # run, tests
   - sqlalchemy # run, tests
   - seaborn # dev, tests
-  - mypy=1.13 # dev, tests
+  - mypy=1.14.1 # dev, tests
   - pyyaml # dev, tests
   - jinja2 # dev, tests
   - annotated-types # dev, tests
2 changes: 1 addition & 1 deletion .tools/envs/testenv-pandas.yml
@@ -21,7 +21,7 @@ dependencies:
   - scipy>=1.2.1 # run, tests
   - sqlalchemy # run, tests
   - seaborn # dev, tests
-  - mypy=1.13 # dev, tests
+  - mypy=1.14.1 # dev, tests
   - pyyaml # dev, tests
   - jinja2 # dev, tests
   - annotated-types # dev, tests
4 changes: 2 additions & 2 deletions docs/source/development/ep-01-pytrees.md
@@ -384,7 +384,7 @@ much complexity by avoiding complex pytrees as inputs and outputs at the same ti
 To see this in action, let's look at an example. We repeat the example from the JAX
 interface above with the following changes:

-1. The 1d numpy array in x\["a"\] is replaced by a DataFrame with `"value"` column
+1. The 1d numpy array in x["a"] is replaced by a DataFrame with `"value"` column
 1. The "d" entry in the output becomes a Series instead of a 1d numpy array.

 ```python
@@ -460,7 +460,7 @@ very first jacobian:
 +--------+----------+----------+----------+----------+----------+----------+----------+
 ```

-The indices \["j", "k", "l", "m"\] unfortunately never made it into the result because
+The indices ["j", "k", "l", "m"] unfortunately never made it into the result because
 they were only applied to elements that already came from a 2d array and thus always
 have a 3d Jacobian, i.e. the result entry `["c"][b"]` is a reshaped version of the upper
 right 2 by 4 array and the result entry `["d"]["b"]` is a reshaped version of the lower
4 changes: 2 additions & 2 deletions docs/source/how_to/how_to_scaling.md
@@ -90,7 +90,7 @@ these considerations are typically tighter than for parameters that have a small
 on the objective function.

 Thus, a natural approach to improve the scaling of the optimization problem is to re-map
-all parameters such that the bounds are \[0, 1\] for all parameters. This has the
+all parameters such that the bounds are [0, 1] for all parameters. This has the
 additional advantage that absolute and relative convergence criteria on parameter
 changes become the same.

@@ -164,7 +164,7 @@ to a strictly non-zero number for the `"start_values"` and `"gradient"` approach
 `"bounds"` approach avoids division by exact zeros by construction. The
 `"clipping_value"` can still be used to avoid extreme upscaling of parameters with very
 tight bounds. However, this means that the bounds of the re-scaled problem are not
-exactly \[0, 1\] for all parameters.
+exactly [0, 1] for all parameters.

 (scaling-default-values)=
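Note: both markdown hunks in this commit (here and in ep-01-pytrees.md above) only drop backslash escapes before square brackets, presumably because the updated mdformat no longer escapes literal brackets. Since the surrounding text describes re-mapping parameters so that all bounds become [0, 1], here is a minimal sketch of that linear rescaling; the helper names are illustrative, not optimagic's internal API:

```python
import numpy as np

def to_unit_interval(x, lower, upper):
    # Map each parameter from [lower, upper] to [0, 1].
    return (x - lower) / (upper - lower)

def from_unit_interval(u, lower, upper):
    # Invert the mapping to evaluate the objective on the original scale.
    return lower + u * (upper - lower)

x = np.array([2.0, 50.0])
lower = np.array([0.0, 0.0])
upper = np.array([4.0, 100.0])

u = to_unit_interval(x, lower, upper)  # array([0.5, 0.5])
assert np.allclose(from_unit_interval(u, lower, upper), x)
```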
2 changes: 1 addition & 1 deletion environment.yml
@@ -31,7 +31,7 @@ dependencies:
   - sphinx-panels # docs
   - sphinxcontrib-bibtex # docs
   - seaborn # dev, tests
-  - mypy=1.13 # dev, tests
+  - mypy=1.14.1 # dev, tests
   - pyyaml # dev, tests
   - jinja2 # dev, tests
   - furo # dev, docs
8 changes: 4 additions & 4 deletions src/optimagic/differentiation/derivatives.py
@@ -328,7 +328,7 @@ def first_derivative(
         f0 = np.array(f0, dtype=np.float64)

     # convert the raw evaluations to numpy arrays
-    raw_evals = _convert_evals_to_numpy(
+    raw_evals_arr = _convert_evals_to_numpy(
         raw_evals=raw_evals,
         unpacker=unpacker,
         registry=registry,
@@ -337,9 +337,9 @@ def first_derivative(
     )

     # apply finite difference formulae
-    evals_data = np.array(raw_evals).reshape(2, n_steps, len(x), -1)
-    evals_data = np.transpose(evals_data, axes=(0, 1, 3, 2))
-    evals = Evals(pos=evals_data[0], neg=evals_data[1])
+    evals_data = np.array(raw_evals_arr).reshape(2, n_steps, len(x), -1)
+    evals_data_transposed = np.transpose(evals_data, axes=(0, 1, 3, 2))
+    evals = Evals(pos=evals_data_transposed[0], neg=evals_data_transposed[1])

     jac_candidates = {}
     for m in ["forward", "backward", "central"]:
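A guess at the motivation for the renames in this hunk: mypy (bumped to 1.14.1 above) reports an error when a name is re-bound to a value of an incompatible type, and `raw_evals` was reused for both the raw evaluations and their numpy conversion. A toy illustration of the pattern, not the real signatures:

```python
import numpy as np

raw_evals = [1.0, 2.0, 3.0]  # inferred as list[float]

# Re-binding the same name to an ndarray would make mypy report
# 'Incompatible types in assignment'; a fresh name keeps both types precise.
raw_evals_arr = np.asarray(raw_evals)
```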
6 changes: 3 additions & 3 deletions src/optimagic/differentiation/generate_steps.py
@@ -90,9 +90,9 @@ def generate_steps(
     )
     min_steps = base_steps if min_steps is None else min_steps

-    assert (
-        bounds.upper - bounds.lower >= 2 * min_steps
-    ).all(), "min_steps is too large to fit into bounds."
+    assert (bounds.upper - bounds.lower >= 2 * min_steps).all(), (
+        "min_steps is too large to fit into bounds."
+    )

     upper_step_bounds = bounds.upper - x
     lower_step_bounds = bounds.lower - x
12 changes: 6 additions & 6 deletions src/optimagic/differentiation/richardson_extrapolation.py
@@ -52,13 +52,13 @@ def richardson_extrapolation(sequence, steps, method="central", num_terms=None):
     n_steps = steps.shape[0]
     num_terms = n_steps if num_terms is None else num_terms

-    assert (
-        seq_len == n_steps
-    ), "Length of ``steps`` must coincide with length of ``sequence``."
+    assert seq_len == n_steps, (
+        "Length of ``steps`` must coincide with length of ``sequence``."
+    )
     assert num_terms > 0, "``num_terms`` must be greater than zero."
-    assert (
-        seq_len - 1 >= num_terms
-    ), "``num_terms`` cannot be greater than ``seq_len`` - 1."
+    assert seq_len - 1 >= num_terms, (
+        "``num_terms`` cannot be greater than ``seq_len`` - 1."
+    )

     step_ratio = _compute_step_ratio(steps)
     order, exponentiation_step = _get_order_and_exponentiation_step(method)
2 changes: 1 addition & 1 deletion src/optimagic/examples/criterion_functions.py
@@ -114,7 +114,7 @@ def rosenbrock_gradient(params: PyTree) -> PyTree:
     l4 = np.delete(x, [0])
     l4 = np.append(l4, 0)
     l5 = np.full((len(x) - 1), 2)
-    l5 = np.append(l5, 0)
+    l5 = np.append(l5, 0)  # type: ignore[assignment]
     flat = 100 * (4 * (l1**3) + 2 * l2 - 2 * (l3**2) - 4 * (l4 * x)) + 2 * l1 - l5
     return _unflatten_gradient(flat, params)
5 changes: 4 additions & 1 deletion src/optimagic/logging/logger.py
@@ -187,7 +187,10 @@ def _build_history_dataframe(self) -> pd.DataFrame:

         times = np.array(history["time"])
         times -= times[0]
-        history["time"] = times.tolist()
+        # For numpy arrays with ndim = 0, tolist() returns a scalar, which violates the
+        # type hinting list[Any] from above. As history["time"] is always a list, this
+        # case is safe to ignore.
+        history["time"] = times.tolist()  # type: ignore[assignment]

         df = pd.DataFrame(history)
         df = df.merge(
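The behavior described in the new comment is easy to verify; a quick sketch:

```python
import numpy as np

# tolist() on a 1d array returns a list ...
assert np.array([1.0, 2.0]).tolist() == [1.0, 2.0]

# ... but on a 0-dimensional array it returns a bare scalar, which is why
# mypy cannot narrow the return type to list.
scalar = np.array(1.0).tolist()
assert scalar == 1.0 and not isinstance(scalar, list)
```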
6 changes: 4 additions & 2 deletions src/optimagic/optimization/multistart.py
@@ -313,9 +313,11 @@ def run_explorations(
     """
     internal_problem = internal_problem.with_step_id(step_id)
-    x_list = list(sample)
+    x_list: list[NDArray[np.float64]] = list(sample)

-    raw_values = np.array(internal_problem.exploration_fun(x_list, n_cores=n_cores))
+    raw_values = np.asarray(
+        internal_problem.exploration_fun(x_list, n_cores=n_cores), dtype=np.float64
+    )

     is_valid = np.isfinite(raw_values)
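A guess at the motivation here: `np.array` infers its dtype from the inputs, while `np.asarray` with an explicit dtype guarantees a float64 result that matches the `NDArray[np.float64]` annotations used for type checking. A small demonstration:

```python
import numpy as np

values = [1, 2, 3]  # plain Python ints
assert np.array(values).dtype.kind == "i"  # dtype is inferred (integer)
assert np.asarray(values, dtype=np.float64).dtype == np.float64  # guaranteed
```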
3 changes: 1 addition & 2 deletions src/optimagic/optimization/optimize_result.py
@@ -173,8 +173,7 @@ def __repr__(self) -> str:

         if self.start_fun is not None and self.fun is not None:
             improvement = (
-                f"The value of criterion improved from {self.start_fun} to "
-                f"{self.fun}."
+                f"The value of criterion improved from {self.start_fun} to {self.fun}."
             )
         else:
             improvement = None
5 changes: 3 additions & 2 deletions src/optimagic/optimizers/nag_optimizers.py
@@ -1051,8 +1051,9 @@ def _build_options_dict(user_input, default_options):
     invalid = [x for x in user_input if x not in full_options]
     if len(invalid) > 0:
         raise ValueError(
-            f"You specified illegal options {', '.join(invalid)}. Allowed are: "
-            ", ".join(full_options.keys())
+            f"You specified illegal options {', '.join(invalid)}. Allowed are: , ".join(
+                full_options.keys()
+            )
         )
     full_options.update(user_input)
     return full_options
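Worth flagging: the formatter merely merged the implicitly concatenated string literals here, which makes a pre-existing bug easier to spot. Because `.join` is called on the entire message string, the message acts as the join separator instead of appearing once in the output. A sketch of the actual behavior and of what was presumably intended:

```python
full_options = {"a": 1, "b": 2}

# What the code above produces: the whole message is the separator.
msg = "You specified illegal options x. Allowed are: , ".join(full_options.keys())
assert msg == "aYou specified illegal options x. Allowed are: , b"

# What was presumably intended:
intended = f"You specified illegal options x. Allowed are: {', '.join(full_options)}"
assert intended == "You specified illegal options x. Allowed are: a, b"
```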
6 changes: 1 addition & 5 deletions src/optimagic/optimizers/scipy_optimizers.py
@@ -799,11 +799,7 @@ def _solve_internal_problem(
         )
         raw_res = scipy.optimize.brute(
             func=problem.fun,
-            ranges=tuple(
-                map(
-                    tuple, np.column_stack((problem.bounds.lower, problem.bounds.upper))
-                )
-            ),
+            ranges=tuple(zip(problem.bounds.lower, problem.bounds.upper, strict=True)),
             Ns=self.n_grid_points,
             full_output=True,
             finish=self.polishing_function,
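Both constructions build the same tuple of `(lower_i, upper_i)` pairs for `scipy.optimize.brute`; the new one skips the intermediate 2-column array and, via `strict=True` (Python 3.10+), raises if the bounds have mismatched lengths. A quick equivalence check with made-up bounds:

```python
import numpy as np

lower = np.array([0.0, -1.0])
upper = np.array([1.0, 2.0])

old = tuple(map(tuple, np.column_stack((lower, upper))))
new = tuple(zip(lower, upper, strict=True))

assert old == new == ((0.0, 1.0), (-1.0, 2.0))
```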
6 changes: 3 additions & 3 deletions src/optimagic/parameters/consolidate_constraints.py
@@ -205,9 +205,9 @@ def _consolidate_fixes_with_equality_constraints(
     for eq in equality_constraints:
         if np.isfinite(fixed_value[eq["index"]]).any():
             valcounts = _unique_values(fixed_value[eq["index"]])
-            assert (
-                len(valcounts) == 1
-            ), "Equality constrained parameters cannot be fixed to different values."
+            assert len(valcounts) == 1, (
+                "Equality constrained parameters cannot be fixed to different values."
+            )
             fixed_value[eq["index"]] = valcounts[0]

     return fixed_value
8 changes: 4 additions & 4 deletions tests/estimagic/test_bootstrap.py
@@ -123,16 +123,16 @@ def test_bootstrap_existing_outcomes(setup):
     result = bootstrap(
         data=setup["df"],
         outcome=_outcome_func,
-        n_draws=2,
+        n_draws=3,
     )
-    assert len(result.outcomes) == 2
+    assert len(result.outcomes) == 3
     result = bootstrap(
         outcome=_outcome_func,
         data=setup["df"],
         existing_result=result,
-        n_draws=1,
+        n_draws=2,
     )
-    assert len(result.outcomes) == 1
+    assert len(result.outcomes) == 2


 def test_bootstrap_from_outcomes(setup, expected):
2 changes: 1 addition & 1 deletion tests/estimagic/test_estimation_table.py
@@ -395,7 +395,7 @@ def test_get_model_names():
 def test_get_default_column_names_and_groups():
     model_names = ["a_name", "a_name", "(3)", "(4)", "third_name"]
     res_names, res_groups = _get_default_column_names_and_groups(model_names)
-    exp_names = [f"({i+1})" for i in range(len(model_names))]
+    exp_names = [f"({i + 1})" for i in range(len(model_names))]
     exp_groups = ["a_name", "a_name", "(3)", "(4)", "third_name"]
     assert res_names == exp_names
     assert res_groups == exp_groups
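The extra spaces inside the f-string are another ruff 0.9 formatter change: expressions in f-strings are now formatted like ordinary code. The rendered strings are identical:

```python
i = 0
assert f"({i+1})" == f"({i + 1})" == "(1)"
```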
5 changes: 4 additions & 1 deletion tests/optimagic/optimization/test_with_constraints.py
@@ -244,7 +244,10 @@ def test_three_independent_constraints():
     )
     expected = np.array([0] * 4 + [4, 5] + [0] + [7.5] * 2 + [0])

-    aaae(res.params, expected, decimal=4)
+    # TODO: Increase precision back to decimal=4. The reduced precision is likely due
+    # to the re-written L-BFGS-B algorithm in SciPy 1.15.
+    # See https://github.com/optimagic-dev/optimagic/issues/556.
+    aaae(res.params, expected, decimal=3)


 INVALID_CONSTRAINT_COMBIS = [
