Sourcery refactored master branch #1

Open · wants to merge 1 commit into base: master

Conversation

@sourcery-ai sourcery-ai bot commented Jun 27, 2022

Branch master refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy.

See our documentation here.

Run Sourcery locally

Reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line

To manually merge these changes, make sure you're on the master branch, then run:

```shell
git fetch origin sourcery/master
git merge --ff-only FETCH_HEAD
git reset HEAD^
```

Help us improve this pull request!

@sourcery-ai sourcery-ai bot requested a review from xinetzone June 27, 2022 01:47
@sourcery-ai sourcery-ai bot left a comment

Due to GitHub API limits, only the first 60 comments can be shown.

```diff
-htmlhelp_basename = project + "-doc"
+htmlhelp_basename = f"{project}-doc"
```

Lines 178-221 refactored with the following changes:
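The diff above swaps string concatenation for an f-string. A minimal standalone sketch of the equivalence (the `project` value below is a placeholder, not read from conf.py):

```python
project = "torchmetrics"  # placeholder value for illustration

# Before: build the basename by concatenation
basename_concat = project + "-doc"

# After: an f-string produces the identical result and reads more directly
basename_fstring = f"{project}-doc"
```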

```diff
-    transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
-    return transformer_encoder
+    return nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
```

Function get_user_model_encoder refactored with the following changes:

```diff
-        output_text = re.sub(self.pattern, " ", text.lower())
-
-        return output_text
+        return re.sub(self.pattern, " ", text.lower())
```

Function UserNormalizer.__call__ refactored with the following changes:

Comment on lines -68 to +66

```diff
-        output_tokens = re.split(self.pattern, text)
-
-        return output_tokens
+        return re.split(self.pattern, text)
```

Function UserTokenizer.__call__ refactored with the following changes:

```diff
-        self._groups = {}
-        for idx, values in enumerate(temp.values()):
-            self._groups[idx] = values
+        self._groups = dict(enumerate(temp.values()))
```

Function MetricCollection._merge_compute_groups refactored with the following changes:
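The `dict(enumerate(...))` rewrite above can be checked in isolation; `temp` here is a made-up stand-in for the compute-group mapping:

```python
temp = {"acc": ["Accuracy"], "prf": ["Precision", "Recall"]}  # hypothetical compute groups

# Before: build the index -> values mapping with an explicit loop
groups_loop = {}
for idx, values in enumerate(temp.values()):
    groups_loop[idx] = values

# After: dict(enumerate(...)) produces the same mapping in one expression
groups_expr = dict(enumerate(temp.values()))
```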

Comment on lines -577 to +571

```diff
-        if len(gt) == 0 and len(det) == 0:
+        if not gt and not det:
```

Function MeanAveragePrecision._evaluate_image refactored with the following changes:
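The `len(x) == 0` to `not x` rewrite relies on standard container truthiness. A sketch (note the equivalence holds for lists, dicts, and sets, but not for multi-element tensors, where `not x` raises):

```python
def both_empty_len(gt, det):
    # Before: explicit length checks
    return len(gt) == 0 and len(det) == 0

def both_empty_truthiness(gt, det):
    # After: empty containers are falsy
    return not gt and not det
```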

Comment on lines -709 to +707

```diff
-        mean_prec = torch.tensor([-1.0]) if len(prec[prec > -1]) == 0 else torch.mean(prec[prec > -1])
-        return mean_prec
+        return (
+            torch.tensor([-1.0])
+            if len(prec[prec > -1]) == 0
+            else torch.mean(prec[prec > -1])
+        )
```

Function MeanAveragePrecision._summarize refactored with the following changes:

```diff
-    if preds.shape[0:2] != target.shape[0:2]:
+    if preds.shape[:2] != target.shape[:2]:
```

Function permutation_invariant_training refactored with the following changes:

Comment on lines -180 to +182

```diff
-    preds_pmted = torch.stack([torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)])
-    return preds_pmted
+    return torch.stack(
+        [torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)]
+    )
```

Function pit_permutate refactored with the following changes:

```diff
-    elif not _TORCH_GREATER_EQUAL_1_8:
+    else:
```

Function signal_distortion_ratio refactored with the following changes:

```diff
-    mode = _check_classification_inputs(
+    return _check_classification_inputs(
```

Function _mode refactored with the following changes:

Comment on lines -397 to +398

```diff
-    if average in ["macro", "weighted", "none", None] and (not num_classes or num_classes < 1):
+    if average in {"macro", "weighted", "none", None} and (
+        (not num_classes or num_classes < 1)
+    ):
```

Function accuracy refactored with the following changes:
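The list-to-set-literal change above turns a linear membership scan into a hash lookup; `None` is hashable, so it can sit in a set literal alongside strings. A sketch over a few illustrative `average` candidates:

```python
allowed = {"macro", "weighted", "none", None}  # set literal: O(1) membership test

# Membership answers match the old list version for every candidate
for candidate in ("macro", "weighted", "none", None, "micro", "samples"):
    in_list = candidate in ["macro", "weighted", "none", None]
    in_set = candidate in allowed
    assert in_list == in_set
```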

Comment on lines -154 to +157

```diff
-    res = []
-    for p, r in zip(precision, recall):
-        res.append(-torch.sum((r[1:] - r[:-1]) * p[:-1]))
+    res = [
+        -torch.sum((r[1:] - r[:-1]) * p[:-1])
+        for p, r in zip(precision, recall)
+    ]
```

Function _average_precision_compute_with_precision_recall refactored with the following changes:
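The loop-to-comprehension rewrite above can be demonstrated with plain floats in place of tensors (`area` and the toy curves below are illustrative, not torchmetrics code):

```python
precision = [[0.9, 0.8], [1.0, 0.5]]  # toy per-class precision values
recall = [[0.1, 0.6], [0.2, 0.9]]     # toy per-class recall values

def area(p, r):
    # list analogue of -torch.sum((r[1:] - r[:-1]) * p[:-1])
    return -sum((r[i + 1] - r[i]) * p[i] for i in range(len(r) - 1))

# Before: append inside a loop
res_loop = []
for p, r in zip(precision, recall):
    res_loop.append(area(p, r))

# After: the same values via a list comprehension
res_comp = [area(p, r) for p, r in zip(precision, recall)]
```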

Comment on lines -146 to +150

```diff
-        if not ((0 <= preds) * (preds <= 1)).all():
+        if not ((preds >= 0) * (preds <= 1)).all():
             preds = preds.sigmoid()
         confidences, accuracies = preds, target
     elif mode == DataType.MULTICLASS:
-        if not ((0 <= preds) * (preds <= 1)).all():
+        if not ((preds >= 0) * (preds <= 1)).all():
```

Function _ce_update refactored with the following changes:

```diff
-    confmat = confmat.float() if not confmat.is_floating_point() else confmat
+    confmat = confmat if confmat.is_floating_point() else confmat.float()
```

Function _cohen_kappa_compute refactored with the following changes:

Comment on lines -38 to +44

```diff
-    if sample_weight is not None:
-        if sample_weight.ndim != 1 or sample_weight.shape[0] != preds.shape[0]:
-            raise ValueError(
-                "Expected sample weights to be 1 dimensional and have same size"
-                f" as the first dimension of preds and target but got {sample_weight.shape}"
-            )
+    if sample_weight is not None and (
+        sample_weight.ndim != 1 or sample_weight.shape[0] != preds.shape[0]
+    ):
+        raise ValueError(
+            "Expected sample weights to be 1 dimensional and have same size"
+            f" as the first dimension of preds and target but got {sample_weight.shape}"
+        )
```

Function _check_ranking_input refactored with the following changes:

Comment on lines -189 to +197

```diff
-    if average in ["macro", "weighted", "none", None] and (not num_classes or num_classes < 1):
+    if average in {"macro", "weighted", "none", None} and (
+        (not num_classes or num_classes < 1)
+    ):
         raise ValueError(f"When you set `average` as {average}, you have to provide the number of classes.")

     if num_classes and ignore_index is not None and (not 0 <= ignore_index < num_classes or num_classes == 1):
         raise ValueError(f"The `ignore_index` {ignore_index} is not valid for inputs with {num_classes} classes")

-    reduce = "macro" if average in ["weighted", "none", None] else average
+    reduce = "macro" if average in {"weighted", "none", None} else average
```

Function specificity refactored with the following changes:

```diff
-    if weights is None:
-        weights = torch.ones_like(denominator)
-    else:
-        weights = weights.float()
-
+    weights = torch.ones_like(denominator) if weights is None else weights.float()
```

Function _reduce_stat_scores refactored with the following changes:

```diff
-    if isinstance(dim, int):
-        dim_list = [dim]
-    else:
-        dim_list = list(dim)
+    dim_list = [dim] if isinstance(dim, int) else list(dim)
```

Function _psnr_update refactored with the following changes:

Comment on lines -33 to +42

```diff
-    if y is not None:
-        if y.ndim != 2 or y.shape[1] != x.shape[1]:
-            raise ValueError(
-                "Expected argument `y` to be a 2D tensor of shape `[M, d]` where"
-                " `d` should be same as the last dimension of `x`"
-            )
-        zero_diagonal = False if zero_diagonal is None else zero_diagonal
-    else:
+    if y is None:
         y = x.clone()
         zero_diagonal = True if zero_diagonal is None else zero_diagonal
+    elif y.ndim != 2 or y.shape[1] != x.shape[1]:
+        raise ValueError(
+            "Expected argument `y` to be a 2D tensor of shape `[M, d]` where"
+            " `d` should be same as the last dimension of `x`"
+        )
+    else:
+        zero_diagonal = False if zero_diagonal is None else zero_diagonal
```

Function _check_input refactored with the following changes:
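The control-flow change above puts the `y is None` default-filling branch first, so the validation and the fall-through case become `elif`/`else` instead of a nested `if`. A simplified stand-in using lists instead of tensors (the names and the length check below are illustrative only):

```python
def check_input(x, y=None, zero_diagonal=None):
    # Handle the defaulting case first, as in the refactored version
    if y is None:
        y = list(x)
        zero_diagonal = True if zero_diagonal is None else zero_diagonal
    elif len(y) != len(x):  # stand-in for the real shape validation
        raise ValueError("`y` must have the same length as `x`")
    else:
        zero_diagonal = False if zero_diagonal is None else zero_diagonal
    return y, zero_diagonal
```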

```diff
-    mean_ape = _mean_absolute_percentage_error_compute(sum_abs_per_error, num_obs)
-
-    return mean_ape
+    return _mean_absolute_percentage_error_compute(sum_abs_per_error, num_obs)
```

Function mean_absolute_percentage_error refactored with the following changes:

Comment on lines -95 to -100

```diff
-    mean_ape = _symmetric_mean_absolute_percentage_error_compute(
+    return _symmetric_mean_absolute_percentage_error_compute(
         sum_abs_per_error,
         num_obs,
     )
-
-    return mean_ape
```

Function symmetric_mean_absolute_percentage_error refactored with the following changes:

Comment on lines -55 to +63

```diff
-        deviance_score = 2 * (_safe_xlogy(targets, targets / preds) + preds - targets)
+        else:
+            deviance_score = 2 * (_safe_xlogy(targets, targets / preds) + preds - targets)
     elif power == 2:
         # Gamma distribution
         if torch.any(preds <= 0) or torch.any(targets <= 0):
             raise ValueError(f"For power={power}, both 'preds' and 'targets' have to be strictly positive.")
-
-        deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)
+        else:
+            deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)
```

Function _tweedie_deviance_score_update refactored with the following changes:

Comment on lines -88 to -93

```diff
-    weighted_ape = _weighted_mean_absolute_percentage_error_compute(
+    return _weighted_mean_absolute_percentage_error_compute(
         sum_abs_error,
         sum_scale,
     )
-
-    return weighted_ape
```

Function weighted_mean_absolute_percentage_error refactored with the following changes:

Comment on lines -48 to +56

```diff
-    res = torch.div((torch.arange(len(positions), device=positions.device, dtype=torch.float32) + 1), positions).mean()
-    return res
+    return torch.div(
+        (
+            torch.arange(
+                len(positions), device=positions.device, dtype=torch.float32
+            )
+            + 1
+        ),
+        positions,
+    ).mean()
```

Function retrieval_average_precision refactored with the following changes:

```diff
-    for i in range(0, len(hyp) + 1):
+    for i in range(len(hyp) + 1):
```

Function _eed_function refactored with the following changes:
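`range(0, n)` and `range(n)` are identical in Python 3; the `0` start is the default. A quick check with a stand-in `hyp`:

```python
hyp = ["the", "cat", "sat"]  # stand-in hypothesis tokens

indices_before = list(range(0, len(hyp) + 1))
indices_after = list(range(len(hyp) + 1))

# Ranges even compare equal as objects in Python 3
same_range = range(0, 4) == range(4)
```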

```diff
-    sentence = " " + sentence + " "
+    sentence = f" {sentence} "
```

Function _preprocess_en refactored with the following changes:

Comment on lines -241 to +245

```diff
-    if len(sentence_level_scores) == 0:
-        return tensor(0.0)
-
-    average = sum(sentence_level_scores) / tensor(len(sentence_level_scores))
-    return average
+    return (
+        sum(sentence_level_scores) / tensor(len(sentence_level_scores))
+        if sentence_level_scores
+        else tensor(0.0)
+    )
```

Function _eed_compute refactored with the following changes:

```diff
-    if not isinstance(param, float) or isinstance(param, float) and param < 0:
+    if not isinstance(param, float) or param < 0:
```

Function extended_edit_distance refactored with the following changes:
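The condition above simplifies via a small boolean identity: in `not A or (A and B)`, the inner `A` is redundant, since the right operand is only evaluated when `A` holds. A brute-force check over a few illustrative inputs:

```python
def check_before(param):
    # `and` binds tighter than `or`, so this parses as: not A or (A and B)
    return not isinstance(param, float) or isinstance(param, float) and param < 0

def check_after(param):
    return not isinstance(param, float) or param < 0

# Short-circuiting keeps `param < 0` from running on non-floats in either form
samples = [1.5, -1.5, 0.0, 3, "x", None]
agreement = all(check_before(s) == check_after(s) for s in samples)
```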

Comment on lines -280 to +285

```diff
-        empty_row = [(int(_EDIT_OPERATIONS_COST.OP_UNDEFINED), _EDIT_OPERATIONS.OP_UNDEFINED)] * (length + 1)
-        return empty_row
+        return [
+            (
+                int(_EDIT_OPERATIONS_COST.OP_UNDEFINED),
+                _EDIT_OPERATIONS.OP_UNDEFINED,
+            )
+        ] * (length + 1)
```

Function _LevenshteinEditDistance._get_empty_row refactored with the following changes:
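The `[item] * (length + 1)` form repeats one object reference; that is safe here because the repeated element is an immutable tuple. A sketch with hypothetical stand-ins for the `_EDIT_OPERATIONS*` enums:

```python
from enum import IntEnum

class OpCost(IntEnum):      # hypothetical stand-in for _EDIT_OPERATIONS_COST
    OP_UNDEFINED = 99

OP_UNDEFINED = "undefined"  # hypothetical stand-in for _EDIT_OPERATIONS.OP_UNDEFINED

length = 3
row = [(int(OpCost.OP_UNDEFINED), OP_UNDEFINED)] * (length + 1)

# Every slot holds the very same tuple object, which is fine for immutable data
all_same_object = all(cell is row[0] for cell in row)
```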


sourcery-ai bot commented Jun 27, 2022

Sourcery Code Quality Report

✅  Merging this PR will increase code quality in the affected files by 0.08%.

| Quality metrics | Before | After | Change |
|---|---|---|---|
| Complexity | 6.49 ⭐ | 6.39 ⭐ | -0.10 👍 |
| Method Length | 66.12 🙂 | 65.68 🙂 | -0.44 👍 |
| Working memory | 9.25 🙂 | 9.24 🙂 | -0.01 👍 |
| Quality | 65.29% 🙂 | 65.37% 🙂 | 0.08% 👍 |

| Other metrics | Before | After | Change |
|---|---|---|---|
| Lines | 16589 | 16545 | -44 |
| Changed files | Quality Before | Quality After | Quality Change |
|---|---|---|---|
| docs/source/conf.py | 73.12% 🙂 | 73.05% 🙂 | -0.07% 👎 |
| examples/bert_score-own_model.py | 80.66% ⭐ | 80.42% ⭐ | -0.24% 👎 |
| examples/rouge_score-own_normalizer_and_tokenizer.py | 92.83% ⭐ | 93.63% ⭐ | 0.80% 👍 |
| src/torchmetrics/collections.py | 63.08% 🙂 | 64.04% 🙂 | 0.96% 👍 |
| src/torchmetrics/metric.py | 76.48% ⭐ | 76.49% ⭐ | 0.01% 👍 |
| src/torchmetrics/classification/stat_scores.py | 52.78% 🙂 | 52.78% 🙂 | 0.00% |
| src/torchmetrics/detection/mean_ap.py | 49.12% 😞 | 48.94% 😞 | -0.18% 👎 |
| src/torchmetrics/functional/audio/pit.py | 55.88% 🙂 | 56.00% 🙂 | 0.12% 👍 |
| src/torchmetrics/functional/audio/sdr.py | 54.84% 🙂 | 55.54% 🙂 | 0.70% 👍 |
| src/torchmetrics/functional/classification/accuracy.py | 51.94% 🙂 | 51.47% 🙂 | -0.47% 👎 |
| src/torchmetrics/functional/classification/average_precision.py | 60.27% 🙂 | 60.25% 🙂 | -0.02% 👎 |
| src/torchmetrics/functional/classification/calibration_error.py | 57.51% 🙂 | 57.51% 🙂 | 0.00% |
| src/torchmetrics/functional/classification/cohen_kappa.py | 59.83% 🙂 | 59.92% 🙂 | 0.09% 👍 |
| src/torchmetrics/functional/classification/confusion_matrix.py | 61.16% 🙂 | 62.00% 🙂 | 0.84% 👍 |
| src/torchmetrics/functional/classification/dice.py | 52.88% 🙂 | 52.42% 🙂 | -0.46% 👎 |
| src/torchmetrics/functional/classification/f_beta.py | 40.07% 😞 | 40.28% 😞 | 0.21% 👍 |
| src/torchmetrics/functional/classification/precision_recall.py | 48.81% 😞 | 48.81% 😞 | 0.00% |
| src/torchmetrics/functional/classification/precision_recall_curve.py | 59.60% 🙂 | 60.13% 🙂 | 0.53% 👍 |
| src/torchmetrics/functional/classification/ranking.py | 71.62% 🙂 | 71.74% 🙂 | 0.12% 👍 |
| src/torchmetrics/functional/classification/specificity.py | 51.71% 🙂 | 51.71% 🙂 | 0.00% |
| src/torchmetrics/functional/classification/stat_scores.py | 52.68% 🙂 | 53.01% 🙂 | 0.33% 👍 |
| src/torchmetrics/functional/image/psnr.py | 68.85% 🙂 | 69.53% 🙂 | 0.68% 👍 |
| src/torchmetrics/functional/pairwise/helpers.py | 73.10% 🙂 | 74.16% 🙂 | 1.06% 👍 |
| src/torchmetrics/functional/regression/mape.py | 90.88% ⭐ | 90.64% ⭐ | -0.24% 👎 |
| src/torchmetrics/functional/regression/symmetric_mape.py | 88.95% ⭐ | 88.55% ⭐ | -0.40% 👎 |
| src/torchmetrics/functional/regression/tweedie_deviance.py | 39.01% 😞 | 39.84% 😞 | 0.83% 👍 |
| src/torchmetrics/functional/regression/wmape.py | 93.14% ⭐ | 93.07% ⭐ | -0.07% 👎 |
| src/torchmetrics/functional/retrieval/average_precision.py | 74.68% 🙂 | 77.47% ⭐ | 2.79% 👍 |
| src/torchmetrics/functional/retrieval/reciprocal_rank.py | 83.30% ⭐ | 83.77% ⭐ | 0.47% 👍 |
| src/torchmetrics/functional/text/bert.py | 58.36% 🙂 | 58.13% 🙂 | -0.23% 👎 |
| src/torchmetrics/functional/text/bleu.py | 61.27% 🙂 | 61.37% 🙂 | 0.10% 👍 |
| src/torchmetrics/functional/text/chrf.py | 68.93% 🙂 | 69.05% 🙂 | 0.12% 👍 |
| src/torchmetrics/functional/text/eed.py | 67.13% 🙂 | 66.45% 🙂 | -0.68% 👎 |
| src/torchmetrics/functional/text/helper.py | 69.11% 🙂 | 68.26% 🙂 | -0.85% 👎 |
| src/torchmetrics/functional/text/rouge.py | 57.29% 🙂 | 56.96% 🙂 | -0.33% 👎 |
| src/torchmetrics/functional/text/sacre_bleu.py | 79.43% ⭐ | 78.58% ⭐ | -0.85% 👎 |
| src/torchmetrics/functional/text/squad.py | 75.82% ⭐ | 75.88% ⭐ | 0.06% 👍 |
| src/torchmetrics/functional/text/ter.py | 71.05% 🙂 | 71.11% 🙂 | 0.06% 👍 |
| src/torchmetrics/image/kid.py | 58.80% 🙂 | 58.58% 🙂 | -0.22% 👎 |
| src/torchmetrics/retrieval/precision_recall_curve.py | 57.59% 🙂 | 58.51% 🙂 | 0.92% 👍 |
| src/torchmetrics/text/bert.py | 50.63% 🙂 | 49.86% 😞 | -0.77% 👎 |
| src/torchmetrics/text/chrf.py | 70.49% 🙂 | 70.70% 🙂 | 0.21% 👍 |
| src/torchmetrics/text/eed.py | 68.11% 🙂 | 69.20% 🙂 | 1.09% 👍 |
| src/torchmetrics/utilities/checks.py | 56.39% 🙂 | 56.63% 🙂 | 0.24% 👍 |
| src/torchmetrics/utilities/data.py | 79.66% ⭐ | 80.49% ⭐ | 0.83% 👍 |
| src/torchmetrics/utilities/distributed.py | 63.16% 🙂 | 63.49% 🙂 | 0.33% 👍 |
| src/torchmetrics/utilities/enums.py | 92.12% ⭐ | 93.23% ⭐ | 1.11% 👍 |
| src/torchmetrics/utilities/imports.py | 81.54% ⭐ | 81.41% ⭐ | -0.13% 👎 |
| src/torchmetrics/wrappers/bootstrapping.py | 67.06% 🙂 | 66.93% 🙂 | -0.13% 👎 |
| src/torchmetrics/wrappers/multioutput.py | 76.27% ⭐ | 76.40% ⭐ | 0.13% 👍 |
| tests/integrations/lightning/boring_model.py | 95.31% ⭐ | 95.49% ⭐ | 0.18% 👍 |
| tests/unittests/audio/test_pit.py | 79.04% ⭐ | 79.07% ⭐ | 0.03% 👍 |
| tests/unittests/bases/test_collections.py | 69.99% 🙂 | 70.06% 🙂 | 0.07% 👍 |
| tests/unittests/bases/test_metric.py | 84.71% ⭐ | 84.91% ⭐ | 0.20% 👍 |
| tests/unittests/classification/test_accuracy.py | 67.53% 🙂 | 67.64% 🙂 | 0.11% 👍 |
| tests/unittests/classification/test_binned_precision_recall.py | 77.66% ⭐ | 77.73% ⭐ | 0.07% 👍 |
| tests/unittests/classification/test_calibration_error.py | 64.56% 🙂 | 65.59% 🙂 | 1.03% 👍 |
| tests/unittests/classification/test_f_beta.py | 62.27% 🙂 | 62.41% 🙂 | 0.14% 👍 |
| tests/unittests/classification/test_specificity.py | 58.09% 🙂 | 58.79% 🙂 | 0.70% 👍 |
| tests/unittests/helpers/reference_metrics.py | 41.81% 😞 | 41.52% 😞 | -0.29% 👎 |
| tests/unittests/regression/test_cosine_similarity.py | 79.16% ⭐ | 80.37% ⭐ | 1.21% 👍 |
| tests/unittests/retrieval/helpers.py | 62.87% 🙂 | 63.53% 🙂 | 0.66% 👍 |
| tests/unittests/retrieval/test_precision_recall_curve.py | 29.42% 😞 | 32.54% 😞 | 3.12% 👍 |
| tests/unittests/text/test_bertscore.py | 74.07% 🙂 | 73.97% 🙂 | -0.10% 👎 |
| tests/unittests/text/test_rouge.py | 61.17% 🙂 | 61.15% 🙂 | -0.02% 👎 |
| tests/unittests/wrappers/test_bootstrapping.py | 67.05% 🙂 | 67.08% 🙂 | 0.03% 👍 |
| tests/unittests/wrappers/test_multioutput.py | 87.25% ⭐ | 87.61% ⭐ | 0.36% 👍 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
|---|---|---|---|---|---|---|
| src/torchmetrics/functional/text/bert.py | bert_score | 26 😞 | 405 ⛔ | 25 ⛔ | 14.75% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| src/torchmetrics/functional/text/rouge.py | _rouge_score_update | 42 ⛔ | 280 ⛔ | 17 ⛔ | 15.58% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| src/torchmetrics/detection/mean_ap.py | MeanAveragePrecision._evaluate_image | 22 😞 | 486 ⛔ | 20 ⛔ | 18.74% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| tests/unittests/helpers/reference_metrics.py | _calibration_error | 22 😞 | 453 ⛔ | 16 ⛔ | 22.24% ⛔ | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |
| tests/unittests/retrieval/test_precision_recall_curve.py | _compute_precision_recall_curve | 21 😞 | 307 ⛔ | 16 ⛔ | 25.39% 😞 | Refactor to reduce nesting. Try splitting into smaller methods. Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

  • ⭐ excellent
  • 🙂 good
  • 😞 poor
  • ⛔ very poor

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Help us improve this quality report!
