Sourcery refactored master branch #1
base: master
Conversation
Due to GitHub API limits, only the first 60 comments can be shown.
- htmlhelp_basename = project + "-doc"
+ htmlhelp_basename = f"{project}-doc"

Lines 178-221 refactored with the following changes:
- Use f-string instead of string concatenation [×5] (use-fstring-for-concatenation)
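The pattern can be shown in isolation; the `project` value below is illustrative, not the repository's real Sphinx config:

```python
project = "torchmetrics"  # illustrative value, not the actual conf.py setting

# Before: string concatenation
basename_concat = project + "-doc"

# After: f-string interpolation, which Sourcery flags as more readable
basename_fstring = f"{project}-doc"
```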
- transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
- return transformer_encoder
+ return nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

Function get_user_model_encoder refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
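A minimal sketch of the inline-immediately-returned-variable rewrite, using a hypothetical helper rather than the torch constructor:

```python
def scaled_before(values, factor):
    # Before: the temporary name adds no information over the expression
    result = [v * factor for v in values]
    return result

def scaled_after(values, factor):
    # After: return the expression directly
    return [v * factor for v in values]
```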
- output_text = re.sub(self.pattern, " ", text.lower())
- return output_text
+ return re.sub(self.pattern, " ", text.lower())

Function UserNormalizer.__call__ refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- output_tokens = re.split(self.pattern, text)
- return output_tokens
+ return re.split(self.pattern, text)

Function UserTokenizer.__call__ refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- self._groups = {}
- for idx, values in enumerate(temp.values()):
-     self._groups[idx] = values
+ self._groups = dict(enumerate(temp.values()))

Function MetricCollection._merge_compute_groups refactored with the following changes:
- Convert for loop into dictionary comprehension (dict-comprehension)
- Replace identity comprehension with call to collection constructor (identity-comprehension)
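The two steps compose as follows; `temp` here is a made-up stand-in for the intermediate grouping dict:

```python
temp = {"metric_a": ["acc"], "metric_b": ["f1", "recall"]}

# Before: explicit loop building a dict keyed by position
groups_loop = {}
for idx, values in enumerate(temp.values()):
    groups_loop[idx] = values

# Step 1: the loop becomes a dict comprehension...
groups_comp = {idx: values for idx, values in enumerate(temp.values())}

# Step 2: ...which is an identity comprehension over (key, value) pairs,
# so dict() can consume enumerate() directly
groups_ctor = dict(enumerate(temp.values()))
```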
- if len(gt) == 0 and len(det) == 0:
+ if not gt and not det:

Function MeanAveragePrecision._evaluate_image refactored with the following changes:
- Simplify sequence length comparison [×2] (simplify-len-comparison)
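The rewrite relies on empty sequences being falsy; a small stand-in:

```python
def both_empty(gt, det):
    # After: truth-testing replaces the len() == 0 comparisons
    return not gt and not det
```

One caveat worth noting: this is safe for Python lists, but `bool()` on a multi-element tensor raises, so the rewrite assumes `gt` and `det` behave like plain sequences at this point.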
- mean_prec = torch.tensor([-1.0]) if len(prec[prec > -1]) == 0 else torch.mean(prec[prec > -1])
- return mean_prec
+ return (
+     torch.tensor([-1.0])
+     if len(prec[prec > -1]) == 0
+     else torch.mean(prec[prec > -1])
+ )

Function MeanAveragePrecision._summarize refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- if preds.shape[0:2] != target.shape[0:2]:
+ if preds.shape[:2] != target.shape[:2]:

Function permutation_invariant_training refactored with the following changes:
- Replace a[0:x] with a[:x] and a[x:len(a)] with a[x:] [×3] (remove-redundant-slice-index)
- Remove redundant conditional (remove-redundant-if)
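Both redundant slice forms in one small example, on a plain list:

```python
a = [10, 20, 30, 40]

# Before: explicit bounds that merely restate the defaults
head_explicit = a[0:2]
tail_explicit = a[2:len(a)]

# After: omitted bounds default to the start and end of the sequence
head = a[:2]
tail = a[2:]
```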
- preds_pmted = torch.stack([torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)])
- return preds_pmted
+ return torch.stack(
+     [torch.index_select(pred, 0, p) for pred, p in zip(preds, perm)]
+ )

Function pit_permutate refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- elif not _TORCH_GREATER_EQUAL_1_8:
+ else:

Function signal_distortion_ratio refactored with the following changes:
- Remove redundant conditional (remove-redundant-if)
- Replace if statement with if expression (assign-if-exp)
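The assign-if-exp rewrite, sketched with an invented flag and values (the real branch bodies in `signal_distortion_ratio` differ):

```python
_TORCH_GREATER_EQUAL_1_8 = True  # stand-in flag for illustration only

# Before: an if/else statement assigning one of two values
if _TORCH_GREATER_EQUAL_1_8:
    backend = "new"
else:
    backend = "legacy"

# After: a single conditional expression
backend_expr = "new" if _TORCH_GREATER_EQUAL_1_8 else "legacy"
```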
- mode = _check_classification_inputs(
+ return _check_classification_inputs(

Function _mode refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- if average in ["macro", "weighted", "none", None] and (not num_classes or num_classes < 1):
+ if average in {"macro", "weighted", "none", None} and (
+     (not num_classes or num_classes < 1)
+ ):

Function accuracy refactored with the following changes:
- Use set when checking membership of a collection of literals [×2] (collection-into-set)
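A condensed version of the set-membership check (hypothetical function name, condensed from the diff above):

```python
def requires_num_classes(average, num_classes):
    # Set literal: hashed membership test instead of a linear list scan;
    # None hashes fine alongside the string literals
    return average in {"macro", "weighted", "none", None} and (
        not num_classes or num_classes < 1
    )
```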
- res = []
- for p, r in zip(precision, recall):
-     res.append(-torch.sum((r[1:] - r[:-1]) * p[:-1]))
+ res = [
+     -torch.sum((r[1:] - r[:-1]) * p[:-1])
+     for p, r in zip(precision, recall)
+ ]

Function _average_precision_compute_with_precision_recall refactored with the following changes:
- Convert for loop into list comprehension (list-comprehension)
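The append-loop-to-comprehension move, with plain floats standing in for the tensor arithmetic (the `area` helper is a made-up stand-in for the `-torch.sum(...)` expression):

```python
precision = [[1.0, 0.5, 0.5], [0.8, 0.8, 0.4]]
recall = [[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]]

def area(p, r):
    # stand-in for -torch.sum((r[1:] - r[:-1]) * p[:-1])
    return -sum((r[i + 1] - r[i]) * p[i] for i in range(len(r) - 1))

# Before: accumulate with append
res_loop = []
for p, r in zip(precision, recall):
    res_loop.append(area(p, r))

# After: list comprehension
res_comp = [area(p, r) for p, r in zip(precision, recall)]
```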
- if not ((0 <= preds) * (preds <= 1)).all():
+ if not ((preds >= 0) * (preds <= 1)).all():
      preds = preds.sigmoid()
  confidences, accuracies = preds, target
  elif mode == DataType.MULTICLASS:
- if not ((0 <= preds) * (preds <= 1)).all():
+ if not ((preds >= 0) * (preds <= 1)).all():

Function _ce_update refactored with the following changes:
- Ensure constant in comparison is on the right [×2] (flip-comparison)
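The flip is purely cosmetic; both orders evaluate identically, shown here on a plain list:

```python
preds = [0.0, 0.5, 1.0]

# Before: constant on the left ("Yoda" order)
in_range_before = all(0 <= p and p <= 1 for p in preds)

# After: variable first, constant on the right
in_range_after = all(p >= 0 and p <= 1 for p in preds)
```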
- confmat = confmat.float() if not confmat.is_floating_point() else confmat
+ confmat = confmat if confmat.is_floating_point() else confmat.float()

Function _cohen_kappa_compute refactored with the following changes:
- Swap if/else branches of if expression to remove negation (swap-if-expression)
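The same swap on a plain value (int standing in for an integer tensor):

```python
x = 3  # an int, so the floating-point check is False

# Before: negated test, which the reader must invert mentally
as_float_before = float(x) if not isinstance(x, float) else x

# After: positive test with the branches swapped, same result
as_float_after = x if isinstance(x, float) else float(x)
```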
- if sample_weight is not None:
-     if sample_weight.ndim != 1 or sample_weight.shape[0] != preds.shape[0]:
-         raise ValueError(
-             "Expected sample weights to be 1 dimensional and have same size"
-             f" as the first dimension of preds and target but got {sample_weight.shape}"
-         )
+ if sample_weight is not None and (
+     sample_weight.ndim != 1 or sample_weight.shape[0] != preds.shape[0]
+ ):
+     raise ValueError(
+         "Expected sample weights to be 1 dimensional and have same size"
+         f" as the first dimension of preds and target but got {sample_weight.shape}"
+     )

Function _check_ranking_input refactored with the following changes:
- Merge nested if conditions (merge-nested-ifs)
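The merge is safe because `and` short-circuits: when `sample_weight` is None, the shape test never runs. A simplified stand-in (using `len` in place of the ndim/shape checks):

```python
def check_sample_weight(sample_weight, n_preds):
    # After the merge: one `if` with `and` replaces the nested pair
    if sample_weight is not None and len(sample_weight) != n_preds:
        raise ValueError(
            "Expected sample weights to have the same size as the first dimension of preds"
        )
```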
- if average in ["macro", "weighted", "none", None] and (not num_classes or num_classes < 1):
+ if average in {"macro", "weighted", "none", None} and (
+     (not num_classes or num_classes < 1)
+ ):
      raise ValueError(f"When you set `average` as {average}, you have to provide the number of classes.")

  if num_classes and ignore_index is not None and (not 0 <= ignore_index < num_classes or num_classes == 1):
      raise ValueError(f"The `ignore_index` {ignore_index} is not valid for inputs with {num_classes} classes")

- reduce = "macro" if average in ["weighted", "none", None] else average
+ reduce = "macro" if average in {"weighted", "none", None} else average

Function specificity refactored with the following changes:
- Use set when checking membership of a collection of literals [×2] (collection-into-set)
- if weights is None:
-     weights = torch.ones_like(denominator)
- else:
-     weights = weights.float()
+ weights = torch.ones_like(denominator) if weights is None else weights.float()

Function _reduce_stat_scores refactored with the following changes:
- Replace if statement with if expression (assign-if-exp)
- if isinstance(dim, int):
-     dim_list = [dim]
- else:
-     dim_list = list(dim)
+ dim_list = [dim] if isinstance(dim, int) else list(dim)

Function _psnr_update refactored with the following changes:
- Replace if statement with if expression (assign-if-exp)
- if y is not None:
-     if y.ndim != 2 or y.shape[1] != x.shape[1]:
-         raise ValueError(
-             "Expected argument `y` to be a 2D tensor of shape `[M, d]` where"
-             " `d` should be same as the last dimension of `x`"
-         )
-     zero_diagonal = False if zero_diagonal is None else zero_diagonal
- else:
+ if y is None:
      y = x.clone()
      zero_diagonal = True if zero_diagonal is None else zero_diagonal
+ elif y.ndim != 2 or y.shape[1] != x.shape[1]:
+     raise ValueError(
+         "Expected argument `y` to be a 2D tensor of shape `[M, d]` where"
+         " `d` should be same as the last dimension of `x`"
+     )
+ else:
+     zero_diagonal = False if zero_diagonal is None else zero_diagonal

Function _check_input refactored with the following changes:
- Swap if/else branches (swap-if-else-branches)
- Merge else clause's nested if statement into elif (merge-else-if-into-elif)
- Lift code into else after jump in control flow (reintroduce-else)
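The reshaped control flow can be sketched with plain lists standing in for tensors: the `y is None` case comes first, the error check becomes an `elif`, and the default lands in the trailing `else`. `resolve_pair` and the `len` check are stand-ins, not the real `_check_input` API:

```python
def resolve_pair(x, y, zero_diagonal):
    if y is None:
        y = list(x)  # stand-in for x.clone()
        zero_diagonal = True if zero_diagonal is None else zero_diagonal
    elif len(y) != len(x):  # stand-in for the 2D shape check
        raise ValueError("`y` must match the last dimension of `x`")
    else:
        zero_diagonal = False if zero_diagonal is None else zero_diagonal
    return y, zero_diagonal
```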
- mean_ape = _mean_absolute_percentage_error_compute(sum_abs_per_error, num_obs)
- return mean_ape
+ return _mean_absolute_percentage_error_compute(sum_abs_per_error, num_obs)

Function mean_absolute_percentage_error refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- mean_ape = _symmetric_mean_absolute_percentage_error_compute(
+ return _symmetric_mean_absolute_percentage_error_compute(
      sum_abs_per_error,
      num_obs,
  )
- return mean_ape

Function symmetric_mean_absolute_percentage_error refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- deviance_score = 2 * (_safe_xlogy(targets, targets / preds) + preds - targets)
+ else:
+     deviance_score = 2 * (_safe_xlogy(targets, targets / preds) + preds - targets)
  elif power == 2:
      # Gamma distribution
      if torch.any(preds <= 0) or torch.any(targets <= 0):
          raise ValueError(f"For power={power}, both 'preds' and 'targets' have to be strictly positive.")
- deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)
+ else:
+     deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)

Function _tweedie_deviance_score_update refactored with the following changes:
- Simplify conditional into switch-like form [×3] (switch)
- Lift code into else after jump in control flow [×2] (reintroduce-else)
- Merge else clause's nested if statement into elif (merge-else-if-into-elif)
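The switch-like shape can be sketched with plain floats standing in for tensors. The power-2 (Gamma) branch mirrors the formula visible in the diff; the other branches are simplified stand-ins, not the library's exact code:

```python
import math

def deviance_term(pred, target, power):
    # Switch-like chain: each power value maps to one formula
    if power == 0:
        return (pred - target) ** 2
    elif power == 1:
        if pred <= 0 or target < 0:
            raise ValueError(f"For power={power}, 'preds' has to be strictly positive.")
        return 2 * (target * math.log(target / pred) + pred - target)
    elif power == 2:
        # Gamma distribution, as in the diff above
        if pred <= 0 or target <= 0:
            raise ValueError(f"For power={power}, both 'preds' and 'targets' have to be strictly positive.")
        return 2 * (math.log(pred / target) + (target / pred) - 1)
    raise ValueError(f"Unsupported power={power}")
```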
- weighted_ape = _weighted_mean_absolute_percentage_error_compute(
+ return _weighted_mean_absolute_percentage_error_compute(
      sum_abs_error,
      sum_scale,
  )
- return weighted_ape

Function weighted_mean_absolute_percentage_error refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- res = torch.div((torch.arange(len(positions), device=positions.device, dtype=torch.float32) + 1), positions).mean()
- return res
+ return torch.div(
+     (
+         torch.arange(
+             len(positions), device=positions.device, dtype=torch.float32
+         )
+         + 1
+     ),
+     positions,
+ ).mean()

Function retrieval_average_precision refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- for i in range(0, len(hyp) + 1):
+ for i in range(len(hyp) + 1):

Function _eed_function refactored with the following changes:
- Replace range(0, x) with range(x) (remove-zero-from-range)
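`range` defaults its start to zero, so the two forms are identical:

```python
hyp = "abc"  # illustrative value

# Before: explicit zero start
indices_before = list(range(0, len(hyp) + 1))

# After: zero is the default
indices_after = list(range(len(hyp) + 1))
```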
- sentence = " " + sentence + " "
+ sentence = f" {sentence} "

Function _preprocess_en refactored with the following changes:
- Use f-string instead of string concatenation [×2] (use-fstring-for-concatenation)
- if len(sentence_level_scores) == 0:
-     return tensor(0.0)
-
- average = sum(sentence_level_scores) / tensor(len(sentence_level_scores))
- return average
+ return (
+     sum(sentence_level_scores) / tensor(len(sentence_level_scores))
+     if sentence_level_scores
+     else tensor(0.0)
+ )

Function _eed_compute refactored with the following changes:
- Swap if/else branches of if expression to remove negation (swap-if-expression)
- Lift code into else after jump in control flow (reintroduce-else)
- Replace if statement with if expression (assign-if-exp)
- Simplify sequence length comparison (simplify-len-comparison)
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- if not isinstance(param, float) or isinstance(param, float) and param < 0:
+ if not isinstance(param, float) or param < 0:

Function extended_edit_distance refactored with the following changes:
- Remove redundant conditional (remove-redundant-if)
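Why the second `isinstance` check is dead weight: the right side of `or` only runs when the first test was already False, i.e. when `param` is known to be a float. A stand-in (`is_invalid_param` is a hypothetical name):

```python
def is_invalid_param(param):
    # Before: not isinstance(param, float) or isinstance(param, float) and param < 0
    # After: the repeated isinstance check is removed
    return not isinstance(param, float) or param < 0
```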
- empty_row = [(int(_EDIT_OPERATIONS_COST.OP_UNDEFINED), _EDIT_OPERATIONS.OP_UNDEFINED)] * (length + 1)
- return empty_row
+ return [
+     (
+         int(_EDIT_OPERATIONS_COST.OP_UNDEFINED),
+         _EDIT_OPERATIONS.OP_UNDEFINED,
+     )
+ ] * (length + 1)

Function _LevenshteinEditDistance._get_empty_row refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
Sourcery Code Quality Report
✅ Merging this PR will increase code quality in the affected files by 0.08%.
Here are some functions in these files that still need a tune-up:
Legend and Explanation
The emojis denote the absolute quality of the code. The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request. Please see our documentation here for details on how these metrics are calculated. We are actively working on this report - lots more documentation and extra metrics to come! Help us improve this quality report!
Branch master refactored by Sourcery.
If you're happy with these changes, merge this Pull Request using the Squash and merge strategy. See our documentation here.
Run Sourcery locally
Reduce the feedback loop during development by using the Sourcery editor plugin.
Review changes via command line
To manually merge these changes, make sure you're on the master branch, then run:
Help us improve this pull request!