Sourcery refactored master branch #1
base: master
Conversation
-        if inp.sum() > 0.:
-            output = self.weight + inp
-        else:
-            output = self.weight - inp
-        return output
+        return self.weight + inp if inp.sum() > 0. else self.weight - inp
Function SimpleIf.forward refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- Replace if statement with if expression (assign-if-exp)
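As a rough sketch of what these two rewrites do, here is a standalone example (the function and names below are hypothetical, not taken from this PR):

```python
def scale(weight, inp):
    # Before: assign to a temporary, then immediately return it.
    if sum(inp) > 0.:
        output = [w + x for w, x in zip(weight, inp)]
    else:
        output = [w - x for w, x in zip(weight, inp)]
    return output

def scale_refactored(weight, inp):
    # After: the if statement becomes a conditional expression,
    # and the immediately returned temporary is inlined.
    return (
        [w + x for w, x in zip(weight, inp)]
        if sum(inp) > 0.
        else [w - x for w, x in zip(weight, inp)]
    )
```

Both versions are behaviorally identical; the refactored form just removes one name and one branch statement.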
-            if inp.mean() > 0.:
-                output = self.weight + inp
-            else:
-                output = self.weight - inp
+            return self.weight + inp if inp.mean() > 0. else self.weight - inp
         else:
-            if inp.mean() > 0.:
-                output = self.weight * inp
-            else:
-                output = self.weight / inp
-        return output
+            return self.weight * inp if inp.mean() > 0. else self.weight / inp
Function NestedIf.forward refactored with the following changes:
- Lift return into if (lift-return-into-if)
- Replace if statement with if expression [×2] (assign-if-exp)
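A minimal, hypothetical illustration of lift-return-into-if (not code from this PR): the single return that follows the if is moved into each branch, so every branch returns directly.

```python
def pick(flag, a, b):
    # Before: branches assign, then one return follows the if.
    if flag:
        result = a * 2
    else:
        result = b * 2
    return result

def pick_refactored(flag, a, b):
    # After: the return is lifted into the if/else branches.
    if flag:
        return a * 2
    else:
        return b * 2
```

Once each branch returns directly, the branches become candidates for further rewrites such as assign-if-exp.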
-        for i in range(inp.size(0)):
+        for _ in range(a.size(0)):
Function SimpleLoop.forward refactored with the following changes:
- Use previously assigned local variable (use-assigned-variable)
- Replace unused for index with underscore (for-index-underscore)
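A hypothetical example combining both rewrites (names not from the PR): reuse a value already bound to a local variable instead of recomputing it, and rename a loop index that is never read to `_`.

```python
def repeat_sum(items):
    # Before: re-evaluates len(items) and binds an index it never uses.
    n = len(items)
    total = 0
    for i in range(len(items)):
        total += n
    return total

def repeat_sum_refactored(items):
    # After: reuse the previously assigned variable; the unused index becomes _.
    n = len(items)
    total = 0
    for _ in range(n):
        total += n
    return total
```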
-        for i in range(inp.size(0)):
+        for _ in range(a.size(0)):
Function LoopWithIf.forward refactored with the following changes:
- Use previously assigned local variable (use-assigned-variable)
- Replace unused for index with underscore (for-index-underscore)
-        while i < inp.size(0):
+        while i < a.size(0):
Function SimpleWhileLoop.forward refactored with the following changes:
- Use previously assigned local variable (use-assigned-variable)
-    if True:
-        with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_rtx3070.log"):
-            with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-                desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
-                seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
-                mod = seq(mod)
-                vm_exec = relay.vm.compile(mod, target=target, params=params)
-    else:
-        # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_nvptx.log"):
-        # # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_cuda.log"):
-        # with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
-        with tvm.transform.PassContext(opt_level=3):
-            # desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
-            # seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
-            # mod = seq(mod)
-            vm_exec = relay.vm.compile(mod, target=target, params=params)
+    with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_rtx3070.log"):
+        with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
+            desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
+            seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
+            mod = seq(mod)
+            vm_exec = relay.vm.compile(mod, target=target, params=params)
Function bench_tvm refactored with the following changes:
- Remove redundant conditional (remove-redundant-if)

This removes the following comments (why?):
# desired_layouts = {'nn.conv2d': ['NHWC', 'default'], "vision.roi_align": ["NHWC", "default"]}
# seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
# with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_nvptx.log"):
# # with auto_scheduler.ApplyHistoryBest("logs/maskrcnn_cuda.log"):
# mod = seq(mod)
# with tvm.transform.PassContext(opt_level=3, config={"relay.backend.use_auto_scheduler": True}):
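A minimal, hypothetical sketch of remove-redundant-if (names not from the PR): an `if True:` guard means the else branch is dead code, so the refactoring keeps only the live branch.

```python
def compile_mode(mode):
    # Before: the else branch can never run.
    if True:
        return f"release-{mode}"
    else:
        return f"debug-{mode}"

def compile_mode_refactored(mode):
    # After: the redundant conditional and the dead branch are removed.
    return f"release-{mode}"
```

Note that this rewrite also deletes any comments that lived only inside the dead branch, which is why the removed-comments list above appears.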
-        rois = torch.cat([ids, concat_boxes], dim=1)
-        return rois
+        return torch.cat([ids, concat_boxes], dim=1)
Function MultiScaleRoIAlign.convert_to_roi_format refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
-        assert len(image_shapes) != 0
+        assert image_shapes
Function MultiScaleRoIAlign.setup_scales refactored with the following changes:
- Simplify sequence length comparison (simplify-len-comparison)
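A hypothetical example of simplify-len-comparison (names not from the PR): an explicit `len(...) != 0` check is replaced with the sequence's own truthiness, since empty sequences are falsy in Python.

```python
def first_shape(image_shapes):
    # Before: explicit length comparison.
    assert len(image_shapes) != 0
    return image_shapes[0]

def first_shape_refactored(image_shapes):
    # After: a non-empty sequence is truthy on its own.
    assert image_shapes
    return image_shapes[0]
```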
-        x_filtered = []
-        for k, v in x.items():
-            if k in self.featmap_names:
-                x_filtered.append(v)
+        x_filtered = [v for k, v in x.items() if k in self.featmap_names]
Function MultiScaleRoIAlign.forward refactored with the following changes:
- Convert for loop into list comprehension (list-comprehension)
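A hypothetical standalone version of this pattern (plain dicts instead of the module's tensors): an append loop with a filter condition collapses into one list comprehension.

```python
def filter_features(x, featmap_names):
    # Before: explicit loop, condition, and append.
    x_filtered = []
    for k, v in x.items():
        if k in featmap_names:
            x_filtered.append(v)
    return x_filtered

def filter_features_refactored(x, featmap_names):
    # After: a single list comprehension with the same filter.
    return [v for k, v in x.items() if k in featmap_names]
```

Both preserve dict insertion order, so the output lists are identical.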
-        mask_loss = F.binary_cross_entropy_with_logits(
-            mask_logits[torch.arange(labels.shape[0], device=labels.device), labels], mask_targets
-        )
-        return mask_loss
+        return F.binary_cross_entropy_with_logits(
+            mask_logits[
+                torch.arange(labels.shape[0], device=labels.device), labels
+            ],
+            mask_targets,
+        )
Function maskrcnn_loss refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
-        keypoint_loss = F.cross_entropy(keypoint_logits[valid], keypoint_targets[valid])
-        return keypoint_loss
+        return F.cross_entropy(keypoint_logits[valid], keypoint_targets[valid])
Function keypointrcnn_loss refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
-    boxes_exp = torch.stack((boxes_exp0, boxes_exp1, boxes_exp2, boxes_exp3), 1)
-    return boxes_exp
+    return torch.stack((boxes_exp0, boxes_exp1, boxes_exp2, boxes_exp3), 1)
Function _onnx_expand_boxes refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
-    im_mask = torch.cat((zeros_x0,
-                         concat_0,
-                         zeros_x1), 1)[:, :im_w]
-    return im_mask
+    return torch.cat((zeros_x0, concat_0, zeros_x1), 1)[:, :im_w]
Function _onnx_paste_mask_in_image refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
-    if len(res) > 0:
-        ret = torch.stack(res, dim=0)[:, None]
-    else:
-        ret = masks.new_empty((0, 1, im_h, im_w))
-    return ret
+    return (
+        torch.stack(res, dim=0)[:, None]
+        if res
+        else masks.new_empty((0, 1, im_h, im_w))
+    )
Function paste_masks_in_image refactored with the following changes:
- Inline variable that is immediately returned (inline-immediately-returned-variable)
- Replace if statement with if expression (assign-if-exp)
- Simplify sequence length comparison (simplify-len-comparison)
-        for img_idx, (pos_inds_img, neg_inds_img) in enumerate(
-            zip(sampled_pos_inds, sampled_neg_inds)
-        ):
+        for pos_inds_img, neg_inds_img in zip(sampled_pos_inds, sampled_neg_inds):
Function RoIHeads.subsample refactored with the following changes:
- Remove unnecessary calls to enumerate when the index is not used (remove-unused-enumerate)
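A hypothetical example of remove-unused-enumerate (names not from the PR): `enumerate` produces an index that the loop body never reads, so the zipped pairs are unpacked directly instead.

```python
def pair_sizes(pos_lists, neg_lists):
    # Before: idx is bound but never used.
    out = []
    for idx, (pos, neg) in enumerate(zip(pos_lists, neg_lists)):
        out.append(len(pos) + len(neg))
    return out

def pair_sizes_refactored(pos_lists, neg_lists):
    # After: no enumerate, direct tuple unpacking.
    out = []
    for pos, neg in zip(pos_lists, neg_lists):
        out.append(len(pos) + len(neg))
    return out
```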
-    for i in range(3):
+    for _ in range(3):
         pt_model(**inputs)

     t1 = time.time()
-    for i in range(n_repeat):
+    for _ in range(n_repeat):
Function perf_bench_torch refactored with the following changes:
- Replace unused for index with underscore [×2] (for-index-underscore)
-    quantized_output_dir = configs.output_dir + "quantized/"
+    quantized_output_dir = f"{configs.output_dir}quantized/"
Lines 298-298 refactored with the following changes:
- Use f-string instead of string concatenation (use-fstring-for-concatenation)
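A small, hypothetical sketch of the concatenation-to-f-string rewrite (the path below is made up):

```python
output_dir = "runs/exp1/"

# Before: build the path with string concatenation.
quantized_output_dir_old = output_dir + "quantized/"

# After: an f-string, which reads more clearly and handles
# non-string values via implicit str() conversion.
quantized_output_dir = f"{output_dir}quantized/"
```

The two forms produce identical strings; the gain is readability, especially once more than two pieces are joined.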
-        (args.output_dir, args.output_dir + "-MM")
+        (args.output_dir, f"{args.output_dir}-MM")
         if args.task_name == "mnli"
         else (args.output_dir,)
     )
Function evaluate_tvm refactored with the following changes:
- Use f-string instead of string concatenation [×2] (use-fstring-for-concatenation)
- Replace call to format with f-string [×2] (use-fstring-for-formatting)
- Merge dictionary updates via the union operator (dict-assign-update-to-union)
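A hypothetical example of dict-assign-update-to-union (names not from the PR): a copy-then-`update()` pattern becomes a single `|` merge. Note the dict union operator requires Python 3.9+.

```python
def merged_config(base, overrides):
    # Before: copy the dict, then mutate the copy with update().
    config = dict(base)
    config.update(overrides)
    return config

def merged_config_refactored(base, overrides):
    # After: one union expression; right-hand keys win on conflict,
    # matching update() semantics. Neither input is mutated.
    return base | overrides
```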
-    # PyTorch eval
-    if True:
-        # # Evaluate the original FP32 BERT model
-        # print("Evaluating PyTorch full precision accuracy and performance:")
-        # time_model_evaluation(model, configs, tokenizer)
-
-        # Evaluate the INT8 BERT model after the dynamic quantization
-        print("Evaluating PyTorch quantization accuracy and performance:")
-        time_model_evaluation(quantized_model, configs, tokenizer)
+    # # Evaluate the original FP32 BERT model
+    # print("Evaluating PyTorch full precision accuracy and performance:")
+    # time_model_evaluation(model, configs, tokenizer)
+
+    # Evaluate the INT8 BERT model after the dynamic quantization
+    print("Evaluating PyTorch quantization accuracy and performance:")
+    time_model_evaluation(quantized_model, configs, tokenizer)
Lines 437-445 refactored with the following changes:
- Remove redundant conditional (remove-redundant-if)

This removes the following comments (why?):
# PyTorch eval
-    for idx, task in enumerate(tasks):
-        print("========== Task %d (workload key: %s) ==========" % (idx, task.workload_key))
-        print(task.compute_dag)
-
-    measure_ctx = auto_scheduler.LocalRPCMeasureContext(repeat=1, min_repeat_ms=300, timeout=100)
-
-    tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
-    # tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)
-    tune_option = auto_scheduler.TuningOptions(
-        num_measure_trials=50000,  # change this to 20000 to achieve the best performance
-        runner=measure_ctx.runner,
-        measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
-    )
-
-    tuner.tune(tune_option)
Function auto_schedule refactored with the following changes:
- Remove unreachable code (remove-unreachable-code)

This removes the following comments (why?):
# tuner = auto_scheduler.TaskScheduler(tasks, task_weights, load_log_file=log_file)
# change this to 20000 to achieve the best performance
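A minimal, hypothetical sketch of remove-unreachable-code (not the PR's actual code): everything after an unconditional return can never execute, so it is deleted along with any comments attached to it.

```python
def tune(tasks):
    # Before: an early return makes the tail dead code.
    return None
    tuner = object()          # unreachable
    print("tuning", tasks)    # unreachable

def tune_refactored(tasks):
    # After: only the live code remains.
    return None
```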
Sourcery Code Quality Report

✅ Merging this PR will increase code quality in the affected files by 0.43%.

Here are some functions in these files that still need a tune-up:

Legend and Explanation

The emojis denote the absolute quality of the code. The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request. Please see our documentation for details on how these metrics are calculated. We are actively working on this report; more documentation and extra metrics are on the way. Help us improve this quality report!
Branch master refactored by Sourcery.

If you're happy with these changes, merge this Pull Request using the Squash and merge strategy. See our documentation here.

Run Sourcery locally: reduce the feedback loop during development by using the Sourcery editor plugin.

Review changes via command line: to manually merge these changes, make sure you're on the master branch, then run:

Help us improve this pull request!