I applied unstructured pruning to yolov5s and exported it to ONNX, but it did not give good improvements in inference speed or model size. I would now like to do structured pruning with the SparseML library. How can that be done? I tried using the modifier from the following file: https://github.com/neuralmagic/sparseml/blob/main/src/sparseml/pytorch/sparsification/pruning/modifier_pruning_structured.py.
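For context, outside of the YOLOv5 CLI this is roughly how I understand a SparseML recipe gets applied to a PyTorch model (a minimal sketch based on the SparseML PyTorch docs; the toy model, recipe path, and `steps_per_epoch` value are placeholders, not my real setup):

```python
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

# Toy stand-in for the YOLOv5s module (placeholder only).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.Conv2d(16, 32, 3, padding=1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Load the recipe and wrap the optimizer so its pruning modifiers are
# stepped as training progresses ("structured_pruning.yaml" is a placeholder path).
manager = ScheduledModifierManager.from_yaml("structured_pruning.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)

# ... normal training loop runs here, calling optimizer.step() each batch ...

# Remove the modifier hooks once training finishes.
manager.finalize(model)
```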
I created a .yaml recipe file as follows:
```
# Structured pruning
modifiers:
    param_groups = [
        # Convolutional layers
        {
            'name': 'conv',
            'params': [
                'model.model.model.0.conv.weight', 'model.model.model.1.conv.weight',
                'model.model.model.2.cv1.conv.weight', 'model.model.model.2.cv2.conv.weight',
                ........
            ],
            'prune_ratio': 0.5  # Example pruning ratio for conv layers
        },
        # Batch normalization layers
        {
            'name': 'bn',
            'params': [
                'model.model.model.0.bn.weight', 'model.model.model.0.bn.bias',
                'model.model.model.1.bn.weight', 'model.model.model.1.bn.bias',
                ........
            ],
            'prune_ratio': 0.3  # Example pruning ratio for batch norm layers
        }
    ]
    mask_type: filter
    init_sparsity: 0.05
    final_sparsity: 0.8
    start_epoch: 0.0
    end_epoch: 10.0
    update_frequency: 1.0
    params: ALL_PRUNABLE
    leave_enabled: True
    inter_func: cubic
```
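For reference, this is the shape I think the recipe is supposed to have, based on other SparseML recipes I have seen. It only uses the fields from my draft above; the `!StructuredPruningModifier` tag is taken from the class name in the linked source file, and I have not verified that it accepts exactly this field set, so please treat this as a guess:

```yaml
# Structured pruning (layout guess; modifier name and accepted fields unverified)
modifiers:
    - !StructuredPruningModifier
        params: ALL_PRUNABLE        # or an explicit list of conv weight names
        init_sparsity: 0.05
        final_sparsity: 0.8
        start_epoch: 0.0
        end_epoch: 10.0
        update_frequency: 1.0
        mask_type: filter
        leave_enabled: True
        inter_func: cubic
```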
But I am unable to apply it to my model. This is the command I used for fine-tuning the unstructured pruned model:
```bash
!sparseml.yolov5.train \
    --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
    --recipe zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/pruned75_quant-none \
    --teacher-weights zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/base-none \
    --data coco128.yaml \
    --hyp hyps/hyp.scratch-low.yaml \
    --cfg yolov5s.yaml \
    --patience 0 \
    --gradient-accum-steps 4
```
How would I do the same in the case of structured pruning?
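My guess is that I would point the same command at my local recipe file instead of the SparseZoo stub, roughly like this (untested, and it assumes `--recipe` also accepts a local path; `structured_pruning.yaml` is the recipe file drafted above, which may not be valid yet):

```bash
!sparseml.yolov5.train \
    --weights zoo:cv/detection/yolov5-s/pytorch/ultralytics/coco/base-none \
    --recipe ./structured_pruning.yaml \
    --data coco128.yaml \
    --hyp hyps/hyp.scratch-low.yaml \
    --cfg yolov5s.yaml \
    --patience 0 \
    --gradient-accum-steps 4
```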