Fix typos under torch directory (pytorch#88172)
This PR fixes typos in `.md` files under the `torch` directory.

Pull Request resolved: pytorch#88172
Approved by: https://github.com/malfet
kiszk authored and pytorchmergebot committed Nov 1, 2022
1 parent 72958b9 commit 0131a66
Showing 6 changed files with 8 additions and 8 deletions.
@@ -5,7 +5,7 @@ The objective of this exercise is to use the data sparsifier to prune the embedd

1. **Disk usage savings**: Savings in model size after pruning.
2. **Model Quality**: How and by how much does performance deteriorate after pruning the embedding bags?
-3. **Model forward time**: Can we speed up the model forward time by utilizing the sparsity? Specificially, can we introduce torch.sparse interim to reduce number of computations.
+3. **Model forward time**: Can we speed up the model forward time by utilizing the sparsity? Specifically, can we introduce torch.sparse interim to reduce number of computations.

## Scope
The [DataNormSparsifier](https://github.com/pytorch/pytorch/blob/master/torch/ao/sparsity/_experimental/data_sparsifier/data_norm_sparsifier.py) is used to sparsify the embeddings of the DLRM model. The model is sparsified for all the combinations of -
4 changes: 2 additions & 2 deletions torch/ao/quantization/fx/_model_report/README.md
@@ -5,7 +5,7 @@ ModelReport

> ⚠️ *While the example below uses the Fx Workflow, the use of the ModelReport class **does not depend** on the Fx Workflow to work*.
The requirements are detector dependent.
-Most detectors require a **traceable GraphModule**, but some (ex. `PerChannelDetector`) require just a `nn.Module`.
+Most detectors require a **traceable GraphModule**, but some (ex. `PerChannelDetector`) require just an `nn.Module`.

#### Typical Fx Workflow
- Initialize model → Prepare model → Callibrate model → Convert model → ...
@@ -62,7 +62,7 @@ This is so that we can keep track of where we want to insert observers on a dete
It then returns the GraphModule with the detectors inserted into both the regular module structure as well as the node structure.
- `generate_model_report(self, remove_inserted_observers: bool)` → `Dict[str, Tuple[str, Dict]]` uses callibrated GraphModule to optionally removes inserted observers, and generate, for each detector the ModelReport instance was initialized with:
- A string-based report that is easily digestable and actionable explaining the data collected by relevant observers for that detector
-  - A dictionary containing statistics collected by the relevant observers and values calculated by the detector for futher analysis or plotting
+  - A dictionary containing statistics collected by the relevant observers and values calculated by the detector for further analysis or plotting
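As an aside, the `Dict[str, Tuple[str, Dict]]` return type mentioned in this hunk maps each detector name to a pair of (textual report, statistics dictionary). A minimal sketch of consuming such a value, with a made-up detector name and statistics purely for illustration:

```python
# Hypothetical value shaped like Dict[str, Tuple[str, Dict]]; the detector
# name and statistics below are invented for illustration only.
model_report = {
    "per_channel_detector": (
        "Per-channel weight quantization is supported for 3 of 4 layers.",
        {"supported_layers": 3, "total_layers": 4},
    ),
}

for detector_name, (text_report, stats) in model_report.items():
    # The string is the human-readable, actionable report; the dict holds
    # the raw statistics suitable for further analysis or plotting.
    print(f"== {detector_name} ==")
    print(text_report)
    for stat_name, value in stats.items():
        print(f"  {stat_name}: {value}")
```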

## ModelReportVisualizer Overview

2 changes: 1 addition & 1 deletion torch/csrc/jit/OVERVIEW.md
@@ -894,7 +894,7 @@ def LSTMCellS(x, hx, cx, w_ih, w_hh, b_ih, b_hh):
return hy, cy
```

-After going through the the frontend, we start with this unoptimized graph:
+After going through the frontend, we start with this unoptimized graph:

```
graph(%x : Tensor,
2 changes: 1 addition & 1 deletion torch/csrc/jit/codegen/onednn/README.md
@@ -104,7 +104,7 @@ with torch.no_grad():

# run the model
with torch.no_grad():
-    # oneDNN graph fusion will be trigerred during runtime
+    # oneDNN graph fusion will be triggered during runtime
output = model(images)
```

2 changes: 1 addition & 1 deletion torch/csrc/jit/operator_upgraders/README.md
@@ -177,7 +177,7 @@ When making changes to the operators, the first thing to identify is if it's BC/
except Exception as e:
self.skipTest("Failed to load fixture!")
-        # Step4. Load the new model and it won't apply the ugprader
+        # Step4. Load the new model and it won't apply the upgrader
current_mobile_module_float = self._save_load_mobile_module(MyModuleFloat)
current_server_module_float = self._save_load_module(MyModuleFloat)
4 changes: 2 additions & 2 deletions torch/csrc/jit/runtime/static/README.md
@@ -142,9 +142,9 @@ is selected instead.
When loading a model, ops are selected for each `torch::jit::Node` in the graph as follows:

1) If an out variant is registered, pass the node to the function that produces the `SROperator`. If
-   the result is not `nulltpr`, use that op.
+   the result is not `nullptr`, use that op.
2) If a native function is registered, pass the node to the function that produces the `SROperator`. If
-   the result is not `nulltpr`, use that op.
+   the result is not `nullptr`, use that op.
3) Use the JIT implementation. Static runtime will throw an exception if it does not exist.
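The three-step fallback quoted in this hunk can be sketched as follows. This is pure-Python pseudocode of the dispatch logic only; the actual Static Runtime implementation is in C++, and all names here are illustrative:

```python
# Illustrative sketch (not the real C++ code) of Static Runtime's op
# selection order: out variant -> native function -> JIT fallback.
def select_op(node, out_variants, native_fns, jit_impls):
    # 1) Prefer a registered out-variant factory, if it produces an op.
    factory = out_variants.get(node)
    if factory is not None:
        op = factory(node)
        if op is not None:  # the C++ code checks for nullptr here
            return op
    # 2) Otherwise try a registered native-function factory.
    factory = native_fns.get(node)
    if factory is not None:
        op = factory(node)
        if op is not None:
            return op
    # 3) Fall back to the JIT implementation; raise if none exists.
    op = jit_impls.get(node)
    if op is None:
        raise RuntimeError(f"no implementation for node {node!r}")
    return op
```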

## Implementation Details
