fixed a bug of ops/torch/rrsda; update readme
rayleizhu committed Apr 6, 2023
1 parent b0ccf7a commit 5b15121
Showing 3 changed files with 7 additions and 6 deletions.
INSTALL.md: 3 changes (2 additions, 1 deletion)

@@ -15,8 +15,9 @@ If you are using slurm clusters, it is recommended to create a slurm config file

```bash
export CLUSTER_ID=[YOUR_CLUSTER_ALIAS]
- vim configs/slurm/${CLUSTER_ID}.yaml
+ cp configs/slurm/sz10.yaml configs/slurm/${CLUSTER_ID}.yaml && vim configs/slurm/${CLUSTER_ID}.yaml
```
This way, you can consistently launch experiments on any available cluster with `+slurm=${CLUSTER_ID}`.
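
For illustration, the end-to-end workflow this enables might look like the sketch below; the cluster alias, entry point, and extra arguments are hypothetical placeholders, not commands taken from this diff:

```bash
# Illustrative sketch: copy the sample Slurm config, adapt it, then reuse it.
export CLUSTER_ID=my_cluster                                 # hypothetical alias
cp configs/slurm/sz10.yaml configs/slurm/${CLUSTER_ID}.yaml  # start from the sample
vim configs/slurm/${CLUSTER_ID}.yaml                         # edit partition, GPU count, etc.

# Later, any experiment can pick up that cluster config via a Hydra-style
# override (the script name and other arguments are placeholders):
python main.py +slurm=${CLUSTER_ID}
```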

## Dataset Preparation

README.md: 8 changes (4 additions, 4 deletions)

@@ -3,23 +3,23 @@
Official PyTorch implementation of **BiFormer**, from the following paper:

[BiFormer: Vision Transformer with Bi-Level Routing Attention](https://arxiv.org/abs/2303.08810). CVPR 2023.\
- [Lei Zhu](https://github.com/rayleizhu), [Xinjiang Wang](https://www.linkedin.com/in/wang-xinjiang-784a3462), [Zhanghan Ke](https://zhke.io/), [Wayne Zhang](http://www.statfe.com/), and [Rynson Lau](https://www.cs.cityu.edu.hk/~rynson/)
+ [Lei Zhu](https://github.com/rayleizhu), [Xinjiang Wang](https://scholar.google.com/citations?user=q4lnWaoAAAAJ&hl=en), [Zhanghan Ke](https://zhke.io/), [Wayne Zhang](http://www.statfe.com/), and [Rynson Lau](https://www.cs.cityu.edu.hk/~rynson/)

---
<p align="left">
<img src="assets/teaser.png" width=60% height=60%
class="center">
</p>


<!-- ✅ ⬜️ -->

## News

* 2023-03-24: For better readability, BRA and BiFormer-STL have been refactored. See [ops/bra_nchw.py](ops/bra_nchw.py) and [models/biformer_stl_nchw.py](models/biformer_stl_nchw.py). We still keep the [legacy (and a little bit messy) implementation](ops/bra_legacy.py) for compatibility with previously released checkpoints.

* 2023-03-24: For better memory and computation efficiency, we are diving into the optimization of BRA with CUDA. Please stay tuned.
  - Collaborations and contributions are welcome, especially if you are an expert in CUDA/[cutlass](https://github.com/NVIDIA/cutlass). There is a chance to co-author a paper.

## Results and Pre-trained Models

@@ -98,7 +98,7 @@ This project is released under the MIT license. Please see the [LICENSE](LICENSE

## Citation
If you find this repository helpful, please consider citing:
- ```
+ ```bibtex
@Article{zhu2022biformer,
author = {Lei Zhu and Xinjiang Wang and Zhanghan Ke and Wayne Zhang and Rynson Lau},
title = {BiFormer: Vision Transformer with Bi-Level Routing Attention},
ops/torch/rrsda.py: 2 changes (1 addition, 1 deletion)

@@ -115,6 +115,6 @@ def regional_routing_attention_torch(

# remove paddings if needed
if auto_pad and (q_pad_b > 0 or q_pad_r > 0):
-     output = output[:, :, :-q_pad_b, :-q_pad_r]
+     output = output[:, :, :Hq, :Wq]

return output, attn
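
For context on the fix: Python's negative slicing misbehaves at zero, since `:-0` is an empty slice, so the old crop silently emptied a spatial dimension whenever exactly one of the two pad amounts was zero. A minimal standalone reproduction (tensor shapes are illustrative):

```python
import torch

# Illustrative shapes: an NCHW attention output whose H/W were auto-padded.
Hq, Wq = 56, 53          # original (unpadded) query height / width
q_pad_b, q_pad_r = 0, 3  # only the right side needed padding
output = torch.randn(1, 64, Hq + q_pad_b, Wq + q_pad_r)

# Old (buggy) crop: `:-0` is an empty slice, so with q_pad_b == 0 the
# height dimension collapses to size 0.
buggy = output[:, :, :-q_pad_b, :-q_pad_r]
print(buggy.shape)  # torch.Size([1, 64, 0, 53])

# Fixed crop: slice to the known unpadded size instead.
fixed = output[:, :, :Hq, :Wq]
print(fixed.shape)  # torch.Size([1, 64, 56, 53])
```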
