add use case to readme (#34)
* add use case to readme

* Update README.md

Co-authored-by: Leo <[email protected]>

* Update README.md

Co-authored-by: Leo <[email protected]>

* Update README.md

Co-authored-by: Leo <[email protected]>

* Update README.md

Co-authored-by: Leo <[email protected]>

* update readme_CN consistently

Co-authored-by: Leo <[email protected]>
jieli-matrix and GiggleLiu authored Sep 8, 2021
1 parent ba506ae commit 477ea0e
Showing 2 changed files with 52 additions and 4 deletions.
28 changes: 26 additions & 2 deletions README.md
@@ -7,11 +7,11 @@

[中文版本](README_CN.md)

This is a repository for the Summer 2021 of Open Source Promotion Plan. NiSparseArrays implements operations in [SparseArrays](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) in a reversible way by [NiLang](https://giggleliu.github.io/NiLang.jl/dev/).
`NiSparseArrays` is part of the [Summer 2021 Open Source Promotion Plan](https://summer.iscas.ac.cn/#/?lang=en). It implements backward rules for sparse matrix operations using [`NiLang`](https://giggleliu.github.io/NiLang.jl/dev/) and ports these rules to [`ChainRules`](https://github.com/JuliaDiff/ChainRules.jl).

## Background

Sparse matrices are extensively used in scientific computing, however there is no automatic differentiation package in Julia yet to handle sparse matrix operations yet. This project will utilize the reversible embedded domain-specific language NiLang.jl to differentiate sparse matrix operations by re-writing the sparse functions in Julia base in a reversible style. Furthermore, the generated backward rules would be generated to ChainRules.jl as an extension.
Sparse matrices are used extensively in scientific computing; however, there is not yet an automatic differentiation package in Julia that handles sparse matrix operations. This project uses the reversible embedded domain-specific language `NiLang.jl` to differentiate sparse matrix operations by rewriting them in a reversible style. The generated backward rules are ported to `ChainRules.jl` as an extension, so they can be used directly from automatic differentiation packages such as [`Zygote`](https://github.com/FluxML/Zygote.jl), [`Flux`](https://github.com/FluxML/Flux.jl) and [`Diffractor`](https://github.com/JuliaDiff/Diffractor.jl).
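
To make this concrete, here is a minimal, hand-written sketch of what such a backward rule looks like on the ChainRules side, for the matrix-vector product `y = A * x` with a sparse `A`. This is illustration only, not the package's source (NiSparseArrays derives its rules automatically from reversible NiLang code); the key point is that the cotangent of `A` only needs entries on `A`'s existing sparsity pattern, so no dense matrix is ever materialized:

```julia
using ChainRulesCore, SparseArrays, LinearAlgebra

# Illustrative, real-valued sketch: a hand-written rrule for y = A * x with sparse A.
function ChainRulesCore.rrule(::typeof(*), A::SparseMatrixCSC, x::AbstractVector)
    y = A * x
    function mul_pullback(ȳ)
        x̄ = A' * ȳ    # cotangent of x
        # Cotangent of A: Ā[i, j] = ȳ[i] * x[j], restricted to A's stored entries.
        rows = rowvals(A)
        Ānz = [ȳ[rows[k]] * x[j] for j in 1:size(A, 2) for k in nzrange(A, j)]
        Ā = SparseMatrixCSC(size(A, 1), size(A, 2), copy(A.colptr), copy(rows), Ānz)
        return NoTangent(), Ā, x̄
    end
    return y, mul_pullback
end
```

With a rule like this registered, any ChainRules-aware AD package (such as `Zygote` or `Diffractor`) calls it for sparse matrix-vector products instead of falling back to a generic dense path.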

## Install

@@ -40,6 +40,30 @@ pkg> add NiSparseArrays

More to add in the next stage...

## A Simple Use Case

Here is a minimal use case illustrating how `NiSparseArrays` speeds up `Zygote`'s gradient computation. For more examples, see the `examples` directory.

``` julia
julia> using SparseArrays, LinearAlgebra, Random, BenchmarkTools

julia> A = sprand(1000, 1000, 0.1);

julia> x = rand(1000);

julia> using Zygote

julia> @btime Zygote.gradient((A, x) -> sum(A*x), $A, $x)
15.065 ms (27 allocations: 8.42 MiB)

julia> using NiSparseArrays

julia> @btime Zygote.gradient((A, x) -> sum(A*x), $A, $x)
644.035 μs (32 allocations: 3.86 MiB)
```

Using `NiSparseArrays` not only speeds up the computation but also saves a significant amount of memory, since our implementation does not convert the sparse matrix to a dense array during gradient computation.
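
If you want to check that the sparse path agrees with the plain dense computation, a quick comparison along these lines (not part of the original example) can be run in the same session:

```julia
julia> gx_sparse = Zygote.gradient((A, x) -> sum(A * x), A, x)[2];

julia> gx_dense = Zygote.gradient((A, x) -> sum(A * x), Matrix(A), x)[2];

julia> gx_sparse ≈ gx_dense   # the two paths should agree up to floating-point error
```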

## Contribute

Suggestions and Comments in the Issues are welcome.
28 changes: 26 additions & 2 deletions README_CN.md
@@ -7,11 +7,11 @@

[English version](README.md)

This is a repository for the Summer 2021 Open Source Promotion Plan. NiSparseArrays implements the operations in [SparseArrays](https://docs.julialang.org/en/v1/stdlib/SparseArrays/) in a reversible way with [NiLang](https://giggleliu.github.io/NiLang.jl/dev/).
`NiSparseArrays` is one of the projects of the [Summer 2021 Open Source Promotion Plan](https://summer.iscas.ac.cn/#/?lang=chi). It implements sparse matrix operations with [`NiLang`](https://giggleliu.github.io/NiLang.jl/dev/) to obtain their differentiation rules and ports these rules to [`ChainRules`](https://github.com/JuliaDiff/ChainRules.jl).

## Background

Sparse matrices are widely used in scientific computing, but Julia does not yet have a good package for automatic differentiation of sparse matrices. This project will use the reversible embedded language NiLang.jl to differentiate the sparse matrix operations in Julia Base by rewriting them in a reversible style. We will port the generated differentiation rules to ChainRules, the most popular automatic differentiation rule library in the Julia ecosystem.
Sparse matrices are widely used in scientific computing, but Julia does not yet have a good package for automatic differentiation of sparse matrices. This project implements sparse matrix operations in the reversible embedded language `NiLang.jl` to obtain their differentiation rules. The generated rules are ported into `ChainRules.jl` as an extension, so users can access these features directly through automatic differentiation packages such as [`Zygote`](https://github.com/FluxML/Zygote.jl), [`Flux`](https://github.com/FluxML/Flux.jl) and [`Diffractor`](https://github.com/JuliaDiff/Diffractor.jl).
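
As a taste of the reversible style mentioned above (a toy illustration under our own naming, not code from this package), an accumulation such as `y[i] += a * x[i]` can be written with NiLang's `@i` macro; the resulting function can be run forward, inverted exactly with `~`, and used by NiLang to derive gradient code:

```julia
using NiLang

# Toy reversible axpy: y! .+= a .* x, written so every step can be undone.
@i function iaxpy!(a::Real, x::AbstractVector, y!::AbstractVector)
    for i in 1:length(x)
        y![i] += a * x[i]
    end
end

x, y = rand(5), zeros(5)
iaxpy!(2.0, x, y)      # forward pass: y now holds 2 .* x
(~iaxpy!)(2.0, x, y)   # inverse pass: y is restored to zeros
```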

## Install

@@ -41,6 +41,30 @@ pkg> add NiSparseArrays

The API is still being extended...

## A Simple Use Case

Here we use a minimal example to show how `NiSparseArrays` speeds up `Zygote` gradient computation. For more examples, see the `examples` directory.

``` julia
julia> using SparseArrays, LinearAlgebra, Random, BenchmarkTools

julia> A = sprand(1000, 1000, 0.1);

julia> x = rand(1000);

julia> using Zygote

julia> @btime Zygote.gradient((A, x) -> sum(A*x), $A, $x)
15.065 ms (27 allocations: 8.42 MiB)

julia> using NiSparseArrays

julia> @btime Zygote.gradient((A, x) -> sum(A*x), $A, $x)
644.035 μs (32 allocations: 3.86 MiB)
```

You will find that using `NiSparseArrays` not only speeds up the computation but also saves memory allocations, because our implementation does not convert the sparse matrix into a dense matrix during gradient computation.

## Contribute

Issues and PRs are welcome 👏
