docs: relax BayesianNN deps
avik-pal committed Nov 15, 2024
1 parent 7062d8f commit addf19b
Showing 6 changed files with 95 additions and 14 deletions.
2 changes: 2 additions & 0 deletions docs/Project.toml
@@ -19,6 +19,7 @@ LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
LuxLib = "82251201-b29d-42c6-8e01-566dec8acb11"
LuxTestUtils = "ac9de150-d08f-4546-94fb-7472b5760531"
MLDataDevices = "7e8f7934-dd98-4c1a-8fe8-92b47a384d40"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
Pkg = "44cfe95a-1eb2-52ea-b672-e2afdf69b78f"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
@@ -50,6 +51,7 @@ LuxCore = "1.2"
LuxLib = "1.3.4"
LuxTestUtils = "1.5"
MLDataDevices = "1.6"
NNlib = "0.9.24"
Optimisers = "0.3.4, 0.4"
Pkg = "1.10"
Printf = "1.10"
9 changes: 7 additions & 2 deletions docs/make.jl
@@ -1,5 +1,5 @@
using Documenter, DocumenterVitepress, Pkg
using Lux, LuxCore, LuxLib, WeightInitializers
using Lux, LuxCore, LuxLib, WeightInitializers, NNlib
using LuxTestUtils, MLDataDevices
using LuxCUDA

@@ -85,8 +85,13 @@ makedocs(; sitename="Lux.jl Docs",
authors="Avik Pal et al.",
clean=true,
doctest=false, # We test it in the CI, no need to run it here
modules=[Lux, LuxCore, LuxLib, WeightInitializers, LuxTestUtils, MLDataDevices],
modules=[
Lux, LuxCore, LuxLib, WeightInitializers, LuxTestUtils, MLDataDevices, NNlib
],
linkcheck=true,
linkcheck_ignore=[
"http://www.iro.umontreal.ca/~lisa/publications2/index.php/attachments/single/205"
],
repo="https://github.com/LuxDL/Lux.jl/blob/{commit}{path}#{line}",
format=DocumenterVitepress.MarkdownVitepress(;
repo="github.com/LuxDL/Lux.jl", devbranch="main", devurl="dev",
4 changes: 4 additions & 0 deletions docs/src/api/NN_Primitives/ActivationFunctions.md
@@ -1,5 +1,6 @@
```@meta
CollapsedDocStrings = true
CurrentModule = NNlib
```

# [Activation Functions](@id NNlib-ActivationFunctions-API)
@@ -13,19 +14,22 @@ celu
elu
gelu
hardsigmoid
NNlib.hardσ
sigmoid_fast
hardtanh
tanh_fast
leakyrelu
lisht
logcosh
logsigmoid
NNlib.logσ
mish
relu
relu6
rrelu
selu
sigmoid
NNlib.σ
softplus
softshrink
softsign
1 change: 1 addition & 0 deletions docs/src/api/NN_Primitives/LuxLib.md
@@ -1,5 +1,6 @@
```@meta
CollapsedDocStrings = true
CurrentModule = LuxLib
```

# [LuxLib](@id LuxLib-API)
89 changes: 79 additions & 10 deletions docs/src/api/NN_Primitives/NNlib.md
@@ -1,5 +1,6 @@
```@meta
CollapsedDocStrings = true
CurrentModule = NNlib
```

# [NNlib](@id NNlib-API)
@@ -9,7 +10,7 @@ Neural Network Primitives with custom bindings for different accelerator backend
!!! note "Reexport of `NNlib`"

Lux doesn't re-export all of `NNlib` for now. Directly loading `NNlib` is the
recommended appraoch for accessing these functions.
recommended approach for accessing these functions.
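
As a rough illustration (a minimal sketch, assuming only Lux's documented `Dense`/`setup` API and standard NNlib functions), loading `NNlib` directly next to Lux looks like:

```julia
using Lux, NNlib, Random

rng = Random.default_rng()

# Activation functions such as gelu come straight from NNlib.
model = Dense(4 => 2, NNlib.gelu)
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 8)
y, _ = model(x, ps, st)

# Other NNlib primitives are likewise called through the module.
probs = NNlib.softmax(y; dims=1)
```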

## Attention

@@ -54,8 +55,8 @@ ConvDims
depthwiseconv
DepthwiseConvDims
DenseConvDims
Lux.NNlib.unfold
Lux.NNlib.fold
NNlib.unfold
NNlib.fold
```

## Upsampling
@@ -93,10 +94,10 @@ batched_vec
## Gather and Scatter

```@docs
Lux.NNlib.gather
Lux.NNlib.gather!
Lux.NNlib.scatter
Lux.NNlib.scatter!
NNlib.gather
NNlib.gather!
NNlib.scatter
NNlib.scatter!
```
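
A hedged usage sketch (values chosen arbitrarily) of the gather/scatter pair listed above:

```julia
using NNlib

src = [10, 20, 30, 40]
idx = [1, 3]
NNlib.gather(src, idx)                  # == [10, 30]

# scatter reduces source values into a destination addressed by idx;
# here both entries with idx == 1 are summed into position 1.
NNlib.scatter(+, [1, 2, 3], [1, 1, 2])  # == [3, 3]
```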

## Sampling
@@ -126,14 +127,82 @@ glu
not part of the public API.

```@docs
Lux.NNlib.within_gradient
NNlib.within_gradient
```
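
As a sketch of how this is typically used (assuming a Zygote-based reverse pass; not a documented guarantee for every AD backend):

```julia
using NNlib, Zygote

# Take the plain branch outside of AD and the alternate branch inside it.
f(x) = NNlib.within_gradient(x) ? 2x : x

f(3.0)                   # 3.0  -- plain call, within_gradient(x) is false
Zygote.gradient(f, 3.0)  # (2.0,) -- under AD the 2x branch is taken
```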

!!! tip

Use [`LuxLib.bias_activation!!`](@ref) or [`LuxLib.bias_activation`](@ref) instead of
`NNlib.bias_act!`.
Use [`LuxLib.API.bias_activation!!`](@ref) or [`LuxLib.API.bias_activation`](@ref)
instead of `NNlib.bias_act!`.

```@docs
bias_act!
```
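
A minimal sketch of the recommended LuxLib replacement, assuming the `bias_activation(σ, x, bias)` form where `bias` matches the feature (penultimate) dimension of `x`:

```julia
using LuxLib, NNlib

x    = randn(Float32, 4, 8)   # features × batch
bias = randn(Float32, 4)

y  = LuxLib.bias_activation(NNlib.relu, x, bias)          # out-of-place σ.(x .+ bias)
y2 = LuxLib.bias_activation!!(NNlib.relu, copy(x), bias)  # may reuse/overwrite its input buffer
```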

## Dropout

!!! tip

Use [`LuxLib.API.dropout`](@ref) instead of `NNlib.dropout`.

```@docs
NNlib.dropout
NNlib.dropout!
```
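
For reference, a hedged sketch of the NNlib-level primitive documented above (inside Lux models the tip above points to `LuxLib.API.dropout` instead):

```julia
using NNlib, Random

rng = Random.MersenneTwister(0)
x   = ones(Float32, 4, 3)

y = NNlib.dropout(rng, x, 0.5)   # roughly half the entries zeroed, the rest scaled by 1/(1 - 0.5)

buf = similar(x)
NNlib.dropout!(buf, x, 0.5)      # in-place variant writing the result into buf
```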

## Internal NNlib Functions

These functions are not part of the public API and are subject to change without notice.

```@docs
NNlib.BatchedAdjoint
NNlib.∇conv_filter_direct!
NNlib._check_trivial_rotations!
NNlib.fast_act
NNlib.spectrogram
NNlib.is_strided
NNlib.conv_direct!
NNlib.gemm!
NNlib.calc_padding_regions
NNlib.∇depthwiseconv_data_im2col!
NNlib._prepare_imrotate
NNlib.insert_singleton_spatial_dimension
NNlib._fast_broadcast!
NNlib.hann_window
NNlib._rng_from_array
NNlib.∇depthwiseconv_filter_im2col!
NNlib.istft
NNlib.transpose_swapbatch
NNlib.transpose_pad
NNlib.power_to_db
NNlib.col2im!
NNlib.depthwiseconv_im2col!
NNlib.storage_type
NNlib.im2col_dims
NNlib.∇depthwiseconv_filter_direct!
NNlib.reverse_indices
NNlib.∇conv_filter_im2col!
NNlib.conv_im2col!
NNlib.∇conv_data_direct!
NNlib.scatter_dims
NNlib.∇conv_data_im2col!
NNlib.storage_typejoin
NNlib.add_blanks
NNlib.∇filter_im2col_dims
NNlib._bilinear_helper
NNlib._triangular_filterbanks
NNlib.∇depthwiseconv_data_direct!
NNlib.db_to_power
NNlib.predilated_size
NNlib.stft
NNlib.hamming_window
NNlib.maximum_dims
NNlib.BatchedTranspose
NNlib._rotate_coordinates
NNlib.melscale_filterbanks
NNlib.logaddexp
NNlib.depthwiseconv_direct!
NNlib.im2col!
NNlib.predilate
NNlib.safe_div
```
4 changes: 2 additions & 2 deletions examples/BayesianNN/Project.toml
@@ -10,9 +10,9 @@ Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

[compat]
CairoMakie = "0.12"
Functors = "0.5"
Functors = "0.4, 0.5"
LinearAlgebra = "1"
Lux = "1"
Lux = "1.2"
Random = "1"
Tracker = "0.2.36"
Turing = "0.34, 0.35"
