cleanup
CarloLucibello committed Nov 23, 2024
1 parent ba5f33e commit ba86591
Showing 2 changed files with 5 additions and 24 deletions.
13 changes: 0 additions & 13 deletions src/deprecations.jl
@@ -116,19 +116,6 @@ function Optimisers.update!(opt::Optimisers.AbstractRule, model::Chain, grad::Tu
end


-macro functor(ex)
-    Base.depwarn("""The macro `Flux.@functor` is deprecated.
-        Most likely, you should write `Flux.@layer MyLayer` which will add
-        various convenience methods for your type, such as pretty-printing, and use with Adapt.jl.
-        However, this is not strictly required: Flux.jl v0.15 uses Functors.jl v0.5,
-        which makes exploration of most nested `struct`s opt-out instead of opt-in...
-        so Flux will automatically see inside any custom struct definitions to take care of things
-        like moving data to the GPU.
-        """, Symbol("@functor"))
-
-    return :(Functors.functor($(esc(ex))))
-end

### v0.16 deprecations ####################


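The deprecation message above points to `Flux.@layer` as the replacement for `Flux.@functor`. A minimal migration sketch, assuming Flux v0.15; the `MyLayer` struct, its fields, and its constructor are hypothetical:

```julia
using Flux

# A custom layer as a plain struct (hypothetical example).
struct MyLayer
    weight
    bias
end

MyLayer(n::Integer) = MyLayer(randn(Float32, n, n), zeros(Float32, n))

# Make it callable, like Flux's built-in layers.
(m::MyLayer)(x) = m.weight * x .+ m.bias

# Before: Flux.@functor MyLayer
# After: @layer adds pretty-printing, Adapt.jl support, etc.
Flux.@layer MyLayer
```

With Functors.jl v0.5, recursion into structs is opt-out rather than opt-in, so even without the macro Flux will see inside `MyLayer` for things like `gpu`/`cpu` movement; `@layer` mainly adds the convenience methods.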
16 changes: 5 additions & 11 deletions src/functor.jl
Expand Up @@ -79,7 +79,7 @@ end
cpu(m)
Copies `m` onto the CPU, the opposite of [`gpu`](@ref).
-Recurses into structs marked [`@functor`](@ref).
+Recurses into structs (thanks to Functors.jl).
# Example
```julia-repl
@@ -119,16 +119,14 @@ end
Copies `m` to the current GPU device (using current GPU backend), if one is available.
If no GPU is available, it does nothing (but prints a warning the first time).
-On arrays, this calls CUDA's `cu`, which also changes arrays
-with Float64 elements to Float32 while copying them to the device (same for AMDGPU).
-To act on arrays within a struct, the struct type must be marked with [`@functor`](@ref).
+It recurses into structs according to Functors.jl.
Use [`cpu`](@ref) to copy back to ordinary `Array`s.
See also [`f32`](@ref) and [`f16`](@ref) to change element type only.
-See the [CUDA.jl docs](https://juliagpu.github.io/CUDA.jl/stable/usage/multigpu/)
-to help identify the current device.
+This function is just defined for convenience around [`gpu_device`](@ref),
+and is equivalent to `gpu_device()(m)`.
+You may consider defining `device = gpu_device()` once and then using `device(m)` to move data.
# Example
```julia-repl
@@ -147,10 +145,6 @@ CUDA.CuArray{Float32, 2, CUDA.Mem.DeviceBuffer}
"""
gpu(x) = gpu_device()(x)

-# TODO remove after https://github.com/LuxDL/Lux.jl/pull/1089
-ChainRulesCore.@non_differentiable gpu_device()
-ChainRulesCore.@non_differentiable gpu_device(::Any)

# Precision

struct FluxEltypeAdaptor{T} end
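A short usage sketch of the equivalence described in the updated `gpu` docstring (`gpu(m)` versus a reused `gpu_device()`), assuming Flux v0.15; the model here is illustrative:

```julia
using Flux

model = Chain(Dense(10 => 5, relu), Dense(5 => 2))

# One-shot move; gpu(x) is defined as gpu_device()(x).
gmodel = gpu(model)

# Or get the device function once and reuse it for model and data.
device = gpu_device()
gmodel = device(model)
gx = device(rand(Float32, 10))

# cpu copies everything back to ordinary Arrays, recursing into structs.
model2 = cpu(gmodel)

# f32 / f16 change the element type only, without moving devices.
model16 = f16(model)
```

If no GPU backend is available, `gpu_device()` falls back to the CPU (warning the first time), so the same code runs either way.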
