Commit: Some fixes

aurorarossi committed Sep 20, 2024
1 parent 24e4354 commit 8ab1db0

Showing 5 changed files with 102 additions and 104 deletions.
14 changes: 6 additions & 8 deletions docs/make.jl
@@ -22,25 +22,23 @@ makedocs(;
     format = Documenter.HTML(; mathengine, prettyurls, assets = assets, size_threshold=nothing),
     sitename = "GraphNeuralNetworks.jl",
     pages = ["Monorepo" => [
-                "Home" => "monorepo.md",
+                "Home" => "index.md",
                 "Developer guide" => "dev.md",
                 "Google Summer of Code" => "gsoc.md",
             ],
             "GraphNeuralNetworks.jl" => [
-                "Home" => "index.md",
+                "Home" => "home.md",
                 "Models" => "models.md",],
             "API Reference" => [
                 "Basic" => "api/basic.md",
-                "Conv" => "api/conv.md",
-                "Pool" => "api/pool.md",
-                "TempConv" => "api/temporalconv.md",
-                "HeteroConv" => "api/heteroconv.md"
+                "Convolutional layers" => "api/conv.md",
+                "Pooling layers" => "api/pool.md",
+                "Temporal Convolutional layers" => "api/temporalconv.md",
+                "Hetero Convolutional layers" => "api/heteroconv.md"
             ],
2 changes: 1 addition & 1 deletion docs/src/api/heteroconv.md
@@ -4,7 +4,7 @@ CurrentModule = GraphNeuralNetworks

# Hetero Graph-Convolutional Layers

-Convolutions for time-varying graphs (temporal graphs) such as the [`GNNHeteroGraph`](@ref).
+Heterogeneous graph convolutions are implemented in the `HeteroGraphConv` type, which relies on standard graph convolutional layers to perform message passing on the different relations.
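
As a usage sketch (the node types `:A`/`:B`, the random graph, and the feature sizes below are made up for illustration):

```julia
using Flux, GraphNeuralNetworks

# Toy heterograph with node types :A and :B and one relation in each direction.
g = rand_bipartite_heterograph((10, 15), 20)
x = (A = rand(Float32, 64, 10), B = rand(Float32, 64, 15))

# One standard convolutional layer per relation; outputs arriving at the
# same destination node type are aggregated (with + by default).
layer = HeteroGraphConv((:A, :to, :B) => GraphConv(64 => 32, relu),
                        (:B, :to, :A) => GraphConv(64 => 32, relu))

y = layer(g, x)  # NamedTuple with the new features for :A and :B
```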

## Docs

87 changes: 87 additions & 0 deletions docs/src/home.md
@@ -0,0 +1,87 @@
# GraphNeuralNetworks

This is the documentation page for [GraphNeuralNetworks.jl](https://github.com/CarloLucibello/GraphNeuralNetworks.jl), a graph neural network library written in Julia and based on the deep learning framework [Flux.jl](https://github.com/FluxML/Flux.jl).
GraphNeuralNetworks.jl is largely inspired by [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/), [Deep Graph Library](https://docs.dgl.ai/),
and [GeometricFlux.jl](https://fluxml.ai/GeometricFlux.jl/stable/).

Among its features:

* Implements common graph convolutional layers.
* Supports computations on batched graphs.
* Easy to define custom layers.
* CUDA support.
* Integration with [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl).
* [Examples](https://github.com/CarloLucibello/GraphNeuralNetworks.jl/tree/master/examples) of node, edge, and graph level machine learning tasks.


## Package overview

Let's give a brief overview of the package by solving a
graph regression problem with synthetic data.

Usage examples on real datasets can be found in the [examples](https://github.com/CarloLucibello/GraphNeuralNetworks.jl/tree/master/examples) folder.

### Data preparation

We create a dataset consisting of multiple random graphs and associated data features.

```julia
using GraphNeuralNetworks, Graphs, Flux, CUDA, Statistics, MLUtils
using Flux: DataLoader

all_graphs = GNNGraph[]

for _ in 1:1000
    g = rand_graph(10, 40,
                   ndata = (; x = randn(Float32, 16, 10)),  # input node features
                   gdata = (; y = randn(Float32)))          # regression target
    push!(all_graphs, g)
end
```
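
As a quick sanity check, each graph stores the node features in `ndata` and the target in `gdata` (the values below follow from the construction above):

```julia
g = all_graphs[1]
g.num_nodes      # 10
g.num_edges      # 40
size(g.ndata.x)  # (16, 10): one 16-dimensional feature vector per node
g.gdata.y        # the graph-level regression target
```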

### Model building

We concisely define our model as a [`GNNChain`](@ref) containing two graph convolutional layers. If CUDA is available, our model will live on the GPU.

```julia
device = CUDA.functional() ? Flux.gpu : Flux.cpu;

model = GNNChain(GCNConv(16 => 64),
                 BatchNorm(64),     # apply batch normalization on node features (the node dimension is the batch dimension)
                 x -> relu.(x),
                 GCNConv(64 => 64, relu),
                 GlobalPool(mean),  # aggregate node-wise features into graph-wise features
                 Dense(64, 1)) |> device

opt = Flux.setup(Adam(1f-4), model)
```
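
As a quick shape check (a sketch reusing the first graph of the dataset), the chain maps node features to a single graph-level prediction:

```julia
g = all_graphs[1] |> device
size(model(g, g.x))  # (1, 1): GlobalPool leaves one feature vector per graph
```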

### Training

Finally, we use a standard Flux training pipeline to fit our dataset.
We use Flux's `DataLoader` to iterate over mini-batches of graphs
that are glued together into a single `GNNGraph` using the `MLUtils.batch` method. This is what happens under the hood when creating a `DataLoader` with the
`collate=true` option.
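
For instance, batching a slice of the dataset by hand produces one `GNNGraph` storing the disjoint union of its members (a sketch of what `collate=true` does internally):

```julia
gs = MLUtils.batch(all_graphs[1:32])
gs.num_graphs  # 32
gs.num_nodes   # 320: the nodes of all 32 graphs, relabeled into one graph
```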

```julia
train_graphs, test_graphs = MLUtils.splitobs(all_graphs, at=0.8)

train_loader = DataLoader(train_graphs,
                          batchsize=32, shuffle=true, collate=true)
test_loader = DataLoader(test_graphs,
                         batchsize=32, shuffle=false, collate=true)

loss(model, g::GNNGraph) = mean((vec(model(g, g.x)) - g.y).^2)

loss(model, loader) = mean(loss(model, g |> device) for g in loader)

for epoch in 1:100
    for g in train_loader
        g = g |> device
        grad = gradient(model -> loss(model, g), model)
        Flux.update!(opt, model, grad[1])
    end

    @info (; epoch, train_loss=loss(model, train_loader), test_loss=loss(model, test_loader))
end
```
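
Once trained, the model is evaluated the same way the loss uses it; a minimal sketch on one held-out mini-batch:

```julia
g = first(test_loader) |> device
ŷ = vec(model(g, g.x))     # one prediction per graph in the mini-batch
mse = mean((ŷ - g.y).^2)   # the same quantity the loss above computes
```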
83 changes: 8 additions & 75 deletions docs/src/index.md
@@ -1,87 +1,20 @@
-# GraphNeuralNetworks
+# GraphNeuralNetworks Monorepo

-This is the documentation page for [GraphNeuralNetworks.jl](https://github.com/CarloLucibello/GraphNeuralNetworks.jl), a graph neural network library written in Julia and based on the deep learning framework [Flux.jl](https://github.com/FluxML/Flux.jl).
-GraphNeuralNetworks.jl is largely inspired by [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/), [Deep Graph Library](https://docs.dgl.ai/),
-and [GeometricFlux.jl](https://fluxml.ai/GeometricFlux.jl/stable/).
+This repository is a monorepo that contains all the code for the GraphNeuralNetworks project. It is organized as a monorepo to facilitate code sharing and reusability across the different components of the project. It contains the following packages:

-Among its features:
+- `GraphNeuralNetworks.jl`: Package that contains stateful graph convolutional layers based on the machine learning framework [Flux.jl](https://fluxml.ai/Flux.jl/stable/). This is the frontend package for Flux users. It depends on the GNNlib.jl, GNNGraphs.jl, and Flux.jl packages.

-* Implements common graph convolutional layers.
-* Supports computations on batched graphs.
-* Easy to define custom layers.
-* CUDA support.
-* Integration with [Graphs.jl](https://github.com/JuliaGraphs/Graphs.jl).
-* [Examples](https://github.com/CarloLucibello/GraphNeuralNetworks.jl/tree/master/examples) of node, edge, and graph level machine learning tasks.
+- `GNNLux.jl`: Package that contains stateless graph convolutional layers based on the machine learning framework [Lux.jl](https://lux.csail.mit.edu/stable/). This is the frontend package for Lux users. It depends on the GNNlib.jl, GNNGraphs.jl, and Lux.jl packages.

+- `GNNlib.jl`: Package that contains the core graph neural network layers and utilities. It depends on the GNNGraphs.jl package and serves as the shared code base for the GraphNeuralNetworks.jl and GNNLux.jl packages.

-## Package overview
+- `GNNGraphs.jl`: Package that contains the graph data structures and helper functions for working with graph data. It depends on the Graphs.jl package.

-Let's give a brief overview of the package by solving a
-graph regression problem with synthetic data.
+Here is a schema of the dependencies between the packages:

-Usage examples on real datasets can be found in the [examples](https://github.com/CarloLucibello/GraphNeuralNetworks.jl/tree/master/examples) folder.
+![Monorepo schema](assets/schema.png)
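
For most users, installing a frontend package is enough, since the lower-level packages are pulled in as dependencies (a sketch, assuming the registered package names above):

```julia
using Pkg
Pkg.add("GraphNeuralNetworks")  # Flux frontend; brings in GNNlib.jl and GNNGraphs.jl
```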

-… (the remaining deleted lines reproduce the old "Package overview" tutorial verbatim; that content now lives in docs/src/home.md, shown above)
20 changes: 0 additions & 20 deletions docs/src/monorepo.md

This file was deleted.
