
Commit: fix docs
CarloLucibello committed Jul 24, 2024
1 parent a32fb04 commit 3ab7bf6
Showing 16 changed files with 4,408 additions and 1,832 deletions.
10 changes: 8 additions & 2 deletions .github/workflows/docs.yml
@@ -14,9 +14,15 @@ jobs:
- uses: actions/checkout@v4
- uses: julia-actions/setup-julia@latest
with:
version: '1.9.1'
version: '1.10.4'
- name: Install dependencies
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
shell: julia --project=docs/ {0}
run: |
using Pkg
# dev mono repo versions
pkg"registry up"
Pkg.update()
pkg"dev ./GNNGraphs ."
- name: Build and deploy
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # If authenticating with GitHub Actions token
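The new workflow step above installs the monorepo's subpackage with the `pkg"..."` string macro. As a sketch only, the same step can be written with explicit `Pkg` API calls (the relative paths assume the monorepo layout this commit introduces, with `GNNGraphs/` as a subdirectory of the repository root):

```julia
using Pkg

Pkg.Registry.update()        # equivalent of pkg"registry up"
Pkg.update()

# Develop the local GNNGraphs subpackage and the root GraphNeuralNetworks
# package into the active (docs/) environment, so the docs build against
# the working tree rather than registered releases:
Pkg.develop([PackageSpec(path = "./GNNGraphs"),
             PackageSpec(path = ".")])
```

This mirrors `pkg"dev ./GNNGraphs ."`, which devs both paths in one call.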
4 changes: 2 additions & 2 deletions docs/Project.toml
@@ -2,11 +2,11 @@
DemoCards = "311a05b2-6137-4a5a-b473-18580a3d38b5"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Flux = "587475ba-b771-5e3f-ad9e-33799f191a9c"
GNNGraphs = "aed8fd31-079b-4b5a-b342-a13352159b8c"
GraphNeuralNetworks = "cffab07f-9bc2-4db1-8861-388f63bf7694"
Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
MarkdownLiteral = "736d6165-7244-6769-4267-6b50796e6954"
NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Pluto = "c3e4b0f8-55cb-11ea-2926-15256bba5781"
@@ -17,4 +17,4 @@ Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

[compat]
DemoCards = "0.5.0"
Documenter = "0.27"
Documenter = "1.5"
11 changes: 6 additions & 5 deletions docs/make.jl
@@ -1,11 +1,12 @@
using Flux, NNlib, GraphNeuralNetworks, Graphs, SparseArrays
using GNNGraphs
using Pluto, PlutoStaticHTML # for tutorials
using Documenter, DemoCards

tutorials, tutorials_cb, tutorial_assets = makedemos("tutorials")
# tutorials, tutorials_cb, tutorial_assets = makedemos("tutorials")

assets = []
isnothing(tutorial_assets) || push!(assets, tutorial_assets)
# isnothing(tutorial_assets) || push!(assets, tutorial_assets)

DocMeta.setdocmeta!(GraphNeuralNetworks, :DocTestSetup,
:(using GraphNeuralNetworks, Graphs, SparseArrays, NNlib, Flux);
@@ -15,7 +16,7 @@ prettyurls = get(ENV, "CI", nothing) == "true"
mathengine = MathJax3()

makedocs(;
modules = [GraphNeuralNetworks, NNlib, Flux, Graphs, SparseArrays],
modules = [GraphNeuralNetworks, GNNGraphs],
doctest = false,
clean = true,
format = Documenter.HTML(; mathengine, prettyurls, assets = assets),
@@ -25,7 +26,7 @@ makedocs(;
"Message Passing" => "messagepassing.md",
"Model Building" => "models.md",
"Datasets" => "datasets.md",
"Tutorials" => tutorials,
# "Tutorials" => tutorials,
"API Reference" => [
"GNNGraph" => "api/gnngraph.md",
"Basic Layers" => "api/basic.md",
@@ -40,6 +41,6 @@ makedocs(;
"Summer Of Code" => "gsoc.md",
])

tutorials_cb()
# tutorials_cb()

deploydocs(repo = "github.com/CarloLucibello/GraphNeuralNetworks.jl.git")
30 changes: 15 additions & 15 deletions docs/pluto_output/gnn_intro_pluto.md

Large diffs are not rendered by default.

22 changes: 11 additions & 11 deletions docs/pluto_output/graph_classification_pluto.md
@@ -1,13 +1,13 @@
```@raw html
<style>
table {
#documenter-page table {
display: table !important;
margin: 2rem auto !important;
border-top: 2pt solid rgba(0,0,0,0.2);
border-bottom: 2pt solid rgba(0,0,0,0.2);
}
pre, div {
#documenter-page pre, #documenter-page div {
margin-top: 1.4rem !important;
margin-bottom: 1.4rem !important;
}
@@ -25,8 +25,8 @@
<!--
# This information is used for caching.
[PlutoStaticHTML.State]
input_sha = "f145b80b8f1e399d4cd5686b529cf173942102c538702952fe0743defca62210"
julia_version = "1.9.1"
input_sha = "62d9b08cdb51a5d174d1d090f3e4834f98df0c30b8b515e5befdd8fa22bd5c7f"
julia_version = "1.10.4"
-->
<pre class='language-julia'><code class='language-julia'>begin
using Flux
@@ -102,7 +102,7 @@ end</code></pre>
<div class="markdown"><p>We have some useful utilities for working with graph datasets, <em>e.g.</em>, we can shuffle the dataset and use the first 150 graphs as training graphs, while using the remaining ones for testing:</p></div>
<pre class='language-julia'><code class='language-julia'>train_data, test_data = splitobs((graphs, y), at = 150, shuffle = true) |&gt; getobs</code></pre>
<pre class="code-output documenter-example-output" id="var-train_data">((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(12, 24) with x: 7×12 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(18, 38) with x: 7×18 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(20, 46) with x: 7×20 data … GNNGraph(16, 34) with x: 7×16 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(16, 36) with x: 7×16 data], Bool[1 0 … 1 0; 0 1 … 0 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(21, 44) with x: 7×21 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(27, 66) with x: 7×27 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(13, 28) with x: 7×13 data … GNNGraph(11, 22) with x: 7×11 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]))</pre>
<pre class="code-output documenter-example-output" id="var-train_data">((GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(18, 38) with x: 7×18 data … GNNGraph(12, 26) with x: 7×12 data, GNNGraph(19, 40) with x: 7×19 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(22, 50) with x: 7×22 data], Bool[0 0 … 0 0; 1 1 … 1 1]), (GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(26, 60) with x: 7×26 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(21, 44) with x: 7×21 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data … GNNGraph(12, 26) with x: 7×12 data, GNNGraph(23, 52) with x: 7×23 data, GNNGraph(12, 24) with x: 7×12 data, GNNGraph(23, 50) with x: 7×23 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(18, 40) with x: 7×18 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(13, 26) with x: 7×13 data, GNNGraph(28, 62) with x: 7×28 data, GNNGraph(11, 22) with x: 7×11 data], Bool[0 0 … 0 1; 1 1 … 1 0]))</pre>
<pre class='language-julia'><code class='language-julia'>begin
train_loader = DataLoader(train_data, batchsize = 32, shuffle = true)
@@ -113,7 +113,7 @@ end</code></pre>
(32-element Vector{GraphNeuralNetworks.GNNGraphs.GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}}, 2×32 OneHotMatrix(::Vector{UInt32}) with eltype Bool,)</pre>
<div class="markdown"><p>Here, we opt for a <code>batch_size</code> of 32, leading to 5 (randomly shuffled) mini-batches, containing all <span class="tex">$4 \cdot 32+22 = 150$</span> graphs.</p></div>
<div class="markdown"><p>Here, we opt for a <code>batch_size</code> of 32, leading to 5 (randomly shuffled) mini-batches, containing all <span class="tex">\(4 \cdot 32+22 = 150\)</span> graphs.</p></div>
```
@@ -123,15 +123,15 @@ end</code></pre>
<p>Since graphs in graph classification datasets are usually small, a good idea is to <strong>batch the graphs</strong> before inputting them into a Graph Neural Network to guarantee full GPU utilization. In the image or language domain, this procedure is typically achieved by <strong>rescaling</strong> or <strong>padding</strong> each example into a set of equally-sized shapes, and examples are then grouped in an additional dimension. The length of this dimension is then equal to the number of examples grouped in a mini-batch and is typically referred to as the <code>batchsize</code>.</p><p>However, for GNNs the two approaches described above are either not feasible or may result in a lot of unnecessary memory consumption. Therefore, GraphNeuralNetworks.jl opts for another approach to achieve parallelization across a number of examples. Here, adjacency matrices are stacked in a diagonal fashion (creating a giant graph that holds multiple isolated subgraphs), and node and target features are simply concatenated in the node dimension (the last dimension).</p><p>This procedure has some crucial advantages over other batching procedures:</p><ol><li><p>GNN operators that rely on a message passing scheme do not need to be modified since messages are not exchanged between two nodes that belong to different graphs.</p></li><li><p>There is no computational or memory overhead since adjacency matrices are saved in a sparse fashion holding only non-zero entries, <em>i.e.</em>, the edges.</p></li></ol><p>GraphNeuralNetworks.jl can <strong>batch multiple graphs into a single giant graph</strong>:</p></div>
<pre class='language-julia'><code class='language-julia'>vec_gs, _ = first(train_loader)</code></pre>
<pre class="code-output documenter-example-output" id="var-vec_gs">(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(17, 38) with x: 7×17 data, GNNGraph(19, 42) with x: 7×19 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(24, 50) with x: 7×24 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(15, 34) with x: 7×15 data … GNNGraph(16, 34) with x: 7×16 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(26, 60) with x: 7×26 data, GNNGraph(23, 54) with x: 7×23 data, GNNGraph(24, 50) with x: 7×24 data], Bool[0 0 … 0 0; 1 1 … 1 1])</pre>
<pre class="code-output documenter-example-output" id="var-vec_gs">(GNNGraph{Tuple{Vector{Int64}, Vector{Int64}, Nothing}}[GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 46) with x: 7×20 data, GNNGraph(15, 34) with x: 7×15 data, GNNGraph(25, 56) with x: 7×25 data, GNNGraph(17, 38) with x: 7×17 data, GNNGraph(20, 44) with x: 7×20 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(19, 44) with x: 7×19 data, GNNGraph(20, 44) with x: 7×20 data … GNNGraph(12, 24) with x: 7×12 data, GNNGraph(12, 26) with x: 7×12 data, GNNGraph(16, 36) with x: 7×16 data, GNNGraph(11, 22) with x: 7×11 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(13, 28) with x: 7×13 data, GNNGraph(14, 30) with x: 7×14 data, GNNGraph(16, 34) with x: 7×16 data, GNNGraph(22, 50) with x: 7×22 data, GNNGraph(23, 54) with x: 7×23 data], Bool[0 0 … 0 0; 1 1 … 1 1])</pre>
<pre class='language-julia'><code class='language-julia'>MLUtils.batch(vec_gs)</code></pre>
<pre class="code-output documenter-example-output" id="var-hash102363">GNNGraph:
num_nodes: 585
num_edges: 1292
num_nodes: 575
num_edges: 1276
num_graphs: 32
ndata:
x = 7×585 Matrix{Float32}</pre>
x = 7×575 Matrix{Float32}</pre>
<div class="markdown"><p>Each batched graph object is equipped with a <strong><code>graph_indicator</code> vector</strong>, which maps each node to its respective graph in the batch:</p><p class="tex">$$\textrm{graph\_indicator} = [1, \ldots, 1, 2, \ldots, 2, 3, \ldots ]$$</p></div>
@@ -154,7 +154,7 @@ end</code></pre>
<pre class="code-output documenter-example-output" id="var-create_model">create_model (generic function with 1 method)</pre>
<div class="markdown"><p>Here, we again make use of the <code>GCNConv</code> with <span class="tex">$\mathrm{ReLU}(x) = \max(x, 0)$</span> activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.</p><p>Let's train our network for a few epochs to see how well it performs on the training as well as test set:</p></div>
<div class="markdown"><p>Here, we again make use of the <code>GCNConv</code> with <span class="tex">\(\mathrm{ReLU}(x) = \max(x, 0)\)</span> activation for obtaining localized node embeddings, before we apply our final classifier on top of a graph readout layer.</p><p>Let's train our network for a few epochs to see how well it performs on the training as well as test set:</p></div>
<pre class='language-julia'><code class='language-julia'>function eval_loss_accuracy(model, data_loader, device)
loss = 0.0
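The tutorial text in the diff above describes diagonal batching: stacking adjacency matrices into one giant graph of isolated components while concatenating node features. A minimal sketch of that behavior, assuming GraphNeuralNetworks.jl and MLUtils are available (`rand_graph` and the sizes here are illustrative, not from the tutorial's dataset):

```julia
using GraphNeuralNetworks, MLUtils

# Three small graphs with 7-dimensional node features, loosely mimicking
# the TUDataset graphs shown in the tutorial output above.
gs = [rand_graph(n, 2n; ndata = (; x = rand(Float32, 7, n))) for n in (11, 13, 16)]

g = MLUtils.batch(gs)
# g is a single GNNGraph holding three isolated subgraphs:
#   g.num_nodes  == 11 + 13 + 16 == 40
#   g.num_graphs == 3
# g.graph_indicator maps each node to its source graph,
# i.e. [1,...,1, 2,...,2, 3,...,3], which readout layers use
# to pool node embeddings per graph.
```

Because no edges cross component boundaries, message-passing layers need no modification to operate on the batched graph.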
18 changes: 9 additions & 9 deletions docs/pluto_output/node_classification_pluto.md

Large diffs are not rendered by default.

12 changes: 6 additions & 6 deletions docs/src/api/gnngraph.md
@@ -26,15 +26,15 @@ GNNGraph
## DataStore

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["datastore.jl"]
Private = false
```

## Query

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["query.jl"]
Private = false
```
@@ -47,7 +47,7 @@ Graphs.inneighbors
## Transform

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["transform.jl"]
Private = false
```
@@ -62,7 +62,7 @@ GNNGraphs.color_refinement
## Generate

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["generate.jl"]
Private = false
Filter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_temporal_hyperbolic_graph
@@ -72,7 +72,7 @@ Filter = t -> typeof(t) <: Function && t!=rand_temporal_radius_graph && t!=rand_temporal_hyperbolic_graph
## Operators

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["operators.jl"]
Private = false
```
@@ -84,7 +84,7 @@ Graphs.intersect
## Sampling

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["sampling.jl"]
Private = false
```
2 changes: 1 addition & 1 deletion docs/src/api/heterograph.md
@@ -6,7 +6,7 @@ Documentation page for the type `GNNHeteroGraph` representing heterogeneous graphs


```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["gnnheterograph.jl"]
Private = false
```
15 changes: 15 additions & 0 deletions docs/src/api/temporalconv.md
@@ -0,0 +1,15 @@
```@meta
CurrentModule = GraphNeuralNetworks
```

# Temporal Graph-Convolutional Layers

Convolutions for time-varying graphs (temporal graphs) such as the [`TemporalSnapshotsGNNGraph`](@ref).

## Docs

```@autodocs
Modules = [GraphNeuralNetworks]
Pages = ["layers/temporalconv.jl"]
Private = false
```
2 changes: 1 addition & 1 deletion docs/src/api/temporalgraph.md
@@ -5,7 +5,7 @@
Documentation page for the graph type `TemporalSnapshotsGNNGraph` and related methods, representing time varying graphs with time varying features.

```@autodocs
Modules = [GraphNeuralNetworks.GNNGraphs]
Modules = [GNNGraphs]
Pages = ["temporalsnapshotsgnngraph.jl"]
Private = false
```
59 changes: 59 additions & 0 deletions docs/src/democards/gridtheme.css
@@ -0,0 +1,59 @@
.grid-card-section {
display: flex;
flex-direction: row;
flex-wrap: wrap;
align-content: space-between;
}

.grid-card:hover{
box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.4), 0 6px 20px 0 rgba(0, 0, 0, 0.1);
}

.grid-card {
width: 210px;
max-height: 400px;
margin: 10px 15px;
box-shadow: 0 4px 8px 0 rgba(0,0,0,0.2);
transition: 0.3s;
border-radius: 5px;
}

.grid-card-text {
padding: 0 15px;
}

.grid-card-cover img {
width: 100%;
}

.grid-card-cover {
width: 200px;
height: 220px;
padding: 5px;
box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.2);
transition: 0.3s;
border-radius: 5px;
display:block;
margin:auto;
}

.grid-card-cover .grid-card-description {
opacity: 0;
z-index: -1;
position: absolute;
top: 25%;
left: 140%;
width: 100%;
transform: translate(-50%, -50%);
padding: 10px;
border-radius: 5px;
background: rgba(0, 0, 0, 0.8);
color: #fff;
text-align: center;
font-size: 14px;
}

.grid-card-cover:hover .grid-card-description{
z-index: 3;
opacity: 1;
}
