Add TemporalGraphConv
#369
Conversation
```julia
TemporalGraphConv(layer)
```

A constructor for applying to each snapshot of the `TemporalSnapshotsGNNGraph` a convolutional homogeneous graph `layer`.
A paper citation is missing, and more explanation of how it works is needed. Am I correct in saying that this applies `layer` independently to each temporal slice?
Yes, you are right. This construction has been used in several papers, but I do not know of any paper that defines it. I would say it is similar to the `HeteroGraphConv` for heterographs.
> I would say it is similar to the `HeteroGraphConv` for heterographs.

Yes, although `HeteroGraphConv` in the end aggregates messages coming from different edge types, whereas here there is no cross-talk between different times; that has to happen in separate modules (e.g. temporal attention in the last reference above).
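A minimal usage sketch of the constructor under review, assuming the `rand_graph`, `GCNConv`, and `TemporalSnapshotsGNNGraph` helpers from GraphNeuralNetworks.jl; the snapshot sizes and feature dimensions are toy values chosen for illustration:

```julia
using GraphNeuralNetworks

# three snapshots with 10 nodes and 20 edges each (arbitrary toy sizes)
snapshots = [rand_graph(10, 20) for _ in 1:3]
tsg = TemporalSnapshotsGNNGraph(snapshots)

# one feature matrix per snapshot: 4 features per node
xs = [rand(Float32, 4, 10) for _ in 1:3]

# wrap a homogeneous conv layer; it is applied independently to each time slice
tgconv = TemporalGraphConv(GCNConv(4 => 8))
ys = tgconv(tsg, xs)   # vector of three 8×10 output matrices, no mixing across times
```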
```julia
function (tgconv::TemporalGraphConv)(tsg::TemporalSnapshotsGNNGraph, x)
    return tgconv.layer.(tsg.snapshots, x)
end
```
Also, broadcasting should be avoided here, since both `snapshots` and `x` could be expanded.
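A toy illustration of the concern, with a plain function standing in for the layer and integers standing in for graphs; broadcasting silently expands singleton arguments, while `map` pairs the collections explicitly:

```julia
f = (g, x) -> g + x          # stand-in for applying a layer to one snapshot

f.([1, 2, 3], 5)             # [6, 7, 8]: the single x is silently recycled over all snapshots
f.([1, 2, 3], [10])          # [11, 12, 13]: a length-1 vector is expanded the same way
map(f, [1, 2, 3], [10, 20, 30])  # [11, 22, 33]: one x per snapshot, no silent expansion
```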
```diff
- return tgconv.layer.(tsg.snapshots, x)
+ return map((a, b) -> tgconv.layer(a, b), tsg.snapshots, x)
```
Is this a better way to write the broadcast in this case?
Sorry, I got confused; it is the same, provided `x` is a vector of the same length as `snapshots`.
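A quick check of the equivalence noted above, with toy stand-ins for the layer and the snapshots:

```julia
layer = (g, x) -> g * x
snaps = [1, 2, 3]
xs = [10, 20, 30]

layer.(snaps, xs) == map((a, b) -> layer(a, b), snaps, xs)   # true when lengths match
```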
Co-authored-by: Carlo Lucibello <[email protected]>
I think that instead of introducing a new layer, we can extend all layers to take a `TemporalSnapshotsGNNGraph` as input and apply themselves timewise. The time dimension in this way acts like a batch dimension.

```julia
function (l::GNNLayer)(g::TemporalSnapshotsGNNGraph, x::AbstractVector; kws...)
    y = ....
    return y
end
```

Maybe the operation could be made more efficient by batching the snapshots first and then unbatching the output `y` later, but this is a performance optimization that can be explored later.
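A minimal sketch of how that elided body might look, assuming the `GNNLayer` abstract type and the `snapshots` field used elsewhere in this thread, and filling in the `map` form suggested above:

```julia
# Hypothetical sketch, not the final implementation: apply the layer timewise,
# pairing the i-th snapshot with the i-th feature array.
function (l::GNNLayer)(g::TemporalSnapshotsGNNGraph, x::AbstractVector; kws...)
    return map((gi, xi) -> l(gi, xi; kws...), g.snapshots, x)
end
```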
Ah ok, I see. Do you prefer one big PR with support for `TemporalSnapshotsGNNGraph`s for all layers, or one PR for each layer?
This PR adds a `TemporalGraphConv` constructor for applying a convolutional homogeneous graph `layer` to each snapshot of the `TemporalSnapshotsGNNGraph`. A PR will follow to update the documentation.