About the graph structure change #69

Open
adverbial03 opened this issue May 22, 2023 · 4 comments
Comments

@adverbial03

Hello, thanks for sharing your excellent work!
I'd like to know: in the testing phase, i.e. the online anomaly detection phase, does the graph structure change?

@d-ailin
Owner

d-ailin commented May 22, 2023

Thanks for your interest in our work.

As shown in Eq. (6)–Eq. (8), the graph structure is computed based on both the global embedding vectors and the local embedding vectors. The global embedding vectors are fixed at test time, but the local embedding vectors are computed from the current time-series input. This means that the graph structure will change w.r.t. the input.
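To illustrate the input dependence, here is a minimal sketch of that attention computation (all tensor names and sizes are hypothetical rather than the repo's actual code, and it assumes the g_i = v_i ⊕ W x_i form of the attention input from the paper):

    import torch
    import torch.nn.functional as F

    node_num, in_dim, embed_dim = 5, 16, 8          # hypothetical sizes

    v = torch.randn(node_num, embed_dim)            # global sensor embeddings v_i, fixed at test time
    x = torch.randn(node_num, in_dim)               # local input: the current sliding window per sensor
    W = torch.randn(in_dim, embed_dim)              # stand-in for the shared linear transform
    a = torch.randn(4 * embed_dim)                  # attention vector over g_i ⊕ g_j

    g = torch.cat([v, x @ W], dim=-1)               # Eq. (6): g_i = v_i ⊕ W x_i

    gi = g.unsqueeze(1).expand(-1, node_num, -1)    # entry (i, j) holds g_i
    gj = g.unsqueeze(0).expand(node_num, -1, -1)    # entry (i, j) holds g_j
    pi = F.leaky_relu(torch.cat([gi, gj], dim=-1) @ a)   # Eq. (7): attention logits

    alpha = F.softmax(pi, dim=-1)                   # Eq. (8): attention coefficients

Since x changes with every window while v stays fixed, alpha changes with the input.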

@adverbial03
Author

Thanks for your quick reply.

@adverbial03
Author

Are the Eq. (6)–Eq. (8) you mentioned in your message implemented in graph_layer?
If so, then I think the global embedding vectors are embedding_i and embedding_j, and the local embedding vectors are x_i and x_j.
Among them, x_i and x_j are obtained by transforming the input samples into the corresponding embedding vectors through the Linear layer.
But from gdn:

    all_embeddings = self.embedding(torch.arange(node_num).to(device))

    weights_arr = all_embeddings.detach().clone()
    all_embeddings = all_embeddings.repeat(batch_num, 1)

    weights = weights_arr.view(node_num, -1)

    # cosine similarity between every pair of learned sensor embeddings
    cos_ji_mat = torch.matmul(weights, weights.T)
    normed_mat = torch.matmul(weights.norm(dim=-1).view(-1, 1), weights.norm(dim=-1).view(1, -1))
    cos_ji_mat = cos_ji_mat / normed_mat

    dim = weights.shape[-1]
    topk_num = self.topk

    # keep the top-k most similar sensors as each node's candidate neighbors
    topk_indices_ji = torch.topk(cos_ji_mat, topk_num, dim=-1)[1]

Looking at this graph-structure code, it seems that only the similarity of the global embedding vectors is calculated.
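A self-contained toy version of that snippet (hypothetical sizes, not the repo's actual configuration) makes the point concrete: no input window appears anywhere in the top-k computation.

    import torch

    node_num, embed_dim, topk = 6, 4, 2                  # hypothetical sizes

    embedding = torch.nn.Embedding(node_num, embed_dim)
    weights = embedding(torch.arange(node_num)).detach() # learned sensor embeddings only

    cos = torch.matmul(weights, weights.T)
    cos = cos / torch.outer(weights.norm(dim=-1), weights.norm(dim=-1))

    topk_indices_ji = torch.topk(cos, topk, dim=-1)[1]   # identical for every input batch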
In addition, you mentioned in the ablation experiment that the embedding module can be removed. I tried to set the embedding to None, but an error was reported. Can you tell me how you implemented this experiment?

@d-ailin
Owner

d-ailin commented May 25, 2023

Yes, your understanding is correct. Sorry, I thought you were asking about the graph computation including the attention part. If you are asking about the adjacency matrix in Eq. (3), then yes, it won't change in the test phase, as only the similarity of the global embedding vectors is used.
For the ablation study, it means the sensor embedding in the attention part is removed, and some modifications to the attention part are needed, such as not using the global embedding vectors in the message part, etc. Directly setting the embedding to None is not applicable, as these embeddings are also used in Eq. (9).
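As a rough sketch of what that modification might look like (hypothetical names and sizes, not a drop-in patch for the repo), the attention input g_i would keep only the transformed window, and the attention vector would shrink to match; the use of the embeddings in Eq. (9) would need its own replacement:

    import torch
    import torch.nn.functional as F

    node_num, in_dim, embed_dim = 5, 16, 8           # hypothetical sizes

    x = torch.randn(node_num, in_dim)                # local input only
    W = torch.randn(in_dim, embed_dim)
    a = torch.randn(2 * embed_dim)                   # attention vector shrinks: no v_i in g_i

    g = x @ W                                        # g_i = W x_i, sensor embedding v_i dropped
    gi = g.unsqueeze(1).expand(-1, node_num, -1)
    gj = g.unsqueeze(0).expand(node_num, -1, -1)
    alpha = F.softmax(F.leaky_relu(torch.cat([gi, gj], dim=-1) @ a), dim=-1)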

Thanks!
