Paper Reading

Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering

Couldn't make much sense of this one.

Heterogeneous Graph Transformer

Meta Relation

Each edge is mapped to a triplet ⟨source node type, edge type, target node type⟩.

Dynamic Heterogeneous Graph

A timestamp is assigned to each edge, and each node can also be assigned different timestamps.

General GNN

From layer \(l-1\) to layer \(l\):

\[H^l[t] \leftarrow \underset{\forall s \in N(t), \forall e \in E(s, t)}{\operatorname{Aggregate}}\left(\operatorname{Extract}\left(H^{l-1}[s] ; H^{l-1}[t], e\right)\right) \]
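
A minimal sketch of this update in PyTorch, assuming Aggregate is a mean and Extract is a single linear layer over the concatenated source/target states (both choices, and the omission of edge features \(e\), are my assumptions, not fixed by the framework):

```python
import torch
import torch.nn as nn

class GeneralGNNLayer(nn.Module):
    """One l-1 -> l update: Extract a message from each neighbor s of t, then Aggregate."""
    def __init__(self, dim):
        super().__init__()
        # Extract: here a single linear layer over [h_s ; h_t]; edge features e are omitted
        self.extract = nn.Linear(2 * dim, dim)

    def forward(self, h, edges):
        # h: (num_nodes, dim) node states H^{l-1}; edges: list of (s, t) pairs
        out = h.clone()
        for t in range(h.size(0)):
            msgs = [self.extract(torch.cat([h[s], h[t]])) for s, tt in edges if tt == t]
            if msgs:
                out[t] = torch.stack(msgs).mean(dim=0)  # Aggregate = mean over N(t)
        return out
```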

Heterogeneous GNNs

It seems interaction matrices are used to share part of the weights across types.

HETEROGENEOUS GRAPH TRANSFORMER

Its idea is to use the meta relations of heterogeneous graphs to parameterize weight matrices for the heterogeneous mutual attention, message passing, and propagation steps.

general attention-based GNNs

\[H^l[t] \leftarrow \underset{\forall s \in N(t), \forall e \in E(s, t)}{\text { Aggregate }}(\text { Attention }(s, t) \cdot \text { Message }(s)) \]

\[\begin{aligned} \operatorname{Attention}_{G A T}(s, t) & =\underset{\forall s \in N(t)}{\operatorname{Softmax}}\left(\vec{a}\left(W H^{l-1}[t] \| W H^{l-1}[s]\right)\right) \\ \operatorname{Message}_{G A T}(s) & =W H^{l-1}[s] \\ \operatorname{Aggregate}_{G A T}(\cdot) & =\sigma(\operatorname{Mean}(\cdot)) \end{aligned} \]
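
A small sketch of this GAT-style update for a single target node t. W and a are hypothetical parameter tensors of shape (d_out, d_in) and (2*d_out,); it follows the note's Aggregate = σ(Mean(·)) and, like the formula above, leaves out GAT's LeakyReLU on the logits:

```python
import torch
import torch.nn.functional as F

def gat_update(h_t, h_neighbors, W, a):
    # h_t: (d_in,); h_neighbors: (n, d_in) states of N(t); W: (d_out, d_in); a: (2*d_out,)
    z_t = W @ h_t                                             # W H^{l-1}[t]
    z_s = h_neighbors @ W.T                                   # Message(s) = W H^{l-1}[s]
    logits = torch.cat([z_t.expand_as(z_s), z_s], dim=1) @ a  # a(W h_t || W h_s)
    alpha = F.softmax(logits, dim=0)                          # Attention(s, t) over N(t)
    return torch.sigmoid((alpha.unsqueeze(1) * z_s).mean(dim=0))  # sigma(Mean(.))
```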

How HGT computes this

Note that all the W matrices here are type-aware.

\[\begin{gathered} \operatorname{Head}_k^{ATT}(i, j)=\left(\frac{\mathbf{K}_i^k \mathbf{W}_{\psi(i, j)}^{ATT}\left(\mathbf{Q}_j^k\right)^{\mathrm{T}}}{\sqrt{d}}\right) \mu(\phi(i), \psi(i, j), \phi(j)) \\ \operatorname{Attention}(i, j)=\underset{i \in N(j)}{\operatorname{Softmax}}\left(\|_k \operatorname{Head}_k^{ATT}(i, j)\right) \end{gathered} \]

\[\begin{gathered} \operatorname{Message}(i, j)=\|_k\left(\mathbf{h}_i \mathbf{W}_{\phi(i)}^k\right) \mathbf{W}_{\psi(i, j)}^{MSG} \\ \mathbf{h}_j=\sum_{i \in N(j)} \operatorname{Attention}(i, j) \odot \operatorname{Message}(i, j) \end{gathered} \]
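
A tiny sketch of one attention head k under the row-vector convention above. K_proj, Q_proj, M_proj stand in for the type-specific linear maps (K-Linear for φ(i), Q-Linear for φ(j), M-Linear for φ(i)), and mu is the learned meta-relation prior μ(φ(i), ψ(i,j), φ(j)); all the names are mine:

```python
import math
import torch

def hgt_head(h_i, h_j, K_proj, Q_proj, M_proj, W_att, W_msg, mu, d):
    # h_i, h_j: (d,) row vectors; every matrix argument is (d, d)
    K_i = h_i @ K_proj                                # type-specific key projection (source)
    Q_j = h_j @ Q_proj                                # type-specific query projection (target)
    score = (K_i @ W_att @ Q_j) / math.sqrt(d) * mu   # Head_k^ATT(i, j)
    msg = (h_i @ M_proj) @ W_msg                      # one head of Message(i, j)
    return score, msg
```

The softmax over i ∈ N(j) then turns the concatenated head scores into Attention(i, j), which weights the concatenated messages before they are summed into h_j.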

RTE

Based on the time gap \(\Delta t\) on an edge, a generated vector (a bank of sine/cosine basis functions, passed through a linear layer) is added to the source node's representation.
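
A sketch of RTE as read from the note: a fixed sinusoidal basis indexed by Δt, passed through a linear layer and added to the source node state (dim assumed even, max_dt a hypothetical cap on the time gap):

```python
import math
import torch
import torch.nn as nn

class RTE(nn.Module):
    def __init__(self, dim, max_dt=240):
        super().__init__()
        pos = torch.arange(max_dt).float().unsqueeze(1)
        div = torch.exp(torch.arange(0, dim, 2).float() * -(math.log(10000.0) / dim))
        base = torch.zeros(max_dt, dim)
        base[:, 0::2] = torch.sin(pos * div)   # sine/cosine basis over Delta t
        base[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("base", base)
        self.lin = nn.Linear(dim, dim)         # the "pass through a linear layer" step

    def forward(self, h_src, dt):
        # h_src: (dim,) source node state; dt: integer time gap on the edge
        return h_src + self.lin(self.base[dt])
```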

HGSampling

Web-scale graphs require sampling. Roughly: expand neighbor nodes from the current frontier, computing sampling probabilities separately for each node type.

It seems n nodes are sampled for every type, without accounting for how the type counts differ.
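
A rough sketch of the per-type expansion as described here (fixed n per type, uniform choice within a type; the paper's importance-based sampling probabilities and budgets are not reproduced):

```python
import random
from collections import defaultdict

def hg_sample(seed_nodes, neighbors, node_type, n_per_type, depth=2):
    # neighbors: dict node -> iterable of adjacent nodes; node_type: dict node -> type
    sampled, frontier = set(seed_nodes), list(seed_nodes)
    for _ in range(depth):
        buckets = defaultdict(set)
        for u in frontier:                      # expand neighbors of the current frontier
            for v in neighbors[u]:
                if v not in sampled:
                    buckets[node_type[v]].add(v)
        frontier = []
        for t, cand in buckets.items():         # sample n nodes per node type
            picked = random.sample(sorted(cand), min(n_per_type, len(cand)))
            sampled.update(picked)
            frontier.extend(picked)
    return sampled
```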

A Survey on Heterogeneous Graph Embedding: Methods, Techniques, Applications and Sources

Heterogeneous graph neural networks analysis: a survey of techniques, evaluations and applications

Network Schema: a meta-level template (paradigm) of the graph, defined over node types and edge types.

Metapath: a path on the network schema, i.e. a sequence of node types connected by edge types.

Heterogeneous Graph Embedding

convolution-based HGNN

The idea is to aggregate information from surrounding nodes into the current node to form its embedding.

More efficient, but with more parameters and higher memory cost.

HAN

Among the neighbors along one metapath, node-level attention and embeddings are computed from each other.

After obtaining the different semantic node embeddings for multiple metapaths, a second attention pass (MLP, activation, attention vector) is applied, and the final embedding is computed from it; a sketch of this semantic-level step follows.
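
A sketch of that second, semantic-level pass (projection, tanh activation, a learned attention vector q; the hidden size is my choice):

```python
import torch
import torch.nn as nn

class SemanticAttention(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.proj = nn.Linear(dim, hidden)          # the MLP step
        self.q = nn.Parameter(torch.randn(hidden))  # the attention vector

    def forward(self, z):
        # z: (num_metapaths, num_nodes, dim) semantic node embeddings
        w = torch.tanh(self.proj(z)) @ self.q        # score each metapath's embeddings
        beta = torch.softmax(w.mean(dim=1), dim=0)   # one weight per metapath
        return (beta.view(-1, 1, 1) * z).sum(dim=0)  # final embedding per node
```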

HAHE: uses cosine similarity instead of the attention mechanism to calculate the two kinds of importance.

MAGNN: also encodes the intermediate nodes of a metapath into the semantic information via an encoder; the paper doesn't spell out how the encoder works, so I don't fully follow it.

All of the above require manually specified metapaths.

GTN

Graph Transformer Networks: splits out subgraphs and then learns embeddings on them.

But GTN only considers edge types while ignoring the different types of nodes.

HetSANN

Heterogeneous Graph Structural Attention Neural Network

First projects neighbor nodes into the space of the selected node, then learns each node's embedding through a type-aware attention layer; this accounts for neighbors differing in both type and importance.

HGT

Same as the paper above.

Both HetSANN and HGT use layered attention mechanisms in place of metapaths, but at the cost of more parameters.

HGCN

Views a heterogeneous graph as composed of multiple bipartite subgraphs: runs a GCN on each subgraph, then aggregates the results with (type-aware) attention to obtain the final embedding; a per-subgraph sketch follows.
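
A sketch of the per-subgraph piece: one GCN layer over a single bipartite subgraph's biadjacency matrix (simple row normalization here; the type-aware attention that fuses the subgraph outputs would sit on top, much like the semantic attention sketch above):

```python
import torch
import torch.nn as nn

class BipartiteGCNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, A, h_src):
        # A: (num_targets, num_sources) biadjacency of one bipartite subgraph
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)   # row-normalize, avoid /0
        return torch.relu(self.lin((A / deg) @ h_src))  # aggregate then transform
```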

Autoencoder-based approaches

HIN2Vec

The conceptual NN was originally meant to compute, for nodes i and j, a probability for every relation, but that is too costly, so instead it computes the probability that i and j have a specific relation r.

\[P(r \mid i, j)=\operatorname{sigmoid}\left(\sum \mathbf{W}_I \vec{i} \odot \mathbf{W}_J \vec{j} \odot \mathbf{W}_R \vec{r}\right) \]

Then a cross-entropy loss is computed.

Training data is generated via random walks that yield metapath instances; each metapath type stands for one relation.
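
A sketch of that probability, with hypothetical projection matrices W_I, W_J, W_R and the node/relation one-hots already as dense vectors:

```python
import torch

def hin2vec_prob(i_vec, j_vec, r_vec, W_I, W_J, W_R):
    # Element-wise (Hadamard) product of the three projected vectors, summed, then sigmoid
    return torch.sigmoid(((W_I @ i_vec) * (W_J @ j_vec) * (W_R @ r_vec)).sum())

# The binary cross-entropy of this probability against whether (i, j) actually
# has relation r in the random-walk samples gives the training loss.
```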

SHINE

Uses three autoencoders to compute three different kinds of semantic information.

Utilizes the topological structure of heterogeneous graphs.

HNE

Each node represents a text or an image; adopts CNNs and fully connected layers.

NSHE

First uses a GCN (again: project into vectors, then aggregate) to cover first-order proximity, then uses multiple autoencoders whose outputs are concatenated for the prediction task.

Adversarial-based approaches

Adversarial networks: a discriminator and a generator that learn from each other.

HeGAN

discriminator:

\[D\left(\mathbf{h}_j \mid i, r ; \theta^D\right)=\frac{1}{1+\exp \left(-\mathbf{h}_i^D \mathbf{M}_r^D \mathbf{h}_j^D\right)} \]

generator:

\[G\left(i, r ; \theta^G\right)=\sigma\left(\mathbf{W}_L \cdots \sigma\left(\mathbf{W}_1 \mathbf{h}+\mathbf{b}_1\right)+\mathbf{b}_L\right) \]

For a given relation, the generator produces a fake node each round and hands it to the discriminator for judgment; each computes its own loss. Once the discriminator can no longer tell them apart, the generator can be used to produce embeddings. A sketch of both pieces follows.
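
A sketch of the two formulas above, with all tensors hypothetical: the discriminator is a sigmoid over a relation-specific bilinear form, the generator an MLP with sigmoid activations:

```python
import torch

def discriminator(h_i, M_r, h_j):
    # D(h_j | i, r) = sigmoid(h_i^T M_r h_j) with a relation-specific matrix M_r
    return torch.sigmoid(h_i @ M_r @ h_j)

def generator(h, layers):
    # layers: list of (W, b); computes sigma(W_L ... sigma(W_1 h + b_1) ... + b_L),
    # producing a fake node embedding from the (node, relation)-conditioned input h
    for W, b in layers:
        h = torch.sigmoid(W @ h + b)
    return h
```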

Dynamic heterogeneous graph learning

Compute an embedding for each timestamp, then combine them with attention; a pooling sketch follows.
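
A minimal sketch of that combination step, assuming one embedding per snapshot and a single learned attention vector (my construction, not a specific paper's):

```python
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Parameter(torch.randn(dim))   # learned scoring vector

    def forward(self, z):
        # z: (num_timestamps, dim) embeddings, one per time step
        alpha = torch.softmax(z @ self.q, dim=0)    # attention over timestamps
        return (alpha.unsqueeze(1) * z).sum(dim=0)  # combined embedding
```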
