[Model] Implement model Unimp #83

Open · wants to merge 25 commits into main
Changes from 1 commit
add doc
yangcd-bupt committed Oct 28, 2022
commit 9a12bf4e2001384f64e72a7f5f1cbd6d2c38c134
41 changes: 41 additions & 0 deletions gammagl/layers/conv/multi_head.py
@@ -3,6 +3,47 @@
from gammagl.utils import segment_softmax

class MultiHead(MessagePassing):

r"""A module for attention mechanisms which runs through an attention mechanism several times in parallel.

The independent attention outputs are then concatenated and linearly transformed into the expected dimension.

Intuitively, multiple attention heads allows for attending to parts of the sequence differently (e.g. longer-term dependencies versus shorter-term dependencies).

.. math::
\mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
\sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},

where the attention coefficients :math:`\alpha_{i,j}` are computed as

.. math::
\alpha_{i,j} =
\frac{
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
\right)\right)}
{\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
[\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
\right)\right)}.

Parameters
----------
in_features: int or tuple
    Size of each input sample, or :obj:`-1` to
    derive the size from the first input(s) to the forward method.
    A tuple corresponds to the sizes of the source and target
    dimensionalities.
out_features: int
    Size of each output sample.
n_heads: int
    Number of attention heads.
    (default: :obj:`1`)
num_nodes: int
    Number of nodes in the graph.

"""

def __init__(self, in_features, out_features, n_heads, num_nodes):
    super().__init__()
    self.heads = n_heads
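
To make the attention formula above concrete, here is a small NumPy sketch of the single-head coefficients :math:`\alpha_{i,j}`. This is an illustration only, not the gammagl implementation: the function name, the shapes, and the assumption that self-loops are already present in `edge_index` are all mine.

```python
import numpy as np

def leaky_relu(x, negative_slope=0.2):
    return np.where(x > 0, x, negative_slope * x)

def attention_coefficients(x, edge_index, theta, a):
    """Per-edge alpha_{i,j}; assumes self-loops are already in edge_index.

    x:          [num_nodes, in_features]    node features
    edge_index: [2, num_edges]              COO edges; row 0 = target i, row 1 = source j
    theta:      [in_features, out_features] shared linear transform
    a:          [2 * out_features]          attention vector
    """
    h = x @ theta                              # Theta x for every node
    dst, src = edge_index[0], edge_index[1]
    # e_{i,j} = LeakyReLU(a^T [Theta x_i || Theta x_j]), one scalar per edge
    e = leaky_relu(np.concatenate([h[dst], h[src]], axis=1) @ a)
    exp_e = np.exp(e - e.max())                # global shift; softmax is shift-invariant
    denom = np.zeros(x.shape[0])
    np.add.at(denom, dst, exp_e)               # segment sum over each target's neighbourhood
    return exp_e / denom[dst]
```

The per-target normalisation in the last three lines is presumably what the `segment_softmax` utility imported at the top of this file provides.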
23 changes: 23 additions & 0 deletions gammagl/models/unimp.py
@@ -3,6 +3,29 @@
from gammagl.layers import MultiHead

class Unimp(tlx.nn.Module):
Contributor

Please add an rst doc here; refer to this link: https://docs.qq.com/pdf/DUXRTTU9tUnB1WnFB.


r"""The graph attentional operator from the `"Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification"
<https://arxiv.org/abs/2009.03509>`_ paper

Parameters
----------
dataset:
num_node_features: int
Input feature dimension
num_nodes: int
Number of nodes
x: [num_nodes, num_node_features]
Feature of node
edge_index: [2, num_edges]
Graph connectivity in COO format
edge_attr: [num_edges, num_edge_features]
Edge feature matrix
y: [1. *]
Target to train against (may have arbitrary shape)
pos: [num_nodes, num_dimensions]
Node position matrix
"""

def __init__(self, dataset):
    super(Unimp, self).__init__()
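
Since this commit only shows the constructor, here is a hedged sketch of how the model might wire the new `MultiHead` layers together. Everything beyond the `__init__(self, dataset)` signature is an assumption: the hidden size, the number of heads, the `dataset.num_node_features` / `dataset.num_classes` / `dataset.num_nodes` attributes, and the `(x, edge_index)` call signature of `MultiHead` are not shown in this diff.

```python
import tensorlayerx as tlx
from gammagl.layers import MultiHead  # import path as used in this PR

class UnimpSketch(tlx.nn.Module):
    """Hypothetical two-layer UniMP-style stack (not the PR's code)."""

    def __init__(self, dataset, hidden_dim=64, n_heads=2):
        super().__init__()
        # Assumes dataset exposes the attributes listed in the docstring above.
        self.conv1 = MultiHead(dataset.num_node_features, hidden_dim,
                               n_heads, dataset.num_nodes)
        self.conv2 = MultiHead(hidden_dim, dataset.num_classes,
                               n_heads, dataset.num_nodes)
        self.relu = tlx.nn.ReLU()

    def forward(self, x, edge_index):
        # Whether MultiHead concatenates its heads (out_features * n_heads)
        # is not visible in this hunk; this sketch assumes it does not.
        x = self.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```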