model.GCN package¶
Submodules¶
model.GCN.GCN module¶
- class model.GCN.GCN.GCNResnet(model, num_classes, in_channel=300, t=0, adj_file=None)¶
Bases: torch.nn.modules.module.Module
- forward(feature, inp)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- get_config_optim(lr, lrp)¶
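The constructor signature follows the ML-GCN pattern (Chen et al., CVPR 2019): a CNN backbone extracts image features while a two-layer GCN propagates label-word embeddings of dimension in_channel over a label co-occurrence graph built from adj_file (binarized at threshold t), and classification scores come from the dot product of the two. A minimal sketch of that forward pass; all internal names and layer sizes are assumptions, and adj is passed explicitly here although the real class builds it from adj_file:

    import torch
    import torch.nn as nn

    class GCNResnetSketch(nn.Module):
        # Illustrative only; sizes assume a ResNet backbone with 2048-dim features.
        def __init__(self, backbone, num_classes, in_channel=300):
            super().__init__()
            self.features = backbone                    # CNN trunk without classifier
            self.pooling = nn.AdaptiveMaxPool2d(1)
            self.gc1 = nn.Linear(in_channel, 1024)      # stand-ins for GraphConvolution
            self.gc2 = nn.Linear(1024, 2048)
            self.relu = nn.LeakyReLU(0.2)

        def forward(self, feature, inp, adj):
            # feature: (B, 3, H, W) images; inp: (num_classes, in_channel)
            # label embeddings; adj: normalized label adjacency matrix.
            x = self.pooling(self.features(feature)).flatten(1)  # (B, 2048)
            w = self.relu(self.gc1(adj @ inp))                   # GCN layer 1
            w = self.gc2(adj @ w)                                # (num_classes, 2048)
            return x @ w.t()                                     # (B, num_classes)

get_config_optim(lr, lrp) presumably returns optimizer parameter groups so the pretrained backbone can be trained with a smaller learning rate (scaled by lrp) than the freshly initialized GCN layers.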
- class model.GCN.GCN.GraphAttentionLayer(in_features, out_features, dropout, alpha, batch_size, concat=True)¶
Bases: torch.nn.modules.module.Module
Graph Attention Layer. Reference: https://arxiv.org/abs/1710.10903. The basic GCN layer in the recognition code can be replaced with this layer.
- forward(input, adj)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reset_parameters()¶
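For orientation, a compact sketch of the dense attention such a layer computes: project node features, score each node pair with a shared attention vector, mask non-edges, and aggregate neighbors with softmax weights. This is the generic GAT recipe, not this class's exact code (which also takes batch_size, suggesting it handles batched adjacency matrices):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GATLayerSketch(nn.Module):
        # Dense single-graph GAT attention; assumes adj contains self-loops.
        def __init__(self, in_features, out_features, dropout, alpha, concat=True):
            super().__init__()
            self.W = nn.Parameter(torch.empty(in_features, out_features))
            self.a = nn.Parameter(torch.empty(2 * out_features, 1))
            nn.init.xavier_uniform_(self.W)
            nn.init.xavier_uniform_(self.a)
            self.leakyrelu = nn.LeakyReLU(alpha)
            self.dropout = dropout
            self.concat = concat

        def forward(self, input, adj):
            h = input @ self.W                                  # (N, F')
            f = h.size(1)
            # e_ij = LeakyReLU(a^T [h_i || h_j]), computed without
            # materializing all concatenated pairs.
            e = self.leakyrelu((h @ self.a[:f]) + (h @ self.a[f:]).T)  # (N, N)
            e = e.masked_fill(adj == 0, float('-inf'))          # attend along edges only
            att = F.dropout(F.softmax(e, dim=1), self.dropout, self.training)
            h_prime = att @ h
            return F.elu(h_prime) if self.concat else h_prime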
- class model.GCN.GCN.GraphConvolution(in_features, out_features, dropout=0.0, bias=True)¶
Bases: torch.nn.modules.module.Module
Simple GCN layer, similar to https://arxiv.org/abs/1609.02907.
in_features: input feature dimension
out_features: output feature dimension
dropout: dropout rate, default 0.0 (i.e. disabled); applied via the nn.Dropout function
bias: whether the GCN layer has a learnable bias, default True
- forward(input, adj)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reset_parameters()¶
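A minimal sketch of the propagation rule such a layer implements, output = adj @ input @ W (+ bias); the uniform initialization shown in reset_parameters below is a common choice and an assumption about this module, not its confirmed code:

    import math
    import torch
    import torch.nn as nn

    class GraphConvolutionSketch(nn.Module):
        # Kipf & Welling (2017) propagation: output = adj @ input @ W + b.
        def __init__(self, in_features, out_features, bias=True):
            super().__init__()
            self.weight = nn.Parameter(torch.empty(in_features, out_features))
            self.bias = nn.Parameter(torch.empty(out_features)) if bias else None
            self.reset_parameters()

        def reset_parameters(self):
            stdv = 1.0 / math.sqrt(self.weight.size(1))
            self.weight.data.uniform_(-stdv, stdv)
            if self.bias is not None:
                self.bias.data.uniform_(-stdv, stdv)

        def forward(self, input, adj):
            support = input @ self.weight   # feature transform
            output = adj @ support          # neighborhood aggregation
            return output if self.bias is None else output + self.bias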
- model.GCN.GCN.gen_A(num_classes, t, adj_file)¶
- model.GCN.GCN.norm_adj(A)¶
- model.GCN.GCN.norm_adj_batch(A)¶
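gen_A presumably builds the label correlation matrix from co-occurrence statistics stored in adj_file, binarized at threshold t (the ML-GCN recipe), while norm_adj and norm_adj_batch apply the standard symmetric normalization D^(-1/2)(A + I)D^(-1/2). A hedged sketch of that normalization; the self-loop and the batched variant are assumptions:

    import torch

    def norm_adj_sketch(A):
        # Symmetric normalization of a single (N, N) adjacency matrix;
        # self-loops keep each node's own features in the aggregation.
        A = A + torch.eye(A.size(0), device=A.device)
        d = torch.diag(A.sum(dim=1).pow(-0.5))
        return d @ A @ d

    def norm_adj_batch_sketch(A):
        # Same normalization applied per graph in a (B, N, N) batch.
        B, N, _ = A.shape
        A = A + torch.eye(N, device=A.device).expand(B, N, N)
        d = torch.diag_embed(A.sum(dim=2).pow(-0.5))   # (B, N, N)
        return d @ A @ d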
model.GCN.Graph module¶
model.GCN.Graphsage module¶
- class model.GCN.Graphsage.BatchedGraphSAGE(infeat, outfeat, use_bn=False, mean=False, add_self=False)¶
Bases: torch.nn.modules.module.Module
- forward(x, adj, mask=None)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
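A hedged sketch of what a batched GraphSAGE layer with these flags typically does, per https://arxiv.org/abs/1706.02216: aggregate neighbor features through the adjacency matrix (optionally with self-loops and mean pooling), apply a shared linear map, and L2-normalize. The use_bn and mask arguments are omitted and all internals are assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BatchedGraphSAGESketch(nn.Module):
        def __init__(self, infeat, outfeat, mean=False, add_self=False):
            super().__init__()
            self.mean = mean
            self.add_self = add_self
            self.W = nn.Linear(infeat, outfeat)

        def forward(self, x, adj):
            # x: (B, N, infeat) node features; adj: (B, N, N) adjacency.
            if self.add_self:
                adj = adj + torch.eye(adj.size(1), device=adj.device)
            h = adj @ x                                        # sum over neighbors
            if self.mean:
                h = h / adj.sum(dim=2, keepdim=True).clamp(min=1)
            return F.normalize(self.W(h), p=2, dim=2)          # per-node L2 norm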
model.GCN.non_local module¶
model.GCN.sparseGCN module¶
- class model.GCN.sparseGCN.SpGraphAttentionLayer(in_features, out_features, dropout, alpha, concat=True)¶
Bases: torch.nn.modules.module.Module
Sparse version of the GAT layer, similar to https://arxiv.org/abs/1710.10903.
- forward(input, adj)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
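The sparse variant scores only existing edges instead of all N x N pairs, and typically uses the sparse matmul helpers below both to aggregate features and to compute the softmax denominator. A hedged sketch of that pattern, with special_spmm standing in for a SpecialSpmm instance; all names and details are assumptions:

    import torch
    import torch.nn.functional as F

    def sparse_gat_forward_sketch(h, edge_index, a, special_spmm):
        # h: (N, F') projected node features; edge_index: (2, E) COO edges;
        # a: (1, 2F') attention vector.
        N = h.size(0)
        edge_h = torch.cat((h[edge_index[0]], h[edge_index[1]]), dim=1)  # (E, 2F')
        edge_e = torch.exp(-F.leaky_relu(a.mm(edge_h.t()).squeeze(0)))   # (E,)
        # Sparse @ ones accumulates the exp-logits per row: the softmax denominator.
        rowsum = special_spmm(edge_index, edge_e, torch.Size([N, N]),
                              torch.ones(N, 1, device=h.device))         # (N, 1)
        h_prime = special_spmm(edge_index, edge_e, torch.Size([N, N]), h)
        return h_prime / (rowsum + 1e-9)                                  # normalized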
- class model.GCN.sparseGCN.SpecialSpmm¶
Bases: torch.nn.modules.module.Module
- forward(indices, values, shape, b)¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class model.GCN.sparseGCN.SpecialSpmmFunction¶
Bases: torch.autograd.function.Function
Special function for backpropagation through the sparse region only.
- static backward(ctx, grad_output)¶
Defines a formula for differentiating the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by as many outputs as forward() returned, and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input.
The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.
- static forward(ctx, indices, values, shape, b)¶
Performs the operation.
This function is to be overridden by all subclasses.
It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).
The context can be used to store tensors that can then be retrieved during the backward pass.
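Taken together, SpecialSpmmFunction and its nn.Module wrapper SpecialSpmm implement a sparse-dense product a @ b whose backward pass returns gradients only for the nonzero values of a (and for b), which is what "sparse region backpropagation" refers to. A hedged sketch of that pattern:

    import torch

    class SpmmSketch(torch.autograd.Function):
        # a @ b where a is given in COO form (indices, values, shape);
        # only values and b receive gradients.
        @staticmethod
        def forward(ctx, indices, values, shape, b):
            a = torch.sparse_coo_tensor(indices, values, shape)
            ctx.save_for_backward(a, b)
            return torch.sparse.mm(a, b)

        @staticmethod
        def backward(ctx, grad_output):
            a, b = ctx.saved_tensors
            grad_values = grad_b = None
            if ctx.needs_input_grad[1]:
                # d(a@b)/d(values): entries of grad_output @ b.T taken at
                # the nonzero positions of a.
                idx = a._indices()
                grad_values = (grad_output @ b.t())[idx[0], idx[1]]
            if ctx.needs_input_grad[3]:
                grad_b = torch.sparse.mm(a.t(), grad_output)
            return None, grad_values, None, grad_b

The SpecialSpmm wrapper's forward then presumably just returns SpecialSpmmFunction.apply(indices, values, shape, b), giving the function an nn.Module interface so it composes with the rest of the model.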