Designing a GCN

Designing a GCN with NAGL primarily involves creating an instance of the GNNModel class. This can be done straightforwardly in Python:

from openff.nagl.features import atoms, bonds
from openff.nagl import GNNModel
from openff.nagl.nn import gcn
from openff.nagl.nn.activation import ActivationFunction
from openff.nagl.nn import postprocess

atom_features = (
    atoms.AtomicElement(["C", "H", "O", "N", "P", "S"]),
    atoms.AtomConnectivity(),
    atoms.AtomAverageFormalCharge(),
    atoms.AtomHybridization(),
    atoms.AtomInRingOfSize(3),
    atoms.AtomInRingOfSize(4),
    atoms.AtomInRingOfSize(5),
    atoms.AtomInRingOfSize(6),
)

bond_features = (
    bonds.BondOrder(),
    bonds.BondInRingOfSize(3),
    bonds.BondInRingOfSize(4),
    bonds.BondInRingOfSize(5),
    bonds.BondInRingOfSize(6),
)

model = GNNModel(
    convolution_architecture=gcn.SAGEConvStack,
    n_convolution_hidden_features=128,
    n_convolution_layers=3,
    n_readout_hidden_features=128,
    n_readout_layers=4,
    activation_function=ActivationFunction.ReLU,
    postprocess_layer=postprocess.ComputePartialCharges,
    learning_rate=0.001,
    atom_features=atom_features,
    bond_features=bond_features,
)

Or if you prefer, the same model architecture can be specified as a YAML file:

convolution_architecture: SAGEConv
postprocess_layer: compute_partial_charges

activation_function: ReLU
learning_rate: 0.001
n_convolution_hidden_features: 128
n_convolution_layers: 3
n_readout_hidden_features: 128
n_readout_layers: 4

atom_features:
  - AtomicElement:
      categories: ["C", "H", "O", "N", "P", "S"]
  - AtomConnectivity
  - AtomAverageFormalCharge
  - AtomHybridization
  - AtomInRingOfSize: 3
  - AtomInRingOfSize: 4
  - AtomInRingOfSize: 5
  - AtomInRingOfSize: 6
bond_features:
  - BondOrder
  - BondInRingOfSize: 3
  - BondInRingOfSize: 4
  - BondInRingOfSize: 5
  - BondInRingOfSize: 6

And then loaded with the GNNModel.from_yaml() method:

from openff.nagl import GNNModel

model = GNNModel.from_yaml("model.yml")

Here we’ll go through each option, what it means, and where to find the available choices.

atom_features and bond_features

These arguments specify the featurization scheme for the model (see Featurization). atom_features takes a tuple of features from the openff.nagl.features.atoms module, and bond_features a tuple of features from the openff.nagl.features.bonds module. Each feature is a class that must be instantiated, possibly with some arguments. Custom features may be implemented by subclassing AtomFeature or BondFeature; both share the interface of their base class Feature.
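For intuition, a categorical feature like AtomicElement amounts to a one-hot encoding over the categories passed at instantiation. The following is a minimal, dependency-free sketch of that idea, not NAGL's actual implementation:

```python
# Conceptual sketch of categorical featurization (illustrative only,
# not NAGL's implementation): an AtomicElement-style feature one-hot
# encodes each atom's element over the categories chosen at instantiation.

def one_hot(value, categories):
    """Return a one-hot vector marking `value`'s position in `categories`."""
    return [1.0 if value == category else 0.0 for category in categories]

# Featurize the atoms of methanol (CH3OH) over a reduced element set
elements = ["C", "H", "O"]
methanol_atoms = ["C", "O", "H", "H", "H", "H"]
features = [one_hot(symbol, elements) for symbol in methanol_atoms]

print(features[0])  # carbon -> [1.0, 0.0, 0.0]
print(features[1])  # oxygen -> [0.0, 0.0, 1.0]
```

Feature vectors for all atoms are concatenated feature-wise, so the length of a molecule's atom feature vector is the sum of the lengths of the individual features.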

convolution_architecture

The convolution_architecture argument specifies the structure of the convolution module. Available options are provided in the openff.nagl.nn.gcn module.
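The SAGEConv architecture used in the examples above follows GraphSAGE-style message passing. As a rough, dependency-free sketch of one such update step (illustrative only, not NAGL's implementation):

```python
# Illustrative GraphSAGE-style update (not NAGL's implementation):
# each atom's vector is updated from its own features concatenated with
# the mean of its neighbours' features. A real layer would then apply a
# learned weight matrix and an activation function.

def sage_update(features, neighbours):
    """One message-passing step: concat(own, mean of neighbour features)."""
    updated = []
    for i, own in enumerate(features):
        neighbour_feats = [features[j] for j in neighbours[i]]
        n = len(neighbour_feats)
        mean = [sum(col) / n for col in zip(*neighbour_feats)]
        updated.append(own + mean)  # list concatenation
    return updated

# Water: O bonded to two H atoms (features are toy 2-vectors)
features = [[8.0, 2.0], [1.0, 1.0], [1.0, 1.0]]  # O, H, H
neighbours = [[1, 2], [0], [0]]
print(sage_update(features, neighbours)[0])  # [8.0, 2.0, 1.0, 1.0]
```

Stacking several such layers lets information propagate between atoms several bonds apart.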

Number of Features and Layers

The size and shape of the neural networks in the convolution and readout modules are specified by four arguments:

  • n_convolution_hidden_features

  • n_convolution_layers

  • n_readout_hidden_features

  • n_readout_layers

The “convolution” arguments define the update network in the convolution module, and the “readout” arguments define the network in the readout module (see The Convolution Module: Message-passing Graph Convolutional Networks and The Readout Module: The Charge Equilibration Method). Read the GNNModel docstring carefully to determine which layers are considered hidden.
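To make the counting concrete, here is a hedged sketch of how hidden-feature and layer counts translate into the weight-matrix shapes of a simple feed-forward stack; the exact bookkeeping in NAGL (including which layers count as hidden) may differ, and the input/output sizes here are illustrative assumptions:

```python
# Sketch: weight-matrix shapes of a feed-forward stack with `n_layers`
# hidden layers of `n_hidden` features each. Input/output sizes below
# are illustrative, not NAGL's actual values.

def layer_shapes(n_input, n_hidden, n_layers, n_output):
    sizes = [n_input] + [n_hidden] * n_layers + [n_output]
    return list(zip(sizes[:-1], sizes[1:]))

# A readout like the example: 128 hidden features, 4 hidden layers,
# mapping (say) 128 convolution outputs to 2 parameters per atom.
shapes = layer_shapes(n_input=128, n_hidden=128, n_layers=4, n_output=2)
print(shapes)  # [(128, 128), (128, 128), (128, 128), (128, 128), (128, 2)]
```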

activation_function

The activation_function argument defines the activation function used by the readout network (see Neural Networks - a quick primer). The activation function used by the convolution network is currently fixed to ReLU. Available activation functions are the variants (attributes) of the ActivationFunction class. When using the Python API rather than the YAML interface, other activation functions from PyTorch can be used.
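For reference, ReLU (the rectified linear unit) simply clamps negative inputs to zero, which is what makes the stacked linear layers non-linear overall:

```python
# ReLU: negative inputs are clamped to zero, positive inputs pass through.
def relu(x):
    return max(0.0, x)

print([relu(x) for x in [-2.0, -0.5, 0.0, 1.5]])  # [0.0, 0.0, 0.0, 1.5]
```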

postprocess_layer

postprocess_layer specifies a PyTorch Module that performs post-processing in the readout module (see The Readout Module: The Charge Equilibration Method). Post-processing layers inherit from the PostprocessLayer class and are found in the openff.nagl.nn.postprocess module.
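As an illustration of the kind of step such a layer performs, here is a dependency-free sketch of charge equilibration. It assumes the readout predicts a per-atom electronegativity chi and hardness eta, and chooses charges minimising sum(chi_i * q_i + 0.5 * eta_i * q_i**2) subject to a fixed total molecular charge; the actual compute_partial_charges layer may differ in detail:

```python
# Sketch of a charge-equilibration post-processing step (illustrative;
# the actual compute_partial_charges layer may differ in detail).
# Minimising sum(chi_i * q_i + 0.5 * eta_i * q_i**2) subject to
# sum(q_i) = Q gives q_i = (lam - chi_i) / eta_i, with the Lagrange
# multiplier lam fixed by the total-charge constraint.

def equilibrate_charges(chi, eta, total_charge=0.0):
    inv_eta_sum = sum(1.0 / e for e in eta)
    lam = (total_charge + sum(c / e for c, e in zip(chi, eta))) / inv_eta_sum
    return [(lam - c) / e for c, e in zip(chi, eta)]

# Two-atom toy molecule: the more electronegative atom goes negative,
# and the charges sum exactly to the total molecular charge.
charges = equilibrate_charges(chi=[2.0, 1.0], eta=[1.0, 1.0], total_charge=0.0)
print(charges)  # [-0.5, 0.5]
```

Enforcing the total-charge constraint analytically like this guarantees the predicted partial charges always sum to the molecule's formal charge.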