
DeepGraphMolGen, a multi-objective, computational strategy for generating molecules with desirable properties: a graph convolution and reinforcement learning approach

Abstract

We address the problem of generating novel molecules with desired interaction properties as a multi-objective optimization problem. Interaction binding models are learned from binding data using graph convolution networks (GCNs). Since experimentally obtained property scores can contain gross errors, we adopted a robust loss for the model. Combinations of these terms, including drug-likeness and synthetic accessibility, are then optimized using reinforcement learning based on a graph convolution policy approach. Some of the molecules generated, while chemically legitimate, can have excellent drug-likeness scores but appear unusual. We provide an example based on the binding potency of small molecules to dopamine transporters. We extend our method successfully to use a multi-objective reward function, in this case for generating novel molecules that bind to dopamine transporters but not to those for norepinephrine. Our method should be generally applicable to the in silico generation of molecules with desirable properties.

Introduction

The in silico (and experimental) generation of molecules or materials with desirable properties is an area of immense current interest (e.g. [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]). However, difficulties in producing novel molecules by current generative methods arise because of the discrete nature of chemical space, as well as the large number of molecules [29]. For example, the number of drug-like molecules has been estimated to be between \( 10^{23} \) and \( 10^{60} \) [30,31,32,33,34]. Moreover, a slight change in molecular structure can lead to a drastic change in a molecular property such as binding potency (so-called activity cliffs [35,36,37]).

Earlier approaches to understanding the relationship between molecular structure and properties used methods such as random forests [38, 39], shallow neural networks [40, 41], Support Vector Machines [42], and Genetic Programming [43]. However, with the recent developments in Deep Learning [44, 45], deep neural networks have come to the fore for property prediction tasks [3, 46,47,48]. Notably, Coley et al. [49] used graph convolutional networks effectively as a feature encoder for input to the neural network.

In the past few years, there have been many approaches to applying Deep Learning to molecule generation. Most papers use Simplified Molecular-Input Line-Entry System (SMILES) strings as inputs [50], and many use a Variational AutoEncoder architecture (e.g. [3, 17, 51]), with Bayesian Optimization in the latent space to generate novel molecules. However, the use of a sequence-based representational model poses a specific difficulty, as any method using it has to learn the inherent rules, in this case of SMILES strings. More recent approaches, such as Grammar Variational AutoEncoders [52, 53], have been developed in attempts to overcome this problem, but still the molecules generated are not always valid. Some other approaches use Reinforcement Learning for generating optimized molecules [54]. However, they too make use of SMILES strings, which, as indicated, poses a significant problem. In particular, the SMILES grammar is context-sensitive: the addition of an extra atom or bracket can change the structure of the encoded molecule dramatically, and not just ‘locally’ [55].

Earlier approaches tended to choose a specific encoding for the molecules used as input to the model, such as one-hot encoding [56, 57] or Extended Connectivity Fingerprints [58, 59], while Generative Examination Networks [60] use SMILES strings directly. We note that these encodings do not necessarily capture the features needed for the prediction of a specific property (and different encodings extract quite different and orthogonal features [61]).

In contrast, the most recent state-of-the-art methods, including hypergraph grammars [62], Junction Tree Variational AutoEncoders [63] and Graph Convolutional Policy Networks [34], use a graphical representation of molecules rather than SMILES strings and have achieved 100% validity in molecular generation. Graph-based methods have considerable utility (e.g. [64,65,66,67,68,69,70]) and can be seen as a more natural representation of molecules, as substructures map directly to subgraphs, whereas subsequences of a SMILES string are usually meaningless. However, these have only been used to compare models on deterministic properties such as the Quantitative Estimate of Drug-likeness (QED) [71], logP, etc., that can be calculated directly from molecular structures (e.g. using RDKit, http://www.rdkit.org/). For many other applications, molecules having a higher score for a specific measured property are more useful. We here try to tackle this problem (Fig. 1).

Fig. 1

Block diagram of our basic system. A molecule is generated by the Reinforcement Learning (RL) pathway using a Graph Convolutional Policy Network. This molecule is then used as input to the property prediction module, which outputs the predicted property score. This score is then used as the reward feedback for the RL pathway and the cycle restarts

Methods

Our system (Fig. 1) consists of two parts: property prediction and molecular generation. For both parts, we represent the molecules as graphs [72], since these are a more natural representation than SMILES strings, and substructures are simply subgraphs. We train a model to predict the property scores of molecules, specifically the binding constants of various molecules at the dopamine and norepinephrine transporters (using a dataset from BindingDB). The first part, property prediction, uses a Graph Convolutional Network as a feature encoder together with a Feed Forward Network. We also use an Adaptive Robust Loss Function (as suggested by [73]), since the experimental data are bound to be error-prone. For the molecular generation task, we use the method proposed by You and colleagues [34]. In particular, we (and they) use Reinforcement Learning for this task, since it allows us to incorporate both the molecular constraints and the desired properties using reward functions. This part uses a graph convolutional policy network (GCPN), a model in which a GCN predicts the next action (policy) given the molecule state. It is further guided by expert pretraining and an adversarial loss for generating valid molecules. Our code (https://github.com/dbkgroup/prop_gen) is essentially an integration of the property prediction code of Yang and colleagues [74, 75] (https://github.com/swansonk14/chemprop) and the reinforcement learning code provided by You and colleagues [34].

Molecular property prediction

As noted, the supervised property prediction model consists of a graph convolution network for feature extraction followed by a fully connected feedforward network for property prediction.

Feature extraction

We represent the molecules as directed graphs, with each atom \( i \) having a feature vector \( F_{i} \in {\mathbb{R}}^{133} \) and each bond (between atoms \( i \) and \( j \)) having a feature vector \( F_{ij} \in {\mathbb{R}}^{14} \). For each incoming bond, a feature vector is obtained by concatenating the feature vector of the atom to which the bond is incoming with the feature vector of the bond. Thus the input tensor has size \( N_{\text{bonds}} \times 147 \). The graph convolution approach allows the message (feature vector) for a bond to be passed around the entire graph using the approach described below.
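As a concrete illustration, the following minimal sketch (ours, not the authors' code) assembles the initial directed-bond input tensor described above. The orientation convention (each directed bond carries the features of its source atom, as in chemprop) and the helper names are assumptions.

```python
# Hypothetical sketch: build the (n_bonds, 147) input tensor from per-atom
# (133-dim) and per-bond (14-dim) features, as described in the text.
import numpy as np

def initial_bond_features(atom_feats, bond_list, bond_feats):
    """atom_feats: (n_atoms, 133); bond_list: directed (src, dst) pairs;
    bond_feats: (n_bonds, 14). Returns the (n_bonds, 147) input tensor."""
    rows = []
    for (src, dst), f_bond in zip(bond_list, bond_feats):
        # each directed bond src->dst concatenates its source atom's
        # features with the bond's own features (assumed orientation)
        rows.append(np.concatenate([atom_feats[src], f_bond]))
    return np.stack(rows)  # shape: (n_bonds, 147)
```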

The initial atom-bond feature vector that we use incorporates important molecular information that the GCN encoder can then incorporate in later layers. The initial representations for the atom and bond features are taken from https://github.com/swansonk14/chemprop and summarized in Table 1, below. Each descriptor is a one-hot vector covering the index-range represented by it (except the Atomic Mass). For Atomic Number, Degree, Formal Charge, Chiral Tag, Number of Hydrogens and Hybridization, the feature vector contains one additional dimension to allow uncommon values (values not in the specified range).

Table 1 Atom and bond features used in the present work

The initial atom-bond feature vector is then passed through a linear layer followed by ReLU activation [76, 77] to get the Depth-0 message vector for each bond. For each bond, the message vectors of the neighbouring bonds are summed (the convolution step) and passed through a linear layer followed by ReLU and a Dropout layer to get the Depth-1 message vectors. This process is continued up to the Depth-(N-1) message vectors. To get the Depth-N message vectors, the Depth-(N-1) vectors of all the incoming bonds for an atom are summed and then passed through a dense layer followed by ReLU and Dropout. The final graph embedding for the molecule is obtained by averaging the Depth-N message vectors over all the atoms. The exact details of this model can be found in Sect. “Hyperparameter optimization” (Fig. 2).
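The message-passing scheme just described can be sketched in PyTorch as below. This is a simplified illustration under stated assumptions (precomputed index lists for bond and atom neighbourhoods, an arbitrary hidden size), and it omits details such as the skip connections to the initial features used in chemprop.

```python
# Minimal sketch of the bond-level message passing and atom-level readout.
import torch
import torch.nn as nn

class MPNEncoder(nn.Module):
    def __init__(self, in_dim=147, hidden=300, depth=3, dropout=0.1):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hidden)    # depth-0 bond messages
        self.w_h = nn.Linear(hidden, hidden)     # bond-level convolutions
        self.w_out = nn.Linear(hidden, hidden)   # atom-level readout
        self.depth, self.act, self.drop = depth, nn.ReLU(), nn.Dropout(dropout)

    def forward(self, f_bonds, bond_nbrs, atom_in_bonds):
        # f_bonds: (n_bonds, 147); bond_nbrs / atom_in_bonds: lists of
        # LongTensors indexing the neighbouring / incoming bonds
        h = self.act(self.w_in(f_bonds))                       # depth 0
        for _ in range(self.depth - 1):                        # depths 1..N-1
            m = torch.stack([h[nb].sum(0) for nb in bond_nbrs])
            h = self.drop(self.act(self.w_h(m)))
        # depth N: sum incoming bond messages per atom, then a dense layer
        a = torch.stack([h[ib].sum(0) for ib in atom_in_bonds])
        a = self.drop(self.act(self.w_out(a)))
        return a.mean(dim=0)                                   # graph embedding
```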

Fig. 2

The property prediction pipeline for our method. The steps in green represent the feature extraction using graph convolution and the steps in orange represent regression of property scores. a The molecule is represented as a feature vector with features described as in Sect. “Molecular property prediction”. b The feature vector is passed through a linear layer to get the Depth-0 message. c Through repeated graph convolution (message passing) followed by a linear layer, we get the Depth-(N-1) message. d Each atom’s final message is calculated by summing up the messages (also graph convolution) of the neighbouring atoms. e The resultant message is passed through a linear layer and the mean over all the atoms is taken to get the final embedding. f The property score is regressed from the graph embedding by a Feed Forward Neural Network. g The loss between predicted property and ground truth property is then backpropagated to change the weights

Regression

To perform property prediction, the embedding extracted by the GCN is fed into a fully connected network. Each intermediate layer consists of a Linear Layer followed by ReLU activation and Dropout, mapping the hidden vector to another vector of the same size. Finally, the penultimate nodes are passed through a Linear Layer to output the predicted property score. The Ki values present in the dataset were obtained experimentally, so they may contain experimental errors. If we were to train our model with a simple loss function such as the root mean square (RMS) error loss, it would not generalize well because of the presence of outliers in the training set. Overcoming this problem requires training with a robust loss function that accounts for the outliers in the training data. There are several robust loss functions, such as Pseudo-Huber loss [78] and Cauchy loss, but each has an additional hyperparameter (for example δ in Huber loss) that is treated as a constant during training. This means the hyperparameter must be tuned manually for each training run, which may result in extensive training time. To overcome this problem, as proposed by [73], we used a general robust loss function whose hyperparameters are a shape parameter (α), which controls the robustness of the loss, and a scale parameter (c), which controls the size of the loss’s quadratic bowl near x = 0. This loss is dubbed a “general” loss since it takes the form of other loss functions for particular values of α (e.g. L2 loss for α = 2, Charbonnier loss for α = 1, Cauchy loss for α = 0). The authors also propose that “by viewing the loss function as the negative log likelihood of a probability distribution, and by treating robustness of the distribution as a latent variable” we can use gradient-based methods to maximize the likelihood without manual parameter tuning. In other words, we can now train the hyperparameters α and c, which overcomes the earlier problem of manual tuning. The loss function and the corresponding probability distribution are described in Eqs. 1 and 2, respectively.

$$ f(x, \alpha, c) = \frac{|\alpha - 2|}{\alpha}\left( \left( \frac{(x/c)^{2}}{|\alpha - 2|} + 1 \right)^{\alpha /2} - 1 \right) $$
(1)
$$ p(x \mid \alpha, c) = \frac{1}{c\,Z(\alpha)}\exp\left( -f(x, \alpha, c) \right), \qquad Z(\alpha) = \int_{-\infty}^{\infty} \exp\left( -f(x, \alpha, 1) \right)dx $$
(2)
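A minimal PyTorch sketch of this adaptive loss (Eq. 1) with trainable α and c is given below. It is illustrative only: it clamps α away from the singular points 0 and 2 rather than handling those limits exactly, and it omits the log-partition term log(c Z(α)) that full maximum-likelihood training against Eq. 2 would add; the reference implementation of [73] treats both properly.

```python
# Hedged sketch of the general robust loss of Eq. 1 with trainable
# shape (alpha) and scale (c) parameters.
import torch
import torch.nn as nn

class AdaptiveRobustLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.log_c = nn.Parameter(torch.tensor(0.0))  # c > 0 via exp

    def forward(self, pred, target):
        x = pred - target
        c = self.log_c.exp()
        a = self.alpha.clamp(min=1e-3)       # crude guard against alpha = 0
        b = (a - 2.0).abs().clamp(min=1e-3)  # crude guard against alpha = 2
        loss = (b / a) * (((x / c) ** 2 / b + 1.0) ** (a / 2.0) - 1.0)
        return loss.mean()
```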

Reinforcement learning for molecular generation

We follow the method described by the GCPN paper [34] for the molecular generation task, with the difference being that the final property reward is the value calculated by the previously trained model for the newly generated molecules. GCPN is a state-of-the-art molecule generator that utilizes Proximal Policy Optimization (PPO) as a Reinforcement Learning paradigm for generating molecules. A comparison of GCPN with other generative approaches can be found in Tables 2 and 3 which compare the ability of generators to produce molecules having higher property scores and targeted property scores, respectively. Note that even though we have chosen GCPN for the molecule generation pipeline, our strategy can be implemented using any graph-based Reinforcement Learning generator since we just need to use the predicted property score as the reward function.

Table 2 Comparison of the top 3 property scores of generated molecules found by each model
Table 3 Comparison of the effectiveness of property targeting task

Molecular representation

As in the previous part, we represent the molecules as graphs, more specifically as \( (A, E, F) \), where \( A \in \left\{ {0, 1} \right\}^{n \times n} \) is the adjacency matrix, \( F \in {\mathbb{R}}^{n \times d} \) is the node (atom) feature matrix and \( E \in \left\{ {0, 1} \right\}^{3 \times n \times n} \) is the edge-conditioned adjacency tensor (since the number of bond types is 3, namely single, double and triple bonds), with \( n \) being the number of atoms and \( d \) the length of the feature vector for each atom. More specifically, \( E_{i,j,k} = 1 \) if there exists a bond of type \( i \) between atoms \( j \) and \( k \), and \( A_{j,k} = 1 \) if there exists any bond between atoms \( j \) and \( k \).
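A short sketch of constructing \( (A, E, F) \) with RDKit is shown below. The atom featurizer is a placeholder for the Table 1 features, and the molecule is kekulized so that every bond falls into one of the three bond-type slices.

```python
# Sketch: build the (A, E, F) graph representation from a SMILES string.
import numpy as np
from rdkit import Chem

BOND_TYPES = {Chem.BondType.SINGLE: 0, Chem.BondType.DOUBLE: 1,
              Chem.BondType.TRIPLE: 2}

def graph_rep(smiles, featurize_atom):
    mol = Chem.MolFromSmiles(smiles)
    Chem.Kekulize(mol, clearAromaticFlags=True)  # aromatic -> single/double
    n = mol.GetNumAtoms()
    A = np.zeros((n, n), dtype=np.int8)
    E = np.zeros((3, n, n), dtype=np.int8)
    F = np.stack([featurize_atom(a) for a in mol.GetAtoms()])
    for bond in mol.GetBonds():
        j, k = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
        t = BOND_TYPES[bond.GetBondType()]
        A[j, k] = A[k, j] = 1
        E[t, j, k] = E[t, k, j] = 1
    return A, E, F
```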

Reinforcement learning setup

Our model environment builds a molecule step by step, adding a new bond at each step. We treat graph generation as a Markov Decision Process, so that the next action is predicted based only on the current state of the molecule, not on the path the generative process has taken. This removes the need for sequential models such as RNNs, with their associated vanishing-gradient problems, and reduces the memory load on the model. More specifically, the decision process follows the equation \( p(s_{t + 1} | s_{t}, \ldots, s_{0}) = p(s_{t + 1} | s_{t}) \), where \( p \) is the probability of the next state \( s_{t + 1} \) given the previous state \( s_{t} \).

We can initialize the generative process with either a single C atom (as in Experiments 1 and 2) or with another molecule (as in Experiments 3, 4 and 5). At any point in the generation process, the state of the environment is the graph of the molecule built up so far. The action space is a vector of length 4 containing: first atom, second atom, bond type, and stop. The stop signal is either 0 or 1, indicating whether the generation is complete, based on valence rules. If an action would defy the rules of chemistry in the resultant molecule, the action is discarded and the state remains as it is.

We make use of both intermediate and final rewards to guide the decision-making process. The intermediate rewards include stepwise validity checks, such that a small constant value is added to the reward if the molecule passes the valency checks. The final reward includes the pKi value of the final molecule as predicted by the trained model, plus validity rewards (+1 for not having any steric strain and +1 for the absence of functional groups that violate the ZINC functional group filters). Two other metrics are the quantitative estimate of drug-likeness (QED) [71] and the synthetic accessibility (SA) [80] score. Since our final goal is to generate drug-like molecules that can readily be synthesized, we also add the QED score and 2 times the SA score of the final molecule to the reward.
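The final-reward composition might be sketched as follows. This is our illustration, not the released code: predict_pki, passes_steric and passes_zinc_filters are assumed stand-ins for the trained model and the paper's validity checks, and the rescaling of the raw SA score to [0, 1] is an assumption made so that the reported SA values above 0.8 make sense.

```python
# Hypothetical sketch of the final reward: predicted pKi + validity
# rewards + QED + 2 * (normalized) SA.
from rdkit.Chem import QED
# sascorer.py ships with RDKit under Contrib/SA_Score and must be on the path
import sascorer

def final_reward(mol, predict_pki, passes_steric, passes_zinc_filters):
    r = predict_pki(mol)                           # property term
    r += 1.0 if passes_steric(mol) else 0.0        # +1 validity reward
    r += 1.0 if passes_zinc_filters(mol) else 0.0  # +1 validity reward
    sa_raw = sascorer.calculateScore(mol)          # 1 (easy) .. 10 (hard)
    sa = (10.0 - sa_raw) / 9.0                     # assumed rescaling to [0, 1]
    return r + QED.qed(mol) + 2.0 * sa
```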

Apart from this, we also use adversarial rewards so that the generated molecules (predictions) resemble the given set of real molecules. We define the adversarial reward \( V\left( {\pi_{\theta } ,D_{\varphi } } \right) \) in Eq. 3.

$$ \mathop{\min}\limits_{\theta} \mathop{\max}\limits_{\phi} V\left( \pi_{\theta}, D_{\phi} \right) = E_{x \sim p_{\text{data}}}\left[ \log D_{\phi}(x) \right] + E_{x \sim \pi_{\theta}}\left[ \log\left( 1 - D_{\phi}(x) \right) \right] $$
(3)

where \( \pi_{\theta } \) is the policy network, \( D_{\phi } \) is the discriminator network, \( x \) represents the input graph and \( p_{\text{data}} \) is the underlying data distribution, which is defined either over final graphs (for final rewards) or intermediate graphs (for intermediate rewards), just as proposed by You and colleagues [34]. Alternating gradient-descent training of the generator (policy network) and the discriminator will not work in our case, since \( x \) is a non-differentiable graph object. Therefore we add \( -V\left( {\pi_{\theta } ,D_{\phi } } \right) \) to our rewards and use policy gradient methods [81] to optimize the total reward. The discriminator network comprises a Graph Convolutional Network for generating the node embedding and a Feed Forward Network to output whether the molecule is real or fake. The GCN mechanism is the same as that of the policy network, which is described in the next section.

Graph convolutional policy network

We use Graph Convolutional Networks (GCNs) as the policy function for the bond prediction task. This variant of graph convolution performs message passing over each edge type for a fixed depth \( L \). The node embedding for the next depth \( (l + 1) \) is calculated as described in Eq. 4.

$$ H^{(l + 1)} = \text{AGG}\left( \text{ReLU}\left( \left\{ \tilde{D}_{i}^{-\frac{1}{2}} \tilde{E}_{i} \tilde{D}_{i}^{-\frac{1}{2}} H^{(l)} W_{i}^{(l)} \right\}, \; \forall i \in (1, \ldots, b) \right) \right) $$
(4)

where \( E_{i} \) is the \( i \)th slice of the tensor \( E \), \( \tilde{E}_{i} = E_{i} + I \), \( \tilde{D}_{i} \) is the corresponding degree matrix with entries \( \tilde{D}_{ijj} = \sum_{k} \tilde{E}_{ijk} \), \( W_{i}^{(l)} \) is a trainable weight matrix for the \( i \)th edge type, and \( H^{(l)} \in {\mathbb{R}}^{(n + c) \times d} \) is the node embedding learned in the \( l \)th layer [34]. Here \( n \) is the number of atoms in the current molecule and \( c \) is the number of possible atom types (C, N, O etc.) that can be added to the molecule (one atom is added in each step), with \( d \) the dimension of the embedding. We use the mean over the edge features as the aggregate (AGG) function to obtain the node embedding for a layer. This process is repeated \( L \) times until we get the final node embedding.

This node embedding \( X \) is then used as the input to four Multilayer Perceptrons (MLPs, denoted by \( m \)), each of which maps a matrix \( Z \in {\mathbb{R}}^{p \times d} \) to \( {\mathbb{R}}^{p} \), representing the probability of selecting a particular entity from the given \( p \) entities. The specific entity is then sampled from the resulting probability distribution. Since the action space is a vector of length 4, we use four MLPs, one to sample each component of the vector. The first atom has to be from the current molecule, while the second atom can be from the current molecule (forming a cycle) or a new atom outside the molecule (adding a new atom). For selecting the first atom, the original embedding \( X \) is passed to the MLP \( m_{f} \), which outputs a vector of length \( n \). For selecting the second atom, the embedding of the first atom \( X_{{a_{\text{first}} }} \) is concatenated with the original embedding \( X \) and passed to the MLP \( m_{s} \), giving a vector of length \( n + c \). For selecting the edge type, the concatenated embeddings of the first (\( X_{{a_{\text{first}} }} \)) and second (\( X_{{a_{\text{second}} }} \)) atoms are used as input to the MLP \( m_{e} \), which outputs a vector of length 3 (the number of bond types). Finally, the mean embedding of the atoms is passed to the MLP \( m_{t} \) to output a vector of length 2, indicating whether to stop the generation. This process is described in Eqs. 5, 6, 7, 8 and 9 (Fig. 3), and a code sketch follows the equations.

$$ a_{t} = \text{CONCAT}\left( a_{\text{first}}, a_{\text{second}}, a_{\text{edge}}, a_{\text{stop}} \right) $$
(5)
$$ f_{\text{first}}(s_{t}) = \text{SOFTMAX}\left( m_{f}(X) \right), \quad a_{\text{first}} \sim f_{\text{first}}(s_{t}) \in \left\{ 0,1 \right\}^{n} $$
(6)
$$ f_{\text{second}}(s_{t}) = \text{SOFTMAX}\left( m_{s}(X_{a_{\text{first}}}, X) \right), \quad a_{\text{second}} \sim f_{\text{second}}(s_{t}) \in \left\{ 0,1 \right\}^{n + c} $$
(7)
$$ f_{\text{edge}}(s_{t}) = \text{SOFTMAX}\left( m_{e}(X_{a_{\text{first}}}, X_{a_{\text{second}}}) \right), \quad a_{\text{edge}} \sim f_{\text{edge}}(s_{t}) \in \left\{ 0,1 \right\}^{b} $$
(8)
$$ f_{\text{stop}}(s_{t}) = \text{SOFTMAX}\left( m_{t}(\text{AGG}(X)) \right), \quad a_{\text{stop}} \sim f_{\text{stop}}(s_{t}) \in \left\{ 0,1 \right\} . $$
(9)
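A hedged PyTorch sketch of this four-headed sampling (Eqs. 5-9) is given below; the MLP heads m_f, m_s, m_e and m_t are assumed to have the output sizes stated in the text, and the tensor plumbing is illustrative rather than the authors' implementation.

```python
# Sketch: sample the four action components from the node embeddings.
import torch
from torch.distributions import Categorical

def sample_action(X, n, m_f, m_s, m_e, m_t):
    """X: (n + c, d) node embeddings; n: atoms in the current molecule."""
    a_first = Categorical(logits=m_f(X[:n]).squeeze(-1)).sample()
    # condition the second atom on the first by concatenating embeddings
    x_first = X[a_first].expand_as(X)
    a_second = Categorical(
        logits=m_s(torch.cat([x_first, X], dim=-1)).squeeze(-1)).sample()
    pair = torch.cat([X[a_first], X[a_second]], dim=-1)
    a_edge = Categorical(logits=m_e(pair)).sample()           # 3 bond types
    a_stop = Categorical(logits=m_t(X.mean(dim=0))).sample()  # stop in {0, 1}
    return a_first, a_second, a_edge, a_stop
```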
Fig. 3

The reinforcement learning pathway for systematic generation of molecules (redrawn from You et al. [34]). a The state is defined as the current graph \( G_{t} \) and the possible atom types \( C \). b The GCPN conducts message passing to encode the state as node embeddings and estimates the policy function. c The action to be performed (\( a_{t} \)) is sampled from the policy function. The environment performs a chemical valency check on the intermediate state and returns d the next state \( G_{t + 1} \) and e the associated reward (\( r_{t} \))

Policy gradient training

For our experiments, we use Proximal Policy Optimization (PPO) [81], a state-of-the-art policy gradient method, to optimize the total reward. The objective function for PPO is described in Eq. 10.

$$ \begin{aligned} \max L^{\text{CLIP}}(\theta) &= {\mathbb{E}}_{t}\left[ \min\left( r_{t}(\theta)\hat{A}_{t}, \; \text{clip}\left( r_{t}(\theta), 1 - \epsilon, 1 + \epsilon \right)\hat{A}_{t} \right) \right] \\ r_{t}(\theta) &= \frac{\pi_{\theta}(a_{t} \mid s_{t})}{\pi_{\theta_{\text{old}}}(a_{t} \mid s_{t})} \\ \hat{A}_{t} &= \delta_{t} + (\gamma\lambda)\delta_{t + 1} + \ldots + (\gamma\lambda)^{T - t + 1}\delta_{T - 1} \\ \text{where}\quad \delta_{t} &= R_{t} + \gamma V(S_{t + 1}) - V(S_{t}) \end{aligned} $$
(10)

Here, \( S_{t} \), \( a_{t} \) and \( R_{t} \) are the state, action and reward, respectively, at timestep \( t \), \( V(S_{t}) \) is the value associated with state \( S_{t} \), \( \pi_{\theta} \) is the policy function and \( \gamma \) is the discount factor. Also note that \( \hat{A}_{t} \), an estimator of the advantage function at timestep \( t \), is estimated using Generalized Advantage Estimation [82] with the GAE parameter \( \lambda \), since this reduces the variance of the estimate.
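For reference, the GAE estimator in Eq. 10 can be computed as a single backward pass over the trajectory, accumulating discounted TD residuals; the sketch below is a standard formulation rather than code from the paper.

```python
# Sketch: Generalized Advantage Estimation over one trajectory.
def gae(rewards, values, gamma=0.99, lam=0.95):
    """values has one extra entry for the state after the last step."""
    advantages, running = [], 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        running = delta + gamma * lam * running
        advantages.append(running)
    return advantages[::-1]
```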

For estimating the value \( V \) we use an MLP with the embedding \( X \) as the input. Apart from this, we also use expert pretraining [83], which has been shown to stabilise the training process. For our experiment, any ground truth molecule can be used as an expert for imitation. We randomly select a subgraph \( \hat{G}^{\prime} \) from the ground truth molecule \( \hat{G} \) as the state \( \hat{S}_{t} \). The action \( \hat{a}_{t} \) is also chosen randomly, such that it adds an atom or bond present in \( \hat{G} \) but not in \( \hat{G}^{\prime} \). This pair (\( \hat{S}_{t} \), \( \hat{a}_{t} \)) is used for calculating the expert loss (Eq. 11).

$$ \min L^{\text{EXPERT}}(\theta) = -\log \pi_{\theta}\left( \hat{a}_{t} \mid \hat{S}_{t} \right) $$
(11)

Note that we use the same dataset of ground truth molecules for calculating the expert loss and the adversarial rewards. For the rest of the paper, we will call this dataset the “expert dataset” and the random molecule selected from the dataset the “expert molecule”.

System evaluation

In this section we evaluate the system described above on the task of generating small molecules that interact with the dopamine transporter but not (so far as possible) with the norepinephrine transporter.

Property prediction

In this section we evaluate the performance of the supervised property prediction component. Dopamine transporter binding data were obtained from www.bindingdb.org (https://bit.ly/2YACT5u). The training data consist of some molecules labelled with Ki values and some labelled with IC50 values; for this paper, we used IC50 and Ki values interchangeably in order to increase the size of the training dataset. Molecules having large Ki values were not labelled accurately in the dataset (with labels such as ~1000), but the use of a robust loss function allowed us to incorporate these values directly. As stated above, we use log-transformed values (pKi). (We also attempted to learn the Ki values of the molecules directly, but the distribution was found to be heteroscedastic; hence we focus on predicting the pKi values.) Data are shown in Fig. 4a for the dopamine transporter and Fig. 4b for the norepinephrine transporter pKi values.

Fig. 4

Predicted and experimental values for the test sets of the dopamine (a) and norepinephrine (b) transporters. Lines are lines of best fit: a y = 0.44 + 0.79x, r² = 0.79; b y = 0.49 + 0.74x, r² = 0.68

Hyperparameter optimization

As the property prediction model is a general algorithm with a large number of hyperparameters, we attempted to improve generalisation on the transporter problem using Bayesian optimization of the RMS error between the predicted and actual pKi values on the validation set. For this task, the hyperparameters are the depth of the GCN encoder, the dimensions of the message vectors, the number of layers in the Feed Forward Network, and the Dropout constant. We use tenfold cross-validation on the train and validation data with the test set held out. The model score is defined as the mean RMS error over the ten folds, and we use Bayesian optimization to minimize the model score.

For the dopamine transporter, the optimum hyperparameters obtained were 3 (depth of GCN), 1300 (dimensions of message vector), 2 (FFN layers) and 0.1 (Dropout). The RMS error on the test dataset for the dopamine transporter after hyperparameter optimization was 0.57, as compared to an error of 0.65 without it. We attribute this quite significant remaining error to the errors present in the dataset. Similarly, for the norepinephrine transporter the test RMS error was 0.66 after hyperparameter optimization, with optimum hyperparameters of 5 (depth of GCN), 900 (dimensions of message vector), 3 (FFN layers) and 0.15 (Dropout).
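A sketch of this search using hyperopt's Tree-of-Parzen-Estimators optimizer (as used by the chemprop code base) is shown below; cross_val_rmse is an assumed helper wrapping the tenfold cross-validation, and the search ranges and evaluation budget are illustrative.

```python
# Sketch: Bayesian optimization of the four hyperparameters named above.
from hyperopt import fmin, tpe, hp

space = {
    "depth": hp.quniform("depth", 2, 6, 1),
    "hidden_size": hp.quniform("hidden_size", 300, 2400, 100),
    "ffn_layers": hp.quniform("ffn_layers", 1, 3, 1),
    "dropout": hp.quniform("dropout", 0.0, 0.4, 0.05),
}

def objective(params):
    # mean RMS error over the ten folds (assumed helper, not shown)
    return cross_val_rmse(depth=int(params["depth"]),
                          hidden=int(params["hidden_size"]),
                          ffn_layers=int(params["ffn_layers"]),
                          dropout=params["dropout"])

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=20)
```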

Implementation details

For the prediction of the pKi values of both the dopamine and norepinephrine transporters, we split the overall dataset randomly into train (80%), validation (10%) and test (10%) sets. Training is done with a batch size of 50 molecules for 100 epochs. All network weights were initialized using Xavier initialization [84]. The first two epochs are warmup epochs [85], during which the learning rate increases linearly from 1e−4 to 1e−3; after that it decreases exponentially to 1e−4 by the last epoch. The model is saved after an epoch if the RMS error on the validation set is lower than the previous best, and the error on the test set is calculated using the saved model with the lowest validation error. The code was written using the PyTorch library and training was done on an NVIDIA RTX 2080Ti GPU on a Windows 10 system with 256 GB RAM and an Intel 18-core Xeon W-2195 processor.
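The learning-rate schedule just described (two warmup epochs, then exponential decay) can be sketched per optimizer step as below; this mirrors the NoamLR-style scheduler used in chemprop, though the exact step granularity is an assumption.

```python
# Sketch: linear warmup from 1e-4 to 1e-3 over two epochs, then
# exponential decay back to 1e-4 by the final epoch.
def lr_at_step(step, steps_per_epoch, warmup_epochs=2, total_epochs=100,
               init_lr=1e-4, max_lr=1e-3, final_lr=1e-4):
    warmup = warmup_epochs * steps_per_epoch
    total = total_epochs * steps_per_epoch
    if step < warmup:  # linear warmup
        return init_lr + (max_lr - init_lr) * step / warmup
    # exponential decay from max_lr to final_lr over the remaining steps
    frac = (step - warmup) / max(total - warmup, 1)
    return max_lr * (final_lr / max_lr) ** frac
```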

Single-objective molecular generation

To begin the RL evaluation we consider molecular generation with a single objective (dopamine transporter interaction). For all the experiments we use the following implementation details. The learning rate for training all the networks is taken to be 1e−3, decreasing linearly to 0 by 3e7 timesteps. The depth of the GCN for both the GCPN and the discriminator network is 3, and the node embedding size is 128. The code was written using the TensorFlow library and training was done on an NVIDIA RTX 2080Ti GPU as per the previous paragraph.

To analyse the results, we provide the ‘top 10’ molecules generated, as in Fig. 5. However, we also aim to generate molecules that are in some sense similar to the original training dataset, by systematically modifying the RL pathway in the following experiments. For each experiment, we find the closest molecule in the BindingDB dataset to each of the top 10 generated molecules. Closeness is measured by calculating the Tanimoto similarity (TS) between the RDKit fingerprints, and we visualize the distribution of the TS values; a code sketch is given below.
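A sketch of this nearest-neighbour search with RDKit follows; we use Chem.RDKFingerprint here, on the assumption that this corresponds to the ‘RDKit patterned fingerprint’ named in the figure legends.

```python
# Sketch: find the most Tanimoto-similar dataset molecule to a query.
from rdkit import Chem, DataStructs

def closest_in_dataset(smiles, dataset_smiles):
    fp = Chem.RDKFingerprint(Chem.MolFromSmiles(smiles))
    best_sim, best_smi = -1.0, None
    for ref in dataset_smiles:
        sim = DataStructs.TanimotoSimilarity(
            fp, Chem.RDKFingerprint(Chem.MolFromSmiles(ref)))
        if sim > best_sim:
            best_sim, best_smi = sim, ref
    return best_smi, best_sim
```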

Fig. 5

In silico generation by DeepGraphMolGen of novel molecules with predicted binding capacity to the dopamine transporter. Molecules were generated as described in the text. a Top 10 molecules as predicted by DeepGraphMolGen versus the closest molecule in the BindingDB dataset and the Tanimoto similarity thereto (encoded using the RDKit patterned fingerprint). b Distribution of Tanimoto similarities to a molecule in the BindingDB dataset of the top 500 molecules

First, we initialize the generative process with a single carbon atom. The expert dataset in this case is the ZINC dataset [86], a free dataset containing (at that time) some 230 M commercially available compounds. However, for our experiments we use 250 K randomly selected molecules from ZINC as our expert dataset to make the experiments computationally tractable. The top generated molecules and their predicted properties are given in Additional file 1: Table S1 (including data on QED and SA), with a subset of the data illustrated in Fig. 5. Note that in all cases the values of QED and SA both exceeded 0.8.

Although the above experiment was able to generate optimized molecules, there is no certainty that the predictions are correct, owing to errors in the model as well as errors propagated from the experimental data. We thus attempt to generate molecules that are similar to the more potent molecules. In the next experiment, we choose the expert dataset to be the original dataset on which we trained the property prediction model (we will call this the Dopamine Dataset), omitting molecules having Ki greater than 1000. We again choose the initial molecule to be a single carbon atom. The equivalent data are given in Additional file 2: Table S2, with plots similar to those of Fig. 5 given in Fig. 6.

Fig. 6

In silico generation by DeepGraphMolGen of novel molecules with predicted binding capacity to the dopamine transporter. Molecules were generated as described in the text. a Top 10 molecules as predicted by DeepGraphMolGen versus the closest molecule in the BindingDB dataset and the Tanimoto similarity thereto (encoded using the RDKit patterned fingerprint). b Distribution of Tanimoto similarities to a molecule in the BindingDB dataset of the top 500 molecules

Another way to ensure that the generated molecules have a high affinity towards the dopamine transporter is to ensure explicitly that the molecules have a high TS to already-known molecules with high pKi values. We attempt to achieve this by initializing the generative process with a random molecule from the Dopamine Dataset having Ki < 1000. We conduct two experiments using this process: one where we restrict the number of atoms (other than hydrogen) to be lower than 25 (Additional file 3: Table S3 and Fig. 7), and another (Additional file 4: Table S4 and Fig. 8) where we restrict the number of atoms to be lower than 15. For both these experiments, we use the ZINC dataset as the expert dataset. The results are summarized in these tables. Note that in some cases we obtain a TS of 1; this is encouraging, as in these cases the algorithm found no need to add anything to the original molecule and could recapitulate it.

Fig. 7

In silico generation by DeepGraphMolGen of novel molecules with predicted binding capacity to the dopamine transporter, using a generative method in which the number of heavy atoms is constrained to be lower than 25. Molecules were generated as described in the text. a Top 10 molecules as predicted by DeepGraphMolGen versus the closest molecule in the BindingDB dataset and the TS thereto (encoded using the RDKit patterned fingerprint). b Distribution of Tanimoto similarities (RDKit patterned encoding) to a molecule in the BindingDB dataset of the top 500 molecules

Fig. 8

In silico generation by DeepGraphMolGen of novel molecules with predicted binding capacity to the dopamine transporter, using a generative method in which the number of heavy atoms is constrained to be lower than 15. Molecules were generated as described in the text. a Top 10 molecules as predicted by DeepGraphMolGen versus the closest molecule in the BindingDB dataset and the TS thereto (encoded using the RDKit patterned fingerprint). b Distribution of Tanimoto similarities (RDKit patterned encoding) to the closest molecule in the BindingDB dataset of the top 500 molecules

Multi-objective molecular generation

Even though generating molecules with higher affinity towards a particular target is itself much sought after, in many cases we might wish to seek molecules that bind to one receptor but explicitly do not bind to another (kinase inhibitors might be one such example). We attempt to achieve this here with the help of our Reinforcement Learning pipeline by modifying the reward function to be a weighted combination of the pKi values for the two different targets. Explicitly, we attempt to generate molecules that have high binding affinity to the dopamine transporter but much lower binding affinity to the norepinephrine transporter. Thus, we modify the reward function used in the previous experiments to add 2 times the predicted pKi value for the dopamine transporter and −1 times the predicted pKi value for the norepinephrine transporter (a code sketch follows). The higher weight is given to the dopamine component since we wish to generate molecules that do bind to it. Clearly we could use any other weightings as part of the reward function, so those chosen are simply illustrative. For this experiment we initialize the process with a random molecule from the Dopamine Dataset having fewer than 25 atoms and choose the expert dataset to be ZINC. The results of this experiment are summarized in Additional file 5: Table S5 and Fig. 9. As above, some molecules have a TS of 1 to examples in the dataset, for the same reasons.
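The weighted property term of the reward might be sketched as below; the two predictors are assumed stand-ins for the separately trained dopamine and norepinephrine transporter models, and the weights follow the text.

```python
# Sketch: weighted multi-objective property reward (2 * pKi_DAT - pKi_NET).
def multi_objective_reward(mol, predict_pki_dat, predict_pki_net,
                           w_dat=2.0, w_net=-1.0):
    return w_dat * predict_pki_dat(mol) + w_net * predict_pki_net(mol)
```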

Fig. 9

In silico generation by DeepGraphMolGen of novel molecules with predicted binding capacity to the dopamine transporter, using a generative method in which the number of heavy atoms is constrained to be lower than 25. Molecules were generated as described in the text. a Top 10 molecules as predicted by DeepGraphMolGen versus the closest molecule in the BindingDB dataset and the TS thereto (encoded using the RDKit patterned fingerprint). b Distribution of Tanimoto similarities (RDKit patterned encoding) to the closest molecule in the BindingDB dataset of the top 500 molecules. c Plot of those molecules with differential affinities for the dopamine and norepinephrine transporters

Only in rare cases do multi- (in this case two-) objective optimisation problems have unique solutions that are optimal for both objectives [87], and there is a trade-off that is left to the choice of the experimenter. Thus, Fig. 9c also illustrates the molecules on the Pareto front for the two objectives, showing how quite small changes in structure can move one swiftly along the Pareto front (a sketch of extracting the front is given below). Consequently our method also provides a convenient means of attacking multi-objective molecular optimisation problems.
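For completeness, extracting the non-dominated set plotted in Fig. 9c can be sketched as follows (maximizing dopamine pKi while minimizing norepinephrine pKi); this is a generic Pareto filter, not the authors' plotting code.

```python
# Sketch: keep molecules not dominated on (maximize pKi_DAT, minimize pKi_NET).
def pareto_front(mols):
    """mols: list of (pki_dat, pki_net, smiles) tuples."""
    front = []
    for a in mols:
        dominated = any(
            b[0] >= a[0] and b[1] <= a[1] and (b[0] > a[0] or b[1] < a[1])
            for b in mols)
        if not dominated:
            front.append(a)
    return front
```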

Conclusions

Overall, the present molecular graph-based generative method has a number of advantages over grammar-based encodings, in particular that it necessarily creates valid molecules. As stressed by Coley and colleagues [49], such methods still retain the inherent limitations of 2D methods, as a priori they do not encode 3D information. This said, there is evidence that 3D structures do not add much benefit when forming QSAR models [88,89,90,91,92], so we do not consider this a major limitation for now. Some of the molecules generated might be seen by some (however subjectively) as ‘unusual’, even though they scored well on both drug-likeness and synthetic accessibility metrics. This probably says much about the size of plausible drug space relative to the fraction that has actually been explored [93,94,95], and implies that generative methods can have an important role to play in medicinal chemistry. Also, for generating desired molecules, the QSAR models need to be accurate and robust in order to evaluate the properties of the generated molecules accurately. Recent works such as [96] include uncertainty metrics for property discrimination, and benchmarking models are also available [97]. In conclusion, we here add to the list of useful generative molecular methods for virtual screening by combining molecular graph encoding, reinforcement learning and multi-objective optimisation within a single strategy.

References

  1. Yang X, Zhang J, Yoshizoe K, Terayama K, Tsuda K (2017) ChemTS: an efficient python library for de novo molecular generation. Sci Technol Adv Mater 18(1):972–976

  2. Gómez-Bombarelli R, Aguilera-Iparraguirre J, Hirzel TD, Duvenaud D, Maclaurin D, Blood-Forsythe MA, Chae HS, Einzinger M, Ha DG, Wu T et al (2016) Design of efficient molecular organic light-emitting diodes by a high-throughput virtual screening and experimental approach. Nat Mater 15(10):1120

  3. Gómez-Bombarelli R, Wei JN, Duvenaud D, Hernández-Lobato JM, Sánchez-Lengeling B, Sheberla D, Aguilera-Iparraguirre J, Hirzel TD, Adams RP, Aspuru-Guzik A (2018) Automatic chemical design using a data-driven continuous representation of molecules. ACS Cent Sci 4(2):268–276

  4. Sanchez-Lengeling B, Aspuru-Guzik A (2018) Inverse molecular design using machine learning: generative models for matter engineering. Science 361(6400):360–365

  5. Kadurin A, Nikolenko S, Khrabrov K, Aliper A, Zhavoronkov A (2017) druGAN: an advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico. Mol Pharm 14(9):3098–3104

  6. Olier I, Sadawi N, Bickerton GR, Vanschoren J, Grosan C, Soldatova L, King RD (2018) Meta-QSAR: a large-scale application of meta-learning to drug design and discovery. Mach Learn 107(1):285–311

  7. Popova M, Isayev O, Tropsha A (2018) Deep reinforcement learning for de novo drug design. Sci Adv 4(7):eaap7885

  8. Tabor DP, Roch LM, Saikin SK, Kreisbeck C, Sheberla D, Montoya JH, Dwaraknath S, Aykol M, Ortiz C, Tribukait H et al (2018) Accelerating the discovery of materials for clean energy in the era of smart automation. Nat Rev Mater 3:5–20

  9. Colby SM, Nuñez JR, Hodas NO, Corley CD, Renslow RR (2020) Deep learning to generate in silico chemical property libraries and candidate molecules for small molecule identification in complex samples. Anal Chem 92(2):1720–1729

  10. Baskin II (2020) The power of deep learning to ligand-based novel drug discovery. Expert Opin Drug Discov. https://doi.org/10.1080/17460441.2020.1745183

  11. Hong SH, Ryu S, Lim J, Kim WY (2020) Molecular generative model based on an adversarially regularized autoencoder. J Chem Inf Model 60(1):29–36

  12. Lim J, Hwang SY, Moon S, Kim S, Kim WY (2020) Scaffold-based molecular design with a graph generative model. Chem Sci 11(4):1153–1164

  13. Rifaioglu AS, Nalbat E, Atalay V, Martin MJ, Cetin-Atalay R, Doğan T (2020) DEEPScreen: high performance drug-target interaction prediction with convolutional neural networks using 2-D structural compound representations. Chem Sci 11(9):2531–2557

  14. Yasonik J (2020) Multiobjective de novo drug design with recurrent neural networks and nondominated sorting. J Cheminform 12(1):14

  15. Yoshimori A, Kawasaki E, Kanai C, Tasaka T (2020) Strategies for design of molecular structures with a desired pharmacophore using deep reinforcement learning. Chem Pharm Bull (Tokyo) 68(3):227–233

  16. Walters WP, Murcko M (2020) Assessing the impact of generative AI on medicinal chemistry. Nat Biotechnol 38(2):143–145

  17. Griffiths RR, Hernández-Lobato JM (2020) Constrained Bayesian optimization for automatic chemical design using variational autoencoders. Chem Sci 11(2):577–586

  18. Cova TFGG, Pais AACC (2019) Deep learning for deep chemistry: optimizing the prediction of chemical patterns. Front Chem 7:809

  19. Noh J, Kim J, Stein HS, Sanchez-Lengeling B, Gregoire JM, Aspuru-Guzik A, Jung Y (2019) Inverse design of solid-state materials via a continuous representation. Matter 1(5):1370–1384

  20. Grisoni F, Schneider G (2019) De novo molecular design with generative long short-term memory. Chimia 73(12):1006–1011

  21. Grisoni F, Merk D, Friedrich L, Schneider G (2019) Design of natural-product-inspired multitarget ligands by machine learning. ChemMedChem 14(12):1129–1134

  22. Gupta A, Müller AT, Huisman BJH, Fuchs JA, Schneider P, Schneider G (2018) Generative Recurrent Networks for de novo drug design. Mol Inform 37(1–2):1700111

  23. Merk D, Friedrich L, Grisoni F, Schneider G (2018) De novo design of bioactive small molecules by artificial intelligence. Mol Inform 37(1–2):1700153

  24. Schneider G (2018) Generative models for artificially-intelligent molecular design. Mol Inform 37(1–2):188031

  25. Schneider P, Walters WP, Plowright AT, Sieroka N, Listgarten J, Goodnow RA, Fisher J, Jansen JM, Duca JS, Rush TS et al (2020) Rethinking drug design in the artificial intelligence era. Nat Rev Drug Discov 19:353–364

  26. Button A, Merk D, Hiss JA, Schneider G (2019) Automated de novo molecular design by hybrid machine intelligence and rule-driven chemical synthesis. Nat mach Intell 1(7):307–315

  27. Moret M, Friedrich L, Grisoni F, Merk D, Schneider G (2020) Generative molecular design in low data regimes. Nat Mach Intell 2:171–180

  28. Ståhl N, Falkman G, Karlsson A, Mathiason G, Boström J (2019) Deep reinforcement learning for multiparameter optimization in de novo drug design. J Chem Inf Model 59(7):3166–3176

  29. Arús-Pous J, Blaschke T, Ulander S, Reymond JL, Chen H, Engkvist O (2019) Exploring the GDB-13 chemical space using deep generative models. J Cheminform 11(1):20

  30. Reymond JL (2015) The Chemical Space Project. Acc Chem Res 48(3):722–730

  31. Bohacek RS, McMartin C, Guida WC (1996) The art and practice of structure-based drug design: a molecular modeling perspective. Med Res Rev 16(1):3–50

  32. Ertl P (2003) Cheminformatics analysis of organic substituents: identification of the most common substituents, calculation of substituent properties, and automatic identification of drug-like bioisosteric groups. J Chem Inf Comput Sci 43(2):374–380

  33. O’Hagan S, Kell DB (2018) Analysing and navigating natural products space for generating small, diverse, but representative chemical libraries. Biotechnol J 13(1):1700503

  34. You J, Liu B, Ying R, Pande V, Leskovec J (2018) Graph convolutional policy network for goal-directed molecular graph generation. arXiv:1806.02473

  35. Dimova D, Stumpfe D, Bajorath J (2014) Method for the evaluation of structure-activity relationship information associated with coordinated activity cliffs. J Med Chem 57:6553–6563

  36. Stumpfe D, Hu Y, Dimova D, Bajorath J (2014) Recent progress in understanding activity cliffs and their utility in medicinal chemistry. J Med Chem 57(1):18–28

  37. Stumpfe D, Dimova D, Bajorath J (2014) Composition and topology of activity cliff clusters formed by bioactive compounds. J Chem Inf Model 54(2):451–461

  38. Teixeira AL, Leal JP, Falcao AO (2013) Random forests for feature selection in QSPR models—an application for predicting standard enthalpy of formation of hydrocarbons. J Cheminform 5(1):9

  39. Ambure P, Halder AK, Gonzalez Diaz H, Cordeiro M (2019) QSAR-co: an open source software for developing robust multitasking or multitarget classification-based QSAR models. J Chem Inf Model 59(6):2538–2544

  40. Zupan J, Gasteiger J (1993) Neural networks for chemists. Verlag Chemie, Weinheim

  41. Livingstone D (1995) Data analysis for chemists. Oxford University Press, Oxford

  42. Mahé P, Vert JP (2009) Virtual screening with support vector machines and structure kernels. Comb Chem High Throughput Screen 12(4):409–423

  43. O’Hagan S, Kell DB (2015) The KNIME workflow environment and its applications in Genetic Programming and machine learning. Genetic Progr Evol Mach 16:387–391

  44. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436–444

  45. Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117

  46. Gawehn E, Hiss JA, Schneider G (2016) Deep learning in drug discovery. Mol Inform 35(1):3–14

  47. Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, Ferrero E, Agapow PM, Zietz M, Hoffman MM et al (2018) Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface 15(141):20170387

  48. Mater AC, Coote ML (2019) Deep Learning in Chemistry. J Chem Inf Model 59(6):2545–2559

  49. Coley CW, Barzilay R, Green WH, Jaakkola TS, Jensen KF (2017) Convolutional embedding of attributed molecular graphs for physical property prediction. J Chem Inf Model 57(8):1757–1772

  50. Weininger D (1988) SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J Chem Inf Comput Sci 28(1):31–36

  51. Dai H, Tian Y, Dai B, Skiena S, Song L (2018) Syntax-directed variational autoencoder for structured data. arXiv:1802.08786

  52. Kusner MJ, Paige B, Hernández-Lobato JM (2017) Grammar variational autoencoder. arXiv:1703.01925

  53. Blaschke T, Olivecrona M, Engkvist O, Bajorath J, Chen HM (2018) Application of generative autoencoder in de novo molecular design. Mol Inform 37(1–2):1700123

  54. Xu Y, Lin K, Wang S, Wang L, Cai C, Song C, Lai L, Pei J (2019) Deep learning for molecular generation. Future Med Chem 11(6):567–597

  55. O’Boyle N, Dalke A (2018) DeepSMILES: an adaptation of SMILES for use in machine-learning of chemical structures. ChemRxiv:7097960

  56. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Boston

  57. Stokes JM, Yang K, Swanson K, Jin W, Cubillos-Ruiz A, Donghia NM, MacNair CR, French S, Carfrae LA, Bloom-Ackerman Z et al (2020) A deep learning approach to antibiotic discovery. Cell 180(4):688–702

  58. Zahoránszky-Kőhalmi G, Bologa CG, Oprea TI (2016) Impact of similarity threshold on the topology of molecular similarity networks and clustering outcomes. J Cheminform 8:16

  59. Segler MHS, Kogej T, Tyrchan C, Waller MP (2017) Generating focussed molecule libraries for drug discovery with recurrent neural networks. arXiv:1701.01329

  60. van Deursen R, Ertl P, Tetko IV, Godin G (2020) GEN: highly efficient SMILES explorer using autodidactic generative examination networks. J Cheminform 12(1):22

  61. O’Hagan S, Kell DB (2017) Consensus rank orderings of molecular fingerprints illustrate the ‘most genuine’ similarities between marketed drugs and small endogenous human metabolites, but highlight exogenous natural products as the most important ‘natural’ drug transporter substrates. ADMET DMPK 5(2):85–125

  62. Kajino H (2018) Molecular hypergraph grammar with its application to molecular optimization. arXiv:1809.02745

  63. Jin W, Barzilay R, Jaakkola T (2018) Junction tree variational autoencoder for molecular graph generation. arXiv:1802.04364

  64. Zang C, Wang F (2020) MoFlow: an invertible flow model for generating molecular graphs. arXiv:2006.10137

  65. Tavakoli M, Baldi P (2020) Continuous representation of molecules using graph variational autoencoder. arXiv:2004.08152

  66. Samanta B, De A, Ganguly N, Gomez-Rodriguez M (2018) Designing random graph models using variational autoencoders with applications to chemical design. arXiv:1802.05283

  67. Flam-Shepherd D, Wu T, Aspuru-Guzik A (2020) Graph deconvolutional generation. arXiv:2002.07087

  68. Kearnes S, McCloskey K, Berndl M, Pande V, Riley P (2016) Molecular graph convolutions: moving beyond fingerprints. J Comput Aided Mol Des 30(8):595–608

  69. Bresson X, Laurent T (2019) A two-step graph convolutional decoder for molecule generation. arXiv:1906.03412

  70. Kearnes S, Li L, Riley P (2019) Decoding molecular graph embeddings with reinforcement learning. arXiv:1904.08915

  71. Bickerton GR, Paolini GV, Besnard J, Muresan S, Hopkins AL (2012) Quantifying the chemical beauty of drugs. Nat Chem 4(2):90–98

  72. Zhang Z, Cui P, Zhu W (2018) Deep learning on graphs: a survey. arXiv:1812.04202

  73. Barron JT (2017) A general and adaptive robust loss function. arXiv:1701.03077

  74. Yang K, Swanson K, Jin W, Coley C, Eiden P, Gao H, Guzman-Perez A, Hopper T, Kelley B, Mathea M et al (2019) Analyzing learned molecular representations for property prediction. arXiv:1904.01561

  75. Yang K, Swanson K, Jin W, Coley C, Eiden P, Gao H, Guzman-Perez A, Hopper T, Kelley B, Mathea M et al (2019) Analyzing learned molecular representations for property prediction. J Chem Inf Model 59(8):3370–3388

  76. Goodacre R, Trew S, Wrigley-Jones C, Saunders G, Neal MJ, Porter N, Kell DB (1995) Rapid and quantitative analysis of metabolites in fermentor broths using pyrolysis mass spectrometry with supervised learning: application to the screening of Penicillium chrysogenum fermentations for the overproduction of penicillins. Anal Chim Acta 313:25–43

  77. Jarrett K, Kavukcuoglu K, Ranzato M, LeCun Y (2009) What is the best multi-stage architecture for object recognition? In: Proc IEEE Int Conf Computer Vision, pp 2146–2153

  78. Ashkezari-Toussi S, Sadoghi-Yazdi H (2019) Robust diffusion LMS over adaptive networks. Signal Process 158:201–209

  79. Guimaraes GL, Sanchez-Lengeling B, Outeiral C, Farias PLC, Aspuru-Guzik A (2017) Objective-Reinforced Generative Adversarial Networks (ORGAN) for sequence generation models. arXiv:1705.10843

  80. Ertl P, Schuffenhauer A (2009) Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions. J Cheminform 1(1):8

  81. Schulman J, Wolski F, Dhariwal P, Radford A, Klimov O (2017) Proximal policy optimization algorithms. arXiv:1707.06347

  82. Schulman J, Moritz P, Levine S, Jordan M, Abbeel P (2015) High-dimensional continuous control using generalized advantage estimation. arXiv:1506.02438

  83. Levine S, Koltun V (2013) Guided policy search. Proc ICML 28:1–9

  84. Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. Proc AISTATs 9:249–256

  85. Li Y, Wei C, Ma T (2019) Towards explaining the regularization effect of initial large learning rate in training neural networks. arXiv:1907.04595

  86. Sterling T, Irwin JJ (2015) ZINC 15—ligand discovery for everyone. J Chem Inf Model 55:2324–2337

  87. Besnard J, Ruda GF, Setola V, Abecassis K, Rodriguiz RM, Huang XP, Norval S, Sassano MF, Shin AI, Webster LA et al (2012) Automated design of ligands to polypharmacological profiles. Nature 492(7428):215–220

  88. Nettles JH, Jenkins JL, Bender A, Deng Z, Davies JW, Glick M (2006) Bridging chemical and biological space: “target fishing” using 2D and 3D molecular descriptors. J Med Chem 49(23):6802–6810

  89. Hu G, Kuang G, Xiao W, Li W, Liu G, Tang Y (2012) Performance evaluation of 2D fingerprint and 3D shape similarity methods in virtual screening. J Chem Inf Model 52(5):1103–1113

  90. Oprea TI (2002) On the information content of 2D and 3D descriptors for QSAR. J Brazil Chem Soc 13(6):811–815

  91. Brown RD, Martin YC (1997) The information content of 2D and 3D structural descriptors relevant to ligand-receptor binding. J Chem Inf Comp Sci 37(1):1–9

  92. Hong HX, Xie Q, Ge WG, Qian F, Fang H, Shi LM, Su ZQ, Perkins R, Tong WD (2008) Mold2, molecular descriptors from 2D structures for chemoinformatics and toxicoinformatics. J Chem Inf Model 48(7):1337–1344

  93. Hann MM, Keserü GM (2012) Finding the sweet spot: the role of nature and nurture in medicinal chemistry. Nat Rev Drug Discov 11(5):355–365

  94. Pitt WR, Parry DM, Perry BG, Groom CR (2009) Heteroaromatic rings of the future. J Med Chem 52:2952–2963

  95. Roughley SD, Jordan AM (2011) The medicinal chemist’s toolbox: an analysis of reactions used in the pursuit of drug candidates. J Med Chem 54(10):3451–3479

  96. Scalia G, Grambow CA, Pernici B, Li Y-P, Green WH (2019) Evaluating scalable uncertainty estimation methods for DNN-based molecular property prediction. arXiv:1910.03127

  97. Brown N, Fiscato M, Segler MHS, Vaucher AC (2019) GuacaMol: benchmarking models for de novo molecular design. J Chem Inf Model 59(3):1096–1108


Acknowledgements

The work of SS and DBK is supported as part of EPSRC grant EP/S004963/1 (SuSCoRD).

Author information


Contributions

YK wrote most of the software, while SOH contributed some of the cheminformatics. SS, NS, TJR, DB & DBK contributed ideas and supervision. All authors contributed to and approved the final manuscript.

Corresponding author

Correspondence to Douglas B. Kell.

Ethics declarations

Competing interests

The authors have no conflicts of interest to report.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Molecules generated by Experiment 1 having QED > 0.8 and SA score > 0.8.

Additional file 2.

Molecules generated by Experiment 2 having QED > 0.8 and SA score > 0.8.

Additional file 3.

Molecules generated by Experiment 3 having QED > 0.8 and SA score > 0.8.

Additional file 4.

Molecules generated by Experiment 4 having QED > 0.8 and SA score > 0.8.

Additional file 5.

Molecules generated by Experiment 5 having QED > 0.8 and SA score > 0.8.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Khemchandani, Y., O’Hagan, S., Samanta, S. et al. DeepGraphMolGen, a multi-objective, computational strategy for generating molecules with desirable properties: a graph convolution and reinforcement learning approach. J Cheminform 12, 53 (2020). https://doi.org/10.1186/s13321-020-00454-3
