CACHE

CRITICAL ASSESSMENT OF COMPUTATIONAL HIT-FINDING EXPERIMENTS

Challenge #2

Hit Identification
Method type (check all that apply)
De novo design
High-throughput docking
Machine learning
Description of your approach (min 200 and max 800 words)

Our approach consists of two general steps, each of which has some flexibility.

  1. Fit a fully Bayesian model of binding activity, trained on a mixture of docking scores and experimental activity data from the literature.
  2. Use established techniques from batch Bayesian optimization to select a list of molecules that maximizes the probability that at least one molecule in the list is active; a minimal sketch of this selection criterion follows the list.
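As a rough illustration of the step-2 criterion (not our final pipeline), the sketch below greedily assembles a batch that maximizes a Monte Carlo estimate of the probability that at least one selected molecule is active. The posterior samples and the activity threshold are placeholder values standing in for draws from the fitted model.

```python
# Minimal sketch of the step-2 objective: greedily pick candidates that most
# increase P(at least one molecule in the batch is active). The posterior
# samples and threshold below are illustrative placeholders only.
import numpy as np

rng = np.random.default_rng(0)

n_mc_samples, n_candidates = 2000, 500
posterior_samples = rng.normal(size=(n_mc_samples, n_candidates))  # stand-in for model draws
threshold = 1.0                                                    # assumed activity cutoff

active = posterior_samples > threshold  # (n_mc_samples, n_candidates) boolean

def greedy_batch(active, batch_size=100):
    """Greedily add the candidate that most increases
    P(at least one molecule in the batch is active)."""
    n_samples, n_candidates = active.shape
    chosen = np.zeros(n_candidates, dtype=bool)   # which candidates are in the batch
    covered = np.zeros(n_samples, dtype=bool)     # per MC sample: does the batch already have a hit?
    for _ in range(batch_size):
        # Marginal gain: fraction of samples where this candidate adds a *new* hit.
        gain = (~covered[:, None] & active).mean(axis=0)
        gain[chosen] = -np.inf                    # never pick the same molecule twice
        best = int(np.argmax(gain))
        chosen[best] = True
        covered |= active[:, best]
    return np.flatnonzero(chosen), covered.mean()

batch, p_hit = greedy_batch(active)
print(f"Estimated P(at least one active in the batch) = {p_hit:.3f}")
```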

Step 2 is the critical part of our proposal: it leans on our group's expertise in Bayesian optimization and will likely differentiate us from other approaches. To motivate this, consider that the final step for most methods will be to select a list of 100 molecules from a potentially much larger list. One way to do this would be to select the 100 molecules with the highest predicted score. Although this could work well, the 100 top molecules may all be quite similar (e.g. minor variations of the same scaffold), and therefore their activities will likely be highly correlated. This could be avoided with any number of heuristics, such as choosing at most 10 molecules with the same scaffold, but such heuristics may end up selecting molecules with low predicted scores (e.g. if there are only 5 promising scaffolds, then 50% of the molecules will probably be poor). It is not clear how to trade off high predictions against diversity in a general way.

Fortunately, this problem has been extensively studied in the context of Bayesian optimization (BO). A number of techniques have been proposed, which generally suggest selecting a set of data points that jointly maximizes a probabilistic or information-theoretic quantity: for example, the total bits of information gained as measured by the model, the expected value of the best data point, or the probability that at least one data point exceeds the best point in the dataset. These objectives will naturally not select batches of very similar data points, because their outcomes will be highly correlated: e.g. if one molecule turns out to be inactive, a similar molecule is very likely to be inactive as well, so including it only marginally improves any of these objectives. At the same time, including molecules with low predicted scores will also not optimize these objectives. In general, these techniques have the potential to optimally trade off quality and diversity, with the additional advantage of being principled and usually having an intuitive interpretation. However, calculating these quantities requires special kinds of models, such as Gaussian processes, where the full predictive distribution can be easily calculated. Our group has extensive expertise in these methods.
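To make the correlation argument concrete, here is a toy numerical example (the numbers are made up, not taken from any model): two candidates whose activities follow a bivariate Gaussian posterior, each with roughly a 31% marginal chance of exceeding an activity threshold, contribute very differently to the batch objective depending on how correlated they are.

```python
# Toy example: under a bivariate Gaussian posterior, two near-duplicate
# candidates add much less to P(at least one is active) than two
# independent candidates with the same marginal probabilities.
import numpy as np
from scipy.stats import multivariate_normal, norm

threshold = 1.0
mean = np.array([0.5, 0.5])  # both candidates have marginal P(active) ~= 0.31

def p_at_least_one_active(rho):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # P(at least one exceeds the threshold) = 1 - P(both fall below it)
    p_both_below = multivariate_normal(mean=mean, cov=cov).cdf([threshold, threshold])
    return 1.0 - p_both_below

print("marginal P(active):        ", 1 - norm.cdf(threshold, loc=0.5))  # ~0.31
print("independent pair (rho=0):  ", p_at_least_one_active(0.0))        # ~0.52
print("near-duplicates (rho=0.95):", p_at_least_one_active(0.95))       # ~0.33
```

The second near-duplicate barely improves the batch-level objective, which is exactly the behaviour that discourages redundant selections.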

Although we cannot know exactly which model we will use before fitting the data, we expect to use a Gaussian process model trained either on molecular fingerprints or on the outputs of a deep neural network, itself trained on a mixture of docking data and real-world activity data. This will be the model in step 1. Similarly, we will consider many possible acquisition functions for selecting the batch, many of which are listed here: https://botorch.org/api/acquisition.html
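As a hedged sketch of how the two steps could be wired together (not a commitment to a specific model or acquisition function), the snippet below fits a SingleTaskGP on placeholder fingerprint-like features and selects a batch of 100 candidates from a discrete pool with qExpectedImprovement, one of the Monte Carlo batch acquisition functions in BoTorch. The data arrays and dimensions are purely illustrative.

```python
# Hedged sketch of steps 1 and 2 with GPyTorch/BoTorch. The feature matrices
# and labels are random placeholders; qExpectedImprovement is only one of the
# batch acquisition functions we may end up using.
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition.monte_carlo import qExpectedImprovement
from botorch.optim import optimize_acqf_discrete
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)

# Step 1: Gaussian process surrogate on fingerprint-like features.
train_X = torch.rand(200, 256, dtype=torch.double)   # e.g. folded Morgan fingerprints
train_Y = torch.randn(200, 1, dtype=torch.double)    # docking scores / activity labels
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# Step 2: select a batch of 100 molecules from a discrete candidate pool.
pool_X = torch.rand(5000, 256, dtype=torch.double)   # unscored library
acq = qExpectedImprovement(model, best_f=train_Y.max())
batch, _ = optimize_acqf_discrete(acq, q=100, choices=pool_X)
print(batch.shape)  # torch.Size([100, 256]): features of the selected molecules
```

In practice we would replace the random features with real fingerprints or learned embeddings and compare several acquisition functions from the list linked above.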

What makes your approach stand out from the community? (<100 words)

As explained in more detail above, the key difference is how we form a batch of 100 molecules from a much larger list of molecules and predicted scores: we trade off diversity against high scores in a principled way. Most existing approaches use heuristics for this step, which are likely to be sub-optimal.

Method Name
One-shot Batch Bayesopt
Commercial software packages used

None

Free software packages used

Python
AutoDock Vina
PyTorch, GPyTorch, BoTorch

Relevant publications of previous uses by your group of this software/method

Aspects of our method have been used in the following publications. The entire procedure has not been used by our group before, but it is a natural combination of existing work.

https://pubs.acs.org/doi/full/10.1021/acs.jcim.1c01334

https://arxiv.org/abs/2205.02708

https://openreview.net/forum?id=W1tcNQNG1S

https://realworldml.github.io/files/cr/paper8.pdf

https://proceedings.neurips.cc/paper/2014/hash/069d3bb002acd8d7dd095917f9efe4cb-Abstract.html

https://proceedings.mlr.press/v48/hernandez-lobatoa16.html
