
debiasing

The module that contains the DebiasedDTA training framework.

The DebiasedDTA training framework comprises two stages, carried out by two models that we call the “guide” and the “predictor”. The guide learns a weighting of the training dataset such that a model trained on the weighted data learns a robust relationship between biomolecules and binding affinity, instead of spurious associations. The predictor then uses the weights produced by the guide to progressively reweight the training data during its own training, in order to obtain a predictor that generalizes well to unseen molecules.

DebiasedDTA leverages the guides to identify protein-ligand pairs that carry more information about the mechanisms of protein-ligand binding. We hypothesize that if the guides, which are models designed to learn misleading spurious patterns, perform poorly on a protein-ligand pair, then that pair is more likely to bear generalizable information on binding and deserves higher attention from the DTA predictors.

DebiasedDTA adopts k-fold cross-validation (k = 1 / mini_val_frac) to measure the performance of a guide on the training interactions. First, it randomly divides the training set into k folds and constructs k different mini-training and mini-validation sets. DebiasedDTA trains the guide on each mini-training set and computes the squared errors on the corresponding mini-validation set. One run of cross-validation yields one squared-error measurement per protein-ligand pair, as each pair is placed in a mini-validation set exactly once. To better estimate the performance on each sample, DebiasedDTA repeats the cross-validation n_bootstrapping times and obtains n_bootstrapping error measurements per sample. DebiasedDTA then computes the median of the n_bootstrapping squared errors and calls it the "importance coefficient" of the protein-ligand pair. The importance coefficients guide the training of the predictor after being converted into training weights.
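
The procedure above can be condensed into a short sketch (the full implementation appears in the source code below). Here, guide_cls is any class exposing train and predict methods as described later on this page; the function name and the exact fold bookkeeping are illustrative simplifications, not the library's API.

import numpy as np

def compute_importance_coefficients(guide_cls, ligands, proteins, labels,
                                    mini_val_frac=0.2, n_bootstrapping=10, seed=42):
    """Simplified sketch: median squared cross-validation error per sample, normalized."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    labels = np.asarray(labels, dtype=float)
    fold_size = int(n * mini_val_frac) + 1
    sq_errors = [[] for _ in range(n)]               # squared errors collected per sample

    for _ in range(n_bootstrapping):
        order = rng.permutation(n)                   # reshuffle before every cross-validation run
        for start in range(0, n, fold_size):
            val_idx = order[start:start + fold_size]
            tr_idx = np.concatenate([order[:start], order[start + fold_size:]])

            guide = guide_cls()                      # fresh guide for every mini-training set
            guide.train([ligands[i] for i in tr_idx],
                        [proteins[i] for i in tr_idx],
                        labels[tr_idx].tolist())
            preds = np.asarray(guide.predict([ligands[i] for i in val_idx],
                                             [proteins[i] for i in val_idx]))
            for i, err in zip(val_idx, (labels[val_idx] - preds) ** 2):
                sq_errors[i].append(err)

    medians = np.array([np.median(errs) for errs in sq_errors])
    return medians / medians.sum()                   # normalized importance coefficients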

In the DebiasedDTA training framework, the predictor is the model that is trained on the training samples weighted by the guide to ultimately predict protein-chemical binding affinities. The predictor can adopt any biomolecule representation, but it must be able to weight the training samples during training to comply with the weight adaptation strategy proposed in DebiasedDTA.

The proposed strategy initializes the training sample weights to 1 and updates them at each epoch such that the weight of each training sample converges to its importance coefficient by the last epoch. When trained with this strategy, the predictor attributes higher importance to samples with more information on binding rules (i.e., samples with higher importance coefficients) as training progresses. Our weight adaptation strategy is formulated as

\[\vec{w}_e = (1 - \frac{e}{E}) + \vec{i} \times \frac{e}{E}, \]

where \(\vec{w}_e\) is the vector of training sample weights at epoch \(e\), \(E\) is the number of training epochs, and \(\vec{i}\) is the vector of importance coefficients. Here, \(e/E\) increases as training continues, and so does the impact of \(\vec{i}\), the importance coefficients, on the sample weights.
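
For concreteness, the schedule can be evaluated at a few epochs; the values below are illustrative and mirror the weights_by_epoch computation in the source code further down.

import numpy as np

i = np.array([0.10, 0.25, 0.65])   # illustrative importance coefficients for three samples
E = 100                            # total number of training epochs

def epoch_weights(e):
    # w_e = (1 - e/E) + i * (e/E): all ones at e = 0, equal to i at e = E
    return (1 - e / E) + i * (e / E)

print(epoch_weights(0))    # [1.   1.   1.  ]   -> uniform weights at the start
print(epoch_weights(50))   # [0.55 0.625 0.825] -> halfway between 1 and i
print(epoch_weights(100))  # [0.1  0.25 0.65]   -> the importance coefficients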

DebiasedDTA

Source code in pydebiaseddta/debiasing/debiaseddta.py
class DebiasedDTA:
    def __init__(
        self,
        guide_cls: Type[guides.Guide],
        predictor_cls: Type[predictors.BasePredictor],
        mini_val_frac: float = 0.2,
        n_bootstrapping: int = 10,
        guide_params: Dict = None,
        predictor_params: Dict = None,
    ):
        """Constructor to initiate a DebiasedDTA training framework. 

        Parameters
        ----------
        guide_cls : Type[guides.Guide]
            The `Guide` class for debiasing. Note that the input is not an instance, but a class, *e.g.*, `BoWDTA`, not `BoWDTA()`.
            The instance is created during the model training by the DebiasedDTA module.
        predictor_cls : Type[predictors.BasePredictor]
            Class of the `Predictor` to debias. Note that the input is not an instance, but a class, *e.g.*, `BPEDTA`, not `BPEDTA()`.
            The instance is created during the model training by the DebiasedDTA module.
        mini_val_frac : float, optional
            Fraction of the training data to set aside as the mini-validation set for guide evaluation, by default 0.2
        n_bootstrapping : int, optional
            Number of times to train guides on the training set, by default 10
        guide_params : Dict, optional
            Parameter dictionary necessary to create the `Guide`. 
            The dictionary should map the name of the constructor parameters to their values. 
            An empty dictionary is used during the creation by default.
        predictor_params : Dict, optional
            Parameter dictionary necessary to create the `Predictor`. 
            The dictionary should map the name of the constructor parameters to their values, and
            `n_epochs` **must** be among the parameters for debiasing to work.
            An empty dictionary is used during the creation by default.

        Raises
        ------
        ValueError
            A `ValueError` is raised if `n_epochs` is not among the predictor parameters.
        """
        self.guide_cls = guide_cls
        self.predictor_cls = predictor_cls
        self.mini_val_frac = mini_val_frac
        self.n_bootstrapping = n_bootstrapping

        self.guide_params = dict() if guide_params is None else guide_params
        self.predictor_params = {} if predictor_params is None else predictor_params
        self.predictor_instance = self.predictor_cls(**self.predictor_params)
        if "n_epochs" not in self.predictor_instance.__dict__:
            raise ValueError(
                'The predictor must have a field named "n_epochs" to be debiased'
            )

    @staticmethod
    def save_importance_coefficients(
        interactions: List[Tuple[int, Any, Any, float]], importance_coefficients: List[float], savedir: str
    ):
        """Saves the importance coefficients learned by the `guide`.

        Parameters
        ----------
        interactions : List[Tuple[int, Any, Any, float]]
            The List of training interactions as a Tuple of interaction id (assigned by the guide),
            ligand, protein, and affinity score.
        importance_coefficients : List[float]
            The importance coefficient for each interaction.
        savedir : str
            Path to save the coefficients.
        """    
        dump_content = []
        for interaction_id, ligand, protein, label in interactions:
            importance_coefficient = importance_coefficients[interaction_id]
            dump_content.append(f"{ligand},{protein},{label},{importance_coefficient}")
        dump = "\n".join(dump_content)
        with open(savedir, "w") as f:
            f.write(dump)

    def learn_importance_coefficients(
        self,
        train_ligands: List[Any],
        train_proteins: List[Any],
        train_labels: List[float],
        savedir: str = None,
    ) -> List[float]:
        """Learns importance coefficients using the `Guide` model specified during the construction.

        Parameters
        ----------
        train_ligands : List[Any]
            List of the training ligands used by the `Guide` model. 
            DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
        train_proteins : List[Any]
            List of the training proteins used by the `Guide` model. 
            DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
        train_labels : List[float]
            Affinity scores of the training protein-ligand pairs.
        savedir : str, optional
            Path to save the learned importance coefficients. By default `None` and the coefficients are not saved.

        Returns
        -------
        List[float]
            The importance coefficients learned by the guide.
        """
        train_size = len(train_ligands)
        train_interactions = list(
            zip(range(train_size), train_ligands, train_proteins, train_labels,)
        )
        mini_val_data_size = int(train_size * self.mini_val_frac) + 1
        interaction_id_to_sq_diff = [[] for _ in range(train_size)]

        for _ in range(self.n_bootstrapping):
            random.shuffle(train_interactions)
            n_mini_val = int(1 / self.mini_val_frac)
            for mini_val_ix in range(n_mini_val):
                val_start_ix = mini_val_ix * mini_val_data_size
                val_end_ix = val_start_ix + mini_val_data_size
                mini_val_interactions = train_interactions[val_start_ix:val_end_ix]
                mini_train_interactions = (
                    train_interactions[:val_start_ix] + train_interactions[val_end_ix:]
                )

                mini_train_ligands = [
                    interaction[1] for interaction in mini_train_interactions
                ]
                mini_train_proteins = [
                    interaction[2] for interaction in mini_train_interactions
                ]
                mini_train_labels = [
                    interaction[3] for interaction in mini_train_interactions
                ]
                guide_instance = self.guide_cls(**self.guide_params)
                guide_instance.train(
                    mini_train_ligands, mini_train_proteins, mini_train_labels,
                )

                mini_val_ligands = [
                    interaction[1] for interaction in mini_val_interactions
                ]
                mini_val_proteins = [
                    interaction[2] for interaction in mini_val_interactions
                ]
                preds = guide_instance.predict(mini_val_ligands, mini_val_proteins)
                mini_val_labels = [
                    interaction[3] for interaction in mini_val_interactions
                ]
                mini_val_sq_diffs = (np.array(mini_val_labels) - np.array(preds)) ** 2
                mini_val_interaction_ids = [
                    interaction[0] for interaction in mini_val_interactions
                ]
                for interaction_id, sq_diff in zip(
                    mini_val_interaction_ids, mini_val_sq_diffs
                ):
                    interaction_id_to_sq_diff[interaction_id].append(sq_diff)

        interaction_id_to_med_diff = [
            np.median(diffs) for diffs in interaction_id_to_sq_diff
        ]
        importance_coefficients = [
            med / sum(interaction_id_to_med_diff) for med in interaction_id_to_med_diff
        ]

        if savedir is not None:
            DebiasedDTA.save_importance_coefficients(
                train_interactions, importance_coefficients, savedir
            )

        return importance_coefficients

    def train(
        self,
        train_ligands: List[Any],
        train_proteins: List[Any],
        train_labels: List[float],
        val_ligands: List[Any] = None,
        val_proteins: List[Any] = None,
        val_labels: List[float] = None,
        coeffs_save_path: str = None,
    ) -> Any:
        """Starts the DebiasedDTA training framework.
        The importance coefficients are learned with the guide and used to weight the samples during the predictor's training.
        Performance on the validation set is also measured, if provided.

        Parameters
        ----------
        train_ligands : List[Any]
            List of the training ligands used by the `Predictor`. 
            DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
        train_proteins : List[Any]
            List of the training proteins used by the `Predictor`. 
            DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
        train_labels : List[float]
            Affinity scores of the training protein-ligand pairs.
        val_ligands : List[Any], optional
            Validation ligands to measure predictor performance, by default `None` and no validation is applied.
        val_proteins : List[Any], optional
            Validation proteins to measure predictor performance, by default `None` and no validation is applied.
        val_labels : List[float], optional
            Affinity scores of the validation pairs, by default `None` and no validation is applied.
        coeffs_save_path : str, optional
            Path to save importance coefficients learned by the `guide`. Defaults to `None` and no saving is performed.

        Returns
        -------
        Any
            Output of the train function of the predictor.

        """
        train_ligands = train_ligands.copy()
        train_proteins = train_proteins.copy()

        importance_coefficients = self.learn_importance_coefficients(
            train_ligands, train_proteins, train_labels, savedir=coeffs_save_path,
        )

        n_epochs = self.predictor_instance.n_epochs
        ic = np.array(importance_coefficients)
        weights_by_epoch = [
            1 - (e / n_epochs) + ic * (e / n_epochs) for e in range(n_epochs)
        ]

        if (
            val_ligands is not None
            and val_proteins is not None
            and val_labels is not None
        ):
            return self.predictor_instance.train(
                train_ligands,
                train_proteins,
                train_labels,
                val_ligands=val_ligands,
                val_proteins=val_proteins,
                val_labels=val_labels,
                sample_weights_by_epoch=weights_by_epoch,
            )

        return self.predictor_instance.train(
            train_ligands,
            train_proteins,
            train_labels,
            sample_weights_by_epoch=weights_by_epoch,
        )
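
A minimal end-to-end usage sketch follows. BoWDTA and BPEDTA are the guide and predictor classes used as examples in the docstrings above; the import paths, the toy inputs, and the availability of an n_epochs constructor parameter for the predictor are assumptions, not guarantees of the library's layout.

from pydebiaseddta import guides, predictors          # assumed module layout
from pydebiaseddta.debiasing import DebiasedDTA       # assumed import path

# Toy inputs: SMILES strings for ligands, amino-acid sequences for proteins.
# DebiasedDTA itself imposes no restriction on the representation types.
train_ligands = ["CC(=O)OC1=CC=CC=C1C(=O)O", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C"]
train_proteins = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MSSHEGGKKKALKQPKKQAKEMDEEEKAFKQKQ"]
train_labels = [5.2, 7.1]

debiaseddta = DebiasedDTA(
    guide_cls=guides.BoWDTA,                  # a class, not an instance
    predictor_cls=predictors.BPEDTA,          # a class, not an instance
    mini_val_frac=0.2,
    n_bootstrapping=10,
    predictor_params={"n_epochs": 100},       # n_epochs is required for debiasing
)

history = debiaseddta.train(
    train_ligands,
    train_proteins,
    train_labels,
    coeffs_save_path="importance_coefficients.csv",  # optional: dump learned coefficients
)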

__init__(guide_cls, predictor_cls, mini_val_frac=0.2, n_bootstrapping=10, guide_params=None, predictor_params=None)

Constructor to initiate a DebiasedDTA training framework.

Parameters:

    guide_cls : Type[guides.Guide], required
        The Guide class for debiasing. Note that the input is not an instance, but a class, e.g., BoWDTA, not BoWDTA(). The instance is created during the model training by the DebiasedDTA module.
    predictor_cls : Type[predictors.BasePredictor], required
        Class of the Predictor to debias. Note that the input is not an instance, but a class, e.g., BPEDTA, not BPEDTA(). The instance is created during the model training by the DebiasedDTA module.
    mini_val_frac : float, optional, default 0.2
        Fraction of the training data to set aside as the mini-validation set for guide evaluation.
    n_bootstrapping : int, optional, default 10
        Number of times to train guides on the training set.
    guide_params : Dict, optional, default None
        Parameter dictionary used to create the Guide. The dictionary should map the names of the constructor parameters to their values. An empty dictionary is used by default.
    predictor_params : Dict, optional, default None
        Parameter dictionary used to create the Predictor. The dictionary should map the names of the constructor parameters to their values, and n_epochs must be among the parameters for debiasing to work. An empty dictionary is used by default.

Raises:

    ValueError
        A ValueError is raised if n_epochs is not among the predictor parameters.

Source code in pydebiaseddta/debiasing/debiaseddta.py
def __init__(
    self,
    guide_cls: Type[guides.Guide],
    predictor_cls: Type[predictors.BasePredictor],
    mini_val_frac: float = 0.2,
    n_bootstrapping: int = 10,
    guide_params: Dict = None,
    predictor_params: Dict = None,
):
    """Constructor to initiate a DebiasedDTA training framework. 

    Parameters
    ----------
    guide_cls : Type[guides.Guide]
        The `Guide` class for debiasing. Note that the input is not an instance, but a class, *e.g.*, `BoWDTA`, not `BoWDTA()`.
        The instance is created during the model training by the DebiasedDTA module.
    predictor_cls : Type[predictors.BasePredictor]
        Class of the `Predictor` to debias. Note that the input is not an instance, but a class, *e.g.*, `BPEDTA`, not `BPEDTA()`.
        The instance is created during the model training by the DebiasedDTA module.
    mini_val_frac : float, optional
        Fraction of the training data to set aside as the mini-validation set for guide evaluation, by default 0.2
    n_bootstrapping : int, optional
        Number of times to train guides on the training set, by default 10
    guide_params : Dict, optional
        Parameter dictionary necessary to create the `Guide`. 
        The dictionary should map the name of the constructor parameters to their values. 
        An empty dictionary is used during the creation by default.
    predictor_params : Dict, optional
        Parameter dictionary necessary to create the `Predictor`. 
        The dictionary should map the name of the constructor parameters to their values, and
        `n_epochs` **must** be among the parameters for debiasing to work.
        An empty dictionary is used during the creation by default.

    Raises
    ------
    ValueError
        A `ValueError` is raised if `n_epochs` is not among the predictor parameters.
    """
    self.guide_cls = guide_cls
    self.predictor_cls = predictor_cls
    self.mini_val_frac = mini_val_frac
    self.n_bootstrapping = n_bootstrapping

    self.guide_params = dict() if guide_params is None else guide_params
    self.predictor_params = {} if predictor_params is None else predictor_params
    self.predictor_instance = self.predictor_cls(**self.predictor_params)
    if "n_epochs" not in self.predictor_instance.__dict__:
        raise ValueError(
            'The predictor must have a field named "n_epochs" to be debiased'
        )

learn_importance_coefficients(train_ligands, train_proteins, train_labels, savedir=None)

Learns importance coefficients using the Guide model specified during the construction.

Parameters:

    train_ligands : List[Any], required
        List of the training ligands used by the Guide model. DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
    train_proteins : List[Any], required
        List of the training proteins used by the Guide model. DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
    train_labels : List[float], required
        Affinity scores of the training protein-ligand pairs.
    savedir : str, optional, default None
        Path to save the learned importance coefficients. By default None and the coefficients are not saved.

Returns:

    List[float]
        The importance coefficients learned by the guide.

Source code in pydebiaseddta/debiasing/debiaseddta.py
def learn_importance_coefficients(
    self,
    train_ligands: List[Any],
    train_proteins: List[Any],
    train_labels: List[float],
    savedir: str = None,
) -> List[float]:
    """Learns importance coefficients using the `Guide` model specified during the construction.

    Parameters
    ----------
    train_ligands : List[Any]
        List of the training ligands used by the `Guide` model. 
        DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
    train_proteins : List[Any]
        List of the training proteins used by the `Guide` model. 
        DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
    train_labels : List[float]
        Affinity scores of the training protein-ligand pairs.
    savedir : str, optional
        Path to save the learned importance coefficients. By default `None` and the coefficients are not saved.

    Returns
    -------
    List[float]
        The importance coefficients learned by the guide.
    """
    train_size = len(train_ligands)
    train_interactions = list(
        zip(range(train_size), train_ligands, train_proteins, train_labels,)
    )
    mini_val_data_size = int(train_size * self.mini_val_frac) + 1
    interaction_id_to_sq_diff = [[] for _ in range(train_size)]

    for _ in range(self.n_bootstrapping):
        random.shuffle(train_interactions)
        n_mini_val = int(1 / self.mini_val_frac)
        for mini_val_ix in range(n_mini_val):
            val_start_ix = mini_val_ix * mini_val_data_size
            val_end_ix = val_start_ix + mini_val_data_size
            mini_val_interactions = train_interactions[val_start_ix:val_end_ix]
            mini_train_interactions = (
                train_interactions[:val_start_ix] + train_interactions[val_end_ix:]
            )

            mini_train_ligands = [
                interaction[1] for interaction in mini_train_interactions
            ]
            mini_train_proteins = [
                interaction[2] for interaction in mini_train_interactions
            ]
            mini_train_labels = [
                interaction[3] for interaction in mini_train_interactions
            ]
            guide_instance = self.guide_cls(**self.guide_params)
            guide_instance.train(
                mini_train_ligands, mini_train_proteins, mini_train_labels,
            )

            mini_val_ligands = [
                interaction[1] for interaction in mini_val_interactions
            ]
            mini_val_proteins = [
                interaction[2] for interaction in mini_val_interactions
            ]
            preds = guide_instance.predict(mini_val_ligands, mini_val_proteins)
            mini_val_labels = [
                interaction[3] for interaction in mini_val_interactions
            ]
            mini_val_sq_diffs = (np.array(mini_val_labels) - np.array(preds)) ** 2
            mini_val_interaction_ids = [
                interaction[0] for interaction in mini_val_interactions
            ]
            for interaction_id, sq_diff in zip(
                mini_val_interaction_ids, mini_val_sq_diffs
            ):
                interaction_id_to_sq_diff[interaction_id].append(sq_diff)

    interaction_id_to_med_diff = [
        np.median(diffs) for diffs in interaction_id_to_sq_diff
    ]
    importance_coefficients = [
        med / sum(interaction_id_to_med_diff) for med in interaction_id_to_med_diff
    ]

    if savedir is not None:
        DebiasedDTA.save_importance_coefficients(
            train_interactions, importance_coefficients, savedir
        )

    return importance_coefficients

save_importance_coefficients(interactions, importance_coefficients, savedir) staticmethod

Saves the importance coefficients learned by the guide.

Parameters:

    interactions : List[Tuple[int, Any, Any, float]], required
        The list of training interactions as tuples of interaction id (assigned by the guide), ligand, protein, and affinity score.
    importance_coefficients : List[float], required
        The importance coefficient for each interaction.
    savedir : str, required
        Path to save the coefficients.
Source code in pydebiaseddta/debiasing/debiaseddta.py
@staticmethod
def save_importance_coefficients(
    interactions: List[Tuple[int, Any, Any, float]], importance_coefficients: List[float], savedir: str
):
    """Saves the importance coefficients learned by the `guide`.

    Parameters
    ----------
    interactions : List[Tuple[int, Any, Any, float]]
        The List of training interactions as a Tuple of interaction id (assigned by the guide),
        ligand, protein, and affinity score.
    importance_coefficients : List[float]
        The importance coefficient for each interaction.
    savedir : str
        Path to save the coefficients.
    """    
    dump_content = []
    for interaction_id, ligand, protein, label in interactions:
        importance_coefficient = importance_coefficients[interaction_id]
        dump_content.append(f"{ligand},{protein},{label},{importance_coefficient}")
    dump = "\n".join(dump_content)
    with open(savedir, "w") as f:
        f.write(dump)

train(train_ligands, train_proteins, train_labels, val_ligands=None, val_proteins=None, val_labels=None, coeffs_save_path=None)

Starts the DebiasedDTA training framework. The importance coefficients are learned with the guide and used to weight the samples during the predictor's training. Performance on the validation set is also measured, if provided.

Parameters:

    train_ligands : List[Any], required
        List of the training ligands used by the Predictor. DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
    train_proteins : List[Any], required
        List of the training proteins used by the Predictor. DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
    train_labels : List[float], required
        Affinity scores of the training protein-ligand pairs.
    val_ligands : List[Any], optional, default None
        Validation ligands to measure predictor performance, by default None and no validation is applied.
    val_proteins : List[Any], optional, default None
        Validation proteins to measure predictor performance, by default None and no validation is applied.
    val_labels : List[float], optional, default None
        Affinity scores of the validation pairs, by default None and no validation is applied.
    coeffs_save_path : str, optional, default None
        Path to save importance coefficients learned by the guide. Defaults to None and no saving is performed.

Returns:

    Any
        Output of the train function of the predictor.

Source code in pydebiaseddta/debiasing/debiaseddta.py
def train(
    self,
    train_ligands: List[Any],
    train_proteins: List[Any],
    train_labels: List[float],
    val_ligands: List[Any] = None,
    val_proteins: List[Any] = None,
    val_labels: List[float] = None,
    coeffs_save_path: str = None,
) -> Any:
    """Starts the DebiasedDTA training framework.
    The importance coefficients are learned with the guide and used to weight the samples during the predictor's training.
    Performance on the validation set is also measured, if provided.

    Parameters
    ----------
    train_ligands : List[Any]
        List of the training ligands used by the `Predictor`. 
        DebiasedDTA training framework imposes no restriction on the representation type of the ligands.
    train_proteins : List[Any]
        List of the training proteins used by the `Predictor`. 
        DebiasedDTA training framework imposes no restriction on the representation type of the proteins.
    train_labels : List[float]
        Affinity scores of the training protein-ligand pairs.
    val_ligands : List[Any], optional
        Validation ligands to measure predictor performance, by default `None` and no validation is applied.
    val_proteins : List[Any], optional
        Validation proteins to measure predictor performance, by default `None` and no validation is applied.
    val_labels : List[float], optional
        Affinity scores of the validation pairs, by default `None` and no validation is applied.
    coeffs_save_path : str, optional
        Path to save importance coefficients learned by the `guide`. Defaults to `None` and no saving is performed.

    Returns
    -------
    Any
        Output of the train function of the predictor.

    """
    train_ligands = train_ligands.copy()
    train_proteins = train_proteins.copy()

    importance_coefficients = self.learn_importance_coefficients(
        train_ligands, train_proteins, train_labels, savedir=coeffs_save_path,
    )

    n_epochs = self.predictor_instance.n_epochs
    ic = np.array(importance_coefficients)
    weights_by_epoch = [
        1 - (e / n_epochs) + ic * (e / n_epochs) for e in range(n_epochs)
    ]

    if (
        val_ligands is not None
        and val_proteins is not None
        and val_labels is not None
    ):
        return self.predictor_instance.train(
            train_ligands,
            train_proteins,
            train_labels,
            val_ligands=val_ligands,
            val_proteins=val_proteins,
            val_labels=val_labels,
            sample_weights_by_epoch=weights_by_epoch,
        )

    return self.predictor_instance.train(
        train_ligands,
        train_proteins,
        train_labels,
        sample_weights_by_epoch=weights_by_epoch,
    )