AI4S-Cup Tutorial Competition: Predicting Blood-Brain Barrier (BBB) Permeability of Small-Molecule Drugs - Uni-Mol Baseline
AI4S Cup - Getting Started
hyb
Published on 2024-02-01
Recommended image: Third-party software: ai4s-cup-0.1
Recommended machine: c12_m46_1 * NVIDIA GPU B
AI4S-Cup Tutorial Competition: Predicting Blood-Brain Barrier (BBB) Permeability of Small-Molecule Drugs - Uni-Mol Baseline
1. Contents and Goals
2. Mounting the Dataset Path
2.1 Loading, splitting, and a first look at the data
3. Key Features of Uni-Mol
3.1 Pretrain-Finetuning
3.2 Invariant spatial relation: rotation and translation invariance of molecular coordinates
3.3 Pair representation and Attention: communication between atom-pair representations
3.4 SE(3) Equivariant: an SE(3)-equivariant coordinate-update mechanism
4. Uni-Mol's Molecule and Atom Representations
5. Uni-Mol's Core Modules and Parameters
5.1 "MolTrain" parameter reference
task: the task type; five task types are currently supported
metrics: the metrics to optimize; multiple metrics can be comma-separated; leave empty for the default
data_type: the input data type; currently only molecule is supported, with protein, crystal, and more to follow
split: Uni-Mol defaults to 5-fold cross-validation; random and scaffold splits are supported
save_path: the output directory for the current task; existing files are overwritten by default
epochs, learning_rate, batch_size, early_stopping: the training hyperparameters Uni-Mol exposes
5.2 Hyperparameter Tuning
5.3 Metrics
5.3.1 F2-score
5.3.2 AUC-ROC
5.4 "MolPredict" parameter reference
load_model: path to the trained model
6. Predicting the Test Data with the Best Model

AI4S-Cup Tutorial Competition: Predicting Blood-Brain Barrier (BBB) Permeability of Small-Molecule Drugs - Uni-Mol Baseline

©️ Copyright 2023 @ Authors
Authors: 郭文韬 📨 汪鸿帅 📨 宋珂 📨
Date: 2023-11-09
License: this work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Quick start: click the Connect button above, select the ai4s-cup-0.1 image and a GPU B node type, and wait a moment before running.


1. Contents and Goals

In this example we introduce and demonstrate:

  • The key features of Uni-Mol
  • Uni-Mol's molecule and atom representations
  • Uni-Mol's core modules and their parameters
  • How to tune hyperparameters (e.g., the learning rate) when training the model
  • A practical application

2. Mounting the Dataset Path

The data for this tutorial is shared with members of this project through Bohrium's dataset feature.

The dataset path is: /bohr/ai4scup-cns-5zkz/v3/

[3]
Dataset_Dir = '/bohr/ai4scup-cns-5zkz/v3/'

2.1 Loading, splitting, and a first look at the data

Note: when fine-tuning, Uni-Mol validates the model with 5-fold cross-validation by default.
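As a side note, the 5-fold scheme can be sketched with scikit-learn's KFold. This is an illustration only — Uni-Mol builds its folds internally:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10)  # stand-in for 10 molecules
kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    # each fold trains on 8 samples and validates on the held-out 2
    print(fold, len(train_idx), len(val_idx))
```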

[4]
import os
import pandas as pd

# Load the training data
train_data_full = pd.read_csv(os.path.join(Dataset_Dir, 'mol_train.csv'))
train_data_full.columns = ["SMILES", "TARGET"]
train_data_full.to_csv("./mol_train_full.csv", index=False)

# First 500 rows as a training split
train_data_split = train_data_full[:500].copy()
train_data_split.columns = ["SMILES", "TARGET"]
train_data_split.to_csv("./mol_train_split.csv", index=False)

# Remaining rows as a validation split
valid_data_split = train_data_full[500:].copy()
valid_data_split.columns = ["SMILES", "TARGET"]
valid_data_split.to_csv("./mol_valid_split.csv", index=False)

# Load the test data
test_data = pd.read_csv(os.path.join(Dataset_Dir, 'mol_test.csv'))
test_data.columns = ["SMILES", "TARGET"]
test_data.to_csv("./mol_test.csv", index=False)

3. Key Features of Uni-Mol

Before starting the task, let's look at the key features of Uni-Mol, a pretrained model for 3D molecular representations.

3.1. Pretrain-Finetuning

Uni-Mol ships with a model pretrained on large-scale data. Pretraining is a common deep-learning strategy: a model is first trained on a large dataset and then adapted to a specific task, a step known as fine-tuning. The advantages of pretraining are:

1. Data efficiency: a pretrained model has already learned useful features and patterns from large-scale data, so it can reach strong performance on a specific task with only a small amount of labeled data.
2. Performance: pretrained models usually outperform models trained from scratch, especially when data is scarce.
3. Transfer learning: knowledge learned on one task can be transferred to other tasks and domains, which is valuable in many practical applications.
4. Time savings: starting from a pretrained model avoids training from zero and saves a large amount of training time.

**Pretraining plus fine-tuning is therefore one of the most important strategies in modern deep learning.**

3.2 Invariant spatial relation: rotation and translation invariance of molecular coordinates

  • Molecular conformations are described by pairwise Euclidean distances between atoms
  • Learnable atom-pair types are fused in, and a Gaussian kernel turns them into a positional encoding

Code excerpt from https://github.com/dptech-corp, file: unimol/models/unimol.py

def get_dist_features(dist, et):
    n_node = dist.size(-1)
    # smooth the distances with a Gaussian kernel to obtain the positional encoding
    gbf_feature = self.gbf(dist, et)
    gbf_result = self.gbf_proj(gbf_feature)
    # turn the atom-pair information (distance, type) into the attention bias for self-attention
    graph_attn_bias = gbf_result
    graph_attn_bias = graph_attn_bias.permute(0, 3, 1, 2).contiguous()
    graph_attn_bias = graph_attn_bias.view(-1, n_node, n_node)
    return graph_attn_bias

graph_attn_bias = get_dist_features(src_distance, src_edge_type)

# encode the atom-pair information
(encoder_rep, encoder_pair_rep, delta_encoder_pair_rep, x_norm, delta_encoder_pair_rep_norm) = self.encoder(x, padding_mask=padding_mask, attn_mask=graph_attn_bias)
encoder_pair_rep[encoder_pair_rep == float("-inf")] = 0  # spatial atom-pair encoding
self.gbf = GaussianLayer(K, n_edge_type)  # Gaussian kernel layer

3.3 Pair representation and Attention: communication between atom-pair representations

The atom-pair representation serves as a bias that helps update the self-attention computation.

# Code excerpt from https://github.com/dptech-corp, file: Uni-Mol/unimol/models/transformer_encoder_with_pair.py
# encode pairs into transformer
for i in range(len(self.layers)):
    x, attn_mask, _ = self.layers[i](x, padding_mask=padding_mask, attn_bias=attn_mask, return_attn=True)


# Code excerpt from https://github.com/dptech-corp, file: Uni-Mol/unimol/models/transformer_encoder_with_pair.py
# attention weights
attn_weights = torch.bmm(q, k.transpose(1, 2))  # get q_{ij}: the multi-head Query-Key product of self-attention,
# mapping atom representations into atom-pair representations


# Code excerpt from https://github.com/dptech-corp, file: Uni-Core/unicore/modules/multihead_attention.py
# add the attention bias into the attention weights
if attn_bias is not None:
    attn_weights += attn_bias  # add the atom-pair bias on top of the existing weights


# Code excerpt from https://github.com/dptech-corp, file: Uni-Core/unicore/modules/multihead_attention.py
# apply softmax to obtain the final attention
attn = softmax_dropout(attn_weights, self.dropout, self.training, inplace=False,)  # softmax
o = torch.bmm(attn, v)  # get attention output

3.4 SE(3) Equivariant: an SE(3)-equivariant coordinate-update mechanism

  • Gives the model the ability to output coordinates directly
  • Switches the prediction target accordingly
  • Uses an SE(3)-equivariant coordinate head
# Code excerpt from https://github.com/dptech-corp, file: Uni-Mol/unimol/models/docking_pose.py
self.cross_distance_project = NonLinearHead(
    args.mol.encoder_embed_dim * 2 + args.mol.encoder_attention_heads, 1, "relu")

self.holo_distance_project = DistanceHead(
    args.mol.encoder_embed_dim + args.mol.encoder_attention_heads, "relu")

4. Uni-Mol's Molecule and Atom Representations

[5]
# Import Uni-Mol
from unimol import UniMolRepr
import numpy as np

# Uni-Mol representation of a single SMILES
clf = UniMolRepr(data_type='molecule', remove_hs=False)

smiles_list = train_data_split["SMILES"].values.tolist()[:1]

unimol_repr = clf.get_repr(smiles_list, return_atomic_reprs=True)

# Uni-Mol molecule-level representation (the CLS token)
print(np.array(unimol_repr['cls_repr']).shape)
print(np.array(unimol_repr['cls_repr']))

# Uni-Mol atom-level representations; atom order matches the RDKit Mol
print(np.array(unimol_repr['atomic_reprs']).shape)
print(np.array(unimol_repr['atomic_reprs']))
/opt/conda/lib/python3.8/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html

  from .autonotebook import tqdm as notebook_tqdm

2023-11-23 18:31:23 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:31:28 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

1it [00:01,  1.19s/it]

2023-11-23 18:31:29 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:31:29 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

100%|██████████| 1/1 [00:00<00:00,  7.15it/s]

(1, 512)

[[-3.32554668e-01 -6.47540271e-01 -1.17192067e-01 ...  2.89853275e-01
  -2.18388867e+00]]

(1, 65, 512)

[[[-1.0528241  -0.24181914  0.522954   ...  1.5538865   0.5066706  -1.6655072 ]
  [-2.2335687   0.920575   -0.63172686 ...  2.4422293   1.3804872  -2.2780924 ]
  [ 0.03954349  0.5343058  -0.01555062 ...  2.148034    0.41345966 -2.255646  ]
  ...
  [-1.5921654  -0.62526304  0.54613507 ...  2.4692652   0.66252726 -2.4752347 ]
  [-0.05565414 -1.5578     -0.05022473 ...  2.4439347  -0.04021268 -2.2517097 ]
  [-0.6395371  -1.5551064  -0.17523812 ...  2.526401   -1.090919   -2.415597  ]]]

[7]
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

smiles_list = valid_data_split["SMILES"].values.tolist()
y = np.array(valid_data_split["TARGET"].values.tolist())
repr_dict = clf.get_repr(smiles_list)
unimol_repr_list = np.array(repr_dict['cls_repr'])

# Project the 512-dimensional CLS embeddings down to 2D
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(unimol_repr_list)

# Visualization
colors = ['r', 'g', 'b']
markers = ['s', 'o', 'D']
labels = ['Target:0', 'Target:1']

plt.figure(figsize=(8, 6))

for label, color, marker in zip(np.unique(y), colors, markers):
    plt.scatter(X_reduced[y == label, 0],
                X_reduced[y == label, 1],
                c=color,
                marker=marker,
                label=labels[label],
                edgecolors='black')

plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend(loc='best')
plt.title('Unimol Repr')
plt.show()
2023-11-23 18:32:04 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

200it [00:03, 56.14it/s]

2023-11-23 18:32:08 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:32:08 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

100%|██████████| 7/7 [00:06<00:00,  1.05it/s]

5. Uni-Mol's Core Modules and Parameters

[8]
# Import Uni-Mol's training and prediction modules
from unimol import MolTrain, MolPredict

5.1. “MolTrain” parameter reference

task: the task type; five task types are currently supported:

  • classification: binary 0/1 classification
  • regression: regression
  • multiclass: multi-class classification
  • multilabel_classification: multi-label 0/1 classification
  • multilabel_regression: multi-label regression

metrics: the metrics the model should optimize; multiple metrics can be comma-separated, and leaving it empty selects the default. Currently supported:

  • classification: auc, auprc, log_loss, f1_score, mcc, recall, precision, cohen_kappa;
  • regression: mae, mse, rmse, r2, spearmanr;
  • multiclass: log_loss, acc;
  • multilabel_classification: log_loss, acc, auprc, cohen_kappa;
  • multilabel_regression: mse, mae, rmse, r2;

data_type: the input data type; currently only molecule is supported, with protein, crystal, and other data sources to follow

split: Uni-Mol validates with 5-fold cross-validation by default; both random and scaffold splits are supported

save_path: the output directory for the current task; existing files are overwritten by default

epochs, learning_rate, batch_size, early_stopping: the training hyperparameters Uni-Mol exposes
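Assembled into a call, these parameters look like the sketch below. The values and the save_path are illustrative, not recommendations, and MolTrain itself needs the unimol package from the ai4s-cup-0.1 image, so the constructor call is left commented:

```python
# Illustrative MolTrain parameter set; values are examples only.
moltrain_kwargs = dict(
    task='classification',        # binary BBB-permeability label
    metrics='auc',                # optimize ROC-AUC
    data_type='molecule',         # the only supported data_type for now
    split='random',               # or 'scaffold'
    save_path='./bbb_unimol',     # hypothetical output directory (overwritten by default)
    epochs=20,
    learning_rate=1e-4,
    batch_size=16,
    early_stopping=5,
)
# from unimol import MolTrain
# clf = MolTrain(**moltrain_kwargs)
# clf.fit('./mol_train_full.csv')
```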


5.2. Hyperparameter Tuning

Hyperparameters are the parameters of a machine-learning model whose values cannot be learned directly from the training data and must be set by hand; typical examples are the learning rate, batch size, number of training epochs, and the number of layers and nodes per layer of a neural network. The goal of hyperparameter tuning is to find a setting that makes the model perform best on held-out data. Because hyperparameter values directly affect both training behavior and predictive performance, tuning them well is essential for obtaining a high-quality model.
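As a toy illustration of the idea (not Uni-Mol code — the stand-in val_score below replaces what would be a full training run), a grid search simply evaluates every combination and keeps the best:

```python
from itertools import product

# Stand-in validation score for a (learning_rate, batch_size) pair;
# in practice each evaluation would be a full training run.
def val_score(lr, bs):
    return -abs(lr - 1e-4) * 1e4 - abs(bs - 16) / 16

grid = {'learning_rate': [1e-5, 1e-4, 1e-3], 'batch_size': [8, 16, 32]}
best = max(product(grid['learning_rate'], grid['batch_size']),
           key=lambda p: val_score(*p))
print(best)  # the combination with the highest stand-in score
```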

5.3. Metrics

5.3.1. F2-score

The F2-score is a special case of the more general F-beta score. F-beta lets us weight precision against recall; its general form is:

$$F_\beta = (1+\beta^2)\cdot\frac{\text{Precision}\cdot\text{Recall}}{\beta^2\cdot\text{Precision}+\text{Recall}}$$

In this competition we care more about the model's recall, so we take $\beta = 2$. The resulting F2-score is computed as:

$$F_2 = \frac{5\cdot\text{Precision}\cdot\text{Recall}}{4\cdot\text{Precision}+\text{Recall}}$$

where Precision = TP / (TP + FP) and Recall = TP / (TP + FN). TP (True Positive) is the number of samples that are actually positive and predicted positive; TN (True Negative) the number actually negative and predicted negative; FP (False Positive) the number actually negative but predicted positive; and FN (False Negative) the number actually positive but predicted negative.

The leaderboard shows the F2-score rounded to four decimal places and ranks models by it:

  • An F2-score close to 1 indicates a strong model: it predicts positives correctly while keeping false negatives low (i.e., it rarely labels an actually positive sample as negative).

  • An F2-score close to 0 indicates a weak model: it predicts positives poorly and produces many false negatives.
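The definitions above can be sanity-checked with scikit-learn's fbeta_score on a toy set of made-up labels:

```python
from sklearn.metrics import fbeta_score

# Toy labels: TP=3, FP=1, FN=1  ->  Precision = Recall = 0.75
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]

f2 = fbeta_score(y_true, y_pred, beta=2)
p = r = 0.75
manual = (1 + 2**2) * p * r / (2**2 * p + r)  # general F-beta with beta = 2
print(f2, manual)  # both 0.75
```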

5.3.2. AUC-ROC

  • AUC-ROC (Area Under the Receiver Operating Characteristic Curve) is a metric for evaluating classification models. The ROC curve plots the true positive rate (TPR) against the false positive rate (FPR) as the classification threshold varies.
  • The true positive rate (TPR), also called sensitivity, is the fraction of actually positive samples that are correctly classified as positive: TPR = TP / (TP + FN), where TP is the number of true positives and FN the number of false negatives. The false positive rate (FPR) is the fraction of actually negative samples that are incorrectly classified as positive: FPR = FP / (FP + TN), where FP is the number of false positives and TN the number of true negatives.
  • The AUC is the area under the ROC curve and reflects the model's overall performance across all classification thresholds. It lies between 0 and 1, and the closer to 1, the better the model. A model with AUC above 0.5 beats random guessing, an AUC of exactly 0.5 is equivalent to random guessing, and an AUC below 0.5 is worse than random guessing.
  • A strength of AUC-ROC is that it does not depend on any particular classification threshold, which makes it useful for judging a model's overall performance, especially when the positive and negative classes are imbalanced.
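A quick numeric check of the ranking interpretation, using scikit-learn on made-up scores:

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

# 3 of the 4 (negative, positive) pairs are ranked correctly -> AUC = 0.75
auc = roc_auc_score(y_true, scores)
print(auc)  # 0.75
```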

Note: this competition uses the F2-score as its evaluation metric.

[10]
# Fine-tune on the full dataset and validate the model with cross-validation
from sklearn.metrics import roc_auc_score, roc_curve, fbeta_score  # metrics for ROC-AUC and F2-score

threshold = 0.5

train_full_results = {}

font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 8}

lr_ft = [1e-5, 1e-4, 1e-3]  # candidate learning rates
for i in range(len(lr_ft)):  # train once per learning rate
    clf = MolTrain(task='classification',
                   data_type='molecule',
                   epochs=20,
                   learning_rate=lr_ft[i],
                   batch_size=16,
                   early_stopping=5,
                   metrics='auc',
                   split='random',
                   save_path='./full_learning_rate_' + str(lr_ft[i]),
                   )
    clf.fit("./mol_train_full.csv")  # train the model
    cv_results = pd.DataFrame({'pred': clf.cv_pred.flatten(),
                               'SMILES': train_data_full["SMILES"],
                               'Target_BBB': train_data_full["TARGET"]})
    auc = roc_auc_score(cv_results.Target_BBB, cv_results.pred)
    fpr, tpr, _ = roc_curve(cv_results.Target_BBB, cv_results.pred)
    f2_score = fbeta_score(
        cv_results.Target_BBB,
        [1 if p > threshold else 0 for p in cv_results.pred],
        beta=2
    )
    train_full_results[f"Full Learning Rate: {lr_ft[i]}"] = {"AUC": auc, "FPR": fpr, "TPR": tpr, "F2_Score": f2_score}
    print(f"[Learning Rate: {lr_ft[i]}]\tAUC:{auc:.4f}\tF2_Score:{f2_score:.4f}")

sorted_train_full_results = sorted(train_full_results.items(), key=lambda x: x[1]["F2_Score"], reverse=True)  # sort the results by F2-score

# Plot the ROC curves
plt.figure(figsize=(10, 6), dpi=150)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")

for name, result in sorted_train_full_results:
    if name.startswith("Full Learning Rate"):
        plt.plot(result["FPR"], result["TPR"], label=f"{name} (AUC:{result['AUC']:.4f} F2_Score:{result['F2_Score']:.4f})")

plt.legend(loc="lower right")
plt.title("Train_Full_Model", fontdict=font)
plt.show()
2023-11-23 18:32:45 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

700it [00:10, 69.41it/s] 

2023-11-23 18:32:55 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:32:55 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:32:55 | unimol/train.py | 105 | INFO | Uni-Mol(QSAR) | Output directory already exists: ./full_learning_rate_1e-05

2023-11-23 18:32:55 | unimol/train.py | 106 | INFO | Uni-Mol(QSAR) | Warning: Overwrite output directory: ./full_learning_rate_1e-05

2023-11-23 18:32:56 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:32:56 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:33:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6566, val_loss: 0.6118, val_auc: 0.7796, lr: 0.000010, 7.9s

2023-11-23 18:33:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6285, val_loss: 0.5628, val_auc: 0.8391, lr: 0.000009, 3.1s

2023-11-23 18:33:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5796, val_loss: 0.4939, val_auc: 0.8749, lr: 0.000009, 3.1s

2023-11-23 18:33:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5077, val_loss: 0.4130, val_auc: 0.8849, lr: 0.000008, 3.2s

2023-11-23 18:33:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4585, val_loss: 0.3817, val_auc: 0.8926, lr: 0.000008, 3.1s

2023-11-23 18:33:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4153, val_loss: 0.3550, val_auc: 0.8999, lr: 0.000007, 3.1s

2023-11-23 18:33:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3662, val_loss: 0.3846, val_auc: 0.8957, lr: 0.000007, 3.1s

2023-11-23 18:33:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3489, val_loss: 0.3612, val_auc: 0.9001, lr: 0.000006, 3.1s

2023-11-23 18:33:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3485, val_loss: 0.3496, val_auc: 0.8980, lr: 0.000006, 3.1s

2023-11-23 18:33:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3222, val_loss: 0.3592, val_auc: 0.8992, lr: 0.000005, 3.1s

2023-11-23 18:33:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.2945, val_loss: 0.3967, val_auc: 0.8929, lr: 0.000005, 3.1s

2023-11-23 18:33:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.2912, val_loss: 0.3677, val_auc: 0.8929, lr: 0.000004, 3.1s

2023-11-23 18:33:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.2884, val_loss: 0.4721, val_auc: 0.8943, lr: 0.000004, 3.1s

2023-11-23 18:33:46 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 13

2023-11-23 18:33:48 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:33:48 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'auc': 0.9001169590643274, 'auroc': 0.9001169590643274, 'auprc': 0.8697499082177511, 'log_loss': 0.3581545714688088, 'acc': 0.8928571428571429, 'f1_score': 0.8314606741573033, 'mcc': 0.7530433262998734, 'precision': 0.8409090909090909, 'recall': 0.8222222222222222, 'cohen_kappa': 0.7529411764705882}

2023-11-23 18:33:49 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:33:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6730, val_loss: 0.6560, val_auc: 0.7647, lr: 0.000010, 3.2s

2023-11-23 18:33:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6160, val_loss: 0.5730, val_auc: 0.8142, lr: 0.000009, 3.1s

2023-11-23 18:34:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5643, val_loss: 0.5085, val_auc: 0.8271, lr: 0.000009, 3.2s

2023-11-23 18:34:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.4965, val_loss: 0.4913, val_auc: 0.8433, lr: 0.000008, 3.1s

2023-11-23 18:34:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4518, val_loss: 0.4961, val_auc: 0.8547, lr: 0.000008, 3.2s

2023-11-23 18:34:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4102, val_loss: 0.4869, val_auc: 0.8598, lr: 0.000007, 3.1s

2023-11-23 18:34:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3866, val_loss: 0.5120, val_auc: 0.8576, lr: 0.000007, 3.2s

2023-11-23 18:34:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3805, val_loss: 0.5353, val_auc: 0.8609, lr: 0.000006, 3.2s

2023-11-23 18:34:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3423, val_loss: 0.4845, val_auc: 0.8678, lr: 0.000006, 3.1s

2023-11-23 18:34:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3198, val_loss: 0.4762, val_auc: 0.8736, lr: 0.000005, 3.1s

2023-11-23 18:34:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3065, val_loss: 0.4871, val_auc: 0.8738, lr: 0.000005, 3.1s

2023-11-23 18:34:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3043, val_loss: 0.5307, val_auc: 0.8760, lr: 0.000004, 3.1s

2023-11-23 18:34:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.2690, val_loss: 0.5200, val_auc: 0.8713, lr: 0.000004, 3.2s

2023-11-23 18:34:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.2697, val_loss: 0.5092, val_auc: 0.8769, lr: 0.000003, 3.2s

2023-11-23 18:34:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.2569, val_loss: 0.5770, val_auc: 0.8776, lr: 0.000003, 3.1s

2023-11-23 18:34:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.2619, val_loss: 0.5367, val_auc: 0.8793, lr: 0.000002, 3.1s

2023-11-23 18:34:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.2565, val_loss: 0.5355, val_auc: 0.8800, lr: 0.000002, 3.1s

2023-11-23 18:34:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.2339, val_loss: 0.5355, val_auc: 0.8798, lr: 0.000001, 3.1s

2023-11-23 18:34:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.2434, val_loss: 0.5367, val_auc: 0.8802, lr: 0.000001, 3.1s

2023-11-23 18:35:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.2480, val_loss: 0.5321, val_auc: 0.8789, lr: 0.000000, 3.1s

2023-11-23 18:35:04 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:35:05 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'auc': 0.8802222222222222, 'auroc': 0.8802222222222222, 'auprc': 0.775995945642201, 'log_loss': 0.5477380958585335, 'acc': 0.7857142857142857, 'f1_score': 0.7169811320754718, 'mcc': 0.5477225575051662, 'precision': 0.6785714285714286, 'recall': 0.76, 'cohen_kappa': 0.5454545454545454}

2023-11-23 18:35:05 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:35:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6565, val_loss: 0.6171, val_auc: 0.8481, lr: 0.000010, 3.2s

2023-11-23 18:35:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5939, val_loss: 0.5735, val_auc: 0.8576, lr: 0.000009, 3.1s

2023-11-23 18:35:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5436, val_loss: 0.4739, val_auc: 0.8709, lr: 0.000009, 3.1s

2023-11-23 18:35:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.4900, val_loss: 0.4275, val_auc: 0.8872, lr: 0.000008, 3.2s

2023-11-23 18:35:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4657, val_loss: 0.4212, val_auc: 0.8902, lr: 0.000008, 3.1s

2023-11-23 18:35:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4192, val_loss: 0.3991, val_auc: 0.8893, lr: 0.000007, 3.1s

2023-11-23 18:35:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4025, val_loss: 0.3996, val_auc: 0.8954, lr: 0.000007, 3.1s

2023-11-23 18:35:35 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3675, val_loss: 0.4050, val_auc: 0.9040, lr: 0.000006, 3.2s

2023-11-23 18:35:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3624, val_loss: 0.4009, val_auc: 0.9019, lr: 0.000006, 3.2s

2023-11-23 18:35:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3439, val_loss: 0.4591, val_auc: 0.9090, lr: 0.000005, 3.2s

2023-11-23 18:35:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3384, val_loss: 0.4003, val_auc: 0.8990, lr: 0.000005, 3.2s

2023-11-23 18:35:49 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3254, val_loss: 0.4099, val_auc: 0.9065, lr: 0.000004, 3.2s

2023-11-23 18:35:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3051, val_loss: 0.4144, val_auc: 0.9029, lr: 0.000004, 3.2s

2023-11-23 18:35:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.2971, val_loss: 0.4159, val_auc: 0.9072, lr: 0.000003, 3.1s

2023-11-23 18:35:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.2806, val_loss: 0.4168, val_auc: 0.9087, lr: 0.000003, 3.1s

2023-11-23 18:35:58 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 15

2023-11-23 18:36:00 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:36:00 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'auc': 0.9089673913043478, 'auroc': 0.9089673913043478, 'auprc': 0.8759291140057978, 'log_loss': 0.4576842188835144, 'acc': 0.8357142857142857, 'f1_score': 0.6933333333333334, 'mcc': 0.638589790822718, 'precision': 0.9629629629629629, 'recall': 0.5416666666666666, 'cohen_kappa': 0.5928174001011633}

2023-11-23 18:36:00 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:36:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6882, val_loss: 0.6613, val_auc: 0.7341, lr: 0.000010, 3.2s

2023-11-23 18:36:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6267, val_loss: 0.6099, val_auc: 0.7707, lr: 0.000009, 3.1s

2023-11-23 18:36:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5949, val_loss: 0.5629, val_auc: 0.7640, lr: 0.000009, 3.1s

2023-11-23 18:36:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5397, val_loss: 0.5189, val_auc: 0.8112, lr: 0.000008, 3.1s

2023-11-23 18:36:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4832, val_loss: 0.4931, val_auc: 0.8244, lr: 0.000008, 3.2s

2023-11-23 18:36:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4538, val_loss: 0.4850, val_auc: 0.8308, lr: 0.000007, 3.2s

2023-11-23 18:36:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4221, val_loss: 0.4785, val_auc: 0.8423, lr: 0.000007, 3.1s

2023-11-23 18:36:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3925, val_loss: 0.5030, val_auc: 0.8511, lr: 0.000006, 3.1s

2023-11-23 18:36:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3632, val_loss: 0.4572, val_auc: 0.8658, lr: 0.000006, 3.2s

2023-11-23 18:36:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3536, val_loss: 0.4606, val_auc: 0.8696, lr: 0.000005, 3.2s

2023-11-23 18:36:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3555, val_loss: 0.4516, val_auc: 0.8801, lr: 0.000005, 3.2s

2023-11-23 18:36:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3365, val_loss: 0.4576, val_auc: 0.8795, lr: 0.000004, 3.1s

2023-11-23 18:36:49 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3137, val_loss: 0.4537, val_auc: 0.8793, lr: 0.000004, 3.2s

2023-11-23 18:36:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.3204, val_loss: 0.4495, val_auc: 0.8806, lr: 0.000003, 3.2s

2023-11-23 18:36:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3185, val_loss: 0.4652, val_auc: 0.8850, lr: 0.000003, 3.2s

2023-11-23 18:36:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.3011, val_loss: 0.4435, val_auc: 0.8850, lr: 0.000002, 3.1s

2023-11-23 18:37:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.2885, val_loss: 0.4399, val_auc: 0.8872, lr: 0.000002, 3.2s

2023-11-23 18:37:07 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.2933, val_loss: 0.4419, val_auc: 0.8901, lr: 0.000001, 3.2s

2023-11-23 18:37:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.2837, val_loss: 0.4457, val_auc: 0.8905, lr: 0.000001, 3.2s

2023-11-23 18:37:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.2937, val_loss: 0.4427, val_auc: 0.8909, lr: 0.000000, 3.2s

2023-11-23 18:37:17 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:37:17 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'auc': 0.8909451421017844, 'auroc': 0.8909451421017844, 'auprc': 0.8546695591359414, 'log_loss': 0.4505697185971907, 'acc': 0.8357142857142857, 'f1_score': 0.7722772277227723, 'mcc': 0.6438825283299818, 'precision': 0.78, 'recall': 0.7647058823529411, 'cohen_kappa': 0.6438053097345133}

2023-11-23 18:37:18 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:37:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6681, val_loss: 0.6556, val_auc: 0.8686, lr: 0.000010, 3.2s

2023-11-23 18:37:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6059, val_loss: 0.5591, val_auc: 0.8608, lr: 0.000009, 3.1s

2023-11-23 18:37:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5541, val_loss: 0.4670, val_auc: 0.8738, lr: 0.000009, 3.2s

2023-11-23 18:37:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.4838, val_loss: 0.4259, val_auc: 0.8926, lr: 0.000008, 3.2s

2023-11-23 18:37:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4402, val_loss: 0.3908, val_auc: 0.9018, lr: 0.000008, 3.1s

2023-11-23 18:37:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4110, val_loss: 0.3859, val_auc: 0.9050, lr: 0.000007, 3.2s

2023-11-23 18:37:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3788, val_loss: 0.3950, val_auc: 0.9050, lr: 0.000007, 3.2s

2023-11-23 18:37:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3780, val_loss: 0.3724, val_auc: 0.9087, lr: 0.000006, 3.1s

2023-11-23 18:37:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3655, val_loss: 0.3499, val_auc: 0.9204, lr: 0.000006, 3.1s

2023-11-23 18:37:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3359, val_loss: 0.3645, val_auc: 0.9178, lr: 0.000005, 3.2s

2023-11-23 18:37:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3330, val_loss: 0.3559, val_auc: 0.9185, lr: 0.000005, 3.2s

2023-11-23 18:38:02 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.2919, val_loss: 0.3773, val_auc: 0.9156, lr: 0.000004, 3.1s

2023-11-23 18:38:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3007, val_loss: 0.3780, val_auc: 0.9152, lr: 0.000004, 3.2s

2023-11-23 18:38:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.2760, val_loss: 0.3995, val_auc: 0.9169, lr: 0.000003, 3.1s

2023-11-23 18:38:08 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 14

2023-11-23 18:38:10 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:38:10 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'auc': 0.9204077206679679, 'auroc': 0.9204077206679679, 'auprc': 0.8427991929875038, 'log_loss': 0.3524619281824146, 'acc': 0.8714285714285714, 'f1_score': 0.826923076923077, 'mcc': 0.7250497698718489, 'precision': 0.8431372549019608, 'recall': 0.8113207547169812, 'cohen_kappa': 0.7247105090670745}

2023-11-23 18:38:10 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'auc': 0.8892672332895408, 'auroc': 0.8892672332895408, 'auprc': 0.7964455718446836, 'log_loss': 0.4333217065980924, 'acc': 0.8442857142857143, 'f1_score': 0.7705263157894738, 'mcc': 0.6541715524242179, 'precision': 0.8026315789473685, 'recall': 0.7408906882591093, 'cohen_kappa': 0.6529736023432241}

2023-11-23 18:38:10 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 18:38:10 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: accuracy_score

2023-11-23 18:38:10 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.572874155091612, metrics: 0.8471428571428572
[Learning Rate: 1e-05]	AUC:0.8893	F2_Score:0.7525
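The summary line above pairs the 5-fold mean AUC with the F2-score, the recall-weighted metric used in this competition (outline section 5.3.1). As a reminder of how F2 is defined, here is a minimal pure-Python sketch; the labels are toy values, not taken from the run above, and `sklearn.metrics.fbeta_score(..., beta=2)` computes the same quantity:

```python
def f_beta(y_true, y_pred, beta=2.0):
    """F-beta score: beta > 1 weights recall more heavily than precision."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy example: precision = 2/3, recall = 1/2
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
print(f_beta(y_true, y_pred, beta=2))  # 10/19 ≈ 0.526, pulled toward recall
```

With beta=2, recall counts four times as much as precision in the denominator, so predictions that miss true positives are penalized hard; this is why tuning the decision threshold (reported as `best threshold` in the log) matters for the F2 line even though AUC is unaffected by it.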
2023-11-23 18:38:11 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

700it [00:10, 64.97it/s] 

2023-11-23 18:38:21 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:38:21 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:38:21 | unimol/train.py | 105 | INFO | Uni-Mol(QSAR) | Output directory already exists: ./full_learning_rate_0.0001

2023-11-23 18:38:21 | unimol/train.py | 106 | INFO | Uni-Mol(QSAR) | Warning: Overwrite output directory: ./full_learning_rate_0.0001

2023-11-23 18:38:22 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:38:22 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:38:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6305, val_loss: 0.4395, val_auc: 0.8622, lr: 0.000098, 3.2s

2023-11-23 18:38:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.4798, val_loss: 0.3444, val_auc: 0.9106, lr: 0.000093, 3.1s

2023-11-23 18:38:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.3736, val_loss: 0.3570, val_auc: 0.9214, lr: 0.000088, 3.1s

2023-11-23 18:38:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3126, val_loss: 0.3012, val_auc: 0.9380, lr: 0.000082, 3.2s

2023-11-23 18:38:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.3618, val_loss: 0.3749, val_auc: 0.9338, lr: 0.000077, 3.1s

2023-11-23 18:38:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.3486, val_loss: 0.3281, val_auc: 0.9396, lr: 0.000072, 3.1s

2023-11-23 18:38:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.1642, val_loss: 0.3883, val_auc: 0.9345, lr: 0.000067, 3.2s

2023-11-23 18:38:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.1267, val_loss: 0.5149, val_auc: 0.9212, lr: 0.000062, 3.2s

2023-11-23 18:38:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.0709, val_loss: 0.5735, val_auc: 0.9142, lr: 0.000057, 3.2s

2023-11-23 18:38:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.0775, val_loss: 0.6967, val_auc: 0.9011, lr: 0.000052, 3.2s

2023-11-23 18:39:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.0605, val_loss: 0.6158, val_auc: 0.9167, lr: 0.000046, 3.2s

2023-11-23 18:39:00 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 11

2023-11-23 18:39:00 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:39:01 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'auc': 0.9396491228070175, 'auroc': 0.9396491228070175, 'auprc': 0.8464653944167542, 'log_loss': 0.3325334537608017, 'acc': 0.8785714285714286, 'f1_score': 0.8210526315789474, 'mcc': 0.7318645578207432, 'precision': 0.78, 'recall': 0.8666666666666667, 'cohen_kappa': 0.7295454545454545}

2023-11-23 18:39:01 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:39:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6433, val_loss: 0.5321, val_auc: 0.8036, lr: 0.000098, 3.2s

2023-11-23 18:39:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5135, val_loss: 0.5292, val_auc: 0.8469, lr: 0.000093, 3.2s

2023-11-23 18:39:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.3511, val_loss: 0.3982, val_auc: 0.8991, lr: 0.000088, 3.2s

2023-11-23 18:39:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.2591, val_loss: 0.6040, val_auc: 0.8971, lr: 0.000082, 3.2s

2023-11-23 18:39:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.2371, val_loss: 0.4068, val_auc: 0.9158, lr: 0.000077, 3.2s

2023-11-23 18:39:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.1947, val_loss: 0.4819, val_auc: 0.9089, lr: 0.000072, 3.2s

2023-11-23 18:39:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.1537, val_loss: 0.5564, val_auc: 0.9302, lr: 0.000067, 3.2s

2023-11-23 18:39:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.1892, val_loss: 0.7090, val_auc: 0.9053, lr: 0.000062, 3.1s

2023-11-23 18:39:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.1554, val_loss: 0.8847, val_auc: 0.8873, lr: 0.000057, 3.2s

2023-11-23 18:39:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.1240, val_loss: 0.7946, val_auc: 0.8956, lr: 0.000052, 3.2s

2023-11-23 18:39:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.0733, val_loss: 0.8870, val_auc: 0.9080, lr: 0.000046, 3.2s

2023-11-23 18:39:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.0802, val_loss: 0.7632, val_auc: 0.9158, lr: 0.000041, 3.2s

2023-11-23 18:39:43 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 12

2023-11-23 18:39:45 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:39:45 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'auc': 0.9302222222222222, 'auroc': 0.9302222222222222, 'auprc': 0.8851233809772862, 'log_loss': 0.5614358809294312, 'acc': 0.8857142857142857, 'f1_score': 0.8490566037735849, 'mcc': 0.7607257743127308, 'precision': 0.8035714285714286, 'recall': 0.9, 'cohen_kappa': 0.7575757575757576}

2023-11-23 18:39:46 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:39:49 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6578, val_loss: 0.5244, val_auc: 0.8743, lr: 0.000098, 3.2s

2023-11-23 18:39:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.4774, val_loss: 0.3985, val_auc: 0.8879, lr: 0.000093, 3.1s

2023-11-23 18:39:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4025, val_loss: 0.4796, val_auc: 0.9076, lr: 0.000088, 3.1s

2023-11-23 18:40:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3479, val_loss: 0.4801, val_auc: 0.9139, lr: 0.000082, 3.1s

2023-11-23 18:40:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.3117, val_loss: 0.5586, val_auc: 0.8881, lr: 0.000077, 3.2s

2023-11-23 18:40:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2256, val_loss: 0.5343, val_auc: 0.8723, lr: 0.000072, 3.2s

2023-11-23 18:40:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.2261, val_loss: 0.6309, val_auc: 0.8972, lr: 0.000067, 3.2s

2023-11-23 18:40:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.2006, val_loss: 0.4637, val_auc: 0.9072, lr: 0.000062, 3.2s

2023-11-23 18:40:17 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.1341, val_loss: 0.4794, val_auc: 0.9047, lr: 0.000057, 3.2s

2023-11-23 18:40:17 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 9

2023-11-23 18:40:19 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:40:19 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'auc': 0.913949275362319, 'auroc': 0.913949275362319, 'auprc': 0.8931997864374212, 'log_loss': 0.4677879341146243, 'acc': 0.8642857142857143, 'f1_score': 0.7764705882352941, 'mcc': 0.6932591100539378, 'precision': 0.8918918918918919, 'recall': 0.6875, 'cohen_kappa': 0.6813608049832296}

2023-11-23 18:40:19 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:40:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6238, val_loss: 0.5296, val_auc: 0.7885, lr: 0.000098, 3.1s

2023-11-23 18:40:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.4261, val_loss: 0.4154, val_auc: 0.8799, lr: 0.000093, 3.2s

2023-11-23 18:40:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.3458, val_loss: 0.5670, val_auc: 0.8751, lr: 0.000088, 3.2s

2023-11-23 18:40:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3304, val_loss: 0.4566, val_auc: 0.8927, lr: 0.000082, 3.2s

2023-11-23 18:40:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.2583, val_loss: 0.4610, val_auc: 0.9011, lr: 0.000077, 3.2s

2023-11-23 18:40:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.1784, val_loss: 0.6175, val_auc: 0.9057, lr: 0.000072, 3.2s

2023-11-23 18:40:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.1818, val_loss: 0.5289, val_auc: 0.9117, lr: 0.000067, 3.2s

2023-11-23 18:40:50 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.0727, val_loss: 0.6473, val_auc: 0.8920, lr: 0.000062, 3.1s

2023-11-23 18:40:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.0932, val_loss: 0.7184, val_auc: 0.8764, lr: 0.000057, 3.2s

2023-11-23 18:40:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.0477, val_loss: 0.8069, val_auc: 0.8103, lr: 0.000052, 3.1s

2023-11-23 18:40:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.0376, val_loss: 0.8537, val_auc: 0.8843, lr: 0.000046, 3.2s

2023-11-23 18:41:02 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.0375, val_loss: 0.8219, val_auc: 0.8786, lr: 0.000041, 3.2s

2023-11-23 18:41:02 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 12

2023-11-23 18:41:04 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:41:04 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'auc': 0.9116545494602335, 'auroc': 0.9116545494602335, 'auprc': 0.878982272980919, 'log_loss': 0.5405090335340771, 'acc': 0.8642857142857143, 'f1_score': 0.8080808080808081, 'mcc': 0.7040306799994877, 'precision': 0.8333333333333334, 'recall': 0.7843137254901961, 'cohen_kappa': 0.7032574743418116}

2023-11-23 18:41:05 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:41:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.5952, val_loss: 0.4814, val_auc: 0.8564, lr: 0.000098, 3.2s

2023-11-23 18:41:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.4701, val_loss: 0.4555, val_auc: 0.9011, lr: 0.000093, 3.2s

2023-11-23 18:41:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.3867, val_loss: 0.3632, val_auc: 0.9250, lr: 0.000088, 3.2s

2023-11-23 18:41:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3083, val_loss: 0.3459, val_auc: 0.9382, lr: 0.000082, 3.2s

2023-11-23 18:41:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.2860, val_loss: 0.2818, val_auc: 0.9440, lr: 0.000077, 3.2s

2023-11-23 18:41:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2160, val_loss: 0.3497, val_auc: 0.9473, lr: 0.000072, 3.1s

2023-11-23 18:41:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.2157, val_loss: 0.4175, val_auc: 0.9417, lr: 0.000067, 3.2s

2023-11-23 18:41:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.1683, val_loss: 0.7165, val_auc: 0.9516, lr: 0.000062, 3.2s

2023-11-23 18:41:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.1377, val_loss: 0.4671, val_auc: 0.9219, lr: 0.000057, 3.2s

2023-11-23 18:41:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.1168, val_loss: 0.5101, val_auc: 0.9306, lr: 0.000052, 3.2s

2023-11-23 18:41:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.0757, val_loss: 0.6059, val_auc: 0.9150, lr: 0.000046, 3.2s

2023-11-23 18:41:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.0810, val_loss: 0.3765, val_auc: 0.9534, lr: 0.000041, 3.2s

2023-11-23 18:41:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.0529, val_loss: 0.4802, val_auc: 0.9289, lr: 0.000036, 3.2s

2023-11-23 18:41:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.0377, val_loss: 0.6090, val_auc: 0.9213, lr: 0.000031, 3.2s

2023-11-23 18:41:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.0305, val_loss: 0.5656, val_auc: 0.9414, lr: 0.000026, 3.2s

2023-11-23 18:42:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.0361, val_loss: 0.5932, val_auc: 0.9453, lr: 0.000021, 3.2s

2023-11-23 18:42:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.0261, val_loss: 0.6249, val_auc: 0.9414, lr: 0.000015, 3.2s

2023-11-23 18:42:04 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 17

2023-11-23 18:42:05 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:42:06 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'auc': 0.9533723704185643, 'auroc': 0.9533723704185643, 'auprc': 0.9431094940834746, 'log_loss': 0.3726106896663883, 'acc': 0.9214285714285714, 'f1_score': 0.8910891089108911, 'mcc': 0.8323643862856426, 'precision': 0.9375, 'recall': 0.8490566037735849, 'cohen_kappa': 0.8298718515245249}

2023-11-23 18:42:06 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'auc': 0.9149350707384865, 'auroc': 0.9149350707384865, 'auprc': 0.8650064188193167, 'log_loss': 0.45497539840106455, 'acc': 0.8828571428571429, 'f1_score': 0.831275720164609, 'mcc': 0.7418331856489546, 'precision': 0.8451882845188284, 'recall': 0.8178137651821862, 'cohen_kappa': 0.7415974141734268}

2023-11-23 18:42:06 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 18:42:06 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: accuracy_score

2023-11-23 18:42:06 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.5259960032080447, metrics: 0.8857142857142857
[Learning Rate: 0.0001]	AUC:0.9149	F2_Score:0.8231
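Comparing the two summary lines so far, learning rate 1e-4 clearly beats 1e-5 on this data (AUC 0.9149 vs 0.8893, F2 0.8231 vs 0.7525). Note also the `best threshold` entries just before each summary: after cross-validation, the predicted probabilities are scanned for the cutoff that maximizes a chosen metric (accuracy here), rather than defaulting to 0.5. A minimal sketch of that idea follows; the helper names are ours for illustration, not Uni-Mol's internal API, and the probabilities are toy values:

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def best_threshold(y_true, y_prob, metric=accuracy):
    """Try each observed probability as a cutoff; keep the best-scoring one."""
    best_t, best_score = 0.5, -1.0
    for t in sorted(set(y_prob)):
        y_pred = [int(p >= t) for p in y_prob]
        score = metric(y_true, y_pred)
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Toy out-of-fold probabilities (illustrative only)
y_true = [0, 0, 1, 0, 1, 1]
y_prob = [0.1, 0.4, 0.35, 0.6, 0.7, 0.9]
t, acc = best_threshold(y_true, y_prob)
print(t, acc)  # cutoff 0.7 gives the highest accuracy here (5/6)
```

AUC is computed from the probability ranking and ignores this cutoff entirely, which is why the tuned thresholds in the log (≈0.573 for lr=1e-05, ≈0.526 for lr=1e-4) change the reported accuracy and F2 but not the AUC.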
2023-11-23 18:42:06 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

700it [00:10, 63.91it/s] 

2023-11-23 18:42:17 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:42:17 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:42:17 | unimol/train.py | 105 | INFO | Uni-Mol(QSAR) | Output directory already exists: ./full_learning_rate_0.001

2023-11-23 18:42:17 | unimol/train.py | 106 | INFO | Uni-Mol(QSAR) | Warning: Overwrite output directory: ./full_learning_rate_0.001

2023-11-23 18:42:18 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:42:18 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:42:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6565, val_loss: 0.4916, val_auc: 0.8854, lr: 0.000979, 3.2s

2023-11-23 18:42:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5531, val_loss: 0.9222, val_auc: 0.9027, lr: 0.000928, 3.2s

2023-11-23 18:42:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5677, val_loss: 0.4942, val_auc: 0.8996, lr: 0.000876, 3.2s

2023-11-23 18:42:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5622, val_loss: 0.3933, val_auc: 0.9177, lr: 0.000825, 3.2s

2023-11-23 18:42:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4393, val_loss: 0.3794, val_auc: 0.9088, lr: 0.000773, 3.2s

2023-11-23 18:42:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4366, val_loss: 0.4490, val_auc: 0.9135, lr: 0.000722, 3.2s

2023-11-23 18:42:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3926, val_loss: 0.3313, val_auc: 0.9270, lr: 0.000670, 3.2s

2023-11-23 18:42:47 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3485, val_loss: 0.4115, val_auc: 0.9057, lr: 0.000619, 3.2s

2023-11-23 18:42:50 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3740, val_loss: 0.4876, val_auc: 0.9008, lr: 0.000567, 3.2s

2023-11-23 18:42:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3814, val_loss: 0.4252, val_auc: 0.8952, lr: 0.000515, 3.2s

2023-11-23 18:42:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3006, val_loss: 0.4011, val_auc: 0.9088, lr: 0.000464, 3.2s

2023-11-23 18:43:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.2411, val_loss: 0.4169, val_auc: 0.9261, lr: 0.000412, 3.2s

2023-11-23 18:43:00 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 12

2023-11-23 18:43:01 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:43:01 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'auc': 0.9270175438596491, 'auroc': 0.9270175438596491, 'auprc': 0.8667205224869711, 'log_loss': 0.333548138450299, 'acc': 0.85, 'f1_score': 0.7407407407407408, 'mcc': 0.6448871546210482, 'precision': 0.8333333333333334, 'recall': 0.6666666666666666, 'cohen_kappa': 0.6370370370370371}

2023-11-23 18:43:02 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:43:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7049, val_loss: 0.5631, val_auc: 0.8258, lr: 0.000979, 3.2s

2023-11-23 18:43:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5164, val_loss: 0.5648, val_auc: 0.8147, lr: 0.000928, 3.2s

2023-11-23 18:43:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5138, val_loss: 0.5009, val_auc: 0.8273, lr: 0.000876, 3.2s

2023-11-23 18:43:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5218, val_loss: 0.5443, val_auc: 0.7989, lr: 0.000825, 3.2s

2023-11-23 18:43:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4732, val_loss: 0.8194, val_auc: 0.8164, lr: 0.000773, 3.2s

2023-11-23 18:43:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4476, val_loss: 0.5261, val_auc: 0.8271, lr: 0.000722, 3.2s

2023-11-23 18:43:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4498, val_loss: 0.5646, val_auc: 0.8422, lr: 0.000670, 3.2s

2023-11-23 18:43:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3456, val_loss: 0.5415, val_auc: 0.8571, lr: 0.000619, 3.2s

2023-11-23 18:43:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3375, val_loss: 0.5926, val_auc: 0.8324, lr: 0.000567, 3.2s

2023-11-23 18:43:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.2474, val_loss: 0.5186, val_auc: 0.8556, lr: 0.000515, 3.2s

2023-11-23 18:43:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.2354, val_loss: 0.4082, val_auc: 0.8856, lr: 0.000464, 3.2s

2023-11-23 18:43:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.1712, val_loss: 0.7957, val_auc: 0.8587, lr: 0.000412, 3.2s

2023-11-23 18:43:47 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.1975, val_loss: 0.5666, val_auc: 0.8727, lr: 0.000361, 3.2s

2023-11-23 18:43:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.1467, val_loss: 0.9321, val_auc: 0.8069, lr: 0.000309, 3.2s

2023-11-23 18:43:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.1035, val_loss: 1.0224, val_auc: 0.8384, lr: 0.000258, 3.2s

2023-11-23 18:43:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.1032, val_loss: 0.8956, val_auc: 0.8607, lr: 0.000206, 3.2s

2023-11-23 18:43:57 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 16

2023-11-23 18:43:57 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:43:57 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'auc': 0.8855555555555557, 'auroc': 0.8855555555555557, 'auprc': 0.8211312198081776, 'log_loss': 0.4051442439268742, 'acc': 0.8571428571428571, 'f1_score': 0.8000000000000002, 'mcc': 0.6888888888888889, 'precision': 0.8, 'recall': 0.8, 'cohen_kappa': 0.6888888888888889}

2023-11-23 18:43:58 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:44:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7381, val_loss: 0.7428, val_auc: 0.6995, lr: 0.000979, 3.2s

2023-11-23 18:44:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6643, val_loss: 0.6397, val_auc: 0.7267, lr: 0.000928, 3.2s

2023-11-23 18:44:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5623, val_loss: 0.5996, val_auc: 0.7045, lr: 0.000876, 3.2s

2023-11-23 18:44:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5526, val_loss: 0.6014, val_auc: 0.7289, lr: 0.000825, 3.2s

2023-11-23 18:44:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5399, val_loss: 0.5618, val_auc: 0.7477, lr: 0.000773, 3.2s

2023-11-23 18:44:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.5670, val_loss: 0.6152, val_auc: 0.7407, lr: 0.000722, 3.2s

2023-11-23 18:44:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.5302, val_loss: 0.5973, val_auc: 0.7509, lr: 0.000670, 3.1s

2023-11-23 18:44:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.5315, val_loss: 0.5716, val_auc: 0.7260, lr: 0.000619, 3.1s

2023-11-23 18:44:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.4849, val_loss: 0.5964, val_auc: 0.7462, lr: 0.000567, 3.2s

2023-11-23 18:44:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.4873, val_loss: 0.5988, val_auc: 0.7726, lr: 0.000515, 3.2s

2023-11-23 18:44:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.4829, val_loss: 0.5633, val_auc: 0.7665, lr: 0.000464, 3.2s

2023-11-23 18:44:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.4835, val_loss: 0.5765, val_auc: 0.7507, lr: 0.000412, 3.2s

2023-11-23 18:44:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.4393, val_loss: 0.5755, val_auc: 0.7649, lr: 0.000361, 3.2s

2023-11-23 18:44:47 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.4240, val_loss: 0.6250, val_auc: 0.7627, lr: 0.000309, 3.2s

2023-11-23 18:44:50 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3976, val_loss: 0.6164, val_auc: 0.7588, lr: 0.000258, 3.2s

2023-11-23 18:44:50 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 15

2023-11-23 18:44:52 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:44:52 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'auc': 0.7726449275362319, 'auroc': 0.7726449275362319, 'auprc': 0.6061283446080501, 'log_loss': 0.6010128563403019, 'acc': 0.7071428571428572, 'f1_score': 0.4383561643835616, 'mcc': 0.2918770058011386, 'precision': 0.64, 'recall': 0.3333333333333333, 'cohen_kappa': 0.2659846547314578}

2023-11-23 18:44:53 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:44:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6426, val_loss: 0.5502, val_auc: 0.8092, lr: 0.000979, 3.2s

2023-11-23 18:45:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5877, val_loss: 1.0310, val_auc: 0.7510, lr: 0.000928, 3.2s

2023-11-23 18:45:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6033, val_loss: 0.7432, val_auc: 0.8209, lr: 0.000876, 3.2s

2023-11-23 18:45:07 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.6473, val_loss: 0.5936, val_auc: 0.7781, lr: 0.000825, 3.2s

2023-11-23 18:45:10 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4780, val_loss: 0.5009, val_auc: 0.8290, lr: 0.000773, 3.2s

2023-11-23 18:45:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4285, val_loss: 0.4920, val_auc: 0.8273, lr: 0.000722, 3.2s

2023-11-23 18:45:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4028, val_loss: 0.5007, val_auc: 0.8528, lr: 0.000670, 3.2s

2023-11-23 18:45:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3738, val_loss: 0.4951, val_auc: 0.8420, lr: 0.000619, 3.2s

2023-11-23 18:45:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3551, val_loss: 0.5562, val_auc: 0.8143, lr: 0.000567, 3.2s

2023-11-23 18:45:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3235, val_loss: 0.4967, val_auc: 0.8583, lr: 0.000515, 3.2s

2023-11-23 18:45:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3383, val_loss: 0.5166, val_auc: 0.8579, lr: 0.000464, 3.2s

2023-11-23 18:45:35 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.2879, val_loss: 0.5311, val_auc: 0.8594, lr: 0.000412, 3.2s

2023-11-23 18:45:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.2659, val_loss: 0.4981, val_auc: 0.8912, lr: 0.000361, 3.2s

2023-11-23 18:45:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.2043, val_loss: 0.5560, val_auc: 0.8632, lr: 0.000309, 3.2s

2023-11-23 18:45:47 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.1808, val_loss: 0.6966, val_auc: 0.8978, lr: 0.000258, 3.2s

2023-11-23 18:45:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.1671, val_loss: 0.5071, val_auc: 0.9178, lr: 0.000206, 3.2s

2023-11-23 18:45:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.0733, val_loss: 0.5427, val_auc: 0.9099, lr: 0.000155, 3.2s

2023-11-23 18:45:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.0435, val_loss: 0.5466, val_auc: 0.9068, lr: 0.000103, 3.2s

2023-11-23 18:46:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.0466, val_loss: 0.5873, val_auc: 0.9048, lr: 0.000052, 3.2s

2023-11-23 18:46:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.0262, val_loss: 0.5898, val_auc: 0.9026, lr: 0.000000, 3.2s

2023-11-23 18:46:06 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:46:06 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'auc': 0.9178233090989204, 'auroc': 0.9178233090989204, 'auprc': 0.8957477703218328, 'log_loss': 0.5031934106123767, 'acc': 0.8428571428571429, 'f1_score': 0.7924528301886793, 'mcc': 0.6675352762103087, 'precision': 0.7636363636363637, 'recall': 0.8235294117647058, 'cohen_kappa': 0.6663055254604551}

2023-11-23 18:46:07 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:46:10 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6793, val_loss: 0.6150, val_auc: 0.8265, lr: 0.000979, 3.2s

2023-11-23 18:46:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5574, val_loss: 0.4691, val_auc: 0.8430, lr: 0.000928, 3.2s

2023-11-23 18:46:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5280, val_loss: 0.4619, val_auc: 0.8937, lr: 0.000876, 3.2s

2023-11-23 18:46:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5299, val_loss: 0.6659, val_auc: 0.8406, lr: 0.000825, 3.2s

2023-11-23 18:46:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5605, val_loss: 0.4342, val_auc: 0.8783, lr: 0.000773, 3.2s

2023-11-23 18:46:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4739, val_loss: 0.3460, val_auc: 0.9412, lr: 0.000722, 3.2s

2023-11-23 18:46:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4066, val_loss: 0.4282, val_auc: 0.8963, lr: 0.000670, 3.2s

2023-11-23 18:46:35 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4022, val_loss: 0.4426, val_auc: 0.8857, lr: 0.000619, 3.2s

2023-11-23 18:46:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3537, val_loss: 0.4178, val_auc: 0.8970, lr: 0.000567, 3.2s

2023-11-23 18:46:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.2854, val_loss: 0.3121, val_auc: 0.9471, lr: 0.000515, 3.2s

2023-11-23 18:46:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.2637, val_loss: 0.3214, val_auc: 0.9386, lr: 0.000464, 3.2s

2023-11-23 18:46:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.2519, val_loss: 0.4051, val_auc: 0.9278, lr: 0.000412, 3.2s

2023-11-23 18:46:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.2148, val_loss: 0.4141, val_auc: 0.9362, lr: 0.000361, 3.2s

2023-11-23 18:46:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.1360, val_loss: 0.6311, val_auc: 0.9432, lr: 0.000309, 3.2s

2023-11-23 18:46:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.1470, val_loss: 0.4899, val_auc: 0.9586, lr: 0.000258, 3.2s

2023-11-23 18:47:02 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.0947, val_loss: 0.5466, val_auc: 0.9440, lr: 0.000206, 3.2s

2023-11-23 18:47:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.0420, val_loss: 0.5763, val_auc: 0.9553, lr: 0.000155, 3.2s

2023-11-23 18:47:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.0235, val_loss: 0.7172, val_auc: 0.9497, lr: 0.000103, 3.2s

2023-11-23 18:47:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.0348, val_loss: 0.7256, val_auc: 0.9525, lr: 0.000052, 3.2s

2023-11-23 18:47:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.0194, val_loss: 0.6761, val_auc: 0.9545, lr: 0.000000, 3.2s

2023-11-23 18:47:15 | unimol/utils/metrics.py | 243 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 20

2023-11-23 18:47:16 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:47:16 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'auc': 0.9585773151160268, 'auroc': 0.9585773151160268, 'auprc': 0.9369367807196038, 'log_loss': 0.48699665962069827, 'acc': 0.8642857142857143, 'f1_score': 0.8041237113402062, 'mcc': 0.7087731033649073, 'precision': 0.8863636363636364, 'recall': 0.7358490566037735, 'cohen_kappa': 0.7016599371915657}

2023-11-23 18:47:16 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'auc': 0.8801959049432038, 'auroc': 0.8801959049432038, 'auprc': 0.8211937338298088, 'log_loss': 0.46597906179011, 'acc': 0.8242857142857143, 'f1_score': 0.7308533916849016, 'mcc': 0.6060500466429658, 'precision': 0.7952380952380952, 'recall': 0.6761133603238867, 'cohen_kappa': 0.6016839378238342}

2023-11-23 18:47:16 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 18:47:16 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: accuracy_score

2023-11-23 18:47:16 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.4737886529795728, metrics: 0.8285714285714286
[Learning Rate: 0.001]	AUC:0.8802	F2_Score:0.6970
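The summary line above reports AUC together with an F2 score computed on the out-of-fold predictions. As a reminder of what F2 measures, here is a minimal pure-Python sketch; `f2_score`, `y_true`, and `y_pred` are illustrative names, not part of the Uni-Mol API. F-beta with beta=2 weights recall four times as heavily as precision:

```python
def f2_score(y_true, y_pred):
    """F2 = (1 + 2^2) * P * R / (2^2 * P + R), i.e. recall-weighted F-score."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 5 * precision * recall / (4 * precision + recall)

# Toy example: precision = recall = 2/3, so F2 = 2/3 as well.
print(f2_score([1, 1, 1, 0, 0], [1, 1, 0, 1, 0]))
```

Because BBB-permeable positives are the cases we most want to catch, F2's emphasis on recall makes it a natural competition metric here.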
[11]
# Fine-tune on the manually split subset train_data_split
lr_ft = [1e-5, 1e-4, 1e-3]  # candidate learning rates
for lr in lr_ft:  # train one model per learning rate
    clf = MolTrain(task='classification',
                   data_type='molecule',
                   epochs=20,
                   learning_rate=lr,
                   batch_size=16,
                   early_stopping=5,
                   metrics='none',
                   split='random',
                   save_path='./split_learning_rate_' + str(lr),
                   )
    clf.fit("./mol_train_split.csv")  # train the model
2023-11-23 18:51:03 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

500it [00:08, 59.84it/s]

2023-11-23 18:51:12 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:51:12 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:51:12 | unimol/train.py | 102 | INFO | Uni-Mol(QSAR) | Create output directory: ./split_learning_rate_1e-05

2023-11-23 18:51:13 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:51:13 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:51:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6674, val_loss: 0.6486, val_log_loss: 0.6607, lr: 0.000010, 2.3s

2023-11-23 18:51:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6441, val_loss: 0.6201, val_log_loss: 0.6326, lr: 0.000009, 2.3s

2023-11-23 18:51:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6202, val_loss: 0.5613, val_log_loss: 0.5728, lr: 0.000009, 2.3s

2023-11-23 18:51:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5710, val_loss: 0.4911, val_log_loss: 0.5069, lr: 0.000008, 2.3s

2023-11-23 18:51:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5410, val_loss: 0.4127, val_log_loss: 0.4333, lr: 0.000008, 2.3s

2023-11-23 18:51:30 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4910, val_loss: 0.4181, val_log_loss: 0.4466, lr: 0.000007, 2.3s

2023-11-23 18:51:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4815, val_loss: 0.3353, val_log_loss: 0.3560, lr: 0.000007, 2.3s

2023-11-23 18:51:35 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4442, val_loss: 0.3176, val_log_loss: 0.3402, lr: 0.000006, 2.3s

2023-11-23 18:51:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.4382, val_loss: 0.3436, val_log_loss: 0.3743, lr: 0.000006, 2.3s

2023-11-23 18:51:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.4115, val_loss: 0.3041, val_log_loss: 0.3311, lr: 0.000005, 2.3s

2023-11-23 18:51:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3969, val_loss: 0.2923, val_log_loss: 0.3154, lr: 0.000005, 2.3s

2023-11-23 18:51:47 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3819, val_loss: 0.3052, val_log_loss: 0.3346, lr: 0.000004, 2.3s

2023-11-23 18:51:49 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3622, val_loss: 0.2822, val_log_loss: 0.3088, lr: 0.000004, 2.3s

2023-11-23 18:51:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.3424, val_loss: 0.2802, val_log_loss: 0.3066, lr: 0.000003, 2.3s

2023-11-23 18:51:55 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3569, val_loss: 0.2840, val_log_loss: 0.3113, lr: 0.000003, 2.3s

2023-11-23 18:51:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.3368, val_loss: 0.3009, val_log_loss: 0.3313, lr: 0.000002, 2.3s

2023-11-23 18:52:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.3160, val_loss: 0.2718, val_log_loss: 0.2973, lr: 0.000002, 2.3s

2023-11-23 18:52:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.3146, val_loss: 0.2709, val_log_loss: 0.2968, lr: 0.000001, 2.3s

2023-11-23 18:52:06 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.3167, val_loss: 0.2726, val_log_loss: 0.2992, lr: 0.000001, 2.3s

2023-11-23 18:52:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.3114, val_loss: 0.2762, val_log_loss: 0.3037, lr: 0.000000, 2.3s

2023-11-23 18:52:08 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:52:08 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'log_loss': 0.29676752734929324, 'auc': 0.9270699270699272, 'f1_score': 0.8611111111111112, 'mcc': 0.7838182461147938, 'acc': 0.9, 'precision': 0.8857142857142857, 'recall': 0.8378378378378378}

2023-11-23 18:52:09 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:52:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6838, val_loss: 0.6017, val_log_loss: 0.6060, lr: 0.000010, 2.3s

2023-11-23 18:52:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6484, val_loss: 0.5766, val_log_loss: 0.5780, lr: 0.000009, 2.3s

2023-11-23 18:52:17 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6033, val_loss: 0.5587, val_log_loss: 0.5523, lr: 0.000009, 2.3s

2023-11-23 18:52:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5726, val_loss: 0.5247, val_log_loss: 0.5241, lr: 0.000008, 2.3s

2023-11-23 18:52:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5330, val_loss: 0.6072, val_log_loss: 0.5843, lr: 0.000008, 2.3s

2023-11-23 18:52:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.5082, val_loss: 0.4671, val_log_loss: 0.4759, lr: 0.000007, 2.3s

2023-11-23 18:52:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4750, val_loss: 0.5558, val_log_loss: 0.5406, lr: 0.000007, 2.3s

2023-11-23 18:52:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4513, val_loss: 0.4429, val_log_loss: 0.4612, lr: 0.000006, 2.3s

2023-11-23 18:52:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.4303, val_loss: 0.4220, val_log_loss: 0.4448, lr: 0.000006, 2.3s

2023-11-23 18:52:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.4197, val_loss: 0.4299, val_log_loss: 0.4516, lr: 0.000005, 2.3s

2023-11-23 18:52:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3842, val_loss: 0.4536, val_log_loss: 0.4656, lr: 0.000005, 2.3s

2023-11-23 18:52:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3722, val_loss: 0.4998, val_log_loss: 0.5027, lr: 0.000004, 2.3s

2023-11-23 18:52:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3496, val_loss: 0.4251, val_log_loss: 0.4395, lr: 0.000004, 2.3s

2023-11-23 18:52:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.3485, val_loss: 0.4617, val_log_loss: 0.4631, lr: 0.000003, 2.3s

2023-11-23 18:52:46 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 14

2023-11-23 18:52:47 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:52:47 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'log_loss': 0.44484827019274237, 'auc': 0.8513323983169706, 'f1_score': 0.6415094339622641, 'mcc': 0.5313537426878225, 'acc': 0.81, 'precision': 0.7727272727272727, 'recall': 0.5483870967741935}

2023-11-23 18:52:48 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:52:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6823, val_loss: 0.8203, val_log_loss: 0.7138, lr: 0.000010, 2.3s

2023-11-23 18:52:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6409, val_loss: 0.6697, val_log_loss: 0.6015, lr: 0.000009, 2.3s

2023-11-23 18:52:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6074, val_loss: 0.6177, val_log_loss: 0.5508, lr: 0.000009, 2.3s

2023-11-23 18:53:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5667, val_loss: 0.6065, val_log_loss: 0.5164, lr: 0.000008, 2.3s

2023-11-23 18:53:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5209, val_loss: 0.5982, val_log_loss: 0.4955, lr: 0.000008, 2.3s

2023-11-23 18:53:06 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4899, val_loss: 0.5585, val_log_loss: 0.4652, lr: 0.000007, 2.3s

2023-11-23 18:53:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4445, val_loss: 0.5815, val_log_loss: 0.4789, lr: 0.000007, 2.3s

2023-11-23 18:53:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4236, val_loss: 0.5489, val_log_loss: 0.4573, lr: 0.000006, 2.3s

2023-11-23 18:53:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.4195, val_loss: 0.6391, val_log_loss: 0.5047, lr: 0.000006, 2.3s

2023-11-23 18:53:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.4073, val_loss: 0.5468, val_log_loss: 0.4552, lr: 0.000005, 2.3s

2023-11-23 18:53:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3881, val_loss: 0.5326, val_log_loss: 0.4483, lr: 0.000005, 2.3s

2023-11-23 18:53:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3928, val_loss: 0.5647, val_log_loss: 0.4606, lr: 0.000004, 2.3s

2023-11-23 18:53:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3467, val_loss: 0.5752, val_log_loss: 0.4652, lr: 0.000004, 2.3s

2023-11-23 18:53:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.3444, val_loss: 0.5710, val_log_loss: 0.4585, lr: 0.000003, 2.3s

2023-11-23 18:53:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3419, val_loss: 0.5681, val_log_loss: 0.4529, lr: 0.000003, 2.3s

2023-11-23 18:53:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.3276, val_loss: 0.5793, val_log_loss: 0.4544, lr: 0.000002, 2.3s

2023-11-23 18:53:31 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 16

2023-11-23 18:53:33 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:53:33 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'log_loss': 0.448325347751379, 'auc': 0.854945054945055, 'f1_score': 0.7647058823529412, 'mcc': 0.6442920540664353, 'acc': 0.84, 'precision': 0.7878787878787878, 'recall': 0.7428571428571429}

2023-11-23 18:53:34 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:53:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6879, val_loss: 0.7088, val_log_loss: 0.6915, lr: 0.000010, 2.3s

2023-11-23 18:53:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6458, val_loss: 0.6190, val_log_loss: 0.6152, lr: 0.000009, 2.3s

2023-11-23 18:53:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6098, val_loss: 0.5759, val_log_loss: 0.5727, lr: 0.000009, 2.3s

2023-11-23 18:53:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5648, val_loss: 0.5379, val_log_loss: 0.5355, lr: 0.000008, 2.3s

2023-11-23 18:53:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5192, val_loss: 0.5279, val_log_loss: 0.5210, lr: 0.000008, 2.3s

2023-11-23 18:53:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4766, val_loss: 0.5444, val_log_loss: 0.5284, lr: 0.000007, 2.3s

2023-11-23 18:53:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4622, val_loss: 0.5385, val_log_loss: 0.5216, lr: 0.000007, 2.3s

2023-11-23 18:53:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4246, val_loss: 0.5506, val_log_loss: 0.5279, lr: 0.000006, 2.3s

2023-11-23 18:53:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3981, val_loss: 0.5730, val_log_loss: 0.5469, lr: 0.000006, 2.3s

2023-11-23 18:54:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3892, val_loss: 0.5560, val_log_loss: 0.5243, lr: 0.000005, 2.3s

2023-11-23 18:54:01 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 10

2023-11-23 18:54:02 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:54:03 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'log_loss': 0.5210315176844597, 'auc': 0.80078125, 'f1_score': 0.6363636363636365, 'mcc': 0.4637130167514838, 'acc': 0.76, 'precision': 0.7, 'recall': 0.5833333333333334}

2023-11-23 18:54:03 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:54:06 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6385, val_loss: 0.6822, val_log_loss: 0.7005, lr: 0.000010, 2.3s

2023-11-23 18:54:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6312, val_loss: 0.5815, val_log_loss: 0.5929, lr: 0.000009, 2.3s

2023-11-23 18:54:12 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5869, val_loss: 0.5410, val_log_loss: 0.5614, lr: 0.000009, 2.3s

2023-11-23 18:54:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5448, val_loss: 0.4801, val_log_loss: 0.5064, lr: 0.000008, 2.3s

2023-11-23 18:54:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5043, val_loss: 0.4365, val_log_loss: 0.4676, lr: 0.000008, 2.3s

2023-11-23 18:54:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4648, val_loss: 0.4133, val_log_loss: 0.4490, lr: 0.000007, 2.3s

2023-11-23 18:54:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.4374, val_loss: 0.3893, val_log_loss: 0.4254, lr: 0.000007, 2.3s

2023-11-23 18:54:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4045, val_loss: 0.4193, val_log_loss: 0.4597, lr: 0.000006, 2.3s

2023-11-23 18:54:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.4047, val_loss: 0.3350, val_log_loss: 0.3680, lr: 0.000006, 2.3s

2023-11-23 18:54:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.3860, val_loss: 0.3312, val_log_loss: 0.3635, lr: 0.000005, 2.3s

2023-11-23 18:54:35 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.3794, val_loss: 0.3316, val_log_loss: 0.3647, lr: 0.000005, 2.3s

2023-11-23 18:54:37 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.3608, val_loss: 0.3170, val_log_loss: 0.3491, lr: 0.000004, 2.3s

2023-11-23 18:54:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.3519, val_loss: 0.3447, val_log_loss: 0.3791, lr: 0.000004, 2.3s

2023-11-23 18:54:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.3344, val_loss: 0.3073, val_log_loss: 0.3390, lr: 0.000003, 2.3s

2023-11-23 18:54:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3307, val_loss: 0.3326, val_log_loss: 0.3670, lr: 0.000003, 2.3s

2023-11-23 18:54:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.3226, val_loss: 0.3050, val_log_loss: 0.3372, lr: 0.000002, 2.3s

2023-11-23 18:54:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.3004, val_loss: 0.3238, val_log_loss: 0.3575, lr: 0.000002, 2.3s

2023-11-23 18:54:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.2866, val_loss: 0.3012, val_log_loss: 0.3328, lr: 0.000001, 2.3s

2023-11-23 18:54:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.2891, val_loss: 0.3029, val_log_loss: 0.3348, lr: 0.000001, 2.3s

2023-11-23 18:54:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.2920, val_loss: 0.3051, val_log_loss: 0.3373, lr: 0.000000, 2.3s

2023-11-23 18:54:59 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:54:59 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'log_loss': 0.3328081872873008, 'auc': 0.909205548549811, 'f1_score': 0.8311688311688312, 'mcc': 0.7256685399214777, 'acc': 0.87, 'precision': 0.8421052631578947, 'recall': 0.8205128205128205}

2023-11-23 18:54:59 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'log_loss': 0.408756170053035, 'auc': 0.8810279852048295, 'f1_score': 0.7559523809523809, 'mcc': 0.6356663057250794, 'acc': 0.836, 'precision': 0.8037974683544303, 'recall': 0.7134831460674157}

2023-11-23 18:54:59 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 18:54:59 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: f1_score

2023-11-23 18:54:59 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.3674338783480619, metrics: 0.7692307692307693
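The trainer's final log lines above choose a decision threshold that maximizes `f1_score` over the out-of-fold probabilities ("best threshold: 0.3674…"). A hedged sketch of such a scan follows; `best_f1_threshold` is an illustrative helper, not the unimol implementation:

```python
def best_f1_threshold(y_true, y_prob, steps=100):
    """Scan candidate thresholds and return the one maximizing F1."""
    def f1(th):
        y_pred = [1 if p >= th else 0 for p in y_prob]
        tp = sum(t and p for t, p in zip(y_true, y_pred))
        fp = sum((not t) and p for t, p in zip(y_true, y_pred))
        fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    candidates = [i / steps for i in range(1, steps)]
    return max(candidates, key=f1)

# Toy example: any threshold between 0.4 and 0.6 separates the classes perfectly.
print(best_f1_threshold([0, 0, 1, 1], [0.2, 0.4, 0.6, 0.9]))
```

Tuning the threshold on validation folds like this can noticeably lift F1/F2 over the default 0.5 cutoff, especially on imbalanced data.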

2023-11-23 18:55:00 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

500it [00:07, 63.19it/s] 

2023-11-23 18:55:07 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:55:07 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:55:08 | unimol/train.py | 102 | INFO | Uni-Mol(QSAR) | Create output directory: ./split_learning_rate_0.0001

2023-11-23 18:55:08 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:55:08 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:55:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6793, val_loss: 0.6100, val_log_loss: 0.6277, lr: 0.000098, 2.3s

2023-11-23 18:55:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5730, val_loss: 0.6991, val_log_loss: 0.7400, lr: 0.000093, 2.3s

2023-11-23 18:55:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4742, val_loss: 0.3468, val_log_loss: 0.3794, lr: 0.000088, 2.3s

2023-11-23 18:55:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3917, val_loss: 0.5422, val_log_loss: 0.5088, lr: 0.000082, 2.3s

2023-11-23 18:55:22 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.3245, val_loss: 0.3904, val_log_loss: 0.3723, lr: 0.000077, 2.3s

2023-11-23 18:55:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2206, val_loss: 0.4179, val_log_loss: 0.4496, lr: 0.000072, 2.3s

2023-11-23 18:55:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.1675, val_loss: 0.4851, val_log_loss: 0.5411, lr: 0.000067, 2.3s

2023-11-23 18:55:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.1268, val_loss: 0.4406, val_log_loss: 0.4843, lr: 0.000062, 2.3s

2023-11-23 18:55:28 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 8

2023-11-23 18:55:30 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:55:30 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'log_loss': 0.3794389639981091, 'auc': 0.9099099099099098, 'f1_score': 0.75, 'mcc': 0.6536175223875428, 'acc': 0.84, 'precision': 0.8888888888888888, 'recall': 0.6486486486486487}

2023-11-23 18:55:30 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:55:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6725, val_loss: 0.5492, val_log_loss: 0.5481, lr: 0.000098, 2.3s

2023-11-23 18:55:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5441, val_loss: 0.5752, val_log_loss: 0.5815, lr: 0.000093, 2.3s

2023-11-23 18:55:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4641, val_loss: 0.5933, val_log_loss: 0.5922, lr: 0.000088, 2.3s

2023-11-23 18:55:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3367, val_loss: 0.3628, val_log_loss: 0.3990, lr: 0.000082, 2.3s

2023-11-23 18:55:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.2431, val_loss: 0.4767, val_log_loss: 0.5296, lr: 0.000077, 2.3s

2023-11-23 18:55:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2317, val_loss: 0.4477, val_log_loss: 0.4952, lr: 0.000072, 2.3s

2023-11-23 18:55:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.2146, val_loss: 0.7077, val_log_loss: 0.6603, lr: 0.000067, 2.3s

2023-11-23 18:55:50 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.1249, val_loss: 0.5636, val_log_loss: 0.5345, lr: 0.000062, 2.3s

2023-11-23 18:55:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.0860, val_loss: 0.8990, val_log_loss: 0.8705, lr: 0.000057, 2.3s

2023-11-23 18:55:52 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 9

2023-11-23 18:55:54 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:55:54 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'log_loss': 0.3990491611883044, 'auc': 0.8536699392239363, 'f1_score': 0.7796610169491526, 'mcc': 0.6895922337798495, 'acc': 0.87, 'precision': 0.8214285714285714, 'recall': 0.7419354838709677}

2023-11-23 18:55:55 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:55:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6605, val_loss: 0.6092, val_log_loss: 0.5662, lr: 0.000098, 2.3s

2023-11-23 18:56:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5816, val_loss: 0.6567, val_log_loss: 0.5192, lr: 0.000093, 2.3s

2023-11-23 18:56:02 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5343, val_loss: 0.5297, val_log_loss: 0.4763, lr: 0.000088, 2.3s

2023-11-23 18:56:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5233, val_loss: 0.6400, val_log_loss: 0.6013, lr: 0.000082, 2.3s

2023-11-23 18:56:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.3765, val_loss: 0.4583, val_log_loss: 0.3772, lr: 0.000077, 2.3s

2023-11-23 18:56:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.3541, val_loss: 0.4049, val_log_loss: 0.3679, lr: 0.000072, 2.3s

2023-11-23 18:56:14 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3285, val_loss: 0.5109, val_log_loss: 0.3929, lr: 0.000067, 2.3s

2023-11-23 18:56:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3171, val_loss: 0.3869, val_log_loss: 0.3524, lr: 0.000062, 2.3s

2023-11-23 18:56:19 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.2432, val_loss: 0.4675, val_log_loss: 0.3892, lr: 0.000057, 2.3s

2023-11-23 18:56:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.1851, val_loss: 0.5265, val_log_loss: 0.5825, lr: 0.000052, 2.3s

2023-11-23 18:56:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.1705, val_loss: 0.6513, val_log_loss: 0.6928, lr: 0.000046, 2.3s

2023-11-23 18:56:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.1056, val_loss: 0.5690, val_log_loss: 0.4908, lr: 0.000041, 2.3s

2023-11-23 18:56:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.0886, val_loss: 0.7190, val_log_loss: 0.8044, lr: 0.000036, 2.3s

2023-11-23 18:56:28 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 13

2023-11-23 18:56:29 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:56:29 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'log_loss': 0.35240658287890253, 'auc': 0.9345054945054945, 'f1_score': 0.8055555555555555, 'mcc': 0.6969685789552599, 'acc': 0.86, 'precision': 0.7837837837837838, 'recall': 0.8285714285714286}

2023-11-23 18:56:30 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:56:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6505, val_loss: 0.5463, val_log_loss: 0.5465, lr: 0.000098, 2.3s

2023-11-23 18:56:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.4907, val_loss: 0.5989, val_log_loss: 0.6219, lr: 0.000093, 2.3s

2023-11-23 18:56:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4260, val_loss: 0.5788, val_log_loss: 0.5336, lr: 0.000088, 2.3s

2023-11-23 18:56:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3334, val_loss: 0.5628, val_log_loss: 0.5297, lr: 0.000082, 2.3s

2023-11-23 18:56:42 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.2176, val_loss: 0.7918, val_log_loss: 0.7365, lr: 0.000077, 2.3s

2023-11-23 18:56:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2178, val_loss: 0.8676, val_log_loss: 0.8076, lr: 0.000072, 2.3s

2023-11-23 18:56:45 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 6

2023-11-23 18:56:46 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:56:46 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'log_loss': 0.5464730583876372, 'auc': 0.7925347222222222, 'f1_score': 0.46153846153846156, 'mcc': 0.3546040716334876, 'acc': 0.72, 'precision': 0.75, 'recall': 0.3333333333333333}

2023-11-23 18:56:47 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:56:49 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7273, val_loss: 0.6589, val_log_loss: 0.6588, lr: 0.000098, 2.3s

2023-11-23 18:56:52 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6387, val_loss: 0.8539, val_log_loss: 0.8992, lr: 0.000093, 2.3s

2023-11-23 18:56:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.5069, val_loss: 0.3808, val_log_loss: 0.4100, lr: 0.000088, 2.3s

2023-11-23 18:56:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.3750, val_loss: 0.3231, val_log_loss: 0.3580, lr: 0.000082, 2.3s

2023-11-23 18:57:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.3051, val_loss: 0.3421, val_log_loss: 0.3809, lr: 0.000077, 2.3s

2023-11-23 18:57:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.2955, val_loss: 0.4154, val_log_loss: 0.4640, lr: 0.000072, 2.3s

2023-11-23 18:57:05 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.2330, val_loss: 0.4739, val_log_loss: 0.5235, lr: 0.000067, 2.3s

2023-11-23 18:57:07 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.2084, val_loss: 0.3269, val_log_loss: 0.3643, lr: 0.000062, 2.3s

2023-11-23 18:57:10 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.1114, val_loss: 0.3653, val_log_loss: 0.4086, lr: 0.000057, 2.3s

2023-11-23 18:57:10 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 9

2023-11-23 18:57:11 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:57:11 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'log_loss': 0.35796185858547686, 'auc': 0.915931063472047, 'f1_score': 0.7887323943661971, 'mcc': 0.6821267019582338, 'acc': 0.85, 'precision': 0.875, 'recall': 0.717948717948718}

2023-11-23 18:57:11 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'log_loss': 0.407065925007686, 'auc': 0.8849710377556006, 'f1_score': 0.7295597484276731, 'mcc': 0.6154776624006962, 'acc': 0.828, 'precision': 0.8285714285714286, 'recall': 0.651685393258427}

2023-11-23 18:57:11 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 18:57:11 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: f1_score

2023-11-23 18:57:11 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.36781403147860575, metrics: 0.7714285714285714

2023-11-23 18:57:12 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

500it [00:07, 63.23it/s] 

2023-11-23 18:57:20 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 18:57:20 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 18:57:20 | unimol/train.py | 102 | INFO | Uni-Mol(QSAR) | Create output directory: ./split_learning_rate_0.001

2023-11-23 18:57:21 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:57:21 | unimol/models/nnmodel.py | 103 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1

2023-11-23 18:57:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7299, val_loss: 0.6655, val_log_loss: 0.6659, lr: 0.000979, 2.3s

2023-11-23 18:57:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5984, val_loss: 0.5727, val_log_loss: 0.5986, lr: 0.000928, 2.3s

2023-11-23 18:57:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6341, val_loss: 0.5869, val_log_loss: 0.5919, lr: 0.000876, 2.3s

2023-11-23 18:57:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5717, val_loss: 0.3872, val_log_loss: 0.3976, lr: 0.000825, 2.3s

2023-11-23 18:57:34 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4636, val_loss: 0.4099, val_log_loss: 0.4303, lr: 0.000773, 2.3s

2023-11-23 18:57:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.6111, val_loss: 0.6495, val_log_loss: 0.6577, lr: 0.000722, 2.3s

2023-11-23 18:57:39 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.6841, val_loss: 0.6487, val_log_loss: 0.6568, lr: 0.000670, 2.3s

2023-11-23 18:57:41 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.6395, val_loss: 0.5548, val_log_loss: 0.5587, lr: 0.000619, 2.3s

2023-11-23 18:57:43 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.5040, val_loss: 0.4327, val_log_loss: 0.4504, lr: 0.000567, 2.3s

2023-11-23 18:57:43 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 9

2023-11-23 18:57:45 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:57:45 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 0, result {'log_loss': 0.3976060327887535, 'auc': 0.9060489060489061, 'f1_score': 0.7164179104477612, 'mcc': 0.5830541973908208, 'acc': 0.81, 'precision': 0.8, 'recall': 0.6486486486486487}

2023-11-23 18:57:46 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:57:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6686, val_loss: 0.5537, val_log_loss: 0.5608, lr: 0.000979, 2.3s

2023-11-23 18:57:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6367, val_loss: 0.7104, val_log_loss: 0.7117, lr: 0.000928, 2.3s

2023-11-23 18:57:53 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6601, val_loss: 0.6053, val_log_loss: 0.6109, lr: 0.000876, 2.3s

2023-11-23 18:57:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.6113, val_loss: 0.6001, val_log_loss: 0.6068, lr: 0.000825, 2.3s

2023-11-23 18:57:58 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.5909, val_loss: 0.5544, val_log_loss: 0.5565, lr: 0.000773, 2.3s

2023-11-23 18:58:00 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.6210, val_loss: 0.5501, val_log_loss: 0.5659, lr: 0.000722, 2.3s

2023-11-23 18:58:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.5358, val_loss: 0.5498, val_log_loss: 0.5758, lr: 0.000670, 2.3s

2023-11-23 18:58:06 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.4879, val_loss: 0.5530, val_log_loss: 0.5826, lr: 0.000619, 2.3s

2023-11-23 18:58:08 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.5287, val_loss: 0.5856, val_log_loss: 0.6028, lr: 0.000567, 2.3s

2023-11-23 18:58:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.4775, val_loss: 0.6264, val_log_loss: 0.6346, lr: 0.000515, 2.3s

2023-11-23 18:58:13 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.4984, val_loss: 0.6161, val_log_loss: 0.6435, lr: 0.000464, 2.3s

2023-11-23 18:58:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.5191, val_loss: 0.5239, val_log_loss: 0.5291, lr: 0.000412, 2.3s

2023-11-23 18:58:18 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.4388, val_loss: 0.4819, val_log_loss: 0.5037, lr: 0.000361, 2.3s

2023-11-23 18:58:21 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.4087, val_loss: 0.5044, val_log_loss: 0.5297, lr: 0.000309, 2.3s

2023-11-23 18:58:24 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.3742, val_loss: 0.5480, val_log_loss: 0.5758, lr: 0.000258, 2.3s

2023-11-23 18:58:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.3309, val_loss: 0.5649, val_log_loss: 0.5743, lr: 0.000206, 2.3s

2023-11-23 18:58:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.3161, val_loss: 0.4796, val_log_loss: 0.5043, lr: 0.000155, 2.3s

2023-11-23 18:58:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.3028, val_loss: 0.5471, val_log_loss: 0.5501, lr: 0.000103, 2.3s

2023-11-23 18:58:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.2393, val_loss: 0.6115, val_log_loss: 0.6193, lr: 0.000052, 2.3s

2023-11-23 18:58:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [20/20] train_loss: 0.2115, val_loss: 0.6827, val_log_loss: 0.6723, lr: 0.000000, 2.3s

2023-11-23 18:58:37 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:58:37 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 1, result {'log_loss': 0.5043017678894103, 'auc': 0.8186068256194484, 'f1_score': 0.7301587301587302, 'mcc': 0.6062795312891787, 'acc': 0.83, 'precision': 0.71875, 'recall': 0.7419354838709677}

2023-11-23 18:58:38 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:58:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.6788, val_loss: 0.5458, val_log_loss: 0.5384, lr: 0.000979, 2.3s

2023-11-23 18:58:44 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.5177, val_loss: 0.6364, val_log_loss: 0.4680, lr: 0.000928, 2.3s

2023-11-23 18:58:46 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4383, val_loss: 0.9260, val_log_loss: 0.7203, lr: 0.000876, 2.3s

2023-11-23 18:58:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5153, val_loss: 0.8318, val_log_loss: 0.6400, lr: 0.000825, 2.3s

2023-11-23 18:58:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4405, val_loss: 0.5321, val_log_loss: 0.5628, lr: 0.000773, 2.3s

2023-11-23 18:58:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.4598, val_loss: 0.5166, val_log_loss: 0.5325, lr: 0.000722, 2.3s

2023-11-23 18:58:57 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3756, val_loss: 0.5933, val_log_loss: 0.4916, lr: 0.000670, 2.3s

2023-11-23 18:58:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.3388, val_loss: 0.7589, val_log_loss: 0.5634, lr: 0.000619, 2.3s

2023-11-23 18:59:01 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.2492, val_loss: 0.5312, val_log_loss: 0.5143, lr: 0.000567, 2.3s

2023-11-23 18:59:03 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.2568, val_loss: 0.4971, val_log_loss: 0.4512, lr: 0.000515, 2.3s

2023-11-23 18:59:06 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.2048, val_loss: 0.7064, val_log_loss: 0.7679, lr: 0.000464, 2.3s

2023-11-23 18:59:09 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.1704, val_loss: 0.8314, val_log_loss: 0.6956, lr: 0.000412, 2.3s

2023-11-23 18:59:11 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.1544, val_loss: 0.7920, val_log_loss: 0.7233, lr: 0.000361, 2.3s

2023-11-23 18:59:13 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.0756, val_loss: 1.1304, val_log_loss: 0.9830, lr: 0.000309, 2.3s

2023-11-23 18:59:16 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.0811, val_loss: 1.2926, val_log_loss: 1.0568, lr: 0.000258, 2.3s

2023-11-23 18:59:16 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 15

2023-11-23 18:59:17 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:59:17 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 2, result {'log_loss': 0.4512166403513402, 'auc': 0.883076923076923, 'f1_score': 0.6769230769230768, 'mcc': 0.5261353625024292, 'acc': 0.79, 'precision': 0.7333333333333333, 'recall': 0.6285714285714286}

2023-11-23 18:59:17 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:59:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7124, val_loss: 0.5760, val_log_loss: 0.5814, lr: 0.000979, 2.3s

2023-11-23 18:59:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6066, val_loss: 0.8263, val_log_loss: 0.8241, lr: 0.000928, 2.3s

2023-11-23 18:59:26 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.4958, val_loss: 0.9265, val_log_loss: 0.8112, lr: 0.000876, 2.3s

2023-11-23 18:59:28 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.5454, val_loss: 0.5007, val_log_loss: 0.4902, lr: 0.000825, 2.3s

2023-11-23 18:59:31 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.4104, val_loss: 0.5707, val_log_loss: 0.5179, lr: 0.000773, 2.3s

2023-11-23 18:59:33 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.3482, val_loss: 0.6198, val_log_loss: 0.5771, lr: 0.000722, 2.3s

2023-11-23 18:59:36 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.3141, val_loss: 0.6691, val_log_loss: 0.6092, lr: 0.000670, 2.3s

2023-11-23 18:59:38 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.2528, val_loss: 0.6074, val_log_loss: 0.5606, lr: 0.000619, 2.3s

2023-11-23 18:59:40 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.3273, val_loss: 0.7806, val_log_loss: 0.7215, lr: 0.000567, 2.3s

2023-11-23 18:59:40 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 9

2023-11-23 18:59:42 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 18:59:42 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 3, result {'log_loss': 0.4902133098989725, 'auc': 0.8263888888888888, 'f1_score': 0.6756756756756757, 'mcc': 0.48586716060579827, 'acc': 0.76, 'precision': 0.6578947368421053, 'recall': 0.6944444444444444}

2023-11-23 18:59:42 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 18:59:45 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [1/20] train_loss: 0.7487, val_loss: 0.7275, val_log_loss: 0.7477, lr: 0.000979, 2.3s

2023-11-23 18:59:48 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [2/20] train_loss: 0.6866, val_loss: 0.6568, val_log_loss: 0.6696, lr: 0.000928, 2.2s

2023-11-23 18:59:51 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [3/20] train_loss: 0.6866, val_loss: 0.6343, val_log_loss: 0.6381, lr: 0.000876, 2.3s

2023-11-23 18:59:54 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [4/20] train_loss: 0.6493, val_loss: 0.6576, val_log_loss: 0.6694, lr: 0.000825, 2.3s

2023-11-23 18:59:56 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [5/20] train_loss: 0.7021, val_loss: 0.6521, val_log_loss: 0.6631, lr: 0.000773, 2.3s

2023-11-23 18:59:59 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [6/20] train_loss: 0.6025, val_loss: 0.6182, val_log_loss: 0.6197, lr: 0.000722, 2.3s

2023-11-23 19:00:02 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [7/20] train_loss: 0.6136, val_loss: 0.6401, val_log_loss: 0.6507, lr: 0.000670, 2.3s

2023-11-23 19:00:04 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [8/20] train_loss: 0.5782, val_loss: 0.5394, val_log_loss: 0.5462, lr: 0.000619, 2.3s

2023-11-23 19:00:07 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [9/20] train_loss: 0.5193, val_loss: 0.4840, val_log_loss: 0.4959, lr: 0.000567, 2.3s

2023-11-23 19:00:10 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [10/20] train_loss: 0.5375, val_loss: 0.6650, val_log_loss: 0.6732, lr: 0.000515, 2.3s

2023-11-23 19:00:13 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [11/20] train_loss: 0.5865, val_loss: 0.5375, val_log_loss: 0.5559, lr: 0.000464, 2.3s

2023-11-23 19:00:15 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [12/20] train_loss: 0.5346, val_loss: 0.5470, val_log_loss: 0.5808, lr: 0.000412, 2.3s

2023-11-23 19:00:17 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [13/20] train_loss: 0.5283, val_loss: 0.6050, val_log_loss: 0.6260, lr: 0.000361, 2.3s

2023-11-23 19:00:20 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [14/20] train_loss: 0.4953, val_loss: 0.4253, val_log_loss: 0.4485, lr: 0.000309, 2.3s

2023-11-23 19:00:23 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [15/20] train_loss: 0.4970, val_loss: 0.4543, val_log_loss: 0.4766, lr: 0.000258, 2.3s

2023-11-23 19:00:25 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [16/20] train_loss: 0.4653, val_loss: 0.5407, val_log_loss: 0.5677, lr: 0.000206, 2.3s

2023-11-23 19:00:27 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [17/20] train_loss: 0.4566, val_loss: 0.4484, val_log_loss: 0.4780, lr: 0.000155, 2.3s

2023-11-23 19:00:29 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [18/20] train_loss: 0.4356, val_loss: 0.5082, val_log_loss: 0.5356, lr: 0.000103, 2.3s

2023-11-23 19:00:32 | unimol/tasks/trainer.py | 169 | INFO | Uni-Mol(QSAR) | Epoch [19/20] train_loss: 0.4205, val_loss: 0.4876, val_log_loss: 0.5162, lr: 0.000052, 2.3s

2023-11-23 19:00:32 | unimol/tasks/trainer.py | 202 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 19

2023-11-23 19:00:33 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:00:33 | unimol/models/nnmodel.py | 129 | INFO | Uni-Mol(QSAR) | fold 4, result {'log_loss': 0.4485196087509394, 'auc': 0.868011769651114, 'f1_score': 0.7027027027027027, 'mcc': 0.5308588184157569, 'acc': 0.78, 'precision': 0.7428571428571429, 'recall': 0.6666666666666666}

2023-11-23 19:00:33 | unimol/models/nnmodel.py | 144 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 

{'log_loss': 0.4583714719358832, 'auc': 0.8566892316281665, 'f1_score': 0.6997084548104956, 'mcc': 0.5441826412104309, 'acc': 0.794, 'precision': 0.7272727272727273, 'recall': 0.6741573033707865}

2023-11-23 19:00:33 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!

2023-11-23 19:00:33 | unimol/utils/metrics.py | 260 | INFO | Uni-Mol(QSAR) | metrics for threshold: f1_score

2023-11-23 19:00:34 | unimol/utils/metrics.py | 274 | INFO | Uni-Mol(QSAR) | best threshold: 0.3736185831458945, metrics: 0.7397959183673469
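The log above ends by reporting a "best threshold" tuned to maximize F1 on the out-of-fold predictions. A minimal, self-contained sketch of that idea, with made-up labels and probabilities (this is an illustration, not Uni-Mol's internal implementation):

```python
# Scan candidate probability thresholds and keep the one that maximizes F1.
def f1(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(y_true, y_prob):
    # every distinct predicted probability is a candidate cut-off
    best_t, best_score = 0.5, -1.0
    for t in sorted(set(y_prob)):
        y_pred = [1 if p > t else 0 for p in y_prob]
        score = f1(y_true, y_pred)
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Toy out-of-fold predictions (hypothetical values)
y_true = [0, 0, 1, 1, 1, 0, 1]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.45]
t, s = best_threshold(y_true, y_prob)
```

With these toy values the scan picks the cut-off 0.2, which separates all positives from all but one negative.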

5.4. Parameters of the "MolPredict" module

load_model: path to the trained model

[12]
# Evaluate model performance on the manually split validation set valid_data_split
threshold = 0.5

valid_split_results = {}

lr_ft = [1e-5, 1e-4, 1e-3]  # candidate learning rates
for i in range(len(lr_ft)):  # loop over the learning rates
    valid_pred_model = MolPredict(load_model='./split_learning_rate_' + str(lr_ft[i]))  # load the model finetuned with this learning rate
    valid_pred = valid_pred_model.predict("./mol_valid_split.csv")  # predict on the validation data
    valid_results = pd.DataFrame({'pred': valid_pred.reshape(-1),
                                  'SMILES': valid_data_split["SMILES"],
                                  'Target_BBB': valid_data_split["TARGET"]})
    auc = roc_auc_score(valid_results.Target_BBB, valid_results.pred)
    fpr, tpr, _ = roc_curve(valid_results.Target_BBB, valid_results.pred)
    f2_score = fbeta_score(
        valid_results.Target_BBB,
        [1 if p > threshold else 0 for p in valid_results.pred],
        beta=2
    )
    valid_split_results[f"Split Learning Rate: {lr_ft[i]}"] = {"AUC": auc, "FPR": fpr, "TPR": tpr, "F2_Score": f2_score}
    print(f"[Learning Rate: {lr_ft[i]}]\tAUC:{auc:.4f}\tF2_Score:{f2_score:.4f}")

sorted_valid_split_results = sorted(valid_split_results.items(), key=lambda x: x[1]["F2_Score"], reverse=True)  # sort the results by F2-score

# Plot the ROC curves
plt.figure(figsize=(10, 6), dpi=200)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")

for name, result in sorted_valid_split_results:
    if name.startswith("Split Learning Rate"):
        plt.plot(result["FPR"], result["TPR"], label=f"{name} (AUC:{result['AUC']:.4f} F2_Score:{result['F2_Score']:.4f})")

plt.legend(loc="lower right")
plt.title("Validation_Split_Model", fontdict=font)
plt.show()
2023-11-23 19:02:03 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

200it [00:03, 51.07it/s]

2023-11-23 19:02:07 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 19:02:07 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 19:02:07 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 19:02:08 | unimol/models/nnmodel.py | 154 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1

2023-11-23 19:02:08 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:09 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:09 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:10 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:10 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:11 | unimol/predict.py | 66 | INFO | Uni-Mol(QSAR) | final predict metrics score: 

{'log_loss': 0.37887229695916175, 'auc': 0.9043035734041376, 'f1_score': 0.7716535433070866, 'mcc': 0.6719854177364072, 'acc': 0.855, 'precision': 0.8448275862068966, 'recall': 0.7101449275362319}
[Learning Rate: 1e-05]	AUC:0.9043	F2_Score:0.7335
2023-11-23 19:02:11 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

200it [00:03, 50.99it/s]

2023-11-23 19:02:15 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 19:02:15 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 19:02:16 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 19:02:16 | unimol/models/nnmodel.py | 154 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1

2023-11-23 19:02:16 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:17 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:18 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:18 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:19 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:19 | unimol/predict.py | 66 | INFO | Uni-Mol(QSAR) | final predict metrics score: 

{'log_loss': 0.3505101210810244, 'auc': 0.9322933952870893, 'f1_score': 0.7213114754098361, 'mcc': 0.6128578156678152, 'acc': 0.83, 'precision': 0.8301886792452831, 'recall': 0.6376811594202898}
[Learning Rate: 0.0001]	AUC:0.9323	F2_Score:0.6687
2023-11-23 19:02:20 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

200it [00:03, 53.26it/s]

2023-11-23 19:02:23 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-23 19:02:23 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-23 19:02:24 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-23 19:02:24 | unimol/models/nnmodel.py | 154 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1

2023-11-23 19:02:25 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:25 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:26 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:26 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:27 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-23 19:02:27 | unimol/predict.py | 66 | INFO | Uni-Mol(QSAR) | final predict metrics score: 

{'log_loss': 0.4100132327340543, 'auc': 0.8853855514990596, 'f1_score': 0.7681159420289855, 'mcc': 0.6459785374488328, 'acc': 0.84, 'precision': 0.7681159420289855, 'recall': 0.7681159420289855}
[Learning Rate: 0.001]	AUC:0.8854	F2_Score:0.7681
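The F2 scores printed for each learning rate can be reproduced directly from the precision and recall values in the logged metrics dicts. A minimal sketch (the precision/recall pairs are copied verbatim from the logs above):

```python
# F-beta score: F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)
# With beta = 2, recall is weighted four times as heavily as precision.
def f_beta(precision, recall, beta=2.0):
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# precision/recall pairs copied from the cross-validation logs above
runs = {
    1e-4: (0.8301886792452831, 0.6376811594202898),
    1e-3: (0.7681159420289855, 0.7681159420289855),
}
for lr, (p, r) in runs.items():
    print(f"lr={lr}: F2={f_beta(p, r):.4f}")  # 0.6687 and 0.7681, matching the logs
```

Note that for the 1e-3 run precision equals recall, so F2 collapses to that common value, which is why the logged F2_Score equals the precision there.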

6. Predicting on the Test Data with the Best-Performing Model

[9]
# Predict on the test set with the model trained on the full training data
threshold = 0.5

test_full_pred_model = MolPredict(load_model='./full_learning_rate_'+str(1e-4))  # load the model trained with this learning rate

test_smi = {"SMILES": test_data["SMILES"].values.tolist()}

test_full_pred = test_full_pred_model.predict(test_smi["SMILES"])  # predict on the test data
test_full_results = pd.DataFrame({'pred': test_full_pred.reshape(-1),
                                  'SMILES': test_data["SMILES"]})

test_full_results["TARGET"] = [1 if x > threshold else 0 for x in test_full_results["pred"].values.tolist()]

test_full_results[["SMILES", "TARGET"]].to_csv('./Full_Model_Submission.csv', index=False, header=True)
2023-11-20 18:55:03 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

367it [00:07, 48.62it/s]

2023-11-20 18:55:10 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-20 18:55:10 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-20 18:55:11 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-20 18:55:11 | unimol/models/nnmodel.py | 154 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1

2023-11-20 18:55:13 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:55:16 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:55:19 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:55:23 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:55:25 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

                                                    
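The 0.5 threshold above is an arbitrary default. Because the competition metric (F2) weights recall more heavily than precision, it can pay to sweep candidate thresholds on held-out predictions and keep the one that maximizes F2. A minimal sketch with a hypothetical helper `best_f2_threshold` and synthetic `y_true`/`y_prob` arrays standing in for real validation labels and predicted probabilities:

```python
import numpy as np
from sklearn.metrics import fbeta_score

def best_f2_threshold(y_true, y_prob, steps=101):
    """Sweep candidate thresholds and return the one maximizing F2."""
    thresholds = np.linspace(0.05, 0.95, steps)
    scores = [fbeta_score(y_true, (y_prob > t).astype(int), beta=2)
              for t in thresholds]
    best = int(np.argmax(scores))
    return thresholds[best], scores[best]

# Toy illustration with made-up labels/probabilities (not the competition data)
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)
t, s = best_f2_threshold(y_true, y_prob)
print(f"best threshold {t:.2f}, F2 {s:.3f}")
```

In practice the sweep should be run on out-of-fold predictions (e.g. from the 5-fold cross validation above), not on the test set, to avoid leaking the test labels into the threshold choice.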
[11]
# Predict on the test set with the model trained on the manually split data
test_split_pred_model = MolPredict(load_model='./split_learning_rate_'+str(1e-3))  # load the model trained with this learning rate

test_split_pred = test_split_pred_model.predict(test_smi["SMILES"])  # predict on the test data
test_split_results = pd.DataFrame({'pred': test_split_pred.reshape(-1),
                                   'SMILES': test_data["SMILES"]})

test_split_results["TARGET"] = [1 if x > threshold else 0 for x in test_split_results["pred"].values.tolist()]

test_split_results[["SMILES", "TARGET"]].to_csv('./Split_Model_Submission.csv', index=False, header=True)
2023-11-20 18:56:38 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Start generating conformers...

367it [00:07, 47.77it/s]

2023-11-20 18:56:46 | unimol/data/conformer.py | 66 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.

2023-11-20 18:56:46 | unimol/data/conformer.py | 68 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.00% of molecules.

2023-11-20 18:56:47 | unimol/models/unimol.py | 116 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt

2023-11-20 18:56:47 | unimol/models/nnmodel.py | 154 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1

2023-11-20 18:56:47 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:56:48 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:56:50 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:56:51 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

2023-11-20 18:56:52 | unimol/tasks/trainer.py | 213 | INFO | Uni-Mol(QSAR) | load model success!

                                                    
   pred      SMILES
0  0.030412  CC(CCC(=O)O)C1CCC2C3C(CC(=O)C12C)C4(C)CCC(=O)C...
1  0.774587  CC(=O)c1ccc2c(c1)Sc3ccccc3N2CCCN4CCN(CC4)CCO
2  0.036411  CCCN(CCC)C(=O)C(CCC(=O)OCCCN1CCN(CCOC(=O)Cc2c(...
3  0.032624  CC(C)CCCC(C)CCCC(C)CCCC1(C)CCc2c(C)c(O)c(C)c(C...
4  0.685985  CCCN(CCC)CCc1cccc2c1CC(=O)N2
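Before choosing which submission file to upload, a quick sanity check is to measure how often the two models agree on the predicted labels. A minimal sketch using toy DataFrames as stand-ins for the two submission files written above (the hypothetical SMILES rows here are for illustration only):

```python
import pandas as pd

# Toy stand-ins for Full_Model_Submission.csv and Split_Model_Submission.csv
full = pd.DataFrame({'SMILES': ['C', 'CC', 'CCO'], 'TARGET': [1, 0, 1]})
split = pd.DataFrame({'SMILES': ['C', 'CC', 'CCO'], 'TARGET': [1, 1, 1]})

# Align the two submissions on SMILES and compare their labels row by row
merged = full.merge(split, on='SMILES', suffixes=('_full', '_split'))
agreement = (merged['TARGET_full'] == merged['TARGET_split']).mean()
print(f"label agreement between the two models: {agreement:.1%}")
```

A low agreement rate suggests the two models disagree substantially, in which case the cross-validation F2 scores above are the more reliable guide for picking the submission.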