Quantitative Structure-Activity Relationship (QSAR) Models from 0 to 1 & A Hands-On Introduction to Uni-Mol (Regression Task)
Tags: Cheminformatics · Deep Learning · RDKit · QSAR · Tutorial · Chinese · Machine Learning · Uni-Mol · notebook · Scikit-Learn
zhengh@dp.tech · Published 2023-06-17
Recommended image: unimol-qsar:v0.2
Recommended machine type: c12_m92_1 * NVIDIA V100
Table of Contents
Introduction
Let's prepare some data first!
A Brief History of QSAR
Basic Requirements for QSAR Modeling
Basic Workflow of QSAR Modeling
Molecular Representation
1D-QSAR Molecular Representation
2D-QSAR Molecular Representation
3D-QSAR Molecular Representation
Uni-Mol: A Molecular Representation Learning and Pretraining Framework
Pretrained Models
A Brief Introduction to Uni-Mol
Results Overview
One More Thing

Quantitative Structure-Activity Relationship (QSAR) Models from 0 to 1 & A Hands-On Introduction to Uni-Mol (Regression Task)

©️ Copyright 2023 @ Authors
Author: 郑行 📨
Date: 2023-06-16
License: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Quick start: click the Connect button at the top, select the unimol-qsar:v0.2 image and any GPU machine type, and you can run the notebook after a short wait.


In recent years, artificial intelligence (AI) has been developing at an unprecedented pace, bringing enormous breakthroughs and change to many fields.

In drug discovery, however, scientists have actually been using mathematical and statistical methods to support the development pipeline since the last century. Starting from the structures of drug molecules, they build mathematical models to predict biochemical activity, an approach known as Quantitative Structure-Activity Relationship (QSAR). QSAR models have kept evolving as the study of drug molecules has deepened and as more and more AI methods have been proposed.

In that sense, QSAR models are a good microcosm of how the AI for Science field has developed. In this notebook, we introduce how to build different types of QSAR models through worked examples.


Introduction

Quantitative Structure-Activity Relationship (QSAR) is a method for studying the quantitative relationship between the chemical structure of compounds and their biological activity, and it is one of the most important tools in Computer-Aided Drug Design (CADD). QSAR aims to build mathematical models that relate molecular structure to biochemical and physicochemical properties, helping drug scientists make well-founded predictions about the properties of new drug molecules.

Building an effective QSAR model involves several steps:

  1. Build a suitable molecular representation that converts the molecular structure into computer-readable numerical form;
  2. Choose a machine learning model that matches the representation, and train it on existing molecule-property data;
  3. Use the trained machine learning model to predict the properties of molecules whose properties have not been measured.

The development of QSAR models has tracked the evolution of molecular representations and the corresponding upgrades of machine learning models. In this notebook, we introduce how to build different types of QSAR models through worked examples.


Let's prepare some data first!

To help you learn and experience the process of building QSAR models, we will use the prediction of hERG protein inhibition as our demonstration task.

Let's first download the hERG dataset:

import os
os.makedirs("datasets", exist_ok=True)

!pip install seaborn
!pip install lightgbm
!wget https://dp-public.oss-cn-beijing.aliyuncs.com/community/hERG.csv -O datasets/hERG.csv
Collecting seaborn
Successfully installed seaborn-0.12.2
Collecting lightgbm
Successfully installed lightgbm-3.3.5
--2023-06-17 12:34:08--  https://dp-public.oss-cn-beijing.aliyuncs.com/community/hERG.csv
Length: 560684 (548K) [text/csv]
2023-06-17 12:34:09 (6.55 MB/s) - ‘datasets/hERG.csv’ saved [560684/560684]


Next, let's take a look at how this dataset is organized:

import pandas as pd
import numpy as np

data = pd.read_csv("./datasets/hERG.csv")
print("------------ Original data ------------")
print(data)
data.columns = ["SMILES", "TARGET"]

# Use 80% of the dataset for training and the remaining 20% for testing
train_fraction = 0.8
train_data = data.sample(frac=train_fraction, random_state=1)
train_data.to_csv("./datasets/hERG_train.csv", index=False)
test_data = data.drop(train_data.index)
test_data.to_csv("./datasets/hERG_test.csv", index=False)

# Set the training/test targets
train_y = np.array(train_data["TARGET"].values.tolist())
test_y = np.array(test_data["TARGET"].values.tolist())

# Create a dict to hold the results we collect below
results = {}

# Visualize the label distribution
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(6,4), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
plt.hist(train_data["TARGET"], bins=20, label="Train Data")
plt.hist(test_data["TARGET"], bins=20, label="Test Data")
plt.ylabel("Count", fontdict=font)
plt.xlabel("pIC50", fontdict=font)
plt.legend()
plt.show()
------------ Original data ------------
                                                 SMILES  pIC50
0     Cc1ccc(CN2[C@@H]3CC[C@H]2C[C@@H](C3)Oc4cccc(c4...   9.85
1     COc1nc2ccc(Br)cc2cc1[C@@H](c3ccccc3)[C@@](O)(C...   9.70
2     NC(=O)c1cccc(O[C@@H]2C[C@H]3CC[C@@H](C2)N3CCCc...   9.60
3                          CCCCCCCc1cccc([n+]1C)CCCCCCC   9.60
4     Cc1ccc(CN2[C@@H]3CC[C@H]2C[C@@H](C3)Oc4cccc(c4...   9.59
...                                                 ...    ...
9199  O=C1[C@H]2N(c3ccc(OCC=CCCNCC(=O)Nc4c(Cl)cc(cc4...   4.89
9200  O=C1[C@H]2N(c3ccc(OCCCCCNCC(=O)Nc4c(Cl)cc(cc4C...   4.89
9201  O=C1[C@H]2N(c3ccc(OCC=CCCCNCC(=O)Nc4c(Cl)cc(cc...   4.89
9202  O=C1[C@H]2N(c3ccc(OCCCCCCNCC(=O)Nc4c(Cl)cc(cc4...   4.49
9203  O=C1N=C/C(=C2\N(c3c(cc(Cl)c(Cl)c3)N\2)Cc4cc(Cl...   5.30

[9204 rows x 2 columns]

As we can see, in the hERG dataset:

  • molecules are represented as SMILES strings;
  • the task is a regression problem: predict each molecule's inhibitory activity against the protein, expressed as pIC50.

This is a typical molecular property prediction task. Good, let's put the dataset aside for now and formally begin our exploration.
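A quick aside on the label: pIC50 is the negative base-10 logarithm of the half-maximal inhibitory concentration expressed in mol/L, so larger values mean stronger inhibition. A minimal sketch of the conversion (assuming raw IC50 values reported in nM, a common convention; this helper is illustrative and not part of the dataset pipeline):

import numpy as np

def ic50_nm_to_pic50(ic50_nm):
    # pIC50 = -log10(IC50 in mol/L); 1 nM = 1e-9 mol/L
    return -np.log10(ic50_nm * 1e-9)

print(ic50_nm_to_pic50(10.0))  # a 10 nM inhibitor corresponds to pIC50 = 8.0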


A Brief History of QSAR

As defined above, QSAR quantitatively relates the chemical structure of compounds to their biological activity, and it is one of the most important tools in Computer-Aided Drug Design (CADD).

QSAR grew out of Structure-Activity Relationship (SAR) analysis. SAR can be traced back to the late 19th century, when chemists began studying the relationship between compound structure and biological activity. The German chemist Paul Ehrlich (1854-1915) proposed the "lock-and-key" hypothesis: the interaction between a compound (the key) and its biological target (the lock) depends on how well their shapes match. As the understanding of intermolecular interactions deepened, it became clear that, beyond shape complementarity, the match between the surface properties of the target (e.g., hydrophobicity, electrophilicity) and the corresponding properties of the ligand is equally crucial, and a series of methods for analyzing how structural features relate to binding affinity, i.e., structure-activity relationships, developed from there.

However, SAR analysis relied mainly on the experience and intuition of chemists and lacked a rigorous theoretical basis and a unified methodology. To overcome these limitations, in the 1960s scientists began to use mathematical and statistical methods to quantitatively analyze the relationship between molecular structure and biological activity.

The earliest QSAR model can be traced back to 1868, when the chemist Alexander Crum Brown and the physiologist Thomas R. Fraser began studying the relationship between compound structure and biological activity. While studying the biological effects of alkaloids before and after methylation of their basic N atoms, they proposed that the physiological activity of a compound depends on its constitution, i.e., that biological activity is a function of chemical composition: $\Phi = f(C)$. This is the famous Crum-Brown equation, and the hypothesis laid the foundation for later QSAR research.

Subsequently, more QSAR models kept being proposed, such as Hammett's model relating the toxicity of organic compounds to their electronic properties, and Taft's steric parameter model. In 1964, Hansch and Fujita proposed the famous Hansch model, which states that a molecule's biological activity is mainly determined by its hydrophobic effect ($\pi$), steric effect ($E_s$), and electronic effect ($\sigma$), and assumes that these three effects are independently additive; its full form is commonly written as $\log(1/C) = -k_1\pi^2 + k_2\pi + \rho\sigma + k_3 E_s + k_4$. The Hansch model was the first to describe the relationship between chemical information and drug bioactivity quantitatively; it provided a practical theoretical framework for subsequent QSAR research and is regarded as a key milestone in the transition from blind drug design to rational drug design.

Today, QSAR has matured into a well-established research field spanning many computational methods and techniques. In recent years, with the rapid development of machine learning and AI, QSAR methods have been further extended: deep learning, for instance, has been applied to QSAR model building, improving predictive power and accuracy. QSAR has also found broad application in environmental science, materials science, and other fields, demonstrating its strong potential and wide applicability.


Basic Requirements for QSAR Modeling

At an international conference held in Setubal, Portugal in 2002, the participating scientists put forward several rules for the validity of QSAR models, known as the "Setubal Principles". These rules were refined in detail in November 2004 and formally named the "OECD Principles". For a QSAR model to serve a regulatory purpose, it should satisfy the following five conditions:

  1. a defined endpoint
  2. an unambiguous algorithm
  3. a defined domain of applicability
  4. appropriate measures of goodness-of-fit, robustness and predictivity
  5. a mechanistic interpretation, if possible

Basic Workflow of QSAR Modeling

Building an effective QSAR model involves three main steps:

  1. Build a suitable molecular representation that converts the molecular structure into computer-readable numerical form;
  2. Choose a machine learning model that matches the representation, and train it on existing molecule-property data;
  3. Use the trained machine learning model to predict the properties of molecules whose properties have not been measured.

Since molecular structure is not natively a computer-readable format, we must first convert it into a computer-readable numerical vector before we can choose a suitable mathematical model on top of it. This process is called molecular representation. An effective molecular representation, paired with a well-matched mathematical model, is the core of building a QSAR model.


Molecular Representation

A molecular representation is a numerical representation that encodes molecular properties. Familiar examples include molecular descriptors, molecular fingerprints, SMILES strings, and molecular potential functions.

Image source: Wei, J., Chu, X., Sun, X. Y., Xu, K., Deng, H. X., Chen, J., ... & Lei, M. (2019). Machine learning in materials science. InfoMat, 1(3), 338-358.

In fact, QSAR has developed, and is commonly classified, along with the growing information content and changing form of molecular representations; common QSAR models fall into 1D-QSAR, 2D-QSAR, and 3D-QSAR:

Different molecular representations have different numerical characteristics and therefore call for different machine learning / deep learning models. Next, we will show on a concrete case how to build 1D-QSAR, 2D-QSAR, and 3D-QSAR models.


1D-QSAR Molecular Representation

Most early QSAR models characterized molecules by physicochemical properties such as molecular weight, water solubility, and molecular surface area; these physicochemical properties are called molecular descriptors. This was the 1D-QSAR stage.

This stage usually required experienced scientists to design descriptors from their domain knowledge, assembling molecular properties likely to be related to the target property. For example, to predict whether a drug can cross the blood-brain barrier, a property that may correlate with the drug molecule's water solubility, molecular weight, polar surface area, and other physicochemical attributes, a scientist would include exactly those attributes among the descriptors.

Because computers were not yet widespread, or compute power was limited, scientists at this stage usually modeled with simple mathematical models such as linear regression or random forests. Conveniently, since descriptor-based representations are usually low-dimensional real-valued vectors, these models are also well suited to the job.

from rdkit import Chem
from rdkit.Chem import Descriptors

def calculate_1dqsar_repr(smiles):
    # Build a molecule object from the SMILES string
    mol = Chem.MolFromSmiles(smiles)
    # Molecular weight
    mol_weight = Descriptors.MolWt(mol)
    # LogP
    log_p = Descriptors.MolLogP(mol)
    # Number of hydrogen-bond donors
    num_h_donors = Descriptors.NumHDonors(mol)
    # Number of hydrogen-bond acceptors
    num_h_acceptors = Descriptors.NumHAcceptors(mol)
    # Topological polar surface area
    tpsa = Descriptors.TPSA(mol)
    # Number of rotatable bonds
    num_rotatable_bonds = Descriptors.NumRotatableBonds(mol)
    # Number of aromatic rings
    num_aromatic_rings = Descriptors.NumAromaticRings(mol)
    # Number of aliphatic rings
    num_aliphatic_rings = Descriptors.NumAliphaticRings(mol)
    # Number of saturated rings
    num_saturated_rings = Descriptors.NumSaturatedRings(mol)
    # Number of heteroatoms
    num_heteroatoms = Descriptors.NumHeteroatoms(mol)
    # Number of valence electrons
    num_valence_electrons = Descriptors.NumValenceElectrons(mol)
    # Number of radical electrons
    num_radical_electrons = Descriptors.NumRadicalElectrons(mol)
    # QED drug-likeness score
    qed = Descriptors.qed(mol)
    # Return all computed properties
    return [mol_weight, log_p, num_h_donors, num_h_acceptors, tpsa, num_rotatable_bonds, num_aromatic_rings,
            num_aliphatic_rings, num_saturated_rings, num_heteroatoms, num_valence_electrons, num_radical_electrons, qed]


train_data["1dqsar_mr"] = train_data["SMILES"].apply(calculate_1dqsar_repr)
test_data["1dqsar_mr"] = test_data["SMILES"].apply(calculate_1dqsar_repr)
print(train_data["1dqsar_mr"][:1].values.tolist())
[[464.87300000000016, 2.5531800000000002, 1, 10, 140.73000000000002, 8, 4, 0, 0, 12, 166, 0, 0.4159359067517256]]
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

# Convert the training and test data to NumPy arrays
train_x = np.array(train_data["1dqsar_mr"].values.tolist())
train_y = np.array(train_data["TARGET"].values.tolist())
test_x = np.array(test_data["1dqsar_mr"].values.tolist())
test_y = np.array(test_data["TARGET"].values.tolist())

# Define the list of regressors to evaluate
regressors = [
    ("Linear Regression", LinearRegression()),                          # linear regression
    ("Ridge Regression", Ridge(random_state=42)),                       # ridge regression
    ("Lasso Regression", Lasso(random_state=42)),                       # Lasso regression
    ("ElasticNet Regression", ElasticNet(random_state=42)),             # ElasticNet regression
    ("Support Vector", SVR()),                                          # support vector regression
    ("K-Nearest Neighbors", KNeighborsRegressor()),                     # k-nearest-neighbors regression
    ("Decision Tree", DecisionTreeRegressor(random_state=42)),          # decision tree regression
    ("Random Forest", RandomForestRegressor(random_state=42)),          # random forest regression
    ("Gradient Boosting", GradientBoostingRegressor(random_state=42)),  # gradient boosting regression
    ("XGBoost", XGBRegressor(random_state=42)),                         # XGBoost regression
    ("LightGBM", LGBMRegressor(random_state=42)),                       # LightGBM regression
    ("Multi-layer Perceptron", MLPRegressor(                            # multi-layer perceptron (neural network)
        hidden_layer_sizes=(128, 64, 32),
        learning_rate_init=0.0001,
        activation='relu', solver='adam',
        max_iter=10000, random_state=42)),
]

# Train each regressor, make predictions, and compute performance metrics
for name, regressor in regressors:
    # Fit on the training data
    regressor.fit(train_x, train_y)
    # Predict on the training and test data
    pred_train_y = regressor.predict(train_x)
    pred_test_y = regressor.predict(test_x)
    # Store the predictions alongside the data
    train_data[f"1D-QSAR-{name}_pred"] = pred_train_y
    test_data[f"1D-QSAR-{name}_pred"] = pred_test_y
    # Compute test-set metrics
    mse = mean_squared_error(test_y, pred_test_y)
    se = abs(test_y - pred_test_y)
    results[f"1D-QSAR-{name}"] = {"MSE": mse, "error": se}
    print(f"[1D-QSAR][{name}]\tMSE:{mse:.4f}")


[1D-QSAR][Linear Regression]	MSE:0.8857
[1D-QSAR][Ridge Regression]	MSE:0.8857
[1D-QSAR][Lasso Regression]	MSE:0.9286
[1D-QSAR][ElasticNet Regression]	MSE:0.9269
[1D-QSAR][Support Vector]	MSE:0.9398
[1D-QSAR][K-Nearest Neighbors]	MSE:0.9110
[1D-QSAR][Decision Tree]	MSE:1.0579
[1D-QSAR][Random Forest]	MSE:0.6052
[1D-QSAR][Gradient Boosting]	MSE:0.7607
[1D-QSAR][XGBoost]	MSE:0.6057
[1D-QSAR][LightGBM]	MSE:0.6426
[1D-QSAR][Multi-layer Perceptron]	MSE:0.9385
import matplotlib.pyplot as plt
import seaborn as sns

# Collect the absolute errors of each model for the residual plot
residuals_data = []
for name, result in results.items():
    if name.startswith("1D-QSAR"):
        model_residuals = pd.DataFrame({"Model": name, "Error": result["error"]})
        residuals_data.append(model_residuals)

residuals_df = pd.concat(residuals_data, ignore_index=True)
residuals_df.sort_values(by="Error", ascending=True, inplace=True)
model_order = residuals_df.groupby("Model")["Error"].median().sort_values(ascending=True).index

# Draw a boxplot with seaborn, models ordered by median absolute error
plt.figure(figsize=(10, 7), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
sns.boxplot(y="Model", x="Error", data=residuals_df, order=model_order)
plt.xlabel("Abs Error", fontdict=font)
plt.ylabel("Models", fontdict=font)
plt.show()

2D-QSAR Molecular Representation

However, when facing property prediction problems whose biochemical mechanisms are still unclear, scientists may find it hard to design effective descriptors, and QSAR modeling then fails. Since molecular properties are largely determined by molecular structure, for example which functional groups a molecule carries, people wanted to bring bond connectivity into QSAR modeling. The field thus entered the 2D-QSAR stage.

Among the earlier proposals were fingerprint methods such as the Morgan fingerprint, which characterize a molecule by enumerating the bonding relationships between each atom and its neighbors. So that molecules of different sizes can be represented by numerical vectors of the same length, fingerprints are usually hashed to a fixed length, which means a molecular fingerprint is typically a high-dimensional 0/1 vector. In this setting, scientists usually pick machine learning methods that handle high-dimensional sparse vectors well, such as support vector machines and fully connected neural networks.
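To make the fixed-length point concrete, here is a minimal sketch (the two example molecules, ethanol and ibuprofen, are arbitrary choices and not from the dataset): molecules of very different sizes hash to bit vectors of exactly the same length.

from rdkit import Chem
from rdkit.Chem import AllChem

small = Chem.MolFromSmiles("CCO")                           # ethanol, 3 heavy atoms
large = Chem.MolFromSmiles("CC(C)Cc1ccc(cc1)C(C)C(=O)O")    # ibuprofen, 15 heavy atoms

fp_small = AllChem.GetMorganFingerprintAsBitVect(small, 3, nBits=512)
fp_large = AllChem.GetMorganFingerprintAsBitVect(large, 3, nBits=512)
# Both are 512 bits long; the larger molecule simply sets more bits
print(len(fp_small), fp_small.GetNumOnBits())
print(len(fp_large), fp_large.GetNumOnBits())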

As AI models developed, deep learning models kept being proposed and applied: recurrent neural networks (RNNs) that handle sequence data such as text, convolutional neural networks (CNNs) that handle images, and graph neural networks (GNNs) that handle unstructured graph data. QSAR modeling built matching molecular representations for whatever data these models can consume: applying RNNs to SMILES strings, CNNs to 2D molecular images, and GNNs to graphs derived from the bond topology (as sketched below), which produced a whole family of QSAR methods.
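As an illustrative sketch of that last conversion (the node and edge features here are a deliberately simplistic choice, not tied to any particular GNN library):

from rdkit import Chem

def mol_to_graph(smiles):
    # Turn a SMILES string into the (nodes, edges) pair a GNN would consume
    mol = Chem.MolFromSmiles(smiles)
    # Node features: here simply each atom's atomic number
    nodes = [atom.GetAtomicNum() for atom in mol.GetAtoms()]
    # Edge list: one (begin, end) index pair per bond
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

nodes, edges = mol_to_graph("c1ccccc1O")  # phenol
print(nodes)  # [6, 6, 6, 6, 6, 6, 8]
print(edges)  # the six ring bonds plus the C-O bond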

On the whole, though, the 2D-QSAR stage is about exploiting a molecule's bond connectivity (its topology), by whatever method, to model and predict molecular properties.

import numpy as np
from rdkit.Chem import AllChem

def calculate_2dqsar_repr(smiles):
    # Convert the SMILES string into an RDKit molecule object
    mol = Chem.MolFromSmiles(smiles)
    # Compute the Morgan fingerprint (radius 3, 512 bits)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 3, nBits=512)
    # Return the fingerprint as a NumPy array
    return np.array(fp)

train_data["2dqsar_mr"] = train_data["SMILES"].apply(calculate_2dqsar_repr)
test_data["2dqsar_mr"] = test_data["SMILES"].apply(calculate_2dqsar_repr)
print(train_data["2dqsar_mr"][:1].values.tolist())
[array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0,
       0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1,
       0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0,
       0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0,
       0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1,
       0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0,
       0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0,
       0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0,
       0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0,
       1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
       1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0,
       1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
       0, 1, 0, 0, 0, 1])]
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

# Convert the training and test data to NumPy arrays
train_x = np.array(train_data["2dqsar_mr"].values.tolist())
train_y = np.array(train_data["TARGET"].values.tolist())
test_x = np.array(test_data["2dqsar_mr"].values.tolist())
test_y = np.array(test_data["TARGET"].values.tolist())

# Define the list of regressors to evaluate
regressors = [
    ("Linear Regression", LinearRegression()),                          # linear regression
    ("Ridge Regression", Ridge(random_state=42)),                       # ridge regression
    ("Lasso Regression", Lasso(random_state=42)),                       # Lasso regression
    ("ElasticNet Regression", ElasticNet(random_state=42)),             # ElasticNet regression
    ("Support Vector", SVR()),                                          # support vector regression
    ("K-Nearest Neighbors", KNeighborsRegressor()),                     # k-nearest-neighbors regression
    ("Decision Tree", DecisionTreeRegressor(random_state=42)),          # decision tree regression
    ("Random Forest", RandomForestRegressor(random_state=42)),          # random forest regression
    ("Gradient Boosting", GradientBoostingRegressor(random_state=42)),  # gradient boosting regression
    ("XGBoost", XGBRegressor(random_state=42)),                         # XGBoost regression
    ("LightGBM", LGBMRegressor(random_state=42)),                       # LightGBM regression
    ("Multi-layer Perceptron", MLPRegressor(                            # multi-layer perceptron (neural network)
        hidden_layer_sizes=(128, 64, 32),
        learning_rate_init=0.0001,
        activation='relu', solver='adam',
        max_iter=10000, random_state=42)),
]

# Train each regressor, make predictions, and compute performance metrics
for name, regressor in regressors:
    # Fit on the training data
    regressor.fit(train_x, train_y)
    # Predict on the training and test data
    pred_train_y = regressor.predict(train_x)
    pred_test_y = regressor.predict(test_x)
    # Store the predictions alongside the data
    train_data[f"2D-QSAR-{name}_pred"] = pred_train_y
    test_data[f"2D-QSAR-{name}_pred"] = pred_test_y
    # Compute test-set metrics
    mse = mean_squared_error(test_y, pred_test_y)
    se = abs(test_y - pred_test_y)
    results[f"2D-QSAR-{name}"] = {"MSE": mse, "error": se}
    print(f"[2D-QSAR][{name}]\tMSE:{mse:.4f}")


[2D-QSAR][Linear Regression]	MSE:0.7156
[2D-QSAR][Ridge Regression]	MSE:0.7154
[2D-QSAR][Lasso Regression]	MSE:1.0109
[2D-QSAR][ElasticNet Regression]	MSE:1.0109
[2D-QSAR][Support Vector]	MSE:0.4554
[2D-QSAR][K-Nearest Neighbors]	MSE:0.4806
[2D-QSAR][Decision Tree]	MSE:0.8892
[2D-QSAR][Random Forest]	MSE:0.4717
[2D-QSAR][Gradient Boosting]	MSE:0.6694
[2D-QSAR][XGBoost]	MSE:0.4591
[2D-QSAR][LightGBM]	MSE:0.4797
[2D-QSAR][Multi-layer Perceptron]	MSE:0.6933
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import seaborn as sns

# Collect the absolute errors of each model for the residual plot
residuals_data = []
for name, result in results.items():
    if name.startswith("2D-QSAR"):
        model_residuals = pd.DataFrame({"Model": name, "Error": result["error"]})
        residuals_data.append(model_residuals)

residuals_df = pd.concat(residuals_data, ignore_index=True)
residuals_df.sort_values(by="Error", ascending=True, inplace=True)
model_order = residuals_df.groupby("Model")["Error"].median().sort_values(ascending=True).index

# Draw a boxplot with seaborn, models ordered by median absolute error
plt.figure(figsize=(10, 7), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
sns.boxplot(y="Model", x="Error", data=residuals_df, order=model_order)
plt.xlabel("Abs Error", fontdict=font)
plt.ylabel("Models", fontdict=font)
plt.show()

3D-QSAR Molecular Representation

However, because of intermolecular and intramolecular interactions, molecules with similar topology adopt somewhat different conformations in different environments, and it is each molecule's conformations in those environments, together with their energies, that determine the molecule's real properties. Scientists therefore wanted to bring three-dimensional structure into QSAR modeling to strengthen property prediction for specific scenarios. This stage is known as 3D-QSAR.

Comparative Molecular Field Analysis (CoMFA) is a widely used 3D-QSAR model. It represents a molecule's 3D structure by the forces (i.e., fields) felt at positions in the space around the molecule, with the positions usually sampled on a grid. There have been other fruitful attempts in the field as well, including representations based on electron density or 3D molecular images, and adding geometric information to molecular graphs.

To handle such high-dimensional spatial information, scientists usually turn to deep learning methods such as deeper fully connected networks (FCNNs), 3D-CNNs, and GNNs.
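The representation built in the next cell is a Coulomb-matrix-style encoding. For reference, in the standard Coulomb matrix of Rupp et al. (2012) the entries are built from nuclear charges $Z_i$ and atomic positions $R_i$:

$$M_{ij} = \begin{cases} 0.5\,Z_i^{2.4}, & i = j \\ \dfrac{Z_i Z_j}{\lVert R_i - R_j \rVert}, & i \neq j \end{cases}$$

Note two deviations in the code below: it uses Gasteiger partial charges $q_i$ in place of nuclear charges (with $0.5\,q_i^2$ on the diagonal), and with the default three_d=False it uses 2D layout coordinates rather than an embedded 3D conformer.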

from rdkit.Chem import rdPartialCharges

def calculate_3dqsar_repr(SMILES, max_atoms=100, three_d=False):
    mol = Chem.MolFromSmiles(SMILES)  # build a molecule object from the SMILES string
    mol = Chem.AddHs(mol)  # add explicit hydrogens
    if three_d:
        AllChem.EmbedMolecule(mol, AllChem.ETKDG())  # generate 3D coordinates
    else:
        AllChem.Compute2DCoords(mol)  # generate 2D layout coordinates
    natoms = mol.GetNumAtoms()  # number of atoms
    rdPartialCharges.ComputeGasteigerCharges(mol)  # compute Gasteiger partial charges
    charges = np.array([float(atom.GetProp("_GasteigerCharge")) for atom in mol.GetAtoms()])  # per-atom charges
    coords = mol.GetConformer().GetPositions()  # atomic coordinates
    coulomb_matrix = np.zeros((max_atoms, max_atoms))  # initialize the Coulomb matrix
    n = min(max_atoms, natoms)
    for i in range(n):  # loop over atom pairs
        for j in range(i, n):
            if i == j:
                coulomb_matrix[i, j] = 0.5 * charges[i] ** 2
            if i != j:
                delta = np.linalg.norm(coords[i] - coords[j])  # interatomic distance
                if delta != 0:
                    coulomb_matrix[i, j] = charges[i] * charges[j] / delta  # off-diagonal element
                    coulomb_matrix[j, i] = coulomb_matrix[i, j]
    coulomb_matrix = np.where(np.isinf(coulomb_matrix), 0, coulomb_matrix)  # clean up infinities
    coulomb_matrix = np.where(np.isnan(coulomb_matrix), 0, coulomb_matrix)  # clean up NaNs
    return coulomb_matrix.reshape(max_atoms * max_atoms).tolist()  # flatten to a list and return


train_data["3dqsar_mr"] = train_data["SMILES"].apply(calculate_3dqsar_repr)
test_data["3dqsar_mr"] = test_data["SMILES"].apply(calculate_3dqsar_repr)
print("length:", len(train_data["3dqsar_mr"][:1].values.tolist()[0]))
length: 10000

As you can see, 3D-QSAR produces a very long molecular representation (10,000 dimensions here), so we first reduce its dimensionality with PCA.

from sklearn.decomposition import PCA

# Create the PCA object; n_components=512 reduces the representation to 512 dimensions
pca = PCA(n_components=512)

# Fit on the training data and transform it
train_data_pca = pca.fit_transform(np.array(train_data["3dqsar_mr"].tolist()))

# Transform the test data with the fitted PCA
test_data_pca = pca.transform(np.array(test_data["3dqsar_mr"].tolist()))

# Store the reduced representation as a new column
train_data["3dqsar_mr_pca"] = train_data_pca.tolist()
test_data["3dqsar_mr_pca"] = test_data_pca.tolist()
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import GradientBoostingRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

# Convert the training and test data to NumPy arrays
train_x = np.array(train_data["3dqsar_mr_pca"].values.tolist())
train_y = np.array(train_data["TARGET"].values.tolist())
test_x = np.array(test_data["3dqsar_mr_pca"].values.tolist())
test_y = np.array(test_data["TARGET"].values.tolist())

# Define the list of regressors to evaluate
regressors = [
    ("Linear Regression", LinearRegression()),                          # linear regression
    ("Ridge Regression", Ridge(random_state=42)),                       # ridge regression
    ("Lasso Regression", Lasso(random_state=42)),                       # Lasso regression
    ("ElasticNet Regression", ElasticNet(random_state=42)),             # ElasticNet regression
    ("Support Vector", SVR()),                                          # support vector regression
    ("K-Nearest Neighbors", KNeighborsRegressor()),                     # k-nearest-neighbors regression
    ("Decision Tree", DecisionTreeRegressor(random_state=42)),          # decision tree regression
    ("Random Forest", RandomForestRegressor(random_state=42)),          # random forest regression
    ("Gradient Boosting", GradientBoostingRegressor(random_state=42)),  # gradient boosting regression
    ("XGBoost", XGBRegressor(random_state=42)),                         # XGBoost regression
    ("LightGBM", LGBMRegressor(random_state=42)),                       # LightGBM regression
    ("Multi-layer Perceptron", MLPRegressor(                            # multi-layer perceptron (neural network)
        hidden_layer_sizes=(128, 64, 32),
        learning_rate_init=0.0001,
        activation='relu', solver='adam',
        max_iter=10000, random_state=42)),
]

# Train each regressor, make predictions, and compute performance metrics
for name, regressor in regressors:
    # Fit on the training data
    regressor.fit(train_x, train_y)
    # Predict on the training and test data
    pred_train_y = regressor.predict(train_x)
    pred_test_y = regressor.predict(test_x)
    # Store the predictions alongside the data
    train_data[f"3D-QSAR-{name}_pred"] = pred_train_y
    test_data[f"3D-QSAR-{name}_pred"] = pred_test_y
    # Compute test-set metrics
    mse = mean_squared_error(test_y, pred_test_y)
    se = abs(test_y - pred_test_y)
    results[f"3D-QSAR-{name}"] = {"MSE": mse, "error": se}
    print(f"[3D-QSAR][{name}]\tMSE:{mse:.4f}")


[3D-QSAR][Linear Regression]	MSE:34953863171341550859845632.0000
[3D-QSAR][Ridge Regression]	MSE:4392658479297741235159040.0000
[3D-QSAR][Lasso Regression]	MSE:805.7580
[3D-QSAR][ElasticNet Regression]	MSE:2390.2618
[3D-QSAR][Support Vector]	MSE:1.0427
[3D-QSAR][K-Nearest Neighbors]	MSE:1.1943
[3D-QSAR][Decision Tree]	MSE:1.5984
[3D-QSAR][Random Forest]	MSE:0.7831
[3D-QSAR][Gradient Boosting]	MSE:0.8663
[3D-QSAR][XGBoost]	MSE:0.8103
[3D-QSAR][LightGBM]	MSE:0.7307
[3D-QSAR][Multi-layer Perceptron]	MSE:3482168556886455484416.0000
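The astronomical MSEs of the unregularized linear models and the MLP deserve a note: the trailing PCA components have near-zero variance, which likely makes the least-squares problem ill-conditioned and gradient-based training unstable, whereas tree-based models are insensitive to feature scale. Standardizing the features often tames this; a minimal sketch (not run in this notebook) is below.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Standardize each feature to zero mean / unit variance before the linear fit
scaled_lr = make_pipeline(StandardScaler(), LinearRegression())
scaled_lr.fit(train_x, train_y)
print(mean_squared_error(test_y, scaled_lr.predict(test_x)))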
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import seaborn as sns

# Collect the absolute errors of each model for the residual plot
residuals_data = []
for name, result in results.items():
    if name.startswith("3D-QSAR"):
        if result["MSE"] > 10:
            continue  # skip the models whose error exploded
        model_residuals = pd.DataFrame({"Model": name, "Error": result["error"]})
        residuals_data.append(model_residuals)

residuals_df = pd.concat(residuals_data, ignore_index=True)
residuals_df.sort_values(by="Error", ascending=True, inplace=True)
model_order = residuals_df.groupby("Model")["Error"].median().sort_values(ascending=True).index

# Draw a boxplot with seaborn, models ordered by median absolute error
plt.figure(figsize=(10, 7), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
sns.boxplot(y="Model", x="Error", data=residuals_df, order=model_order)
plt.xlabel("Abs Error", fontdict=font)
plt.ylabel("Models", fontdict=font)
plt.show()

Uni-Mol: A Molecular Representation Learning and Pretraining Framework

Pretrained Models

A major challenge for QSAR modeling in drug discovery is limited data. Activity data are expensive to obtain and experimentally demanding, which leads to a shortage of labeled data. Too little data hurts a model's predictive power, because the model may not capture enough information to describe the relationship between compound structure and biological activity.

Facing this shortage of labeled data, fields where machine learning has matured further, such as natural language processing (NLP) and computer vision (CV), have converged on the pretrain-finetune paradigm. Pretraining means first training the model on large amounts of unlabeled data via self-supervised learning, so that it acquires basic information and general capabilities; the model is then fine-tuned with supervised learning on the limited labeled data, giving it inference abilities for the specific problem.

For example, suppose I want to classify pictures of cats and dogs but have few labeled images. I can first pretrain the model on a large set of unlabeled images so that it learns the basics of points, lines, and contours, and then give it labeled cat/dog pictures for supervised training; the model can then quickly learn, building on that contour knowledge, what makes a cat a cat and a dog a dog.

Pretraining makes full use of the information in abundant, easy-to-obtain unlabeled data, improving the model's generalization and predictive performance. In QSAR modeling, we can borrow the same idea to address the quantity and quality of the data.

A Brief Introduction to Uni-Mol

Uni-Mol is a universal molecular representation learning framework based on 3D molecular structures, released by DP Technology (深势科技) in May 2022. Uni-Mol takes 3D molecular structures as model input and is pretrained on roughly 200 million small-molecule conformations and 3 million protein surface pockets, using two self-supervised tasks: atom type recovery and atom coordinate recovery.

Uni-Mol paper: https://openreview.net/forum?id=6K2RM6wVqKu
Source code: https://github.com/dptech-corp/Uni-Mol

Representation learning that starts from 3D information, combined with an effective pretraining scheme, lets Uni-Mol surpass the SOTA (state of the art) on almost all downstream tasks involving drug molecules and protein pockets. It also enables Uni-Mol to directly handle tasks tied to 3D conformation generation, such as molecular conformation generation and protein-ligand binding pose prediction, outperforming existing solutions. The paper was accepted at ICLR 2023, a top machine learning conference.

Next, we use Uni-Mol to build our hERG activity prediction task:

from unimol import MolTrain

# Fine-tune the pretrained Uni-Mol model on the hERG training split
clf = MolTrain(task='regression',
               data_type='molecule',
               epochs=50,
               learning_rate=0.0001,
               batch_size=16,
               early_stopping=10,
               metrics='mse',
               split='random',
               save_path='./exp_reg_hERG_0616',
               )

clf.fit('datasets/hERG_train.csv')
2023-06-17 12:35:53 | unimol/data/datareader.py | 138 | INFO | Uni-Mol(QSAR) | Anomaly clean with 3 sigma threshold: 7363 -> 7232
2023-06-17 12:35:55 | unimol/data/conformer.py | 56 | INFO | Uni-Mol(QSAR) | Start generating conformers...
7232it [02:54, 41.48it/s]
2023-06-17 12:38:49 | unimol/data/conformer.py | 60 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.01% of molecules.
2023-06-17 12:38:49 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.07% of molecules.
2023-06-17 12:38:49 | unimol/train.py | 86 | INFO | Uni-Mol(QSAR) | Output directory already exists: ./exp_reg_hERG_0616
2023-06-17 12:38:49 | unimol/train.py | 87 | INFO | Uni-Mol(QSAR) | Warning: Overwrite output directory: ./exp_reg_hERG_0616
2023-06-17 12:38:50 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 12:38:50 | unimol/models/nnmodel.py | 100 | INFO | Uni-Mol(QSAR) | start training Uni-Mol:unimolv1
2023-06-17 12:39:07 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [1/50] train_loss: 1.0198, val_loss: 0.9594, val_mse: 0.6890, lr: 0.000067, 14.3s
2023-06-17 12:39:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [2/50] train_loss: 0.9963, val_loss: 0.9689, val_mse: 0.7027, lr: 0.000099, 8.2s
2023-06-17 12:39:25 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [3/50] train_loss: 0.9583, val_loss: 1.0292, val_mse: 0.7534, lr: 0.000097, 8.1s
2023-06-17 12:39:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [4/50] train_loss: 0.9276, val_loss: 0.9263, val_mse: 0.6723, lr: 0.000095, 8.0s
2023-06-17 12:39:41 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [5/50] train_loss: 0.9060, val_loss: 0.8094, val_mse: 0.5913, lr: 0.000093, 8.1s
2023-06-17 12:39:50 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [6/50] train_loss: 0.8745, val_loss: 0.7400, val_mse: 0.5398, lr: 0.000091, 8.1s
2023-06-17 12:40:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [7/50] train_loss: 0.8458, val_loss: 0.7638, val_mse: 0.5611, lr: 0.000089, 8.9s
2023-06-17 12:40:08 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [8/50] train_loss: 0.8492, val_loss: 0.7316, val_mse: 0.5450, lr: 0.000087, 8.1s
2023-06-17 12:40:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [9/50] train_loss: 0.8398, val_loss: 0.6793, val_mse: 0.5067, lr: 0.000085, 8.0s
2023-06-17 12:40:24 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [10/50] train_loss: 0.8264, val_loss: 0.7524, val_mse: 0.5610, lr: 0.000082, 8.1s
2023-06-17 12:40:32 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [11/50] train_loss: 0.8191, val_loss: 0.6767, val_mse: 0.4993, lr: 0.000080, 8.2s
2023-06-17 12:40:41 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [12/50] train_loss: 0.8035, val_loss: 0.6709, val_mse: 0.4910, lr: 0.000078, 8.1s
2023-06-17 12:40:50 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [13/50] train_loss: 0.7891, val_loss: 0.6657, val_mse: 0.4940, lr: 0.000076, 8.2s
2023-06-17 12:40:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [14/50] train_loss: 0.7720, val_loss: 0.6442, val_mse: 0.4793, lr: 0.000074, 8.2s
2023-06-17 12:41:07 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [15/50] train_loss: 0.7678, val_loss: 0.6312, val_mse: 0.4662, lr: 0.000072, 8.3s
2023-06-17 12:41:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [16/50] train_loss: 0.7501, val_loss: 0.6796, val_mse: 0.5068, lr: 0.000070, 8.2s
2023-06-17 12:41:24 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [17/50] train_loss: 0.7714, val_loss: 0.5719, val_mse: 0.4233, lr: 0.000068, 8.2s
2023-06-17 12:41:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [18/50] train_loss: 0.7501, val_loss: 0.6096, val_mse: 0.4527, lr: 0.000066, 8.1s
2023-06-17 12:41:41 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [19/50] train_loss: 0.7531, val_loss: 0.7351, val_mse: 0.5461, lr: 0.000064, 8.1s
2023-06-17 12:41:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [20/50] train_loss: 0.7357, val_loss: 0.5855, val_mse: 0.4357, lr: 0.000062, 8.1s
2023-06-17 12:41:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [21/50] train_loss: 0.7334, val_loss: 0.5762, val_mse: 0.4231, lr: 0.000060, 8.2s
2023-06-17 12:42:07 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [22/50] train_loss: 0.7062, val_loss: 0.5763, val_mse: 0.4312, lr: 0.000058, 8.1s
2023-06-17 12:42:15 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [23/50] train_loss: 0.7371, val_loss: 0.5740, val_mse: 0.4278, lr: 0.000056, 8.2s
2023-06-17 12:42:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [24/50] train_loss: 0.7131, val_loss: 0.6085, val_mse: 0.4584, lr: 0.000054, 8.1s
2023-06-17 12:42:31 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [25/50] train_loss: 0.7075, val_loss: 0.5816, val_mse: 0.4340, lr: 0.000052, 8.1s
2023-06-17 12:42:39 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [26/50] train_loss: 0.6932, val_loss: 0.5505, val_mse: 0.4123, lr: 0.000049, 8.0s
2023-06-17 12:42:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [27/50] train_loss: 0.7009, val_loss: 0.8499, val_mse: 0.6284, lr: 0.000047, 8.2s
2023-06-17 12:42:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [28/50] train_loss: 0.6985, val_loss: 0.6224, val_mse: 0.4643, lr: 0.000045, 8.1s
2023-06-17 12:43:05 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [29/50] train_loss: 0.6649, val_loss: 0.6121, val_mse: 0.4566, lr: 0.000043, 8.3s
2023-06-17 12:43:13 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [30/50] train_loss: 0.6705, val_loss: 0.5974, val_mse: 0.4445, lr: 0.000041, 8.0s
2023-06-17 12:43:21 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [31/50] train_loss: 0.6639, val_loss: 0.6243, val_mse: 0.4603, lr: 0.000039, 8.1s
2023-06-17 12:43:29 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [32/50] train_loss: 0.6774, val_loss: 0.5461, val_mse: 0.4065, lr: 0.000037, 8.0s
2023-06-17 12:43:38 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [33/50] train_loss: 0.6568, val_loss: 0.5854, val_mse: 0.4339, lr: 0.000035, 8.1s
2023-06-17 12:43:46 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [34/50] train_loss: 0.6483, val_loss: 0.6151, val_mse: 0.4625, lr: 0.000033, 8.1s
2023-06-17 12:43:54 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [35/50] train_loss: 0.6560, val_loss: 0.5742, val_mse: 0.4311, lr: 0.000031, 8.1s
2023-06-17 12:44:03 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [36/50] train_loss: 0.6448, val_loss: 0.6001, val_mse: 0.4513, lr: 0.000029, 8.6s
2023-06-17 12:44:11 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [37/50] train_loss: 0.6241, val_loss: 0.6042, val_mse: 0.4489, lr: 0.000027, 8.2s
2023-06-17 12:44:19 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [38/50] train_loss: 0.6224, val_loss: 0.6076, val_mse: 0.4564, lr: 0.000025, 8.1s
2023-06-17 12:44:27 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [39/50] train_loss: 0.6247, val_loss: 0.5829, val_mse: 0.4303, lr: 0.000023, 8.2s
2023-06-17 12:44:36 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [40/50] train_loss: 0.6328, val_loss: 0.5770, val_mse: 0.4261, lr: 0.000021, 8.2s
2023-06-17 12:44:44 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [41/50] train_loss: 0.6317, val_loss: 0.5813, val_mse: 0.4320, lr: 0.000019, 8.1s
2023-06-17 12:44:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [42/50] train_loss: 0.6271, val_loss: 0.6083, val_mse: 0.4543, lr: 0.000016, 8.1s
2023-06-17 12:44:52 | unimol/utils/metrics.py | 255 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 42
2023-06-17 12:44:53 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 12:44:54 | unimol/models/nnmodel.py | 123 | INFO | Uni-Mol(QSAR) | fold 0, result {'mse': 0.40648797, 'mae': 0.4646326, 'spearmanr': 0.685573097620505, 'rmse': 0.63756406, 'r2': 0.44125538296758327}
2023-06-17 12:44:55 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 12:45:03 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [1/50] train_loss: 1.0218, val_loss: 1.0607, val_mse: 0.7615, lr: 0.000067, 8.2s
2023-06-17 12:45:12 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [2/50] train_loss: 1.0089, val_loss: 0.9100, val_mse: 0.6544, lr: 0.000099, 8.1s
2023-06-17 12:45:21 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [3/50] train_loss: 0.9560, val_loss: 0.8490, val_mse: 0.6177, lr: 0.000097, 8.1s
2023-06-17 12:45:30 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [4/50] train_loss: 0.9313, val_loss: 0.8471, val_mse: 0.6134, lr: 0.000095, 8.1s
2023-06-17 12:45:39 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [5/50] train_loss: 0.9388, val_loss: 0.7970, val_mse: 0.5788, lr: 0.000093, 8.2s
2023-06-17 12:45:48 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [6/50] train_loss: 0.8808, val_loss: 0.7646, val_mse: 0.5524, lr: 0.000091, 8.1s
2023-06-17 12:45:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [7/50] train_loss: 0.8609, val_loss: 0.8060, val_mse: 0.5749, lr: 0.000089, 8.1s
2023-06-17 12:46:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [8/50] train_loss: 0.8700, val_loss: 0.7409, val_mse: 0.5293, lr: 0.000087, 8.1s
2023-06-17 12:46:15 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [9/50] train_loss: 0.8086, val_loss: 0.7425, val_mse: 0.5365, lr: 0.000085, 8.1s
2023-06-17 12:46:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [10/50] train_loss: 0.8355, val_loss: 0.8126, val_mse: 0.5832, lr: 0.000082, 8.2s
2023-06-17 12:46:31 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [11/50] train_loss: 0.8044, val_loss: 0.6809, val_mse: 0.4987, lr: 0.000080, 8.2s
2023-06-17 12:46:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [12/50] train_loss: 0.7771, val_loss: 0.6235, val_mse: 0.4526, lr: 0.000078, 8.2s
2023-06-17 12:46:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [13/50] train_loss: 0.7960, val_loss: 0.6240, val_mse: 0.4551, lr: 0.000076, 8.1s
2023-06-17 12:46:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [14/50] train_loss: 0.7717, val_loss: 0.6819, val_mse: 0.5010, lr: 0.000074, 8.2s
2023-06-17 12:47:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [15/50] train_loss: 0.7460, val_loss: 0.6513, val_mse: 0.4798, lr: 0.000072, 8.0s
2023-06-17 12:47:14 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [16/50] train_loss: 0.7331, val_loss: 0.6302, val_mse: 0.4661, lr: 0.000070, 8.2s
2023-06-17 12:47:22 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [17/50] train_loss: 0.7535, val_loss: 0.5941, val_mse: 0.4365, lr: 0.000068, 8.2s
2023-06-17 12:47:31 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [18/50] train_loss: 0.7380, val_loss: 0.5936, val_mse: 0.4342, lr: 0.000066, 8.1s
2023-06-17 12:47:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [19/50] train_loss: 0.7061, val_loss: 0.6066, val_mse: 0.4422, lr: 0.000064, 8.2s
2023-06-17 12:47:48 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [20/50] train_loss: 0.7326, val_loss: 0.6399, val_mse: 0.4771, lr: 0.000062, 8.2s
2023-06-17 12:47:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [21/50] train_loss: 0.7176, val_loss: 0.6288, val_mse: 0.4616, lr: 0.000060, 9.0s
2023-06-17 12:48:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [22/50] train_loss: 0.7048, val_loss: 0.6277, val_mse: 0.4632, lr: 0.000058, 9.2s
2023-06-17 12:48:15 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [23/50] train_loss: 0.6901, val_loss: 0.5978, val_mse: 0.4354, lr: 0.000056, 9.1s
2023-06-17 12:48:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [24/50] train_loss: 0.6980, val_loss: 0.5127, val_mse: 0.3796, lr: 0.000054, 8.3s
2023-06-17 12:48:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [25/50] train_loss: 0.6828, val_loss: 0.6137, val_mse: 0.4506, lr: 0.000052, 8.1s
2023-06-17 12:48:41 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [26/50] train_loss: 0.6848, val_loss: 0.5487, val_mse: 0.4038, lr: 0.000049, 8.1s
2023-06-17 12:48:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [27/50] train_loss: 0.6755, val_loss: 0.5651, val_mse: 0.4137, lr: 0.000047, 8.2s
2023-06-17 12:48:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [28/50] train_loss: 0.6721, val_loss: 0.5640, val_mse: 0.4132, lr: 0.000045, 8.2s
2023-06-17 12:49:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [29/50] train_loss: 0.6658, val_loss: 0.5870, val_mse: 0.4300, lr: 0.000043, 8.1s
2023-06-17 12:49:14 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [30/50] train_loss: 0.6704, val_loss: 0.5301, val_mse: 0.3882, lr: 0.000041, 8.2s
2023-06-17 12:49:22 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [31/50] train_loss: 0.6458, val_loss: 0.5127, val_mse: 0.3740, lr: 0.000039, 8.1s
2023-06-17 12:49:31 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [32/50] train_loss: 0.6604, val_loss: 0.5854, val_mse: 0.4273, lr: 0.000037, 8.1s
2023-06-17 12:49:39 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [33/50] train_loss: 0.6626, val_loss: 0.6108, val_mse: 0.4480, lr: 0.000035, 8.2s
2023-06-17 12:49:47 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [34/50] train_loss: 0.6247, val_loss: 0.5063, val_mse: 0.3733, lr: 0.000033, 8.2s
2023-06-17 12:49:56 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [35/50] train_loss: 0.6381, val_loss: 0.5741, val_mse: 0.4197, lr: 0.000031, 8.1s
2023-06-17 12:50:05 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [36/50] train_loss: 0.6286, val_loss: 0.5272, val_mse: 0.3871, lr: 0.000029, 8.0s
2023-06-17 12:50:13 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [37/50] train_loss: 0.6384, val_loss: 0.5324, val_mse: 0.3904, lr: 0.000027, 8.2s
2023-06-17 12:50:21 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [38/50] train_loss: 0.6329, val_loss: 0.5152, val_mse: 0.3793, lr: 0.000025, 8.1s
2023-06-17 12:50:29 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [39/50] train_loss: 0.6288, val_loss: 0.5184, val_mse: 0.3816, lr: 0.000023, 8.0s
2023-06-17 12:50:37 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [40/50] train_loss: 0.6354, val_loss: 0.5204, val_mse: 0.3816, lr: 0.000021, 8.1s
2023-06-17 12:50:45 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [41/50] train_loss: 0.6100, val_loss: 0.5263, val_mse: 0.3872, lr: 0.000019, 8.1s
2023-06-17 12:50:53 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [42/50] train_loss: 0.6270, val_loss: 0.5422, val_mse: 0.3978, lr: 0.000016, 8.2s
2023-06-17 12:51:01 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [43/50] train_loss: 0.5859, val_loss: 0.5251, val_mse: 0.3867, lr: 0.000014, 8.1s
2023-06-17 12:51:10 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [44/50] train_loss: 0.5847, val_loss: 0.5394, val_mse: 0.3978, lr: 0.000012, 8.7s
2023-06-17 12:51:10 | unimol/utils/metrics.py | 255 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 44
2023-06-17 12:51:11 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 12:51:13 | unimol/models/nnmodel.py | 123 | INFO | Uni-Mol(QSAR) | fold 1, result {'mse': 0.3733483, 'mae': 0.43755728, 'spearmanr': 0.7287544581640742, 'rmse': 0.61102235, 'r2': 0.4841641368912225}
2023-06-17 12:51:13 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 12:51:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [1/50] train_loss: 1.0566, val_loss: 0.9536, val_mse: 0.7052, lr: 0.000067, 9.2s
2023-06-17 12:51:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [2/50] train_loss: 1.0477, val_loss: 0.8633, val_mse: 0.6460, lr: 0.000099, 9.1s
2023-06-17 12:51:43 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [3/50] train_loss: 1.0039, val_loss: 0.9287, val_mse: 0.6972, lr: 0.000097, 9.2s
2023-06-17 12:51:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [4/50] train_loss: 0.9543, val_loss: 0.7414, val_mse: 0.5560, lr: 0.000095, 8.8s
2023-06-17 12:52:01 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [5/50] train_loss: 0.9125, val_loss: 0.7524, val_mse: 0.5660, lr: 0.000093, 8.1s
2023-06-17 12:52:09 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [6/50] train_loss: 0.9342, val_loss: 0.6828, val_mse: 0.5135, lr: 0.000091, 8.1s
2023-06-17 12:52:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [7/50] train_loss: 0.8851, val_loss: 0.6962, val_mse: 0.5234, lr: 0.000089, 8.2s
2023-06-17 12:52:26 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [8/50] train_loss: 0.8771, val_loss: 0.6158, val_mse: 0.4621, lr: 0.000087, 8.1s
2023-06-17 12:52:35 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [9/50] train_loss: 0.8470, val_loss: 0.5859, val_mse: 0.4403, lr: 0.000085, 8.1s
2023-06-17 12:52:44 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [10/50] train_loss: 0.8372, val_loss: 0.6298, val_mse: 0.4755, lr: 0.000082, 8.4s
2023-06-17 12:52:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [11/50] train_loss: 0.8359, val_loss: 0.7237, val_mse: 0.5394, lr: 0.000080, 8.1s
2023-06-17 12:53:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [12/50] train_loss: 0.8217, val_loss: 0.5328, val_mse: 0.4006, lr: 0.000078, 8.0s
2023-06-17 12:53:09 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [13/50] train_loss: 0.8404, val_loss: 0.7307, val_mse: 0.5494, lr: 0.000076, 7.9s
2023-06-17 12:53:17 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [14/50] train_loss: 0.8167, val_loss: 0.5784, val_mse: 0.4335, lr: 0.000074, 8.2s
2023-06-17 12:53:25 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [15/50] train_loss: 0.7869, val_loss: 0.5668, val_mse: 0.4273, lr: 0.000072, 8.1s
2023-06-17 12:53:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [16/50] train_loss: 0.7866, val_loss: 0.5809, val_mse: 0.4385, lr: 0.000070, 8.2s
2023-06-17 12:53:41 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [17/50] train_loss: 0.7720, val_loss: 0.5132, val_mse: 0.3853, lr: 0.000068, 8.2s
2023-06-17 12:53:50 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [18/50] train_loss: 0.7649, val_loss: 0.6460, val_mse: 0.4820, lr: 0.000066, 8.1s
2023-06-17 12:53:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [19/50] train_loss: 0.7703, val_loss: 0.5627, val_mse: 0.4162, lr: 0.000064, 8.1s
2023-06-17 12:54:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [20/50] train_loss: 0.7510, val_loss: 0.5498, val_mse: 0.4076, lr: 0.000062, 8.2s
2023-06-17 12:54:15 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [21/50] train_loss: 0.7405, val_loss: 0.6078, val_mse: 0.4568, lr: 0.000060, 8.2s
2023-06-17 12:54:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [22/50] train_loss: 0.7389, val_loss: 0.5206, val_mse: 0.3895, lr: 0.000058, 7.9s
2023-06-17 12:54:31 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [23/50] train_loss: 0.7120, val_loss: 0.5197, val_mse: 0.3883, lr: 0.000056, 8.1s
2023-06-17 12:54:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [24/50] train_loss: 0.7085, val_loss: 0.5299, val_mse: 0.3929, lr: 0.000054, 9.1s
2023-06-17 12:54:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [25/50] train_loss: 0.7204, val_loss: 0.4924, val_mse: 0.3634, lr: 0.000052, 9.0s
2023-06-17 12:54:58 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [26/50] train_loss: 0.7203, val_loss: 0.4618, val_mse: 0.3460, lr: 0.000049, 8.8s
2023-06-17 12:55:07 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [27/50] train_loss: 0.7021, val_loss: 0.5251, val_mse: 0.3886, lr: 0.000047, 8.3s
2023-06-17 12:55:15 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [28/50] train_loss: 0.7067, val_loss: 0.5120, val_mse: 0.3827, lr: 0.000045, 8.0s
2023-06-17 12:55:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [29/50] train_loss: 0.6974, val_loss: 0.5871, val_mse: 0.4380, lr: 0.000043, 8.0s
2023-06-17 12:55:32 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [30/50] train_loss: 0.7017, val_loss: 0.4469, val_mse: 0.3339, lr: 0.000041, 8.2s
2023-06-17 12:55:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [31/50] train_loss: 0.6828, val_loss: 0.5249, val_mse: 0.3889, lr: 0.000039, 7.9s
2023-06-17 12:55:48 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [32/50] train_loss: 0.6813, val_loss: 0.5739, val_mse: 0.4252, lr: 0.000037, 8.2s
2023-06-17 12:55:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [33/50] train_loss: 0.6845, val_loss: 0.4893, val_mse: 0.3636, lr: 0.000035, 8.5s
2023-06-17 12:56:05 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [34/50] train_loss: 0.6813, val_loss: 0.6106, val_mse: 0.4473, lr: 0.000033, 8.2s
2023-06-17 12:56:13 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [35/50] train_loss: 0.6657, val_loss: 0.5357, val_mse: 0.4016, lr: 0.000031, 8.4s
2023-06-17 12:56:22 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [36/50] train_loss: 0.6654, val_loss: 0.5157, val_mse: 0.3824, lr: 0.000029, 8.1s
2023-06-17 12:56:30 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [37/50] train_loss: 0.6520, val_loss: 0.5144, val_mse: 0.3813, lr: 0.000027, 8.0s
2023-06-17 12:56:38 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [38/50] train_loss: 0.6396, val_loss: 0.5489, val_mse: 0.4070, lr: 0.000025, 8.2s
2023-06-17 12:56:46 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [39/50] train_loss: 0.6530, val_loss: 0.5142, val_mse: 0.3851, lr: 0.000023, 8.0s
2023-06-17 12:56:54 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [40/50] train_loss: 0.6644, val_loss: 0.5770, val_mse: 0.4274, lr: 0.000021, 8.1s
2023-06-17 12:56:54 | unimol/utils/metrics.py | 255 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 40
2023-06-17 12:56:56 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 12:56:57 | unimol/models/nnmodel.py | 123 | INFO | Uni-Mol(QSAR) | fold 2, result {'mse': 0.33389777, 'mae': 0.42934737, 'spearmanr': 0.7043604826771424, 'rmse': 0.5778389, 'r2': 0.5037052463708651}
2023-06-17 12:56:58 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 12:57:06 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [1/50] train_loss: 0.9963, val_loss: 1.1187, val_mse: 0.8181, lr: 0.000067, 8.1s
2023-06-17 12:57:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [2/50] train_loss: 0.9809, val_loss: 1.0758, val_mse: 0.7894, lr: 0.000099, 8.9s
2023-06-17 12:57:25 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [3/50] train_loss: 0.9744, val_loss: 0.9197, val_mse: 0.6816, lr: 0.000097, 8.3s
2023-06-17 12:57:34 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [4/50] train_loss: 0.8896, val_loss: 0.8931, val_mse: 0.6608, lr: 0.000095, 8.1s
2023-06-17 12:57:43 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [5/50] train_loss: 0.8730, val_loss: 0.7967, val_mse: 0.5925, lr: 0.000093, 8.1s
2023-06-17 12:57:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [6/50] train_loss: 0.8742, val_loss: 0.8469, val_mse: 0.6300, lr: 0.000091, 8.1s
2023-06-17 12:58:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [7/50] train_loss: 0.8481, val_loss: 0.7456, val_mse: 0.5564, lr: 0.000089, 8.1s
2023-06-17 12:58:09 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [8/50] train_loss: 0.8167, val_loss: 0.7243, val_mse: 0.5415, lr: 0.000087, 8.1s
2023-06-17 12:58:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [9/50] train_loss: 0.7877, val_loss: 0.7422, val_mse: 0.5536, lr: 0.000085, 8.1s
2023-06-17 12:58:26 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [10/50] train_loss: 0.7905, val_loss: 0.6694, val_mse: 0.5022, lr: 0.000082, 8.0s
2023-06-17 12:58:35 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [11/50] train_loss: 0.7806, val_loss: 0.6803, val_mse: 0.5080, lr: 0.000080, 8.2s
2023-06-17 12:58:43 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [12/50] train_loss: 0.7693, val_loss: 0.7758, val_mse: 0.5808, lr: 0.000078, 8.2s
2023-06-17 12:58:51 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [13/50] train_loss: 0.8012, val_loss: 0.6340, val_mse: 0.4780, lr: 0.000076, 8.1s
2023-06-17 12:59:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [14/50] train_loss: 0.7451, val_loss: 0.7638, val_mse: 0.5757, lr: 0.000074, 8.0s
2023-06-17 12:59:08 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [15/50] train_loss: 0.7363, val_loss: 0.6183, val_mse: 0.4703, lr: 0.000072, 8.3s
2023-06-17 12:59:17 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [16/50] train_loss: 0.7324, val_loss: 0.6532, val_mse: 0.4934, lr: 0.000070, 8.2s
2023-06-17 12:59:26 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [17/50] train_loss: 0.7169, val_loss: 0.6582, val_mse: 0.4971, lr: 0.000068, 8.6s
2023-06-17 12:59:35 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [18/50] train_loss: 0.6963, val_loss: 0.6372, val_mse: 0.4803, lr: 0.000066, 9.2s
2023-06-17 12:59:44 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [19/50] train_loss: 0.7022, val_loss: 0.6088, val_mse: 0.4620, lr: 0.000064, 9.0s
2023-06-17 12:59:53 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [20/50] train_loss: 0.6787, val_loss: 0.5849, val_mse: 0.4404, lr: 0.000062, 8.4s
2023-06-17 13:00:02 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [21/50] train_loss: 0.7170, val_loss: 0.6280, val_mse: 0.4728, lr: 0.000060, 8.1s
2023-06-17 13:00:10 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [22/50] train_loss: 0.6678, val_loss: 0.6732, val_mse: 0.5074, lr: 0.000058, 8.1s
2023-06-17 13:00:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [23/50] train_loss: 0.6440, val_loss: 0.5854, val_mse: 0.4393, lr: 0.000056, 7.9s
2023-06-17 13:00:26 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [24/50] train_loss: 0.6608, val_loss: 0.6185, val_mse: 0.4654, lr: 0.000054, 8.1s
2023-06-17 13:00:34 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [25/50] train_loss: 0.6663, val_loss: 0.6564, val_mse: 0.4948, lr: 0.000052, 8.0s
2023-06-17 13:00:42 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [26/50] train_loss: 0.6536, val_loss: 0.5631, val_mse: 0.4249, lr: 0.000049, 8.0s
2023-06-17 13:00:53 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [27/50] train_loss: 0.6346, val_loss: 0.5800, val_mse: 0.4309, lr: 0.000047, 9.1s
2023-06-17 13:01:02 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [28/50] train_loss: 0.6523, val_loss: 0.5852, val_mse: 0.4444, lr: 0.000045, 8.9s
2023-06-17 13:01:10 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [29/50] train_loss: 0.6302, val_loss: 0.5751, val_mse: 0.4311, lr: 0.000043, 8.2s
2023-06-17 13:01:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [30/50] train_loss: 0.6229, val_loss: 0.5828, val_mse: 0.4391, lr: 0.000041, 8.0s
2023-06-17 13:01:26 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [31/50] train_loss: 0.6291, val_loss: 0.5778, val_mse: 0.4351, lr: 0.000039, 8.1s
2023-06-17 13:01:34 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [32/50] train_loss: 0.5913, val_loss: 0.5777, val_mse: 0.4298, lr: 0.000037, 8.1s
2023-06-17 13:01:42 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [33/50] train_loss: 0.6336, val_loss: 0.5617, val_mse: 0.4222, lr: 0.000035, 8.2s
2023-06-17 13:01:51 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [34/50] train_loss: 0.6080, val_loss: 0.5359, val_mse: 0.4038, lr: 0.000033, 8.2s
2023-06-17 13:02:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [35/50] train_loss: 0.6015, val_loss: 0.5468, val_mse: 0.4091, lr: 0.000031, 8.0s
2023-06-17 13:02:08 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [36/50] train_loss: 0.5942, val_loss: 0.5709, val_mse: 0.4266, lr: 0.000029, 8.1s
2023-06-17 13:02:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [37/50] train_loss: 0.5994, val_loss: 0.5571, val_mse: 0.4149, lr: 0.000027, 8.1s
2023-06-17 13:02:24 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [38/50] train_loss: 0.6010, val_loss: 0.5653, val_mse: 0.4218, lr: 0.000025, 8.1s
2023-06-17 13:02:32 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [39/50] train_loss: 0.5760, val_loss: 0.5826, val_mse: 0.4352, lr: 0.000023, 8.0s
2023-06-17 13:02:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [40/50] train_loss: 0.6080, val_loss: 0.5744, val_mse: 0.4271, lr: 0.000021, 8.2s
2023-06-17 13:02:49 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [41/50] train_loss: 0.5717, val_loss: 0.5595, val_mse: 0.4186, lr: 0.000019, 8.1s
2023-06-17 13:02:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [42/50] train_loss: 0.5596, val_loss: 0.5536, val_mse: 0.4140, lr: 0.000016, 8.0s
2023-06-17 13:03:05 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [43/50] train_loss: 0.5461, val_loss: 0.5577, val_mse: 0.4174, lr: 0.000014, 8.1s
2023-06-17 13:03:13 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [44/50] train_loss: 0.5775, val_loss: 0.5613, val_mse: 0.4205, lr: 0.000012, 8.1s
2023-06-17 13:03:13 | unimol/utils/metrics.py | 255 | WARNING | Uni-Mol(QSAR) | Early stopping at epoch: 44
2023-06-17 13:03:14 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:03:15 | unimol/models/nnmodel.py | 123 | INFO | Uni-Mol(QSAR) | fold 3, result {'mse': 0.40383738, 'mae': 0.45316046, 'spearmanr': 0.7082075781363502, 'rmse': 0.635482, 'r2': 0.4945312503291812}
2023-06-17 13:03:16 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 13:03:24 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [1/50] train_loss: 1.0177, val_loss: 0.9977, val_mse: 0.7278, lr: 0.000067, 8.1s
2023-06-17 13:03:33 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [2/50] train_loss: 1.0013, val_loss: 1.1132, val_mse: 0.8146, lr: 0.000099, 8.0s
2023-06-17 13:03:42 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [3/50] train_loss: 0.9368, val_loss: 0.8208, val_mse: 0.5932, lr: 0.000097, 8.8s
2023-06-17 13:03:51 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [4/50] train_loss: 0.8955, val_loss: 0.8036, val_mse: 0.5778, lr: 0.000095, 8.8s
2023-06-17 13:04:01 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [5/50] train_loss: 0.8553, val_loss: 0.7418, val_mse: 0.5351, lr: 0.000093, 8.9s
2023-06-17 13:04:10 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [6/50] train_loss: 0.8764, val_loss: 0.9070, val_mse: 0.6461, lr: 0.000091, 8.1s
2023-06-17 13:04:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [7/50] train_loss: 0.8546, val_loss: 0.6987, val_mse: 0.5121, lr: 0.000089, 8.0s
2023-06-17 13:04:28 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [8/50] train_loss: 0.8261, val_loss: 0.7095, val_mse: 0.5165, lr: 0.000087, 9.0s
2023-06-17 13:04:37 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [9/50] train_loss: 0.8314, val_loss: 0.6756, val_mse: 0.4913, lr: 0.000085, 9.0s
2023-06-17 13:04:46 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [10/50] train_loss: 0.8112, val_loss: 0.7011, val_mse: 0.5129, lr: 0.000082, 8.0s
2023-06-17 13:04:53 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [11/50] train_loss: 0.7951, val_loss: 0.6594, val_mse: 0.4772, lr: 0.000080, 7.8s
2023-06-17 13:05:02 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [12/50] train_loss: 0.8264, val_loss: 0.6824, val_mse: 0.5008, lr: 0.000078, 8.0s
2023-06-17 13:05:10 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [13/50] train_loss: 0.7938, val_loss: 0.6051, val_mse: 0.4428, lr: 0.000076, 8.0s
2023-06-17 13:05:19 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [14/50] train_loss: 0.7698, val_loss: 0.5837, val_mse: 0.4272, lr: 0.000074, 8.0s
2023-06-17 13:05:28 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [15/50] train_loss: 0.7579, val_loss: 0.6009, val_mse: 0.4390, lr: 0.000072, 8.0s
2023-06-17 13:05:36 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [16/50] train_loss: 0.7400, val_loss: 0.6783, val_mse: 0.4961, lr: 0.000070, 8.0s
2023-06-17 13:05:44 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [17/50] train_loss: 0.7337, val_loss: 0.6855, val_mse: 0.4961, lr: 0.000068, 7.9s
2023-06-17 13:05:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [18/50] train_loss: 0.7444, val_loss: 0.6196, val_mse: 0.4444, lr: 0.000066, 8.3s
2023-06-17 13:06:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [19/50] train_loss: 0.7323, val_loss: 0.5721, val_mse: 0.4148, lr: 0.000064, 8.2s
2023-06-17 13:06:09 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [20/50] train_loss: 0.7124, val_loss: 0.5886, val_mse: 0.4301, lr: 0.000062, 8.0s
2023-06-17 13:06:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [21/50] train_loss: 0.7051, val_loss: 0.5489, val_mse: 0.3975, lr: 0.000060, 8.4s
2023-06-17 13:06:27 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [22/50] train_loss: 0.7238, val_loss: 0.5354, val_mse: 0.3884, lr: 0.000058, 8.7s
2023-06-17 13:06:36 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [23/50] train_loss: 0.7240, val_loss: 0.6725, val_mse: 0.4873, lr: 0.000056, 8.1s
2023-06-17 13:06:44 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [24/50] train_loss: 0.6937, val_loss: 0.5550, val_mse: 0.4090, lr: 0.000054, 7.9s
2023-06-17 13:06:52 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [25/50] train_loss: 0.6826, val_loss: 0.6393, val_mse: 0.4632, lr: 0.000052, 8.0s
2023-06-17 13:07:00 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [26/50] train_loss: 0.6814, val_loss: 0.5455, val_mse: 0.3975, lr: 0.000049, 7.9s
2023-06-17 13:07:08 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [27/50] train_loss: 0.6589, val_loss: 0.5434, val_mse: 0.3960, lr: 0.000047, 8.0s
2023-06-17 13:07:16 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [28/50] train_loss: 0.6763, val_loss: 0.5872, val_mse: 0.4352, lr: 0.000045, 8.0s
2023-06-17 13:07:23 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [29/50] train_loss: 0.6659, val_loss: 0.5268, val_mse: 0.3853, lr: 0.000043, 7.8s
2023-06-17 13:07:32 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [30/50] train_loss: 0.6684, val_loss: 0.5575, val_mse: 0.4024, lr: 0.000041, 8.0s
2023-06-17 13:07:40 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [31/50] train_loss: 0.6467, val_loss: 0.5300, val_mse: 0.3829, lr: 0.000039, 8.0s
2023-06-17 13:07:50 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [32/50] train_loss: 0.6467, val_loss: 0.5846, val_mse: 0.4279, lr: 0.000037, 8.9s
2023-06-17 13:07:57 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [33/50] train_loss: 0.6270, val_loss: 0.5577, val_mse: 0.4093, lr: 0.000035, 7.9s
2023-06-17 13:08:05 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [34/50] train_loss: 0.6427, val_loss: 0.5868, val_mse: 0.4248, lr: 0.000033, 8.0s
2023-06-17 13:08:13 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [35/50] train_loss: 0.6501, val_loss: 0.5432, val_mse: 0.3935, lr: 0.000031, 7.9s
2023-06-17 13:08:21 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [36/50] train_loss: 0.6305, val_loss: 0.5176, val_mse: 0.3793, lr: 0.000029, 8.0s
2023-06-17 13:08:30 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [37/50] train_loss: 0.6297, val_loss: 0.5156, val_mse: 0.3713, lr: 0.000027, 7.9s
2023-06-17 13:08:39 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [38/50] train_loss: 0.6432, val_loss: 0.5531, val_mse: 0.4048, lr: 0.000025, 8.0s
2023-06-17 13:08:47 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [39/50] train_loss: 0.6161, val_loss: 0.5432, val_mse: 0.3905, lr: 0.000023, 7.9s
2023-06-17 13:08:55 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [40/50] train_loss: 0.6009, val_loss: 0.5115, val_mse: 0.3746, lr: 0.000021, 7.9s
2023-06-17 13:09:03 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [41/50] train_loss: 0.6052, val_loss: 0.5091, val_mse: 0.3698, lr: 0.000019, 7.9s
2023-06-17 13:09:11 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [42/50] train_loss: 0.6008, val_loss: 0.5425, val_mse: 0.3979, lr: 0.000016, 8.0s
2023-06-17 13:09:19 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [43/50] train_loss: 0.5973, val_loss: 0.5384, val_mse: 0.3888, lr: 0.000014, 8.0s
2023-06-17 13:09:27 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [44/50] train_loss: 0.6030, val_loss: 0.5716, val_mse: 0.4130, lr: 0.000012, 7.9s
2023-06-17 13:09:35 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [45/50] train_loss: 0.5957, val_loss: 0.5366, val_mse: 0.3926, lr: 0.000010, 8.0s
2023-06-17 13:09:43 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [46/50] train_loss: 0.6118, val_loss: 0.5083, val_mse: 0.3681, lr: 0.000008, 8.0s
2023-06-17 13:09:53 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [47/50] train_loss: 0.5846, val_loss: 0.5475, val_mse: 0.3945, lr: 0.000006, 8.9s
2023-06-17 13:10:01 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [48/50] train_loss: 0.5963, val_loss: 0.5365, val_mse: 0.3876, lr: 0.000004, 8.3s
2023-06-17 13:10:09 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [49/50] train_loss: 0.5992, val_loss: 0.5280, val_mse: 0.3823, lr: 0.000002, 8.0s
2023-06-17 13:10:18 | unimol/tasks/trainer.py | 156 | INFO | Uni-Mol(QSAR) | Epoch [50/50] train_loss: 0.5823, val_loss: 0.5252, val_mse: 0.3796, lr: 0.000000, 8.9s
2023-06-17 13:10:20 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:10:21 | unimol/models/nnmodel.py | 123 | INFO | Uni-Mol(QSAR) | fold 4, result {'mse': 0.3681449, 'mae': 0.4405691, 'spearmanr': 0.7418240886637215, 'rmse': 0.6067495, 'r2': 0.5089094245543386}
2023-06-17 13:10:21 | unimol/models/nnmodel.py | 135 | INFO | Uni-Mol(QSAR) | Uni-Mol metrics score: 
{'mse': 0.3771468026353712, 'mae': 0.4450550450234257, 'spearmanr': 0.7078743179462539, 'rmse': 0.6141227911707652, 'r2': 0.4866199624063017}
2023-06-17 13:10:21 | unimol/models/nnmodel.py | 136 | INFO | Uni-Mol(QSAR) | Uni-Mol & Metric result saved!
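The per-fold results above are combined into the overall "Uni-Mol metrics score". Exactly how the toolkit aggregates across folds is internal to unimol, but one common approach, pooling all out-of-fold predictions and scoring them once, looks like the minimal sketch below. All names here (`folds`, `oof_true`, `oof_pred`) are hypothetical and the data is random; this is an illustration, not the library's code.

import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical (true, predicted) pairs from each of the 5 validation folds
folds = [(np.random.rand(40), np.random.rand(40)) for _ in range(5)]

# Pool the out-of-fold predictions and compute every metric once
oof_true = np.concatenate([t for t, _ in folds])
oof_pred = np.concatenate([p for _, p in folds])
print({
    "mse": mean_squared_error(oof_true, oof_pred),
    "mae": mean_absolute_error(oof_true, oof_pred),
    "spearmanr": spearmanr(oof_true, oof_pred).correlation,
    "rmse": mean_squared_error(oof_true, oof_pred) ** 0.5,
    "r2": r2_score(oof_true, oof_pred),
})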
[9]
from unimol import MolPredict
from sklearn.metrics import mean_squared_error

# Load the Uni-Mol models trained above and predict on the train/test sets
predm = MolPredict(load_model='./exp_reg_hERG_0616')
pred_train_y = predm.predict('datasets/hERG_train.csv').reshape(-1)
pred_test_y = predm.predict('datasets/hERG_test.csv').reshape(-1)

# Score the held-out test set and record the absolute errors for later plots
mse = mean_squared_error(test_y, pred_test_y)
se = abs(test_y - pred_test_y)
results["Uni-Mol"] = {"MSE": mse, "error": se}
print(f"[Uni-Mol]\tMSE:{mse:.4f}")
2023-06-17 13:21:14 | unimol/data/conformer.py | 56 | INFO | Uni-Mol(QSAR) | Start generating conformers...
7363it [02:58, 41.14it/s]
2023-06-17 13:24:13 | unimol/data/conformer.py | 60 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.01% of molecules.
2023-06-17 13:24:14 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.07% of molecules.
2023-06-17 13:24:14 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 13:24:14 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1
2023-06-17 13:24:15 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:24:21 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:24:27 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:24:33 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:24:40 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:24:45 | unimol/predict.py | 66 | INFO | Uni-Mol(QSAR) | final predict metrics score: 
{'mse': 0.20172259087068667, 'mae': 0.2717721822179887, 'spearmanr': 0.9094161936023004, 'rmse': 0.4491353814505006, 'r2': 0.7876258838726424}
2023-06-17 13:24:46 | unimol/data/conformer.py | 56 | INFO | Uni-Mol(QSAR) | Start generating conformers...
1841it [00:40, 45.34it/s]
2023-06-17 13:25:27 | unimol/data/conformer.py | 60 | INFO | Uni-Mol(QSAR) | Failed to generate conformers for 0.00% of molecules.
2023-06-17 13:25:27 | unimol/data/conformer.py | 62 | INFO | Uni-Mol(QSAR) | Failed to generate 3d conformers for 0.05% of molecules.
2023-06-17 13:25:28 | unimol/models/unimol.py | 107 | INFO | Uni-Mol(QSAR) | Loading pretrained weights from /opt/conda/lib/python3.8/site-packages/unimol-0.0.2-py3.8.egg/unimol/weights/mol_pre_all_h_220816.pt
2023-06-17 13:25:28 | unimol/models/nnmodel.py | 145 | INFO | Uni-Mol(QSAR) | start predict NNModel:unimolv1
2023-06-17 13:25:28 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:25:30 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:25:32 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:25:34 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:25:36 | unimol/tasks/trainer.py | 197 | INFO | Uni-Mol(QSAR) | load model success!
2023-06-17 13:25:38 | unimol/predict.py | 66 | INFO | Uni-Mol(QSAR) | final predict metrics score: 
{'mse': 0.4197742218716444, 'mae': 0.42912608007320174, 'spearmanr': 0.7708930974024512, 'rmse': 0.6478998548168108, 'r2': 0.5847316207841755}
[Uni-Mol]	MSE:0.4198
[10]
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Collect the absolute errors of the Uni-Mol model into a long-format DataFrame
residuals_data = []
for name, result in results.items():
    if name.startswith("Uni-Mol"):
        model_residuals = pd.DataFrame({"Model": name, "Error": result["error"]})
        residuals_data.append(model_residuals)

residuals_df = pd.concat(residuals_data, ignore_index=True)
residuals_df.sort_values(by="Error", ascending=True, inplace=True)
model_order = residuals_df.groupby("Model")["Error"].median().sort_values(ascending=True).index

# Draw a boxplot of the absolute errors with seaborn
plt.figure(figsize=(10, 7), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
sns.boxplot(y="Model", x="Error", data=residuals_df, order=model_order)
plt.xlabel("Abs Error", fontdict=font)
plt.ylabel("Models", fontdict=font)
plt.show()

Results Overview

Finally, we can compare the predictive performance of the 1D-QSAR, 2D-QSAR, and 3D-QSAR representations across the different models, together with Uni-Mol, on the same test set. As the table below shows, Uni-Mol achieves the lowest test MSE (0.4198), followed closely by the 2D-QSAR models.

[27]
import pandas as pd

# Summarize every model's test-set MSE in a single table, sorted best-first
df = pd.DataFrame(results).T
df.sort_values(by="MSE", ascending=True, inplace=True)
df
Model	MSE	error (per-sample abs errors)
Uni-Mol 0.419774 [2.522239303588867, 2.0335350990295407, 2.1235...
2D-QSAR-Support Vector 0.455441 [1.6594621254469004, 1.801769913338167, 1.3386...
2D-QSAR-XGBoost 0.459129 [1.523523902893066, 1.5693136215209957, 0.7394...
2D-QSAR-Random Forest 0.47166 [1.9880250000000013, 2.382200000000001, 0.8454...
2D-QSAR-LightGBM 0.479684 [2.022284730700359, 2.591602960937026, 0.79469...
2D-QSAR-K-Nearest Neighbors 0.480645 [1.5099999999999998, 1.5079999999999973, 0.975...
1D-QSAR-Random Forest 0.605183 [2.3907239177489146, 2.4765941666666667, 3.016...
1D-QSAR-XGBoost 0.605652 [2.509926891326904, 3.200466728210449, 2.42616...
1D-QSAR-LightGBM 0.642647 [2.346929558613308, 2.5087293396443835, 2.5538...
2D-QSAR-Gradient Boosting 0.669449 [2.918999383876205, 2.653413649160223, 2.55135...
2D-QSAR-Multi-layer Perceptron 0.693308 [1.1202709345431376, 1.3843046457283936, 1.224...
2D-QSAR-Ridge Regression 0.715356 [2.78798850792775, 2.1465278084733654, 2.51336...
2D-QSAR-Linear Regression 0.715559 [2.785544528615863, 2.1429891031766406, 2.5107...
3D-QSAR-LightGBM 0.730661 [3.8540520524439525, 1.364569493019939, 1.6295...
1D-QSAR-Gradient Boosting 0.760707 [4.202714018353101, 4.082667464743396, 3.79711...
3D-QSAR-Random Forest 0.783114 [3.5546999999999995, 2.4127666666666663, 2.851...
3D-QSAR-XGBoost 0.810273 [3.825884914398193, 0.8879476547241207, 1.3634...
3D-QSAR-Gradient Boosting 0.866329 [4.33815854578517, 2.8987060129646505, 2.78310...
1D-QSAR-Ridge Regression 0.885736 [4.533467314501845, 4.120692997179958, 4.01560...
1D-QSAR-Linear Regression 0.885739 [4.533419974565081, 4.120610897612493, 4.01553...
2D-QSAR-Decision Tree 0.889239 [0.5700000000000003, 0.17999999999999972, 1.54...
1D-QSAR-K-Nearest Neighbors 0.911012 [2.523999999999999, 3.3120000000000003, 4.3239...
1D-QSAR-ElasticNet Regression 0.926934 [4.59931604048185, 4.317325121519717, 4.137556...
1D-QSAR-Lasso Regression 0.928588 [4.596617256928905, 4.317406647064892, 4.13327...
1D-QSAR-Multi-layer Perceptron 0.938524 [4.539483485335261, 4.240490901203048, 4.04913...
1D-QSAR-Support Vector 0.939777 [4.7594409976508505, 4.452918330671407, 4.2889...
2D-QSAR-ElasticNet Regression 1.010851 [4.570272986554394, 4.320272986554394, 4.09027...
2D-QSAR-Lasso Regression 1.010851 [4.570272986554394, 4.320272986554394, 4.09027...
3D-QSAR-Support Vector 1.042737 [4.750099446088549, 4.500099446088741, 4.27009...
1D-QSAR-Decision Tree 1.057852 [2.523999999999999, 2.8999999999999995, 2.3549...
3D-QSAR-K-Nearest Neighbors 1.194292 [4.92, 4.12, 5.353999999999999, 4.231999999999...
3D-QSAR-Decision Tree 1.598439 [3.0299999999999994, 1.1399999999999988, 1.659...
3D-QSAR-Lasso Regression 805.758003 [4.569499870078355, 4.319499281941863, 4.08949...
3D-QSAR-ElasticNet Regression 2390.261763 [4.56929324838699, 4.319292354357786, 4.089292...
3D-QSAR-Multi-layer Perceptron 3482168556886455484416.0 [3804401.3051103745, 3804401.5506377984, 38044...
3D-QSAR-Ridge Regression 4392658479297741235159040.0 [4.323969041648061, 3.9322019482891504, 3.6812...
3D-QSAR-Linear Regression 34953863171341550859845632.0 [4.28654773084929, 3.4351554464664877, 3.17109...
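An aside on the bottom of this table: the 3D-QSAR linear and MLP models blow up to astronomical MSEs, the classic signature of unscaled, ill-conditioned descriptor matrices. The usual remedy is to standardize the features inside a Pipeline before the linear fit. A minimal sketch under that assumption is below; `train_x_3d`/`test_x_3d` are hypothetical stand-ins (here random data) for the 3D descriptor matrices built earlier in the notebook.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Hypothetical stand-ins for the 3D descriptor matrices from earlier cells
train_x_3d = np.random.rand(100, 50) * 1e4   # wildly different feature scales
test_x_3d = np.random.rand(30, 50) * 1e4
train_y_demo = np.random.rand(100)
test_y_demo = np.random.rand(30)

# Standardize each descriptor before the linear fit to avoid ill-conditioning
scaled_ridge = make_pipeline(StandardScaler(), Ridge(random_state=42))
scaled_ridge.fit(train_x_3d, train_y_demo)
print(mean_squared_error(test_y_demo, scaled_ridge.predict(test_x_3d)))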
[28]
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Collect the absolute errors of every model, skipping the few models
# (the unscaled 3D-QSAR linear fits) whose MSE exploded
residuals_data = []
for name, result in results.items():
    if result["MSE"] > 10:
        continue
    model_residuals = pd.DataFrame({"Model": name, "Error": result["error"]})
    residuals_data.append(model_residuals)

residuals_df = pd.concat(residuals_data, ignore_index=True)
residuals_df.sort_values(by="Error", ascending=True, inplace=True)
model_order = residuals_df.groupby("Model")["Error"].median().sort_values(ascending=True).index

# Draw a boxplot of the absolute errors with seaborn
plt.figure(figsize=(10, 7), dpi=150)
font = {'family': 'serif',
        'color': 'black',
        'weight': 'normal',
        'size': 15}
sns.boxplot(y="Model", x="Error", data=residuals_df, order=model_order)
plt.xlabel("Abs Error", fontdict=font)
plt.ylabel("Models", fontdict=font)
plt.show()

One More Thing

No single family of models wins everywhere, so as a final experiment we can try stacking: use the predictions of the best base models as input features and train a second-level "meta-learner" on top of them. The cells below build these meta-features from the five best base models, train a pool of candidate meta-learners, and finally average the best five of them.
[39]
import numpy as np

# Rank every model by its test-set MSE
mse_scores = [(k, results[k]["MSE"]) for k in results.keys()]
mse_scores.sort(key=lambda x: x[1])

# Select the five best base models; index 0 (Uni-Mol) is skipped, presumably
# because its predictions were not stored as "_pred" columns in train_data/test_data
top_5_models = mse_scores[1:6]

# Print the selected models and their test MSEs
print("Top 5 models:")
for name, mse in top_5_models:
    print(f"{name}: MSE={mse:.4f}")

# Stack the base-model predictions column-wise as meta-features
top_5_train_predictions = [train_data[f"{name}_pred"].values for name, _ in top_5_models]
top_5_test_predictions = [test_data[f"{name}_pred"].values for name, _ in top_5_models]

meta_train_x = np.column_stack(top_5_train_predictions)
meta_test_x = np.column_stack(top_5_test_predictions)
Top 5 models:
2D-QSAR-Support Vector: MSE=0.4554
2D-QSAR-XGBoost: MSE=0.4591
2D-QSAR-Random Forest: MSE=0.4717
2D-QSAR-LightGBM: MSE=0.4797
2D-QSAR-K-Nearest Neighbors: MSE=0.4806
[40]
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

# Candidate meta-learners
meta_regressors = [
    ("Linear Regression", LinearRegression()),
    ("Ridge Regression", Ridge(random_state=42)),
    ("Lasso Regression", Lasso(random_state=42)),
    ("ElasticNet Regression", ElasticNet(random_state=42)),
    ("Support Vector", SVR()),
    ("K-Nearest Neighbors", KNeighborsRegressor()),
    ("Decision Tree", DecisionTreeRegressor(random_state=42)),
    ("Random Forest", RandomForestRegressor(random_state=42)),
    ("Gradient Boosting", GradientBoostingRegressor(random_state=42)),
    ("XGBoost", XGBRegressor(random_state=42)),
    ("LightGBM", LGBMRegressor(random_state=42)),
    ("Multi-layer Perceptron", MLPRegressor(
        hidden_layer_sizes=(128, 64, 32),
        learning_rate_init=0.0001,
        activation='relu', solver='adam',
        max_iter=10000, random_state=42)),
]

# Train each meta-learner on the stacked base-model predictions
for name, regressor in meta_regressors:
    regressor.fit(meta_train_x, train_y)

    # Predict with the fitted meta-model
    pred_meta_train_y = regressor.predict(meta_train_x)
    pred_meta_test_y = regressor.predict(meta_test_x)

    # Keep the meta-model predictions alongside the base-model ones
    train_data[f"META-{name}_pred"] = pred_meta_train_y
    test_data[f"META-{name}_pred"] = pred_meta_test_y

    # Score on the held-out test set
    mse_meta = mean_squared_error(test_y, pred_meta_test_y)
    se_meta = abs(test_y - pred_meta_test_y)
    results[f"META-{name}"] = {"MSE": mse_meta, "error": se_meta}
    print(f"[META][{name}]\tMSE:{mse_meta:.4f}")
[META][Linear Regression]	MSE:0.4761
[META][Ridge Regression]	MSE:0.4750
[META][Lasso Regression]	MSE:1.0109
[META][ElasticNet Regression]	MSE:0.6884
[META][Support Vector]	MSE:0.4874
[META][K-Nearest Neighbors]	MSE:0.4511
[META][Decision Tree]	MSE:0.4866
[META][Random Forest]	MSE:0.4824
[META][Gradient Boosting]	MSE:0.4765
[META][XGBoost]	MSE:0.4829
[META][LightGBM]	MSE:0.4774
[META][Multi-layer Perceptron]	MSE:0.4604
[47]
# Score each fitted meta-learner on the test set
mse_scores = []
for name, regressor in meta_regressors:
    pred_test_y = regressor.predict(meta_test_x)
    mse = mean_squared_error(test_y, pred_test_y)
    mse_scores.append((name, mse))

# Rank the meta-learners and keep the best five
mse_scores.sort(key=lambda x: x[1])
top_5_regressors = mse_scores[:5]

# Accumulators for the averaged predictions
pred_meta_train_y_top5_avg = np.zeros_like(train_y, dtype=float)
pred_meta_test_y_top5_avg = np.zeros_like(test_y, dtype=float)

# Average the predictions of the five best meta-learners
for name, _ in top_5_regressors:
    regressor = dict(meta_regressors)[name]
    pred_meta_train_y_top5_avg += regressor.predict(meta_train_x)
    pred_meta_test_y_top5_avg += regressor.predict(meta_test_x)
pred_meta_train_y_top5_avg /= len(top_5_regressors)
pred_meta_test_y_top5_avg /= len(top_5_regressors)

# Store the averaged predictions
train_data["Top5_Meta_pred"] = pred_meta_train_y_top5_avg
test_data["Top5_Meta_pred"] = pred_meta_test_y_top5_avg

# Score the averaged ensemble
mse_top5_meta = mean_squared_error(test_y, pred_meta_test_y_top5_avg)
se_top5_meta = abs(test_y - pred_meta_test_y_top5_avg)
results["Top5_Meta"] = {"MSE": mse_top5_meta, "error": se_top5_meta}
print(f"[Top 5 Meta Model]\tMSE:{mse_top5_meta:.4f}")

# Compare the averaged ensemble with each individual meta-learner
for name in [name for name, _ in top_5_regressors]:
    mse_single = results[f"META-{name}"]["MSE"]
    performance_gain = mse_single - mse_top5_meta
    print(f"[Ensemble][{name} vs. Top5_Meta]\tPerformance Gain (MSE): {performance_gain:.4f}")
[Top 5 Meta Model]	MSE:0.4656
[Ensemble][K-Nearest Neighbors vs. Top5_Meta]	Performance Gain (MSE): -0.0145
[Ensemble][Multi-layer Perceptron vs. Top5_Meta]	Performance Gain (MSE): -0.0052
[Ensemble][Ridge Regression vs. Top5_Meta]	Performance Gain (MSE): 0.0094
[Ensemble][Linear Regression vs. Top5_Meta]	Performance Gain (MSE): 0.0105
[Ensemble][Gradient Boosting vs. Top5_Meta]	Performance Gain (MSE): 0.0110
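The hand-rolled stacking above has a caveat: the meta-learners are trained on in-sample base-model predictions, which can leak training information into the meta-features. scikit-learn's StackingRegressor automates the safer variant, generating the meta-features with internal cross-validation. A minimal sketch under the assumption that the base models can be refit from scratch is shown below; the estimator choices are illustrative, and `train_x`/`test_x` are hypothetical stand-ins for a feature matrix built earlier in the notebook.

from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import RidgeCV
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error

# Illustrative base models; in this notebook they would be the top 2D-QSAR models
base_models = [
    ("svr", SVR()),
    ("rf", RandomForestRegressor(random_state=42)),
    ("xgb", XGBRegressor(random_state=42)),
]

# Meta-features come from internal 5-fold CV, avoiding in-sample leakage
stack = StackingRegressor(estimators=base_models, final_estimator=RidgeCV(), cv=5)
stack.fit(train_x, train_y)  # train_x: a feature matrix from earlier cells (hypothetical name)
print(f"[Stacking]\tMSE:{mean_squared_error(test_y, stack.predict(test_x)):.4f}")

Note that StackingRegressor refits the base models itself, so it cannot directly reuse the already-trained Uni-Mol ensemble; that is why this notebook stacks the saved prediction columns instead.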