AI4S Cup - Predicting the Relationship between Enzyme Function and Mutant Sequences: Tool Introduction
Common tools for enzyme structure prediction and design
1. Structure prediction tools
2. Protein sequence generation and scoring
3. Protein design and scoring
4. Generalizing protein sequence features
Using LigandMPNN
Creating the LigandMPNN environment
Obtaining the complex structure and sequence files
Scoring with LigandMPNN
Reading the score matrix
Example results
ESM usage example
Environment setup
Extracting pretrained representations
Introduction to ESM-1v
Using Progen2

AI4S Cup - Predicting the Relationship between Enzyme Function and Mutant Sequences: Tool Introduction

©️ Copyright 2024 @ Authors
Author: zhangjun@dp.tech 📨
Date: 2024-04-01
License: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Quick start: Click the Connect button at the top, select the esm-mpnn-progen:1.0 image from the public third-party software images and the c12_m46_1 * NVIDIA GPU B node configuration. Since the ESM model parameters need to be loaded, the notebook takes about 15 minutes to run.
AI4S Cup note: This tutorial is for contestants' reference only; it introduces some commonly used tools to provide inspiration for developing a suitable method.

Competition page: https://bohrium.dp.tech/competitions/3812328860

There is as yet no consistent, generally applicable algorithm for predicting enzyme function, and AI-based methods still have low success and accuracy rates in enzyme function design and prediction. For the RhlA synthase, we can try some of the existing enzyme design and scoring tools and, based on the available experimental data, develop a customized prediction model, aiming for higher accuracy and reliability in predicting enzyme function for this system and thereby guiding the design of high-performance mutant sequences.

Common tools for enzyme structure prediction and design

Using LigandMPNN

LigandMPNN is a deep-learning-based protein sequence design method. Taking an enzyme-small molecule complex structure as input, scoring with score.py yields the mutation probabilities over the 20 amino acid types at every position.

Creating the LigandMPNN environment
[ ]
!git clone https://github.com/dauparas/LigandMPNN.git
%cd LigandMPNN
!bash get_model_params.sh "./model_params"
!conda create -n ligandmpnn_env python=3.11
!pip3 install torch
!pip install prody

Obtaining the complex structure and sequence files

Download the RhlA structure file from the PDB website, PDB ID: 8ik2.
The sequence in rcsb_pdb_8IK2.fasta: MRRESLLVSVCKGLRVHVERVGQDPGRSTVMLVNGAMATTASFARTCKCLAEHFNVVLFDLPFAGQSRQHNPQRGLITKDDEVEILLALIERFEVNHLVSASWGGISTLLALSRNPRGIRSSVVMAFAPGLNQAMLDYVGRAQALIELDDKSAIGHLLNETVGKYLPQRLKASNHQHMASLATGEYEQARFHIDQVLALNDRGYLACLERIQSHVHFINGSWDEYTTAEDARQFRDYLPHCSFSRVEGTGHFLDLESKLAAVRVHRALLEHLLKQPEPQRAERAAGFHEMAIGYAHHHHHH is changed to (to stay consistent with the PDB structure file): RRESLLVSVCKGLRVHVERVGQDPGRSTVMLVNGAMATTASFARTCKCLAEHFNVVLFDLPFAGQSRQHNPGLITKDDEVEILLALIERFEVNHLVSASWGGISTLLALSRNPRGIRSSVVMAFAPGLNQAMLDYVGRAQALIELDDKSAIGHLLNETVGKYLPQRLKASNHQHMASLATGEYEQARFHIDQVLALNDRGYLACLERIQSHVHFINGSWDEYTTAEDARQFRDYLPHCSFSRVEGTGHFLDLESKLAAVRVHRALLEHLL

In this competition, you can first use tools such as Uni-Mol to predict the protein (RhlA)-small molecule complex structure, and then feed complex structures with different selectivities into LigandMPNN to obtain the corresponding scores.

Inspecting the structure, the crystal structure is missing residues 73-74; the missing region can be modeled with tools such as Uni-Fold or AlphaFold2. image-2.png
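
To double-check which residues are actually resolved in the deposited structure, here is a minimal sketch using the prody package installed above (the PDB path is assumed to match the scoring step below):

[ ]
# Print the sequence actually present in chain A of the PDB file; comparing it
# with the FASTA sequence reveals the trimmed termini and the 73-74 gap.
from prody import parsePDB

ca_atoms = parsePDB("./inputs/8ik2.pdb", subset="ca", chain="A")
print(ca_atoms.getSequence())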

Scoring with LigandMPNN

It is recommended to use the single amino acid score with sequence info mode; autoregressive scoring is affected by the decoding order and may be inaccurate. The other modes are left for you to explore.
[65]
%%writefile /personal/soft/LigandMPNN/test.sh
#!/bin/bash
# Single-amino-acid scoring of 8ik2, conditioned on the native sequence
python score.py \
        --model_type "ligand_mpnn" \
        --seed 111 \
        --single_aa_score 1 \
        --pdb_path "./inputs/8ik2.pdb" \
        --out_folder "./outputs/8ik2_single_aa_score" \
        --use_sequence 1 \
        --batch_size 1 \
        --number_of_batches 10
[77]
%cd /personal/soft/LigandMPNN/
/personal/soft/LigandMPNN
[78]
! bash test.sh
Designing protein from this path: ./inputs/8ik2.pdb

These residues will be redesigned:  ['A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9', 'A10', 'A11', 'A12', 'A13', 'A14', 'A15', 'A16', 'A17', 'A18', 'A19', 'A20', 'A21', 'A22', 'A23', 'A24', 'A25', 'A26', 'A27', 'A28', 'A29', 'A30', 'A31', 'A32', 'A33', 'A34', 'A35', 'A36', 'A37', 'A38', 'A39', 'A40', 'A41', 'A42', 'A43', 'A44', 'A45', 'A46', 'A47', 'A48', 'A49', 'A50', 'A51', 'A52', 'A53', 'A54', 'A55', 'A56', 'A57', 'A58', 'A59', 'A60', 'A61', 'A62', 'A63', 'A64', 'A65', 'A66', 'A67', 'A68', 'A69', 'A70', 'A71', 'A72', 'A75', 'A76', 'A77', 'A78', 'A79', 'A80', 'A81', 'A82', 'A83', 'A84', 'A85', 'A86', 'A87', 'A88', 'A89', 'A90', 'A91', 'A92', 'A93', 'A94', 'A95', 'A96', 'A97', 'A98', 'A99', 'A100', 'A101', 'A102', 'A103', 'A104', 'A105', 'A106', 'A107', 'A108', 'A109', 'A110', 'A111', 'A112', 'A113', 'A114', 'A115', 'A116', 'A117', 'A118', 'A119', 'A120', 'A121', 'A122', 'A123', 'A124', 'A125', 'A126', 'A127', 'A128', 'A129', 'A130', 'A131', 'A132', 'A133', 'A134', 'A135', 'A136', 'A137', 'A138', 'A139', 'A140', 'A141', 'A142', 'A143', 'A144', 'A145', 'A146', 'A147', 'A148', 'A149', 'A150', 'A151', 'A152', 'A153', 'A154', 'A155', 'A156', 'A157', 'A158', 'A159', 'A160', 'A161', 'A162', 'A163', 'A164', 'A165', 'A166', 'A167', 'A168', 'A169', 'A170', 'A171', 'A172', 'A173', 'A174', 'A175', 'A176', 'A177', 'A178', 'A179', 'A180', 'A181', 'A182', 'A183', 'A184', 'A185', 'A186', 'A187', 'A188', 'A189', 'A190', 'A191', 'A192', 'A193', 'A194', 'A195', 'A196', 'A197', 'A198', 'A199', 'A200', 'A201', 'A202', 'A203', 'A204', 'A205', 'A206', 'A207', 'A208', 'A209', 'A210', 'A211', 'A212', 'A213', 'A214', 'A215', 'A216', 'A217', 'A218', 'A219', 'A220', 'A221', 'A222', 'A223', 'A224', 'A225', 'A226', 'A227', 'A228', 'A229', 'A230', 'A231', 'A232', 'A233', 'A234', 'A235', 'A236', 'A237', 'A238', 'A239', 'A240', 'A241', 'A242', 'A243', 'A244', 'A245', 'A246', 'A247', 'A248', 'A249', 'A250', 'A251', 'A252', 'A253', 'A254', 'A255', 'A256', 'A257', 'A258', 'A259', 'A260', 'A261', 'A262', 'A263', 'A264', 'A265', 'A266', 'A267', 'A268', 'A269', 'A270', 'A271', 'A272', 'A273']

These residues will be fixed:  []

The number of ligand atoms parsed is equal to: 13

Type: C, Coords [  4.687   2.347 -12.914], Mask 1

Type: C, Coords [  5.01    3.12  -11.618], Mask 1

Type: C, Coords [  4.634   2.594 -10.177], Mask 1

Type: C, Coords [ 5.57   2.763 -9.026], Mask 1

Type: C, Coords [ 5.001  2.233 -7.688], Mask 1

Type: C, Coords [ 5.164  3.096 -6.482], Mask 1

Type: O, Coords [  4.549   1.245 -10.069], Mask 1

Type: O, Coords [  4.755   1.11  -12.976], Mask 1

Type: O, Coords [  4.367   2.885 -13.955], Mask 1

Type: C, Coords [ 3.814  3.51  -5.931], Mask 1

Type: C, Coords [ 3.735  4.87  -5.267], Mask 1

Type: C, Coords [ 2.87   5.932 -5.892], Mask 1

Type: C, Coords [ 3.586  7.277 -5.902], Mask 1

Reading the score matrix
[73]
# get_score.py
import torch
import pandas as pd
import numpy as np

index = ['A', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'V', 'W', 'Y', 'X']
input_file = "./outputs/8ik2_single_aa_score/8ik2.pt"
seq_file = "./inputs/rcsb_pdb_8IK2.fasta"  # should contain only the sequence, without a FASTA header line
outpath = "./outputs/8ik2_single_aa_score/8ik2_single_aa_score.csv"

# Load the .pt file written by score.py
data = torch.load(input_file)
with open(seq_file, 'r') as file:
    sequence = file.read().strip()

# Build a DataFrame: rows are the 21 tokens, columns are sequence positions
df_sub = pd.DataFrame(data["mean_of_probs"], index=index)
# Wild-type probability at each position
wt_values = [df_sub.loc[sequence[i], df_sub.columns[i]] for i in range(len(sequence))]
# Subtract the wild-type value from every entry in the corresponding column
for col in df_sub.columns:
    df_sub[col] = df_sub[col].astype(float) - wt_values[df_sub.columns.get_loc(col)]

# Find the maximum of each column and append the results as extra rows
max_values = round(df_sub.max(axis=0), 2)
max_labels = []
mutations = []
for i, column_name in enumerate(df_sub.columns):
    max_value = max_values[column_name]
    max_index = np.argmax(df_sub[column_name].values)  # index of the maximum value
    max_label = sequence[i] + str(i + 1) + index[max_index] + ":" + str(max_value)
    max_labels.append(max_label)
    mutations.append(index[max_index])
    if float(max_value) >= 0.4:
        print(max_label)
df_sub.loc['Sequence'] = list(sequence)
df_sub.loc['Max_values'] = list(max_labels)
df_sub.loc['Mutations'] = list(mutations)
df_sub.to_csv(outpath)
print(f"Results saved to {outpath}")
R19V:0.49

N33H:0.54

T45L:0.73

L60W:0.65

H69L:0.47

D78E:0.43

F90Y:0.44

A98H:0.75

S118R:0.46

V136L:0.75

G137D:0.44

D147R:0.51

M175F:0.69

T180P:0.4

Q185N:0.75

R187L:0.44

H189Y:0.43

A225P:0.44

L235I:0.8

H237N:0.53

D251W:0.44

Results saved to ./outputs/8ik2_single_aa_score/8ik2_single_aa_score.csv

Example results

image.png

From the score heatmap, R2V, L6V, and K11N have favorable scores (shown only as an example).
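
To reproduce a heatmap like the one above, here is a minimal sketch that plots the CSV written by get_score.py (paths assumed from the scoring step):

[ ]
import pandas as pd
import matplotlib.pyplot as plt

# Load the score matrix and drop the three annotation rows appended at the end
df = pd.read_csv("./outputs/8ik2_single_aa_score/8ik2_single_aa_score.csv", index_col=0)
scores = df.drop(["Sequence", "Max_values", "Mutations"]).astype(float)

# Rows are the 21 amino acid tokens, columns are sequence positions
fig, ax = plt.subplots(figsize=(16, 5))
im = ax.imshow(scores.values, aspect="auto", cmap="coolwarm")
ax.set_yticks(range(len(scores.index)))
ax.set_yticklabels(scores.index)
ax.set_xlabel("Residue position")
ax.set_ylabel("Mutant amino acid")
fig.colorbar(im, label="P(mutant) - P(wild type)")
plt.show()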

ESM usage example

ESM (Evolutionary Scale Modeling) uses deep learning to predict protein structure and function. It treats protein sequences as a special language in which each amino acid is a letter, and uses a Transformer model to learn the statistics of this language. By training the Transformer on massive protein sequence sets, ESM learns evolutionary patterns in protein sequences and the complex relationships between sequence, structure, and function. Training uses a masking strategy: some amino acids in a sequence are randomly hidden and the model is asked to predict them, forcing it to learn long-range dependencies and contextual information. Once trained, ESM outputs high-dimensional vectors that encode structural and functional information about a protein; these serve as sequence features for downstream protein prediction and analysis tasks.

Environment setup

Download the code repository with git clone, then set up the conda environment in one step with conda env create -f environment.yml. This environment has already been configured in the recommended image and can be used directly.
[ ]
#!git clone https://github.com/facebookresearch/esm.git
# If the download fails, a proxy can be used instead, as follows
!git clone https://ghproxy.com/https://github.com/facebookresearch/esm.git
%cd esm
!conda update -n base -c conda-forge conda
!conda env create -f environment.yml
!python setup.py install

Extracting pretrained representations

Extract protein sequence representations from the installed ESM-2 pretrained model; here we load the esm2_t33_650M_UR50D model and visualize the self-attention contact maps. These representations can be used for various downstream bioinformatics tasks such as sequence alignment, structure prediction, or function annotation.

[1]
import torch
import esm

# Load ESM-2 model
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()  # disables dropout for deterministic results

# Prepare data (first 2 sequences from ESMStructuralSplitDataset superfamily / 4)
data = [
    ("protein1", "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"),
    ("protein2", "KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein2 with mask", "KALTARQQEVFDLIRD<mask>ISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE"),
    ("protein3", "K A <mask> I S Q"),
]
batch_labels, batch_strs, batch_tokens = batch_converter(data)
batch_lens = (batch_tokens != alphabet.padding_idx).sum(1)

# Extract per-residue representations (on CPU)
with torch.no_grad():
    results = model(batch_tokens, repr_layers=[33], return_contacts=True)
token_representations = results["representations"][33]

# Generate per-sequence representations via averaging
# NOTE: token 0 is always a beginning-of-sequence token, so the first residue is token 1.
sequence_representations = []
for i, tokens_len in enumerate(batch_lens):
    sequence_representations.append(token_representations[i, 1 : tokens_len - 1].mean(0))

# Look at the unsupervised self-attention map contact predictions
import matplotlib.pyplot as plt
for (_, seq), tokens_len, attention_contacts in zip(data, batch_lens, results["contacts"]):
    plt.matshow(attention_contacts[: tokens_len, : tokens_len])
    plt.title(seq)
    plt.show()
/opt/miniconda/lib/python3.7/site-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html

  from .autonotebook import tqdm as notebook_tqdm

Introduction to ESM-1v

ESM-1v is a general protein language model trained on the UR90 dataset. Its distinguishing feature is that it can predict the likely effect of amino acid mutations on protein function without any additional data, so it may be useful for predicting the functional effects of enzyme mutations.

This example demonstrates how to use the esm-1v model to predict the effect of protein variants on biological activity, using β-lactamase as the example, with given mutation sites and experimentally measured mutation effect values.

Reference: https://github.com/facebookresearch/esm/blob/2b369911bb5b4b0dda914521b9475cad1656b2ac/examples/sup_variant_prediction.ipynb
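
Before the supervised example, here is a minimal zero-shot sketch in the spirit of ESM-1v's masked-marginal scoring: mask the mutated position and compare the log-probabilities of the mutant and wild-type residues. The sequence below is the trimmed RhlA sequence from above (truncated for brevity), and R2V is just the illustrative site from the LigandMPNN heatmap:

[ ]
import torch
import esm

# Load one of the five ESM-1v models
model, alphabet = esm.pretrained.esm1v_t33_650M_UR90S_1()
batch_converter = alphabet.get_batch_converter()
model.eval()

seq = "RRESLLVSVCKGLRVHVERVGQDPGRSTVMLVNGAMATTASFARTCKCLAEHF"  # truncated RhlA sequence
wt, pos, mut = "R", 1, "V"  # hypothetical mutation R2V (0-based index 1)
assert seq[pos] == wt

# Mask the mutated position and score both residues from the model's distribution
masked = seq[:pos] + "<mask>" + seq[pos + 1:]
_, _, tokens = batch_converter([("masked", masked)])
with torch.no_grad():
    log_probs = torch.log_softmax(model(tokens)["logits"], dim=-1)

# Token 0 is the beginning-of-sequence token, so sequence index pos -> token pos + 1
score = (log_probs[0, pos + 1, alphabet.get_idx(mut)]
         - log_probs[0, pos + 1, alphabet.get_idx(wt)]).item()
print(f"{wt}{pos + 1}{mut} zero-shot score: {score:.3f}")  # > 0 favors the mutant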

[2]
import random
from collections import Counter
from tqdm import tqdm

import torch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

import esm
import scipy
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.svm import SVC, SVR
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression, SGDRegressor
from sklearn.pipeline import Pipeline

Download the required model parameters; they have already been downloaded in this image.

[51]
!python esm/scripts/extract.py esm1v_t33_650M_UR90S_1 esm/examples/data/P62593.fasta esm/examples/data/P62593_emb_esm1v/ --repr_layers 33 --include mean
Downloading: "https://dl.fbaipublicfiles.com/fair-esm/models/esm1v_t33_650M_UR90S_1.pt" to /root/.cache/torch/hub/checkpoints/esm1v_t33_650M_UR90S_1.pt

/opt/miniconda/lib/python3.7/site-packages/fair_esm-2.0.1-py3.7.egg/esm/pretrained.py:216: UserWarning: Regression weights not found, predicting contacts will not produce correct results.

Transferred model to GPU

Read esm/examples/data/P62593.fasta with 5397 sequences

Processing 1 of 386 batches (14 sequences)

Processing 2 of 386 batches (14 sequences)

...

Processing 386 of 386 batches (7 sequences)
[3]
# Replace with the corresponding paths
FASTA_PATH = "/personal/esm/examples/data/P62593.fasta" # Path to P62593.fasta
EMB_PATH = "/personal/esm/examples/data/P62593_emb_esm1v" # Path to directory of embeddings for P62593.fasta
EMB_LAYER = 33

Load the embeddings (Xs) and the corresponding target effects (ys), then split them into training and test sets.

[4]
ys = []
Xs = []
for header, _seq in esm.data.read_fasta(FASTA_PATH):
    scaled_effect = header.split('|')[-1]
    ys.append(float(scaled_effect))
    fn = f'{EMB_PATH}/{header[0:]}.pt'
    embs = torch.load(fn)
    Xs.append(embs['mean_representations'][EMB_LAYER])
Xs = torch.stack(Xs, dim=0).numpy()
print(len(ys))
print(Xs.shape)

train_size = 0.8
Xs_train, Xs_test, ys_train, ys_test = train_test_split(Xs, ys, train_size=train_size, random_state=42)
Xs_train.shape, Xs_test.shape, len(ys_train), len(ys_test)
5397

(5397, 1280)
((4317, 1280), (1080, 1280), 4317, 1080)

Use PCA to reduce the 1280-dimensional features to 60 dimensions.

[28]
num_pca_components = 60
pca = PCA(num_pca_components)
Xs_train_pca = pca.fit_transform(Xs_train)
Xs_train_pca.shape
(4317, 60)

Plot the first two principal components after dimensionality reduction, with the point color indicating the magnitude of the mutation effect. Visually, the ESM representations do separate different mutation effects to some extent, but the separation is not sharp.

[8]
fig_dims = (7, 6)
fig, ax = plt.subplots(figsize=fig_dims)
sc = ax.scatter(Xs_train_pca[:,0], Xs_train_pca[:,1], c=ys_train, marker='.')
ax.set_xlabel('PCA first principal component')
ax.set_ylabel('PCA second principal component')
plt.colorbar(sc, label='Variant Effect')
<matplotlib.colorbar.Colorbar at 0x7fb8025c32d0>

Next, use scikit-learn's grid search to optimize model hyperparameters. Three different regression models are defined, PCA is used for dimensionality reduction, and the training data are Xs_train and ys_train.

[9]
knn_grid = [
    {
        'model': [KNeighborsRegressor()],
        'model__n_neighbors': [5, 10],
        'model__weights': ['uniform', 'distance'],
        'model__algorithm': ['ball_tree', 'kd_tree', 'brute'],
        'model__leaf_size': [15, 30],
        'model__p': [1, 2],
    }
]

svm_grid = [
    {
        'model': [SVR()],
        'model__C': [0.1, 1.0, 10.0],
        'model__kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
        'model__degree': [3],
        'model__gamma': ['scale'],
    }
]

rfr_grid = [
    {
        'model': [RandomForestRegressor()],
        'model__n_estimators': [20],
        'model__criterion': ['squared_error', 'absolute_error'],
        'model__max_features': ['sqrt', 'log2'],
        'model__min_samples_split': [5, 10],
        'model__min_samples_leaf': [1, 4],
    }
]
[10]
cls_list = [KNeighborsRegressor, SVR, RandomForestRegressor]
param_grid_list = [knn_grid, svm_grid, rfr_grid]
[33]
# make sure data preprocessing (PCA here) is run inside CV to avoid data leakage
pipe = Pipeline(
    steps=(
        ('pca', PCA(num_pca_components)),
        ('model', 'passthrough'),
    )
)

result_list = []
grid_list = []
for cls_name, param_grid in zip(cls_list, param_grid_list):
    print(cls_name)
    grid = GridSearchCV(
        estimator=pipe,
        param_grid=param_grid,
        scoring='r2',
        verbose=1,
        n_jobs=-1,  # use all available cores
    )
    grid.fit(Xs_train, ys_train)
    result_list.append(pd.DataFrame.from_dict(grid.cv_results_))
    grid_list.append(grid)
<class 'sklearn.neighbors._regression.KNeighborsRegressor'>

Fitting 5 folds for each of 48 candidates, totalling 240 fits

<class 'sklearn.svm._classes.SVR'>

Fitting 5 folds for each of 12 candidates, totalling 60 fits

<class 'sklearn.ensemble._forest.RandomForestRegressor'>

Fitting 5 folds for each of 16 candidates, totalling 80 fits
[42]
result_list[0]['param_model'][0]
KNeighborsRegressor(algorithm='brute', leaf_size=15, p=1, weights='distance')

Print the top five parameter settings for each model, ranked by cross-validation score (mean_test_score).

[34]
result_list[0].sort_values('rank_test_score').head(5)
     algorithm  leaf_size  n_neighbors  p  weights   mean_test_score  std_test_score  rank_test_score
33   brute      15         5            1  distance  0.675154         0.019038        1
1    ball_tree  15         5            1  distance  0.674759         0.020132        2
41   brute      30         5            1  distance  0.674646         0.021578        3
9    ball_tree  30         5            1  distance  0.672285         0.020833        4
17   kd_tree    15         5            1  distance  0.671627         0.022069        5
(fit/score timing and per-split score columns omitted)
[43]
result_list[1].sort_values('rank_test_score').head(5)
     C     degree  gamma  kernel  mean_test_score  std_test_score  rank_test_score
6    1.0   3       scale  rbf     0.707439         0.022678        1
10   10.0  3       scale  rbf     0.691890         0.015502        2
2    0.1   3       scale  rbf     0.606067         0.025215        3
5    1.0   3       scale  poly    0.460280         0.037745        4
8    10.0  3       scale  linear  0.449991         0.020472        5
(fit/score timing and per-split score columns omitted)
[44]
result_list[2].sort_values('rank_test_score').head(5)
    criterion      max_features  min_samples_leaf  min_samples_split  n_estimators  mean_test_score  std_test_score  rank_test_score
0   squared_error  sqrt          1                 5                  20            0.514444         0.020892        1
2   squared_error  sqrt          4                 5                  20            0.503509         0.026709        2
4   squared_error  log2          1                 5                  20            0.500027         0.022283        3
1   squared_error  sqrt          1                 10                 20            0.499774         0.023026        4
5   squared_error  log2          1                 10                 20            0.493009         0.022467        5
(fit/score timing and per-split score columns omitted)

Plot learning curves to see how model performance changes with the amount of training data.

[59]
from sklearn.model_selection import learning_curve

def plot_learning_curve(estimator, X, y, title, subplot_index):
    plt.subplot(1, 3, subplot_index)
    train_sizes, train_scores, test_scores = learning_curve(estimator, X, y, cv=5, scoring="r2", n_jobs=-1)
    train_scores_mean = np.mean(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)

    plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score")
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    plt.legend(loc="best")
    plt.title(title)
    plt.grid()

plt.figure(figsize=(18, 6))
subplot_index = 1
for grid in grid_list:
    best_estimator = grid.best_estimator_
    plot_learning_curve(best_estimator, Xs_train, ys_train, title=f'{best_estimator.named_steps["model"].__class__.__name__} Learning Curve', subplot_index=subplot_index)
    subplot_index += 1
plt.show()

For the three methods, plot the predicted values against the true values to compare their performance visually, and use each best_estimator_ to evaluate the Spearman correlation between the predicted and experimentally measured mutation effects on the held-out test set.

[60]
def plot_predictions(y_true, y_pred, title, ax):
    ax.scatter(y_true, y_pred, alpha=0.5)
    ax.set_xlabel('True Values')
    ax.set_ylabel('Predictions')
    ax.set_title(title)

fig, axs = plt.subplots(1, 3, figsize=(24, 8))

for i, grid in enumerate(grid_list):
    model_name = grid.best_estimator_.named_steps["model"].__class__.__name__
    print(model_name)  # get the model details from the estimator
    print()
    preds = grid.predict(Xs_test)
    print(f'{scipy.stats.spearmanr(ys_test, preds)}')
    print('', '-' * 80, '')
    plot_predictions(ys_test, preds, title=f'{model_name} Predictions vs True Values', ax=axs[i])

plt.show()
KNeighborsRegressor



SpearmanrResult(correlation=0.8000028700947366, pvalue=2.119700151311828e-241)

 -------------------------------------------------------------------------------- 

SVR



SpearmanrResult(correlation=0.8134720990350951, pvalue=5.536753308261259e-256)

 -------------------------------------------------------------------------------- 

RandomForestRegressor



SpearmanrResult(correlation=0.7206324349238818, pvalue=1.1276858229812068e-173)

 -------------------------------------------------------------------------------- 

Based on these results, the SVM (SVR) performs best on the test set, with a Spearman rho of about 0.81, showing that pretrained ESM embedding representations can achieve decent performance on downstream prediction tasks.
For more effective zero-shot mutation prediction methods, see the corresponding examples/variant-prediction folder.

For RhlA mutations and their corresponding activity and selectivity, we can try to generalize the data with the ESM-1v model. However, sequence features alone are not enough; we also need structural features in order to probe how mutations affect the catalytic geometry and spatial arrangement of the two substrates within the enzyme.
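
As a purely hypothetical illustration of such feature fusion, one could concatenate a per-sequence ESM embedding with the structure-aware LigandMPNN scores of the mutated site before fitting any of the regressors above (both arrays below are placeholders for values computed earlier in this notebook):

[ ]
import numpy as np

# Placeholders: in practice, take mean_representations[33] from ESM for the
# mutant sequence, and e.g. the LigandMPNN probability column of the mutated
# position from df_sub.
esm_embedding = np.zeros(1280)  # per-sequence ESM embedding (placeholder)
mpnn_scores = np.zeros(21)      # per-position LigandMPNN scores (placeholder)

# One combined feature vector per mutant for a downstream regressor
features = np.concatenate([esm_embedding, mpnn_scores])
print(features.shape)  # (1301,)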


Using Progen2

Progen2 is a protein sequence generation language model: it can generate protein sequences with desired properties and can also be used to assess the fitness of protein sequences.
Progen2 is very simple to install; it has already been integrated into this environment, so after downloading the model parameters it can be used directly.
[ ]
!git clone https://github.com/salesforce/progen
%cd progen/progen2
# checkpoint: choose the model parameters; {model} below is expanded by IPython
# from this Python variable
model = "progen2-large"
!wget -P checkpoints/{model} https://storage.googleapis.com/sfr-progen-research/checkpoints/{model}.tar.gz
!tar -xvf checkpoints/{model}/{model}.tar.gz -C checkpoints/{model}/

#!python3.8 -m venv .venv
#!source .venv/bin/activate
!pip3 install --upgrade pip setuptools
!pip3 install -r requirements.txt
[5]
%cd /personal/soft/progen/progen2/
/personal/soft/progen/progen2
[6]
!python3 likelihood.py --model progen2-large --context "1RRESLLVSVCKGLRVHVERVGQDPGRSTVMLVNGAMATTASFARTCKCLAEHFNVVLFDLPFAGQSRQHNPGLITKDDEVEILLALIERFEVNHLVSASWGGISTLLALSRNPRGIRSSVVMAFAPGLNQAMLDYVGRAQALIELDDKSAIGHLLNETVGKYLPQRLKASNHQHMASLATGEYEQARFHIDQVLALNDRGYLACLERIQSHVHFINGSWDEYTTAEDARQFRDYLPHCSFSRVEGTGHFLDLESKLAAVRVHRALLEHLL"

loading parameters

loading parameters took 58.58s

loading tokenizer

loading tokenizer took 0.04s

log-likelihood (left-to-right, right-to-left)

ll_sum=-595.1453857421875

ll_mean=-2.208328366279602

log-likelihood (left-to-right, right-to-left) took 0.90s

done.
代码
文本

The ll_mean score returned by Progen2 is the log-probability the model assigns to the sequence, i.e. an estimate of how plausible the sequence is in sequence space, and can be used as a fitness score for the sequence. sample.py can be used for generative protein sequence design.
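
For ranking many mutant sequences, a hypothetical helper could shell out to likelihood.py as above and parse ll_mean from its output:

[ ]
import re
import subprocess

def progen2_ll_mean(seq: str, model: str = "progen2-large") -> float:
    """Hypothetical helper: score one sequence with likelihood.py and parse ll_mean."""
    out = subprocess.run(
        ["python3", "likelihood.py", "--model", model, "--context", "1" + seq],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(re.search(r"ll_mean=(-?[\d.]+)", out).group(1))

wt = "RRESLLVSVCKGLRVHVERVGQDPGRSTVMLVNGAMATTASFARTCKCLAEHF"  # truncated RhlA sequence
mut = wt[:1] + "V" + wt[2:]  # hypothetical R2V mutant
print(progen2_ll_mean(wt), progen2_ll_mean(mut))  # higher ll_mean = higher fitness proxy

Note that every call reloads the model parameters (about a minute, as shown above), so for large mutant libraries it would be better to adapt likelihood.py to score many sequences in a single process.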
