Fine-Tuning Protein Language Models
Protein Language Model
Hugging Face
English
Transformers
Bohrium
Published 2023-06-13
AI4SCUP-CNS-BBB(v1)

Last modified by: dingzh@dp.tech

Description: This tutorial is adapted mainly from the Hugging Face notebook and can be run directly on Bohrium Notebook. Click the blue Connect button at the top of the page, select the bohrium-notebook:2023-04-07 image and any GPU node configuration, and it will be ready to run after a short wait. If you run into any problems, please contact bohrium@dp.tech.

License: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

!pip install transformers evaluate datasets requests pandas scikit-learn

Use your own Hugging Face access token in place of {huggingface_token} in the cell below.

%%bash
cd ~
touch .git-credentials
echo "https://user:{huggingface_token}@huggingface.co" > .git-credentials

Then you need to install Git-LFS, which is used to push large model files to the Hugging Face Hub:

!apt install -y git-lfs

We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.

from transformers.utils import send_example_telemetry

send_example_telemetry("protein_language_modeling_notebook", framework="pytorch")

Fine-Tuning Protein Language Models


In this notebook, we're going to do some transfer learning to fine-tune some large, pre-trained protein language models on tasks of interest. If that sentence feels a bit intimidating to you, don't panic - there's a blog post that explains the concepts here in much more detail.

The specific model we're going to use is ESM-2, which is the state-of-the-art protein language model at the time of writing (November 2022). The citation for this model is Lin et al, 2022.

There are several ESM-2 checkpoints with differing model sizes. Larger models will generally have better accuracy, but they require more GPU memory and will take much longer to train. The available ESM-2 checkpoints (at time of writing) are:

Checkpoint name       Num layers   Num parameters
esm2_t48_15B_UR50D    48           15B
esm2_t36_3B_UR50D     36           3B
esm2_t33_650M_UR50D   33           650M
esm2_t30_150M_UR50D   30           150M
esm2_t12_35M_UR50D    12           35M
esm2_t6_8M_UR50D      6            8M

Note that the larger checkpoints may be very difficult to train without a large cloud GPU like an A100 or H100, and the largest 15B parameter checkpoint will probably be impossible to train on any single GPU! Also, note that memory usage for attention during training will scale as O(batch_size * num_layers * seq_len^2), so larger models on long sequences will use quite a lot of memory! We will use the esm2_t12_35M_UR50D checkpoint for this notebook, which should train on any Colab instance or modern GPU.

model_checkpoint = "facebook/esm2_t12_35M_UR50D"
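If you want to double-check the size of whichever checkpoint you pick, one quick optional way is to load the base model and count its parameters (note that this downloads the weights):

# Optional sanity check: count the parameters of the chosen checkpoint
from transformers import AutoModel

esm_model = AutoModel.from_pretrained(model_checkpoint)
num_params = sum(p.numel() for p in esm_model.parameters())
print(f"{num_params / 1e6:.1f}M parameters")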

Sequence classification


One of the most common tasks you can perform with a language model is sequence classification. In sequence classification, we classify an entire protein into a category, from a list of two or more possibilities. There's no limit on the number of categories you can use, or the specific problem you choose, as long as it's something the model could in theory infer from the raw protein sequence. To keep things simple for this example, though, let's try classifying proteins by their cellular localization - given their sequence, can we predict if they're going to be found in the cytosol (the fluid inside the cell) or embedded in the cell membrane?


Data preparation


In this section, we're going to gather some training data from UniProt. Our goal is to create a pair of lists: sequences and labels. sequences will be a list of protein sequences, which will just be strings like "MNKL...", where each letter represents a single amino acid in the complete protein. labels will be a list of the category for each sequence. The categories will just be integers, with 0 representing the first category, 1 representing the second and so on. In other words, if sequences[i] is a protein sequence then labels[i] should be its corresponding category. These will form the training data we're going to use to teach the model the task we want it to do.
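Just to make the target format concrete, here is a tiny made-up example of what the two lists should look like by the end of this section (the sequences below are not real proteins):

# Made-up miniature example of the required format, not real data
example_sequences = ["MNKLVTAAGVLLLP", "MKTAYIAKQRQISF"]
example_labels = [0, 1]  # one integer category per sequence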

If you're adapting this notebook for your own use, this will probably be the main section you want to change! You can do whatever you want here, as long as you create those two lists by the end of it. If you want to follow along with this example, though, first we'll need to import requests and set up our query to UniProt.

import requests

query_url ="https://rest.uniprot.org/uniprotkb/stream?compressed=true&fields=accession%2Csequence%2Ccc_subcellular_location&format=tsv&query=%28%28organism_id%3A9606%29%20AND%20%28reviewed%3Atrue%29%20AND%20%28length%3A%5B80%20TO%20500%5D%29%29"

This query URL might seem mysterious, but it isn't! To get it, we searched for (organism_id:9606) AND (reviewed:true) AND (length:[80 TO 500]) on UniProt to get a list of reasonably-sized human proteins, then selected 'Download', and set the format to TSV and the columns to Sequence and Subcellular location [CC], since those contain the data we care about for this task.

Once that's done, selecting Generate URL for API gives you a URL you can pass to Requests. Alternatively, if you're not on Colab you can just download the data through the web interface and open the file locally.
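For reference, the same request can also be built by passing the query pieces to requests explicitly rather than using the pre-encoded URL; the parameter values below are simply the decoded components of query_url:

# Equivalent request built from explicit parameters (decoded from query_url above)
import requests

params = {
    "query": "(organism_id:9606) AND (reviewed:true) AND (length:[80 TO 500])",
    "fields": "accession,sequence,cc_subcellular_location",
    "format": "tsv",
    "compressed": "true",
}
alt_request = requests.get("https://rest.uniprot.org/uniprotkb/stream", params=params)  # same data as query_url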

uniprot_request = requests.get(query_url)

To get this data into Pandas, we use a BytesIO object, which Pandas will treat like a file. If you downloaded the data as a file you can skip this bit and just pass the filepath directly to read_csv.

from io import BytesIO
import pandas

bio = BytesIO(uniprot_request.content)

df = pandas.read_csv(bio, compression='gzip', sep='\t')
df

Nice! Now we have some proteins and their subcellular locations. Let's start filtering this down. First, let's ditch the rows (proteins) without subcellular location information.

df = df.dropna()  # Drop rows (proteins) with missing values

Now we'll make one dataframe of proteins that contain cytosol or cytoplasm in their subcellular localization column, and a second that mentions the membrane or cell membrane. To ensure we don't get overlap, we ensure each dataframe only contains proteins that don't match the other search term.

cytosolic = df['Subcellular location [CC]'].str.contains("Cytosol") | df['Subcellular location [CC]'].str.contains("Cytoplasm")
membrane = df['Subcellular location [CC]'].str.contains("Membrane") | df['Subcellular location [CC]'].str.contains("Cell membrane")
cytosolic_df = df[cytosolic & ~membrane]
cytosolic_df
membrane_df = df[membrane & ~cytosolic]
membrane_df

We're almost done! Now, let's make a list of sequences from each df and generate the associated labels. We'll use 0 as the label for cytosolic proteins and 1 as the label for membrane proteins.

cytosolic_sequences = cytosolic_df["Sequence"].tolist()
cytosolic_labels = [0 for protein in cytosolic_sequences]
membrane_sequences = membrane_df["Sequence"].tolist()
membrane_labels = [1 for protein in membrane_sequences]

Now we can concatenate these lists together to get the sequences and labels lists that will form our final training data. Don't worry - they'll get shuffled during training!

sequences = cytosolic_sequences + membrane_sequences
labels = cytosolic_labels + membrane_labels

# Quick check to make sure we got it right
len(sequences) == len(labels)

Phew!


Splitting the data


Since the data we're loading isn't prepared for us as a machine learning dataset, we'll have to split the data into train and test sets ourselves! We can use sklearn's function for that:

from sklearn.model_selection import train_test_split

train_sequences, test_sequences, train_labels, test_labels = train_test_split(sequences, labels, test_size=0.25, shuffle=True)
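If you want the split to be reproducible, or to preserve the class balance between train and test, train_test_split supports that too; this variant is optional:

# Optional variant: fixed seed and label-stratified split (the seed value is arbitrary)
train_sequences, test_sequences, train_labels, test_labels = train_test_split(
    sequences, labels, test_size=0.25, shuffle=True, stratify=labels, random_state=42
)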

Tokenizing the data


All inputs to neural nets must be numerical. The process of converting strings into numerical indices suitable for a neural net is called tokenization. For natural language this can be quite complex, as usually the network's vocabulary will not contain every possible word, which means the tokenizer must handle splitting rarer words into pieces, as well as all the complexities of capitalization and unicode characters and so on.

With proteins, however, things are very easy. In protein language models, each amino acid is converted to a single token. Every model on transformers comes with an associated tokenizer that handles tokenization for it, and protein language models are no different. Let's get our tokenizer!

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

Let's try a single sequence to see what the outputs from our tokenizer look like:

tokenizer(train_sequences[0])

This looks good! We can see that our sequence has been converted into input_ids, which is the tokenized sequence, and an attention_mask. The attention mask handles the case when we have sequences of variable length - in those cases, the shorter sequences are padded with blank "padding" tokens, and the attention mask is padded with 0s to indicate that those tokens should be ignored by the model.
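For example, tokenizing two made-up sequences of different lengths with padding enabled shows the attention-mask zeros on the padded positions:

# Purely illustrative: two made-up short "sequences" of different lengths
demo_batch = tokenizer(["MKTAYIA", "MKT"], padding=True)
print(demo_batch["input_ids"])
print(demo_batch["attention_mask"])  # 0s mark the padding added to the shorter sequence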

So now, let's tokenize our whole dataset. Note that we don't need to do anything with the labels, as they're already in the format we need.

train_tokenized = tokenizer(train_sequences)
test_tokenized = tokenizer(test_sequences)

Dataset creation


Now we want to turn this data into a dataset that PyTorch can load samples from. We can use the HuggingFace Dataset class for this, although if you prefer you can also use torch.utils.data.Dataset, at the cost of some more boilerplate code.
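For reference, a minimal sketch of the torch.utils.data.Dataset route might look like the class below (ProteinDataset is just a hypothetical name, not something used later in this notebook):

# Sketch only: a plain PyTorch Dataset wrapping the tokenized inputs and labels
import torch
from torch.utils.data import Dataset as TorchDataset

class ProteinDataset(TorchDataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# e.g. ProteinDataset(train_tokenized, train_labels)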

from datasets import Dataset
train_dataset = Dataset.from_dict(train_tokenized)
test_dataset = Dataset.from_dict(test_tokenized)

train_dataset

This looks good, but we're missing our labels! Let's add those on as an extra column to the datasets.

train_dataset = train_dataset.add_column("labels", train_labels)
test_dataset = test_dataset.add_column("labels", test_labels)
train_dataset

Looks good! We're ready for training.


Model loading


Next, we want to load our model. Make sure to use exactly the same model as you used when loading the tokenizer, or your model might not understand the tokenization scheme you're using!

from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer

num_labels = max(train_labels + test_labels) + 1 # Add 1 since 0 can be a label
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)

These warnings are telling us that the model is discarding some weights that it used for language modelling (the lm_head) and adding some weights for sequence classification (the classifier). This is exactly what we expect when we want to fine-tune a language model on a sequence classification task!

Next, we initialize our TrainingArguments. These control the various training hyperparameters, and will be passed to our Trainer.

model_name = model_checkpoint.split("/")[-1]
batch_size = 8

args = TrainingArguments(
    f"{model_name}-finetuned-localization",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)

Next, we define the metric we will use to evaluate our models and write a compute_metrics function. We can load this from the evaluate library.

from evaluate import load
import numpy as np

metric = load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return metric.compute(predictions=predictions, references=labels)

And at last we're ready to initialize our Trainer:

%cd /root
trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

You might wonder why we pass along the tokenizer when we already preprocessed our data. This is because we will use it one last time to make all the samples we gather the same length by applying padding, which requires knowing the model's preferences regarding padding (to the left or right? with which token?). The tokenizer has a pad method that will do all of this right for us, and the Trainer will use it. You can customize this part by defining and passing your own data_collator which will receive samples like the dictionaries seen above and will need to return a dictionary of tensors.
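If you prefer to be explicit, you can build that collator yourself and pass it to the Trainer; as far as I'm aware this matches the default behaviour when only a tokenizer is supplied:

# Optional: the padding collator made explicit (believed equivalent to the Trainer's default here)
from transformers import DataCollatorWithPadding

explicit_collator = DataCollatorWithPadding(tokenizer=tokenizer)
# Trainer(model, args, ..., data_collator=explicit_collator) would then behave the same way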


We can now finetune our model by just calling the train method:

trainer.train()

Nice! After three epochs we have a model accuracy of ~94%. Note that we didn't do a lot of work to filter the training data or tune hyperparameters for this experiment, and also that we used one of the smallest ESM-2 models. With a larger starting model and more effort to ensure that the training data categories were cleanly separable, accuracy could almost certainly go a lot higher!
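As a quick sanity check, you can also run the fine-tuned model on a single sequence by hand; the sequence below is made up purely for illustration:

import torch

# Made-up sequence, for illustration only
new_sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAK"
inputs = tokenizer(new_sequence, return_tensors="pt").to(trainer.model.device)
with torch.no_grad():
    logits = trainer.model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()
print("membrane" if predicted_class == 1 else "cytosolic")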


Token classification


Another common language model task is token classification. In this task, instead of classifying the whole sequence into a single category, we categorize each token (amino acid, in this case!) into one or more categories. This kind of model could be useful for:

  • Predicting secondary structure
  • Predicting buried vs. exposed residues
  • Predicting residues that will receive post-translational modifications
  • Predicting residues involved in binding pockets or active sites
  • Probably several other things, it's been a while since I was a postdoc

Data preparation


In this section, we're going to gather some training data from UniProt. As in the sequence classification example, we aim to create two lists: sequences and labels. Unlike in that example, however, the labels are more than just single integers. Instead, the label for each sample will be one integer per token in the input. This should make sense - when we do token classification, different tokens in the input may have different categories!

To demonstrate token classification, we're going to go back to UniProt and get some data on protein secondary structures. As above, this will probably be the main section you want to change when adapting this code to your own problems.

import requests

query_url ="https://rest.uniprot.org/uniprotkb/stream?compressed=true&fields=accession%2Csequence%2Cft_strand%2Cft_helix&format=tsv&query=%28%28organism_id%3A9606%29%20AND%20%28reviewed%3Atrue%29%20AND%20%28length%3A%5B80%20TO%20500%5D%29%29"

This time, our UniProt search was (organism_id:9606) AND (reviewed:true) AND (length:[80 TO 500]), just as in the first example, but instead of Subcellular location [CC] we take the Helix and Beta strand columns, as they contain the secondary structure information we want.

uniprot_request = requests.get(query_url)

To get this data into Pandas, we use a BytesIO object, which Pandas will treat like a file. If you downloaded the data as a file you can skip this bit and just pass the filepath directly to read_csv.

from io import BytesIO
import pandas

bio = BytesIO(uniprot_request.content)

df = pandas.read_csv(bio, compression='gzip', sep='\t')
df

Since not all proteins have this structural information, we discard proteins that have no annotated beta strands or alpha helices.

no_structure_rows = df["Beta strand"].isna() & df["Helix"].isna()
df = df[~no_structure_rows]
df

Well, this works, but that data still isn't in a clean format that we can use to build our labels. Let's take a look at one sample to see what exactly we're dealing with:

df.iloc[0]["Helix"]

We'll need to use a regex to pull out each segment that's marked as being a STRAND or HELIX. What we're asking for is a list of everywhere we see the word STRAND or HELIX followed by two numbers separated by two dots. In each case where this pattern is found, we tell the regex to extract the two numbers as a tuple for us.

import re

strand_re = r"STRAND\s(\d+)\.\.(\d+)\;"
helix_re = r"HELIX\s(\d+)\.\.(\d+)\;"

re.findall(helix_re, df.iloc[0]["Helix"])
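To see the regex in action without depending on the exact row above, here it is applied to a short made-up string in the same format:

# Made-up example string in the same STRAND/HELIX annotation format
example_annotation = "HELIX 7..12; HELIX 20..31;"
re.findall(helix_re, example_annotation)  # -> [('7', '12'), ('20', '31')]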

Looks good! We can use this to build our training data. Recall that the labels need to be a list or array of integers that's the same length as the input sequence. We're going to use 0 to indicate residues without any annotated structure, 1 for residues in an alpha helix, and 2 for residues in a beta strand. To build that, we'll start with an array of all 0s, and then fill in values based on the positions that our regex pulls out of the UniProt results.

We'll use NumPy arrays rather than lists here, since these allow slice assignment, which will be a lot simpler than editing a list of integers. Note also that UniProt annotates residues starting from 1 (unlike Python, which starts from 0), and region annotations are inclusive (so 1..3 means residues 1, 2 and 3). To turn these into Python slices, we subtract 1 from the start of each annotation, but not the end.
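As a tiny worked example of that off-by-one handling (with made-up numbers), an annotation of 1..3 should mark exactly the first three residues:

import numpy as np

# Made-up example: UniProt-style annotation "1..3" -> Python slice [0:3]
demo_labels = np.zeros(6, dtype=np.int64)
start, end = 1, 3               # as annotated by UniProt (1-based, inclusive)
demo_labels[start - 1:end] = 1  # subtract 1 from the start only
demo_labels                     # array([1, 1, 1, 0, 0, 0])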

import numpy as np

def build_labels(sequence, strands, helices):
    # Start with all 0s
    labels = np.zeros(len(sequence), dtype=np.int64)

    if isinstance(helices, float):  # Indicates missing (NaN)
        found_helices = []
    else:
        found_helices = re.findall(helix_re, helices)
    for helix_start, helix_end in found_helices:
        helix_start = int(helix_start) - 1
        helix_end = int(helix_end)
        assert helix_end <= len(sequence)
        labels[helix_start: helix_end] = 1  # Helix category

    if isinstance(strands, float):  # Indicates missing (NaN)
        found_strands = []
    else:
        found_strands = re.findall(strand_re, strands)
    for strand_start, strand_end in found_strands:
        strand_start = int(strand_start) - 1
        strand_end = int(strand_end)
        assert strand_end <= len(sequence)
        labels[strand_start: strand_end] = 2  # Strand category

    return labels

Now we've defined a helper function, let's build our lists of sequences and labels:

sequences = []
labels = []

for row_idx, row in df.iterrows():
    row_labels = build_labels(row["Sequence"], row["Beta strand"], row["Helix"])
    sequences.append(row["Sequence"])
    labels.append(row_labels)

Creating our dataset


Nice! Now we'll split and tokenize the data, and then create datasets - I'll go through this quite quickly here, since it's identical to how we did it in the sequence classification example above.

from sklearn.model_selection import train_test_split

train_sequences, test_sequences, train_labels, test_labels = train_test_split(sequences, labels, test_size=0.25, shuffle=True)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

train_tokenized = tokenizer(train_sequences)
test_tokenized = tokenizer(test_sequences)
from datasets import Dataset

train_dataset = Dataset.from_dict(train_tokenized)
test_dataset = Dataset.from_dict(test_tokenized)

train_dataset = train_dataset.add_column("labels", train_labels)
test_dataset = test_dataset.add_column("labels", test_labels)

Model loading


The key difference here with the above example is that we use AutoModelForTokenClassification instead of AutoModelForSequenceClassification. We will also need a data_collator this time, as we're in the slightly more complex case where both inputs and labels must be padded in each batch.

from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer

num_labels = 3
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)
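To see what the collator does, you can optionally run it on a couple of samples from the dataset; the shapes and values will of course depend on your data:

# Illustrative check: collate two samples of different lengths into one padded batch
demo_batch = data_collator([train_dataset[0], train_dataset[1]])
print(demo_batch["input_ids"].shape)
print(demo_batch["labels"])  # padded label positions are filled with -100 and ignored by the loss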

Now we set up our TrainingArguments as before.

model_name = model_checkpoint.split("/")[-1]
batch_size = 8

args = TrainingArguments(
    f"{model_name}-finetuned-secondary-structure",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=1e-4,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=3,
    weight_decay=0.001,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)

Our compute_metrics function is a bit more complex than in the sequence classification task, as we need to ignore padding tokens (those where the label is -100).

from evaluate import load
import numpy as np

metric = load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    labels = labels.reshape((-1,))
    predictions = np.argmax(predictions, axis=2)
    predictions = predictions.reshape((-1,))
    predictions = predictions[labels != -100]
    labels = labels[labels != -100]
    return metric.compute(predictions=predictions, references=labels)

And now we're ready to train our model!

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
    data_collator=data_collator,
)

trainer.train()

This definitely seems harder than the first task, but we still attain a very respectable accuracy. Remember that to keep this demo lightweight, we used one of the smallest ESM models, focused on human proteins only and didn't put a lot of work into making sure we only included completely-annotated proteins in our training set. With a bigger model and a cleaner, broader training set, accuracy on this task could definitely go a lot higher!
