Last modified: dingzh@dp.tech
Description: This tutorial is adapted from a Hugging Face notebook and can be run directly in Bohrium Notebook. Click the blue "Connect" button at the top of the interface, select the bohrium-notebook:2023-04-07 image and any GPU node configuration, and it will be ready to run after a short wait. If you run into any problems, please contact bohrium@dp.tech.
License: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Use your own Hugging Face token in place of {huggingface_token}.
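If you want to push your fine-tuned models to the Hub, you can log in from inside the notebook; a minimal sketch using the standard `huggingface_hub` helper:

```python
from huggingface_hub import notebook_login

# Opens a login prompt; paste your Hugging Face token when asked.
notebook_login()
```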
Then you need to install Git-LFS. Uncomment the following instructions:
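The install cell isn't shown here; on a Debian/Ubuntu image it is typically something like the following (left commented out, as the text suggests):

```python
# Uncomment to install Git-LFS, which is needed for pushing model weights to the Hub
# !apt install git-lfs
```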
We also quickly upload some telemetry - this tells us which examples and software versions are getting used so we know where to prioritize our maintenance efforts. We don't collect (or care about) any personally identifiable information, but if you'd prefer not to be counted, feel free to skip this step or delete this cell entirely.
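For reference, the telemetry call in Hugging Face example notebooks usually looks like the sketch below (the exact example name here is a guess):

```python
from transformers.utils import send_example_telemetry

# Reports the example name and framework; no personal information is sent.
send_example_telemetry("protein_language_modeling_notebook", framework="pytorch")
```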
Fine-Tuning Protein Language Models
In this notebook, we're going to do some transfer learning to fine-tune some large, pre-trained protein language models on tasks of interest. If that sentence feels a bit intimidating to you, don't panic - there's a blog post that explains the concepts here in much more detail.
The specific model we're going to use is ESM-2, which is the state-of-the-art protein language model at the time of writing (November 2022). The citation for this model is Lin et al, 2022.
There are several ESM-2 checkpoints with differing model sizes. Larger models will generally have better accuracy, but they require more GPU memory and will take much longer to train. The available ESM-2 checkpoints (at time of writing) are:
Checkpoint name | Num layers | Num parameters |
---|---|---|
esm2_t48_15B_UR50D | 48 | 15B |
esm2_t36_3B_UR50D | 36 | 3B |
esm2_t33_650M_UR50D | 33 | 650M |
esm2_t30_150M_UR50D | 30 | 150M |
esm2_t12_35M_UR50D | 12 | 35M |
esm2_t6_8M_UR50D | 6 | 8M |
Note that the larger checkpoints may be very difficult to train without a large cloud GPU like an A100 or H100, and the largest 15B parameter checkpoint will probably be impossible to train on any single GPU! Also, note that memory usage for attention during training will scale as `O(batch_size * num_layers * seq_len^2)`, so larger models on long sequences will use quite a lot of memory! We will use the `esm2_t12_35M_UR50D` checkpoint for this notebook, which should train on any Colab instance or modern GPU.
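The ESM-2 checkpoints are hosted on the Hugging Face Hub under the facebook organization, so we can record our choice as a string and reuse it for both the tokenizer and the model:

```python
# Hub ID of the (small) checkpoint we'll fine-tune
model_checkpoint = "facebook/esm2_t12_35M_UR50D"
```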
Sequence classification
One of the most common tasks you can perform with a language model is sequence classification. In sequence classification, we classify an entire protein into a category, from a list of two or more possibilities. There's no limit on the number of categories you can use, or the specific problem you choose, as long as it's something the model could in theory infer from the raw protein sequence. To keep things simple for this example, though, let's try classifying proteins by their cellular localization - given their sequence, can we predict if they're going to be found in the cytosol (the fluid inside the cell) or embedded in the cell membrane?
Data preparation
In this section, we're going to gather some training data from UniProt. Our goal is to create a pair of lists: `sequences` and `labels`. `sequences` will be a list of protein sequences, which will just be strings like "MNKL...", where each letter represents a single amino acid in the complete protein. `labels` will be a list of the category for each sequence. The categories will just be integers, with 0 representing the first category, 1 representing the second and so on. In other words, if `sequences[i]` is a protein sequence then `labels[i]` should be its corresponding category. These will form the training data we're going to use to teach the model the task we want it to do.

If you're adapting this notebook for your own use, this will probably be the main section you want to change! You can do whatever you want here, as long as you create those two lists by the end of it. If you want to follow along with this example, though, first we'll need to import `requests` and set up our query to UniProt.
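A sketch of that query is below; the exact URL is whatever UniProt's 'Generate URL for API' button gives you, and the field names here are our best guess at the ones that produce the columns described next, so treat this as illustrative rather than canonical:

```python
import requests

# Illustrative REST URL: reviewed human proteins, 80-500 residues,
# returning the sequence and subcellular location columns as TSV.
query_url = (
    "https://rest.uniprot.org/uniprotkb/stream?"
    "query=(organism_id:9606)+AND+(reviewed:true)+AND+(length:[80+TO+500])"
    "&format=tsv&fields=accession,sequence,cc_subcellular_location"
)

uniprot_request = requests.get(query_url)
```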
This query URL might seem mysterious, but it isn't! To get it, we searched for `(organism_id:9606) AND (reviewed:true) AND (length:[80 TO 500])` on UniProt to get a list of reasonably-sized human proteins, then selected 'Download', and set the format to TSV and the columns to `Sequence` and `Subcellular location [CC]`, since those contain the data we care about for this task.

Once that's done, selecting `Generate URL for API` gives you a URL you can pass to Requests. Alternatively, if you're not on Colab you can just download the data through the web interface and open the file locally.

To get this data into Pandas, we use a `BytesIO` object, which Pandas will treat like a file. If you downloaded the data as a file you can skip this bit and just pass the filepath directly to `read_csv`.
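Assuming the request above succeeded, loading it into a dataframe is just:

```python
from io import BytesIO
import pandas

# Wrap the raw response bytes so pandas can read them like a file
bio = BytesIO(uniprot_request.content)
df = pandas.read_csv(bio, sep="\t")
df.head()
```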
Nice! Now we have some proteins and their subcellular locations. Let's start filtering this down. First, let's ditch the rows without subcellular location information.
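One way to do that, assuming the location column is named `Subcellular location [CC]` as in the TSV download:

```python
# Drop proteins with no subcellular location annotation
df = df.dropna(subset=["Subcellular location [CC]"])
```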
Now we'll make one dataframe of proteins that contain `cytosol` or `cytoplasm` in their subcellular localization column, and a second that mentions `membrane` or `cell membrane`. To avoid overlap, we ensure each dataframe only contains proteins that don't match the other search term.
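A minimal sketch of that filtering with pandas string matching (case-insensitive, since UniProt capitalizes these terms):

```python
loc = df["Subcellular location [CC]"]

is_cytosolic = loc.str.contains("cytosol", case=False) | loc.str.contains("cytoplasm", case=False)
is_membrane = loc.str.contains("membrane", case=False) | loc.str.contains("cell membrane", case=False)

# Keep only proteins that match one category and not the other
cytosolic_df = df[is_cytosolic & ~is_membrane]
membrane_df = df[is_membrane & ~is_cytosolic]
```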
We're almost done! Now, let's make a list of sequences from each df and generate the associated labels. We'll use `0` as the label for cytosolic proteins and `1` as the label for membrane proteins.
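For example:

```python
cytosolic_sequences = cytosolic_df["Sequence"].tolist()
cytosolic_labels = [0 for _ in cytosolic_sequences]

membrane_sequences = membrane_df["Sequence"].tolist()
membrane_labels = [1 for _ in membrane_sequences]
```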
Now we can concatenate these lists together to get the `sequences` and `labels` lists that will form our final training data. Don't worry - they'll get shuffled during training!
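Concatenation keeps `sequences[i]` and `labels[i]` aligned:

```python
sequences = cytosolic_sequences + membrane_sequences
labels = cytosolic_labels + membrane_labels

len(sequences) == len(labels)  # sanity check - should be True
```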
Phew!
Splitting the data
Since the data we're loading isn't prepared for us as a machine learning dataset, we'll have to split the data into train and test sets ourselves! We can use sklearn's function for that:
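A typical split; the 25% test fraction is an arbitrary choice, and stratifying keeps the class balance similar in both halves:

```python
from sklearn.model_selection import train_test_split

train_sequences, test_sequences, train_labels, test_labels = train_test_split(
    sequences, labels, test_size=0.25, shuffle=True, stratify=labels
)
```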
Tokenizing the data
All inputs to neural nets must be numerical. The process of converting strings into numerical indices suitable for a neural net is called tokenization. For natural language this can be quite complex, as usually the network's vocabulary will not contain every possible word, which means the tokenizer must handle splitting rarer words into pieces, as well as all the complexities of capitalization and unicode characters and so on.
With proteins, however, things are very easy. In protein language models, each amino acid is converted to a single token. Every model on `transformers` comes with an associated `tokenizer` that handles tokenization for it, and protein language models are no different. Let's get our tokenizer!
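Loading it from the same checkpoint we picked above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
```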
Let's try a single sequence to see what the outputs from our tokenizer look like:
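For instance:

```python
# Tokenize one protein sequence and inspect the output
tokenizer(train_sequences[0])
```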
This looks good! We can see that our sequence has been converted into `input_ids`, which is the tokenized sequence, and an `attention_mask`. The attention mask handles the case when we have sequences of variable length - in those cases, the shorter sequences are padded with blank "padding" tokens, and the attention mask is padded with 0s to indicate that those tokens should be ignored by the model.
So now, let's tokenize our whole dataset. Note that we don't need to do anything with the labels, as they're already in the format we need.
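Tokenizing the train and test splits in one go:

```python
train_tokenized = tokenizer(train_sequences)
test_tokenized = tokenizer(test_sequences)
```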
Dataset creation
Now we want to turn this data into a dataset that PyTorch can load samples from. We can use the HuggingFace `Dataset` class for this, although if you prefer you can also use `torch.utils.data.Dataset`, at the cost of some more boilerplate code.
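The tokenizer output is dict-like, so it can be handed straight to `Dataset.from_dict`:

```python
from datasets import Dataset

train_dataset = Dataset.from_dict(train_tokenized)
test_dataset = Dataset.from_dict(test_tokenized)
train_dataset
```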
This looks good, but we're missing our labels! Let's add those on as an extra column to the datasets.
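Adding the labels as a new column:

```python
train_dataset = train_dataset.add_column("labels", train_labels)
test_dataset = test_dataset.add_column("labels", test_labels)
```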
Looks good! We're ready for training.
Model loading
Next, we want to load our model. Make sure to use exactly the same model as you used when loading the tokenizer, or your model might not understand the tokenization scheme you're using!
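A sketch of the loading step, using the same `model_checkpoint` as the tokenizer and two output classes for our two localization categories:

```python
from transformers import AutoModelForSequenceClassification

num_labels = max(train_labels + test_labels) + 1  # 2: cytosol (0) and membrane (1)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
```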
These warnings are telling us that the model is discarding some weights that it used for language modelling (the `lm_head`) and adding some weights for sequence classification (the `classifier`). This is exactly what we expect when we want to fine-tune a language model on a sequence classification task!
Next, we initialize our `TrainingArguments`. These control the various training hyperparameters, and will be passed to our `Trainer`.
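The values below are plausible defaults for a small ESM-2 model rather than tuned settings; adjust the batch size to fit your GPU memory:

```python
from transformers import TrainingArguments

model_name = model_checkpoint.split("/")[-1]

args = TrainingArguments(
    f"{model_name}-finetuned-localization",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=False,
)
```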
Next, we define the metric we will use to evaluate our models and write a `compute_metrics` function. We can load this from the `evaluate` library.
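Using plain accuracy as the metric:

```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a tuple of (logits, labels) as numpy arrays
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```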
And at last we're ready to initialize our `Trainer`:
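Something like:

```python
from transformers import Trainer

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
```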
You might wonder why we pass along the `tokenizer` when we already preprocessed our data. This is because we will use it one last time to make all the samples we gather the same length by applying padding, which requires knowing the model's preferences regarding padding (to the left or right? with which token?). The `tokenizer` has a `pad` method that will do all of this right for us, and the `Trainer` will use it. You can customize this part by defining and passing your own `data_collator` which will receive samples like the dictionaries seen above and will need to return a dictionary of tensors.
We can now finetune our model by just calling the `train` method:
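That is:

```python
trainer.train()
```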
Nice! After three epochs we have a model accuracy of ~94%. Note that we didn't do a lot of work to filter the training data or tune hyperparameters for this experiment, and also that we used one of the smallest ESM-2 models. With a larger starting model and more effort to ensure that the training data categories were cleanly separable, accuracy could almost certainly go a lot higher!
Token classification
Another common language model task is token classification. In this task, instead of classifying the whole sequence into a single category, we categorize each token (amino acid, in this case!) into one or more categories. This kind of model could be useful for:
- Predicting secondary structure
- Predicting buried vs. exposed residues
- Predicting residues that will receive post-translational modifications
- Predicting residues involved in binding pockets or active sites
- Probably several other things, it's been a while since I was a postdoc
Data preparation
In this section, we're going to gather some training data from UniProt. As in the sequence classification example, we aim to create two lists: `sequences` and `labels`. Unlike in that example, however, the `labels` are more than just single integers. Instead, the label for each sample will be one integer per token in the input. This should make sense - when we do token classification, different tokens in the input may have different categories!
To demonstrate token classification, we're going to go back to UniProt and get some data on protein secondary structures. As above, this will probably be the main section you want to change when adapting this code to your own problems.
This time, our UniProt search was `(organism_id:9606) AND (reviewed:true) AND (length:[100 TO 1000])`, similar to the query in the first example, but instead of `Subcellular location [CC]` we take the `Helix` and `Beta strand` columns, as they contain the secondary structure information we want.
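As before, a sketch of the request; generate the real URL from UniProt's API button (the field names below are our guess at those that yield the `Helix` and `Beta strand` columns):

```python
import requests

query_url = (
    "https://rest.uniprot.org/uniprotkb/stream?"
    "query=(organism_id:9606)+AND+(reviewed:true)+AND+(length:[100+TO+1000])"
    "&format=tsv&fields=accession,sequence,ft_helix,ft_strand"
)

uniprot_request = requests.get(query_url)
```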
To get this data into Pandas, we use a `BytesIO` object, which Pandas will treat like a file. If you downloaded the data as a file you can skip this bit and just pass the filepath directly to `read_csv`.
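Same pattern as before:

```python
from io import BytesIO
import pandas

bio = BytesIO(uniprot_request.content)
df = pandas.read_csv(bio, sep="\t")
df.head()
```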
Since not all proteins have this structural information, we discard proteins that have no annotated beta strands or alpha helices.
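One way to do that filter, keeping any protein with at least one annotation and blanking out missing values so the regex step later always sees a string:

```python
# Drop proteins with neither helix nor strand annotations,
# then replace remaining missing values with empty strings
df = df[~(df["Helix"].isna() & df["Beta strand"].isna())]
df = df.fillna("")
```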
Well, this works, but that data still isn't in a clean format that we can use to build our labels. Let's take a look at one sample to see what exactly we're dealing with:
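For example:

```python
# Inspect the raw annotation string for the first protein
df.iloc[0]["Helix"]
```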
We'll need to use a regex to pull out each segment that's marked as being a STRAND or HELIX. What we're asking for is a list of everywhere we see the word STRAND or HELIX followed by two numbers separated by two dots. In each case where this pattern is found, we tell the regex to extract the two numbers as a tuple for us.
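A sketch of those regexes, assuming annotations of the form `HELIX 12..24;` and `STRAND 3..9;` as in the UniProt TSV:

```python
import re

helix_re = r"HELIX (\d+)\.\.(\d+)"
strand_re = r"STRAND (\d+)\.\.(\d+)"

# Each findall returns a list of (start, end) tuples as strings
print(re.findall(helix_re, df.iloc[0]["Helix"]))
print(re.findall(strand_re, df.iloc[0]["Beta strand"]))
```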
Looks good! We can use this to build our training data. Recall that the labels need to be a list or array of integers that's the same length as the input sequence. We're going to use 0 to indicate residues without any annotated structure, 1 for residues in an alpha helix, and 2 for residues in a beta strand. To build that, we'll start with an array of all 0s, and then fill in values based on the positions that our regex pulls out of the UniProt results.
We'll use NumPy arrays rather than lists here, since these allow slice assignment, which will be a lot simpler than editing a list of integers. Note also that UniProt annotates residues starting from 1 (unlike Python, which starts from 0), and region annotations are inclusive (so 1..3 means residues 1, 2 and 3). To turn these into Python slices, we subtract 1 from the start of each annotation, but not the end.
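Putting that together into a helper function (a sketch; the name `build_labels` is ours):

```python
import numpy as np

def build_labels(sequence, strands, helices):
    # Start with all residues labelled 0 (no annotated structure)
    labels = np.zeros(len(sequence), dtype=np.int64)
    # UniProt positions are 1-based and inclusive, so shift only the start by -1
    for start, end in re.findall(helix_re, helices):
        labels[int(start) - 1 : int(end)] = 1  # alpha helix
    for start, end in re.findall(strand_re, strands):
        labels[int(start) - 1 : int(end)] = 2  # beta strand
    return labels
```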
Now that we've defined a helper function, let's build our lists of sequences and labels:
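For example:

```python
sequences = []
labels = []

for _, row in df.iterrows():
    sequences.append(row["Sequence"])
    labels.append(build_labels(row["Sequence"], row["Beta strand"], row["Helix"]))
```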
Creating our dataset
Nice! Now we'll split and tokenize the data, and then create datasets - I'll go through this quite quickly here, since it's identical to how we did it in the sequence classification example above.
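A condensed sketch; the one extra wrinkle (an assumption on our part) is that the ESM-2 tokenizer adds a special token at each end of the sequence, so we pad each label list with `-100` at those positions so the loss ignores them:

```python
from sklearn.model_selection import train_test_split
from datasets import Dataset

train_sequences, test_sequences, train_labels, test_labels = train_test_split(
    sequences, labels, test_size=0.25, shuffle=True
)

train_tokenized = tokenizer(train_sequences)
test_tokenized = tokenizer(test_sequences)

# Account for the <cls> and <eos> tokens added by the tokenizer
train_labels = [[-100] + l.tolist() + [-100] for l in train_labels]
test_labels = [[-100] + l.tolist() + [-100] for l in test_labels]

train_dataset = Dataset.from_dict(train_tokenized).add_column("labels", train_labels)
test_dataset = Dataset.from_dict(test_tokenized).add_column("labels", test_labels)
```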
Model loading
The key difference here from the example above is that we use `AutoModelForTokenClassification` instead of `AutoModelForSequenceClassification`. We will also need a `data_collator` this time, as we're in the slightly more complex case where both inputs and labels must be padded in each batch.
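A sketch using the built-in collator for token classification, which pads labels with `-100`:

```python
from transformers import AutoModelForTokenClassification, DataCollatorForTokenClassification

num_labels = 3  # 0 = no structure, 1 = alpha helix, 2 = beta strand
model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=num_labels)
data_collator = DataCollatorForTokenClassification(tokenizer)
```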
Now we set up our `TrainingArguments` as before.
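Again, the values here are plausible defaults rather than tuned hyperparameters:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    f"{model_name}-finetuned-secondary-structure",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.001,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
```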
Our `compute_metrics` function is a bit more complex than in the sequence classification task, as we need to ignore padding tokens (those where the label is `-100`).
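One way to write it: flatten the predictions and labels, mask out the `-100` positions, and compute accuracy on what's left:

```python
import numpy as np
import evaluate

metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=-1)
    labels = labels.reshape((-1,))
    predictions = predictions.reshape((-1,))
    mask = labels != -100
    return metric.compute(predictions=predictions[mask], references=labels[mask])
```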
And now we're ready to train our model!
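Putting it all together (same pattern as before, plus the data collator):

```python
from transformers import Trainer

trainer = Trainer(
    model,
    args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
```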
This definitely seems harder than the first task, but we still attain a very respectable accuracy. Remember that to keep this demo lightweight, we used one of the smallest ESM models, focused on human proteins only and didn't put a lot of work into making sure we only included completely-annotated proteins in our training set. With a bigger model and a cleaner, broader training set, accuracy on this task could definitely go a lot higher!