Updated 2024-12-23
Recommended image: Basic Image: bohrium-notebook:2023-03-26
Recommended machine type: c2_m4_cpu
[1]
!pip install torch
!pip install pandas
!pip install numpy
!pip install matplotlib
!pip install scikit-learn
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: torch (1.13.1+cu116), pandas (1.5.3), numpy (1.23.5), matplotlib (3.7.1), scikit-learn (1.0.2), and their dependencies.
Collecting numpy>=1.14.6 ... Successfully installed numpy-1.22.4
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed: moviepy 0.2.3.5 requires decorator<5.0 and cvxpy 1.2.3 requires setuptools<=64.0.2, which conflict with the installed versions.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[2]
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn.utils.rnn import pad_sequence
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
[3]
device = 'cuda' if torch.cuda.is_available() else 'cpu'  # run on the GPU when one is available, otherwise on the CPU
[4]
# Group consecutive equal non-zero values: each non-zero element is appended to temp, and
# temp is flushed to result when the element is the last one or differs from the next element
def split_list_by_value(lst):
    result = []
    temp = []
    for i in range(len(lst)):
        if lst[i] != 0:
            temp.append(lst[i])
            if i == len(lst) - 1 or lst[i] != lst[i+1]:
                result.append(temp)
                temp = []
    return result

# Split lst into consecutive sublists whose sizes are given by lengths
def split_list_by_lengths(lst, lengths):
    result = []
    start = 0
    for length in lengths:
        result.append(lst[start:start+length])
        start += length
    return result
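As a quick illustration (not part of the original pipeline), here is how the two helpers behave on toy inputs:
[ ]
# Hypothetical toy data: group a cycle-number column into runs of equal values,
# then split a matching data column by the resulting run lengths
cycles = [1, 1, 1, 2, 2, 0, 3]
print(split_list_by_value(cycles))                        # [[1, 1, 1], [2, 2], [3]]
print(split_list_by_lengths(list('abcdefg'), [3, 2, 2]))  # [['a', 'b', 'c'], ['d', 'e'], ['f', 'g']]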
[9]
total_num = 222  # total number of samples
train_size = 194
test_size = 28
CC_input = []
cyc_len = []
baty_lst = [2, 3, 4, 6, 7, 8]

# Keep only voltage points inside [min_vol, max_vol], filtering cycle_num in step
def filter_voltage(vol_all, cycle_num, min_vol=3.8, max_vol=4.2):
    filtered_vol = []
    filtered_cycle = []
    for vol, cyc in zip(vol_all, cycle_num):
        if min_vol <= vol <= max_vol:
            filtered_vol.append(vol)
            filtered_cycle.append(cyc)
    return filtered_vol, filtered_cycle

for i in baty_lst:
    CC_data = pd.read_csv(f"/personal/Capacity data/Data_Capacity_25C0{i}.csv")  # load the data of each battery batch
    cycle_num = CC_data["cycle number"].tolist()  # second column: cycle number
    vol_all = CC_data['Ewe/V'].tolist()           # fourth column: voltage Ewe/V
    # Keep only voltages in the 3.8-4.2 V window, filtering the matching cycle numbers as well
    vol_all, cycle_num = filter_voltage(vol_all, cycle_num)
    cyc = split_list_by_value(cycle_num)  # split the cycle numbers so each cycle becomes one sublist
    if i == 7:
        cyc = cyc[:-7]  # battery 7: drop the last 7 cycles because some of their EIS points are anomalous
    cyc_len.append(len(cyc))
    lengths = [len(sublist) for sublist in cyc]          # length of each sublist in cyc
    vol_list = split_list_by_lengths(vol_all, lengths)   # split the voltage data by those lengths
    tensor_vol = [torch.tensor(sublist) for sublist in vol_list]  # convert to tensors
    padded_vol = pad_sequence(tensor_vol, batch_first=True, padding_value=0)  # pad all cycles to the same length; batch dimension first, pad with zeros
    print(padded_vol.shape)
    new_vol = [t[:856].view(1, 856, 1) for t in padded_vol]  # truncate each cycle to its first 856 points and reshape to (1, 856, 1)
    CC_input += new_vol  # collect the per-cycle tensors in CC_input
print(len(CC_input), cyc_len)
torch.Size([40, 864])
torch.Size([39, 856])
torch.Size([34, 860])
torch.Size([40, 859])
torch.Size([33, 865])
torch.Size([36, 860])
222 [40, 39, 34, 40, 33, 36]
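As an added sanity check (not in the original notebook), we can confirm that every voltage sequence in CC_input now has the expected (1, 856, 1) shape:
[ ]
# Verify that all 222 sequences were padded/truncated to the same shape
assert len(CC_input) == total_num
assert all(t.shape == (1, 856, 1) for t in CC_input)
print(CC_input[0].shape)  # torch.Size([1, 856, 1])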
[ ]
indices = list(range(len(CC_input)))
random.shuffle(indices)  # random.shuffle reorders the passed-in indices in place
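For a reproducible train/test split, the random generators can be seeded before the shuffle above; a minimal sketch (the seed value 42 is arbitrary):
[ ]
# Optional: fix the seeds so the shuffled split and weight initialization repeat across runs
random.seed(42)        # seeds Python's random module, used by random.shuffle
np.random.seed(42)     # seeds NumPy's RNG
torch.manual_seed(42)  # seeds PyTorch's RNG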
[ ]
scaler = MinMaxScaler(feature_range=(0, 1))
CC_input_scaled = []
for data in CC_input:
    data = np.array(data).squeeze()  # convert to an array and drop the singleton dimensions
    scaled_data = scaler.fit_transform(data.reshape(-1, 1))  # reshape to (-1, 1), the input format MinMaxScaler expects
    CC_input_scaled.append(torch.tensor(scaled_data).float().view(1, -1, 1))  # back to a float32 tensor (the LSTM expects float32, not the float64 NumPy produces)
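Note that fit_transform is called per sequence, so every voltage curve is stretched to [0, 1] independently. If a single global scale is preferred, a sketch of the alternative (an assumption on my part, not what this notebook does) would fit one scaler on the training sequences only:
[ ]
# Hypothetical alternative: one scaler fitted on the training curves only, applied everywhere
global_scaler = MinMaxScaler(feature_range=(0, 1))
train_stack = np.concatenate([np.array(CC_input[i]).reshape(-1, 1) for i in indices[:train_size]])
global_scaler.fit(train_stack)  # learn min/max from training data only
example = global_scaler.transform(np.array(CC_input[0]).reshape(-1, 1))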
[ ]
train_indices = indices[:train_size]  # shuffled training indices
test_indices = indices[train_size:]   # shuffled test indices
encoder_input_train = [CC_input_scaled[i] for i in train_indices]
encoder_input_test = [CC_input_scaled[i] for i in test_indices]
[ ]
decoder_input = []
decoder_target = []
cyc_tot = 0
# Build a 3-D list for the electrochemical impedance spectroscopy (EIS) data: 222 entries,
# one per cycle across the six batteries; each entry holds two sublists (real and imaginary
# parts) of 240 points (60 EIS points per state, four states per cycle)
EIS_list = [[[0 for i in range(240)] for j in range(2)] for r in range(222)]
lst = [1, 4, 5, 9]
for k in baty_lst:
    EIS_tot = []
    for i in lst:
        EIS = pd.read_csv(f"/personal/EIS data/EIS_state_{i}_25C0{k}.csv")  # load the EIS data of each battery batch and SOC state
        cyc_n = cyc_len[baty_lst.index(k)]
        EIS_tot.append(EIS[" Re(Z)/Ohm"].tolist()[0:60 * cyc_n])   # fourth column: real part Re(Z)/Ohm
        EIS_tot.append(EIS[" -Im(Z)/Ohm"].tolist()[0:60 * cyc_n])  # fifth column: imaginary part -Im(Z)/Ohm
    cyc_tot += cyc_n
    EIS_totm = [[e2 for e2 in e1] for e1 in EIS_tot]  # copy the nested lists
    print(np.array(EIS_totm).shape, cyc_tot)
    lengths = [60 for i in range(cyc_n)]
    for i in range(4):
        EIS_R = split_list_by_lengths(EIS_totm[2 * i], lengths)      # split the real part into one sublist per cycle
        EIS_I = split_list_by_lengths(EIS_totm[2 * i + 1], lengths)  # split the imaginary part into one sublist per cycle
        for j in range(cyc_tot - cyc_n, cyc_tot, 1):  # iterate over the current battery's cycles
            EIS_list[j][0][60 * i:60 * (i + 1)] = EIS_R[j - cyc_tot + cyc_n]  # write this cycle's real part into slots 60*i .. 60*(i+1) of EIS_list
            EIS_list[j][1][60 * i:60 * (i + 1)] = EIS_I[j - cyc_tot + cyc_n]
EIS_list = [np.array(t).squeeze().T for t in EIS_list]  # each cycle's EIS becomes a (240, 2) array: rows are data points, columns are the real and imaginary parts
print(np.array(EIS_list)[0].shape)
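To eyeball the assembled data, a Nyquist-style plot of one cycle can be drawn (an illustrative addition; per the layout above, column 0 is Re(Z), column 1 is -Im(Z), and points 0-59 belong to the first SOC state):
[ ]
# Nyquist-style plot of the first cycle's EIS at the first SOC state
one_cycle = EIS_list[0]  # shape (240, 2)
plt.plot(one_cycle[:60, 0], one_cycle[:60, 1], 'o-')
plt.xlabel('Re(Z)/Ohm')
plt.ylabel('-Im(Z)/Ohm')
plt.title('EIS, cycle 0, state 1')
plt.show()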
[ ]
EIS_scaled = []
for i in range(len(EIS_list)):
    EIS_data = EIS_list[i]  # one cycle's EIS data
    # Scale each EIS array into [0, 1] with MinMaxScaler
    scaled_EIS = scaler.fit_transform(EIS_data)
    EIS_scaled.append(torch.tensor(scaled_EIS).float())  # convert to a tensor and collect
print("Standardized EIS data:", np.array(EIS_scaled).shape)
[ ]
decoder_target = [t.float().view(1, 240, 2) for t in EIS_scaled]  # reshape to (1, 240, 2): batch size 1, 240 time steps, 2 features per step (real and imaginary parts)
decoder_input = [torch.ones(1, 240, 2) for t in EIS_scaled]  # all-ones tensors that seed the decoder when decoding starts
decoder_input_train = [decoder_input[i] for i in train_indices]
decoder_target_train = [decoder_target[i] for i in train_indices]
decoder_input_test = [decoder_input[i] for i in test_indices]
decoder_target_test = [decoder_target[i] for i in test_indices]
print(decoder_input[0].shape)
[ ]
# Define the encoder
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        # batch_first=True puts the batch dimension first; two LSTM layers with dropout 0.5 to curb overfitting
        self.lstm = nn.LSTM(1, 256, batch_first=True, num_layers=2, dropout=0.5)

    def forward(self, x):
        _, (h_n, c_n) = self.lstm(x)
        return h_n, c_n
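A quick shape check (added for illustration) shows what the encoder returns, namely the hidden and cell states of both LSTM layers with 256 units each:
[ ]
# Dummy forward pass: batch of 4 voltage sequences, 856 time steps, 1 feature
enc = Encoder()
h_n, c_n = enc(torch.randn(4, 856, 1))
print(h_n.shape, c_n.shape)  # torch.Size([2, 4, 256]) torch.Size([2, 4, 256])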
[ ]
# Define the decoder
class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.lstm = nn.LSTM(2, 256, batch_first=True, num_layers=2)  # 256 hidden units
        self.dense = nn.Linear(256, 2)  # output dimension 2 (real and imaginary parts)

    def forward(self, x, hidden):  # takes the input sequence and the hidden state
        x, _ = self.lstm(x, hidden)
        x = self.dense(x)  # pass the LSTM output through the fully connected layer for the final output
        return x
[ ]
# Define the encoder-decoder model
class EncoderDecoder(nn.Module):
    def __init__(self, encoder, decoder):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, encoder_input, decoder_input, use_teacher_forcing):
        # use_teacher_forcing decides whether the decoder consumes decoder_input in one shot
        # (teacher forcing) or feeds its own predictions back in, one step at a time
        hidden = self.encoder(encoder_input)
        if use_teacher_forcing:
            output = self.decoder(decoder_input, hidden)
        else:
            batch_size, seq_len, _ = decoder_input.size()
            output = torch.zeros_like(decoder_input)   # holds the decoder outputs; same shape as decoder_input
            decoder_input_t = decoder_input[:, 0, :]   # decoder input at time step t=0
            for t in range(seq_len):
                decoder_output_t = self.decoder(decoder_input_t.unsqueeze(1), hidden)
                output[:, t, :] = decoder_output_t.squeeze(1)
                decoder_input_t = decoder_output_t.squeeze(1)  # this step's output becomes the next step's input
        return output
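Before training, a small smoke test with random tensors (an added sketch, not part of the original notebook) exercises both decoding modes:
[ ]
# Smoke test: teacher forcing vs. step-by-step autoregressive decoding
m = EncoderDecoder(Encoder(), Decoder())
enc_in = torch.randn(2, 856, 1)  # 2 dummy voltage sequences
dec_in = torch.ones(2, 240, 2)   # all-ones decoder seed, matching the pipeline above
print(m(enc_in, dec_in, use_teacher_forcing=True).shape)   # torch.Size([2, 240, 2])
print(m(enc_in, dec_in, use_teacher_forcing=False).shape)  # torch.Size([2, 240, 2])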
[ ]
# Build the model
encoder = Encoder()
decoder = Decoder()
model = EncoderDecoder(encoder, decoder).to(device)
print(model)
[ ]
criterion = nn.MSELoss().to(device)
optimizer = optim.Adam(model.parameters(), lr=0.001)
num_epochs = 1500
batch_size = 4
[ ]
train_losses = []  # training loss of each epoch
for epoch in range(num_epochs):
    epoch_loss = 0.0  # accumulates the total loss over the current epoch
    # Scheduled-sampling probability: the teacher-forcing ratio decays linearly from 0.5
    # at the first epoch to 0 at the last; with the sampling line commented out below,
    # teacher forcing is in fact always on
    teacher_forcing_ratio = max(0.5 * (1 - epoch / num_epochs), 0.0)
    # use_teacher_forcing = torch.rand(1).item() < teacher_forcing_ratio
    use_teacher_forcing = True
    for batch_idx in range(0, train_size, batch_size):  # iterate over the training data in steps of batch_size
        # Concatenate the encoder inputs, decoder inputs, and decoder targets into batches
        batch_encoder_input = torch.cat(encoder_input_train[batch_idx:batch_idx + batch_size], dim=0).to(device)
        batch_decoder_input = torch.cat(decoder_input_train[batch_idx:batch_idx + batch_size], dim=0).to(device)
        batch_decoder_target = torch.cat(decoder_target_train[batch_idx:batch_idx + batch_size], dim=0).to(device)
        optimizer.zero_grad()  # clear the accumulated gradients before backpropagation
        outputs = model(batch_encoder_input, batch_decoder_input, use_teacher_forcing)
        loss = criterion(outputs, batch_decoder_target)  # loss between the model output and the decoder target
        loss.backward()   # backpropagation
        optimizer.step()  # update the model parameters
        epoch_loss += loss.item()  # accumulate the batch loss
    train_losses.append(epoch_loss / (train_size / batch_size))  # record this epoch's average loss
    if (epoch + 1) % 20 == 0:  # report the loss every 20 epochs
        mean_mse = epoch_loss / (train_size / batch_size)  # mean squared error
        rmse = mean_mse ** 0.5  # root mean squared error
        print(f"Epoch [{epoch + 1}/{num_epochs}], Loss: {mean_mse:.8f}, RMSE: {rmse}")
[ ]
plt.plot(train_losses, label='Training loss')
plt.xlabel('Epochs')
plt.ylabel('MSE Loss')
plt.legend()
plt.show()
[ ]
model.eval()  # switch to evaluation mode so layers such as Dropout change behavior accordingly
test_losses = []
for batch_idx in range(0, test_size, batch_size):  # iterate over the test data in steps of batch_size
    # Concatenate the test encoder inputs, decoder inputs, and decoder targets into batches
    batch_encoder_input = torch.cat(encoder_input_test[batch_idx:batch_idx + batch_size], dim=0).to(device)
    batch_decoder_input = torch.cat(decoder_input_test[batch_idx:batch_idx + batch_size], dim=0).to(device)
    batch_decoder_target = torch.cat(decoder_target_test[batch_idx:batch_idx + batch_size], dim=0).to(device)
    with torch.no_grad():  # disable gradient tracking during evaluation to save memory and speed up computation
        outputs = model(batch_encoder_input, batch_decoder_input, use_teacher_forcing)
        loss = criterion(outputs, batch_decoder_target)
    test_losses.append(loss.item())
mean_mse = np.mean(test_losses)  # average loss over all test batches
test_rmse = mean_mse ** 0.5  # root mean squared error derived from the mean MSE
print(f"Test Loss: {mean_mse:.4f} Test RMSE: {test_rmse:.4f}")