AI + Battery | EIS Prediction for Battery Cells Based on an LSTM Model and Pulse Signals

©️ Copyright 2023 @ Authors
Author: Jiawei Miao 📨
Date: 2023-07-03
License: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).
Quick start: click the Connect button above, select the bohrium-notebook:2023-04-07 image and any GPU node configuration, and wait a moment for the notebook to become runnable.

Background

Lithium-ion batteries (LIBs) offer high energy density, long service life, and environmental friendliness, and are widely used in electric vehicles and portable electronics. However, the aging of lithium-ion batteries poses serious safety challenges during storage and use, which has driven the development and adoption of battery prognostic techniques. Electrochemical impedance spectroscopy (EIS) can characterize a battery's state of health, its internal impedance, and the diffusion dynamics of lithium ions (Li-ions).

EIS (electrochemical impedance spectroscopy) applies a small-amplitude sinusoidal signal to an electrochemical cell at equilibrium (open circuit) or under a stable DC polarization, and measures how the AC impedance varies with frequency. Widely used for lithium-ion batteries, sodium-ion batteries, fuel cells, and corrosion protection, it is a standard electrochemical technique for analyzing electrode process kinetics, the electric double layer, and diffusion.
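
To make the measurement idea concrete, here is a minimal, self-contained sketch of single-frequency EIS (all signal and R-RC parameter values below are synthetic, chosen only for illustration): a small sinusoidal current is applied, the voltage response of a linear system is simulated, and the impedance is the ratio of the two spectra at the excitation frequency.

import numpy as np

fs, f0, n = 1024.0, 10.0, 4096                 # sampling rate (Hz), test frequency (Hz), number of samples
t = np.arange(n) / fs
i_ac = 0.05 * np.sin(2 * np.pi * f0 * t)       # 50 mA sinusoidal perturbation current

# synthetic "cell": series resistance R0 plus one R1||C1 element (made-up values)
R0, R1, C1 = 0.015, 0.010, 2.0
Z_true = R0 + R1 / (1 + 1j * 2 * np.pi * f0 * R1 * C1)
# linear voltage response: same frequency, scaled by |Z|, phase-shifted by arg(Z)
v_ac = np.abs(Z_true) * 0.05 * np.sin(2 * np.pi * f0 * t + np.angle(Z_true))

k = round(f0 * n / fs)                         # FFT bin of the excitation (an integer by construction)
Z_est = np.fft.rfft(v_ac)[k] / np.fft.rfft(i_ac)[k]
print(f"true Z = {Z_true:.6f} ohm, estimated Z = {Z_est:.6f} ohm")

Sweeping f0 over many frequencies and repeating the ratio yields the full spectrum that an EIS instrument reports.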

Accurate EIS prediction from easily measurable quantities would help advance and popularize EIS-based prognostic methods while cutting costs significantly. Prior work has already achieved accurate impedance-spectrum prediction with machine learning from full battery charging data, with root-mean-square error (RMSE) below 2 mΩ. Such promising methods estimate impedance spectra accurately and open the door to real-time, EIS-based battery state estimation and health prognostics. Other kinds of data, such as pulse voltage/current signals, have also been shown to correlate closely with the impedance spectrum.
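
For reference, the RMSE quoted above treats each spectrum point as a complex number; matching the per-point computation at the end of this notebook, over N predicted points it is

RMSE = sqrt( (1/N) · Σᵢ [ (Ẑ′ᵢ − Z′ᵢ)² + (Ẑ″ᵢ − Z″ᵢ)² ] )

where Z′ and Z″ denote the real and imaginary parts of the impedance and the hats denote predictions.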

This notebook predicts the impedance spectrum from pulse voltage and current data. The task is framed as sequence-to-sequence prediction, and an LSTM model is used to validate the proposed framework.

Reference:Jinpeng Tian, Rui Xiong, Cheng Chen, Chenxu Wang, Weixiang Shen, Fengchun Sun, Simultaneous prediction of impedance spectra and state for lithium-ion batteries from short-term pulses, Electrochimica Acta, Volume 449, 2023, 142218, ISSN 0013-4686, https://doi.org/10.1016/j.electacta.2023.142218.

Note: the naming of the training-set/prediction-set files and of the submission in this notebook has been adjusted slightly; please refer to the previous baseline for the correct format.

Load packages

Load PyTorch and the packages for data processing and plotting
[1]
import torch
import math
import torch.nn as nn
import torch.optim as optim
from torch.nn.utils.rnn import pad_sequence
import torch.nn.functional as F
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import pickle
[2]
!pip install ipython-autotime
%load_ext autotime
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Requirement already satisfied: ipython-autotime in /opt/conda/lib/python3.8/site-packages (0.3.1)
time: 456 µs (started: 2023-10-22 09:43:48 +08:00)

Dataset overview

Load the input pulse data. The file contains a nested dictionary whose keys correspond to different SOC (state of charge) levels.
[3]
# show one of the input pulse data
with open('/bohr/ai4spulseeis-lr97/v3/train_datasets/train_pulse_1.pkl', 'rb') as fp:
    pulse_train_1 = pickle.load(fp, encoding='bytes')

pulse_train_1.keys()
dict_keys(['0%SOC', '2%SOC', '4%SOC', '6%SOC', '8%SOC', '10%SOC', '12%SOC', '14%SOC', '16%SOC', '18%SOC', '20%SOC', '22%SOC', '24%SOC', '26%SOC', '28%SOC', '30%SOC', '32%SOC', '34%SOC', '36%SOC', '38%SOC', '40%SOC', '42%SOC', '44%SOC', '46%SOC', '48%SOC', '50%SOC', '52%SOC', '54%SOC', '56%SOC', '58%SOC', '60%SOC', '62%SOC', '64%SOC', '66%SOC', '68%SOC', '70%SOC', '72%SOC', '74%SOC', '76%SOC', '78%SOC', '80%SOC', '82%SOC', '84%SOC', '86%SOC', '88%SOC', '90%SOC', '92%SOC', '94%SOC', '96%SOC'])
time: 6.57 ms (started: 2023-10-22 09:43:48 +08:00)

Each SOC entry is itself a dictionary with two keys: Voltage and Current. Each maps to a list of length 99.
[4]
pulse_train_1['36%SOC'].keys()
dict_keys(['Voltage', 'Current'])
time: 2.21 ms (started: 2023-10-22 09:43:48 +08:00)
[5]
print(len(pulse_train_1['36%SOC']['Voltage']),len(pulse_train_1['36%SOC']['Current']))
99 99
time: 399 µs (started: 2023-10-22 09:43:48 +08:00)

Plot the pulse current and the corresponding voltage:
[6]
plt.figure(figsize=(12, 4.5))
plt.subplot(1, 2, 1)
plt.plot(pulse_train_1['36%SOC']['Current'], 'r')
plt.title('Pulse Current')
plt.xlabel('Time(s)')
plt.ylabel('Current(A)')

plt.subplot(1, 2, 2)
plt.plot(pulse_train_1['36%SOC']['Voltage'], 'b')
plt.title('Pulse Voltage')
plt.xlabel('Time(s)')
plt.ylabel('Voltage(V)')
plt.show()

Load the target EIS data. The file is again a nested dictionary keyed by SOC, in the same way as the pulse data.
[7]
# show one of the target EIS data
with open('/bohr/ai4spulseeis-lr97/v3/train_datasets/train_eis_1.pkl', 'rb') as fp:
    eis_train_1 = pickle.load(fp, encoding='bytes')

eis_train_1.keys()
dict_keys(['0%SOC', '2%SOC', '4%SOC', '6%SOC', '8%SOC', '10%SOC', '12%SOC', '14%SOC', '16%SOC', '18%SOC', '20%SOC', '22%SOC', '24%SOC', '26%SOC', '28%SOC', '30%SOC', '32%SOC', '34%SOC', '36%SOC', '38%SOC', '40%SOC', '42%SOC', '44%SOC', '46%SOC', '48%SOC', '50%SOC', '52%SOC', '54%SOC', '56%SOC', '58%SOC', '60%SOC', '62%SOC', '64%SOC', '66%SOC', '68%SOC', '70%SOC', '72%SOC', '74%SOC', '76%SOC', '78%SOC', '80%SOC', '82%SOC', '84%SOC', '86%SOC', '88%SOC', '90%SOC', '92%SOC', '94%SOC', '96%SOC'])
time: 4.97 ms (started: 2023-10-22 09:43:49 +08:00)

Each SOC entry is a dictionary with two keys: Real and Imaginary, the real and imaginary parts of the impedance. Each maps to a list of length 51.
[8]
eis_train_1['36%SOC'].keys()
dict_keys(['Real', 'Imaginary'])
time: 2.21 ms (started: 2023-10-22 09:43:49 +08:00)
[9]
print(len(eis_train_1['36%SOC']['Real']),len(eis_train_1['36%SOC']['Imaginary']))
51 51
time: 420 µs (started: 2023-10-22 09:43:49 +08:00)
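
As an aside (not part of the original analysis), the separate Real and Imaginary lists can be combined into one complex NumPy array, which makes magnitude and phase easy to read off (this assumes the cells above have been run):

z = np.array(eis_train_1['36%SOC']['Real']) + 1j * np.array(eis_train_1['36%SOC']['Imaginary'])
print(z.shape)                      # (51,)
print(np.abs(z)[:3])                # |Z|, magnitude (mOhm, per the axis labels below)
print(np.angle(z, deg=True)[:3])    # phase angle (degrees)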

Plot one Nyquist curve of the impedance; here the x-axis is the real part (Real) and the y-axis the imaginary part (Imaginary):
[10]
plt.plot(eis_train_1['36%SOC']['Real'], eis_train_1['36%SOC']['Imaginary'], 'mo')
plt.title('EIS')
plt.xlabel('Z’(mΩ)')
plt.ylabel('Z’’(mΩ)')
plt.show()

Plotting the impedance at every SOC on one figure gives the complete set of EIS spectra:
[11]
soc_lst = [f'{i*2}%SOC' for i in range(49)]
for soc in soc_lst:
    plt.plot(eis_train_1[soc]['Real'], eis_train_1[soc]['Imaginary'], 'o-', markerfacecolor='white')
plt.title('EIS for all SOC')
plt.xlabel('Z’(mΩ)')
plt.ylabel('Z’’(mΩ)')
plt.show()

Build the LSTM model
[12]
# define encoder
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        # 2 input features (V, I), hidden size 256, 2 stacked LSTM layers
        self.lstm = nn.LSTM(2, 256, batch_first=True, num_layers=2)

    def forward(self, x):
        # keep only the final hidden and cell states
        _, (h_n, c_n) = self.lstm(x)
        return h_n, c_n
time: 640 µs (started: 2023-10-22 09:43:49 +08:00)
[13]
# define decoder
class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.lstm = nn.LSTM(2, 256, batch_first=True, num_layers=2)
        self.dense = nn.Linear(256, 2)  # map hidden states to (Real, Imaginary)

    def forward(self, x, hidden):
        x, _ = self.lstm(x, hidden)
        x = self.dense(x)
        return x
time: 698 µs (started: 2023-10-22 09:43:49 +08:00)
[14]
# define Encoder-Decoder model
class EncoderDecoder(nn.Module):
    def __init__(self, encoder, decoder):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, encoder_input, decoder_input):
        hidden = self.encoder(encoder_input)
        output = self.decoder(decoder_input, hidden)
        return output
time: 596 µs (started: 2023-10-22 09:43:49 +08:00)
[15]
# build model
encoder = Encoder()
decoder = Decoder()
model = EncoderDecoder(encoder, decoder)
print(model)
EncoderDecoder(
  (encoder): Encoder(
    (lstm): LSTM(2, 256, num_layers=2, batch_first=True)
  )
  (decoder): Decoder(
    (lstm): LSTM(2, 256, num_layers=2, batch_first=True)
    (dense): Linear(in_features=256, out_features=2, bias=True)
  )
)
time: 16.1 ms (started: 2023-10-22 09:43:49 +08:00)
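
Before feeding real data, the wiring can be smoke-tested with dummy tensors shaped like one pulse sample and one spectrum (a quick sanity check, not part of the original pipeline; it assumes the cell above has run):

dummy_pulse = torch.randn(1, 99, 2)   # [batch, time steps, (V, I)]
dummy_query = torch.ones(1, 51, 2)    # constant decoder input, as used later
with torch.no_grad():
    out = model(dummy_pulse, dummy_query)
print(out.shape)                      # expected: torch.Size([1, 51, 2])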

Encoder input generation function
[16]
def encoder_input(datasets, mode):
    encoder_input = []
    soc_lst = [f'{i*10}%SOC' for i in range(10)]
    for i in datasets:
        with open(f'/bohr/ai4spulseeis-lr97/v3/{mode}_datasets/{mode}_pulse_{i}.pkl', 'rb') as fp:
            pulse_data = pickle.load(fp, encoding='bytes')
        for soc in soc_lst:
            Vol = pulse_data[soc]['Voltage']
            Cur = pulse_data[soc]['Current']
            # stack voltage and current into a [1, 99, 2] tensor
            tensor_vol = torch.cat((torch.tensor(Vol).unsqueeze(-1), torch.tensor(Cur).unsqueeze(-1)), dim=-1).view(1, 99, 2)
            encoder_input.append(tensor_vol)

    return encoder_input
time: 849 µs (started: 2023-10-22 09:43:49 +08:00)

Generate encoder_input for the training, validation, and test splits. Note: only four of the six training-set data files are used for training; of the remaining two, one serves as validation and one as test. The competition test set itself is therefore not used here.
[17]
# prepare input data
train_baty = [1,2,5,6]   # training set
validation_baty = [4]    # validation set
test_baty = [3]          # test set

encoder_input_train = encoder_input(datasets = train_baty, mode = 'train')
encoder_input_val = encoder_input(datasets = validation_baty, mode = 'train')
encoder_input_test = encoder_input(datasets = test_baty, mode = 'train')
print(encoder_input_train[0].shape, len(encoder_input_train), len(encoder_input_val), len(encoder_input_test))
torch.Size([1, 99, 2]) 40 10 10
time: 9.79 ms (started: 2023-10-22 09:43:49 +08:00)

Decoder input and target generation function
[18]
def decoder_input_target(datasets, mode):
    decoder_input = []
    decoder_target = []
    soc_lst = [f'{i*10}%SOC' for i in range(10)]
    EIS_list = []
    for k in datasets:
        with open(f'/bohr/ai4spulseeis-lr97/v3/{mode}_datasets/{mode}_eis_{k}.pkl', 'rb') as fp:
            eis_data = pickle.load(fp, encoding='bytes')
        for soc in soc_lst:
            EIS_tot = [[], []]
            re = eis_data[soc]['Real']
            im = eis_data[soc]['Imaginary']

            EIS_tot[0] = re
            EIS_tot[1] = im
            EIS_list.append(EIS_tot)
    EIS_list = [np.array(t).squeeze().T for t in EIS_list]
    decoder_target = [torch.tensor(t).float().view(1, 51, 2) for t in EIS_list]
    # the decoder input is a constant all-ones sequence (no teacher forcing)
    decoder_input = [torch.ones(1, 51, 2) for t in EIS_list]
    return decoder_input, decoder_target
time: 1.07 ms (started: 2023-10-22 09:43:49 +08:00)
[19]
decoder_input_train, decoder_target_train = decoder_input_target(datasets = train_baty, mode = 'train')
decoder_input_val, decoder_target_val = decoder_input_target(datasets = validation_baty, mode = 'train')
decoder_input_test, decoder_target_test = decoder_input_target(datasets = test_baty, mode = 'train')

print(decoder_input_train[0].shape, len(decoder_input_train), len(decoder_target_train))
torch.Size([1, 51, 2]) 40 40
time: 39.2 ms (started: 2023-10-22 09:43:49 +08:00)

Check whether a GPU is available
[20]
# check whether a GPU is available; if so, move the model and all data to it
if torch.cuda.is_available():
    model = model.cuda()
    for i in range(len(encoder_input_train)):
        encoder_input_train[i] = encoder_input_train[i].cuda()
        decoder_input_train[i] = decoder_input_train[i].cuda()
        decoder_target_train[i] = decoder_target_train[i].cuda()
    for i in range(len(encoder_input_test)):
        encoder_input_test[i] = encoder_input_test[i].cuda()
        decoder_input_test[i] = decoder_input_test[i].cuda()
        decoder_target_test[i] = decoder_target_test[i].cuda()
    for i in range(len(encoder_input_val)):
        encoder_input_val[i] = encoder_input_val[i].cuda()
        decoder_input_val[i] = decoder_input_val[i].cuda()
        decoder_target_val[i] = decoder_target_val[i].cuda()
    print("CUDA = True")
CUDA = True
time: 5.73 s (started: 2023-10-22 09:43:49 +08:00)

Define the loss function and optimizer.

We use the mean-squared-error loss and the Adam optimizer. The number of epochs is set to 3000 and the batch size to 4.
[21]
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', factor=0.8, patience=40, verbose=True, min_lr=1e-6)
num_epochs = 3000
batch_size = 4
time: 972 µs (started: 2023-10-22 09:43:55 +08:00)

Define the evaluation function
[22]
# run prediction on a dataset and compute its RMSE
def evaluation(encoder_input, decoder_input, decoder_target, batch_size=2):
    model.eval()
    outputs_eval = []
    loss_tol = 0.0
    for batch_idx in range(0, len(encoder_input), batch_size):
        batch_encoder_input = torch.cat(encoder_input[batch_idx:batch_idx+batch_size], dim=0)
        batch_decoder_input = torch.cat(decoder_input[batch_idx:batch_idx+batch_size], dim=0)
        batch_decoder_target = torch.cat(decoder_target[batch_idx:batch_idx+batch_size], dim=0)

        with torch.no_grad():
            output = model(batch_encoder_input, batch_decoder_input)
            loss = criterion(output, batch_decoder_target)
            loss_tol += loss.item() * batch_encoder_input.shape[0]
            outputs_eval.append(output)

    # NOTE: this divides by the number of batches, not the number of samples,
    # so the value is batch_size times the per-sample mean MSE
    mean_mse = loss_tol / (len(encoder_input) / batch_size)
    rmse = mean_mse ** 0.5
    return outputs_eval, rmse
time: 1.02 ms (started: 2023-10-22 09:43:55 +08:00)

Model training

The training losses are plotted on a log scale afterwards.
[23]
train_losses = []  # training loss per epoch
val_rmses = []     # validation-set RMSE (recorded every 50 epochs)
for epoch in range(num_epochs):
    epoch_loss = 0.0
    train_size = len(encoder_input_train)
    for batch_idx in range(0, train_size, batch_size):
        batch_encoder_input = torch.cat(encoder_input_train[batch_idx:batch_idx+batch_size], dim=0)
        batch_decoder_input = torch.cat(decoder_input_train[batch_idx:batch_idx+batch_size], dim=0)
        batch_decoder_target = torch.cat(decoder_target_train[batch_idx:batch_idx+batch_size], dim=0)

        optimizer.zero_grad()
        model.train()  # switch from eval back to training mode
        outputs = model(batch_encoder_input, batch_decoder_input)
        loss = criterion(outputs, batch_decoder_target)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item() * batch_encoder_input.shape[0]

    scheduler.step(epoch_loss / (train_size / batch_size))  # FIXME: should have been called with the validation loss
    train_losses.append(epoch_loss / (train_size / batch_size))
    if (epoch + 1) % 50 == 0:
        mean_mse = epoch_loss / (train_size / batch_size)
        rmse = mean_mse ** 0.5
        _, rmse_val = evaluation(encoder_input=encoder_input_val,
                                 decoder_input=decoder_input_val,
                                 decoder_target=decoder_target_val, batch_size=4)
        val_rmses.append(rmse_val)
        print(f"Epoch [{epoch + 1}/{num_epochs}] | Loss: {mean_mse:.4f} | Train RMSE: {rmse:.4f} | Validation RMSE: {rmse_val:.4f}")
Epoch [50/3000] | Loss: 224.4293 | Train RMSE: 14.9810 | Validation RMSE: 15.2743
Epoch [100/3000] | Loss: 75.2902 | Train RMSE: 8.6770 | Validation RMSE: 8.8636
Epoch [150/3000] | Loss: 31.5223 | Train RMSE: 5.6145 | Validation RMSE: 5.6384
Epoch [200/3000] | Loss: 21.3610 | Train RMSE: 4.6218 | Validation RMSE: 4.4931
Epoch [250/3000] | Loss: 19.7321 | Train RMSE: 4.4421 | Validation RMSE: 4.1390
Epoch [300/3000] | Loss: 18.2552 | Train RMSE: 4.2726 | Validation RMSE: 4.0464
Epoch [350/3000] | Loss: 16.9251 | Train RMSE: 4.1140 | Validation RMSE: 3.6089
Epoch [400/3000] | Loss: 12.5220 | Train RMSE: 3.5386 | Validation RMSE: 3.1744
Epoch [450/3000] | Loss: 11.1109 | Train RMSE: 3.3333 | Validation RMSE: 3.0190
Epoch [500/3000] | Loss: 8.2681 | Train RMSE: 2.8754 | Validation RMSE: 2.2310
Epoch [550/3000] | Loss: 5.1307 | Train RMSE: 2.2651 | Validation RMSE: 1.7439
Epoch [600/3000] | Loss: 4.2887 | Train RMSE: 2.0709 | Validation RMSE: 1.5607
Epoch [650/3000] | Loss: 4.5426 | Train RMSE: 2.1313 | Validation RMSE: 1.5011
Epoch [700/3000] | Loss: 4.4476 | Train RMSE: 2.1089 | Validation RMSE: 1.3503
Epoch [750/3000] | Loss: 3.0489 | Train RMSE: 1.7461 | Validation RMSE: 1.1061
Epoch [800/3000] | Loss: 3.1009 | Train RMSE: 1.7609 | Validation RMSE: 1.0178
Epoch [850/3000] | Loss: 2.5676 | Train RMSE: 1.6024 | Validation RMSE: 0.8600
Epoch [900/3000] | Loss: 2.4728 | Train RMSE: 1.5725 | Validation RMSE: 0.8107
Epoch [950/3000] | Loss: 2.4183 | Train RMSE: 1.5551 | Validation RMSE: 0.8006
Epoch [1000/3000] | Loss: 3.1542 | Train RMSE: 1.7760 | Validation RMSE: 1.1386
Epoch 01001: reducing learning rate of group 0 to 8.0000e-05.
Epoch 01042: reducing learning rate of group 0 to 6.4000e-05.
Epoch [1050/3000] | Loss: 2.4849 | Train RMSE: 1.5763 | Validation RMSE: 0.8717
Epoch 01083: reducing learning rate of group 0 to 5.1200e-05.
Epoch [1100/3000] | Loss: 2.2984 | Train RMSE: 1.5160 | Validation RMSE: 0.7976
Epoch [1150/3000] | Loss: 2.3269 | Train RMSE: 1.5254 | Validation RMSE: 0.7261
Epoch [1200/3000] | Loss: 2.1336 | Train RMSE: 1.4607 | Validation RMSE: 0.7393
Epoch [1250/3000] | Loss: 2.0727 | Train RMSE: 1.4397 | Validation RMSE: 0.7150
Epoch [1300/3000] | Loss: 2.2970 | Train RMSE: 1.5156 | Validation RMSE: 0.7394
Epoch 01308: reducing learning rate of group 0 to 4.0960e-05.
Epoch 01349: reducing learning rate of group 0 to 3.2768e-05.
Epoch [1350/3000] | Loss: 1.9926 | Train RMSE: 1.4116 | Validation RMSE: 0.7172
Epoch [1400/3000] | Loss: 1.9580 | Train RMSE: 1.3993 | Validation RMSE: 0.7057
Epoch [1450/3000] | Loss: 2.0317 | Train RMSE: 1.4254 | Validation RMSE: 0.7407
Epoch 01458: reducing learning rate of group 0 to 2.6214e-05.
Epoch [1500/3000] | Loss: 1.8997 | Train RMSE: 1.3783 | Validation RMSE: 0.6820
Epoch [1550/3000] | Loss: 1.9264 | Train RMSE: 1.3879 | Validation RMSE: 0.6753
Epoch [1600/3000] | Loss: 1.9605 | Train RMSE: 1.4002 | Validation RMSE: 0.6695
Epoch [1650/3000] | Loss: 1.8935 | Train RMSE: 1.3760 | Validation RMSE: 0.6695
Epoch [1700/3000] | Loss: 1.9752 | Train RMSE: 1.4054 | Validation RMSE: 1.0469
Epoch 01716: reducing learning rate of group 0 to 2.0972e-05.
Epoch [1750/3000] | Loss: 1.8278 | Train RMSE: 1.3520 | Validation RMSE: 0.6675
Epoch 01758: reducing learning rate of group 0 to 1.6777e-05.
Epoch 01800: reducing learning rate of group 0 to 1.3422e-05.
Epoch [1800/3000] | Loss: 1.7988 | Train RMSE: 1.3412 | Validation RMSE: 0.6698
Epoch 01842: reducing learning rate of group 0 to 1.0737e-05.
Epoch [1850/3000] | Loss: 1.7468 | Train RMSE: 1.3217 | Validation RMSE: 0.6728
Epoch 01884: reducing learning rate of group 0 to 8.5899e-06.
Epoch [1900/3000] | Loss: 1.7228 | Train RMSE: 1.3126 | Validation RMSE: 0.6836
Epoch [1950/3000] | Loss: 1.7155 | Train RMSE: 1.3098 | Validation RMSE: 0.6970
Epoch [2000/3000] | Loss: 1.7082 | Train RMSE: 1.3070 | Validation RMSE: 0.7056
Epoch [2050/3000] | Loss: 1.7017 | Train RMSE: 1.3045 | Validation RMSE: 0.7136
Epoch [2100/3000] | Loss: 1.6956 | Train RMSE: 1.3022 | Validation RMSE: 0.7207
Epoch [2150/3000] | Loss: 1.6900 | Train RMSE: 1.3000 | Validation RMSE: 0.7272
Epoch [2200/3000] | Loss: 1.6847 | Train RMSE: 1.2980 | Validation RMSE: 0.7331
Epoch [2250/3000] | Loss: 1.6799 | Train RMSE: 1.2961 | Validation RMSE: 0.7388
Epoch [2300/3000] | Loss: 1.6757 | Train RMSE: 1.2945 | Validation RMSE: 0.7442
Epoch [2350/3000] | Loss: 1.6720 | Train RMSE: 1.2930 | Validation RMSE: 0.7496
Epoch [2400/3000] | Loss: 1.6686 | Train RMSE: 1.2917 | Validation RMSE: 0.7549
Epoch [2450/3000] | Loss: 1.7196 | Train RMSE: 1.3114 | Validation RMSE: 0.6651
Epoch 02496: reducing learning rate of group 0 to 6.8719e-06.
Epoch [2500/3000] | Loss: 1.6384 | Train RMSE: 1.2800 | Validation RMSE: 0.7547
Epoch 02540: reducing learning rate of group 0 to 5.4976e-06.
Epoch [2550/3000] | Loss: 1.6473 | Train RMSE: 1.2835 | Validation RMSE: 0.8005
Epoch [2600/3000] | Loss: 1.6746 | Train RMSE: 1.2941 | Validation RMSE: 0.8196
Epoch [2650/3000] | Loss: 1.6180 | Train RMSE: 1.2720 | Validation RMSE: 0.7395
Epoch 02658: reducing learning rate of group 0 to 4.3980e-06.
Epoch [2700/3000] | Loss: 1.6056 | Train RMSE: 1.2671 | Validation RMSE: 0.7326
Epoch 02702: reducing learning rate of group 0 to 3.5184e-06.
Epoch 02744: reducing learning rate of group 0 to 2.8147e-06.
Epoch [2750/3000] | Loss: 1.5866 | Train RMSE: 1.2596 | Validation RMSE: 0.7217
Epoch 02786: reducing learning rate of group 0 to 2.2518e-06.
Epoch [2800/3000] | Loss: 1.5793 | Train RMSE: 1.2567 | Validation RMSE: 0.7170
Epoch [2850/3000] | Loss: 1.5784 | Train RMSE: 1.2563 | Validation RMSE: 0.7166
Epoch [2900/3000] | Loss: 1.5774 | Train RMSE: 1.2559 | Validation RMSE: 0.7164
Epoch [2950/3000] | Loss: 1.5763 | Train RMSE: 1.2555 | Validation RMSE: 0.7162
Epoch [3000/3000] | Loss: 1.5752 | Train RMSE: 1.2551 | Validation RMSE: 0.7161
time: 6min 9s (started: 2023-10-22 09:43:55 +08:00)
[24]
# plot the training loss (log scale) and the validation RMSE over the epochs
plt.figure(figsize=(12, 4.5))
losses_log = [math.log(t) for t in train_losses]
plt.subplot(1, 2, 1)
plt.plot(losses_log, label='Training loss')
plt.xlabel('Epochs')
plt.ylabel('Log MSE Loss')
plt.legend()

epochs_val = [(i + 1) * 50 for i in range(len(val_rmses))]  # validation RMSE was recorded every 50 epochs
plt.subplot(1, 2, 2)
plt.plot(epochs_val, val_rmses, color='orange', label='Validation RMSE')
plt.xlabel('Epochs')
plt.ylabel('RMSE')
plt.legend()
plt.show()

Predict the EIS for the test set
[25]
outputs_eval, _ = evaluation(encoder_input=encoder_input_test,
                             decoder_input=decoder_input_test,
                             decoder_target=decoder_target_test, batch_size=4)
print('outputs:', len(outputs_eval), outputs_eval[0].shape)
outputs: 3 torch.Size([4, 51, 2])
time: 13.6 ms (started: 2023-10-22 09:50:05 +08:00)
[26]
predict_outputs = torch.cat(outputs_eval, dim=0).view(1, len(encoder_input_test)*51, 2).tolist()
predict_data = np.array(predict_outputs).squeeze().T
print(predict_data.shape)
(2, 510)
time: 1.39 ms (started: 2023-10-22 09:50:05 +08:00)

Generate submission.csv
[27]
results_data = {}
results_data['test_data_number'] = [1 for i in range(10*51)]
results_data['SOC(%)'] = []

for j in range(len(test_baty)):
    for i in range(10):
        results_data['SOC(%)'] += [i*10 for _ in range(51)]

results_data['EIS_real'] = predict_data[0].tolist()
results_data['EIS_imaginary'] = predict_data[1].tolist()

data_submit = pd.DataFrame(results_data)
data_submit.to_csv('subm_lstm_infp.csv', index=False, header=True)
print(data_submit)
     test_data_number  SOC(%)   EIS_real  EIS_imaginary
0                   1       0  16.141947      -8.554930
1                   1       0  16.080833      -6.592542
2                   1       0  16.138363      -4.937032
3                   1       0  16.134314      -3.488502
4                   1       0  16.287409      -2.331476
..                ...     ...        ...            ...
505                 1      90  32.282169       1.632849
506                 1      90  32.428963       1.941551
507                 1      90  32.574966       2.120454
508                 1      90  32.835133       2.208242
509                 1      90  33.004349       2.241853

[510 rows x 4 columns]
time: 38.8 ms (started: 2023-10-22 09:50:05 +08:00)

Display the predicted spectra
[28]
for i in range(10):
    plt.plot(predict_data[0][51*i:51*(i+1)], predict_data[1][51*i:51*(i+1)], 'o-')

plt.title('predict EIS for test dataset 1')
plt.xlabel('Z_re')
plt.ylabel('Z_im')
plt.show()

Compare the target and predicted data on the test set
[29]
target_outputs = torch.cat(decoder_target_test, dim=0).view(1, len(encoder_input_test)*51, 2).tolist()
target_data = np.array(target_outputs).squeeze().T
print(target_data.shape)
(2, 510)
time: 1.43 ms (started: 2023-10-22 09:50:05 +08:00)

Compute the RMSE for each test dataset (only one is used here)
[30]
distance = [[] for i in range(len(test_baty))]

num = list(range(len(test_baty)*10*51))
for j in range(len(test_baty)):
    for i in num[j*10*51:(j+1)*10*51]:
        diff0 = predict_data[0][i] - target_data[0][i]
        diff1 = predict_data[1][i] - target_data[1][i]
        distance[j].append(diff0 ** 2 + diff1 ** 2)

mse = []
rmse = []
for j in range(len(test_baty)):
    mse.append(np.mean(distance[j]))
    rmse.append(mse[j] ** 0.5)
    print(f'test data {j+1} -- RMSE:{rmse[j]:.4f}')
test data 1 -- RMSE:1.0506
time: 2.78 ms (started: 2023-10-22 09:50:05 +08:00)

Plot the target and predicted test-set spectra side by side for comparison
[31]
for j in range(len(test_baty)):
    plt.figure(figsize=(9.5, 6.5))
    for i in range(10):
        plt.plot(predict_data[0][51*(i+j*10):51*(i+1+j*10)], predict_data[1][51*(i+j*10):51*(i+1+j*10)],
                 'o-', label=f'predict soc={i*10}%')
        plt.plot(target_data[0][51*(i+j*10):51*(i+1+j*10)], target_data[1][51*(i+j*10):51*(i+1+j*10)],
                 'o-', markerfacecolor='white', label=f'target soc={i*10}%')

    plt.title(f'Comparison of EIS for test dataset {j+1}')
    plt.xlabel('Z_re')
    plt.ylabel('Z_im')
    plt.legend(loc=2, bbox_to_anchor=(1.05, 1.0), borderaxespad=0.)
    plt.show()