AI + Battery Cells | Predicting the SOH and RUL of Lithium Iron Phosphate Batteries with a PINN-Based Fused Model
Quick start: click the 开始连接 (Start Connection) button above, select the d2l-ai:pytorch image and any GPU machine configuration to begin.
Background
For the prognostics and health management (PHM) of lithium-ion (Li-ion) batteries, many models have been established to characterize the aging process. Existing empirical or physical models can reveal important information about the aging dynamics, but there is no general and flexible way to fuse the information these models represent. The Physics-Informed Neural Network (PINN) is an effective tool for fusing empirical or physical dynamic models with data-driven models. To make full use of the various sources of information, the paper proposes a PINN-based model fusion scheme. This is done by developing a semi-empirical, semi-physical partial differential equation (PDE) to model the degradation dynamics of Li-ion batteries. When little prior knowledge about the dynamics is available, the paper uses a data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic model. The dynamic information is then fused with the information mined by a surrogate neural network within the PINN framework. In addition, an uncertainty-based adaptive weighting method is adopted to balance the multiple learning tasks when training the PINN. The proposed method is verified for SOH (state of health) and RUL (remaining useful life) prediction on a public cycling dataset of lithium iron phosphate (LFP)/graphite cells.
Model
A dynamic PDE model, whether built from prior knowledge or approximated by a neural network, may be difficult to solve directly. Relying on the well-known ability of neural networks to act as universal function approximators, we employ another neural network, parameterized by Φ, to approximate the hidden solution of the system, u(x, t; Φ). This network is also called the surrogate network. With this neural-network solver, none of the involved partial derivatives needs to be accessed or approximated directly. Fusing the surrogate neural network with an explicit PDE model or with a DeepHPM forms the PINN.
A typical PINN consists of three modules: the dynamic model, the surrogate neural network, and the automatic differentiator. The dynamic model G(x, t, u, u_x, u_xx, u_xxx, ...; θ) distills the mechanism governing the dynamics of the degrading system and can be approximated by a DeepHPM. The surrogate neural network u(x, t; Φ) approximates the hidden solution u(x, t) of the dynamic model, and the automatic differentiator (AutoDiff) computes the values of all partial derivatives that are fed into the dynamic model.
The frameworks involved in this case study are a purely data-driven plain neural network (the baseline), a PINN with the Verhulst equation as its dynamic model (PINN-Verhulst), and a PINN with a DeepHPM as its dynamic model (PINN-DeepHPM). These frameworks are shown in Figures 1, 2, and 3, respectively. To make the comparison of the learning frameworks clearer, the plain neural network in the baseline is also drawn as a surrogate neural network, even though there is no PDE to solve in that case. The basic structure of the surrogate neural network is shown in Figure 4: its hidden layers are all fully connected (FC) layers, and the hyperbolic tangent is used as the activation function because of its differentiability. Unless stated otherwise, all surrogate neural networks in Figures 1, 2, and 3 use this structure, and the DeepHPM network uses the same structure as the surrogate network.
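To make the structure just described concrete, here is a minimal sketch (not the author's Neural_Net code; the layer widths are placeholders) of a fully connected tanh surrogate network together with the AutoDiff step that produces the partial derivative of its output with respect to the cycle input:

import torch
import torch.nn as nn

# Sketch of the surrogate network of Figure 4: fully connected hidden layers
# with tanh activations (chosen for differentiability). Widths are placeholders.
class SurrogateNet(nn.Module):
    def __init__(self, inputs_dim, outputs_dim, hidden=(32, 32)):
        super().__init__()
        blocks, dim = [], inputs_dim
        for h in hidden:
            blocks += [nn.Linear(dim, h), nn.Tanh()]
            dim = h
        blocks.append(nn.Linear(dim, outputs_dim))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)

# The "AutoDiff" module is ordinary reverse-mode automatic differentiation:
# differentiate the surrogate output u with respect to the cycle/time input t.
surrogate = SurrogateNet(inputs_dim=2, outputs_dim=1)
s = torch.rand(8, 1)                      # a degradation-related feature
t = torch.rand(8, 1, requires_grad=True)  # normalized cycle number ("time")
u = surrogate(torch.cat((s, t), dim=1))   # surrogate solution u(s, t; Phi)
u_t = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u), create_graph=True)[0]

Derivatives such as u_t obtained this way are the quantities passed to the dynamic model G (or to the DeepHPM network) to form the PDE residual.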
Dataset
The dataset used here consists of three batches of commercial LFP/graphite cells manufactured by A123 Systems, in which a total of 124 cells were cycled to failure under dozens of different fast-charging protocols. The nominal capacity of a cell is 1.1 Ah, so a 1C charge/discharge rate corresponds to a current of 1.1 A. Each charging protocol is labeled with a string of the form "C1(Q1)-C2": the cell is first charged with current C1 from 0% SoC up to a state of charge of Q1 (in %). At Q1 the charging current switches to C2, which charges the cell to 80% SoC. All cells are then charged from 80% to 100% SoC in constant-current constant-voltage (CC-CV) mode, with an upper cutoff potential of 3.6 V and a cutoff current of C/50. Likewise, all cells are discharged in CC-CV mode at 4C, with a lower cutoff potential of 2.0 V and a cutoff current of C/50.
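If you want to handle these protocol labels programmatically, a small helper like the following (a hypothetical utility, not part of the original notebook) can split a "C1(Q1)-C2" string into its three components:

import re

def parse_protocol(label):
    """Parse a charging-protocol label of the form 'C1(Q1)-C2',
    e.g. '5.4C(40%)-3.6C', into (C1, Q1, C2)."""
    m = re.fullmatch(r"([\d.]+)C\((\d+)%\)-([\d.]+)C", label)
    if m is None:
        raise ValueError("Unrecognized protocol label: " + label)
    return float(m.group(1)), int(m.group(2)), float(m.group(3))

print(parse_protocol("5.4C(40%)-3.6C"))  # -> (5.4, 40, 3.6)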
To better capture the information in the data, we preprocess the dataset and extract quantities that we consider more indicative of the SOH to use as inputs. These include the two parameters of a quadratic fit describing the discharge-capacity difference over the 2.7-3.3 V voltage window, the average temperature, the charging time, and several other quantities. The public SeversonBattery dataset is used here. For SOH and RUL, two cell groups (A, B) and three cell groups (A, B, C), respectively, are used for separate training and prediction.
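The exact preprocessing lives in the dataset files, but a rough sketch of how such features could be computed from per-cycle measurements (assuming the discharge capacity has been interpolated onto a common voltage grid; all names here are placeholders) is:

import numpy as np

def extract_features(voltage_grid, q_cycle, q_ref, temperature, charge_time):
    """Rough sketch of the per-cycle features described above.

    voltage_grid : voltages (V) on which discharge capacity was interpolated
    q_cycle, q_ref : discharge-capacity curves Q(V) of a later cycle and a reference cycle
    temperature : temperature samples recorded during the cycle
    charge_time : time needed to charge the cell in that cycle
    """
    # Capacity-difference curve restricted to the 2.7-3.3 V window
    mask = (voltage_grid >= 2.7) & (voltage_grid <= 3.3)
    delta_q = q_cycle[mask] - q_ref[mask]
    # Two coefficients of a quadratic fit to dQ(V) serve as degradation indicators
    a2, a1, _ = np.polyfit(voltage_grid[mask], delta_q, deg=2)
    return np.array([a2, a1, np.mean(temperature), charge_time])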
First, we demonstrate the SOH prediction for case A using the DeepHPM-AdpBal model (DeepHPM with the adaptive weighting method). The training and testing code for the Baseline, DeepHPM-sum, and Verhulst models is largely the same, so it is not shown here to avoid repetition; if you are interested, you can download the dataset used by this notebook and study it in detail. After training and testing, you will obtain the same comparison plots of the different models as in the paper.
The baseline referred to in the paper is a plain neural network trained on the same data.
This notebook is adapted from GitHub: WenPengfei0823/PINN-Battery-Prognostics. The data and model algorithms come from the paper "Fusing Models for Prognostics and Health Management of Lithium-Ion Batteries Based on Physics-Informed Neural Networks".
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting scikit-learn
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/bd/05/e561bc99a615b5c099c7a9355409e5e57c525a108f1c2e156abb005b90a6/scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.8 MB)
Collecting joblib>=0.11
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/10/40/d551139c85db202f1f384ba8bcf96aca2f329440a844f924c8a0040b6d02/joblib-1.3.2-py3-none-any.whl (302 kB)
Requirement already satisfied: scipy>=1.1.0 in /opt/miniconda/lib/python3.7/site-packages (from scikit-learn) (1.7.3)
Collecting threadpoolctl>=2.0.0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/61/cf/6e354304bcb9c6413c4e02a747b600061c21d38ba51e7e544ac7bc66aecc/threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Requirement already satisfied: numpy>=1.14.6 in /opt/miniconda/lib/python3.7/site-packages (from scikit-learn) (1.21.5)
Installing collected packages: threadpoolctl, joblib, scikit-learn
Successfully installed joblib-1.3.2 scikit-learn-1.0.2 threadpoolctl-3.1.0
Load the configuration data for case A:
The author determined the optimal number of layers and neurons by studying the baseline model, and applies those settings to this PINN-DeepHPM model.
Several helper functions used here are taken from the author's function.py. For details, see: function
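The DeepHPM class shown below relies on Neural_Net, standardize_tensor and inverse_standardize_tensor from that file. For readability, here is a minimal sketch of the behavior assumed of the two scaling helpers (the actual implementation in function.py may differ in its details):

import torch

def standardize_tensor(x, mode='transform', mean=None, std=None):
    # In 'fit' mode the statistics would be computed from x itself;
    # in 'transform' mode the supplied mean/std are applied.
    if mode == 'fit':
        mean, std = x.mean(dim=(0, 1)), x.std(dim=(0, 1))
    return (x - mean) / std, mean, std

def inverse_standardize_tensor(x_norm, mean, std):
    # Undo the z-score standardization.
    return x_norm * std + mean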
Here we show the DeepHPM model class:
import torch
import torch.nn as nn

# Neural_Net, standardize_tensor and inverse_standardize_tensor come from the
# author's function.py (see the link above).
from function import Neural_Net, standardize_tensor, inverse_standardize_tensor


class DeepHPMNN(nn.Module):
    def __init__(self, seq_len, inputs_dim, outputs_dim, layers, scaler_inputs, scaler_targets,
                 inputs_dynamical, inputs_dim_dynamical):
        super(DeepHPMNN, self).__init__()

        self.seq_len, self.inputs_dim, self.outputs_dim = seq_len, inputs_dim, outputs_dim
        self.scaler_inputs, self.scaler_targets = scaler_inputs, scaler_targets

        # 'inputs_dynamical' is a string naming the tensors fed to the dynamical network,
        # e.g. "U" or "U, U_s"; several tensors are concatenated along the feature axis.
        if len(inputs_dynamical.split(',')) <= 1:
            self.inputs_dynamical = inputs_dynamical
        else:
            self.inputs_dynamical = 'torch.cat((' + inputs_dynamical + '), dim=2)'
        self.inputs_dim_dynamical = eval(inputs_dim_dynamical)

        # Surrogate network: approximates the hidden solution u(s, t; Phi).
        self.surrogateNN = Neural_Net(
            seq_len=self.seq_len,
            inputs_dim=self.inputs_dim,
            outputs_dim=self.outputs_dim,
            layers=layers
        )
        # Dynamical network (DeepHPM): approximates the governing dynamics G.
        self.dynamicalNN = Neural_Net(
            seq_len=self.seq_len,
            inputs_dim=self.inputs_dim_dynamical,
            outputs_dim=1,
            layers=layers
        )

    def forward(self, inputs):
        # Split the inputs into degradation-related features s and the cycle index t.
        s = inputs[:, :, 0: self.inputs_dim - 1]
        t = inputs[:, :, self.inputs_dim - 1:]
        s.requires_grad_(True)
        s_norm, _, _ = standardize_tensor(s, mode='transform', mean=self.scaler_inputs[0][0: self.inputs_dim - 1],
                                          std=self.scaler_inputs[1][0: self.inputs_dim - 1])
        s_norm.requires_grad_(True)
        t.requires_grad_(True)
        t_norm, _, _ = standardize_tensor(t, mode='transform', mean=self.scaler_inputs[0][self.inputs_dim - 1:],
                                          std=self.scaler_inputs[1][self.inputs_dim - 1:])
        t_norm.requires_grad_(True)

        # Surrogate prediction, mapped back to the original target scale.
        U_norm = self.surrogateNN(x=torch.cat((s_norm, t_norm), dim=2))
        U = inverse_standardize_tensor(U_norm, mean=self.scaler_targets[0], std=self.scaler_targets[1])

        # Automatic differentiation: partial derivatives of U with respect to t and s.
        grad_outputs = torch.ones_like(U)
        U_t = torch.autograd.grad(
            U, t,
            grad_outputs=grad_outputs,
            create_graph=True,
            retain_graph=True,
            only_inputs=True
        )[0]
        U_s = torch.autograd.grad(
            U, s,
            grad_outputs=grad_outputs,
            create_graph=True,
            retain_graph=True,
            only_inputs=True
        )[0]

        # DeepHPM dynamics G, the PDE residual F = U_t - G, and its time derivative F_t.
        G = eval('self.dynamicalNN(x=' + self.inputs_dynamical + ')')
        F = U_t - G
        F_t = torch.autograd.grad(
            F, t,
            grad_outputs=grad_outputs,
            create_graph=True,
            retain_graph=True,
            only_inputs=True
        )[0]
        self.U_t = U_t

        return U, F, F_t
Build the model and train it.
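In the AdpBal variants, the three loss terms reported below (the data loss Loss_U on the surrogate output, the physics-residual loss Loss_F, and the loss Loss_F_t on its time derivative) are balanced by the uncertainty-based adaptive weighting mentioned in the background. A common way to realize such weighting is to learn one log-variance parameter per task; the sketch below follows that idea (an illustration in the spirit of the paper, not a copy of the author's training loop) and also explains why the total Loss in the log can become negative even though every individual term stays positive:

import torch
import torch.nn as nn

class AdaptiveWeightedLoss(nn.Module):
    """Uncertainty-based multi-task weighting:
    total = sum_i [exp(-s_i) * L_i + s_i] with learnable log-variances s_i.
    The additive s_i terms can drive the total below zero as the L_i shrink."""
    def __init__(self, num_tasks=3):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for i, loss_i in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * loss_i + self.log_vars[i]
        return total

# The log-variances are optimized jointly with the network weights, e.g.
# optimizer = torch.optim.Adam(list(model.parameters()) + list(criterion.parameters()), lr=1e-3)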
Epoch: 100, Period: 1, Loss: 3.54421, Loss_U: 0.05309, Loss_F: 5.35491, Loss_F_t: 0.00001
Epoch: 100, Period: 2, Loss: 2.72425, Loss_U: 0.05090, Loss_F: 4.54454, Loss_F_t: 0.00001
Epoch: 200, Period: 1, Loss: 0.22462, Loss_U: 0.03490, Loss_F: 2.50119, Loss_F_t: 0.00000
Epoch: 200, Period: 2, Loss: -0.35281, Loss_U: 0.02656, Loss_F: 1.92590, Loss_F_t: 0.00000
Epoch: 300, Period: 1, Loss: -1.60316, Loss_U: 0.02232, Loss_F: 1.06536, Loss_F_t: 0.00000
Epoch: 300, Period: 2, Loss: -1.58062, Loss_U: 0.02300, Loss_F: 1.08956, Loss_F_t: 0.00000
Epoch: 400, Period: 1, Loss: -2.38902, Loss_U: 0.01334, Loss_F: 0.68319, Loss_F_t: 0.00000
Epoch: 400, Period: 2, Loss: -2.45619, Loss_U: 0.01546, Loss_F: 0.61347, Loss_F_t: 0.00000
Epoch: 500, Period: 1, Loss: -3.09120, Loss_U: 0.01384, Loss_F: 0.37828, Loss_F_t: 0.00000
Epoch: 500, Period: 2, Loss: -3.12405, Loss_U: 0.01135, Loss_F: 0.35187, Loss_F_t: 0.00000
Epoch: 600, Period: 1, Loss: -3.66069, Loss_U: 0.01055, Loss_F: 0.22253, Loss_F_t: 0.00000
Epoch: 600, Period: 2, Loss: -3.61110, Loss_U: 0.01082, Loss_F: 0.27331, Loss_F_t: 0.00000
Epoch: 700, Period: 1, Loss: -4.15602, Loss_U: 0.00735, Loss_F: 0.14916, Loss_F_t: 0.00000
Epoch: 700, Period: 2, Loss: -4.16911, Loss_U: 0.00797, Loss_F: 0.13669, Loss_F_t: 0.00000
Epoch: 800, Period: 1, Loss: -4.62212, Loss_U: 0.00726, Loss_F: 0.10233, Loss_F_t: 0.00000
Epoch: 800, Period: 2, Loss: -4.61436, Loss_U: 0.00770, Loss_F: 0.11030, Loss_F_t: 0.00000
Epoch: 900, Period: 1, Loss: -5.07331, Loss_U: 0.00806, Loss_F: 0.07063, Loss_F_t: 0.00000
Epoch: 900, Period: 2, Loss: -5.08290, Loss_U: 0.00799, Loss_F: 0.06397, Loss_F_t: 0.00000
Epoch: 1000, Period: 1, Loss: -5.53107, Loss_U: 0.00604, Loss_F: 0.04853, Loss_F_t: 0.00000
Epoch: 1000, Period: 2, Loss: -5.53281, Loss_U: 0.00575, Loss_F: 0.05021, Loss_F_t: 0.00000
Epoch: 1100, Period: 1, Loss: -5.98240, Loss_U: 0.00618, Loss_F: 0.02757, Loss_F_t: 0.00000
Epoch: 1100, Period: 2, Loss: -5.97554, Loss_U: 0.00707, Loss_F: 0.03050, Loss_F_t: 0.00000
Epoch: 1200, Period: 1, Loss: -6.42348, Loss_U: 0.00655, Loss_F: 0.01688, Loss_F_t: 0.00000
Epoch: 1200, Period: 2, Loss: -6.41990, Loss_U: 0.00696, Loss_F: 0.01904, Loss_F_t: 0.00000
Epoch: 1300, Period: 1, Loss: -6.86935, Loss_U: 0.00569, Loss_F: 0.01294, Loss_F_t: 0.00000
Epoch: 1300, Period: 2, Loss: -6.85872, Loss_U: 0.00711, Loss_F: 0.01270, Loss_F_t: 0.00000
Epoch: 1400, Period: 1, Loss: -7.31777, Loss_U: 0.00546, Loss_F: 0.00643, Loss_F_t: 0.00000
Epoch: 1400, Period: 2, Loss: -7.32097, Loss_U: 0.00535, Loss_F: 0.00660, Loss_F_t: 0.00000
Epoch: 1500, Period: 1, Loss: -7.75563, Loss_U: 0.00582, Loss_F: 0.00441, Loss_F_t: 0.00000
Epoch: 1500, Period: 2, Loss: -7.76390, Loss_U: 0.00537, Loss_F: 0.00448, Loss_F_t: 0.00000
Epoch: 1600, Period: 1, Loss: -8.20879, Loss_U: 0.00524, Loss_F: 0.00274, Loss_F_t: 0.00000
Epoch: 1600, Period: 2, Loss: -8.21008, Loss_U: 0.00534, Loss_F: 0.00219, Loss_F_t: 0.00000
Epoch: 1700, Period: 1, Loss: -8.63017, Loss_U: 0.00650, Loss_F: 0.00123, Loss_F_t: 0.00000
Epoch: 1700, Period: 2, Loss: -8.65981, Loss_U: 0.00513, Loss_F: 0.00119, Loss_F_t: 0.00000
Epoch: 1800, Period: 1, Loss: -9.09943, Loss_U: 0.00534, Loss_F: 0.00068, Loss_F_t: 0.00000
Epoch: 1800, Period: 2, Loss: -9.08829, Loss_U: 0.00589, Loss_F: 0.00065, Loss_F_t: 0.00000
Epoch: 1900, Period: 1, Loss: -9.55046, Loss_U: 0.00516, Loss_F: 0.00032, Loss_F_t: 0.00000
Epoch: 1900, Period: 2, Loss: -9.54120, Loss_U: 0.00555, Loss_F: 0.00034, Loss_F_t: 0.00000
Epoch: 2000, Period: 1, Loss: -9.98005, Loss_U: 0.00563, Loss_F: 0.00015, Loss_F_t: 0.00000
Epoch: 2000, Period: 2, Loss: -9.97842, Loss_U: 0.00574, Loss_F: 0.00015, Loss_F_t: 0.00000
Test the trained model (the predictions would normally be written to the results folder; because of restrictions on writing to the dataset, that step is omitted here, and the plotting section below uses the result data precomputed by the original author).
Set up the plotting format.
Load the precomputed result data and compare the SOH prediction performance of the different models for case A.
To better reflect the predictive ability of each model, we attach a data table from the paper.
The SOH values in the table are each model's RMSPE (Root Mean Square Percentage Error), in %. The table shows that, compared with the baseline, the models with PINN and AdpBal achieve a clear improvement on cell group B, while the improvement on group A is less pronounced. One possible reason is that the number of layers and neurons that is optimal for the baseline is not necessarily optimal for the other models.
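For reference, the metric can be computed as follows (a simple sketch, not the author's exact evaluation code):

import numpy as np

def rmspe(y_true, y_pred):
    # Root Mean Square Percentage Error, reported in %.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean(((y_true - y_pred) / y_true) ** 2)) * 100.0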
Predicting RUL with the same models
As with the SOH prediction, we only demonstrate the DeepHPM-AdpBal model for case A.
Load the configuration data and the dataset for case A:
Build the model and train it.
Epoch: 100, Period: 1, Loss: 2155640.50000, Loss_U: 6290418.00000, Loss_F: 73.68130, Loss_F_t: 0.05357
Epoch: 100, Period: 2, Loss: 2275743.75000, Loss_U: 6642410.00000, Loss_F: 59.97404, Loss_F_t: 0.01389
Epoch: 200, Period: 1, Loss: 1288396.62500, Loss_U: 3917336.50000, Loss_F: 36.70158, Loss_F_t: 0.00134
Epoch: 200, Period: 2, Loss: 1474424.12500, Loss_U: 4483811.00000, Loss_F: 40.06873, Loss_F_t: 0.00145
Epoch: 300, Period: 1, Loss: 948385.87500, Loss_U: 2992287.00000, Loss_F: 25.10822, Loss_F_t: 0.00252
Epoch: 300, Period: 2, Loss: 959551.43750, Loss_U: 3028063.25000, Loss_F: 24.48299, Loss_F_t: 0.00008
Epoch: 400, Period: 1, Loss: 805611.75000, Loss_U: 2632619.00000, Loss_F: 19.18599, Loss_F_t: 0.00012
Epoch: 400, Period: 2, Loss: 737998.12500, Loss_U: 2412090.50000, Loss_F: 17.17958, Loss_F_t: 0.00158
Epoch: 500, Period: 1, Loss: 687239.37500, Loss_U: 2323840.00000, Loss_F: 15.06810, Loss_F_t: 0.00218
Epoch: 500, Period: 2, Loss: 550802.81250, Loss_U: 1862801.12500, Loss_F: 15.90533, Loss_F_t: 0.00175
Epoch: 600, Period: 1, Loss: 537652.43750, Loss_U: 1880249.75000, Loss_F: 13.80499, Loss_F_t: 0.00092
Epoch: 600, Period: 2, Loss: 531793.75000, Loss_U: 1860071.50000, Loss_F: 14.24928, Loss_F_t: 0.00371
Epoch: 700, Period: 1, Loss: 481136.46875, Loss_U: 1738624.75000, Loss_F: 12.16540, Loss_F_t: 0.00325
Epoch: 700, Period: 2, Loss: 418739.34375, Loss_U: 1513391.37500, Loss_F: 13.07103, Loss_F_t: 0.00834
Epoch: 800, Period: 1, Loss: 395370.90625, Loss_U: 1476887.50000, Loss_F: 12.49607, Loss_F_t: 0.00267
Epoch: 800, Period: 2, Loss: 386239.62500, Loss_U: 1443020.75000, Loss_F: 11.59053, Loss_F_t: 0.00013
Epoch: 900, Period: 1, Loss: 402340.93750, Loss_U: 1553259.75000, Loss_F: 11.49965, Loss_F_t: 0.01686
Epoch: 900, Period: 2, Loss: 335048.90625, Loss_U: 1293691.00000, Loss_F: 10.92959, Loss_F_t: 0.00133
Epoch: 1000, Period: 1, Loss: 329994.40625, Loss_U: 1317770.62500, Loss_F: 11.06236, Loss_F_t: 0.00424
Epoch: 1000, Period: 2, Loss: 327248.78125, Loss_U: 1307022.62500, Loss_F: 10.45050, Loss_F_t: 0.08518
Epoch: 1100, Period: 1, Loss: 343784.25000, Loss_U: 1420629.75000, Loss_F: 9.68945, Loss_F_t: 0.00181
Epoch: 1100, Period: 2, Loss: 297456.43750, Loss_U: 1229404.25000, Loss_F: 9.89140, Loss_F_t: 0.00578
Epoch: 1200, Period: 1, Loss: 251131.98438, Loss_U: 1075451.75000, Loss_F: 9.86708, Loss_F_t: 0.18221
Epoch: 1200, Period: 2, Loss: 266361.40625, Loss_U: 1140891.87500, Loss_F: 10.20467, Loss_F_t: 0.00032
Epoch: 1300, Period: 1, Loss: 261435.43750, Loss_U: 1160815.75000, Loss_F: 9.87165, Loss_F_t: 0.00041
Epoch: 1300, Period: 2, Loss: 295724.90625, Loss_U: 1313311.62500, Loss_F: 9.74287, Loss_F_t: 0.00062
Epoch: 1400, Period: 1, Loss: 260014.26562, Loss_U: 1201055.75000, Loss_F: 9.99243, Loss_F_t: 0.00319
Epoch: 1400, Period: 2, Loss: 265476.93750, Loss_U: 1226542.25000, Loss_F: 9.52675, Loss_F_t: 0.00015
Epoch: 1500, Period: 1, Loss: 222812.18750, Loss_U: 1069931.62500, Loss_F: 7.90012, Loss_F_t: 0.00019
Epoch: 1500, Period: 2, Loss: 238479.92188, Loss_U: 1145393.87500, Loss_F: 8.63923, Loss_F_t: 0.00014
Epoch: 1600, Period: 1, Loss: 347469.09375, Loss_U: 1739058.50000, Loss_F: 24.58323, Loss_F_t: 0.00046
Epoch: 1600, Period: 2, Loss: 356266.31250, Loss_U: 1783505.25000, Loss_F: 29.40131, Loss_F_t: 0.00213
Epoch: 1700, Period: 1, Loss: 199943.12500, Loss_U: 1050412.37500, Loss_F: 9.86898, Loss_F_t: 0.00145
Epoch: 1700, Period: 2, Loss: 204533.84375, Loss_U: 1074733.00000, Loss_F: 10.78495, Loss_F_t: 0.10656
Epoch: 1800, Period: 1, Loss: 187425.29688, Loss_U: 1027877.43750, Loss_F: 7.17488, Loss_F_t: 0.00290
Epoch: 1800, Period: 2, Loss: 174570.73438, Loss_U: 957541.00000, Loss_F: 7.40506, Loss_F_t: 0.20739
Epoch: 1900, Period: 1, Loss: 165579.81250, Loss_U: 948095.43750, Loss_F: 8.51260, Loss_F_t: 0.13792
Epoch: 1900, Period: 2, Loss: 186771.75000, Loss_U: 1069705.00000, Loss_F: 8.10844, Loss_F_t: 0.01623
Epoch: 2000, Period: 1, Loss: 163509.92188, Loss_U: 979782.25000, Loss_F: 10.24785, Loss_F_t: 0.00151
Epoch: 2000, Period: 2, Loss: 177722.10938, Loss_U: 1065192.37500, Loss_F: 11.67247, Loss_F_t: 0.00048
Test the trained model.
Load the precomputed result data and compare the RUL prediction performance of the different models for case A.
To better reflect the predictive ability of each model, we attach a data table from the paper.
The values in the table are the RMSE of each model's RUL prediction. As before, the models with PINN and AdpBal achieve a clear improvement over the baseline on cell group B, while the improvement on group A is less pronounced. The same trend appears for both SOH and RUL prediction, which supports the generality of the proposed method from SOH estimation to RUL prediction.
KaiqiYang