AI + Battery Cells | Predicting the SOH and RUL of Lithium Iron Phosphate (LFP) Batteries with a PINN-Based Fusion Model
Tags: AI, PINN, RUL
JiaweiMiao
Published 2023-08-29
Recommended image: Third-party software:d2l-ai:pytorch
Recommended machine type: c12_m46_1 * NVIDIA GPU B
Dataset: PINN-SOH-RUL(v1)


Quick start: click the 开始连接 (Start Connection) button above and select the d2l-ai:pytorch image with any GPU machine type to begin.


Background

Many models have been established to characterize the aging process of lithium-ion (Li-ion) batteries for prognostics and health management (PHM). Existing empirical or physical models can reveal important information about the aging dynamics; however, there is no general and flexible way to fuse the information these models represent. The Physics-Informed Neural Network (PINN) is an effective tool for fusing empirical or physical dynamic models with data-driven models. To make full use of the various information sources, the paper proposes a PINN-based model-fusion scheme. This is done by developing a semi-empirical, semi-physical partial differential equation (PDE) to model the degradation dynamics of Li-ion batteries. When little prior knowledge of the dynamics is available, the paper uses a data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamics. The dynamics information is then fused with the information mined by a surrogate neural network within the PINN framework. In addition, an uncertainty-based adaptive weighting method is adopted to balance the multiple learning tasks when training the PINN. The proposed method is verified on a public cycling dataset of lithium iron phosphate (LFP)/graphite cells for SOH (state of health) and RUL (remaining useful life) prediction.

Model

A PDE-based dynamic model, whether built from prior knowledge or approximated by a neural network, can be difficult to solve. Exploiting the well-known ability of neural networks as universal function approximators, we employ another neural network, parameterized by Φ, to approximate the hidden solution of the system, u(x, t; Φ). This network is also called the surrogate network. With this neural-network solver, none of the involved partial derivatives needs to be accessed or approximated directly. The fusion of the surrogate network with an explicit PDE model or a DeepHPM forms the PINN.

A typical PINN consists of three modules: the dynamic model, the surrogate neural network, and the automatic differentiator. The dynamic model G(x, t, u, u_x, u_xx, u_xxx, ...; θ) distills the mechanism governing the degradation dynamics and can be approximated by a DeepHPM. The surrogate network u(x, t; Φ) approximates the hidden solution u(x, t) of the dynamic model, and the automatic differentiator (AutoDiff) computes the values of all the partial derivatives fed into the dynamic model.
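
Written out, the two quantities the training balances (returned as U and F by the code below) are the surrogate prediction and the PDE residual:

$$u_t = G\left(x, t, u, u_x, u_{xx}, \dots; \theta\right), \qquad F(x, t) := u_t - G$$

Driving F toward zero forces the surrogate to respect the (prescribed or learned) dynamics; as the training logs later show, the loss additionally penalizes the time derivative F_t of the residual.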

The frameworks involved in this case are a data-driven plain neural network (the baseline), a PINN with the Verhulst equation as its dynamic model (PINN-Verhulst), and a PINN with a DeepHPM as its dynamic model (PINN-DeepHPM), shown in Figures 1, 2, and 3 respectively. For a clearer comparison of the learning frameworks, the plain neural network in the baseline is also denoted as a surrogate network, even though there is no PDE to solve there. The basic structure of the surrogate network is shown in Figure 4. Its hidden layers are all fully connected (FC) layers, and the hyperbolic tangent is used as the activation function because of its differentiability. Unless otherwise specified, all surrogate networks in Figures 1-3 use this setting, and the DeepHPM uses the same structure as the surrogate network.
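
For reference, the Verhulst model named above is the logistic growth equation. With u the degradation state, r the intrinsic rate, and K the carrying capacity (the symbols here are the textbook ones; the paper's exact parameterization may differ):

$$\frac{\mathrm{d}u}{\mathrm{d}t} = r\,u\left(1 - \frac{u}{K}\right)$$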

[Figures 1-4: the baseline, PINN-Verhulst, and PINN-DeepHPM learning frameworks, and the surrogate-network structure (PINN1.jpg, PInn2.jpg, PINN3.jpg, PINN4.jpg)]

Dataset

The dataset consists of three batches of commercial LFP/graphite cells manufactured by A123 Systems, with a total of 124 cells cycled to failure under dozens of different fast-charging protocols. The nominal capacity of each cell is 1.1 Ah, so a 1C charge/discharge rate corresponds to a current of 1.1 A. Each charging protocol is labeled with a string of the form "C1(Q1)-C2": the cell is first charged with current C1 from 0% SoC to SoC Q1 (unit: %); at Q1 the charging current switches to C2, which charges the cell to 80% SoC. All cells are then charged from 80% to 100% SoC in constant-current constant-voltage (CC-CV) form, with a 3.6 V upper cutoff potential and a C/50 cutoff current. Likewise, all cells are discharged in CC-CV form at 4C to a 2.0 V lower cutoff potential with a C/50 cutoff current.
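
As a quick illustration (not part of the original notebook), a protocol label such as "5.6C(36%)-4.4C" can be parsed as follows; the label format comes from the dataset description above, while the helper itself is hypothetical:

import re

def parse_protocol(label: str):
    """Parse a 'C1(Q1)-C2' fast-charging label, e.g. '5.6C(36%)-4.4C'."""
    m = re.fullmatch(r'([\d.]+)C\(([\d.]+)%\)-([\d.]+)C', label)
    if m is None:
        raise ValueError(f'Unrecognized protocol label: {label}')
    c1, q1, c2 = float(m.group(1)), float(m.group(2)), float(m.group(3))
    nominal_ah = 1.1                                      # nominal cell capacity (Ah)
    return {
        'C1_rate': c1, 'C1_current_A': c1 * nominal_ah,   # first charging step
        'Q1_percent': q1,                                 # switch-over SoC
        'C2_rate': c2, 'C2_current_A': c2 * nominal_ah,   # second charging step
    }

print(parse_protocol('5.6C(36%)-4.4C'))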

To better capture the information in the data, we preprocess the dataset to obtain quantities we consider more indicative of the SOH state, and use them as inputs. These include the two parameters of a quadratic fit to the discharge-capacity difference curve over the 2.7-3.3 V voltage window, the average temperature, the charging time, and several other quantities. The public SeversonBattery dataset is used. For SOH and RUL, two batches (A, B) and three batches (A, B, C) of cells respectively are used for separate training and prediction.
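
A minimal sketch of this kind of feature extraction (not the original preprocessing code), assuming two discharge-capacity curves Q(V) already interpolated onto a common voltage grid; the array contents below are placeholders:

import numpy as np

# Placeholder discharge-capacity curves Q(V) for an early and a later cycle,
# interpolated onto a common voltage grid (assumed inputs).
v_grid = np.linspace(2.7, 3.3, 200)            # fitting window from the text (V)
q_early = np.linspace(1.1, 0.0, 200)           # placeholder Q(V), early cycle (Ah)
q_late = q_early - 0.02 * (v_grid - 2.7)       # placeholder Q(V), later cycle (Ah)

delta_q = q_late - q_early                     # capacity-difference curve dQ(V)
a, b, c = np.polyfit(v_grid, delta_q, deg=2)   # quadratic fit a*V^2 + b*V + c
features = [a, b]                              # the two fit parameters used as inputs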

First, we demonstrate the SOH prediction for case A using the DeepHPM-AdpBal model (i.e., DeepHPM with the adaptive weighting method). For the Baseline, DeepHPM-Sum, and Verhulst models the training and testing code is largely the same, so to avoid repetition it is not shown here; if you are interested, you can download the dataset used by this notebook and study it in detail. After training and testing, comparison plots of the models, matching those in the paper, are produced.

The baseline mentioned in the paper is the result of a plain neural network on the same data.

This notebook is ported from GitHub: WenPengfei0823/PINN-Battery-Prognostics. The data and model algorithms come from the paper Fusing Models for Prognostics and Health Management of Lithium-Ion Batteries Based on Physics-Informed Neural Networks.

[1]
!pip install -U scikit-learn
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting scikit-learn
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/bd/05/e561bc99a615b5c099c7a9355409e5e57c525a108f1c2e156abb005b90a6/scikit_learn-1.0.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.8 MB)
     |████████████████████████████████| 24.8 MB 14.4 MB/s eta 0:00:01
Collecting joblib>=0.11
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/10/40/d551139c85db202f1f384ba8bcf96aca2f329440a844f924c8a0040b6d02/joblib-1.3.2-py3-none-any.whl (302 kB)
     |████████████████████████████████| 302 kB 49.2 MB/s eta 0:00:01
Requirement already satisfied: scipy>=1.1.0 in /opt/miniconda/lib/python3.7/site-packages (from scikit-learn) (1.7.3)
Collecting threadpoolctl>=2.0.0
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/61/cf/6e354304bcb9c6413c4e02a747b600061c21d38ba51e7e544ac7bc66aecc/threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Requirement already satisfied: numpy>=1.14.6 in /opt/miniconda/lib/python3.7/site-packages (from scikit-learn) (1.21.5)
Installing collected packages: threadpoolctl, joblib, scikit-learn
Successfully installed joblib-1.3.2 scikit-learn-1.0.2 threadpoolctl-3.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[2]
import numpy as np
import torch
from torch import optim
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import os
import sys

sys.path.append('/bohr/PINN-SOH-RUL-9onb/v1/PINN')

import functions as func

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

Load the settings data for case A:

The authors determined the optimal number of layers and neurons by studying the baseline model, and apply the same values to this PINN-DeepHPM model.

[3]
settings = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Experiments/Settings/settings_SoH_CaseA.pth')
seq_len = 1
perc_val = 0.2
num_rounds = 1 # or 5
batch_size = settings['batch_size'] # 1024
num_epoch = settings['num_epoch'] # 2000
num_layers = settings['num_layers'] # [2]
num_neurons = settings['num_neurons'] # [128]
inputs_lib_dynamical = settings['inputs_lib_dynamical'] # ['s_norm, t_norm']
inputs_dim_lib_dynamical = settings['inputs_dim_lib_dynamical'] # ['inputs_dim']

addr = '/bohr/PINN-SOH-RUL-9onb/v1/PINN/SeversonBattery.mat'
data = func.SeversonBattery(addr, seq_len=seq_len)

metric_mean = dict()
metric_std = dict()
metric_mean['train'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_mean['val'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_mean['test'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['train'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['val'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['test'] = np.zeros((len(inputs_lib_dynamical), 1))
Some of the func helpers used here come from the original author's function.py. For details, see: function

Here we show the DeepHPM model class as an example:

class DeepHPMNN(nn.Module):

    def __init__(self, seq_len, inputs_dim, outputs_dim, layers, scaler_inputs, scaler_targets,
                 inputs_dynamical, inputs_dim_dynamical):
        super(DeepHPMNN, self).__init__()
        self.seq_len, self.inputs_dim, self.outputs_dim = seq_len, inputs_dim, outputs_dim
        self.scaler_inputs, self.scaler_targets = scaler_inputs, scaler_targets

        # Wrap multiple dynamic-model inputs into a torch.cat expression,
        # which is evaluated later in forward()
        if len(inputs_dynamical.split(',')) <= 1:
            self.inputs_dynamical = inputs_dynamical
        else:
            self.inputs_dynamical = 'torch.cat((' + inputs_dynamical + '), dim=2)'
        self.inputs_dim_dynamical = eval(inputs_dim_dynamical)

        # Surrogate network: approximates the hidden solution u(x, t; Phi)
        self.surrogateNN = Neural_Net(
            seq_len=self.seq_len,
            inputs_dim=self.inputs_dim,
            outputs_dim=self.outputs_dim,
            layers=layers
        )

        # DeepHPM network: approximates the governing dynamics G
        self.dynamicalNN = Neural_Net(
            seq_len=self.seq_len,
            inputs_dim=self.inputs_dim_dynamical,
            outputs_dim=1,
            layers=layers
        )
def forward(self, inputs):

    # Split the inputs into the feature variables s and the time variable t
    s = inputs[:, :, 0: self.inputs_dim - 1]
    t = inputs[:, :, self.inputs_dim - 1:]

    s.requires_grad_(True)
    s_norm, _, _ = standardize_tensor(s, mode='transform', mean=self.scaler_inputs[0][0: self.inputs_dim - 1],
                                      std=self.scaler_inputs[1][0: self.inputs_dim - 1])
    s_norm.requires_grad_(True)

    t.requires_grad_(True)
    t_norm, _, _ = standardize_tensor(t, mode='transform', mean=self.scaler_inputs[0][self.inputs_dim - 1:],
                                      std=self.scaler_inputs[1][self.inputs_dim - 1:])
    t_norm.requires_grad_(True)

    # Surrogate network predicts the normalized hidden solution
    U_norm = self.surrogateNN(x=torch.cat((s_norm, t_norm), dim=2))

    U = inverse_standardize_tensor(U_norm, mean=self.scaler_targets[0], std=self.scaler_targets[1])

    # AutoDiff module: partial derivatives of U w.r.t. t and s
    grad_outputs = torch.ones_like(U)
    U_t = torch.autograd.grad(
        U, t,
        grad_outputs=grad_outputs,
        create_graph=True,
        retain_graph=True,
        only_inputs=True
    )[0]

    U_s = torch.autograd.grad(
        U, s,
        grad_outputs=grad_outputs,
        create_graph=True,
        retain_graph=True,
        only_inputs=True
    )[0]

    # Dynamic model G, evaluated on the input expression assembled in __init__
    G = eval('self.dynamicalNN(x=' + self.inputs_dynamical + ')')

    # PDE residual F = U_t - G and its time derivative F_t
    F = U_t - G
    F_t = torch.autograd.grad(
        F, t,
        grad_outputs=grad_outputs,
        create_graph=True,
        retain_graph=True,
        only_inputs=True
    )[0]

    self.U_t = U_t
    return U, F, F_t
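
Note the string-driven dispatch above: inputs_dynamical arrives as a string such as 's_norm, t_norm', the constructor wraps it into a torch.cat expression, and forward() evaluates it with eval(), so the same class supports different dynamic-model input sets. A minimal standalone illustration (the tensor shapes here are made up for the sketch):

import torch

inputs_dynamical = 's_norm, t_norm'                     # as stored in the settings
expr = 'torch.cat((' + inputs_dynamical + '), dim=2)'   # built in __init__

s_norm = torch.randn(4, 1, 6)   # (batch, seq_len, features); sizes assumed
t_norm = torch.randn(4, 1, 1)

x = eval(expr)                  # what forward() feeds into self.dynamicalNN
print(x.shape)                  # torch.Size([4, 1, 7])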

Build the model and train it.

[4]
for l in range(len(inputs_lib_dynamical)):
    inputs_dynamical, inputs_dim_dynamical = inputs_lib_dynamical[l], inputs_dim_lib_dynamical[l]
    layers = num_layers[0] * [num_neurons[0]]
    np.random.seed(1234)
    torch.manual_seed(1234)
    metric_rounds = dict()
    metric_rounds['train'] = np.zeros(num_rounds)
    metric_rounds['val'] = np.zeros(num_rounds)
    metric_rounds['test'] = np.zeros(num_rounds)
    for round in range(num_rounds):
        inputs_dict, targets_dict = func.create_chosen_cells(
            data,
            idx_cells_train=[91, 100],
            idx_cells_test=[124],
            perc_val=perc_val
        )
        inputs_train = inputs_dict['train'].to(device)
        inputs_val = inputs_dict['val'].to(device)
        inputs_test = inputs_dict['test'].to(device)
        targets_train = targets_dict['train'][:, :, 0:1].to(device)  # column 0 holds SOH
        targets_val = targets_dict['val'][:, :, 0:1].to(device)
        targets_test = targets_dict['test'][:, :, 0:1].to(device)

        inputs_dim = inputs_train.shape[2]
        outputs_dim = 1

        _, mean_inputs_train, std_inputs_train = func.standardize_tensor(inputs_train, mode='fit')
        _, mean_targets_train, std_targets_train = func.standardize_tensor(targets_train, mode='fit')

        train_set = func.TensorDataset(inputs_train, targets_train)  # J_train is a placeholder
        train_loader = DataLoader(
            train_set,
            batch_size=batch_size,
            shuffle=True,
            num_workers=0,
            drop_last=True
        )

        model = func.DeepHPMNN(
            seq_len=seq_len,
            inputs_dim=inputs_dim,
            outputs_dim=outputs_dim,
            layers=layers,
            scaler_inputs=(mean_inputs_train, std_inputs_train),
            scaler_targets=(mean_targets_train, std_targets_train),
            inputs_dynamical=inputs_dynamical,
            inputs_dim_dynamical=inputs_dim_dynamical
        ).to(device)

        # Learnable log-sigmas for the uncertainty-based adaptive weighting
        log_sigma_u = torch.randn((), requires_grad=True)
        log_sigma_f = torch.randn((), requires_grad=True)
        log_sigma_f_t = torch.randn((), requires_grad=True)

        criterion = func.My_loss(mode='AdpBal')

        params = ([p for p in model.parameters()] + [log_sigma_u] + [log_sigma_f] + [log_sigma_f_t])
        optimizer = optim.Adam(params, lr=settings['lr'])
        scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=settings['step_size'], gamma=settings['gamma'])
        model, results_epoch = func.train(
            num_epoch=num_epoch,
            batch_size=batch_size,
            train_loader=train_loader,
            num_slices_train=inputs_train.shape[0],
            inputs_val=inputs_val,
            targets_val=targets_val,
            model=model,
            optimizer=optimizer,
            scheduler=scheduler,
            criterion=criterion,
            log_sigma_u=log_sigma_u,
            log_sigma_f=log_sigma_f,
            log_sigma_f_t=log_sigma_f_t
        )

        model.eval()

        # Evaluate RMSPE on the capacity fade (1 - SOH)
        U_pred_train, F_pred_train, _ = model(inputs=inputs_train)
        U_pred_train = 1. - U_pred_train
        targets_train = 1. - targets_train
        RMSPE_train = torch.sqrt(torch.mean(((U_pred_train - targets_train) / targets_train) ** 2))

        U_pred_val, F_pred_val, _ = model(inputs=inputs_val)
        U_pred_val = 1. - U_pred_val
        targets_val = 1. - targets_val
        RMSPE_val = torch.sqrt(torch.mean(((U_pred_val - targets_val) / targets_val) ** 2))

        U_pred_test, F_pred_test, _ = model(inputs=inputs_test)
        U_pred_test = 1. - U_pred_test
        targets_test = 1. - targets_test
        RMSPE_test = torch.sqrt(torch.mean(((U_pred_test - targets_test) / targets_test) ** 2))

        metric_rounds['train'][round] = RMSPE_train.detach().cpu().numpy()
        metric_rounds['val'][round] = RMSPE_val.detach().cpu().numpy()
        metric_rounds['test'][round] = RMSPE_test.detach().cpu().numpy()

    metric_mean['train'][l] = np.mean(metric_rounds['train'])
    metric_mean['val'][l] = np.mean(metric_rounds['val'])
    metric_mean['test'][l] = np.mean(metric_rounds['test'])
    metric_std['train'][l] = np.std(metric_rounds['train'])
    metric_std['val'][l] = np.std(metric_rounds['val'])
    metric_std['test'][l] = np.std(metric_rounds['test'])
Epoch: 100, Period: 1, Loss: 3.54421, Loss_U: 0.05309, Loss_F: 5.35491, Loss_F_t: 0.00001
Epoch: 100, Period: 2, Loss: 2.72425, Loss_U: 0.05090, Loss_F: 4.54454, Loss_F_t: 0.00001
Epoch: 200, Period: 1, Loss: 0.22462, Loss_U: 0.03490, Loss_F: 2.50119, Loss_F_t: 0.00000
Epoch: 200, Period: 2, Loss: -0.35281, Loss_U: 0.02656, Loss_F: 1.92590, Loss_F_t: 0.00000
Epoch: 300, Period: 1, Loss: -1.60316, Loss_U: 0.02232, Loss_F: 1.06536, Loss_F_t: 0.00000
Epoch: 300, Period: 2, Loss: -1.58062, Loss_U: 0.02300, Loss_F: 1.08956, Loss_F_t: 0.00000
Epoch: 400, Period: 1, Loss: -2.38902, Loss_U: 0.01334, Loss_F: 0.68319, Loss_F_t: 0.00000
Epoch: 400, Period: 2, Loss: -2.45619, Loss_U: 0.01546, Loss_F: 0.61347, Loss_F_t: 0.00000
Epoch: 500, Period: 1, Loss: -3.09120, Loss_U: 0.01384, Loss_F: 0.37828, Loss_F_t: 0.00000
Epoch: 500, Period: 2, Loss: -3.12405, Loss_U: 0.01135, Loss_F: 0.35187, Loss_F_t: 0.00000
Epoch: 600, Period: 1, Loss: -3.66069, Loss_U: 0.01055, Loss_F: 0.22253, Loss_F_t: 0.00000
Epoch: 600, Period: 2, Loss: -3.61110, Loss_U: 0.01082, Loss_F: 0.27331, Loss_F_t: 0.00000
Epoch: 700, Period: 1, Loss: -4.15602, Loss_U: 0.00735, Loss_F: 0.14916, Loss_F_t: 0.00000
Epoch: 700, Period: 2, Loss: -4.16911, Loss_U: 0.00797, Loss_F: 0.13669, Loss_F_t: 0.00000
Epoch: 800, Period: 1, Loss: -4.62212, Loss_U: 0.00726, Loss_F: 0.10233, Loss_F_t: 0.00000
Epoch: 800, Period: 2, Loss: -4.61436, Loss_U: 0.00770, Loss_F: 0.11030, Loss_F_t: 0.00000
Epoch: 900, Period: 1, Loss: -5.07331, Loss_U: 0.00806, Loss_F: 0.07063, Loss_F_t: 0.00000
Epoch: 900, Period: 2, Loss: -5.08290, Loss_U: 0.00799, Loss_F: 0.06397, Loss_F_t: 0.00000
Epoch: 1000, Period: 1, Loss: -5.53107, Loss_U: 0.00604, Loss_F: 0.04853, Loss_F_t: 0.00000
Epoch: 1000, Period: 2, Loss: -5.53281, Loss_U: 0.00575, Loss_F: 0.05021, Loss_F_t: 0.00000
Epoch: 1100, Period: 1, Loss: -5.98240, Loss_U: 0.00618, Loss_F: 0.02757, Loss_F_t: 0.00000
Epoch: 1100, Period: 2, Loss: -5.97554, Loss_U: 0.00707, Loss_F: 0.03050, Loss_F_t: 0.00000
Epoch: 1200, Period: 1, Loss: -6.42348, Loss_U: 0.00655, Loss_F: 0.01688, Loss_F_t: 0.00000
Epoch: 1200, Period: 2, Loss: -6.41990, Loss_U: 0.00696, Loss_F: 0.01904, Loss_F_t: 0.00000
Epoch: 1300, Period: 1, Loss: -6.86935, Loss_U: 0.00569, Loss_F: 0.01294, Loss_F_t: 0.00000
Epoch: 1300, Period: 2, Loss: -6.85872, Loss_U: 0.00711, Loss_F: 0.01270, Loss_F_t: 0.00000
Epoch: 1400, Period: 1, Loss: -7.31777, Loss_U: 0.00546, Loss_F: 0.00643, Loss_F_t: 0.00000
Epoch: 1400, Period: 2, Loss: -7.32097, Loss_U: 0.00535, Loss_F: 0.00660, Loss_F_t: 0.00000
Epoch: 1500, Period: 1, Loss: -7.75563, Loss_U: 0.00582, Loss_F: 0.00441, Loss_F_t: 0.00000
Epoch: 1500, Period: 2, Loss: -7.76390, Loss_U: 0.00537, Loss_F: 0.00448, Loss_F_t: 0.00000
Epoch: 1600, Period: 1, Loss: -8.20879, Loss_U: 0.00524, Loss_F: 0.00274, Loss_F_t: 0.00000
Epoch: 1600, Period: 2, Loss: -8.21008, Loss_U: 0.00534, Loss_F: 0.00219, Loss_F_t: 0.00000
Epoch: 1700, Period: 1, Loss: -8.63017, Loss_U: 0.00650, Loss_F: 0.00123, Loss_F_t: 0.00000
Epoch: 1700, Period: 2, Loss: -8.65981, Loss_U: 0.00513, Loss_F: 0.00119, Loss_F_t: 0.00000
Epoch: 1800, Period: 1, Loss: -9.09943, Loss_U: 0.00534, Loss_F: 0.00068, Loss_F_t: 0.00000
Epoch: 1800, Period: 2, Loss: -9.08829, Loss_U: 0.00589, Loss_F: 0.00065, Loss_F_t: 0.00000
Epoch: 1900, Period: 1, Loss: -9.55046, Loss_U: 0.00516, Loss_F: 0.00032, Loss_F_t: 0.00000
Epoch: 1900, Period: 2, Loss: -9.54120, Loss_U: 0.00555, Loss_F: 0.00034, Loss_F_t: 0.00000
Epoch: 2000, Period: 1, Loss: -9.98005, Loss_U: 0.00563, Loss_F: 0.00015, Loss_F_t: 0.00000
Epoch: 2000, Period: 2, Loss: -9.97842, Loss_U: 0.00574, Loss_F: 0.00015, Loss_F_t: 0.00000
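
The criterion func.My_loss(mode='AdpBal') implements the uncertainty-based adaptive weighting; its exact definition lives in function.py. A minimal sketch of this style of multi-task weighting (after Kendall et al.), using the learnable log_sigma_u, log_sigma_f, log_sigma_f_t created above and assuming the following form rather than quoting the original:

import torch

def adaptive_balance_loss(loss_u, loss_f, loss_f_t,
                          log_sigma_u, log_sigma_f, log_sigma_f_t):
    """Sketch of an uncertainty-weighted multi-task loss (not the original
    My_loss). Each task is scaled by a learnable precision exp(-2*log_sigma)
    and regularized by +log_sigma; the +log_sigma terms explain why the
    total loss in the logs above can become negative."""
    return (torch.exp(-2. * log_sigma_u) * loss_u + log_sigma_u
            + torch.exp(-2. * log_sigma_f) * loss_f + log_sigma_f
            + torch.exp(-2. * log_sigma_f_t) * loss_f_t + log_sigma_f_t)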

Test the trained model (and write the predictions to the results folder; because of dataset write restrictions the write step is skipped here, and the plotting cells below use the result data precomputed by the original author).

[5]
model.eval()
inputs_test = inputs_dict['test'].to(device)
targets_test = targets_dict['test'][:, :, 1:].to(device)
U_pred_test, F_pred_test, _ = model(inputs=inputs_test)

results = dict()
results['U_true'] = targets_test.detach().cpu().numpy().squeeze()
results['U_pred'] = U_pred_test.detach().cpu().numpy().squeeze()
results['U_t_pred'] = model.U_t.detach().cpu().numpy().squeeze()
results['Cycles'] = inputs_test[:, :, -1:].detach().cpu().numpy().squeeze()
results['Epochs'] = np.arange(0, num_epoch)
#torch.save(results, 'Results/4 Presentation/RUL Prognostics/RUL_CaseA_Baseline.pth')
pass

Configure the plot style.

[6]
os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
plt.rcParams['font.sans-serif'] = 'Times New Roman'
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['axes.labelsize'] = 8
figsize_single = (3.5, 3.5 * 0.618)
代码
文本

Load the precomputed result data and compare the SOH prediction performance of the different models for case A.

[7]
results_SoH_CaseA_Baseline = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/SoH Estimation/SoH_CaseA_Baseline.pth')
results_SoH_CaseA_Verhulst_Sum = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/SoH Estimation/SoH_CaseA_Verhulst_Sum.pth')
results_SoH_CaseA_Verhulst_AdpBal = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/SoH Estimation/SoH_CaseA_Verhulst_AdpBal.pth')
results_SoH_CaseA_DeepHPM_Sum = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/SoH Estimation/SoH_CaseA_DeepHPM_Sum.pth')
results_SoH_CaseA_DeepHPM_AdpBal = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/SoH Estimation/SoH_CaseA_DeepHPM_AdpBal.pth')

plt.figure(figsize=figsize_single)
plt.plot(
    results_SoH_CaseA_Baseline['Cycles'],
    results_SoH_CaseA_Baseline['U_true'],
    linewidth=2,
    color=(238/255, 28/255, 37/255),
    label='Ground Truth'
)
plt.plot(
    results_SoH_CaseA_Baseline['Cycles'],
    results_SoH_CaseA_Baseline['U_pred'],
    linestyle='--',
    linewidth=2,
    color=(238/255, 28/255, 37/255),
    alpha=1.,
    label='Baseline'
)
plt.plot(
    results_SoH_CaseA_Verhulst_Sum['Cycles'],
    results_SoH_CaseA_Verhulst_Sum['U_pred'],
    linestyle='--',
    linewidth=2,
    color=(1/255, 168/255, 158/255),
    alpha=1.,
    label='PINN-Verhulst (Sum)'
)
plt.plot(
    results_SoH_CaseA_Verhulst_AdpBal['Cycles'],
    results_SoH_CaseA_Verhulst_AdpBal['U_pred'],
    linewidth=2,
    color=(1/255, 168/255, 158/255),
    alpha=1.,
    label='PINN-Verhulst (AdpBal)'
)
plt.plot(
    results_SoH_CaseA_DeepHPM_Sum['Cycles'],
    results_SoH_CaseA_DeepHPM_Sum['U_pred'],
    linestyle='--',
    linewidth=2,
    color=(0/255, 84/255, 165/255),
    alpha=1.,
    label='PINN-DeepHPM (Sum)'
)
plt.plot(
    results_SoH_CaseA_DeepHPM_AdpBal['Cycles'],
    results_SoH_CaseA_DeepHPM_AdpBal['U_pred'],
    linewidth=2,
    color=(0/255, 84/255, 165/255),
    alpha=1.,
    label='PINN-DeepHPM (AdpBal)'
)
plt.legend(prop={'size': 8})
plt.grid(True, linestyle="--", alpha=0.5)
plt.xlim(min(results_SoH_CaseA_Baseline['Cycles']), max(results_SoH_CaseA_Baseline['Cycles']))
plt.xlabel('Monitoring Time (Unit: Cycles)')
plt.ylabel('State of Health (Unit: 1)')
plt.show()

To better reflect the predictive ability of each model, we attach a results table from the paper below.

[Table from the paper (20230831-175640.jpg): SOH estimation errors of the different models]

The SOH values in the table are each model's RMSPE (root mean square percentage error), in %. Compared with the baseline, the models that introduce the PINN and AdpBal show a clear improvement on dataset B, while the effect on dataset A is less pronounced. This may be because the layer and neuron counts that are optimal for the baseline are not necessarily optimal for the other models.
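
For reference, the RMSPE reported here is computed in the training cell above on the capacity fade 1 − SOH:

$$\mathrm{RMSPE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\hat{y}_i - y_i}{y_i}\right)^2}$$

with the result expressed as a percentage in the table.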


Predicting RUL with the same model

As with the SOH prediction, we only demonstrate the DeepHPM-AdpBal model for case A.

Load the settings data and dataset for case A. Note that, unlike the SOH run, inputs_lib_dynamical now also includes U_norm (see the comments in the next cell), so the DeepHPM dynamic model takes the normalized solution itself as an additional input.

[8]
settings = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Experiments/Settings/settings_RUL_CaseA.pth')
seq_len = 1
perc_val = 0.2
num_rounds = 1
batch_size = settings['batch_size'] # 1024
num_epoch = settings['num_epoch'] # 2000
num_layers = settings['num_layers'] # 2
num_neurons = settings['num_neurons'] # 128
inputs_lib_dynamical = settings['inputs_lib_dynamical'] # ['s_norm, t_norm, U_norm']
inputs_dim_lib_dynamical = settings['inputs_dim_lib_dynamical'] # ['inputs_dim + 1']

addr = '/bohr/PINN-SOH-RUL-9onb/v1/PINN/SeversonBattery.mat'
data = func.SeversonBattery(addr, seq_len=seq_len)

metric_mean = dict()
metric_std = dict()
metric_mean['train'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_mean['val'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_mean['test'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['train'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['val'] = np.zeros((len(inputs_lib_dynamical), 1))
metric_std['test'] = np.zeros((len(inputs_lib_dynamical), 1))

Build the model and train it.

[9]
for l in range(len(inputs_lib_dynamical)):
    inputs_dynamical, inputs_dim_dynamical = inputs_lib_dynamical[l], inputs_dim_lib_dynamical[l]
    layers = num_layers[0] * [num_neurons[0]]
    np.random.seed(1234)
    torch.manual_seed(1234)
    metric_rounds = dict()
    metric_rounds['train'] = np.zeros(num_rounds)
    metric_rounds['val'] = np.zeros(num_rounds)
    metric_rounds['test'] = np.zeros(num_rounds)
    for round in range(num_rounds):
        inputs_dict, targets_dict = func.create_chosen_cells(
            data,
            idx_cells_train=[91, 100],
            idx_cells_test=[124],
            perc_val=perc_val
        )
        inputs_train = inputs_dict['train'].to(device)
        inputs_val = inputs_dict['val'].to(device)
        inputs_test = inputs_dict['test'].to(device)
        targets_train = targets_dict['train'][:, :, 1:].to(device)  # column 1 holds RUL
        targets_val = targets_dict['val'][:, :, 1:].to(device)
        targets_test = targets_dict['test'][:, :, 1:].to(device)

        inputs_dim = inputs_train.shape[2]
        outputs_dim = 1

        _, mean_inputs_train, std_inputs_train = func.standardize_tensor(inputs_train, mode='fit')
        _, mean_targets_train, std_targets_train = func.standardize_tensor(targets_train, mode='fit')

        train_set = func.TensorDataset(inputs_train, targets_train)  # J_train is a placeholder
        train_loader = DataLoader(
            train_set,
            batch_size=batch_size,
            shuffle=True,
            num_workers=0,
            drop_last=True
        )

        model = func.DeepHPMNN(
            seq_len=seq_len,
            inputs_dim=inputs_dim,
            outputs_dim=outputs_dim,
            layers=layers,
            scaler_inputs=(mean_inputs_train, std_inputs_train),
            scaler_targets=(mean_targets_train, std_targets_train),
            inputs_dynamical=inputs_dynamical,
            inputs_dim_dynamical=inputs_dim_dynamical
        ).to(device)

        # Learnable log-sigmas for the uncertainty-based adaptive weighting
        log_sigma_u = torch.randn((), requires_grad=True)
        log_sigma_f = torch.randn((), requires_grad=True)
        log_sigma_f_t = torch.randn((), requires_grad=True)

        criterion = func.My_loss(mode='AdpBal')

        params = ([p for p in model.parameters()] + [log_sigma_u] + [log_sigma_f] + [log_sigma_f_t])
        optimizer = optim.Adam(params, lr=settings['lr'])
        scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=settings['step_size'], gamma=settings['gamma'])
        model, results_epoch = func.train(
            num_epoch=num_epoch,
            batch_size=batch_size,
            train_loader=train_loader,
            num_slices_train=inputs_train.shape[0],
            inputs_val=inputs_val,
            targets_val=targets_val,
            model=model,
            optimizer=optimizer,
            scheduler=scheduler,
            criterion=criterion,
            log_sigma_u=log_sigma_u,
            log_sigma_f=log_sigma_f,
            log_sigma_f_t=log_sigma_f_t
        )

        model.eval()

        # RUL is evaluated with plain RMSE (in cycles)
        U_pred_train, F_pred_train, _ = model(inputs=inputs_train)
        RMSE_train = torch.sqrt(torch.mean((U_pred_train - targets_train) ** 2))

        U_pred_val, F_pred_val, _ = model(inputs=inputs_val)
        RMSE_val = torch.sqrt(torch.mean((U_pred_val - targets_val) ** 2))

        U_pred_test, F_pred_test, _ = model(inputs=inputs_test)
        RMSE_test = torch.sqrt(torch.mean((U_pred_test - targets_test) ** 2))

        metric_rounds['train'][round] = RMSE_train.detach().cpu().numpy()
        metric_rounds['val'][round] = RMSE_val.detach().cpu().numpy()
        metric_rounds['test'][round] = RMSE_test.detach().cpu().numpy()

    metric_mean['train'][l] = np.mean(metric_rounds['train'])
    metric_mean['val'][l] = np.mean(metric_rounds['val'])
    metric_mean['test'][l] = np.mean(metric_rounds['test'])
    metric_std['train'][l] = np.std(metric_rounds['train'])
    metric_std['val'][l] = np.std(metric_rounds['val'])
    metric_std['test'][l] = np.std(metric_rounds['test'])
Epoch: 100, Period: 1, Loss: 2155640.50000, Loss_U: 6290418.00000, Loss_F: 73.68130, Loss_F_t: 0.05357
Epoch: 100, Period: 2, Loss: 2275743.75000, Loss_U: 6642410.00000, Loss_F: 59.97404, Loss_F_t: 0.01389
Epoch: 200, Period: 1, Loss: 1288396.62500, Loss_U: 3917336.50000, Loss_F: 36.70158, Loss_F_t: 0.00134
Epoch: 200, Period: 2, Loss: 1474424.12500, Loss_U: 4483811.00000, Loss_F: 40.06873, Loss_F_t: 0.00145
Epoch: 300, Period: 1, Loss: 948385.87500, Loss_U: 2992287.00000, Loss_F: 25.10822, Loss_F_t: 0.00252
Epoch: 300, Period: 2, Loss: 959551.43750, Loss_U: 3028063.25000, Loss_F: 24.48299, Loss_F_t: 0.00008
Epoch: 400, Period: 1, Loss: 805611.75000, Loss_U: 2632619.00000, Loss_F: 19.18599, Loss_F_t: 0.00012
Epoch: 400, Period: 2, Loss: 737998.12500, Loss_U: 2412090.50000, Loss_F: 17.17958, Loss_F_t: 0.00158
Epoch: 500, Period: 1, Loss: 687239.37500, Loss_U: 2323840.00000, Loss_F: 15.06810, Loss_F_t: 0.00218
Epoch: 500, Period: 2, Loss: 550802.81250, Loss_U: 1862801.12500, Loss_F: 15.90533, Loss_F_t: 0.00175
Epoch: 600, Period: 1, Loss: 537652.43750, Loss_U: 1880249.75000, Loss_F: 13.80499, Loss_F_t: 0.00092
Epoch: 600, Period: 2, Loss: 531793.75000, Loss_U: 1860071.50000, Loss_F: 14.24928, Loss_F_t: 0.00371
Epoch: 700, Period: 1, Loss: 481136.46875, Loss_U: 1738624.75000, Loss_F: 12.16540, Loss_F_t: 0.00325
Epoch: 700, Period: 2, Loss: 418739.34375, Loss_U: 1513391.37500, Loss_F: 13.07103, Loss_F_t: 0.00834
Epoch: 800, Period: 1, Loss: 395370.90625, Loss_U: 1476887.50000, Loss_F: 12.49607, Loss_F_t: 0.00267
Epoch: 800, Period: 2, Loss: 386239.62500, Loss_U: 1443020.75000, Loss_F: 11.59053, Loss_F_t: 0.00013
Epoch: 900, Period: 1, Loss: 402340.93750, Loss_U: 1553259.75000, Loss_F: 11.49965, Loss_F_t: 0.01686
Epoch: 900, Period: 2, Loss: 335048.90625, Loss_U: 1293691.00000, Loss_F: 10.92959, Loss_F_t: 0.00133
Epoch: 1000, Period: 1, Loss: 329994.40625, Loss_U: 1317770.62500, Loss_F: 11.06236, Loss_F_t: 0.00424
Epoch: 1000, Period: 2, Loss: 327248.78125, Loss_U: 1307022.62500, Loss_F: 10.45050, Loss_F_t: 0.08518
Epoch: 1100, Period: 1, Loss: 343784.25000, Loss_U: 1420629.75000, Loss_F: 9.68945, Loss_F_t: 0.00181
Epoch: 1100, Period: 2, Loss: 297456.43750, Loss_U: 1229404.25000, Loss_F: 9.89140, Loss_F_t: 0.00578
Epoch: 1200, Period: 1, Loss: 251131.98438, Loss_U: 1075451.75000, Loss_F: 9.86708, Loss_F_t: 0.18221
Epoch: 1200, Period: 2, Loss: 266361.40625, Loss_U: 1140891.87500, Loss_F: 10.20467, Loss_F_t: 0.00032
Epoch: 1300, Period: 1, Loss: 261435.43750, Loss_U: 1160815.75000, Loss_F: 9.87165, Loss_F_t: 0.00041
Epoch: 1300, Period: 2, Loss: 295724.90625, Loss_U: 1313311.62500, Loss_F: 9.74287, Loss_F_t: 0.00062
Epoch: 1400, Period: 1, Loss: 260014.26562, Loss_U: 1201055.75000, Loss_F: 9.99243, Loss_F_t: 0.00319
Epoch: 1400, Period: 2, Loss: 265476.93750, Loss_U: 1226542.25000, Loss_F: 9.52675, Loss_F_t: 0.00015
Epoch: 1500, Period: 1, Loss: 222812.18750, Loss_U: 1069931.62500, Loss_F: 7.90012, Loss_F_t: 0.00019
Epoch: 1500, Period: 2, Loss: 238479.92188, Loss_U: 1145393.87500, Loss_F: 8.63923, Loss_F_t: 0.00014
Epoch: 1600, Period: 1, Loss: 347469.09375, Loss_U: 1739058.50000, Loss_F: 24.58323, Loss_F_t: 0.00046
Epoch: 1600, Period: 2, Loss: 356266.31250, Loss_U: 1783505.25000, Loss_F: 29.40131, Loss_F_t: 0.00213
Epoch: 1700, Period: 1, Loss: 199943.12500, Loss_U: 1050412.37500, Loss_F: 9.86898, Loss_F_t: 0.00145
Epoch: 1700, Period: 2, Loss: 204533.84375, Loss_U: 1074733.00000, Loss_F: 10.78495, Loss_F_t: 0.10656
Epoch: 1800, Period: 1, Loss: 187425.29688, Loss_U: 1027877.43750, Loss_F: 7.17488, Loss_F_t: 0.00290
Epoch: 1800, Period: 2, Loss: 174570.73438, Loss_U: 957541.00000, Loss_F: 7.40506, Loss_F_t: 0.20739
Epoch: 1900, Period: 1, Loss: 165579.81250, Loss_U: 948095.43750, Loss_F: 8.51260, Loss_F_t: 0.13792
Epoch: 1900, Period: 2, Loss: 186771.75000, Loss_U: 1069705.00000, Loss_F: 8.10844, Loss_F_t: 0.01623
Epoch: 2000, Period: 1, Loss: 163509.92188, Loss_U: 979782.25000, Loss_F: 10.24785, Loss_F_t: 0.00151
Epoch: 2000, Period: 2, Loss: 177722.10938, Loss_U: 1065192.37500, Loss_F: 11.67247, Loss_F_t: 0.00048

Test the trained model.

[10]
model.eval()
inputs_test = inputs_dict['test'].to(device)
targets_test = targets_dict['test'][:, :, 1:].to(device)
U_pred_test, F_pred_test, _ = model(inputs=inputs_test)

results = dict()
results['U_true'] = targets_test.detach().cpu().numpy().squeeze()
results['U_pred'] = U_pred_test.detach().cpu().numpy().squeeze()
results['U_t_pred'] = model.U_t.detach().cpu().numpy().squeeze()
results['Cycles'] = inputs_test[:, :, -1:].detach().cpu().numpy().squeeze()
results['Epochs'] = np.arange(0, num_epoch)
results['lambda_U'] = results_epoch['var_U']
results['lambda_F'] = results_epoch['var_F']
#torch.save(results, '/Results/4 Presentation/RUL Prognostics/RUL_CaseA_DeepHPM_AdpBal.pth')
pass

Load the precomputed result data and compare the RUL prediction performance of the different models for case A.

[11]
results_RUL_CaseA_Baseline = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/RUL Prognostics/RUL_CaseA_Baseline.pth')
results_RUL_CaseA_DeepHPM_Sum = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/RUL Prognostics/RUL_CaseA_DeepHPM_Sum.pth')
results_RUL_CaseA_DeepHPM_AdpBal = torch.load('/bohr/PINN-SOH-RUL-9onb/v1/PINN/Results/4 Presentation/RUL Prognostics/RUL_CaseA_DeepHPM_AdpBal.pth')

plt.figure(figsize=figsize_single)
plt.plot(
    results_RUL_CaseA_Baseline['Cycles'],
    results_RUL_CaseA_Baseline['U_true'],
    linewidth=2,
    color=(238/255, 28/255, 37/255),
    label='Ground Truth'
)
plt.plot(
    results_RUL_CaseA_Baseline['Cycles'],
    results_RUL_CaseA_Baseline['U_pred'],
    linestyle='--',
    linewidth=2,
    color=(238/255, 28/255, 37/255),
    alpha=1.,
    label='Baseline'
)
plt.plot(
    results_RUL_CaseA_DeepHPM_Sum['Cycles'],
    results_RUL_CaseA_DeepHPM_Sum['U_pred'],
    linestyle='--',
    linewidth=2,
    color=(0/255, 84/255, 165/255),
    alpha=1.,
    label='PINN-DeepHPM (Sum)'
)
plt.plot(
    results_RUL_CaseA_DeepHPM_AdpBal['Cycles'],
    results_RUL_CaseA_DeepHPM_AdpBal['U_pred'],
    linewidth=2,
    color=(0/255, 84/255, 165/255),
    alpha=1.,
    label='PINN-DeepHPM (AdpBal)'
)
plt.legend(prop={'size': 8})
plt.grid(True, linestyle="--", alpha=0.5)
plt.xlim(min(results_RUL_CaseA_Baseline['Cycles']), max(results_RUL_CaseA_Baseline['Cycles']))
plt.xlabel('Monitoring Time (Unit: Cycles)')
plt.ylabel('Remaining Useful Life (Unit: Cycles)')
plt.show()

To better reflect the predictive ability of each model, we again attach a results table from the paper below.

[Table from the paper (20230831-175640.jpg): RUL prediction errors of the different models]

The values in this table are each model's RMSE for RUL. As before, compared with the baseline, the models that introduce the PINN and AdpBal show a clear improvement on dataset B, while the effect on dataset A is less pronounced. This trend is the same for SOH and RUL prediction, which supports the generality of the proposed method from SoH estimation to RUL prognostics.

Comments

KaiqiYang (2023-08-30), on the cell "这里使用的一些func...":
Is the "function" link a notebook, or content inside the dataset?

KaiqiYang (2023-08-29), on the cell "for l in range(len(i...":
The main model-building code here is imported straight from the function file; isn't the core modeling part tucked away somewhere inconspicuous?

JiaweiMiao (author, 2023-08-29), in reply to KaiqiYang:
Yes, we can bring the main modeling functions over and show them here.