AI + Battery Cell | SOH prediction from partially unlabeled battery data with deep network models

Tags: AI, battery cell, SOH
Author: JiaweiMiao
Published 2023-08-28
Recommended image: Third-party software: d2l-ai:pytorch
Recommended machine type: c12_m46_1 * NVIDIA GPU B

BLG_SOH (v1)

SOH prediction from partially unlabeled battery data with deep network models

Quick start: click the Connect button at the top, select the d2l-ai:pytorch image and any GPU machine type to begin.

Background

Lithium-ion batteries (LIBs), with their high energy density, fast response, and environmental friendliness, have driven the adoption of renewable energy like never before. In 2020 the global lithium-ion battery market showed remarkable growth, with electric vehicles consuming as much as 142.8 GWh, and the market is expected to exceed 91.8 billion USD in the coming years. Despite this rapid adoption, long-term operation of lithium-ion batteries still faces serious challenges. As with other machinery, LIB components such as electrodes and separators degrade to varying degrees. These adverse effects cause capacity and power fade, which in turn endangers device safety. To ensure safe and efficient battery management, an accurate battery state of health (SOH) is essential.

Battery SOH can be defined in several ways, for example through usage time or the increase in internal resistance. Although these variables are easy to measure, battery degradation is also accompanied by capacity loss, and an accurate estimate of capacity loss feeds other battery-management tasks such as driving-range estimation and lifetime prediction. SOH defined as the ratio of current capacity to initial capacity has therefore attracted wide attention. However, measuring capacity requires fully charging or discharging the battery under a specific protocol, which is impractical for batteries in service. This motivates SOH estimation from routine operating data.
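As a minimal numeric illustration of this capacity-ratio definition (the numbers are made up):

```python
# SOH as the ratio of current capacity to initial (nominal) capacity.
initial_capacity_ah = 2.0    # hypothetical nominal capacity, Ah
current_capacity_ah = 1.76   # hypothetical measured capacity after aging, Ah
soh = current_capacity_ah / initial_capacity_ah
print(f"SOH = {soh:.2%}")    # SOH = 88.00%
```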

Although accurate state-of-health estimation has made remarkable progress, the time- and resource-consuming degradation experiments needed to generate labels for the target battery have held back the development of SOH estimation methods. In this notebook, the authors design a deep learning framework that estimates battery SOH without labels for the target battery. The framework integrates a swarm of deep neural networks equipped with domain adaptation to produce accurate estimates. The authors generated 71,588 samples from 65 commercial batteries made by 5 different manufacturers for cross-validation. Validation shows that the proposed framework keeps the absolute error below 3% for 89.4% of samples (below 5% for 98.9% of samples), with a maximum absolute error below 8.87%, all without target labels. This work highlights the power of deep learning in avoiding aging experiments and the promise of rapidly developing battery-management algorithms for new generations of batteries using only previous experimental data.

Model

The training process of the proposed framework integrates the independent sub-training of N DNNs (see panel a below). Without loss of generality, battery charging data are used as DNN inputs, because the charging process is usually controllable and occurs regularly. Specifically, the charging-capacity sequence within a voltage sampling window (the so-called partial charging curve) serves as the input to each DNN. Before sub-training starts, the partial charging curves of both the source and target domains are normalized by their nominal capacities. At the start of training, all sub-trainings are enabled and share the same training set, consisting of labeled source-domain samples and unlabeled target-domain samples. In addition, a trimming round is designed that discards some samples to form a new source domain with a balanced SOH distribution. The training process of the proposed framework terminates once all sub-trainings are complete.

soh1.jpg

Each DNN in the proposed framework shares the same hyperparameters (panel b) but is initialized with a different random seed using the He initializer. As DNN inputs, the partial charging curves of the source and target domains are first gridded at 10 mV voltage intervals to reduce the data burden. These samples are then fed into stacked one-dimensional (1D) CNN layers to extract feature vectors. The source-domain feature vectors are flattened and fed to a terminal fully connected (TFC) layer to produce SOH estimates, which are combined with the source-domain labels to compute the source-domain loss. Target-domain feature vectors, in contrast, are flattened and passed to a middle fully connected (MFC) layer that reconstructs them. These reconstructed feature vectors play two roles. The first is to quantify the domain gap together with the source-domain feature vectors. The second is to provide estimates for target-domain samples during the estimation process (treated as the pre-estimate of each trained DNN), where the reconstructed feature vectors are further fed into the same TFC used by the source domain for regression. By simultaneously minimizing the SOH estimation loss on source-domain samples and the gap between the TFC inputs of the two domains, each sub-training transfers source-domain knowledge to the target domain.

soh2.jpg
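The 10 mV gridding step can be sketched as follows (a standalone illustration with synthetic data, not the notebook's actual preprocessing; the 48-point grid length is an assumption inferred from FeatureNum = 1152 used later):

```python
import numpy as np

# Resample a partial charging curve onto a uniform 10 mV voltage grid.
rng = np.random.default_rng(0)
voltage = np.sort(rng.uniform(3.60, 4.08, 200))  # irregular voltage samples (V)
capacity = np.linspace(0.0, 1.0, 200) ** 0.9     # synthetic normalized charge capacity

v_grid = 3.60 + 0.01 * np.arange(48)             # 48 grid points, 10 mV apart
q_grid = np.interp(v_grid, voltage, capacity)    # capacity at each grid voltage
print(q_grid.shape)                              # (48,)
```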

Unlike training, the estimation process of the proposed framework selects a subset of the trained DNNs to participate in estimation (panel c). First, as described above, all DNNs are activated to estimate SOH in the target domain. Owing to training uncertainty, the estimation performance of the trained DNNs is expected to vary widely, so they can be regarded as pre-estimators. To produce a reliable final estimate, some unfavorable DNNs are eliminated by setting quartile thresholds on the mean and standard deviation of their estimates. The estimates of the selected DNNs are averaged to generate the final SOH estimate for the target-domain samples.

soh3.jpg
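The quartile selection rule can be sketched in isolation (synthetic per-DNN statistics; this mirrors the selection cell at the end of the notebook):

```python
import numpy as np

# Keep pre-estimators whose mean estimate is at or above the 75th percentile
# and whose estimate spread (std) is at or below the 25th percentile.
rng = np.random.default_rng(2)
means = rng.uniform(0.85, 0.95, 300)  # per-DNN mean of its SOH estimates
stds = rng.uniform(0.00, 0.05, 300)   # per-DNN std of its SOH estimates
Q3, Q1 = np.quantile(means, 0.75), np.quantile(stds, 0.25)
keep = (means >= Q3) & (stds <= Q1)
print(keep.sum(), "of", keep.size, "DNNs selected")
```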

This notebook is adapted from the database of the Joint Laboratory for Advanced Energy Storage Science and Application at Beijing Institute of Technology. The model and the figures above come from the paper: Deep learning to estimate lithium-ion battery state of health without additional degradation experiments.

First, load the necessary packages.

[1]
import torch
import math
import random
import numpy as np
import scipy.io as scio
import matplotlib.pyplot as plt
import torch.nn as nn

Load the data from the dataset.

[2]
BatteryDatabase=scio.loadmat('/bohr/blg-soh-6f3l/v1/BatteryDatabase.mat')

data_scr_CALCE=BatteryDatabase['data_scr_CALCE']
data_scr_GOTION=BatteryDatabase['data_scr_GOTION']
data_scr_PANASONIC=BatteryDatabase['data_scr_PANASONIC']
data_scr_KOKAM=BatteryDatabase['data_scr_KOKAM']
data_scr_SANYO=BatteryDatabase['data_scr_SANYO']

label_scr_CALCE=BatteryDatabase['label_scr_CALCE']
label_scr_GOTION=BatteryDatabase['label_scr_GOTION']
label_scr_PANASONIC=BatteryDatabase['label_scr_PANASONIC']
label_scr_KOKAM=BatteryDatabase['label_scr_KOKAM']
label_scr_SANYO=BatteryDatabase['label_scr_SANYO']

label_scr_CALCE=np.squeeze(label_scr_CALCE)
label_scr_GOTION=np.squeeze(label_scr_GOTION)
label_scr_PANASONIC=np.squeeze(label_scr_PANASONIC)
label_scr_KOKAM=np.squeeze(label_scr_KOKAM)
label_scr_SANYO=np.squeeze(label_scr_SANYO)

number_CALCE=BatteryDatabase['number_CALCE']
number_GOTION=BatteryDatabase['number_GOTION']
number_PANASONIC=BatteryDatabase['number_PANASONIC']
number_KOKAM=BatteryDatabase['number_KOKAM']
number_SANYO=BatteryDatabase['number_SANYO']

number_CALCE=np.squeeze(number_CALCE)
number_GOTION=np.squeeze(number_GOTION)
number_PANASONIC=np.squeeze(number_PANASONIC)
number_KOKAM=np.squeeze(number_KOKAM)
number_SANYO=np.squeeze(number_SANYO)

del BatteryDatabase

Show the structure of the dataset. Here the input data are the normalized voltage-sampled capacity curves, i.e., the output of the normalization step in panel a. The label is the SOH value corresponding to each curve. The figure below shows the input data.

[9]
print(data_scr_CALCE)
print(label_scr_CALCE)
# plot every 10th partial charging curve
for i in range(len(data_scr_CALCE)):
    if i % 10 == 0:
        plt.plot(data_scr_CALCE[i])
plt.xlabel('Voltage sampling points')
plt.ylabel('Capacity')
plt.show()

Generate batches of data and labels.

[4]
batch_size = 20  # batch size

def batch_generate(data, label):
    l = data.shape[0]
    batch_input = []
    batch_label = []
    for batch in range(batch_size):
        num = int(random.randint(1, int(l) - 1))
        start, end = num, num + 1
        x_np = data[start:end, :]
        y_np = label[start:end]
        batch_input.append(x_np)
        y_np = np.array(y_np)
        batch_label.append(y_np)
    batch_input = np.array(batch_input)
    x = torch.from_numpy(batch_input)
    y = batch_label
    y = np.array(y)
    y = y[:, :, np.newaxis]
    y = torch.from_numpy(y)
    return x, y
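A quick shape check of what batch_generate returns (a condensed, self-contained restatement with synthetic arrays; the logic is the same as above):

```python
import random
import numpy as np
import torch

batch_size = 20

def batch_generate(data, label):
    # Draw batch_size random single-sample slices and stack them.
    l = data.shape[0]
    batch_input, batch_label = [], []
    for _ in range(batch_size):
        num = random.randint(1, l - 1)
        batch_input.append(data[num:num + 1, :])
        batch_label.append(np.array(label[num:num + 1]))
    x = torch.from_numpy(np.array(batch_input))                    # (batch, 1, n_points)
    y = torch.from_numpy(np.array(batch_label)[:, :, np.newaxis])  # (batch, 1, 1)
    return x, y

data = np.random.rand(100, 48).astype(np.float32)   # 100 synthetic partial curves
label = np.random.rand(100).astype(np.float32)      # 100 synthetic SOH labels
x, y = batch_generate(data, label)
print(tuple(x.shape), tuple(y.shape))               # (20, 1, 48) (20, 1, 1)
```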

Build the deep learning framework that estimates battery SOH without target labels. See panels a and b above for the structure.

[5]
class framework(nn.Module):
    def __init__(self, FeatureNumber):
        super(framework, self).__init__()
        # stacked 1D CNN feature extractor
        self.cnn = nn.Sequential()
        self.cnn.add_module('Conv1', nn.Conv1d(1, 32, 2, stride=2, padding=0))
        self.cnn.add_module('ReLU1', nn.ReLU())
        self.cnn.add_module('Conv2', nn.Conv1d(32, 64, 2, stride=2, padding=0))
        self.cnn.add_module('ReLU2', nn.ReLU())
        self.cnn.add_module('Conv3', nn.Conv1d(64, 128, 4, stride=1, padding=0))
        self.cnn.add_module('ReLU3', nn.ReLU())

        # terminal fully connected (TFC) layer, shared by both domains
        self.predictor = nn.Sequential()
        self.predictor.add_module('pre_Fc1', nn.Linear(FeatureNumber, 1))
        self.predictor.add_module('pre_ReLU1', nn.ReLU())
        self.predictor.add_module('pre_Sigm1', nn.Sigmoid())
        # middle fully connected (MFC) layer, reconstructs target-domain features
        self.reconstructor = nn.Sequential()
        self.reconstructor.add_module('recon_Fc1', nn.Linear(FeatureNumber, FeatureNumber))
        self.reconstructor.add_module('recon_ReLU1', nn.ReLU())

    def forward(self, X_src, X_tar):
        fs0 = self.cnn(X_src)
        a = fs0.shape[0]
        fs = fs0.view(a, 1, -1)

        ft0 = self.cnn(X_tar)
        a = ft0.shape[0]
        ft = ft0.view(a, 1, -1)
        ft = self.reconstructor(ft)
        Pre_src = self.predictor(fs)
        Pre_tar = self.predictor(ft)

        return Pre_src, fs, ft, Pre_tar
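As a sanity check (a standalone sketch, not part of the notebook's pipeline): the three stacked Conv1d layers map a 48-point partial charging curve to the feature size FeatureNum = 1152 set in the training-parameter cell below, since the sequence length shrinks 48 → 24 → 12 → 9 and 128 × 9 = 1152.

```python
import torch
import torch.nn as nn

# Same Conv1d stack as the framework's feature extractor.
cnn = nn.Sequential(
    nn.Conv1d(1, 32, 2, stride=2), nn.ReLU(),
    nn.Conv1d(32, 64, 2, stride=2), nn.ReLU(),
    nn.Conv1d(64, 128, 4, stride=1), nn.ReLU(),
)
out = cnn(torch.randn(4, 1, 48))      # batch of 4 curves, 48 voltage points each
print(tuple(out.shape))               # (4, 128, 9)
print(out.view(4, 1, -1).shape[-1])   # 1152
```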

Compute the maximum mean discrepancy (MMD).

[6]
def mix_rbf_mmd(x, y):
    k_xx, k_xy, k_yy = mix_rbf_kernel(x, y)
    return _mmd(k_xx, k_xy, k_yy)

def mix_rbf_kernel(x, y):
    # mixture of RBF kernels with bandwidths 10**(sigma-2), sigma = 0..4
    assert x.size(0) == y.size(0)
    m = x.size(0)
    z = torch.cat((x, y), 0)
    zzt = torch.mm(z, z.t())
    diag_zzt = torch.diag(zzt).unsqueeze(1)
    z_norm_sqr = diag_zzt.expand_as(zzt)
    exponent = z_norm_sqr - 2 * zzt + z_norm_sqr.t()  # pairwise squared distances
    k = 0.0
    for sigma in range(0, 5, 1):
        gamma = 1 / (2 * (10 ** (sigma - 2)) ** 2)
        k += torch.exp(-gamma * exponent)
    return k[:m, :m], k[:m, m:], k[m:, m:]

def _mmd(k_xx, k_xy, k_yy):
    # biased MMD^2 estimate from the three kernel blocks
    m = k_xx.size(0)
    diag_x = torch.diag(k_xx)
    diag_y = torch.diag(k_yy)
    sum_diag_x = torch.sum(diag_x)
    sum_diag_y = torch.sum(diag_y)
    kt_xx_sums = k_xx.sum(dim=1) - diag_x
    kt_yy_sums = k_yy.sum(dim=1) - diag_y
    k_xy_sums_0 = k_xy.sum(dim=0)
    kt_xx_sum = kt_xx_sums.sum()
    kt_yy_sum = kt_yy_sums.sum()
    k_xy_sum = k_xy_sums_0.sum()
    mmd = ((kt_xx_sum + sum_diag_x) / (m * m) + (kt_yy_sum + sum_diag_y) / (m * m) - 2.0 * k_xy_sum / (m * m))
    return mmd
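Note that the diagonal bookkeeping in _mmd cancels out, so the estimator reduces to the biased form (ΣK_xx + ΣK_yy − 2·ΣK_xy) / m². A condensed, self-contained restatement, showing that a larger domain gap yields a larger MMD:

```python
import torch

def rbf_mmd(x, y):
    # Same multi-bandwidth RBF construction as above, in compact form.
    m = x.size(0)
    z = torch.cat((x, y), 0)
    zzt = z @ z.t()
    d = torch.diag(zzt).unsqueeze(1)
    exponent = d - 2 * zzt + d.t()                   # pairwise squared distances
    k = sum(torch.exp(-exponent / (2 * (10.0 ** (s - 2)) ** 2)) for s in range(5))
    k_xx, k_xy, k_yy = k[:m, :m], k[:m, m:], k[m:, m:]
    return (k_xx.sum() + k_yy.sum() - 2 * k_xy.sum()) / (m * m)

torch.manual_seed(0)
x = torch.randn(64, 8)
y_near = torch.randn(64, 8)        # drawn from the same distribution as x
y_far = torch.randn(64, 8) + 3.0   # shifted distribution (large domain gap)
print(rbf_mmd(x, y_near) < rbf_mmd(x, y_far))
```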

Set the training parameters.

[7]
#%% initialization
MaxTrainNum = 2000  # maximum number of training iterations per DNN
FeatureNum = 1152   # size of the flattened CNN feature vector (128 channels x 9 points)
LR = 0.001          # learning rate
DNNNum = 300        # size of the DNN swarm

#%% training and estimating
Predict_traject = []
Predict_MMD_total = []
Predict_MMDt_total = []

Train the model.

[8]
for DNNIndex in range(DNNNum):
    # import battery data from 5 different manufacturers
    # loading source domain data: 5 suffixes (representing 5 types of batteries) optional
    # (i.e., CALCE, SANYO, PANASONIC, KOKAM, GOTION)
    label_scr1_actual = label_scr_CALCE
    label_scr1 = label_scr_CALCE
    data_scr1 = np.row_stack([data_scr_CALCE])
    scrnum_battery = number_CALCE
    label_scr1_valid = label_scr1[number_CALCE[-1]:-1]
    data_scr1_valid = data_scr1[number_CALCE[-1]:-1, :]
    label_scr1_train = label_scr1[1:(number_CALCE[-1] - 1)]
    data_scr1_train = data_scr1[1:(number_CALCE[-1] - 1), :]
    # loading target domain data: 5 suffixes (representing 5 types of batteries) optional
    # (i.e., CALCE, SANYO, PANASONIC, KOKAM, GOTION)
    LowerSOHBound = 0.8
    test_label_actual = label_scr_GOTION
    test_data = data_scr_GOTION
    num_battery = number_GOTION
    ID_origin = np.arange(0, (np.float32(test_label_actual)).shape[0], 1)
    # indices of target samples outside the usable SOH range (SOH > 1 or SOH < LowerSOHBound)
    rmnum = []
    for kkk in range(test_label_actual.shape[0]):
        if test_label_actual[kkk] > 1:
            rmnum.append(kkk)
    for kkk in range(test_label_actual.shape[0]):
        if test_label_actual[kkk] < LowerSOHBound:
            rmnum.append(kkk)
    # indices of all samples with SOH != 1; what remains (SOH == 1, i.e. fresh
    # cells) forms the small labeled target subset test_data5 / test_label5
    rmnum2 = []
    for kkk in range(test_label_actual.shape[0]):
        if test_label_actual[kkk] > 1:
            rmnum2.append(kkk)
    for kkk in range(test_label_actual.shape[0]):
        if test_label_actual[kkk] < 1:
            rmnum2.append(kkk)
    test_label5 = np.delete(test_label_actual, rmnum2, axis=0)
    test_data5 = np.delete(test_data, rmnum2, axis=0)
    del rmnum2
    test_label_actual = np.delete(test_label_actual, rmnum, axis=0)
    test_data = np.delete(test_data, rmnum, axis=0)
    test_label = test_label_actual
    ID_origin = np.delete(ID_origin, rmnum, axis=0)
    number_battery = np.array([0])
    for i in range(1, num_battery.shape[0]):
        number_battery = np.append(number_battery, (ID_origin == num_battery[i]).nonzero())
    del num_battery
    ID_origin = torch.arange(0, (np.float32(test_label)).shape[0], 1)
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    model = framework(FeatureNum)
    model.to('cuda')
    optimizer = torch.optim.Adam(model.parameters(), LR)
    loss_func0 = nn.MSELoss()
    predict_ = []
    real_ = []
    Pre_loss = []
    mmd = []
    # training
    for n in range(MaxTrainNum):
        d_scr1, _ = batch_generate(data_scr1, label_scr1)
        d_scr1_train, l_scr1_train = batch_generate(data_scr1_train, label_scr1_train)
        d_scr1_valid, l_scr1_valid = batch_generate(data_scr1_valid, label_scr1_valid)
        d_tar_tr, _ = batch_generate(test_data, test_label)
        d_tar_tr2, l_tar_tr2 = batch_generate(test_data5, test_label5)
        d_scr1 = d_scr1.data.cuda()
        d_scr1_train = d_scr1_train.data.cuda()
        l_scr1_train = l_scr1_train.data.cuda()
        d_scr1_valid = d_scr1_valid.data.cuda()
        l_scr1_valid = l_scr1_valid.data.cuda()
        d_tar_tr = d_tar_tr.data.cuda()
        d_tar_tr2 = d_tar_tr2.data.cuda()
        l_tar_tr2 = l_tar_tr2.data.cuda()
        pref1, fs, ft, _ = model(d_scr1_train, d_tar_tr)
        pref1v, _, _, _ = model(d_scr1_valid, d_tar_tr)
        _, _, _, pref2 = model(d_scr1, d_tar_tr2)
        pre_loss_s1 = loss_func0(pref1, l_scr1_train)
        pre_loss_s1_v = loss_func0(pref1v, l_scr1_valid)
        pre_loss2 = loss_func0(pref2, l_tar_tr2)

        pre_loss = pre_loss_s1

        loss_MMD = 0
        Fs = fs.reshape([-1, FeatureNum])
        Ft = ft.reshape([-1, FeatureNum])
        loss_MMD = mix_rbf_mmd(Fs, Ft)
        Loss = 1.0 * pre_loss + 0.1 * loss_MMD + 1.0 * pre_loss2
        optimizer.zero_grad()
        Loss.backward()
        optimizer.step()
        Loss = Loss.data.cpu().detach().numpy()
        loss_pre = pre_loss.data.cpu().numpy()
        loss_MMD = loss_MMD.data.cpu().numpy()
        Loss = torch.from_numpy(Loss)
        Loss = Loss.data.cuda()

        Pre_loss.append(loss_pre)
        mmd.append(loss_MMD)
        # early stop: validation loss no worse than training loss, at least 500
        # iterations done, and training RMSE below 0.05
        if pre_loss_s1_v <= pre_loss_s1 and n >= 500 and math.sqrt(pre_loss) < 0.05:
            break

    print('DNN#', DNNIndex + 1, ': Training finished')
    # estimating
    model.eval()
    x = torch.from_numpy(test_data)
    x = x.unsqueeze(dim=1)
    x = x.data.cuda()
    y = torch.from_numpy(test_label)
    y = y.data.cuda()
    _, Predict_MMD, Predict_MMDt, pre = model(x, x)
    Predict_MMD = Predict_MMD.squeeze()
    Predict_MMD = Predict_MMD.cpu().detach().numpy()
    Predict_MMDt = Predict_MMDt.squeeze()
    Predict_MMDt = Predict_MMDt.cpu().detach().numpy()
    pre = pre.data.cpu().detach().numpy()
    pre_data = pre.squeeze()
    if DNNIndex == 0:
        Predict_traject = np.row_stack([y.cpu(), pre_data])
        Predict_MMD_total = Predict_MMD
        Predict_MMDt_total = Predict_MMDt
    else:
        Predict_traject = np.row_stack([Predict_traject, pre_data])
        Predict_MMD_total = np.row_stack([Predict_MMD_total, Predict_MMD])
        Predict_MMDt_total = np.row_stack([Predict_MMDt_total, Predict_MMDt])
DNN# 1 : Training finished
...
DNN# 300 : Training finished

Make the final predictions and plot them against the true values.

[9]
#%% selection
DNNMean = []
DNNStd = []
for n in range(1, Predict_traject.shape[0]):
    if n == 1:
        DNNMean = np.mean(Predict_traject[n])
        DNNStd = np.std(Predict_traject[n])
    else:
        DNNMean = np.column_stack([DNNMean, np.mean(Predict_traject[n])])
        DNNStd = np.column_stack([DNNStd, np.std(Predict_traject[n])])
Q3 = np.quantile(DNNMean, 0.75)
Q1 = np.quantile(DNNStd, 0.25)
DNNMean = DNNMean.squeeze()
DNNStd = DNNStd.squeeze()
X = []
count = 0
# keep DNNs with mean >= Q3 and std <= Q1 (loop over all DNNs)
for n in range(0, DNNMean.shape[0]):
    if DNNMean[n] >= Q3 and DNNStd[n] <= Q1:
        if count == 0:
            X = n
            count = count + 1
        else:
            X = np.column_stack([X, n])
            count = count + 1
X = X + 1  # shift by one: row 0 of Predict_traject holds the true SOH values
Predict_final = np.mean(Predict_traject[X], axis=1)

#%% show the real values of SOH and their estimates
plt.scatter(Predict_traject[0], Predict_traject[0], c='black')
plt.scatter(Predict_traject[0], Predict_final)
plt.xlabel('SOH Real')
plt.ylabel('SOH Predict')
plt.show()
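To summarize a run with the same error statistics quoted in the Background (maximum absolute error, fraction within 3%), one can compute, for example (a sketch on synthetic numbers; substitute Predict_traject[0] and Predict_final from the cells above):

```python
import numpy as np

# Sketch: absolute-error statistics for SOH estimates (synthetic stand-in values).
rng = np.random.default_rng(1)
soh_true = rng.uniform(0.8, 1.0, 500)              # stand-in for Predict_traject[0]
soh_pred = soh_true + rng.normal(0.0, 0.015, 500)  # stand-in for Predict_final
abs_err = np.abs(soh_pred - soh_true)
within_3pct = np.mean(abs_err < 0.03)
print(f"max abs error: {abs_err.max():.4f}, within 3%: {within_3pct:.1%}")
```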