AI + Battery Cells | Predicting the Full Charge Curve from Voltage and Capacity Data over a Narrow Range (a Transfer-Learning Case Study)

JiaweiMiao

Recommended image: Third-party software:d2l-ai:pytorch
Recommended machine type: c12_m46_1 * NVIDIA GPU B
Dataset: BLG-CC (v1)
This notebook is adapted from the Joint Laboratory of Advanced Energy Storage Science and Application at Beijing Institute of Technology.
Voltage and capacity data within a 300 mV voltage window are used to predict the complete charge curve.
The dataset is publicly available.
A model pretrained on the Oxford dataset is transferred to the CALCE dataset.
[2]
!pip install keras
!pip install scipy
!pip install tensorflow
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Successfully installed keras-2.11.0
Requirement already satisfied: scipy in /opt/miniconda/lib/python3.7/site-packages (1.7.3)
Successfully installed libclang-16.0.6 protobuf-3.19.6 tensorboard-2.11.2 tensorflow-2.11.0 tensorflow-estimator-2.11.0 tensorflow-io-gcs-filesystem-0.33.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
(full dependency-resolution and download-progress output omitted)
[3]
from keras.models import Sequential
import numpy as np
from keras.callbacks import ModelCheckpoint
import time
import scipy.io as scio
start_time = time.time()
2023-08-29 09:45:41.362260: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA. To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-29 09:45:42.445385: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-08-29 09:45:42.445491: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-08-29 09:45:42.445502: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
import data
[5]
#%% import data
# training data are from calce36, and testing data are from others
location = '/bohr/blgcc-710p/v1/data_calce/'
curve_cell_1 = np.genfromtxt(location+'CALCE_36.txt',delimiter = ',')
# test dataset
curve_cell_2 = np.genfromtxt(location+'CALCE_35.txt',delimiter = ',')
curve_cell_3 = np.genfromtxt(location+'CALCE_37.txt',delimiter = ',')
curve_cell_4 = np.genfromtxt(location+'CALCE_38.txt',delimiter = ',')
# scale the data according to the nominal capacities of the Oxford and CALCE batteries
curve_cell_1 = curve_cell_1/1.1*0.74
curve_cell_2 = curve_cell_2/1.1*0.74
curve_cell_3 = curve_cell_3/1.1*0.74
curve_cell_4 = curve_cell_4/1.1*0.74
# downsample the training and test datasets
curve_cell_1 = curve_cell_1[0:-1:45]
curve_cell_2 = curve_cell_2[0:-1:10]
curve_cell_3 = curve_cell_3[0:-1:10]
curve_cell_4 = curve_cell_4[0:-1:10]
curve_train = [curve_cell_1]
curve_test = [curve_cell_2,curve_cell_3,curve_cell_4]
voltage = np.arange(2.71,4.181,0.01)
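The cell above rescales capacities from the 1.1 Ah nominal capacity (Oxford) to the 0.74 Ah nominal capacity (CALCE), then thins the cycle list by strided slicing. A minimal NumPy sketch of both operations on synthetic data (the array shapes here are illustrative, not the real dataset's):

```python
import numpy as np

# synthetic stand-in for one cell's curves: 100 cycles x 148 voltage points
rng = np.random.default_rng(0)
curves = rng.uniform(0.0, 1.1, size=(100, 148))

# rescale from a 1.1 Ah nominal capacity to 0.74 Ah
curves_scaled = curves / 1.1 * 0.74

# keep every 10th cycle (note that 0:-1:10 also drops the final cycle)
curves_down = curves_scaled[0:-1:10]

print(curves_scaled.max() <= 0.74)  # True
print(curves_down.shape)            # (10, 148)
```

The slice `[0:-1:10]` stops one element before the end, so the very last cycle never appears in the downsampled set regardless of the stride.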
compute mean and std based on the training data to standardise the input
[6]
#%% compute mean and std based on the training data to standardise the input
entire_charge = curve_train[0].flatten()
for ind in range(1, len(curve_train), 1):
    entire_charge = np.append(entire_charge, curve_train[ind].flatten())
entire_voltage = np.tile(voltage,len(entire_charge)//len(voltage))
entire_series_stack = np.vstack((entire_voltage, entire_charge))
entire_series = entire_series_stack.T
print(entire_charge.shape)
print(entire_voltage.shape)
print(entire_series.shape)
# mean and std
mean = entire_series.mean(axis=0)
entire_series -= mean
std = entire_series.std(axis=0)
entire_series /= std
(3108,)
(3108,)
(3108, 2)
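The same column-wise standardisation can be checked on a toy two-column array (values here are made up for illustration). Note that, as in the cell above, the std is computed *after* centring, which gives the same result because the standard deviation is translation-invariant:

```python
import numpy as np

# toy two-column series: column 0 = voltage, column 1 = charge
series = np.array([[2.7, 0.00],
                   [3.0, 0.25],
                   [3.3, 0.50],
                   [3.6, 0.75]])

# column-wise standardisation, mirroring the cell above
mean = series.mean(axis=0)
series_std = series - mean
std = series_std.std(axis=0)
series_std = series_std / std

print(series_std.mean(axis=0))  # ~ [0, 0]
print(series_std.std(axis=0))   # ~ [1, 1]
```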
sequence generation function
[7]
#%% sliding-window sample generation
def generator(data, lookback, delay, min_index, max_index,
              shuffle=False, batch_size=128, step=1):
    if max_index is None:
        max_index = len(data) - delay - 1
    i = min_index + lookback
    # note: the function returns on the first pass through this loop,
    # so despite the `while 1` it behaves as a plain function, not a generator
    while 1:
        if shuffle:
            rows = np.random.randint(
                min_index + lookback, max_index, size=batch_size)
        else:
            if i + batch_size >= max_index:
                i = min_index + lookback
            rows = np.arange(i, min(i + batch_size, max_index))
            i += len(rows)
        samples = np.zeros((len(rows),
                            lookback // step - step,
                            data.shape[-1]))
        for j, row in enumerate(rows):
            indices = range(rows[j] - lookback, rows[j], step)
            samples[j] = data[indices][1:, :]
            # offset the charge channel so each window starts from zero charge
            samples[j][:, 1] -= data[indices][0, 1]
        return samples
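The windowing logic can be illustrated in isolation. The sketch below (a simplified, hypothetical `make_windows` helper, not part of the notebook) extracts every window of a 2-column series with `lookback=31` and `step=1`, giving windows of length `lookback // step - step = 30`, with the charge channel zeroed at the window start:

```python
import numpy as np

def make_windows(data, lookback=31, step=1):
    """All sliding windows of length lookback//step - step over a 2-column series."""
    max_index = len(data) - 1
    rows = np.arange(lookback, max_index)
    samples = np.zeros((len(rows), lookback // step - step, data.shape[-1]))
    for j, row in enumerate(rows):
        window = data[row - lookback:row:step]
        samples[j] = window[1:, :]
        samples[j][:, 1] -= window[0, 1]  # zero the charge channel at window start
    return samples

# toy curve: 148 voltage points and a linearly growing charge
toy = np.column_stack([np.linspace(2.71, 4.18, 148),
                       np.linspace(0.0, 0.74, 148)])
w = make_windows(toy)
print(w.shape)  # (116, 30, 2)
```

116 windows per curve is consistent with the training shape printed later: 21 training curves of 148 points each give 21 x 116 = 2436 samples.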
prepare data for each cell and form the training and test dataset
[8]
#%% prepare data for each cell and form the training and test datasets
lookback_size = 31  # window size
step_size = 1       # sampling step
# training data
data_train_temp = []
target_train_temp = []
for ind in range(0, len(curve_train), 1):
    for k in range(0, len(curve_train[ind]), 1):
        charge = curve_train[ind][k]
        temp_train_vstack = np.vstack((voltage, charge))
        temp_train_not = temp_train_vstack.T  # before standardisation
        # standardisation
        temp_train = temp_train_not - mean
        temp_train = temp_train / std
        batch_size_train = len(temp_train)
        train_gen = generator(temp_train,
                              lookback=lookback_size,
                              delay=0,
                              min_index=0,
                              max_index=None,
                              shuffle=False,
                              batch_size=batch_size_train,
                              step=step_size)
        data_train_temp.append(train_gen)
        A = np.tile(charge, [len(train_gen), 1])
        target_train_temp.append(A)
train_gen_final = np.concatenate(data_train_temp, axis=0)
train_target_final = np.concatenate(target_train_temp, axis=0)
print(train_gen_final.shape)
print(train_target_final.shape)
(2436, 30, 2)
(2436, 148)
test data
[9]
#%% test data
data_test_temp = []
target_test_temp = []
for ind in range(0, len(curve_test), 1):
    for k in range(0, len(curve_test[ind]), 1):
        charge = curve_test[ind][k]
        temp_test_vstack = np.vstack((voltage, charge))
        temp_test_not = temp_test_vstack.T  # before standardisation
        # standardisation
        temp_test = temp_test_not - mean
        temp_test = temp_test / std
        batch_size_test = len(temp_test)
        test_gen = generator(temp_test,
                             lookback=lookback_size,
                             delay=0,
                             min_index=0,
                             max_index=None,
                             shuffle=False,
                             batch_size=batch_size_test,
                             step=step_size)
        data_test_temp.append(test_gen)
        A = np.tile(charge, [len(test_gen), 1])
        target_test_temp.append(A)
test_gen_final = np.concatenate(data_test_temp, axis=0)
test_target_final = np.concatenate(target_test_temp, axis=0)
print(test_gen_final.shape)
print(test_target_final.shape)
(33292, 30, 2)
(33292, 148)
shuffle the training dataset for validation
[10]
#%% shuffle the training dataset for validation
index = np.arange(train_gen_final.shape[0])
np.random.shuffle(index)
Input_train = train_gen_final[index,:,:]
Output_train = train_target_final[index,:]
Input_test = test_gen_final
Output_test = test_target_final
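Shuffling inputs and targets with one shared index array is what keeps each window aligned with its own target curve. A toy check of that invariant, with hypothetical shapes:

```python
import numpy as np

X = np.arange(12).reshape(4, 3, 1)   # 4 samples of shape (3, 1)
Y = np.arange(4).reshape(4, 1) * 10  # matching targets: sample i -> i * 10

# one permutation applied to both arrays
index = np.arange(X.shape[0])
rng = np.random.default_rng(42)
rng.shuffle(index)

X_sh = X[index, :, :]
Y_sh = Y[index, :]

# pairing is preserved: shuffled sample i still maps to its own target
for i in range(X.shape[0]):
    assert np.all(X_sh[i] == X[index[i]])
    assert Y_sh[i, 0] == index[i] * 10
print("aligned")
```

Shuffling matters here because `model.fit` below takes the *last* 35% of the array as the validation split; without shuffling, validation data would all come from the tail of the cycle sequence.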
import the pretrained model and modify it
[11]
#%% import the pretrained model and modify it
from keras import layers
from keras.models import load_model

model_base = load_model('/bohr/blgcc-710p/v1/transferbasis_calce/oxford_model.hdf5')
model_base.summary()
model = Sequential()
for layer in model_base.layers[:-1]:  # copy every layer except the output layer
    model.add(layer)
model.add(layers.Dense(len(voltage), name='new_dense'))
model.summary()
# freeze the copied layers so only the new output layer is trained first
for layer in model.layers[:-1]:
    layer.trainable = False
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.trainable)
model.compile(loss='mean_squared_error', optimizer='adam')
filepath = "transfer-{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1,
                             save_best_only=True, mode='auto')
callbacks_list = [checkpoint]
# stage 1: train the new output layer; the epoch count is set to 50 as a fast example
history = model.fit(Input_train, Output_train,
                    epochs=50,
                    batch_size=512,
                    validation_split=0.35,
                    callbacks=callbacks_list, verbose=1)
# stage 2: unfreeze all layers and fine-tune the whole model
# (4950 further epochs in the original work; again shortened to 50 as a fast example)
for layer in model.layers[:-1]:
    layer.trainable = True
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.trainable)
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(Input_train, Output_train,
                    epochs=50,
                    batch_size=512,
                    validation_split=0.35,
                    callbacks=callbacks_list, verbose=1)
2023-08-29 09:46:07-08: TensorFlow device-initialisation messages (repeated "successful NUMA node read from SysFS had negative value (-1) ... returning NUMA node zero" warnings omitted)
2023-08-29 09:46:08.007615: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 13781 MB memory: -> device: 0, name: Tesla T4, pci bus id: 0000:00:09.0, compute capability: 7.5

Model: "sequential_1" (pretrained Oxford model)
 Layer (type)                                 Output Shape       Param #
 conv1d_1 (Conv1D)                            (None, None, 16)   112
 max_pooling1d_1 (MaxPooling1D)               (None, None, 16)   0
 conv1d_2 (Conv1D)                            (None, None, 8)    392
 max_pooling1d_2 (MaxPooling1D)               (None, None, 8)    0
 conv1d_3 (Conv1D)                            (None, None, 8)    200
 global_max_pooling1d_1 (GlobalMaxPooling1D)  (None, 8)          0
 dense_1 (Dense)                              (None, 140)        1260
 dropout_1 (Dropout)                          (None, 140)        0
 dense_2 (Dense)                              (None, 140)        19740
 Total params: 21,704  Trainable params: 21,704  Non-trainable params: 0

Model: "sequential" (same backbone, with dense_2 replaced by the new output layer)
 new_dense (Dense)                            (None, 148)        20868
 Total params: 22,832  Trainable params: 22,832  Non-trainable params: 0

0 conv1d_1 False
1 max_pooling1d_1 False
2 conv1d_2 False
3 max_pooling1d_2 False
4 conv1d_3 False
5 global_max_pooling1d_1 False
6 dense_1 False
7 dropout_1 False
8 new_dense True

Epoch 1/50: 4/4 - 6s 113ms/step - loss: 1384965.8750 - val_loss: 1345586.6250 (saved models_calce/transfer-01-1345586.62.hdf5)
Epoch 46/50: 4/4 - 0s 22ms/step - loss: 1164356.1250 - val_loss: 1140175.1250 (saved models_calce/transfer-46-1140175.12.hdf5)
(intermediate per-epoch checkpoint messages omitted; val_loss improved monotonically from 1345586.6 to 1140175.1; log truncated during Epoch 47)
- ETA: 0s - loss: 1185106.6250 Epoch 47: val_loss improved from 1140175.12500 to 1136389.87500, saving model to models_calce/transfer-47-1136389.88.hdf5 4/4 [==============================] - 0s 22ms/step - loss: 1159690.8750 - val_loss: 1136389.8750 Epoch 48/50 1/4 [======>.......................] - ETA: 0s - loss: 1148636.3750 Epoch 48: val_loss improved from 1136389.87500 to 1132682.50000, saving model to models_calce/transfer-48-1132682.50.hdf5 4/4 [==============================] - 0s 21ms/step - loss: 1156001.2500 - val_loss: 1132682.5000 Epoch 49/50 1/4 [======>.......................] - ETA: 0s - loss: 1148564.1250 Epoch 49: val_loss improved from 1132682.50000 to 1128972.25000, saving model to models_calce/transfer-49-1128972.25.hdf5 4/4 [==============================] - 0s 21ms/step - loss: 1152468.3750 - val_loss: 1128972.2500 Epoch 50/50 1/4 [======>.......................] - ETA: 0s - loss: 1165753.5000 Epoch 50: val_loss improved from 1128972.25000 to 1125266.75000, saving model to models_calce/transfer-50-1125266.75.hdf5 4/4 [==============================] - 0s 21ms/step - loss: 1147513.2500 - val_loss: 1125266.7500 0 conv1d_1 True 1 max_pooling1d_1 True 2 conv1d_2 True 3 max_pooling1d_2 True 4 conv1d_3 True 5 global_max_pooling1d_1 True 6 dense_1 True 7 dropout_1 True 8 new_dense True Epoch 1/50 4/4 [==============================] - ETA: 0s - loss: 1130656.3750 Epoch 1: val_loss improved from 1125266.75000 to 1074253.25000, saving model to models_calce/transfer-01-1074253.25.hdf5 4/4 [==============================] - 3s 96ms/step - loss: 1130656.3750 - val_loss: 1074253.2500 Epoch 2/50 1/4 [======>.......................] - ETA: 0s - loss: 1108183.5000 Epoch 2: val_loss improved from 1074253.25000 to 1019607.93750, saving model to models_calce/transfer-02-1019607.94.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 1077365.5000 - val_loss: 1019607.9375 Epoch 3/50 1/4 [======>.......................] 
- ETA: 0s - loss: 1065881.0000 Epoch 3: val_loss improved from 1019607.93750 to 958746.87500, saving model to models_calce/transfer-03-958746.88.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 1017887.6875 - val_loss: 958746.8750 Epoch 4/50 1/4 [======>.......................] - ETA: 0s - loss: 939735.8125 Epoch 4: val_loss improved from 958746.87500 to 891875.06250, saving model to models_calce/transfer-04-891875.06.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 951326.0625 - val_loss: 891875.0625 Epoch 5/50 1/4 [======>.......................] - ETA: 0s - loss: 910789.3125 Epoch 5: val_loss improved from 891875.06250 to 819722.62500, saving model to models_calce/transfer-05-819722.62.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 880672.2500 - val_loss: 819722.6250 Epoch 6/50 1/4 [======>.......................] - ETA: 0s - loss: 815015.7500 Epoch 6: val_loss improved from 819722.62500 to 743715.56250, saving model to models_calce/transfer-06-743715.56.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 803421.3125 - val_loss: 743715.5625 Epoch 7/50 1/4 [======>.......................] - ETA: 0s - loss: 769117.7500 Epoch 7: val_loss improved from 743715.56250 to 665451.18750, saving model to models_calce/transfer-07-665451.19.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 723154.6250 - val_loss: 665451.1875 Epoch 8/50 1/4 [======>.......................] - ETA: 0s - loss: 661749.6875 Epoch 8: val_loss improved from 665451.18750 to 589088.62500, saving model to models_calce/transfer-08-589088.62.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 643742.1250 - val_loss: 589088.6250 Epoch 9/50 1/4 [======>.......................] 
- ETA: 0s - loss: 591873.0625 Epoch 9: val_loss improved from 589088.62500 to 518517.28125, saving model to models_calce/transfer-09-518517.28.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 566940.2500 - val_loss: 518517.2812 Epoch 10/50 1/4 [======>.......................] - ETA: 0s - loss: 510998.7500 Epoch 10: val_loss improved from 518517.28125 to 456527.00000, saving model to models_calce/transfer-10-456527.00.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 499026.2188 - val_loss: 456527.0000 Epoch 11/50 1/4 [======>.......................] - ETA: 0s - loss: 441122.0938 Epoch 11: val_loss improved from 456527.00000 to 404527.87500, saving model to models_calce/transfer-11-404527.88.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 440418.2812 - val_loss: 404527.8750 Epoch 12/50 1/4 [======>.......................] - ETA: 0s - loss: 395320.5625 Epoch 12: val_loss improved from 404527.87500 to 362482.40625, saving model to models_calce/transfer-12-362482.41.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 390456.0625 - val_loss: 362482.4062 Epoch 13/50 1/4 [======>.......................] - ETA: 0s - loss: 366717.8750 Epoch 13: val_loss improved from 362482.40625 to 329912.75000, saving model to models_calce/transfer-13-329912.75.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 351090.2812 - val_loss: 329912.7500 Epoch 14/50 1/4 [======>.......................] - ETA: 0s - loss: 315038.7812 Epoch 14: val_loss improved from 329912.75000 to 305377.46875, saving model to models_calce/transfer-14-305377.47.hdf5 4/4 [==============================] - 0s 29ms/step - loss: 322009.3438 - val_loss: 305377.4688 Epoch 15/50 1/4 [======>.......................] 
- ETA: 0s - loss: 312169.4375 Epoch 15: val_loss improved from 305377.46875 to 286453.53125, saving model to models_calce/transfer-15-286453.53.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 301297.0312 - val_loss: 286453.5312 Epoch 16/50 1/4 [======>.......................] - ETA: 0s - loss: 282427.5625 Epoch 16: val_loss improved from 286453.53125 to 271488.90625, saving model to models_calce/transfer-16-271488.91.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 279366.2500 - val_loss: 271488.9062 Epoch 17/50 1/4 [======>.......................] - ETA: 0s - loss: 272606.3438 Epoch 17: val_loss improved from 271488.90625 to 258627.45312, saving model to models_calce/transfer-17-258627.45.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 267070.8438 - val_loss: 258627.4531 Epoch 18/50 1/4 [======>.......................] - ETA: 0s - loss: 254697.9375 Epoch 18: val_loss improved from 258627.45312 to 246811.90625, saving model to models_calce/transfer-18-246811.91.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 254267.0469 - val_loss: 246811.9062 Epoch 19/50 1/4 [======>.......................] - ETA: 0s - loss: 246942.2500 Epoch 19: val_loss improved from 246811.90625 to 235767.39062, saving model to models_calce/transfer-19-235767.39.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 244324.5156 - val_loss: 235767.3906 Epoch 20/50 1/4 [======>.......................] - ETA: 0s - loss: 230561.6875 Epoch 20: val_loss improved from 235767.39062 to 225886.46875, saving model to models_calce/transfer-20-225886.47.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 233113.6250 - val_loss: 225886.4688 Epoch 21/50 1/4 [======>.......................] 
- ETA: 0s - loss: 212715.4531 Epoch 21: val_loss improved from 225886.46875 to 217093.40625, saving model to models_calce/transfer-21-217093.41.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 221910.9375 - val_loss: 217093.4062 Epoch 22/50 1/4 [======>.......................] - ETA: 0s - loss: 220244.2500 Epoch 22: val_loss improved from 217093.40625 to 209187.40625, saving model to models_calce/transfer-22-209187.41.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 210673.5938 - val_loss: 209187.4062 Epoch 23/50 1/4 [======>.......................] - ETA: 0s - loss: 214417.2188 Epoch 23: val_loss improved from 209187.40625 to 202026.60938, saving model to models_calce/transfer-23-202026.61.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 204669.3594 - val_loss: 202026.6094 Epoch 24/50 1/4 [======>.......................] - ETA: 0s - loss: 203135.1562 Epoch 24: val_loss improved from 202026.60938 to 195432.25000, saving model to models_calce/transfer-24-195432.25.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 196692.2969 - val_loss: 195432.2500 Epoch 25/50 1/4 [======>.......................] - ETA: 0s - loss: 187617.1250 Epoch 25: val_loss improved from 195432.25000 to 189315.85938, saving model to models_calce/transfer-25-189315.86.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 188292.6875 - val_loss: 189315.8594 Epoch 26/50 1/4 [======>.......................] - ETA: 0s - loss: 187414.5938 Epoch 26: val_loss improved from 189315.85938 to 183624.92188, saving model to models_calce/transfer-26-183624.92.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 185284.8125 - val_loss: 183624.9219 Epoch 27/50 1/4 [======>.......................] 
- ETA: 0s - loss: 177612.4844 Epoch 27: val_loss improved from 183624.92188 to 178471.65625, saving model to models_calce/transfer-27-178471.66.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 180393.6094 - val_loss: 178471.6562 Epoch 28/50 1/4 [======>.......................] - ETA: 0s - loss: 178150.8594 Epoch 28: val_loss improved from 178471.65625 to 173845.87500, saving model to models_calce/transfer-28-173845.88.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 174423.6875 - val_loss: 173845.8750 Epoch 29/50 1/4 [======>.......................] - ETA: 0s - loss: 174709.4375 Epoch 29: val_loss improved from 173845.87500 to 169616.53125, saving model to models_calce/transfer-29-169616.53.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 169944.7188 - val_loss: 169616.5312 Epoch 30/50 1/4 [======>.......................] - ETA: 0s - loss: 165308.9062 Epoch 30: val_loss improved from 169616.53125 to 165702.75000, saving model to models_calce/transfer-30-165702.75.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 166362.7188 - val_loss: 165702.7500 Epoch 31/50 1/4 [======>.......................] - ETA: 0s - loss: 163526.2188 Epoch 31: val_loss improved from 165702.75000 to 162079.96875, saving model to models_calce/transfer-31-162079.97.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 161772.2500 - val_loss: 162079.9688 Epoch 32/50 1/4 [======>.......................] - ETA: 0s - loss: 162510.2969 Epoch 32: val_loss improved from 162079.96875 to 158595.75000, saving model to models_calce/transfer-32-158595.75.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 156884.8125 - val_loss: 158595.7500 Epoch 33/50 1/4 [======>.......................] 
- ETA: 0s - loss: 143176.1250 Epoch 33: val_loss improved from 158595.75000 to 155130.48438, saving model to models_calce/transfer-33-155130.48.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 153628.6094 - val_loss: 155130.4844 Epoch 34/50 1/4 [======>.......................] - ETA: 0s - loss: 163904.9219 Epoch 34: val_loss improved from 155130.48438 to 151823.89062, saving model to models_calce/transfer-34-151823.89.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 150787.2500 - val_loss: 151823.8906 Epoch 35/50 1/4 [======>.......................] - ETA: 0s - loss: 140994.5312 Epoch 35: val_loss improved from 151823.89062 to 148695.98438, saving model to models_calce/transfer-35-148695.98.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 147144.5469 - val_loss: 148695.9844 Epoch 36/50 1/4 [======>.......................] - ETA: 0s - loss: 131274.3750 Epoch 36: val_loss improved from 148695.98438 to 145723.15625, saving model to models_calce/transfer-36-145723.16.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 145714.6875 - val_loss: 145723.1562 Epoch 37/50 1/4 [======>.......................] - ETA: 0s - loss: 141761.3906 Epoch 37: val_loss improved from 145723.15625 to 142866.85938, saving model to models_calce/transfer-37-142866.86.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 141496.4062 - val_loss: 142866.8594 Epoch 38/50 1/4 [======>.......................] - ETA: 0s - loss: 135996.4688 Epoch 38: val_loss improved from 142866.85938 to 140090.01562, saving model to models_calce/transfer-38-140090.02.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 137653.0469 - val_loss: 140090.0156 Epoch 39/50 1/4 [======>.......................] 
- ETA: 0s - loss: 133271.8906 Epoch 39: val_loss improved from 140090.01562 to 137524.10938, saving model to models_calce/transfer-39-137524.11.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 135330.5625 - val_loss: 137524.1094 Epoch 40/50 1/4 [======>.......................] - ETA: 0s - loss: 135129.2812 Epoch 40: val_loss improved from 137524.10938 to 135054.42188, saving model to models_calce/transfer-40-135054.42.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 132997.8594 - val_loss: 135054.4219 Epoch 41/50 1/4 [======>.......................] - ETA: 0s - loss: 125628.0312 Epoch 41: val_loss improved from 135054.42188 to 132721.26562, saving model to models_calce/transfer-41-132721.27.hdf5 4/4 [==============================] - 0s 30ms/step - loss: 130171.9062 - val_loss: 132721.2656 Epoch 42/50 1/4 [======>.......................] - ETA: 0s - loss: 134558.3125 Epoch 42: val_loss improved from 132721.26562 to 130462.25781, saving model to models_calce/transfer-42-130462.26.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 127262.1875 - val_loss: 130462.2578 Epoch 43/50 1/4 [======>.......................] - ETA: 0s - loss: 119058.9375 Epoch 43: val_loss improved from 130462.25781 to 128237.41406, saving model to models_calce/transfer-43-128237.41.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 125635.9219 - val_loss: 128237.4141 Epoch 44/50 1/4 [======>.......................] - ETA: 0s - loss: 116540.9375 Epoch 44: val_loss improved from 128237.41406 to 126126.12500, saving model to models_calce/transfer-44-126126.12.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 122424.3984 - val_loss: 126126.1250 Epoch 45/50 1/4 [======>.......................] 
- ETA: 0s - loss: 115428.5156 Epoch 45: val_loss improved from 126126.12500 to 124067.74219, saving model to models_calce/transfer-45-124067.74.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 122296.1797 - val_loss: 124067.7422 Epoch 46/50 1/4 [======>.......................] - ETA: 0s - loss: 114902.7422 Epoch 46: val_loss improved from 124067.74219 to 122071.03125, saving model to models_calce/transfer-46-122071.03.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 117418.6016 - val_loss: 122071.0312 Epoch 47/50 1/4 [======>.......................] - ETA: 0s - loss: 127500.2188 Epoch 47: val_loss improved from 122071.03125 to 120150.81250, saving model to models_calce/transfer-47-120150.81.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 116726.8203 - val_loss: 120150.8125 Epoch 48/50 1/4 [======>.......................] - ETA: 0s - loss: 121290.3281 Epoch 48: val_loss improved from 120150.81250 to 118294.88281, saving model to models_calce/transfer-48-118294.88.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 115806.1406 - val_loss: 118294.8828 Epoch 49/50 1/4 [======>.......................] - ETA: 0s - loss: 115282.1562 Epoch 49: val_loss improved from 118294.88281 to 116374.40625, saving model to models_calce/transfer-49-116374.41.hdf5 4/4 [==============================] - 0s 27ms/step - loss: 113106.7109 - val_loss: 116374.4062 Epoch 50/50 1/4 [======>.......................] - ETA: 0s - loss: 113384.3125 Epoch 50: val_loss improved from 116374.40625 to 114432.71094, saving model to models_calce/transfer-50-114432.71.hdf5 4/4 [==============================] - 0s 26ms/step - loss: 111091.1953 - val_loss: 114432.7109
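The log above reflects a two-phase transfer-learning schedule: train with the pretrained base frozen, then mark every layer trainable (all nine report True) and fine-tune. A minimal Keras sketch of that schedule follows; the layer sizes and learning rate are illustrative assumptions, not the notebook's exact architecture, and the `fit` calls are left commented since they need the CALCE data arrays.

```python
# Minimal sketch of the freeze-then-unfreeze schedule seen in the log.
# Layer sizes are illustrative; only a few of the notebook's layers are shown.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(30, 1)),  # e.g. 30 samples from the 300 mV window
    keras.layers.Conv1D(16, 3, activation='relu', name='conv1d_1'),
    keras.layers.GlobalMaxPooling1D(name='global_max_pooling1d_1'),
    keras.layers.Dense(8, activation='relu', name='dense_1'),
    keras.layers.Dense(1, name='new_dense'),
])

# Checkpoint pattern matching the filenames in the log,
# e.g. models_calce/transfer-23-1236129.62.hdf5:
# checkpoint = keras.callbacks.ModelCheckpoint(
#     'models_calce/transfer-{epoch:02d}-{val_loss:.2f}.hdf5',
#     monitor='val_loss', save_best_only=True, verbose=1)

# Phase 1: freeze everything except the new output head.
for layer in model.layers[:-1]:
    layer.trainable = False
model.compile(optimizer='adam', loss='mse')
# model.fit(x, y, validation_data=(xv, yv), epochs=50, callbacks=[checkpoint])

# Phase 2: unfreeze all layers and recompile before fine-tuning
# (recompiling is required for the new trainable flags to take effect).
for layer in model.layers:
    layer.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-4), loss='mse')
# model.fit(x, y, validation_data=(xv, yv), epochs=50, callbacks=[checkpoint])

for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.trainable)
```

Printing the `trainable` flags per layer, as above, reproduces the `0 conv1d_1 True … 8 new_dense True` lines in the log and is a cheap way to verify the second phase really unfroze the base.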
show training and validation loss
[12]
#%% show training and validation loss
import time
import matplotlib.pyplot as plt
import numpy as np

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)

plt.figure(dpi=150)
# skip epoch 1 so the large initial loss does not dominate the plot
plt.plot(epochs[1:], np.log(loss[1:]), 'bo', label='Training loss')
plt.plot(epochs[1:], np.log(val_loss[1:]), 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.ylabel('log(loss)')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# use %.2f: %.2s would truncate the string to its first two characters
print("--- %.2f seconds ---" % (time.time() - start_time))
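As a quick sanity check on the fine-tuning gain, the two val_loss endpoints printed in the log can be compared directly (the numbers below are copied from the log above; the ~90% figure is specific to this run):

```python
# Final val_loss of each phase, copied from the training log.
val_loss_frozen_end = 1125266.75    # epoch 50, base network frozen
val_loss_finetune_end = 114432.71   # epoch 50, all layers trainable

# Relative reduction achieved by unfreezing and fine-tuning.
improvement = 1 - val_loss_finetune_end / val_loss_frozen_end
print(f"fine-tuning reduced val_loss by {improvement:.1%}")  # → 89.8%
```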