
Finetune Alloy Property using APEX + DPGen2

©️ Copyright 2024 @ Authors
Author: zhuoyli@connect.hku.hk 📨
Date: 2024-08-12
License: This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Quick start: Click the Connect button at the top, select the apex-finetune:240812 image and any machine configuration to begin.

📖 Getting Started Guide
This document can be executed directly in a Bohrium Notebook. To begin, click the Connect button at the top of the interface, then select the apex-finetune:240812 image and choose your desired machine configuration.

This Bohrium notebook demonstrates how to fine-tune alloy properties via APEX on top of the DPGEN2 concurrent-learning framework. The dflow server and simulation resources for this case are hosted on Bohrium.

If you run into any problems with APEX, issues are welcome in the GitHub repository.


Background

Figure 1. Schematic diagram of alloy property finetuning via APEX + DPGEN2

The automatic alloy property finetuning function consists mainly of two parts: generation of initial configurations for the target property by APEX, and invocation of the external DPGEN2 concurrent-learning workflow.

The routine above can be run automatically with a single command via the APEX finetune sub-mode. Here we demonstrate a case that fine-tunes the prediction performance of the pre-trained DPA1-400w foundation model on the generalized stacking fault energy (GSFE) line of the basal plane of HCP Ti. The explored candidates are selected according to the fixed-levels force-deviation criterion; please refer to the DPGEN2 documentation for details of its settings.
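For intuition, the sketch below (in a new cell) shows how a fixed-levels force-deviation criterion classifies explored frames. The threshold names and values here are illustrative assumptions only; the actual settings live in dpgen2_input.json (see the DPGEN2 documentation).

[ ]
import numpy as np

# Illustrative thresholds only; the real values are set in dpgen2_input.json
LEVEL_F_LO, LEVEL_F_HI = 0.05, 0.50  # eV/Angstrom

def classify(max_force_devi: float) -> str:
    """Classify an explored frame by its max force deviation across models."""
    if max_force_devi < LEVEL_F_LO:
        return "accurate"   # models agree; no new DFT label needed
    if max_force_devi < LEVEL_F_HI:
        return "candidate"  # informative frame; selected for DFT labeling
    return "failed"         # models diverge too far; likely unphysical

# Dummy example: deviation taken as the max over atoms/components of the
# standard deviation of predicted forces across four model replicas
forces = np.random.rand(4, 12, 3)   # (models, atoms, xyz), random dummy data
print(classify(forces.std(axis=0).max()))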


Work Directory Preparation

Copy the necessary prepared files from the mounted dataset:

[1]
!mkdir finetune
[2]
cd finetune
/finetune
/opt/mamba/lib/python3.10/site-packages/IPython/core/magics/osm.py:417: UserWarning: using dhist requires you to install the `pickleshare` library.
  self.shell.db['dhist'] = compress_dhist(dhist)[-100:]
[6]
!cp -f /bohr/apex-finetune-demo-pig1/v2/finetune/*.* .
[7]
!cp -r /bohr/apex-finetune-demo-pig1/v2/finetune/confs /bohr/apex-finetune-demo-pig1/v2/finetune/abacus_input .
[10]
!tree -L 1
.
├── abacus_input
├── apex_global.json
├── apex_param.json
├── confs
├── dpgen2_input.json
├── model_000.ckpt.pt
├── model_001.ckpt.pt
├── model_002.ckpt.pt
├── model_003.ckpt.pt
└── train.json

2 directories, 8 files

APEX finetune involves three steps, in order:

  1. Submit an APEX relaxation workflow for initial structure energy minimization
  2. Generate configurations and the corresponding LAMMPS scripts for the target properties to be finetuned
  3. Invoke the external concurrent learning framework DPGEN2 to submit the finetuning workflow

The prepared work directory includes all the necessary files for both APEX and DPGEN2. You can add a new cell to inspect them via cat:

[11]
!cat apex_param.json
{
  "structures": [
    "confs/Ti/std-hcp"
  ],
  "interaction": {
    "type": "abacus",
    "incar": "abacus_input/INPUT.dcu",
    "potcar_prefix": "abacus_input/pp",
    "potcars": {
      "Ti": "Ti_pd_04_sp.UPF"
    }
  },
  "relaxation": {
    "cal_type": "relaxation",
    "cal_setting": {
      "relax_pos": true,
      "relax_shape": true,
      "relax_vol": true
    }
  },
  "properties": [
    {
         "type":             "vacancy",
         "skip":             false,
         "supercell":        [2, 2, 2]
    },
    {
	  "type":            "gamma",
	  "skip":            false,
	  "suffix":  "basal",
	  "hcp": {
        	"plane_miller":    [0,0,0,1],
        	"slip_direction":  [1,0,-1,0]
		},
          "supercell_size":   [1,1,6],
          "vacuum_size": 15,
	  "add_fix": ["true","true","false"],
          "n_steps":         20
    }
  ]
}
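As a side note, "n_steps": 20 discretizes the slip path into 21 evenly spaced displacements, matching task.000000 through task.000020 in the submission log further below. A minimal sketch of that sampling, assuming uniform spacing:

[ ]
# Uniform sampling of the gamma-line slip path (a sketch; APEX's internal
# task generation may differ in detail)
n_steps = 20
for i in range(n_steps + 1):
    frac = i / n_steps  # displacement fraction: 0.00, 0.05, ..., 1.00
    print(f"task.{i:06d}: displacement fraction {frac:.2f} along [1, 0, -1, 0]")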

Note that a DPGEN2 configuration template, dpgen2_input.json, together with all files it refers to, should be prepared in advance. The template provides the parameters defining the concurrent-learning workflow; based on it, APEX finetune will further modify the configurations and exploration stages after the APEX property structures have been generated.
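Conceptually, the patch that APEX applies to the template amounts to something like the sketch below. The stage entry is a placeholder (the real schema of exploration stages is defined by DPGEN2), and dpgen2_prepared.json is a hypothetical output name; APEX finetune performs this step for you.

[ ]
import json
from monty.serialization import loadfn

template = loadfn('dpgen2_input.json')

# Placeholder stage entry: APEX fills this in from the generated property
# structures; the actual keys must follow the DPGEN2 stage schema
new_stage = [{"_comment": "task group built from confs/Ti/std-hcp/gamma_basal"}]
template.setdefault("explore", {}).setdefault("stages", []).append(new_stage)

with open('dpgen2_prepared.json', 'w') as f:  # hypothetical file name
    json.dump(template, f, indent=4)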


Before submission: enter your Bohrium account information and a valid program_id with sufficient remaining balance into both config files by running the following Python code:

[16]
import os
import json
import getpass
from monty.serialization import loadfn

# Load both config files
apex_global = loadfn('apex_global.json')
dpgen_template = loadfn('dpgen2_input.json')

# Prompt for Bohrium credentials without echoing them in the notebook;
# the project ID is read from the environment variable set by Bohrium
email = getpass.getpass("Email: ")
password = getpass.getpass("Password: ")
program_id = int(os.environ.get("BOHRIUM_PROJECT_ID"))

# Fill in the APEX global config
apex_global['email'] = email
apex_global['password'] = password
apex_global['program_id'] = program_id

# Fill in the DPGEN2 template
dpgen_template['bohrium_config']['username'] = email
dpgen_template['bohrium_config']['password'] = password
dpgen_template['bohrium_config']['project_id'] = program_id

# Write both files back in place
with open('apex_global.json', 'w') as f1:
    json.dump(apex_global, f1, indent=4)
with open('dpgen2_input.json', 'w') as f2:
    json.dump(dpgen_template, f2, indent=4)
Email:
Password:
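As an optional sanity check, you can confirm that the IDs landed in both files without printing the password:

[ ]
from monty.serialization import loadfn

# Re-read both files and print only the non-sensitive fields
print(loadfn('apex_global.json')['program_id'])
print(loadfn('dpgen2_input.json')['bohrium_config']['project_id'])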

Submission

The usage of APEX finetune can be checked with:

[17]
!apex finetune -h
usage: apex finetune [-h] [-c [CONFIG]] [-t [TEMPLATE]] [-w WORK [WORK ...]]
                     [-m {dpgen2}] [-p]
                     parameter [parameter ...]

positional arguments:
  parameter             Json files to indicate calculation parameters

options:
  -h, --help            show this help message and exit
  -c [CONFIG], --config [CONFIG]
                        The json file of global config (default:
                        ./global.json)
  -t [TEMPLATE], --template [TEMPLATE]
                        The template input parameter file of external
                        concurrent learning framework to be extended (default:
                        None)
  -w WORK [WORK ...], --work WORK [WORK ...]
                        (Optional) Working directory (default: .)
  -m {dpgen2}, --method {dpgen2}
                        (Optional) Specify concurrent learning method:
                        (default: dpgen2) (default: None)
  -p, --prepare         Only prepare the concurrent learning framework input
                        parameter file without invoking (default: False)
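If you only want to generate the DPGEN2 input parameter file without actually submitting the workflow, the -p/--prepare flag shown in the help above does exactly that:

[ ]
!apex finetune apex_param.json -c apex_global.json -m dpgen2 -t dpgen2_input.json -p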

Finally, for this demonstration case, the APEX finetune function can be invoked with the following command:

NOTE:

  • The relaxation workflow (STEP 1) takes about 10 min to finish
  • Submitting the DPGEN2 workflow (STEP 2) takes about 15 min, due to the large training dataset in this case
  • The Argo workflow can be accessed via the link printed in the output
  • To resume from an unexpected termination, re-run the same APEX finetune command
[20]
!apex finetune apex_param.json -c apex_global.json -m dpgen2 -t dpgen2_input.json
---------------------------------------------------------------
░░░░░░█▐▓▓░████▄▄▄█▀▄▓▓▓▌█░░░░░░░░░░█▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀█░░░░░
░░░░░▄█▌▀▄▓▓▄▄▄▄▀▀▀▄▓▓▓▓▓▌█░░░░░░░░░█░░░░░░░░▓░░▓░░░░░░░░█░░░░░
░░░▄█▀▀▄▓█▓▓▓▓▓▓▓▓▓▓▓▓▀░▓▌█░░░░░░░░░█░░░▓░░░░░░░░░▄▄░▓░░░█░▄▄░░
░░█▀▄▓▓▓███▓▓▓███▓▓▓▄░░▄▓▐██░░░▄▀▀▄▄█░░░░░░░▓░░░░█░░▀▄▄▄▄▄▀░░█░
░█▌▓▓▓▀▀▓▓▓▓███▓▓▓▓▓▓▓▄▀▓▓▐█░░░█░░░░█░░░░░░░░░░░░█░░░░░░░░░░░█░
▐█▐██▐░▄▓▓▓▓▓▀▄░▀▓▓▓▓▓▓▓▓▓▌█░░░░▀▀▄▄█░░░░░▓░░░▓░█░░░█▒░░░░█▒░░█
█▌███▓▓▓▓▓▓▓▓▐░░▄▓▓███▓▓▓▄▀▐█░░░░░░░█░░▓░░░░▓░░░█░░░░░░░▀░░░░░█
█▐█▓▀░░▀▓▓▓▓▓▓▓▓▓██████▓▓▓▓▐█░░░░░▄▄█░░░░▓░░░░░░░█░░█▄▄█▄▄█░░█░
▌▓▄▌▀░▀░▐▀█▄▓▓██████████▓▓▓▌██░░░█░░░█▄▄▄▄▄▄▄▄▄▄█░█▄▄▄▄▄▄▄▄▄█░░
▌▓▓▓▄▄▀▀▓▓▓▀▓▓▓▓▓▓▓▓█▓█▓█▓▓▌██░░░█▄▄█░░█▄▄█░░░░░░█▄▄█░░█▄▄█░░░░
█▐▓▓▓▓▓▓▄▄▄▓▓▓▓▓▓█▓█▓█▓█▓▓▓▐█░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
---------------------------------------------------------------
        AAAAA         PPPPPPPPP     EEEEEEEEEE  XXX       XXX
       AAA AAA        PPP     PPP   EEE           XXX   XXX
      AAA   AAA       PPP     PPP   EEE            XXX XXX
     AAAAAAAAAAA      PPPPPPPPP     EEEEEEEEE       XXXXX
    AAA       AAA     PPP           EEE            XXX XXX
   AAA         AAA    PPP           EEE           XXX   XXX
  AAA           AAA   PPP           EEEEEEEEEE  XXX       XXX
---------------------------------------------------------------
==>> Alloy Property EXplorer using simulations (v1.2.0-dpa2)
Checking input files...
-------Finetune Mode-------
===>>> STEP 1: Submit Relaxation APEX Workflow
Running APEX calculation via abacus
Submitting relax workflow...
INFO:root:Working on: /finetune
INFO:root:Temporary upload directory:/tmp/tmp7m27lx4r
Workflow has been submitted (ID: finetune-relax-5f67r, UID: 4f54dad1-6bb9-467b-bf80-257fb7d8a85d)
Workflow link: https://workflows.deepmodeling.com/workflows/argo/finetune-relax-5f67r
Waiting for relaxation result...
Relaxation finished (ID: finetune-relax-5f67r, UID: 4f54dad1-6bb9-467b-bf80-257fb7d8a85d)
Retrieving completed tasks to local...
100%|█████████████████████████████████████████████| 1/1 [00:00<00:00, 20.12it/s]
Archiving results of workflow (ID: finetune-relax-5f67r) into local...
=> Begin archiving /finetune
INFO:root:extract results from /finetune/confs/Ti/std-hcp/relaxation/relax_task
===>>> STEP 2: Generate Target Property Configurations to Finetune
Making property configurations...
Making property tasks locally...
INFO:root:gen vacancy with supercell [2, 2, 2]
/opt/mamba/lib/python3.10/site-packages/spglib/spglib.py:115: DeprecationWarning: dict interface (SpglibDataset['wyckoffs']) is deprecated.Use attribute interface ({self.__class__.__name__}.{key}) instead
  warnings.warn(
/opt/mamba/lib/python3.10/site-packages/spglib/spglib.py:115: DeprecationWarning: dict interface (SpglibDataset['equivalent_atoms']) is deprecated.Use attribute interface ({self.__class__.__name__}.{key}) instead
  warnings.warn(
INFO:root:# 0 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000000, with 12 atoms
INFO:root:# 1 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000001, with 12 atoms
INFO:root:# 2 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000002, with 12 atoms
INFO:root:# 3 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000003, with 12 atoms
INFO:root:# 4 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000004, with 12 atoms
INFO:root:# 5 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000005, with 12 atoms
INFO:root:# 6 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000006, with 12 atoms
INFO:root:# 7 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000007, with 12 atoms
INFO:root:# 8 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000008, with 12 atoms
INFO:root:# 9 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000009, with 12 atoms
INFO:root:# 10 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000010, with 12 atoms
INFO:root:# 11 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000011, with 12 atoms
INFO:root:# 12 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000012, with 12 atoms
INFO:root:# 13 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000013, with 12 atoms
INFO:root:# 14 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000014, with 12 atoms
INFO:root:# 15 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000015, with 12 atoms
INFO:root:# 16 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000016, with 12 atoms
INFO:root:# 17 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000017, with 12 atoms
INFO:root:# 18 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000018, with 12 atoms
INFO:root:# 19 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000019, with 12 atoms
INFO:root:# 20 generate /finetune/confs/Ti/std-hcp/gamma_basal/task.000020, with 12 atoms
===>>> STEP 3: Invoke External Concurrent Learning Framework
Writing input files...
Invoking external concurrent learning framework: dpgen2...
valid_data has been uploaded to oss://13756/31173/store/upload/c674858b-b475-4903-a5b7-8ea32f863ef0/tmpd0qc0d82.tgz
multi_init_data has been uploaded to oss://13756/31173/store/upload/64d4a0ad-e53a-4336-b059-8ebda8f34816/tmpcmyd4g_y.tgz
init_models has been uploaded to oss://13756/31173/store/upload/1d3594e8-9028-4f59-b952-1430f83baf34/tmpcyexvqfg.tgz
Workflow has been submitted (ID: finetune-basal-095-zw9hs, UID: 11e8a9fe-4300-4957-8f09-52ff6270ed09)
Workflow link: https://workflows.deepmodeling.com/workflows/argo/finetune-basal-095-zw9hs
Completed!

Result

Figure 2. Benchmark plot of the basal-plane gamma line

As can be seen, compared with the initial dpa1-400w model, all of the finetuned models predict both the unstable stacking fault (unstable SF) and the stable stacking fault (stable SF) closer to the DFT evaluation.
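To reproduce such a comparison locally, a minimal plotting sketch follows. The file names and JSON layout (displacement fraction vs. fault energy per model) are assumptions for illustration; the actual layout of APEX result files is documented in the APEX repository.

[ ]
import json
import matplotlib.pyplot as plt

# Hypothetical inputs: one JSON file per model, each shaped like
# {"displacement": [...], "energy_mJ_m2": [...]}
labels = ["DFT", "dpa1-400w", "finetuned"]  # assumed file/label names
for label in labels:
    with open(f"gsfe_{label}.json") as f:
        data = json.load(f)
    plt.plot(data["displacement"], data["energy_mJ_m2"], marker="o", label=label)

plt.xlabel("Displacement along [1, 0, -1, 0] (fraction)")
plt.ylabel("Stacking fault energy (mJ/m^2)")
plt.title("Basal-plane GSFE line of HCP Ti")
plt.legend()
plt.show()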
