Dong Xiaorui - Day 11 - 2403 Principles of Computational Materials Science
Course: 2403 Principles of Computational Materials Science
Author: Dangalf
Published on 2024-03-19
Recommended image: Basic Image: bohrium-notebook:2023-04-07
Recommended machine type: c2_m4_cpu
1. Introduction to Optimization Algorithms
2. Optimization Techniques Overview
3. Detailed Optimization Algorithms
4. One-Dimensional Search Methods
5. Application to Computational Materials Science
6.1 Steepest Descent Method Implementation
6.2 Conjugate Gradient Method Implementation
6.3 Complete Python Code for Energy Minimization

1. Introduction to Optimization Algorithms

Optimization algorithms are mathematical tools designed for finding the maximum or minimum of functions. These tools are crucial in various domains, including computational materials science, where they are used to predict material properties, optimize structures, and simulate molecular dynamics efficiently.


2. Optimization Techniques Overview

  • Local Optimization: Targets finding a local maximum or minimum. Techniques include gradient descent, which repeatedly steps in the direction of steepest descent, i.e., along the negative gradient.
  • Global Optimization: Seeks the absolute maximum or minimum over the entire function domain. Methods such as simulated annealing and genetic algorithms are notable, as they allow exploration beyond local optima (a minimal simulated-annealing sketch follows this list).
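As a hedged, optional illustration of one of the global methods named above (not part of the original notebook), a bare-bones simulated annealing loop for a one-dimensional function might look like the sketch below; the cooling schedule, step size, and multi-well test function are arbitrary choices.

import numpy as np

def simulated_annealing(f, x0, T0=1.0, cooling=0.995, n_steps=5000, step=0.5, seed=0):
    """Bare-bones simulated annealing for a scalar function of one variable."""
    rng = np.random.default_rng(seed)
    x, fx, T = x0, f(x0), T0
    best_x, best_f = x, fx
    for _ in range(n_steps):
        x_new = x + rng.normal(scale=step)          # random trial move
        f_new = f(x_new)
        # Accept downhill moves always, uphill moves with a Boltzmann-like probability
        if f_new < fx or rng.random() < np.exp(-(f_new - fx) / T):
            x, fx = x_new, f_new
            if fx < best_f:
                best_x, best_f = x, fx
        T *= cooling                                 # cool down gradually
    return best_x, best_f

# Multi-well test function: global minimum near x ~ -1.5, a shallower local minimum near x ~ 1.35
f = lambda x: x**4 - 4 * x**2 + x
print(simulated_annealing(f, x0=2.0))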

3. Detailed Optimization Algorithms

  • Gradient Descent / Steepest Descent Method: This method iteratively moves toward the minimum of a function by taking steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point.

    • Formula:

      $x_{k+1} = x_k - \alpha \nabla f(x_k)$

      where:
      • $x_{k+1}$ is the position vector after the update,
      • $x_k$ is the current position vector,
      • $\alpha$ is the learning rate (step size),
      • $\nabla f(x_k)$ is the gradient of the function at $x_k$.
  • Conjugate Gradient Method: Particularly effective for large, sparse systems of linear equations with a symmetric positive-definite matrix. It minimizes the associated quadratic form using only matrix-vector products, so the matrix never needs to be formed or stored explicitly.

    • Key Concept: Enhances efficiency by ensuring that search directions are conjugate to each other, reducing the redundancy in search directions.
  • Newton's Method: Uses the second-order Taylor series expansion to find the roots of the first derivative (the zeros of the gradient), aiming for critical points where the function slope is zero.

    • Formula (root finding on the first derivative, one-dimensional case):

      $x_{k+1} = x_k - \dfrac{f'(x_k)}{f''(x_k)}$

    For optimization, specifically finding the minimum or maximum of a multivariate function, the formula adjusts to use the gradient and Hessian:

      $x_{k+1} = x_k - H_f(x_k)^{-1} \nabla f(x_k)$

    where:

    • $H_f(x_k)$ is the Hessian matrix of second-order partial derivatives of $f$ at $x_k$.

    A minimal one-dimensional sketch of this iteration follows this list.
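Below is a minimal, hedged sketch of the one-dimensional Newton iteration (added here for illustration; the quartic test function and the names newton_minimize, df, d2f are not from the original notebook).

def newton_minimize(df, d2f, x0, tol=1e-8, max_iter=50):
    """Minimal Newton iteration on the first derivative (one-dimensional sketch)."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)       # Newton step: f'(x) / f''(x)
        x -= step
        if abs(step) < tol:         # stop when the update is negligible
            break
    return x

# Illustrative example: f(x) = x**4 - 3*x**3 + 2, so f'(x) = 4x^3 - 9x^2 and f''(x) = 12x^2 - 18x
df  = lambda x: 4 * x**3 - 9 * x**2
d2f = lambda x: 12 * x**2 - 18 * x
print(newton_minimize(df, d2f, x0=3.0))   # converges to the local minimum at x = 9/4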

4. One-Dimensional Search Methods

These methods are pivotal when optimizing along a single direction is required, often used within broader optimization algorithms to determine optimal step sizes.

  • Bracketing and Golden Section Search: Used to find a minimum within a bounded interval, relying on the function being unimodal there, i.e., it decreases to a single minimum and then increases (a sketch follows this list).
  • Armijo Rule: An inexact line search method that determines a step size meeting specific sufficient decrease conditions, balancing computational cost and progress toward the minimum.
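Below is a minimal sketch of golden section search (added for illustration; the bracketing interval and the test function (x - 2)^2 are arbitrary choices, not from the original notebook).

import math

def golden_section_search(f, a, b, tol=1e-6):
    """Shrink the bracket [a, b] around the minimum of a unimodal function f."""
    invphi = (math.sqrt(5) - 1) / 2           # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):                        # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                  # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_search(lambda x: (x - 2)**2, 0.0, 5.0))   # ~2.0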

5. Application to Computational Materials Science

  • Energy Minimization in Molecular Systems: Fundamental for identifying stable molecular configurations. Optimization algorithms are employed to minimize the potential energy surface, which is a high-dimensional function of all atomic positions.
  • Force Field Methods: Utilize analytical functions to approximate the interactions between atoms and molecules. The Lennard-Jones potential is a classic example, offering simplicity and the ability to capture key features of molecular interactions.
    • Lennard-Jones Potential Formula:

      $V(r) = 4\varepsilon \left[ \left( \dfrac{\sigma}{r} \right)^{12} - \left( \dfrac{\sigma}{r} \right)^{6} \right]$

      where:

      • $V(r)$ is the potential energy as a function of distance $r$,
      • $\varepsilon$ is the depth of the potential well (a measure of how strongly the two particles attract each other),
      • $\sigma$ is the finite distance at which the interparticle potential is zero (often considered the diameter of the particles),
      • $r$ is the distance between the centers of the two particles.

      A brief numerical check of these properties follows this list.
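As a brief added check (not part of the original notebook), the Lennard-Jones potential has its minimum at $r = 2^{1/6}\sigma$ with depth $-\varepsilon$; a few lines of Python confirm this numerically in reduced units ($\varepsilon = \sigma = 1$).

import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    return 4 * eps * ((sigma / r)**12 - (sigma / r)**6)

r = np.linspace(0.9, 3.0, 10_000)           # sample distances in reduced units
r_min = r[np.argmin(lj(r))]                 # numerical location of the minimum
print(r_min, 2**(1/6))                      # both ~1.1225 for sigma = 1
print(lj(r_min))                            # ~ -1.0, i.e. -eps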

6.1 Steepest Descent Method Implementation

import numpy as np

def steepest_descent(f, grad_f, x0, tol=1e-5, max_iter=1000):
    """
    Implements the steepest descent optimization algorithm.

    Parameters:
    - f: The function to be minimized (needed by the line search).
    - grad_f: The gradient of the function to be minimized.
    - x0: Initial guess for the minimum.
    - tol: Tolerance for the stopping criterion.
    - max_iter: Maximum number of iterations.

    Returns:
    - x: The estimated position of the minimum.
    """
    x = np.asarray(x0, dtype=float).copy()
    for i in range(max_iter):
        gradient = grad_f(x)
        if np.linalg.norm(gradient) < tol:
            break
        alpha = line_search(f, x, gradient)
        x = x - alpha * gradient
    return x

def line_search(f, x, gradient, alpha_init=1, beta=0.5, sigma=0.1):
    """
    Implements backtracking line search to find the step size.

    Parameters are similar to steepest_descent with additions:
    - alpha_init: Initial step size.
    - beta: Factor to decrease alpha by during each iteration.
    - sigma: Sufficient decrease condition factor.

    Returns:
    - alpha: Step size satisfying the sufficient decrease condition.
    """
    alpha = alpha_init
    while f(x - alpha * gradient) > f(x) - sigma * alpha * np.dot(gradient, gradient):
        alpha *= beta
    return alpha
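A small usage example (added here; the quadratic test function is illustrative only), assuming the definitions above:

# Minimize f(x, y) = (x - 1)**2 + 2*(y + 3)**2, whose minimum is at (1, -3).
f      = lambda v: (v[0] - 1)**2 + 2 * (v[1] + 3)**2
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])

print(steepest_descent(f, grad_f, np.array([0.0, 0.0])))   # ~ [1., -3.]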

6.2 Conjugate Gradient Method Implementation

def conjugate_gradient(A, b, x0, tol=1e-5, max_iter=1000):
    """
    Solves the system Ax = b using the Conjugate Gradient method.

    Parameters:
    - A: Square, symmetric, positive-definite matrix.
    - b: Right-hand side vector.
    - x0: Initial guess of the solution.
    - tol: Tolerance for the stopping criterion.
    - max_iter: Maximum number of iterations.

    Returns:
    - x: The solution vector.
    """
    x = np.asarray(x0, dtype=float).copy()   # work on a float copy so x0 is not modified
    r = b - np.dot(A, x)
    p = r.copy()
    rsold = np.dot(r, r)

    for i in range(max_iter):
        Ap = np.dot(A, p)
        alpha = rsold / np.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rsnew = np.dot(r, r)
        if np.sqrt(rsnew) < tol:
            break
        p = r + (rsnew / rsold) * p
        rsold = rsnew
    return x
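A brief usage check (added here; the 2x2 symmetric positive-definite system is illustrative only):

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = conjugate_gradient(A, b, np.zeros(2))
print(x)                       # ~ [0.0909, 0.6364]
print(np.allclose(A @ x, b))   # True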

6.3 Complete Python Code for Energy Minimization

This script completes the energy minimization task by:

  • Defining the LJ potential and its gradient (the force).
  • Generating an initial random configuration of points (representing atoms).
  • Computing the total energy of the system and its gradient with respect to atomic positions.
  • Implementing the steepest descent optimization method to minimize the energy by adjusting the atomic positions based on the calculated forces (gradients).
import numpy as np

# Define LJ potential and its derivative (force)
def LJ_reduced_pot(r):
    return 4 * ((1.0 / r)**12 - (1.0 / r)**6)

def LJ_reduced_force(r):
    # Magnitude of the force, F(r) = -dV/dr
    return 24 * (2 * (1 / r)**12 - (1 / r)**6) / r

# Generate random configuration of points
def genPts(N):
    np.random.seed(10)                      # Seed for reproducibility
    pts = np.zeros(N * 2)                   # Initialize points array (x1, y1, x2, y2, ...)
    for i in range(N * 2):
        pts[i] = np.random.uniform(-2, 2)   # Generate coordinates within [-2, 2]
    return pts

# Define the total energy function
def E(x):
    N = int(len(x) / 2)
    eng_pot = np.zeros(N)
    R = x.reshape(N, 2)                     # Reshape flat array to Nx2 for coordinates
    for i in range(N - 1):
        for j in range(i + 1, N):
            Rij = R[j] - R[i]
            r = np.linalg.norm(Rij)
            eng_pot[i] += 0.5 * LJ_reduced_pot(r)   # split each pair energy between both atoms
            eng_pot[j] += 0.5 * LJ_reduced_pot(r)
    return np.sum(eng_pot)

# Define the gradient of the energy function
def dEdR(x):
    N = int(len(x) / 2)
    grad = np.zeros_like(x)
    R = x.reshape(N, 2)
    for i in range(N - 1):
        for j in range(i + 1, N):
            Rij = R[j] - R[i]
            r = np.linalg.norm(Rij)
            force_mag = LJ_reduced_force(r)         # F(r) = -dV/dr
            # dV/dR_i = +F(r) * Rij/r and dV/dR_j = -F(r) * Rij/r
            Fij = force_mag * (Rij / r)
            grad[i*2:(i+1)*2] += Fij                # gradient contribution on atom i
            grad[j*2:(j+1)*2] -= Fij                # gradient contribution on atom j
    return grad

# Optimization method: Steepest Descent
def steepest_descent(E, dEdR, x0, alpha=0.01, tol=1e-5, max_iter=1000):
    x = x0.copy()                           # copy so the initial configuration is preserved
    for _ in range(max_iter):
        grad = dEdR(x)
        if np.linalg.norm(grad) < tol:
            break
        x -= alpha * grad                   # step along the negative gradient
    return x

# Generate initial points and minimize energy
N = 5                                       # Number of atoms
x0 = genPts(N)
xmin = steepest_descent(E, dEdR, x0)

print("Initial Configuration:", x0.reshape(N, 2))
print("Optimized Configuration:", xmin.reshape(N, 2))