## The structure and dynamics of materials using machine learning

### Dissertation

### for the attainment of the doctoral degree in natural sciences

### Dr. rer. nat.

### of the

### Naturwissenschaftliche Fakultät II

### Chemie, Physik und Mathematik

### of the Martin-Luther-Universität

### Halle-Wittenberg

### submitted by

### Mr. Gonçalves Marques, Mário Rui

### born on 24.10.1992 in Coimbra, Portugal

### Supervisor: Prof. Dr. Miguel A. L. Marques

### 1st reviewer: Prof. Dr. Miguel A. L. Marques

### 2nd reviewer: Prof. Dr. Wolfgang Paul

### 3rd reviewer: Prof. Dr. Patrick Rinke

### Date of submission: 17 December 2019

### Date of public defense: 5 May 2020

## Contents

Introduction

1 Many body problem and density functional theory
1.1 Many body problem
1.2 Density functional theory
1.2.1 Hohenberg-Kohn theorems
1.2.2 Kohn-Sham scheme
1.2.3 Jacob's ladder

2 Structural prediction and molecular dynamics
2.1 Structure prediction
2.1.1 Minima hopping method
2.1.2 Genetic algorithms
2.2 Molecular dynamics
2.2.1 Overview of molecular dynamics simulations
2.2.2 Berendsen thermostat and barostat

3 Machine learning in material science
3.1 Overview of machine learning
3.2 Data
3.3 Features
3.4 Algorithms
3.4.1 Neural networks
3.5 Recent applications
3.5.1 Applications to solids
3.5.2 Force-fields

4 Neural networks force fields
4.1 Behler and Parrinello neural networks in the ænet package
4.2 Stress tensor
4.3 Training neural networks for forces and stresses
4.3.1 Genetic algorithms
4.3.2 Back-propagation algorithm
4.4 Example force-fields
4.4.1 Data
4.4.2 Details on the features, algorithm and training
4.4.3 Validation
4.5 Applications
4.5.1 Phonon dispersion
4.5.2 Molecular dynamics and melting temperature
4.5.3 Structural prediction and defects
4.6 Interpretability of the neural networks

5 Copper based materials and cluster expansions
5.1 Cluster expansions
5.2 CZTS
5.2.1 Genetic algorithms and stability
5.2.2 Kesterite or stannite
5.3 Cuprous iodide
5.3.1 Copper vacancy complexes and stability
5.3.2 Phonon dispersion
5.3.3 Density of states

Conclusions and outlook

## Introduction

All animals are equal, but some animals are more equal than others.

George Orwell, *Animal Farm*

Equal... The search for new materials has been a quest present in every age since the start of our recorded history. Nowadays, we seek materials for various reasons. Here are a few examples. In terms of energy production, the search for renewable, efficient, and sustainable sources of energy led to the development of photovoltaics. Yet, in 2018, less than 4% of the total world energy consumption came from this source. Several factors contribute to this low percentage; however, from a technical point of view, the efficiency of photovoltaic cells can still be increased by improving the design of the devices and by applying materials that possess optimal properties for the construction of these cells. An example of the latter is a semiconductor material with a direct band gap that absorbs in the visible range of the spectrum and possesses high conductivity.

In terms of everyday gadgets, integrated circuits, and other electronic devices, the industry, and even consumers, expect developments at the pace of Moore's law. This means that every year we expect a decrease in the size of transistors. Additionally, for many years it was possible to increase the switching rate of the transistors, which translates into higher computer (or processor) performance in every new generation of devices. However, this is no longer possible, since some integrated circuits have already reached fundamental thermal limits due to the ever increasing power consumption of these circuits. This promoted the research of new transistor designs with different materials and new architectures over the past 10 years.

Finally, we live in a digital universe characterized by an insatiable demand for information and by the creation of 2.5 quintillion bytes of data each day. Moreover, the pace at which this information is created has been increasing every year due to the improved availability of electronic devices: more people are getting access to the internet, and the number of devices per person is also increasing. Furthermore, the pace is also increasing with the development of new technologies, such as the internet of things, virtual reality, 5G, and video surveillance, among others. This led to the creation of huge data centers for the purpose of storing and processing this information. They require complex cooling systems, and when the volume of information increases, so does the power consumption. This motivated researchers to look for more efficient devices, which in many cases involve the application of different materials.

One can see that the demand is high for materials exhibiting properties desired for the construction of several electronic devices. Although several materials already display some of these properties, in many cases their performance, relative abundance, price, and toxicity limit their large scale application. Remember that some materials are more equal than others. In other cases, the search for new materials just comes from the desire to build more efficient devices.

Here we are going to provide a humble contribution towards the solution of this problem. In the past, the discovery of new materials came only from their experimental synthesis and characterization. This type of research was slow, since experiments required expensive resources and were time consuming. Recently, the cost and time of materials design were greatly diminished due to the combination of experiments with computer simulations, in particular computational structural prediction methods [1–4]. This revolution was possible due to the increase of computer resources and the development of electronic structure methods, together with their efficient implementation in computer packages. All of these allowed for innumerable high-throughput studies and for the creation of several databases containing information on known crystal structures and chemical substances. To give some examples, the Inorganic Crystal Structure Database contains information on 210,229 crystal structures [5], the Cambridge Structural Database on 1 million small-molecule organic and metal-organic crystal structures [6], and the Chemical Abstract Service registry on 157 million unique organic and inorganic chemical substances [7]. If we remove duplicates and alloys from the Inorganic Crystal Structure Database, we realize that, nowadays, we have information on around 50,000 different inorganic materials. This number probably includes most of the possible elemental substances and binary compounds; however, it lacks many complex compounds, such as quaternary compounds.

The creation of all of these data is paving the way to yet another revolution in the field of material science: that of machine learning (ML). ML techniques take advantage of large amounts of data to find hidden patterns and a relation between input data and a certain target property. The application of these techniques to material science problems is recent and lacks the complexity exhibited in other fields. However, it has already been shown that ML methods further decrease the time necessary to find new stable phases and allow for a more efficient exploration of all estimated possible materials, which number around 10^100 [8].

Often, structural prediction simulations require the evaluation of the total energies (forces and stresses) of millions of phases in order to search through the potential energy surface of a system. Among these, only a few points are indeed interesting: those corresponding to the minima of the energy surface, which might correspond to the ground-state structures. Many methods were developed to study the intricacies of the potential energy surface of a certain system, such as the minima hopping method [9, 10], random sampling [11, 12], and evolutionary algorithms [13, 14], to name just a few. Usually these methods use density functional theory [15–17] for the energy evaluations. However, studies with such methods are limited in the number of atoms included in the unit cell (usually no more than 10), since the number of minima, and therefore the number of calculations required, grows exponentially with the number of atoms.

Similarly, molecular dynamics [18] simulations, used to calculate different properties of these materials, also require millions of evaluations of the total energy of a system. Not only do they entail systems with millions of atoms, but they also require long simulation times in order for the systems to reach equilibrium and to behave according to the ensembles of statistical mechanics. For these reasons, they normally employ classical force-fields for the energy evaluations (which come with a loss in accuracy), although we can find some smaller examples using density functional theory.

The aim of this thesis is to develop strategies to counter these obstacles using machine learning techniques. In particular, we resort to neural networks, genetic algorithms, and cluster expansions to speed up first-principles studies and to construct efficient, yet accurate, methods for energy evaluation. Although technically not part of the machine learning repertoire of tools, genetic algorithms and cluster expansions share some of its characteristics. Actually, a good introductory book on machine learning, Ref. [19], includes genetic algorithms as a type of machine learning, since biological evolution can be seen as a learning process. Additionally, cluster expansions consist of a least-squares fit of the total energies of a system based on a correlation matrix, which resembles several algorithms of supervised learning, the most common type of machine learning. Neural networks are, without a doubt, the most famous and the most successful of the machine learning algorithms.
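As a toy illustration of this similarity, the least-squares fit behind a cluster expansion can be sketched in a few lines; all numbers below (the correlation matrix and the interaction values) are invented purely for illustration and are not taken from this work:

```python
import numpy as np

# Schematic of the least-squares fit behind a cluster expansion: total energies
# of a set of configurations are written as rows of a correlation matrix times
# unknown effective cluster interactions (ECIs).
rng = np.random.default_rng(0)

n_configs, n_clusters = 20, 4
X = rng.choice([-1.0, 1.0], size=(n_configs, n_clusters))  # correlation matrix
eci_true = np.array([-2.0, 0.5, 0.1, -0.3])                # invented "true" ECIs
E = X @ eci_true + 0.01 * rng.standard_normal(n_configs)   # noisy DFT-like energies

# Supervised-learning-style fit: solve min ||X @ eci - E||^2
eci_fit, *_ = np.linalg.lstsq(X, E, rcond=None)
print(eci_fit)  # close to eci_true
```

Once the interactions are fitted, the energy of any new configuration follows from a single matrix-vector product, which is what makes cluster expansions so much cheaper than the first-principles calculations they are trained on.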

This thesis is organized as follows. In chapter 1 we start our discussion from its foundation: the many body problem [20] and one of its most successful solutions, density functional theory, an efficient, accurate theory that relies on a hypothesis and an approximation. Afterwards, in chapter 2, we discuss the problems we intend to solve with density functional theory, namely structural prediction and molecular dynamics. These are the basis for the calculation of many interesting and important properties of materials. The obstacles that arise from these concern both simulation time and size, as well as the time required for a single calculation. An attempt to surpass them, by finding methods that are both accurate and efficient, revolves around machine learning [19, 21, 22], which we promptly discuss in chapter 3. In particular, we discuss applications of machine learning in material science [23]. This leads to the core of this work: neural network force-fields [24]. We discuss them from their inception to the most recent research and then, in chapter 4, we present our methodology to construct neural network force-fields capable of describing the potential energy surface (PES) of solids using relatively small, unbiased training sets. To apply these force-fields in molecular dynamics and structural prediction simulations, they have to provide accurate forces and stresses. Unfortunately, high accuracy in the energy is not a sufficient condition to assure an appropriate accuracy for these derivatives of the energy. This led us to develop and implement methodologies to optimize the neural networks with respect to energies, forces, and stresses. Additionally, we use our results to show the challenges, limitations, and potential of such force-fields, and we discuss their interpretability.
Moreover, our methodology permitted the study of large complex systems, such as the formation of defects in Si and the melting of metals, with an accuracy comparable to density functional theory. We take a different route in chapter 5, where we discuss and use cluster expansions to tackle the structural prediction of copper-based materials. In particular, we use genetic algorithms to identify secondary phases of Cu2ZnSn(S,Se)4 (CZTS), which usually hinder the efficiency of solar cells made out of this photovoltaic material. Moreover, we study the transition between the kesterite and the stannite phases in Cu2Zn1−xFexSnSe4 compounds. Our last study involved the formation of complexes of defects in CuI [25], a transparent conducting semiconductor (TCS). Although the role played by Cu vacancies in the p-type transparent conductivity of CuI has been properly acknowledged, the way they arrange themselves, as well as their optimal and maximum concentrations, remained unclear. Our objective was to provide an answer to these unresolved questions. We note that these three studies are usually too taxing to treat with density functional theory alone.

## Chapter 1

## Many body problem and density functional theory

Who in the world am I? Ah, that's the great puzzle.

Lewis Carroll, *Alice in Wonderland*

Puzzle... The many-body problem [20, 26] represents perfectly how both complicated and unrewarding physics can be. For simplicity, think of an atom, or even a molecule, placed in any region of space and time, subject to whichever field, or fields, and basically try to name all the interactions that maintain its structure and keep the electrons bound to the nuclei, or not, if the field is strong enough... Now, neglect most of them. Keep only the interactions between electrons and nuclei, and their self-interactions. Oh, and consider that electrons and nuclei can move, but not very fast. One should try to avoid all those relativistic shenanigans as much as possible. In fact, for most of it, think of the movement of the electrons as slow, but instantaneous when compared to the movement of the nuclei. Moreover, remove most of the intricacies related to spin, especially its complicated interactions with orbitals or collinearity. At most, consider that electrons can have spin up or down. Ah, what remains is the puzzle usually referred to as the many-body problem. Now, try to solve it! The peculiarity of this problem is that, even after so many simplifications, it remains rather unsolvable... Strangely, this is what motivates physicists the most. The joy of the challenge that dwells within every phenomenon exhibited by matter. Not only that, but the elegance of nature, hidden behind the mysteries of the universe, and unveiled by that undoubtedly extraordinary approach. It is quite remarkable.¹

In this chapter we introduce the many body problem and the Born-Oppenheimer approximation. Furthermore, we explain how to obtain the most important properties in electronic structure theory. Afterwards, we present the most reasonable theory to solve the many body problem: density functional theory (DFT). We start with its foundation, the Hohenberg-Kohn theorems, and the hypothesis that leads to its most commonly used form: Kohn-Sham DFT. We finish the chapter with Jacob's ladder and the description of the most used approximations to the exchange and correlation energy functional in material science.

¹ Adapted from Ref. [27]

### 1.1 Many body problem

The description of matter and its properties using theoretical methods starts with the interaction between N electrons and M nuclei, which can be cast as the Hamiltonian

$$
\hat{H} = -\frac{\hbar^2}{2m_e}\sum_i \nabla_i^2 - \sum_{i,I}\frac{Z_I e^2}{|\mathbf{r}_i - \mathbf{R}_I|} + \frac{1}{2}\sum_{i\neq j}\frac{e^2}{|\mathbf{r}_i - \mathbf{r}_j|} - \sum_I \frac{\hbar^2}{2M_I}\nabla_I^2 + \frac{1}{2}\sum_{I\neq J}\frac{Z_I Z_J e^2}{|\mathbf{R}_I - \mathbf{R}_J|},
\tag{1.1}
$$

where the lowercase subscripts denote electrons with mass $m_e$ and charge $e$ at positions $\mathbf{r}_i$, and the uppercase subscripts denote nuclei with mass $M_I$ and charge $Z_I$ at positions $\mathbf{R}_I$. Throughout this thesis we adopt Hartree atomic units ($e = m_e = \hbar = 4\pi\epsilon_0 = 1$) to simplify the equations. In these units, we can describe the terms of the aforementioned Hamiltonian as follows. The first term represents the kinetic energy operator for the electrons,

$$
\hat{T} = -\frac{1}{2}\sum_{i=1}^{N} \nabla_i^2,
\tag{1.2}
$$

while the second term, the potential operator, represents the interaction between electrons and nuclei,

$$
\hat{V} = \sum_{i,I=1}^{N,M} v(\mathbf{r}_i, \mathbf{R}_I) = -\sum_{i,I=1}^{N,M} \frac{Z_I}{|\mathbf{r}_i - \mathbf{R}_I|},
\tag{1.3}
$$

and the third is the electron-electron interaction operator,

$$
\hat{W} = \frac{1}{2}\sum_{i\neq j}^{N} w(\mathbf{r}_i, \mathbf{r}_j) = \frac{1}{2}\sum_{i\neq j}^{N} \frac{1}{|\mathbf{r}_i - \mathbf{r}_j|}.
\tag{1.4}
$$

The last two terms are the nuclear kinetic energy operator and the nucleus-nucleus interaction operator,

$$
\hat{T}_N = -\frac{1}{2}\sum_{I=1}^{M} \frac{\nabla_I^2}{M_I}, \qquad
\hat{W}_N = \frac{1}{2}\sum_{I\neq J}^{M} \frac{Z_I Z_J}{|\mathbf{R}_I - \mathbf{R}_J|}.
\tag{1.5}
$$

As the Hamiltonian in eq. (1.1) is time-independent, the eigenstates of the fundamental equation governing a non-relativistic quantum system, i.e., the time-dependent Schrödinger equation, consist of a phase modulation factor ($e^{-iEt}$) times the solutions of the time-independent Schrödinger equation

$$
\hat{H}(\{\mathbf{r}\}, \{\mathbf{R}\})\,\Psi(\{\mathbf{r}\}, \{\mathbf{R}\}) = E\,\Psi(\{\mathbf{r}\}, \{\mathbf{R}\}),
\tag{1.6}
$$

where the abbreviation $\{\mathbf{r}\} = (\mathbf{r}_1, \ldots, \mathbf{r}_N)$ was used. The many-body wavefunction is a function of $3(N+M)$ spatial coordinates and $N+M$ spin coordinates. Unfortunately, the solution of this equation is usually an impossible task. However, close inspection of the many-body Hamiltonian provides a clue towards the simplification of the many-body problem. The nuclear kinetic term in eq. (1.1) provides a small contribution to the energy, as the inverse mass of the nuclei $1/M_I$ is rather "small". As the electrons' mass is much smaller than the mass of the nuclei, when the nuclei move, the electrons appear to adjust their positions instantaneously. Therefore, the electrons move adiabatically with the nuclei. This is the reasoning behind the adiabatic or Born-Oppenheimer approximation [20].

The full solution for the coupled system of electrons and nuclei, $\Psi(\{\mathbf{r}\}, \{\mathbf{R}\})$, can be written in terms of functions of the nuclear coordinates $\xi_i(\{\mathbf{R}\})$ and electron wavefunctions $\Psi_i(\{\mathbf{r}\} : \{\mathbf{R}\})$, which depend on the nuclear positions as parameters:

$$
\Psi(\{\mathbf{r}\}, \{\mathbf{R}\}) = \sum_i \xi_i(\{\mathbf{R}\})\,\Psi_i(\{\mathbf{r}\} : \{\mathbf{R}\}).
\tag{1.7}
$$

In this manner, the time-independent Schrödinger equation (1.6) can be decoupled into an equation for the electrons,

$$
\left[\hat{T}(\{\mathbf{r}\}) + \hat{W}(\{\mathbf{r}\}) + \hat{V}(\{\mathbf{r}\}, \{\mathbf{R}\})\right]\Psi_i(\{\mathbf{r}\} : \{\mathbf{R}\}) = E_i(\{\mathbf{R}\})\,\Psi_i(\{\mathbf{r}\} : \{\mathbf{R}\}),
\tag{1.8}
$$

and into a purely nuclear equation for each electronic state $i$:

$$
\left[\hat{T}_N(\{\mathbf{R}\}) + \hat{U}_i(\{\mathbf{R}\})\right]\xi_i(\{\mathbf{R}\}) = E\,\xi_i(\{\mathbf{R}\}).
\tag{1.9}
$$

In the previous equations, $E_i(\{\mathbf{R}\})$ stands for the eigenvalues of the electron equation and $\hat{U}_i(\{\mathbf{R}\})$ represents a modified potential function for the nuclear motion that includes the interactions between nuclei and $E_i(\{\mathbf{R}\})$, among other terms (see Ref. [20]).

In spite of the simplifications that arose from the adiabatic approximation, these equations are far too difficult to solve for a reasonable number of electrons and nuclei. Normally, in electronic structure, the equation for the nuclei is neglected in favour of a classical one, while the equation for the electrons is further simplified. Before even attempting to solve eq. (1.8), it is necessary to understand how to extract the most important properties in electronic structure theory. These are the ground state total energy, the electron density, and excitations. Additionally, one can include two energy derivatives: the forces and the stress tensor.

The total energy of a system is given by the expectation value of the Hamiltonian,

$$
E[\Psi] = \frac{\langle\Psi|\hat{H}|\Psi\rangle}{\langle\Psi|\Psi\rangle}.
\tag{1.10}
$$

By definition, the ground state $\Psi_0$ is associated with the lowest energy. Consequently, the variation of the energy functional $E[\Psi]$ with respect to the wavefunction has to be stationary for the ground state. In fact, this variation leads to the Schrödinger equation:

$$
\frac{\delta E[\Psi_0]}{\delta \Psi_0^*} = \frac{\hat{H}\Psi_0}{\langle\Psi_0|\Psi_0\rangle} - \frac{\langle\Psi_0|\hat{H}|\Psi_0\rangle}{\langle\Psi_0|\Psi_0\rangle^2}\,\Psi_0 = 0
\quad \text{hence} \quad
\hat{H}\Psi_0 = E_0\Psi_0.
\tag{1.11}
$$

This can also be found by varying the energy subject to the constraint of orthonormality using the method of Lagrange multipliers,

$$
\delta\left[\langle\Psi|\hat{H}|\Psi\rangle - E\left(\langle\Psi|\Psi\rangle - 1\right)\right] = 0,
\tag{1.12}
$$

which is equivalent to the variational (Rayleigh-Ritz) principle. Additionally, a small deviation from the ground state wave function yields

$$
E[\Psi_0 + \delta\Psi] = \frac{\langle\Psi_0 + \delta\Psi|\hat{H}|\Psi_0 + \delta\Psi\rangle}{\langle\Psi_0 + \delta\Psi|\Psi_0 + \delta\Psi\rangle} = \frac{E_0\langle\Psi_0|\Psi_0\rangle + \langle\delta\Psi|\hat{H}|\delta\Psi\rangle}{\langle\Psi_0|\Psi_0\rangle + \langle\delta\Psi|\delta\Psi\rangle} = E_0 + O\!\left(\delta\Psi^2\right).
\tag{1.13}
$$

Thus, the variational principle provides a strategy to discover approximations to the ground state wavefunction through energy minimization. Furthermore, the error of such an approximation to the ground state energy converges to second order in the deviation.
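This second-order convergence is easy to verify numerically; below is a minimal sketch using an arbitrary two-level Hamiltonian (a toy model chosen for illustration, not a system from this thesis):

```python
import numpy as np

# Numerical check of eq. (1.13): the Rayleigh quotient evaluated at a perturbed
# ground state deviates from E0 only to second order in the deviation.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
evals, evecs = np.linalg.eigh(H)
E0, psi0 = evals[0], evecs[:, 0]  # ground state: E0 = -1
psi1 = evecs[:, 1]                # orthogonal excited state

def rayleigh(psi):
    """Energy functional E[psi] of eq. (1.10) for a real vector psi."""
    return psi @ H @ psi / (psi @ psi)

for eps in (1e-1, 1e-2, 1e-3):
    err = rayleigh(psi0 + eps * psi1) - E0
    print(f"eps = {eps:.0e}  energy error = {err:.3e}  error/eps^2 = {err/eps**2:.3f}")
```

As the deviation shrinks by a factor of 10, the energy error shrinks by a factor of 100, while the ratio error/eps² stays essentially constant.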

In a similar fashion, the expectation value of the density operator

$$
\hat{n}(\mathbf{r}) = \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i)
\tag{1.14}
$$

provides the electron density

$$
n(\mathbf{r}) = \frac{\langle\Psi|\hat{n}(\mathbf{r})|\Psi\rangle}{\langle\Psi|\Psi\rangle} = N\,\frac{\int d^3r_2 \cdots d^3r_N \sum_{\sigma_1} |\Psi(\mathbf{r}, \mathbf{r}_2, \mathbf{r}_3, \ldots, \mathbf{r}_N)|^2}{\int d^3r_1\, d^3r_2 \cdots d^3r_N\, |\Psi(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \ldots, \mathbf{r}_N)|^2}.
\tag{1.15}
$$
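As a toy numerical check of this definition, one can build the density of two non-interacting spinless fermions in a 1D box (an illustrative model, not a system treated in this thesis) by integrating out the second coordinate:

```python
import numpy as np

# Toy 1D version of eq. (1.15) for N = 2 spinless fermions.
# Orbitals: the two lowest particle-in-a-box states on [0, 1].
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi1 = np.sqrt(2.0) * np.sin(np.pi * x)
phi2 = np.sqrt(2.0) * np.sin(2.0 * np.pi * x)

# Antisymmetric two-particle wavefunction Psi(x1, x2) on the grid
Psi = (np.outer(phi1, phi2) - np.outer(phi2, phi1)) / np.sqrt(2.0)

# n(x1) = N * int dx2 |Psi|^2 / int dx1 dx2 |Psi|^2, with N = 2
norm = np.sum(np.abs(Psi) ** 2) * dx * dx
n = 2.0 * np.sum(np.abs(Psi) ** 2, axis=1) * dx / norm

print(np.sum(n) * dx)  # the density integrates to the particle number, 2
```

For this determinantal wavefunction the density reduces, as expected, to the sum of the orbital densities $|\varphi_1|^2 + |\varphi_2|^2$.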

As we will see later in this chapter, the electron density can be used to express all quantum mechanical observables as functionals of this real, scalar function of three variables. For example, the expectation value of the interaction between electrons and nuclei (from eq. (1.3)) can be written as

$$
\langle\Psi|\hat{V}|\Psi\rangle = \int d^3r\; v_{\text{ext}}(\mathbf{r})\, n(\mathbf{r}).
\tag{1.16}
$$

In condensed matter, excitations are nothing more than small perturbations of a system. Examples of excitations are variations of the ground state (e.g. small displacements of the ions in phonon modes) or true electronic excitations (e.g. optical electronic excitations). Therefore, after finding the ground state, perturbation theory techniques provide the appropriate tools to calculate excitations, such as excitation spectra and the real and imaginary parts of response functions.

Forces, on the other hand, can be calculated as in classical mechanics, as was noted by Ehrenfest [28] already in 1927, and by many that followed [20, 29–32]. The force (Hellmann-Feynman) theorem [33], however, was only derived by Feynman in 1939, who explicitly demonstrated that the force on a nucleus depends strictly on the charge density and not on the electron kinetic energy, exchange, and correlation. The force conjugate to any parameter describing a system (e.g. the position of a nucleus $\mathbf{R}_I$) can always be expressed as

$$
\mathbf{F}_I = -\frac{\partial E}{\partial \mathbf{R}_I}.
\tag{1.17}
$$

Performing the differentiation, assuming $\langle\Psi|\Psi\rangle = 1$ in eq. (1.10) for convenience, leads to

$$
\frac{\partial E}{\partial \mathbf{R}_I} = \langle\Psi|\frac{\partial \hat{H}}{\partial \mathbf{R}_I}|\Psi\rangle + \langle\frac{\partial \Psi}{\partial \mathbf{R}_I}|\hat{H}|\Psi\rangle + \langle\Psi|\hat{H}|\frac{\partial \Psi}{\partial \mathbf{R}_I}\rangle = \langle\Psi|\frac{\partial \hat{H}}{\partial \mathbf{R}_I}|\Psi\rangle,
\tag{1.18}
$$

where we used the fact that the Hamiltonian is Hermitian and $\Psi$ is a normalized eigenstate, so that the terms containing derivatives of the wavefunction vanish. Furthermore, we can explicitly write out the Hamiltonian terms to reach

$$
\mathbf{F}_I = -\frac{\partial E}{\partial \mathbf{R}_I} = -\langle\Psi|\frac{\partial \hat{V}}{\partial \mathbf{R}_I}|\Psi\rangle = -\int d^3r\; n(\mathbf{r})\,\frac{\partial V(\mathbf{r})}{\partial \mathbf{R}_I}.
\tag{1.19}
$$
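The content of eqs. (1.17)–(1.19) can be illustrated numerically: for a parametrized Hamiltonian, the derivative of the ground-state energy equals the expectation value of the derivative of the Hamiltonian. The 2×2 matrix below is a toy model chosen only for this check, not a system from the thesis:

```python
import numpy as np

# Hellmann-Feynman check: dE/dlambda == <psi | dH/dlambda | psi>
# for a normalized ground-state eigenvector psi.
def hamiltonian(lam):
    return np.array([[lam, 1.0],
                     [1.0, -lam]])

def ground(lam):
    evals, evecs = np.linalg.eigh(hamiltonian(lam))
    return evals[0], evecs[:, 0]

lam, h = 0.7, 1e-6
E0, psi0 = ground(lam)

# Finite-difference derivative of the ground-state energy
dE_fd = (ground(lam + h)[0] - ground(lam - h)[0]) / (2 * h)

# Hellmann-Feynman expectation value of dH/dlambda
dH = np.array([[1.0, 0.0],
               [0.0, -1.0]])
dE_hf = psi0 @ dH @ psi0

print(dE_fd, dE_hf)  # the two derivatives agree
```

No derivative of the eigenvector is ever needed, which is exactly why forces in electronic structure codes are so much cheaper than a naive finite-difference of total energies.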

Similarly, the stress (generalized virial) theorem [34–36] provides us with a different type of variation. For a system in equilibrium, the derivation of the stress tensor $\sigma_{\alpha\beta}$ requires the application of an infinitesimal homogeneous scaling to the ground state, and the derivative of the energy with respect to the symmetric strain $\epsilon_{\alpha\beta}$,

$$
\sigma_{\alpha\beta} = -\frac{1}{\Omega}\frac{\partial E}{\partial \epsilon_{\alpha\beta}}.
\tag{1.20}
$$

Here $\Omega$ is the volume, and $\alpha$ and $\beta$ are Cartesian indices that come from the scaling of space,

$$
r_\alpha \rightarrow \left(\delta_{\alpha\beta} + \epsilon_{\alpha\beta}\right) r_\beta,
\tag{1.21}
$$

where $\mathbf{r}$ represents particle positions and translation vectors. Under this scaling the wavefunction changes [37] to

$$
\Psi_\epsilon(\{\mathbf{r}_i\}) = \det\left(\delta_{\alpha\beta} + \epsilon_{\alpha\beta}\right)^{-1/2} \Psi\!\left(\left\{\left(\delta_{\alpha\beta} + \epsilon_{\alpha\beta}\right)^{-1} r_{i\beta}\right\}\right).
\tag{1.22}
$$

Actually, the wavefunctions and the nuclear positions subjected to an expansion or a compression can change in different ways; however, these other ways do not contribute to the energy (to first order), since the wavefunction and the nuclear positions are at their variational minima. Combining the previous equations with eq. (1.10) and performing the integrations by changing variables results in

$$
\sigma_{\alpha\beta} = -\,\langle\Psi|\sum_k \frac{\hbar^2}{2m_k}\nabla_{k\alpha}\nabla_{k\beta} - \frac{1}{2}\sum_{\substack{k,k' \\ k\neq k'}} \frac{(\mathbf{r}_{kk'})_\alpha\,(\mathbf{r}_{kk'})_\beta}{r_{kk'}}\,\frac{d\hat{V}}{dr_{kk'}}|\Psi\rangle,
\tag{1.23}
$$

where $k$ and $k'$ represent particles and $r_{kk'}$ is the distance between them. The trace of the previous equation, $P = -\frac{1}{3}\sum_\alpha \sigma_{\alpha\alpha}$, amounts to the well known virial theorem for the pressure [38, 39]. This derivation in terms of the stretching of the ground state was first performed by Fock in 1930 [40]. If only Coulomb interactions are present and if we include all the contributions from the nuclei and electrons in the potential energy, we can obtain the relation

$$
3P\Omega = 2E_{\text{kinetic}} + E_{\text{potential}}.
\tag{1.24}
$$

### 1.2 Density functional theory

We must now explain why solving eq. (1.8) is unfeasible. Firstly, the computational methods required to solve this equation scale exponentially with the number of electrons, and that is just for finding the ground state. Secondly, the many-body wavefunction contains much more information than necessary. This can be further understood if we consider the example of Ref. [41]. Consider the solution for an oxygen atom. Even neglecting spin, the wavefunction depends on 24 coordinates: 3 spatial coordinates for each of the 8 electrons. The solution of this problem usually requires a basis set or the discretization of space. So let us consider a small grid of 10 points per coordinate. This means that to store the wavefunction we need 10^24 numbers. In scientific calculations, these numbers are usually stored as double-precision floating-point numbers, each requiring 64 bits. This means that to store the wavefunction of the oxygen atom we need roughly 8×10^12 TB... From here we can speculate how tremendous the task of calculating additional properties would be, such as phonons or critical temperatures, or even that of solving the time-dependent equation.
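The arithmetic behind this estimate can be checked in a couple of lines:

```python
# Back-of-the-envelope check of the storage estimate for the oxygen atom:
# 8 electrons x 3 spatial coordinates, 10 grid points per coordinate,
# one 64-bit (8-byte) double per grid point.
n_electrons = 8
coords_per_electron = 3
grid_points_per_coord = 10
bytes_per_double = 8

n_values = grid_points_per_coord ** (n_electrons * coords_per_electron)  # 10**24
total_bytes = n_values * bytes_per_double
terabytes = total_bytes / 1e12

print(f"{n_values:.1e} numbers -> {terabytes:.1e} TB")
```

Adding a single extra electron multiplies the storage by another factor of 10³, which is the exponential wall that motivates working with the density instead.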

Luckily, there is a way to bypass these problems by considering the density as a basic variable and rewriting the energy of a quantum system as a functional of the density. This idea was first proposed by Thomas [42] and Fermi [43] in 1927. In their method, the system of electrons is described as a classical liquid and the kinetic energy is approximated as an explicit functional of the density. The Thomas-Fermi method was further developed in the following years; for example, in 1930 Dirac [44] formulated the local approximation for the exchange, which was neglected in the original method. However, the formulation of density functional theory as we know it today, i.e., as an exact theory of many-body systems, only appeared in 1964 with the work of Hohenberg and Kohn [45].

Finally, we would like to note that DFT is not the only (approximate) solution to the many-body problem, although it is the most efficient for solids. Other theories worth mentioning are many-body perturbation theory [46–49] (in particular the GW approximation [50] and the Bethe-Salpeter equation [51]), coupled cluster [26], Hartree-Fock [20, 26], and full configuration interaction [26].

### 1.2.1 Hohenberg-Kohn theorems

At the heart of DFT lie the Hohenberg-Kohn theorems [15, 20, 52, 53], first proved by Hohenberg and Kohn by reductio ad absurdum and using the variational principle. The first of these theorems states that the external potential $\hat{V}(\mathbf{r})$ of a system of interacting particles is a unique functional of the ground state density, apart from a trivial additive constant. This means that the mappings $A$ and $B$ between the ground state density, the wavefunction, and the external potential are bijective (fully invertible):

$$
\{\hat{V}\} \stackrel{A}{\longleftrightarrow} \{\Psi_0\} \stackrel{B}{\longleftrightarrow} \{n_0\}.
\tag{1.25}
$$

In other words, there is a one-to-one correspondence between the ground state electron density and the external potential. Proofs of this theorem can be found in the aforementioned references or in any other DFT textbook [15–17]. Since the Hamiltonian and the wavefunctions are fully determined given the knowledge of the ground state density (up to a constant shift in the energy), all properties of the system are completely determined. This means that the nondegenerate ground-state wave function and the expectation value of any observable $\hat{O}$ are functionals of the ground-state density,

$$
O_0 = O[n_0] = \langle\Psi_0[n_0]|\hat{O}|\Psi_0[n_0]\rangle.
\tag{1.26}
$$

This is true in particular for the energy functional. The second of the Hohenberg-Kohn theorems reveals that a universal functional of the density $F[n(\mathbf{r})]$ can be defined for any

number of particles or external potential. Then, for a certain external potential $V(\mathbf{r})$, an energy functional of the density $E[n(\mathbf{r})]$ can also be defined. Moreover, the ground-state density is the density that minimizes this functional, for which the functional provides the ground-state energy. Once again, proof of this theorem can be found in any DFT textbook [15–17]. Here we will just show the definitions of these functionals:

$$
E[n(\mathbf{r})] = \int d^3r\; v(\mathbf{r})\,n(\mathbf{r}) + F[n(\mathbf{r})] = \int d^3r\; v(\mathbf{r})\,n(\mathbf{r}) + \langle\Psi|\hat{T} + \hat{W}|\Psi\rangle.
\tag{1.27}
$$

Clearly, for the ground-state density, the functional equals the ground-state energy. We note that the original proof of Hohenberg and Kohn is restricted to V-representable densities, i.e., densities that are the ground-state densities of the electron Hamiltonian with a particular potential. However, many reasonable densities have been shown to be non-V-representable. It is therefore noteworthy to mention the independent constrained-search formulations of Levy [54–56] and Lieb [56–58] for the Hohenberg-Kohn functional. Their approach consists in defining a two-step minimization procedure, whose first step minimizes the energy over the class of wavefunctions with the same density $n(\mathbf{r})$:

$$
E_{LL}[n(\mathbf{r})] = \int d^3r\; v(\mathbf{r})\,n(\mathbf{r}) + \min_{\Psi\to n(\mathbf{r})} \langle\Psi|\hat{T} + \hat{W}|\Psi\rangle.
\tag{1.28}
$$

This leads to a unique lowest energy for that density, and the ground-state density is found by minimizing this functional of the density:

$$
E_0 = \min_{\{n\}} E_{LL}[n].
\tag{1.29}
$$

Formally, this can be achieved by varying the energy subject to the particle-number constraint using a Lagrange multiplier $\mu$:

$$
\delta\!\left\{F[n] + \int d^3r\; v(\mathbf{r})\,n(\mathbf{r}) - \mu\left(\int d^3r\; n(\mathbf{r}) - N_0\right)\right\} = 0.
\tag{1.30}
$$

On top of that, carrying out the functional derivatives results in a Euler equation,

$$
\frac{\delta F[n]}{\delta n(\mathbf{r})} + v(\mathbf{r}) - \mu = 0.
\tag{1.31}
$$

In this formulation the energy functional is defined for any density obtained from a wavefunction $\Psi_N$ for $N$ electrons, or in other words, for any N-representable density, which is fantastic since any reasonable density satisfies the N-representability condition [20, 59].

The Hohenberg-Kohn theorems reveal how to reformulate the many-body problem for $N$ electrons by resorting to the density instead of the wavefunction, and ensure that the density is sufficient to determine all properties of a system. However, this relation is rather subtle, and it still remains unclear how one can extract a set of properties directly from the density. This could have been a huge setback for density functional theory, if not for the brilliant Kohn-Sham approach [60].

### 1.2.2 Kohn-Sham scheme

The success of DFT, and the reason why it is one of the most widely used methods for electronic structure calculations, originates from the work of Kohn and Sham. Their approach consists in replacing the original many-body problem by an auxiliary independent-particle problem [20]. To do so, however, we are forced to consider a hypothesis: the existence of an auxiliary non-interacting system whose ground-state density reproduces the ground-state density of the interacting system.

Now, the ground-state wavefunction of such a system of N non-interacting electrons can be written as a Slater determinant of single-particle orbitals,

$$
\psi(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N) = \frac{1}{\sqrt{N!}}
\begin{vmatrix}
\varphi_1(\mathbf{x}_1) & \varphi_2(\mathbf{x}_1) & \cdots & \varphi_N(\mathbf{x}_1) \\
\varphi_1(\mathbf{x}_2) & \varphi_2(\mathbf{x}_2) & \cdots & \varphi_N(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
\varphi_1(\mathbf{x}_N) & \varphi_2(\mathbf{x}_N) & \cdots & \varphi_N(\mathbf{x}_N)
\end{vmatrix},
$$

where the orbitals that form the antisymmetric wavefunction satisfy the equation

$$
\left[-\frac{\nabla^2}{2} + v_s[n](\mathbf{r})\right]\varphi_i(\mathbf{r}) = \epsilon_i\,\varphi_i(\mathbf{r}).
\tag{1.32}
$$
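The antisymmetry encoded in this determinant is easy to verify numerically; the toy 1D "orbitals" below are arbitrary functions chosen for illustration, not Kohn-Sham orbitals of any real system:

```python
import math
import numpy as np

# Exchanging two particle coordinates swaps two rows of the Slater matrix,
# so the determinant (and hence the wavefunction) flips sign.
orbitals = [np.sin, np.cos, np.tanh]

def slater(xs):
    """Evaluate the 3-particle Slater determinant at 1D positions xs."""
    mat = np.array([[phi(xi) for phi in orbitals] for xi in xs])
    return np.linalg.det(mat) / math.sqrt(math.factorial(len(xs)))

x = [0.1, 0.7, 1.3]
x_swapped = [0.7, 0.1, 1.3]  # exchange particles 1 and 2
print(slater(x), slater(x_swapped))  # same magnitude, opposite sign
```

The same row-swap argument shows that the determinant vanishes whenever two particles occupy the same position or the same orbital, i.e. the Pauli principle is built in.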

Then, it is trivial to obtain the energy functional of such an auxiliary system,

$$
E_s[n] = T_s[n] + \int d^3r\; v_s(\mathbf{r})\,n(\mathbf{r}).
\tag{1.33}
$$

The constrained variation of this functional with respect to the density results in the Euler equation

$$\frac{\delta T_s[n]}{\delta n(r)} + v_s(r) - \mu_s = 0, \qquad (1.34)$$

where T_s is the non-interacting kinetic energy functional and μ_s a Lagrange multiplier. The gist of the Kohn-Sham approach is to rearrange the terms of the energy functional in eq. (1.27) in a way that resembles the one above:

$$\begin{aligned}
E[n] &= T[n] + W[n] + \int d^3r\, v(r)\, n(r) \\
&= T_s[n] + (T[n] - T_s[n]) + E_H[n] + (W[n] - E_H[n]) + \int d^3r\, v(r)\, n(r) \\
&= T_s[n] + E_H[n] + E_{xc}[n] + \int d^3r\, v(r)\, n(r). \qquad (1.35)
\end{aligned}$$

In this equation we used the definition of the Hartree energy, which accounts for the classical Coulomb interaction between the electrons,

$$E_H[n] = \frac{1}{2} \int d^3r \int d^3r'\, \frac{n(r)\, n(r')}{|r - r'|}, \qquad (1.36)$$

and that of the exchange and correlation energy functional

$$E_{xc}[n] = T[n] - T_s[n] + W[n] - E_H[n]. \qquad (1.37)$$

Again, we note that the rearrangement in equation (1.35) is only possible under the assumption that the ground-state density of the interacting system can be represented as the ground-state density of the non-interacting system. With this rearrangement, the Euler equation for the interacting system (eq. (1.27)) becomes

$$\frac{\delta T_s[n]}{\delta n(r)} + v(r) + v_H + v_{xc} - \mu = 0. \qquad (1.38)$$

It turns out that the Euler equations for both systems (eqs. (1.34) and (1.38)) are equivalent if

$$v_s = v(r) + v_H + v_{xc} - (\mu - \mu_s) = v(r) + v_H + v_{xc}, \qquad (1.39)$$

where the difference between the Lagrange multipliers was absorbed into the exchange and correlation term. In this manner, one can calculate the variation of the energy functional in eq. (1.35) with respect to the wavefunctions of the auxiliary system and obtain the well-known equations of Kohn and Sham

$$\left[ -\frac{\nabla^2}{2} + v(r) + v_H[n](r) + v_{xc}[n](r) \right] \psi_i(r) = \varepsilon_i\, \psi_i(r). \qquad (1.40)$$

The exchange and correlation potential and the Hartree potential are defined as the functional derivatives of their energy counterparts, for example

$$v_{xc}[n](r) = \frac{\delta E_{xc}[n]}{\delta n(r)}. \qquad (1.41)$$

Finally, the electronic density can be calculated as

$$n(r) = \sum_i^{\text{occ.}} |\psi_i(r)|^2, \qquad (1.42)$$

where the sum runs over all the occupied states.

The beauty of the Kohn-Sham equations is that the solution of the many-body problem no longer involves complicated wavefunctions of many interacting electrons. The ground-state density is now obtained from single-particle wavefunctions that obey a Schrödinger-like equation with an extremely complex potential. If this potential were known, the self-consistent solution of the Kohn-Sham equations would yield the exact ground-state density and energy of the interacting system.
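The self-consistent cycle just described can be sketched for a toy one-dimensional system. The snippet below is purely illustrative: the `0.5 * density` mean-field term is a hypothetical stand-in for the Hartree and exchange-correlation potentials, not a real functional; only the structure of the loop mirrors eqs. (1.40)–(1.42).

```python
import numpy as np

# Toy self-consistent Kohn-Sham-like cycle on a 1D grid. The mean-field term
# (0.5 * density) is a hypothetical placeholder for v_H + v_xc.
n_grid, box = 200, 10.0
x = np.linspace(-box / 2, box / 2, n_grid)
dx = x[1] - x[0]
v_ext = 0.5 * x**2        # external potential v(r): harmonic well
n_occ = 2                 # number of occupied single-particle orbitals

def solve_orbitals(v_eff):
    """Diagonalize -1/2 d^2/dx^2 + v_eff with finite differences."""
    h = (np.diag(1.0 / dx**2 + v_eff)
         + np.diag(-0.5 / dx**2 * np.ones(n_grid - 1), 1)
         + np.diag(-0.5 / dx**2 * np.ones(n_grid - 1), -1))
    eps, phi = np.linalg.eigh(h)
    return eps, phi / np.sqrt(dx)       # grid-normalized orbitals

density = np.full(n_grid, n_occ / (n_grid * dx))   # initial uniform guess
for iteration in range(200):
    v_eff = v_ext + 0.5 * density                  # schematic v + v_H + v_xc
    eps, phi = solve_orbitals(v_eff)
    n_new = np.sum(phi[:, :n_occ] ** 2, axis=1)    # density, eq. (1.42)
    change = np.sum(np.abs(n_new - density)) * dx
    density = 0.7 * density + 0.3 * n_new          # linear mixing
    if change < 1e-8:                              # self-consistency reached
        break
```

Linear mixing of the density is the simplest way to stabilize such a cycle; production codes use more sophisticated schemes (e.g. Pulay mixing).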

### 1.2.3

### Jacob’s ladder

The elusive exchange and correlation energy functional of Kohn-Sham DFT is often split in two terms, the exchange functional (E_x) and the correlation functional (E_c):

$$E_{xc}[n] = E_x[n] + E_c[n] = \int d^3r\, n(r) \left[ \epsilon_x(r) + \epsilon_c(r) \right], \qquad (1.43)$$

where ε_i represents an energy per electron of the system. We use this notation throughout this section. Moreover, these functionals can be defined as [61]

$$\begin{aligned}
E_x[n] &= \langle \Phi_s[n] | \hat{W} | \Phi_s[n] \rangle - \frac{1}{2} \int d^3r\, d^3r'\, n(r)\, n(r')\, w(|r - r'|) \\
E_c[n] &= \langle \Psi[n] | \hat{T} + \hat{V} + \hat{W} | \Psi[n] \rangle - \langle \Phi_s[n] | \hat{T} + \hat{V} + \hat{W} | \Phi_s[n] \rangle,
\end{aligned} \qquad (1.44)$$

where |Φ_s[n]⟩ is the Kohn-Sham wavefunction and |Ψ[n]⟩ the true ground-state wavefunction of the interacting system with density n. Nonetheless, the exact mathematical expression for these functionals of the density, which should describe many-body interactions, is not known.
One could, in principle, construct an explicit form for the functional after solving all possible electronic systems, yet this approach remains rather unfeasible. So, as usually happens, approximations are required, and over the last decades hundreds of approximations for the exchange and correlation functionals were proposed [62–65]. The major difficulty, however, concerns the impossibility of systematically improving these functionals of the density. Nothing guarantees that the addition of further ingredients, which satisfy more exact constraints or make the functional more flexible, leads to an improvement in the description of all types of interactions across distinct chemical environments. Still, a hierarchy has been in development since the work of Kohn and Sham [60], a hierarchy coined by Perdew as "Jacob's ladder" [66]. The metaphor starts with the Hartree world at the bottom, where exchange and correlation energies are zero and classical electrostatics perfectly describes the interaction between electrons. Every rung then represents the inclusion of a different ingredient in the functional, and the belief is that the ladder culminates in the heaven of chemical accuracy. This concept of chemical accuracy was further discussed by Pople in his Nobel prize lecture [67], where he argues that a global accuracy of 1 kcal/mol (roughly 43 meV/atom) for energies with respect to experimental values would be appropriate.

The first rung consists of the local spin density approximation (LDA), where the functional depends on the density as in

$$E_{xc}^{\text{LDA}} = \int d^3r\, n(r)\, \epsilon_{xc}^{\text{LDA}}(n(r)). \qquad (1.45)$$

The second rung sees the inclusion of the gradient of the density in the approximation:

$$E_{xc}^{\text{GGA}} = \int d^3r\, n(r)\, \epsilon_{xc}^{\text{GGA}}(n(r), |\nabla n(r)|). \qquad (1.46)$$

This is the form of the generalized-gradient approximation (GGA), and functionals of this family are often regarded as semi-local due to the infinitesimal region around r spanned by the gradient. Following the trend, the ingredient required for the third rung is the Laplacian of the density ∇²n(r) and/or similar quantities, such as the kinetic energy density

$$\tau(r) = \frac{1}{2} \sum_i^{\text{occ.}} |\nabla \psi_i(r)|^2. \qquad (1.47)$$

With these quantities the functional form changes to that of a meta-GGA (mGGA):

$$E_{xc}^{\text{mGGA}} = \int d^3r\, n(r)\, \epsilon_{xc}^{\text{mGGA}}\left(n(r), |\nabla n(r)|, \tau(r), \nabla^2 n(r)\right). \qquad (1.48)$$

The fourth rung takes a different approach: the inclusion of terms involving the dependence on the occupied Kohn-Sham orbitals, which can be achieved in different manners. For example, by including exact exchange and a compatible correlation, i.e., mixing a fraction α_x of exact (Hartree-Fock) exchange with another functional (either GGA or mGGA), as in

$$E_{xc}^{\text{hyb}} = -\frac{\alpha_x}{2} \sum_{i,j}^{\text{occ.}} \int d^3r \int d^3r'\, \frac{\psi_i(r)\, \psi_i^*(r')\, \psi_j(r')\, \psi_j^*(r)}{|r - r'|} + E_{xc}^{\text{DFT}}[n]. \qquad (1.49)$$

This is the form of a hybrid functional, and it can be regarded as truly non-local due to the exact exchange term. The last known rung, for which we do not show the functional form, incorporates the dependence on all orbitals (both occupied and unoccupied). Functionals of this kind contain exact exchange and exact partial correlation. Examples include the inclusion of post-Hartree-Fock correlation in the approximations denoted as double hybrids [68] and the random phase approximation plus corrections [66].

Among all these families of functionals, the GGA proposed by Perdew, Burke and Ernzerhof (PBE [69]) stands out, as it is the most used functional in materials science. Many reasons contributed to this. For example, while the most accurate approximations for the exchange and correlation energy, the hybrid functionals, require the computation of exact exchange, a rather demanding operation in plane-wave codes, the PBE does not; it is in fact very computationally efficient. Furthermore, it provides a fairly accurate reproduction of crystal structure energies and lattice constants (among other properties). The PBE was also presented at the right time and witnessed the boom of electronic structure packages: it was used for many calculations and re-used for comparison purposes. Not only that, but it was the approximation of choice for many databases, such as the Materials Project [70].

The exchange part of this approximation is given by [16, 71]

$$\begin{aligned}
\epsilon_x^{\text{PBE}} &= \epsilon_x^{\text{LDA}}\, F_x^{\text{PBE}}, \\
F_x^{\text{PBE}} &= 1 + \kappa - \frac{\kappa}{1 + \mu s^2/\kappa}, \qquad (1.50)
\end{aligned}$$

where κ = 0.804 and μ = 0.21951. The dimensionless gradient s is defined in terms of the gradient of the density, the density, and the local Fermi wave vector as in

$$s = \frac{|\nabla n|}{2 k_F n}. \qquad (1.51)$$
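As a quick numerical illustration, the enhancement factor of eq. (1.50) and the reduced gradient of eq. (1.51) are straightforward to evaluate. The sketch below assumes the standard expression k_F = (3π²n)^(1/3) for the local Fermi wave vector, which is not spelled out in the text above.

```python
import numpy as np

KAPPA, MU = 0.804, 0.21951          # PBE constants of eq. (1.50)

def fx_pbe(s):
    """PBE exchange enhancement factor F_x(s), eq. (1.50)."""
    return 1.0 + KAPPA - KAPPA / (1.0 + MU * np.asarray(s) ** 2 / KAPPA)

def reduced_gradient(grad_n, n):
    """Dimensionless gradient s = |grad n| / (2 k_F n), eq. (1.51),
    assuming the standard k_F = (3 pi^2 n)^(1/3)."""
    k_f = (3.0 * np.pi**2 * n) ** (1.0 / 3.0)
    return np.abs(grad_n) / (2.0 * k_f * n)
```

F_x grows monotonically from 1 (the LDA limit at s = 0) and saturates at 1 + κ ≈ 1.804 for large gradients, so the PBE enhancement of exchange is bounded.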

The correlation part, in turn, is chosen as [16, 71]

$$\epsilon_c^{\text{GGA}} = \epsilon_c^{\text{LDA}} + H(r_s, \zeta, t), \qquad (1.52)$$

where ζ = (n↑ − n↓)/n is the spin polarization, t = |∇n|/(2φk_s n) is a dimensionless gradient, and r_s is the local Seitz radius. In the definition of t, k_s is the Thomas-Fermi screening wave vector and φ = [(1 + ζ)^(2/3) + (1 − ζ)^(2/3)]/2 is just a spin scaling factor. Using these definitions, we can write H as

$$H = \frac{e^2 \gamma \phi^3}{a_0} \ln\left[ 1 + \frac{\beta}{\gamma}\, t^2\, \frac{1 + At^2}{1 + At^2 + A^2 t^4} \right]. \qquad (1.53)$$

Here, γ and β are constants and the function A represents

$$A = \frac{\beta}{\gamma} \left[ e^{-a_0 \epsilon_c^{\text{LDA}} / (e^2 \gamma \phi^3)} - 1 \right]^{-1}. \qquad (1.54)$$
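In Hartree atomic units (e = a₀ = 1), the gradient correction H of eqs. (1.53)–(1.54) can be sketched as below. The numerical values of β and γ are not given in the text above, so the standard PBE values β = 0.066725 and γ = (1 − ln 2)/π² are assumed here.

```python
import math

BETA = 0.066725                              # assumed: standard PBE beta
GAMMA = (1.0 - math.log(2.0)) / math.pi**2   # assumed: standard PBE gamma

def h_pbe(t, phi, eps_c_lda):
    """Gradient contribution H(r_s, zeta, t) of eq. (1.53) in Hartree atomic
    units (e = a_0 = 1); r_s and zeta enter through eps_c_lda and phi."""
    a = BETA / GAMMA / (math.exp(-eps_c_lda / (GAMMA * phi**3)) - 1.0)  # eq. (1.54)
    t2 = t * t
    return GAMMA * phi**3 * math.log(
        1.0 + BETA / GAMMA * t2 * (1.0 + a * t2) / (1.0 + a * t2 + a * a * t2 * t2)
    )
```

For a vanishing gradient (t = 0) the correction vanishes, so eq. (1.52) correctly reduces to the LDA correlation energy in the homogeneous limit.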

## Chapter 2

## Structural prediction and molecular dynamics

You start a question, and it’s like starting a stone. You sit quietly on the top of a hill; and away the stone goes, starting others...

Robert L. Stevenson, The Strange Case of Dr. Jekyll and Mr. Hyde

Starting... In the last chapter we explained Kohn-Sham DFT. In the words of Richard Martin [20]: "So long as the true many-body solution is sufficiently close to the independent-particle formulation, e.g. the states must have the same symmetry, then the Kohn-Sham approach provides insightful guidance and powerful methods for electronic structure theory." Kohn-Sham DFT is indeed a remarkable theory that allows for the determination of the ground state of a system. But how can we be sure that we reached the ground state of a system? Surely, performing a self-consistent calculation for a random configuration of 8 copper atoms followed by a geometry relaxation should result in a supercell of its ground-state fcc lattice. Unfortunately, this is not always the case: the minimization procedure might get trapped in a local minimum of the potential energy surface (PES), or even at a saddle point. This simple example reveals a particularly difficult problem when considering solids, which pertains to the determination of the ground-state crystal structure of a system from only its chemical composition.

From a different point of view, the study of a system often involves the determination of dynamical properties and the simulation of the system under a specific statistical ensemble. The latter, for example, requires an accurate evaluation of the forces in large supercells in order to properly simulate the evolution of the system, while keeping certain of its properties constant, such as the total pressure or temperature.

The solutions to these problems took many years to be found and, as often occurs in science, came from many brilliant contributions that still continue to start others. In this chapter we describe two types of simulations particularly relevant in the field of materials science: structural prediction and molecular dynamics (MD). We focus most prominently on the flavours that we used for our applications, mainly the minima hopping method (MHM), genetic algorithms, and the Berendsen thermostat and barostat.

### 2.1

### Structure prediction

In 1988 John Maddox drew attention to a considerable problem in materials science in a Nature editorial. He wrote: "One of the continuing scandals of physical science is that it remains in general impossible to predict the structure of even the simplest crystalline solids from a knowledge of their chemical composition." [72] In fact, at the time, crystal structures were considered as unpredictable as the behaviour of the stock exchange. However, the situation changed dramatically in 2003-2006 with the proposal of several approaches aimed at the solution of this problem, only possible due to the explosive development of electronic structure methods. The idea behind structure prediction can be extracted from Maddox's text: find the stable crystal structure of a certain material knowing only its chemical composition (for specific thermodynamic conditions).

Here stability pertains to both thermodynamic and dynamical stability. Thermodynamic stability corresponds to the minimum of the Gibbs free energy: a structure will not decompose into another if the free energy of the decomposition channel is positive. Frequently, calculations resort to conditions of zero pressure and temperature. In these conditions, the relevant thermodynamic quantity is the total energy, and a structure is stable if its energy is lower than that of its individual constituents. For example, consider the formation energy of a binary compound A_iB_j:

$$E_F = E_{A_iB_j}/N - (x_i E_A + x_j E_B), \qquad (2.1)$$

where E denotes energies, x concentrations, and N the total number of atoms in the structure. The formation energy is then an energy per atom. If the formation energy is negative, then A_iB_j is more stable than its constituents. To evaluate thermodynamic stability, we thus have to compare energies with respect to all possible decomposition channels. Dynamical stability, on the other hand, concerns the second derivatives of the energy and the identification of minima among the stationary points of the PES (points whose first derivatives are zero). For example, local optimizers search for points that minimize the forces a structure is subjected to; however, these points might correspond to saddle points of the energy surface instead of minima. Their proper identification requires the second derivative test or the calculation of related properties, such as phonon frequencies. For instance, a structure incorrectly deemed thermodynamically stable, due to the lack of information on all possible decomposition channels, might be properly identified as unstable if it displays imaginary phonon frequencies, which indicate dynamical instability.
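Equation (2.1) translates directly into code. The helper below is a hypothetical illustration with made-up energies, not data from this work.

```python
def formation_energy(e_compound, n_atoms, fractions, e_refs):
    """Formation energy per atom, eq. (2.1): total energy of the compound cell
    divided by its number of atoms, minus the concentration-weighted energies
    per atom of the elemental references."""
    return e_compound / n_atoms - sum(x * e for x, e in zip(fractions, e_refs))

# Hypothetical binary A_iB_j cell with 8 atoms, x_A = x_B = 0.5 (made-up numbers):
e_f = formation_energy(-40.0, 8, [0.5, 0.5], [-4.0, -5.0])
# a negative e_f signals stability against decomposition into the elements
```

Checking stability against every decomposition channel (not just the elemental one) amounts to repeating this comparison for each candidate set of products.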

But why is this problem of structure prediction so complex? Each chemical composition is associated with an infinite number of atomic arrangements, and no one knows how many of these correspond to local minima of the free energy surface. Furthermore, it is also unknown which ones are the most stable and the most likely to be synthesised in a laboratory. Fortunately, several approaches were developed to tackle this problem, namely topological approaches [2], approaches based on empirical correlations using either structural diagrams [73–75] or data mining [76, 77], and approaches based on computational optimization [2]. Topological approaches rely on known information about the chemistry and symmetry of the system. For example, knowledge of the sp³ hybridization of carbon atoms leads to the diamond structure. Approaches based on empirical correlations require large databases of known stable crystal structures, and employ machine learning techniques or structural diagrams of basic properties (such as ionic radius and Mendeleev number) to uncover patterns among the data of similar structures. Computational optimization, on the other hand, consists in explicitly performing calculations to explore the free energy surface with the objective of finding its minima. Contrary to the other, biased approaches, it can lead to completely unforeseen results and novel structures. Still, this is not an easy task. First of all, exhaustive-search optimization techniques have to be discarded, since crystal structure prediction is a high-dimensional problem, in an extremely complex landscape, that admits an enormous number of feasible solutions. To make matters worse, this problem scales exponentially with the system size, as the number of points in the energy landscape can be obtained from

$$C = \binom{V/\delta^3}{N} \prod_i \binom{N}{n_i}, \qquad (2.2)$$

where V is the volume of the unit cell that contains N atoms, δ is a discretization parameter, and n_i represents the number of atoms of type i in the unit cell. Moreover, the number of local minima depends exponentially on the dimensionality d* of the energy landscape, which can be calculated as

$$d^* = 3N + 3 - \kappa, \qquad (2.3)$$

where 3N − 3 degrees of freedom come from the atomic positions, 6 from the lattice parameters, and the non-integer κ represents the number of correlated dimensions. Luckily, performing structure relaxations provides a simplification: in some cases, relaxation greatly reduces the dimensionality of the problem. For example, during relaxations the interatomic distances adjust to sensible, physical values.
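To get a feeling for these numbers, eq. (2.2) can be evaluated directly. The snippet below is a transcription of the formula as printed, for illustration only.

```python
from math import comb

def landscape_points(volume, delta, counts):
    """Number of points in the discretized energy landscape, following
    eq. (2.2): choose the N atomic sites among the V/delta^3 grid points,
    then distribute the atom types."""
    n_total = sum(counts)
    sites = round(volume / delta**3)
    c = comb(sites, n_total)
    for n_i in counts:
        c *= comb(n_total, n_i)
    return c

# Even a tiny binary cell is intractable for exhaustive search:
small = landscape_points(100.0, 1.0, [4, 4])    # ~10^15 configurations
```

Doubling the number of atoms multiplies this count by many orders of magnitude, which is the exponential scaling mentioned above.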

This implies that the best methods for structure prediction contain both a local and a global optimization procedure. Over the years, several methods were developed to tackle this exceptionally difficult problem, such as random sampling [11, 12, 78, 79], simulated annealing [80–82], metadynamics [83], minima hopping [9], and evolutionary algorithms [3, 13, 14, 84–88]. In the next sections we proceed with the discussion of both the minima hopping method (MHM) and genetic algorithms due to their importance in the scope of this thesis. However, before proceeding we are going to discuss extensions to the problem just described.

Often, the interest lies not in the knowledge of a stable crystal structure, but in the stable crystal structure that exhibits a certain property. This can be achieved with a hybrid optimization procedure that combines a local optimization for the energy with a global optimization for the property of interest [3, 86].

Another extension of the problem consists in the addition of other chemical compositions, and the subsequent prediction of the entire set of stable chemical compositions. In this manner, the complexity of the problem increases, as the energy landscape now depends on both compositional and structural coordinates. Moreover, the end result will now include the minima for all compositions and the set of ground states located on the convex hull of thermodynamic stability. This convex hull is the hyper-surface in composition space that contains all the thermodynamically stable materials, i.e., materials that lack any decomposition channel and that, therefore, will not decompose into other (more stable) phases. The distance to the convex hull represents the energy (or free energy) released in the decomposition.

### 2.1.1

### Minima hopping method

The MHM [9, 10] is a global structure prediction method, or global optimization procedure. It can be grouped with genetic algorithms and particle swarm optimization, as methods that do not rely on thermodynamic principles, in contrast to Markov-based Monte Carlo methods such as simulated annealing, basin hopping, and multicanonical methods.

The efficiency of a global optimization procedure can be understood as how fast it can climb out of wrong basins (local minima) and consequently find the global minimum of a complex function, such as a potential energy surface.

Figure 2.1 shows three distinct steps of a usual MHM run. The MHM consists of two parts: an inner part that performs jumps into the local minima of different basins, and an outer part that accepts or rejects these local minima.

Minima are accepted or rejected based on thresholding: if the difference in energies between the new and the current minimum is smaller than a certain variable tolerance, the minimum is accepted. In this manner, there is a preference for steps that lower the energy, yet steps that increase it can also be accepted.


Figure 2.1: Scheme of a MHM run. A) Initial random structure that can be anywhere in the PES. B) Geometry optimization leads to a minimum of the PES. C) After several geometry optimizations and short molecular dynamics the method finds the global minimum of the PES.

On the other hand, the inner part relies on short molecular dynamics runs to escape from the current local minimum, followed by a geometry relaxation to the closest local minimum. The geometry optimization can be done by standard steepest-descent and conjugate-gradient methods, or any other local optimization method; usually we employ the optimization procedure of the code we are using, in this case the geometry optimization of vasp. The molecular dynamics simulations, however, are not physical. The initial velocities of the atoms are chosen according to a Boltzmann distribution, in such a manner that the total kinetic energy equals a certain parameter γ_kin. In this manner, the system has sufficient energy to cross any barrier of height up to the value of this parameter. Also in play here is the Bell-Evans-Polanyi principle, which states that highly exothermic chemical reactions have a low activation energy. In the context of global optimization this means that it is more likely to find low-energy minima if the jump between basins overcomes a low barrier rather than a high one. In practice, a large γ_kin leads to a larger search space, while smaller values require extra MD simulations to find an escape path. It turns out that dynamically varying γ_kin during the simulation, such that half of the MD simulations find a new basin, is almost optimal.
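The interplay of the inner and outer parts can be illustrated with a schematic minima-hopping loop on a toy one-dimensional surface. The `relax` and random-kick routines below are crude hypothetical stand-ins for the geometry optimization and the short MD escape runs of the actual method [9]; only the accept/reject logic with feedback on the acceptance threshold mirrors the description above.

```python
import random

def relax(x, energy, step=1e-3, iters=5000):
    """Crude steepest descent with a numerical gradient (a stand-in for a
    real geometry optimization)."""
    for _ in range(iters):
        grad = (energy(x + 1e-6) - energy(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

def minima_hopping(x0, energy, steps=100, seed=0):
    """Schematic MHM outer loop: MD-like escape kick, relaxation, and
    threshold accept/reject with feedback on the acceptance window."""
    rng = random.Random(seed)
    e_diff, gamma_kin = 0.5, 1.0
    current = relax(x0, energy)
    best = current
    for _ in range(steps):
        trial = relax(current + rng.uniform(-1.0, 1.0) * gamma_kin, energy)
        if energy(trial) - energy(current) < e_diff:
            current, e_diff = trial, e_diff * 0.95   # accept: tighten window
        else:
            e_diff *= 1.05                           # reject: loosen window
        if energy(current) < energy(best):
            best = current
    return best

# Toy double-well "PES" with its global minimum near x = -1:
pes = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
x_best = minima_hopping(2.0, pes)
```

In the real method the feedback also acts on γ_kin (the MD kinetic energy), which is kept fixed in this sketch for brevity.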

### 2.1.2

### Genetic algorithms

Genetic algorithms (GA) are random-based classical evolutionary algorithms [89, 90]. As the name suggests, these algorithms draw inspiration from natural evolution processes, where a population competes for limited resources within an environment, which leads to natural selection (or survival of the fittest). Although initially proposed by Holland [91] to study adaptive behaviour, genetic algorithms have been widely known as optimization methods ever since their early successes reported in the works of Goldberg [92], De Jong [93], among others. In materials science in particular, genetic algorithms are renowned global structure optimizers.

Traditionally, genetic algorithms follow a rather fixed workflow. They start with the generation of a population of μ individuals. Though duplicate individuals might be present, the diversity of the initial population improves the efficiency of the algorithm. Each individual is characterized by a genotype and a fitness value. Actually, a more rigorous description would also include the phenotype, the observable characteristics of the collection of genes (the genotype).

In structural prediction, each individual might represent a structure for a certain chemical composition; the phenotype can then be the list of different atoms in the structure (Si or Ge, for example), while its encoding is the genotype (this could be an array with zeros for Si and ones for Ge).

After the generation comes the evaluation: the fitness value of each individual comes from the evaluation of the population using a fitness function, which represents the requirements the population should adapt to meet. Following our example, this can be the calculation of the formation energy of each structure from the collection of genes, and it can even include a local optimization, such as a geometry relaxation.

The next step involves parent selection: pairs of individuals are selected to become the parents of the next generation. Normally, this selection depends on the fitness function, so that the best individuals (those with higher fitness values) have a higher chance of being selected. Several algorithms exist to perform this selection: fitness-proportional selection, ranking selection, tournament selection, uniform parent selection, and over-selection for large populations. Their description can be found in any genetic algorithms book, such as Ref. [89]. Afterwards, it is time to apply (with a certain probability) the variation operators, namely the mutation and recombination (or crossover) operators. These operators can be implemented in a plethora of ways, and the efficiency of genetic algorithms is once again tied to their quality. As the name implies, the crossover operator merges the information from the genotypes of the parents into one or several offspring genotypes. For example, a one-point crossover divides the genotype of both parents at the same point and creates one or two offspring by merging the portions from different parents (and keeping the same length). So, a parent with only Si atoms can recombine with another with just Ge atoms to form a structure with 3 Si and 1 Ge. On the other hand, the mutation operator involves a slight stochastic variation of a genotype and usually occurs with a low probability. When it does occur, though, it increases the diversity of a population and it might steer the optimization procedure away from persistent local minima. Figure 2.2 displays examples of well-known mutation operators.


Figure 2.2: Swap (A), insert (B), scramble (C), and inversion (D) mutation examples.

Finally, a new population arises from survivor selection, and the algorithm repeats itself until a certain number of generations has been created or until a certain fitness value is achieved. The new population of μ individuals is selected from the previous μ individuals and from the λ offspring. A handful of methods exist to perform this replacement based on age or on fitness: replace worst, elitism, round-robin tournament, (μ + λ) selection, and (μ, λ) selection. Again, their description can be found in Ref. [89].
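The full workflow — generation, evaluation, tournament parent selection, one-point crossover, bit-flip mutation, and (μ + λ) survivor selection — fits in a short sketch. The binary "one-max" fitness used here (maximize the number of ones) is a hypothetical toy problem, not a structural-prediction fitness function.

```python
import random

def genetic_algorithm(fitness, length, mu=20, lam=20, generations=50,
                      p_mut=0.05, seed=0):
    """Minimal genetic algorithm over binary genotypes, following the
    workflow described above."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < lam:
            # tournament parent selection: best of 3 random individuals
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):              # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] = 1 - child[i]
            offspring.append(child)
        # (mu + lambda) survivor selection: keep the mu fittest
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return max(pop, key=fitness)

best = genetic_algorithm(sum, length=20)     # "one-max" toy fitness
```

Because the (μ + λ) selection never discards the current best individual, the best fitness in the population is monotonically non-decreasing, a simple form of elitism.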

### 2.2

### Molecular dynamics

Another kind of simulation that remains particularly relevant in the fields of materials science and quantum chemistry is the molecular dynamics (MD) simulation, as it allows one to obtain dynamic properties of many-particle systems.

### 2.2.1

### Overview of molecular dynamics simulations

MD simulations have been used extensively to model statistical ensembles and to study numerous properties, such as thermal conductivity [94, 95], critical temperatures [96–98], the folding of proteins [99–101], and dynamical stability [102], among others. The term MD designates the solution of the classical equations of motion (usually Newton's equations) for a set of molecules [18], and was first associated with the simulation of a system of hard spheres by Alder and Wainwright [103, 104]. Usually the equations of motion are integrated using the Störmer-Verlet algorithm [105, 106] (in particular velocity Verlet), as this method for solving differential equations possesses three important properties: reversibility, symmetry, and symplecticity. When solving equations of motion, reversibility broadly means that inverting the initial velocities only changes the direction of the movement; symmetry, that if we reverse time at any step, we can return to the initial position; and symplecticity, that the energy is nearly conserved provided that a sufficiently small time step is used. Symplecticity also implies that the volume in phase space is conserved by the flow (one-step map).
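The velocity Verlet variant of the Störmer-Verlet scheme, written here for a single degree of freedom, can be sketched as follows; applied to a harmonic oscillator, the near-conservation of energy is easy to verify.

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Velocity Verlet integration of Newton's equation for one degree of
    freedom; returns the final position and velocity."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * f / mass * dt * dt      # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt          # velocity update
        f = f_new
    return x, v

# Harmonic oscillator with m = k = 1: the total energy should stay near 0.5
x_f, v_f = velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.01, 1000)
energy_drift = abs(0.5 * v_f**2 + 0.5 * x_f**2 - 0.5)
```

The energy error stays bounded (it oscillates at order Δt² instead of drifting), which is the practical face of the symplecticity property discussed above.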

The Störmer-Verlet integration of the equations of motion leads to simulations that preserve the volume, the total energy, and the number of atoms of a system, i.e., simulations that reproduce a microcanonical ensemble. In a similar fashion, other statistical ensembles can be reproduced by adding extra terms to the equations of motion, allowing for the preservation of other quantities, such as temperature and pressure. Physically, this entails the coupling of the system to an external heat bath or its placement in a pressure bath. In practice, these constraints are achieved with thermostats and barostats. To simulate a canonical ensemble, where the temperature, the volume, and the number of particles remain constant, we can resort to the Langevin [18, 107], the Nosé-Hoover [108], the Andersen [109], or the Berendsen thermostats. On the other hand, the pressure can be preserved by resorting to barostats such as the Nosé-Hoover [18], the Berendsen [110], and the Parrinello-Rahman [111, 112] barostats. Furthermore, by coupling both a thermostat and a barostat, simulations can preserve the number of atoms, the pressure, and the temperature of a system, i.e., they can reproduce an isothermal-isobaric (or NPT) ensemble. The next section describes the thermostat and barostat used in the MD simulations of section 4.5.2.

### 2.2.2

### Berendsen thermostat and barostat

Coupling a system to a heat bath can be accomplished by inserting stochastic and friction terms in the equations of motion. Starting from Newton's equation of motion, this results in the Langevin equation

$$m_i \dot{v}_i = F_i - m_i \gamma_i v_i + R_i(t), \qquad (2.4)$$

where γ_i represents damping constants and R_i a Gaussian stochastic variable with zero mean and intensity

$$\langle R_i(t)\, R_j(t+\tau) \rangle = 2 m_i \gamma_i k T_0\, \delta(\tau)\, \delta_{ij}. \qquad (2.5)$$

The idea behind the Berendsen thermostat is to consider how such a coupling affects the temperature T of the system. According to the equipartition theorem, this can be obtained by calculating the time derivative of the kinetic energy. After some integrations, this results in

$$\frac{dE_k}{dt} = \sum_{i=1}^{3N} v_i F_i + 2\gamma \left( \frac{3N}{2} k T_0 - E_k \right), \qquad (2.6)$$

which contains the derivative of the potential energy and an additional term describing the coupling to the heat bath. In terms of temperature, this extra term can be written as

$$\left( \frac{dT}{dt} \right)_{\text{bath}} = 2\gamma\, (T_0 - T). \qquad (2.7)$$

However, a similar variation can be obtained by simply changing the equations of motion to

$$m_i \dot{v}_i = F_i + m_i \gamma \left( \frac{T_0}{T} - 1 \right) v_i, \qquad (2.8)$$

where 2γ can be seen as the inverse of a coupling constant (τ_T = 1/2γ). Finally, this corresponds to a proportional scaling of the velocities at every time step (v → λv) with

$$\lambda = \left[ 1 + \frac{\Delta t}{\tau_T} \left( \frac{T_0}{T} - 1 \right) \right]^{1/2}. \qquad (2.9)$$
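Since rescaling all velocities by λ multiplies the kinetic temperature by λ², the scaling of eq. (2.9) reproduces exactly the exponential relaxation of eq. (2.7); a few lines suffice to see this.

```python
import math

def berendsen_lambda(temp, temp0, dt, tau_t):
    """Berendsen velocity-scaling factor of eq. (2.9)."""
    return math.sqrt(1.0 + dt / tau_t * (temp0 / temp - 1.0))

# Rescaling v -> lambda * v multiplies the kinetic temperature by lambda**2,
# so T relaxes exponentially toward the bath temperature T0:
temp, temp0, dt, tau_t = 100.0, 300.0, 0.5, 10.0
for _ in range(200):
    temp *= berendsen_lambda(temp, temp0, dt, tau_t) ** 2
```

Note that λ > 1 heats the system when T < T₀ and λ < 1 cools it when T > T₀, with the relaxation rate set entirely by Δt/τ_T.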

Similarly, the coupling to a constant-pressure bath can be achieved by adding to the equations of motion a term that alters the pressure according to

$$\left( \frac{dP}{dt} \right)_{\text{bath}} = \frac{P_0 - P}{\tau_P}. \qquad (2.10)$$

This term is nothing more than a simple proportional coordinate scaling that changes the equations of motion to

$$\dot{x} = v + \alpha x, \qquad (2.11)$$

where α is obtained from the time derivative of the pressure,

$$\frac{dP}{dt} = -\frac{1}{\beta V} \frac{dV}{dt} = -\frac{3\alpha}{\beta}, \qquad (2.12)$$

where β is the isothermal compressibility. We note that the pressure is calculated using the virial theorem [38, 39]:

$$P = \frac{1}{V} \left( N k_B T + \frac{1}{3} \sum_i \langle r_i \cdot f_i \rangle \right). \qquad (2.13)$$

Thus α =−β (P0− P ) /3τP. Finally, this change in the equation of motion is equivalent to
a proportional scaling of the coordinates and box length at every time step (x _{→ µx) with}
(up to first order)

µ = 1− β∆t 3τP

(P0− P ) . (2.14)

Here we showed the equations for the barostat and the thermostat of Berendsen, but not the algorithm to incorporate them successfully. This can be found in Ref. [110].
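For completeness, the first-order scaling factor of eq. (2.14) behaves as one would expect: it shrinks the box when the pressure is below the target and expands it otherwise. The numbers below are arbitrary illustrative values.

```python
def berendsen_mu(pressure, p0, dt, tau_p, beta_t):
    """Berendsen coordinate/box scaling factor of eq. (2.14);
    beta_t is the isothermal compressibility."""
    return 1.0 - beta_t * dt / (3.0 * tau_p) * (p0 - pressure)

# Pressure below target -> mu < 1 (compress); above target -> mu > 1 (expand)
mu_low = berendsen_mu(pressure=0.5, p0=1.0, dt=0.5, tau_p=10.0, beta_t=0.05)
mu_high = berendsen_mu(pressure=1.5, p0=1.0, dt=0.5, tau_p=10.0, beta_t=0.05)
```

As noted above, only a rough estimate of β is needed, since it merely rescales the effective coupling strength β/τ_P.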

The coupling constants of this thermostat and barostat are double-edged swords, and the choice of their values is quite critical for the success of the simulation. Low values can be used for thermalization purposes, but they can lead to instabilities or wrong fluctuations. On the other hand, large values can lead to wrong oscillations. In particular, the Berendsen thermostat suppresses fluctuations of the kinetic energy, with a corresponding error that scales with the inverse of the number of atoms. So, most ensemble averages will remain unaffected for very large systems; however, this is not the case for fluctuation properties, such as the heat capacity [113]. Meanwhile, the Berendsen barostat provides correct average pressures (even with only a rough estimate for the isothermal compressibility) but may not reproduce the correct NPT ensemble. Therefore, the values of the coupling constants have to be chosen with care in order to obtain realistic fluctuations [18, 110, 113].

## Chapter 3

## Machine learning in material science

The fact that the price must be paid is proof it is worth paying.

Robert Jordan, The Eye of the World

Price... In the previous chapter we discussed two problems, or types of simulations, that require the accurate and efficient calculation of energies and their derivatives. Furthermore, we mentioned that the number of calculations required to solve these problems might become too cumbersome even for a method as efficient as DFT. Materials science researchers have faced these problems before and found several satisfying solutions, which usually involve a price. Often an increase in efficiency can be gained from the application of methods constructed under more lenient conditions, which are less accurate than DFT. For example, classical force fields are the standard tool to calculate energies and forces in MD simulations. However, the quest for more accurate and efficient methods led researchers to the field of machine learning. Furthermore, machine learning provides tools to extract information from huge chunks of data in ways that far surpass the capabilities of the human mind.

In this chapter we discuss machine learning. We start with an overview that presents the basics, the most successful applications, and the different categories of machine learning. We then turn to applications in the field of material science. We describe its basic tenets: data, features, and algorithms (in particular neural networks). Finally, we present the most recent and most successful applications in the field of material science. The research presented here was published in Ref. [23].

### 3.1

### Overview of machine learning

Machine learning [19, 21] consists of a collection of statistical models that search for patterns in order to extract the information necessary to perform a specific task, without resorting to explicit instructions. Currently, machine learning algorithms are highly sought after and well regarded due to their accomplishments in regression, classification, clustering, dimensionality reduction, optimal experimental design, and decision making. In fact, machine learning algorithms have proved extremely successful at extracting information and relations from large amounts of high-dimensional data. Furthermore, these algorithms manage to surpass human capabilities and obtain outstanding results in several fields, such as

face recognition [114–117], image classification [118], driving cars [119], and playing games: Atari [120], Go [121], chess [122], and others [123, 124]. These algorithms have even spread to many of our daily-life activities, for example image and speech recognition [125, 126], credit scoring [127], fraud detection [128], web searches [129], email/spam filtering [130], and many, many others.

Meanwhile, the application of these algorithms has also extended to the fields of biology and chemistry, where they obtained remarkable results [131, 132]. In solid-state material science, however, this integration has been slower. Machine learning algorithms also boast several accomplishments in this field [133–138], in particular in the chemical sciences [139], in the materials design of thermoelectrics and photovoltaics [140], in the development of lithium-ion batteries [141], and in atomistic simulations [142]. Nonetheless, many of the published applications remain very basic and lack complexity. Often they involve small training sets or simple fitting procedures that do not require this kind of technique. This means that these applications do not take full advantage of the power of machine learning and, consequently, are unable to replicate its accomplishments in other fields. Fortunately, this is changing quite rapidly, on par with the growing interest in these techniques. In section 3.5 we mention more applications of machine learning in the field of material science.

Different types of machine learning algorithms differ in their approach to solving a certain problem or task, in the task itself, and in their input and output. This suggests organizing machine learning algorithms into three main categories: supervised learning, unsupervised learning, and reinforcement learning.

The most widespread type of learning is certainly supervised learning, which comes as no surprise given its similarity to a simple fitting procedure. In supervised learning, models take advantage of a collection of data, containing both the input and the target property (or response), to find hidden patterns in the data and construct a function that can predict an unknown target property from a given input. Unsupervised learning, on the other hand, includes algorithms whose objective is the categorization of data based on similarities found among the inputs. This type of learning is not conditioned, in the sense that the outputs (or categories) are not known a priori. Finally, reinforcement learning encompasses goal-oriented algorithms [143, 144]. In these algorithms, an agent in a certain state interacts with its environment by performing an action. This interaction results in a reward, which the agent intends to maximize. With successive actions, the agent improves its policy, i.e., the strategy that determines the next action based on the current state and the expected reward.
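As a toy illustration of this agent–environment loop (not an example from this work), the sketch below implements an ε-greedy agent on a two-armed bandit; the reward probabilities, the value of ε, and all function names are illustrative assumptions:

```python
import random

def run_bandit(reward_probs, steps=2000, epsilon=0.1, seed=0):
    """Minimal epsilon-greedy agent on a multi-armed bandit.

    The agent's 'policy' is: exploit the arm with the best running
    average reward, but explore a random arm with probability epsilon.
    Each pull of arm a pays 1 with probability reward_probs[a].
    """
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)  # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(reward_probs))  # explore
        else:
            # exploit: pick the arm with the best estimate so far
            arm = max(range(len(reward_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# The agent should learn that the second arm (p = 0.8) pays better.
est = run_bandit([0.2, 0.8])
```

After a few thousand interactions the running averages approach the true reward probabilities, and the greedy policy settles on the better arm; this is the simplest instance of improving a policy from rewards alone.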

It is worth mentioning that techniques from different categories can overlap. For example, semi-supervised learning lies between supervised and unsupervised learning. These are algorithms that take advantage of some data containing the target properties to improve their performance in the identification of unlabelled data (as in unsupervised learning). This is particularly useful for learning representations [145].

Here, we focus on the workflow of supervised learning algorithms (see fig. 3.1). As mentioned above, this is the most common type of learning, especially in the field of material science. Furthermore, it is the type of algorithm used for the work described in this thesis.

The first step in any machine learning application consists in the generation of data. This can happen in a plethora of ways, for example by performing calculations or simply by collecting observations.

[Figure: flowchart of the supervised learning workflow — generation or collection of labeled data; cleaning of the data; feature extraction and feature engineering; selection of the algorithm; splitting into training, validation, and test sets; training; refinement of the model (optimizing hyperparameters, changing features, ...); final model evaluation; and predictions for new data.]
Figure 3.1: Supervised learning workflow.

Usually this step involves a lot of work, as the data has to be collected and cleaned before one can obtain varied, consistent, and accurate data sets. In section 3.2 we discuss this topic further in connection with material science. As mentioned, in supervised learning the data is labelled: it contains both the target property and the information required (by an algorithm) to compute said property. Often, as the data is generated, the machine learning algorithm that will fit it is also chosen. We mention some of these algorithms in section 3.4 and, in particular, we describe neural networks. After this selection follows the extraction of the relevant information from the data and its processing, in order to provide the chosen algorithm with suitable inputs, called features or descriptors, for the task at hand. A more detailed discussion of these features follows in section 3.3. For now it is enough to understand that the raw information requires processing so that it can be understood by the machine (learning) model.

Once all of these tasks are accomplished, the model is trained to learn, i.e., to identify and reproduce, the intricacies of the data. Normally this entails the minimization of a cost function that evaluates the performance of the model on a subset of the data, the training set.
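As a minimal sketch of this training step (with synthetic labeled data and a plain linear model standing in for a real machine learning algorithm), the following fits y = wx + b by gradient descent on a mean-squared-error cost function evaluated on the training set, and then checks the result on a held-out test set; all names and numerical choices here are illustrative:

```python
import random

def train_linear(xs, ys, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on the mean-squared-error
    cost function, evaluated on the training set only."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def mse(w, b, xs, ys):
    """Mean-squared error of the model on a data set."""
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic labeled data: target property y = 3x + 1 plus small noise.
rng = random.Random(42)
data = [(x / 10, 3 * x / 10 + 1 + rng.gauss(0, 0.05)) for x in range(30)]
rng.shuffle(data)
train, test = data[:24], data[24:]  # hold out a test set

w, b = train_linear([x for x, _ in train], [y for _, y in train])
test_error = mse(w, b, [x for x, _ in test], [y for _, y in test])
```

The essential point carries over unchanged to neural networks: only the model, the cost function, and the minimizer become more elaborate, while the separation into training and held-out data remains the safeguard against simply memorizing the training set.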