Electronic Structure and Statistical Methods Applied to Nanomagnetism, Diluted Magnetic Semiconductors and Spintronics

N/A
N/A
Protected

Academic year: 2021

Share "Electronic Structure and Statistical Methods Applied to Nanomagnetism, Diluted Magnetic Semiconductors and Spintronics"

Copied!
82
0
0

Loading.... (view fulltext now)

Full text


Dedicated to Matt Groening and !K7 Records.


List of Papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I. Defect-Induced Magnetic Structure in (Ga1−xMnx)As
   P. A. Korzhavyi, I. A. Abrikosov, E. A. Smirnova, L. Bergqvist, P. Mohn, R. Mathieu, P. Svedlindh, J. Sadowski, E. I. Isaev, Yu. Kh. Vekilov and O. Eriksson
   Physical Review Letters 88, 187202 (2002).

II. Magnetic and electronic structure of (Ga1−xMnx)As
    L. Bergqvist, P. A. Korzhavyi, B. Sanyal, S. Mirbt, I. A. Abrikosov, L. Nordström, E. A. Smirnova, P. Mohn, P. Svedlindh and O. Eriksson
    Physical Review B 67, 205201 (2003).

III. Ferromagnetism in diluted magnetic semiconductors: A comparison between ab initio mean-field, RPA, and MC treatments
     G. Bouzerar, J. Kudrnovský, L. Bergqvist and P. Bruno
     Physical Review B 68, 081203 (2003).

IV. Magnetic percolation in diluted magnetic semiconductors
    L. Bergqvist, O. Eriksson, J. Kudrnovský, V. Drchal, P. A. Korzhavyi and I. Turek
    Physical Review Letters 93, 137202 (2004).

V. Magnetic properties and disorder effects in diluted magnetic semiconductors
   L. Bergqvist, O. Eriksson, J. Kudrnovský, V. Drchal, A. Bergman, L. Nordström and I. Turek
   Submitted to Physical Review B.

VI. Electronic structure and magnetism of diluted magnetic semiconductors
    O. Eriksson, L. Bergqvist, B. Sanyal, J. Kudrnovský, V. Drchal, P. Korzhavyi and I. Turek
    Journal of Physics: Condensed Matter 16, S5481 (2004).

VII. Exchange interactions and critical temperatures in diluted magnetic semiconductors
     J. Kudrnovský, V. Drchal, I. Turek, L. Bergqvist, O. Eriksson, G. Bouzerar, L. M. Sandratskii and P. Bruno
     Journal of Physics: Condensed Matter 16, S5571 (2004).

VIII. Ferromagnetic materials in the zinc-blende structure
      B. Sanyal, L. Bergqvist and O. Eriksson
      Physical Review B 68, 054417 (2003).

IX. Exchange interactions and Curie temperatures in NiMnSb and Ni2MnSb compounds
    J. Rusz, L. Bergqvist, J. Kudrnovský and I. Turek
    Submitted to Physical Review B.

X. Electronic structure and magnetism of diluted magnetic semiconductors and derivatives
   B. Sanyal, L. Bergqvist, O. Eriksson and B. Johansson
   Journal of Magnetism and Magnetic Materials 272-276, 1581 (2004).

XI. Magnetism of Fe/V and Fe/Co multilayers
    O. Eriksson, L. Bergqvist, E. Holmström, A. Bergman, O. LeBacq, S. Frota-Pessoa, B. Hjörvarsson and L. Nordström
    Journal of Physics: Condensed Matter 15, 599 (2003).

XII. Structural and magnetic aspects of multilayer interfaces
     E. Holmström, L. Bergqvist, B. Skubic and O. Eriksson
     Journal of Magnetism and Magnetic Materials 272-276, 941 (2004).

XIII. On the sharpness of the interfaces in metallic multilayers
      E. Holmström, L. Nordström, L. Bergqvist, B. Skubic, B. Hjörvarsson, I. A. Abrikosov, P. Svedlindh and O. Eriksson
      Proceedings of the National Academy of Sciences 101, 4743 (2004).

XIV. Theory of weakly coupled two-dimensional magnets
     L. Bergqvist and O. Eriksson
     Submitted to Physical Review Letters.

XV. Conditions for Noncollinear Instabilities of Ferromagnetic Materials
    R. Lizzáraga, L. Nordström, L. Bergqvist, A. Bergman, E. Sjöstedt, P. Mohn and O. Eriksson
    Physical Review Letters 93, 107205 (2004).

XVI. Crystal and magnetic structure of Mn3IrSi
     T. Eriksson, R. Lizzáraga, S. Felton, L. Bergqvist, Y. Andersson, P. Nordblad and O. Eriksson
     Physical Review B 69, 054422 (2004).

XVII. Structural and magnetic characterization of Mn3IrGe and Mn3Ir(Si1−xGex): experiments and theory
      T. Eriksson, L. Bergqvist, P. Nordblad, O. Eriksson and Y. Andersson
      Journal of Solid State Chemistry 177, 4058 (2004).

XVIII. Magnetic properties of selected Mn based transition metal compounds with β-Mn structure: experiments and theory
       T. Eriksson, L. Bergqvist, O. Eriksson, P. Nordblad and Y. Andersson
       Submitted to Physical Review B.

XIX. Cycloidal magnetic order in the compound IrMnSi
     T. Eriksson, L. Bergqvist, T. Burkert, S. Felton, P. Nordblad, O. Eriksson and Y. Andersson
     Accepted in Physical Review B 71 (2005).

XX. Spin wave dispersion and Curie temperature in YFe2 and UFe2
    L. M. Sandratskii, P. Bruno, L. Bergqvist and O. Eriksson
    Submitted to Physical Review B.

Reprints were made with permission from the publishers.


Contents

1 Introduction
2 Density Functional Theory
  2.1 The many-body problem
  2.2 The local density (LDA) and the generalized gradient approximations (GGA)
  2.3 Spin density functional theory
  2.4 Non-collinear magnetism
    2.4.1 Spin spirals
3 Computational methods
  3.1 Bloch's theorem
  3.2 Basis set expansion
  3.3 The LMTO-ASA and the KKR-ASA methods
  3.4 KKR-ASA-CPA Green's function method
4 Monte Carlo simulations
  4.1 Introduction
  4.2 Thermodynamic Averages
  4.3 Importance Sampling and Metropolis Algorithm
  4.4 Practical Implementation of the Metropolis Algorithm
  4.5 Finite Size Scaling Theory
5 Magnetism at finite temperatures
  5.1 Exchange interactions
6 Spintronics and diluted magnetic semiconductors
  6.1 Introduction
  6.2 Diluted Magnetic Semiconductors (DMS)
  6.3 The Heisenberg model on a diluted random spin system
  6.4 Disorder and percolation in DMS
  6.5 Half-metallic ferromagnets
  6.6 Perspectives and outlook
7 Magnetic nanostructures and nanomagnetism
  7.1 Introduction
  7.2 Fe/Co and Fe/V multilayers
  7.3 Ni4/CuN/Co2 trilayers

8 Noncollinear magnetism
  8.1 Introduction
  8.2 Noncollinear magnetism on a triangular lattice
  8.3 Magnetism of complex Mn-based compounds
  8.4 Magnetism of MnIrSi
  8.5 Magnetism of YFe2 and UFe2
  8.6 Perspectives and Outlook
9 Summary in Swedish
10 Acknowledgments
Appendix
A Parallel computing
  A.1 Introduction
  A.2 Parallel implementation of the Metropolis algorithm
  A.3 Parallel implementation of the noncollinear TB-LMTO-ASA method

1 Introduction

We all use solid materials in our everyday life, so they are obviously very important to mankind. Today we can, by using powerful computers as a tool, simulate and predict properties of solid materials on a theoretical basis by applying quantum mechanics. The introduction of quantum mechanics at the beginning of the 20th century provided a deeper insight into how atoms combine into molecules and solids, why solids can be insulators, metals or semiconductors, why certain materials are magnetic, and so on. Calculations not only predict properties of existing materials that may be confirmed by experiments; they can also be used to study materials that do not yet exist in nature and to predict novel and exotic properties.

Magnetic materials have been known to mankind for a very long time. The most famous example to the general public is probably the compass needle, but today magnets are used everywhere: as permanent magnets in the electrical motors of our cars, in the hard drives of our computers and mp3 players, in loudspeakers, in credit cards, and so on. Meanwhile, semiconductor-based devices that use the charge of individual electrons as the information carrier have been used extensively during the last century in integrated devices such as processors, memory modules and diodes, found in computers, amplifiers, cell phones and more. The performance of these devices has increased by several orders of magnitude over the last decades, thanks to developments in, among other things, manufacturing and fabrication, which make it possible to shrink the dimensions of the transistors and pack them more densely in order to make faster devices. However, this route to improved performance now faces severe difficulties, because the dimensions are becoming so small that classical physics is no longer applicable and quantum effects start to become important. Of course, this has been known for a long time, and so far the industry has found clever new ways to overcome the problem and make the devices go faster. This development will eventually come to an end, when it is no longer possible to shrink the dimensions further.

It is here that spintronics enters. The idea is to make devices in which quantum mechanics is actually employed and not fought against. Electrons not only carry electrical charge but also spin, a purely relativistic quantum effect that cannot be explained by classical physics. In spintronic devices it is the spin that is manipulated, instead of the charge.

This thesis mainly presents theoretical calculations of magnetic materials and studies of novel materials aimed at future applications in spintronics. Emphasis is put on the properties at finite temperatures, but several other aspects are investigated as well. Calculations have been performed both on bulk systems and on low-dimensional systems, such as multilayers and trilayers. The main body of the calculations is based on two different methods: density functional theory and Monte Carlo simulations. Density Functional Theory (DFT) is a method that tries to find a simple solution to the full many-body quantum mechanical problem, often with the use of advanced computations. DFT has been so successful in describing many physical problems that the main person behind the theory, W. Kohn, was awarded the Nobel Prize in 1998. Although successful, DFT introduces several approximations and must be used with care. At present, DFT is not strictly applicable at finite temperatures, so another method is needed to describe them. In this work, Monte Carlo simulations have been employed, where the relevant parameters are calculated from DFT.

The thesis is organized as follows. In Chapter 2, a short introduction to DFT is given, along with the implementation of magnetism in the programs, including noncollinear magnetism. In Chapter 3 the implementations of the DFT programs are described. The second computational method employed is the Monte Carlo method, which is covered in some detail in Chapter 4. Magnetism at finite temperatures is outlined in Chapter 5. In Chapters 6-8, some selected results are given; Chapter 6 deals with spintronics, Chapter 7 is about nanomagnetism and Chapter 8 treats noncollinear magnetism in selected Mn-based compounds. During these more than five years of research, some work has been done that cannot be published as a regular paper but still needs to be documented in some form. Code development of our computer programs is such an example, and in the Appendix some of this work is presented along with an introduction to parallel computing. It is of a much more technical nature than the rest of the chapters and may be valuable for other people in the field, but it might be of less interest to a more general reader.

2 Density Functional Theory

2.1 The many-body problem

According to quantum mechanics, all information about a system of interacting electrons and nuclei is contained in the many-body wavefunction Ψ, which can be obtained by solving the many-body Schrödinger equation

  H\Psi = E\Psi,   (2.1)

where H is the Hamiltonian operator and E the total energy of the system. The Hamiltonian operator contains the motion of each individual electron and nucleus for every atom in the system, and it has the form (in Rydberg atomic units, where \hbar = 1, m_e = 1/2 and e^2 = 2)

  H = -\sum_i \frac{\nabla^2_{R_i}}{2M_i} - \sum_i \nabla^2_{r_i} - \sum_{i,j} \frac{2Z_j}{|r_i - R_j|} + \sum_{i \neq j} \frac{1}{|r_i - r_j|} + \sum_{i \neq j} \frac{Z_i Z_j}{|R_i - R_j|},   (2.2)

where r_i and R_i are the coordinates of the electrons and nuclei, respectively, and Z denotes the atomic number. The first two terms in the Hamiltonian are kinetic operators acting on nuclei and electrons, respectively. The last three terms are Coulomb interactions between electrons and nuclei, electrons and electrons, and nuclei and nuclei.

A macroscopic solid consists of N ≈ 10^23 electrons and nuclei, which makes Eq. (2.1) impossible to solve in realistic cases. Therefore several approximations need to be introduced. In solid state physics, the very important Born-Oppenheimer approximation is commonly used; it allows us to decouple the Hamiltonian into an electronic and a nuclear part and treat them separately, i.e. to solve the electronic part in a fixed external potential generated by the nuclei. The first term in Eq. (2.2) is then removed and the last term is replaced by a constant.

In practice, a lot of information about the many-body system can be obtained from the total energy E and the electron density n(r). That is exactly what the Hohenberg-Kohn theorem [1] states within Density Functional Theory (DFT). Kohn and coworkers showed rigorously that the total energy is a unique functional of the electron density n(r) and that it has its minimum at the ground-state density.

This simplifies the many-body problem enormously, since the theorem lets us focus on the electron density, which depends on 3 spatial variables, instead of the many-body wavefunction, which depends on 3N spatial variables (spin excluded). It is then possible to construct a total energy functional E[n(r)] of the electron density for the many-body system. Usually, this functional for the interacting electrons in the fixed external potential is written as

  E[n(r)] = F[n(r)] + \int d^3r\, v_{ext}(r)\, n(r),   (2.3)

where F[n(r)] is a universal functional that does not depend on v_{ext}(r). To obtain a practical scheme from these ideas, Kohn and Sham [2] showed that instead of solving the many-body equation, Eq. (2.1), it suffices to solve an effective one-particle equation

  \left[-\nabla^2 + v_{eff}(r)\right]\psi_i(r) = \varepsilon_i\, \psi_i(r).   (2.4)

The effective potential v_{eff}(r) has the form

  v_{eff}(r) = v_{ext}(r) + 2\int d^3r'\, \frac{n(r')}{|r - r'|} + v_{xc}(r),   (2.5)

where the first part is the external potential generated by the nuclei, the second part is the Hartree term originating from electron-electron interactions, and the last part is the exchange-correlation potential containing all many-body effects. The density is constructed using

  n(r) = \sum_{i=1}^{N} |\psi_i(r)|^2.   (2.6)

The set of equations (2.4)-(2.6) represents the Kohn-Sham equations. The Kohn-Sham equation, Eq. (2.4), can be viewed as a Schrödinger equation in which the external potential is replaced by the effective potential, Eq. (2.5), which depends on the density. The density itself depends on the one-particle states ψ_i. The Kohn-Sham equation therefore needs to be solved in a self-consistent manner.

The functional F[n] in Eq. (2.3) has the form

  F[n] = T_s[n] + \int d^3r\, d^3r'\, \frac{n(r)\, n(r')}{|r - r'|} + E_{xc}[n],   (2.7)

where the kinetic energy functional T_s[n] has the form

  T_s[n] = \sum_i \varepsilon_i - \int d^3r\, v_{eff}(r)\, n(r).   (2.8)

The whole total energy functional is then obtained by combining Eqs. (2.3),

(2.5), (2.7) and (2.8), and has the form

  E[n] = \sum_i \varepsilon_i - \int d^3r\, d^3r'\, \frac{n(r)\, n(r')}{|r - r'|} - \int d^3r\, v_{xc}(r)\, n(r) + E_{xc}[n].   (2.9)

The exact exchange-correlation potential v_{xc} and functional E_{xc}[n] are, however, not known, and further approximations are needed. If they were known, all many-body effects would be included in the total energy functional and the whole treatment would be exact.

2.2 The local density (LDA) and the generalized gradient approximations (GGA)

In order to perform any real calculations based on the Kohn-Sham ansatz, an approximation of the exchange-correlation functional has to be made. The most common and widely used approximation is the local density approximation (LDA), where the exchange-correlation energy density is assumed to be the same as in a homogeneous electron gas with that density,

  E_{xc}^{LDA}[n] = \int d^3r\, n(r)\, \varepsilon_{xc}^{hom}(n(r)),   (2.10)

where \varepsilon_{xc}^{hom}(n(r)) is the sum of the exchange and the correlation energy of the homogeneous electron gas of density n(r). The exchange energy can be calculated analytically, and the correlation energy has been calculated numerically to great accuracy with quantum Monte Carlo methods [3]. The exchange-correlation potential v_{xc}^{LDA}(r) is the functional derivative of E_{xc}^{LDA}, which can be written as

  v_{xc}^{LDA}(r) = \varepsilon_{xc}^{hom}(n(r)) + n(r)\, \frac{\delta \varepsilon_{xc}^{hom}([n], r)}{\delta n(r)}.   (2.11)

Although the local density approximation is rather simple and expected to be valid only for homogeneous cases, it turns out that it usually works remarkably well even for inhomogeneous cases. However, for solids LDA very often gives too small equilibrium volumes (∼3%) due to overbinding. An improvement over LDA is in many cases the generalized gradient approximation (GGA), where not only the density itself enters the exchange-correlation energy but also its local gradients,

  E_{xc}^{GGA}[n] = \int d^3r\, n(r)\, \varepsilon_{xc}(n(r), |\nabla n|).   (2.12)
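The LDA functional of Eq. (2.10) is simple enough to evaluate directly once a density is given on a real-space grid. The following sketch is not part of the thesis; it is a minimal illustration that keeps only the analytically known exchange part of ε_xc^hom, written in Rydberg units as ε_x(n) = -(3/2)(3n/π)^(1/3), and integrates it over an arbitrary Gaussian test density.

```python
import numpy as np

def eps_x_hom(n):
    """Exchange energy per electron of the homogeneous electron gas (Rydberg units)."""
    return -1.5 * (3.0 * n / np.pi) ** (1.0 / 3.0)

def exc_lda_exchange_only(n, dV):
    """Discretized Eq. (2.10), exchange-only: E_x = sum_r n(r) eps_x(n(r)) dV."""
    n = np.asarray(n)
    return np.sum(n * eps_x_hom(n)) * dV

# Illustrative test density: a Gaussian blob on a cubic grid (not a real material).
L, N = 10.0, 40                       # box size (a.u.) and grid points per axis
x = np.linspace(-L / 2, L / 2, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
n = np.exp(-(X**2 + Y**2 + Z**2))     # arbitrary smooth density
dV = (L / (N - 1)) ** 3
print("E_x^LDA =", exc_lda_exchange_only(n, dV), "Ry")
```

In an actual DFT code the same integral is of course evaluated with the self-consistent density and a parameterized correlation contribution added to the exchange part.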

For the GGA, however, there does not exist a unique functional, and many different forms have been suggested. Perhaps the most successful ones are those suggested by Perdew and Wang (PW91) [4] and its simpler form by Perdew, Burke and Ernzerhof (PBE96) [5]. Both keep the good part of LDA, namely the sum rules for the exchange-correlation hole, and then add gradient corrections without violating the basic properties of LDA. In many cases GGA provides an improvement over LDA, especially regarding equilibrium volumes, which are closer to experimental values.

2.3 Spin density functional theory

The discussion has so far been valid only for non-magnetic systems. However, it is possible to extend the formalism to handle spin-polarized systems, which was done by von Barth and Hedin [6] in spin-density functional theory. Formally, this is done by replacing the density n(r) by a generalized density matrix

  \rho(r) = \frac{n(r)}{2}\,\mathbf{1} + \frac{m(r)}{2}\cdot\boldsymbol{\sigma}.   (2.13)

Here 1 is the 2×2 unit matrix, m(r) the magnetization density and σ = (σ_x, σ_y, σ_z) are the Pauli spin matrices. An important consequence is that each one-electron state must be represented as a spinor function

  \psi_i(r) = \begin{pmatrix} \alpha_i(r) \\ \beta_i(r) \end{pmatrix},   (2.14)

where α_i and β_i are the two spin projections. Moreover, all operators need to be represented as 2×2 matrices. The explicit forms of the charge and magnetization densities are then

  n(r) = \sum_{i=1}^{N} |\psi_i(r)|^2, \qquad m(r) = \sum_{i=1}^{N} \psi_i^{\dagger}(r)\, \boldsymbol{\sigma}\, \psi_i(r),   (2.15)

and the density matrix has the form

  \rho(r) = \sum_{i=1}^{N} \begin{pmatrix} |\alpha_i(r)|^2 & [\alpha_i^*(r)\beta_i(r)]^* \\ \alpha_i^*(r)\beta_i(r) & |\beta_i(r)|^2 \end{pmatrix}.   (2.16)

The sum of the diagonal elements gives the charge density, while the off-diagonal elements can give rise to noncollinear magnetism, since the two spin projections are allowed to hybridize. The Kohn-Sham equation now takes the general form [7]

  \sum_{\beta} \left[-\delta_{\alpha\beta}\nabla^2 + v^{eff}_{\alpha\beta}(r)\right]\psi_{i\beta}(r) = \varepsilon_i\, \psi_{i\alpha}(r), \qquad \alpha = 1, 2,   (2.17)

where the only non-diagonal part of v^{eff}_{\alpha\beta}(r) is the exchange-correlation potential, which has to be rotated to a local frame of reference in which the exchange-correlation potential is diagonal (Section 2.4). In the case of collinear magnetism, a unique global magnetization axis can be defined, for instance in the z-direction. Then the density matrix and the operators all reduce to a diagonal form. The two spin projections have different potentials and can be solved independently of each other, so that the density matrix is completely described by the scalar quantities n = n↑ + n↓ and m_z = n↑ − n↓.

2.4 Non-collinear magnetism

An essential difference between collinear and non-collinear magnetism is that the latter lacks a global spin quantization axis [7, 8]. In the case of non-collinear magnetism we must keep the spinor formalism and work with 2×2 matrices for the operators. A first approximation, which can be lifted by treating the magnetization density as a vector field, is to consider only inter-atomic noncollinearity, that is, within each muffin-tin or atomic sphere there is a unique quantization axis (which is different for different spheres). A local frame of reference, defined by the Euler angles θ_ν and φ_ν with respect to some global frame of reference, is then introduced in each sphere, labeled by ν, so that the density matrix and the effective potential matrix are diagonal in that local frame. Using the standard spin-1/2 rotation matrix [9]

  U(\theta, \phi) = \begin{pmatrix} \cos\frac{\theta}{2}\, e^{i\phi/2} & \sin\frac{\theta}{2}\, e^{-i\phi/2} \\ -\sin\frac{\theta}{2}\, e^{i\phi/2} & \cos\frac{\theta}{2}\, e^{-i\phi/2} \end{pmatrix},   (2.18)

the effective potential matrix in the global frame of reference can be obtained by a unitary transformation,

  v_{eff}(r) = U^{\dagger}(\theta(r), \phi(r)) \begin{pmatrix} v^{\uparrow}_{eff}(r) & 0 \\ 0 & v^{\downarrow}_{eff}(r) \end{pmatrix} U(\theta(r), \phi(r)).   (2.19)

Here v^{\uparrow}_{eff}(r) and v^{\downarrow}_{eff}(r) are the components of the effective potential matrix in the local frame of reference. The usual parameterizations of the exchange-correlation potential can now be used in that frame. In the case of GGA, modifications are needed that involve the gradient of the spin axis. The Euler angles θ and φ are directly obtained from the density matrix [7]

  \tan\theta = \frac{2\sqrt{(\mathrm{Re}\,\rho_{12})^2 + (\mathrm{Im}\,\rho_{12})^2}}{\rho_{11} - \rho_{22}},   (2.20)

  \tan\phi = -\frac{\mathrm{Im}\,\rho_{12}}{\mathrm{Re}\,\rho_{12}}.   (2.21)

2.4.1 Spin spirals

A particularly important form of non-collinear magnetic structure is the so-called spin spiral. It is a periodic structure where the magnetization density changes both in magnitude and in direction throughout the crystal. An example is shown in Fig. 2.1, and mathematically the spiral is defined by assigning Cartesian coordinates to each magnetic moment [8],

  m = m\left[\cos\phi\sin\theta,\ \sin\phi\sin\theta,\ \cos\theta\right].   (2.22)

Here φ = q · R, where q is the spin-spiral propagation vector, and θ is the azimuthal Euler angle between the magnetic moment and the propagation vector.

Figure 2.1: A spin spiral magnetic structure with propagation vector in the z-direction.

Spin spirals possess special symmetries, formulated within so-called spin-space group operations [10, 11], which make it possible to derive a generalized Bloch theorem (in the absence of spin-orbit coupling) that includes not only translations but also rotations. This theorem makes it possible to treat the system within the chemical unit cell, without the need for a supercell. The eigenstates take the following form:

  \psi_{ik}(r, q) = \begin{pmatrix} \alpha_{ik}(r)\, e^{i(k - q/2)\cdot r} \\ \beta_{ik}(r)\, e^{i(k + q/2)\cdot r} \end{pmatrix},   (2.23)

which has the consequence that the two spin channels are allowed to hybridize.
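Equations (2.18) and (2.19) are easy to verify numerically. The short sketch below is illustrative only (the potential values and angles are arbitrary and the function names are not from any code used in the thesis); it builds the spin-1/2 rotation matrix and rotates a locally diagonal potential into the global frame, where the trace, i.e. the charge part, is left unchanged.

```python
import numpy as np

def spin_rotation(theta, phi):
    """Spin-1/2 rotation matrix U(theta, phi) of Eq. (2.18)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    ep, em = np.exp(1j * phi / 2), np.exp(-1j * phi / 2)
    return np.array([[ c * ep, s * em],
                     [-s * ep, c * em]])

def v_global(v_up, v_dn, theta, phi):
    """Rotate the locally diagonal potential to the global frame, Eq. (2.19)."""
    U = spin_rotation(theta, phi)
    v_local = np.diag([v_up, v_dn]).astype(complex)
    return U.conj().T @ v_local @ U

# A moment tilted by 90 degrees produces purely off-diagonal (spin-mixing) parts.
v = v_global(v_up=-0.5, v_dn=-0.3, theta=np.pi / 2, phi=0.0)
print(np.round(v, 6))
print("trace preserved:", np.isclose(np.trace(v).real, -0.8))
```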


3 Computational methods

3.1 Bloch's theorem

Our goal is to find solutions to the Schrödinger equation for electrons moving in an effective potential V(r) in a crystal,

  \left(-\nabla^2 + V(r)\right)\psi_i(r) = \varepsilon_i\, \psi_i(r).   (3.1)

The translational symmetry of the crystal has the consequence that the potential is invariant under displacement by any lattice translation vector R, i.e.

  V(r + R) = V(r).   (3.2)

The vectors R are expressed in terms of the Bravais lattice vectors a_i, i = 1, 2, 3, found in any textbook on solid-state physics, e.g. Ashcroft-Mermin [12], so that

  R = \sum_i n_i a_i = n_1 a_1 + n_2 a_2 + n_3 a_3,   (3.3)

where the n_i are integers. The solutions of the Schrödinger equation in a periodic potential then satisfy

  \psi_{nk}(r + R) = e^{i k\cdot R}\, \psi_{nk}(r),   (3.4)

where k is a vector in reciprocal space and n is the band index. The last equation is Bloch's theorem.

3.2 Basis set expansion

In order to solve the Kohn-Sham (KS) equation, Eq. (2.4), the wavefunction is expanded in a known basis set {χ_i} in the following form:

  |\psi\rangle = \sum_{i=1}^{N} c_i\, |\chi_i\rangle.   (3.5)

Operators are then treated as matrices and functions as vectors on a computer. The Kohn-Sham equation can then be transformed into a general eigenvalue problem,

  (H - \varepsilon O)\, c = 0,   (3.6)

where c is the vector of coefficients {c_i}, and H and O are the Hamiltonian and overlap matrices, respectively, with matrix elements

  \{H\}_{ij} = \langle \chi_i | H | \chi_j \rangle, \qquad \{O\}_{ij} = \langle \chi_i | \chi_j \rangle.   (3.7)

The eigenvalue problem and the calculation of the matrix elements are the most time-consuming parts of the self-consistent cycle and therefore need to be implemented in an efficient way. These issues are covered in more detail in Section A.3. A very important consequence of the translational periodicity of an infinite crystal is that the basis set can be written as a Bloch sum,

  \chi_i(k, r) = \sum_{T} e^{i k\cdot T}\, \chi_i(r - T),   (3.8)

where T is a primitive translation vector and k is a vector in reciprocal space. This means that, in order to obtain the density in the entire space, the KS equation needs to be solved only for a selected number of k-points lying inside the Brillouin zone (BZ). The equations can be solved separately and independently for each k-point, though an integration over the Brillouin zone is necessary in the end to obtain the band energy.

3.3 The LMTO-ASA and the KKR-ASA methods

The chosen basis set {χ_i} needs, on the one hand, to be mathematically simple in order to facilitate the calculation of matrix elements and, on the other hand, to be suitable for the problem of interest in order to reduce the number of basis functions. In 1975, Andersen [13, 14] constructed a basis set of linear muffin-tin orbitals (LMTO), which are very well adapted to the crystal problem. It is a minimal basis set, which reduces the size of the Hamiltonian matrix and in turn the computational effort. The starting point is to approximate the full crystal potential by a muffin-tin potential V_MT, consisting of a spherically symmetric potential well of radius S near each atomic site and a flat potential outside (the interstitial),

  V_{MT}(r) = \begin{cases} V(r), & r \le S \\ V_{MTZ}, & r \ge S. \end{cases}   (3.9)

One advantage of the chosen form of the potential is that the wavefunctions can be represented in terms of the eigenstates in each region, i.e. spherical harmonics multiplied by a radial function around each atom and spherical

waves in between. The entire problem is then recast into a matching problem at the sphere boundary. More specifically, inside the muffin-tin sphere, where the potential is spherically symmetric, the solution can be separated into a radial and an angular part,

  \varphi_{RL}(\varepsilon, r) = i^l\, \varphi_{Rl}(\varepsilon, r)\, Y_L(\hat r),   (3.10)

where Y_L(r̂) is a spherical harmonic and L = {l, m}. The radial function φ_{Rl}(ε, r) is the numerical solution of the radial scalar-relativistic Dirac equation [7],

  \left[-\frac{1}{r^2}\frac{d}{dr}\left(r^2\frac{d}{dr}\right) + \frac{l(l+1)}{r^2} + \left(V(r) - \varepsilon\right)\left(1 - \frac{V(r) - \varepsilon}{c^2}\right) - \left(V(r) - \varepsilon - c^2\right)^{-1}\frac{dV}{dr}\frac{d}{dr}\right]\varphi_{Rl}(\varepsilon, r) = 0,   (3.11)

where c is the speed of light. The equation is solved using a logarithmic radial grid, where the density of grid points is higher near the nucleus. Outside the muffin-tin sphere (in the interstitial), where the potential is constant, the solution is a linear combination of spherical Bessel functions J_L(κr) and Neumann functions K_L(κr), respectively. Here κ^2 = ε − V_{MTZ} is the kinetic energy.

Using the very important Atomic Sphere Approximation (ASA), which is employed both in the LMTO and in the Korringa-Kohn-Rostoker (KKR-ASA) method (Section 3.4), the radius of the muffin-tin spheres S is chosen equal to the Wigner-Seitz radius S_WS. The muffin-tin spheres then overlap with each other and are space-filling, i.e. the interstitial part of the crystal is neglected. Moreover, in the ASA the kinetic energy κ^2 in the interstitial is chosen equal to zero. If the crystal is not close-packed, it is necessary to include so-called empty spheres to reduce the overlap of the muffin-tin spheres. To a first approximation, one can correct the ASA by adding extra terms to the LMTO or KKR matrices, the so-called combined correction terms [14]. In the ASA, the muffin-tin orbitals (MTO) have the form [15]

  \chi_L^{MTO}(\varepsilon, r) = \begin{cases} N_l(\varepsilon)\, \varphi_L(\varepsilon, r) + P_l(\varepsilon)\, J_L(r), & r \le S, \\ K_L(r), & r \ge S, \end{cases}   (3.12)

where (with w denoting the average Wigner-Seitz radius)

  N_l(\varepsilon) = (2l+1)\left(\frac{w}{S}\right)^{l+1}\frac{1}{\varphi(\varepsilon, S)}\,\frac{1}{l - D_l(\varepsilon)},   (3.13)

  P_l(\varepsilon) = 2(2l+1)\left(\frac{w}{S}\right)^{2l+1}\frac{D_l(\varepsilon) + l + 1}{D_l(\varepsilon) - l},   (3.14)

  D_l(\varepsilon) = \left[\frac{S}{\varphi_l(\varepsilon, r)}\frac{\partial \varphi_l(\varepsilon, r)}{\partial r}\right]_{r=S}.   (3.15)

The function N_l(ε) is the so-called normalization function, P_l(ε) the potential function, and D_l(ε) the logarithmic derivative at the muffin-tin sphere boundary S. The construction ensures that the muffin-tin orbital is continuous and differentiable in all space. The envelope function centered at a muffin-tin sphere at R can be expressed inside a neighboring muffin-tin sphere at R' in terms of regular solutions (Bessel functions) J_{L'}(r_{R'}) as

  K_L(r_R) = -\sum_{L'} S_{RL,R'L'}\, J_{L'}(r_{R'}),   (3.16)

where R ≠ R' and the S_{RL,R'L'} are structure constants, which depend only on the crystal structure. It is possible, by a transformation, to screen the structure matrix and the envelope function in order to minimize the overlap between different muffin-tin orbitals [16, 17]. This is called the tight-binding (TB) representation.

An essential difference between the KKR-ASA and the LMTO method is that the former uses the exact parameterization of the potential function, while in LMTO a linearization is used. The basic idea behind the linearization is to eliminate the energy dependence of the basis set. This can be achieved by approximating the radial solutions by a Taylor expansion to first order,

  \varphi_{Rl}(\varepsilon, r) \approx \varphi_{Rl}(\varepsilon_{\nu}, r) + (\varepsilon - \varepsilon_{\nu})\, \dot\varphi_{Rl}(\varepsilon_{\nu}, r),   (3.17)

where the dot denotes an energy derivative evaluated at the linearization energy ε_ν. The LMTO basis can then be used in the wavefunction expansion, Eq. (3.5), and in the general eigenvalue problem, Eq. (3.6), in order to obtain a solution of the one-particle Schrödinger equation using the variational principle. Alternatively, one can use the fact that all the tails from all other muffin-tin spheres must cancel inside a given muffin-tin sphere. This is the tail cancellation theorem, which gives rise to the KKR-ASA equations of the form

  \det\left[P_l(\varepsilon)\,\delta_{LL'} - S_{LL'}(k)\right] = 0,   (3.18)

where k is a vector in the Brillouin zone. In this way, energy-dependent basis functions, Eq. (3.12), are used in the tail cancellation theorem in the KKR-ASA method, while in the LMTO-ASA method energy-independent basis functions are employed together with the variational principle. Both methods are expected to give similar results in most cases. Both the TB-LMTO-ASA and the KKR-ASA method have been generalized to handle noncollinear magnetism, as described in Section 2.4.

An important modification relative to the collinear case is that the structure constant matrix S_{LL'}(k) in Eq. (3.18), or in the calculation of the matrix elements of the Hamiltonian and overlap matrices, Eq. (3.7), must be rotated from the global frame of reference to the local frame by the unitary transformation

  S_{LL'}(k) = U(\theta(r), \phi(r)) \begin{pmatrix} S_{LL'}(k - \tfrac{1}{2}q) & 0 \\ 0 & S_{LL'}(k + \tfrac{1}{2}q) \end{pmatrix} U^{\dagger}(\theta(r), \phi(r)),   (3.19)

where q is the spiral propagation vector.

3.4 KKR-ASA-CPA Green's function method

The strength of the Green's function approach, compared to the standard Hamiltonian approach, is the ease with which it handles perturbations of an ideal state, for instance impurities and disorder. The Green's function G of the system can be obtained from a reference Green's function G_0 through the Dyson relations

  G = G_0 + G_0 t G_0 + G_0 t G_0 t G_0 + \ldots = G_0 + G_0 t G \;\;\Rightarrow\;\; G = \left(G_0^{-1} - t\right)^{-1},   (3.20)

where t is a scattering matrix. The poles of the Green's function on the real axis represent the eigenstates of the system. In the KKR-ASA Green's function method, an auxiliary Green's function is introduced through the relation

  g(k, z) = \left[P(z) - S(k)\right]^{-1},   (3.21)

where z is a complex energy. The corresponding real-space Green's function G_{LL'}(r, r', z) is obtained by a transformation [15, 18]. Once the Green's function is obtained, all physical quantities are accessible. For instance, the local density of states is directly obtained from

  n_L(r, z) = -\frac{1}{\pi}\, \mathrm{Im}\, G_{LL}(r, z),   (3.22)

and the electron density from

  n(r) = -\frac{1}{\pi} \int_{-\infty}^{\varepsilon_F} dz\, \mathrm{Im}\, G_{LL}(r, r, z).   (3.23)

For the evaluation of the last equation, a contour integration in the complex plane is usually employed.

In order to describe a random alloy and disorder, the coherent potential approximation (CPA) has been used. The main idea behind the CPA is to replace the random arrangement of atoms with a uniform effective medium that describes the average properties of the system and restores the translational

invariance. The detailed derivation of the effective medium involves a Green's function analysis of multiple-scattering theory and can be found in several textbooks, e.g. Turek [15]. Here it suffices to say that the CPA is a very convenient method for treating disorder and multicomponent alloys, since it can handle any concentration of impurities. The main shortcoming of the CPA is that it only gives averages of the properties and cannot deal with fluctuations of the properties due to local environment effects.
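The relation between a Green's function and spectral quantities, Eqs. (3.20)-(3.22), can be made concrete with a toy example. The sketch below is illustrative only: it uses a small one-dimensional tight-binding Hamiltonian (an assumption made here for simplicity, not the KKR-ASA auxiliary Green's function of Eq. (3.21)) and extracts the density of states from the imaginary part of the resolvent evaluated just above the real axis.

```python
import numpy as np

def greens_function(H, z):
    """Resolvent G(z) = (z - H)^(-1) for a finite Hamiltonian matrix H."""
    return np.linalg.inv(z * np.eye(H.shape[0]) - H)

def dos(H, energies, eta=0.05):
    """Total density of states n(E) = -(1/pi) Im Tr G(E + i*eta),
    broadened by the small imaginary part eta (cf. Eq. 3.22)."""
    return np.array([-np.trace(greens_function(H, E + 1j * eta)).imag / np.pi
                     for E in energies])

# Example: a 1D tight-binding chain with nearest-neighbour hopping t = 1.
N, t = 200, 1.0
H = -t * (np.eye(N, k=1) + np.eye(N, k=-1))
E = np.linspace(-3.0, 3.0, 301)
print(dos(H, E).round(3)[:5])   # DOS at the first few energy points
```

The broadening parameter eta plays the same role as the distance from the real axis on the complex-energy contour mentioned above: the closer to the real axis, the sharper (and numerically more demanding) the spectral features become.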

4 Monte Carlo simulations

4.1 Introduction

Monte Carlo simulations and molecular dynamics simulations are the two main approaches to computer simulations in statistical physics. The aim of these simulations is to study equilibrium and non-equilibrium thermodynamic systems by stochastic computer simulation. Computer simulations allow studies of complicated systems for which analytical solutions are not possible. They also have some advantages over experiments; for example, one can calculate and study properties that are difficult to obtain experimentally, such as correlations between different atoms. Further, one can predict the behavior of a system of interest before expensive and time-consuming experiments are performed, facilitating the understanding of the system.

There are, of course, also some disadvantages with computer simulations. Firstly, a real system is often very complex, and in order to model such a system approximations and simplifications are necessary. The modeling of the system is crucial for the quality of the simulations, so great care must be taken. Secondly, a computer only has a finite memory, so the simulations must be performed on fairly small systems, typically N ≈ 10^2-10^7 atoms, while a real system has approximately N ≈ 10^23 atoms. Therefore a nontrivial extrapolation to much larger systems has to be performed. Fortunately, there are well-developed methods, collectively called finite size scaling theory, that prescribe how this extrapolation should be done (Section 4.5). Still, qualitative knowledge of a real system can be obtained through studies of simplified model systems. The simulations of these model systems are in principle numerically exact, i.e. the results are accurate apart from statistical errors, which can be made as small as desired if enough computing time is used.

Only Monte Carlo (henceforth called MC) simulations will be discussed in the following. MC simulations in statistical physics rely on the use of random numbers to generate a stochastic trajectory through the phase space of the model considered, and to calculate thermal averages if equilibrium properties are desired. The first successful simulation was carried out by Metropolis et al. [19] in 1953. Since then this technique has been under heavy development, which is likely to continue in the future, since better and better computers

become available with time. Presently, it is possible to carry out simulations of good quality even on personal computers, but the simulations have to be performed on parallel computers if high accuracy and/or large systems are required. MC simulations can be applied to many different fields, such as diffusion processes in solids; fluid, surface and plasma physics; properties of alloys; crystal growth kinetics; quantum many-body problems; critical phenomena in magnetic systems; kinetics of adsorption on surfaces; and thermal properties of disordered systems. For a more complete survey of MC simulations in statistical physics, see Refs. [20-22].

4.2 Thermodynamic Averages

The thermodynamic system of interest consists of N particles in a volume V at a specified temperature T. The magnetic field H can also be included as an additional thermodynamic coordinate. Each particle i is described by a set of dynamical variables {α_i}. In the spin models with which this work deals, {α_i} corresponds to the spin vector S_i of particle i. Let X_ν be a point in phase space Ω. Then

  X_{\nu} = \{\{\alpha_1\}, \{\alpha_2\}, \ldots, \{\alpha_N\}\},   (4.1)

where X_ν fully describes the configuration of the system. The interactions between the particles are described by a Hamiltonian H(X_ν). The probability density P_eq for the point X_ν to lie in a differential volume element of Ω is given in equilibrium statistical mechanics as

  P_{eq}(X_{\nu}) = \frac{1}{Z}\exp\left[-H(X_{\nu})/k_B T\right],   (4.2)

where Z is the partition function, k_B Boltzmann's constant and T the temperature. Further, let A(X_ν) denote a thermodynamic observable. The thermal average (expectation value) of A(X_ν) is given in classical statistical mechanics as [23]

  \langle A\rangle_T = \frac{1}{Z}\int_{\Omega} dX_{\nu}\, A(X_{\nu})\exp\left[-H(X_{\nu})/k_B T\right].   (4.3)

4.3 Importance Sampling and Metropolis Algorithm

The basic idea behind MC simulations is to calculate the phase space integrals in Eq. (4.3) numerically. In principle, standard numerical integration methods could be used, but the problem is the high dimensionality of the integration

space Ω. Another problem is that the exponential factor (Boltzmann factor) in Eq. (4.2) and Eq. (4.3) is almost vanishingly small for most configurations. This means that only very few configurations will contribute to the expectation value of A. Therefore, a method is needed to restrict the sampling to the interesting volume of phase space. Metropolis et al. [19] introduced in 1953 a sampling algorithm based on this concept, called importance sampling. In this algorithm a configuration X_ν is not chosen completely at random but with a probability P_eq(X_ν) proportional to its Boltzmann factor. Then the average of A over M phase space points X_ν,

  \langle A\rangle \approx \bar A = \frac{\sum_{\nu=1}^{M}\exp\left[-H(X_{\nu})/k_B T\right] A(X_{\nu})\, P^{-1}(X_{\nu})}{\sum_{\nu=1}^{M}\exp\left[-H(X_{\nu})/k_B T\right] P^{-1}(X_{\nu})},   (4.4)

reduces to a simple arithmetic average,

  \bar A = \frac{1}{M}\sum_{\nu=1}^{M} A(X_{\nu}).   (4.5)

The probability P_eq(X_ν) is, however, not explicitly known. This was solved by Metropolis, who proposed a method to generate a sequence of states X_ν → X_{ν+1} → X_{ν+2} → ···, where each step has a transition probability W(X_ν → X_{ν+1}). Such a sequence is called a Markov chain. From the theory of Markov chains in probability theory, one can show that P(X_ν) → P_eq(X_ν) as M → ∞ if the condition of detailed balance is fulfilled,

  P_{eq}(X_{\nu})\, W(X_{\nu} \to X_{\nu'}) = P_{eq}(X_{\nu'})\, W(X_{\nu'} \to X_{\nu}).   (4.6)

A simple choice for W fulfilling the necessary conditions is given in terms of the energy change ΔE = H(X_{ν'}) − H(X_ν), as proposed by Metropolis,

  W(X_{\nu} \to X_{\nu'}) = \begin{cases} 1, & \Delta E < 0 \\ \exp(-\Delta E/k_B T), & \Delta E > 0. \end{cases}   (4.7)

Note that this is the only place where the temperature enters the algorithm.

4.4 Practical Implementation of the Metropolis Algorithm

In the preceding sections the basic theory behind MC simulations has been outlined. In this section some additional comments are given, and it is discussed how an MC simulation is actually performed. Markov chains were briefly discussed in Section 4.3, but nothing was said about what the move X → X' actually

means in practice. The condition of detailed balance, Eq. (4.6), implies

  \frac{W(X \to X')}{W(X' \to X)} = \exp\left(-\frac{\Delta E}{k_B T}\right),   (4.8)

where ΔE is the energy change caused by the move X → X'. The success of the trial move X → X' depends on the energy difference, which has to be fairly small. On the other hand, if the energy change is large, it is almost impossible to carry out a trial move and the simulation will have a very slow convergence. In practice, an MC simulation is performed in the following way, as shown in Fig. 4.1.


Figure 4.1: Flow chart of a Monte Carlo simulation with Metropolis algorithm.

First, one specifies initial conditions for the set {α_i} of dynamical variables. This choice is arbitrary, since it should not influence the final configuration; however, it is good practice to use different initial conditions to check that this holds. Then one particle i is chosen, either randomly or systematically, and its dynamical variable α_i is changed to α_i'. The energy change ΔE associated with this trial move is calculated. From the energy change, the transition probability W is calculated using Eq. (4.7). Then a random number r, uniformly distributed in [0, 1], is drawn. If r > W, the trial

move is rejected, and the state with the old configuration {α_i} is kept and counted once more in the averaging. Otherwise, if r < W, the trial move is accepted and the new state with configuration {α_i'} is used in the averaging. This procedure is then repeated. Finally, the averages of the desired quantities are calculated.

The Metropolis algorithm is very well suited for running on massively parallel computers in order to study very large systems that are not feasible on a serial computer. We have successfully implemented a fine-grained parallel version of the Metropolis algorithm based on domain decomposition of the lattice. More technical details of the implementation can be found in Section A.2.

4.5 Finite Size Scaling Theory

In an MC simulation, the number of spins N is typically between 10^2 and 10^7, while a real system has approximately 10^23 spins. Due to the finite lattice size, finite size effects are introduced in the simulation. These effects are particularly severe in the critical region near the phase transition. Unlike in experiments, though, the lattice size can be varied in the simulation, and the magnitude of the finite size effects can therefore be estimated using Finite Size Scaling theory (henceforth called FSS). FSS is a very powerful method in MC simulations for estimating and eliminating finite size effects. The scaling properties of the observables magnetization M, susceptibility χ and relaxation time τ follow directly from FSS [21, 24]:

  M = L^{-\beta/\nu}\, \tilde M\!\left(\frac{L}{\zeta}\right) = L^{-\beta/\nu}\, \tilde M(t L^{1/\nu}),   (4.9)

  k_B T\, \chi = L^{\gamma/\nu}\, \tilde\chi\!\left(\frac{L}{\zeta}\right) = L^{\gamma/\nu}\, \tilde\chi(t L^{1/\nu}),   (4.10)

  \tau = L^{z}\, \tilde\tau\!\left(\frac{L}{\zeta}\right) = L^{z}\, \tilde\tau(t L^{1/\nu}).   (4.11)

Here t = 1 − T/T_c; β, γ, ν and z are critical exponents characterizing the phase transition [23]; M̃, χ̃ and τ̃ are scaling functions; and ζ is the correlation length. In practice, Eqs. (4.9)-(4.11) are used by varying the critical temperature and the critical exponents simultaneously until the curves for different lattice sizes collapse onto a single curve. Obviously, this "trial and error" technique is not of very high accuracy; the error is typically a few percent. As long as very high accuracy is not required, however, the technique is very straightforward and easy to use. It must be noted that Eqs. (4.9)-(4.11) are only supposed to hold in the limit L → ∞, t → 0, but it turns out that the

equations hold reasonably well even for small lattices and for temperatures away from T_c.

A second method to obtain the critical temperature and the critical exponents is to estimate the positions of the maxima of the observables at T_c for different lattice sizes and extrapolate to infinite size. For a finite lattice with linear size L, the critical temperature is shifted compared to that of an infinite lattice by

  T_c(L) - T_c(\infty) \propto L^{-1/\nu}.   (4.12)

Both of these methods require the calculation of critical exponents. The correlation length ζ cannot exceed the lattice size L. As a consequence, the transition is rounded for a finite lattice; the susceptibility, for instance, does not diverge but exhibits a maximum of finite height. A third, alternative method to obtain the critical temperature T_c, which does not require the critical exponents, is to look at the fourth order cumulant U_L introduced by Binder [22],

  U_L = 1 - \frac{\langle M^4\rangle}{3\langle M^2\rangle^2}.   (4.13)

The scaling property of U_L is

  U_L = \tilde U\!\left(\frac{L}{\zeta}\right) = \tilde U(t L^{1/\nu}).   (4.14)

One can show that, as L → ∞, U_L → C for T > T_c, U_L → 2/3 for T < T_c, and U_L → U* (U* ∈ [C, 2/3]) for T = T_c. U_L decreases monotonically with T. If a pair of curves U_L and U_{L'} are plotted against T, they should intersect at T = 0, T → ∞, and T = T_c. Therefore T_c can be determined by just looking at the intercepts of the curves. This technique has proven to be very powerful and has been used in all simulations in this work.
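As a concrete illustration of Sections 4.3-4.5, the sketch below implements a bare-bones Metropolis simulation of a classical Heisenberg model on a simple cubic lattice and accumulates the Binder cumulant of Eq. (4.13). It is a minimal, illustrative example: the lattice, the nearest-neighbour coupling J, the temperature and the run lengths are arbitrary choices made here, not parameters used in the thesis, and units are chosen so that k_B = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spin():
    """Random unit vector, uniformly distributed on the sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def local_field(S, i, j, k, J):
    """J times the sum of the six nearest-neighbour spins (periodic boundaries)."""
    L = S.shape[0]
    return J * (S[(i+1) % L, j, k] + S[(i-1) % L, j, k] +
                S[i, (j+1) % L, k] + S[i, (j-1) % L, k] +
                S[i, j, (k+1) % L] + S[i, j, (k-1) % L])

def sweep(S, J, T):
    """One Metropolis sweep: N trial moves accepted according to Eq. (4.7)."""
    L = S.shape[0]
    for _ in range(L**3):
        i, j, k = rng.integers(0, L, size=3)
        new = random_spin()
        h = local_field(S, i, j, k, J)
        dE = -np.dot(new - S[i, j, k], h)   # H = -J sum_<ij> e_i . e_j (pairs counted once)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            S[i, j, k] = new

def binder_cumulant(S, J, T, n_equil=200, n_meas=400):
    """Estimate U_L = 1 - <M^4>/(3<M^2>^2), Eq. (4.13)."""
    for _ in range(n_equil):
        sweep(S, J, T)
    m2, m4 = [], []
    for _ in range(n_meas):
        sweep(S, J, T)
        m = np.linalg.norm(S.mean(axis=(0, 1, 2)))
        m2.append(m**2)
        m4.append(m**4)
    return 1.0 - np.mean(m4) / (3.0 * np.mean(m2)**2)

# Example: U_L for two lattice sizes near the transition (temperature in units of J/k_B).
J = 1.0
for L in (4, 6):
    S = np.array([[[random_spin() for _ in range(L)] for _ in range(L)] for _ in range(L)])
    print(L, binder_cumulant(S, J, T=1.45))
```

Repeating the last loop for a set of temperatures and plotting U_L(T) for the different lattice sizes gives the crossing point from which T_c is read off, exactly as described above.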

(167) if the spin wave energies ω(q) are small compared to the bandwidth and the exchange splitting. In order to obtain thermodynamic properties of ferromagnets, for instance Tc , a two step approach has been employed. In the first step the self consistent electronic structure is calculated at zero temperature using density functional methods and the total energy is mapped onto an effective classical Heisenberg Hamiltonian He f f = − ∑ Ji j ei · e j .. (5.1). i= j. Here Ji j is the exchange interaction between the magnetic atoms i and j and ei is the unit vector of the magnetic moment at site ri . Note that the magnetic moment is included in the exchange interaction and positive (negative) values correspond to ferromagnetic (antiferromagnetic) coupling. In the second step the Hamiltonian is solved using statistical mechanics methods like Monte Carlo (MC), the random-phase approximation (RPA) or the mean field approximation (MFA). In both MFA and RPA, the critical temperature is directly related to the exchange parameters. More specifically, the Tc estimate in MFA is (1 atom per cell) kB TcMFA =. 2 3. ∑ J0 j ,. (5.2). j=0. and in RPA 3 (kB TcRPA )−1 = − lim Gm (z); 2 z→0. Gm (z) =. 1 [z − J(0) + J(q)]−1 , N∑ q. (5.3). where z is a complex energy and Gm (z) is a magnon Green function. In general, TcRPA < TcMFA , and in the case when all Ji j are ferromagnetic, it can be shown that TcRPA represents a lower estimate of Tc and TcMFA an upper estimate of Tc [25]. The MC method gives a numerically exact solution to Eq. (5.1), including Tc and the critical exponents.. 5.1. Exchange interactions. There are basically two different approaches to calculate the exchange interactions, Ji j , in the Heisenberg model from first principles. The first approach is a real space method based on multiple-scattering theory and employs the Andersen local force theorem to calculate the energy change due to an infinitesimal rotation of a central moment in a ferromagnet [26, 27]. In the framework of LMTO-ASA or KKR-ASA and CPA the energy change could then be related 24.

to the exchange interaction, employing the vertex-cancellation theorem [28], as

  \bar J_{ij}^{QQ'} = -\frac{1}{8\pi i}\int_C \mathrm{Tr}_L\left[\Delta_i^{Q}(z)\, \bar g_{ij}^{QQ',\uparrow}(z)\, \Delta_j^{Q'}(z)\, \bar g_{ji}^{Q'Q,\downarrow}(z)\right] dz.   (5.4)

Here Tr_L denotes the trace over the angular momentum L = (l, m); Δ_i^{Q}(z) = P_i^{Q,↑}(z) − P_i^{Q,↓}(z) is a diagonal matrix defined via the potential functions P_i^{Q,σ}(z) and is closely related to the exchange splitting of the magnetic atom Q; and ḡ_{ij}^{QQ',↑}(z) and ḡ_{ji}^{Q'Q,↓}(z) refer to site off-diagonal blocks of the conditionally averaged Green function, namely the average over all configurations with a pair of magnetic atoms fixed at the sites i and j with the components Q and Q'. The energy integration is performed along a contour in the complex energy plane which encircles the occupied part of the valence band. Positive (negative) values of J_{ij}^{QQ'} correspond to ferromagnetic (antiferromagnetic) coupling, respectively, while the values of the magnetic moments are included in the definition of the J_{ij}^{QQ'} by construction.

A sum rule exists for the exchange interactions,

  \bar J_i^{0,Q} = \sum_{j,Q'} \bar J_{ij}^{QQ'}\, c_j^{Q'},   (5.5)

where J̄_i^{0,Q} is the on-site exchange interaction and c_j^{Q'} is the concentration of component Q' at site j. Alternatively, J̄_i^{0,Q} can be calculated from the relation

  \bar J_i^{0,Q} = \frac{1}{8\pi i}\int_C \mathrm{Tr}_L\left[\Delta_i^{Q}(z)\left(\bar g_{ii}^{Q,\uparrow}(z) - \bar g_{ii}^{Q,\downarrow}(z)\right) + \Delta_i^{Q}(z)\, \bar g_{ii}^{Q,\uparrow}(z)\, \Delta_i^{Q}(z)\, \bar g_{ii}^{Q,\downarrow}(z)\right] dz,   (5.6)

where ḡ_{ii}^{Q,σ}(z) is the on-site diagonal block of the conditionally averaged Green function. In the case of non-random systems, the condition in Eq. (5.5) is fulfilled exactly within numerical accuracy, while in random systems the difference can be as large as 20%. The violation of the sum rule indicates the importance of the vertex corrections in the evaluation of the averaged exchange interactions and serves as an additional check of the validity of the calculated exchange interactions.

The second method is the frozen-magnon method [29-31]. In contrast to the method above, it is a reciprocal-space method based on spin spirals (Section 2.4.1). In the frozen-magnon method, the total energy E(q, θ) of a selected number of spirals is calculated,

  \Delta E(q, \theta) = \sum_{j \neq 0} J_{0j}\left[1 - \exp(i q \cdot R_{0j})\right]\sin^2\theta,   (5.7)

where R_{0j} = R_0 − R_j and ΔE(q, θ) = E(q, θ) − E(0, θ). The last equation is

obtained by combining Eqs. (2.22) and (5.1) and can be written as

  \Delta E(q, \theta) = \left[J(0) - J(q)\right]\sin^2\theta,   (5.8)

where J(q) is the Fourier transform of the exchange parameters,

  J(q) = \sum_{j \neq 0} J_{0j}\exp(i q \cdot R_{0j}).   (5.9)

To obtain the exchange parameters in real space, an inverse Fourier transformation must be performed over a uniform mesh of N q-points,

  J_{0j} = \frac{1}{N}\sum_{q} J(q)\exp(-i q \cdot R_{0j}).   (5.10)

Typically around 500 q-points in the whole BZ are needed to obtain convergence for the first 30 shells of exchange interactions. It should be noted that the Fourier components J(q) in Eq. (5.9) could be obtained either from a self-consistent calculation or from the force theorem.

Both of the methods described here complement each other and should in principle give the same set of exchange parameters if the calculations are made properly. An example is displayed in Fig. 5.1 for bcc Fe, where the first 18 shells of exchange interactions have been calculated using both methods.

Figure 5.1: Exchange interactions in mRy plotted as a function of distance (given in units of the lattice constant a) for bcc Fe, using both the real-space and the frozen-magnon methods.

The spin wave (magnon) spectrum of the Heisenberg model (1 atom per cell) is obtained from the relation

  \omega(q) = \frac{4}{M}\left[J(0) - J(q)\right],   (5.11)

and in the frozen-magnon approach from

  \omega(q) = \lim_{\theta \to 0}\frac{4}{M}\frac{\Delta E(q, \theta)}{\sin^2\theta},   (5.12)

where θ is the azimuthal Euler angle. For cubic systems and for small q-vectors, ω(q) ≈ D|q|^2, with the spin wave stiffness constant D equal to

  D = \frac{2}{3M}\sum_{R}|R|^2\, J_{0R}.   (5.13)

Each of the two methods described here has its own strengths and disadvantages. Regarding calculations of magnon spectra, the frozen-magnon approach is clearly superior, while the real-space method is more suitable for calculations of exchange interactions, especially for systems with more than one atom per cell.
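Once a set of exchange parameters J_{0j} is available, Eqs. (5.2), (5.9) and (5.11) can be evaluated in a few lines of code. The sketch below is illustrative only: the neighbour shell (one shell on a simple cubic lattice), the value of J_1 and the magnetic moment M are made-up inputs, not the calculated bcc Fe parameters shown in Fig. 5.1.

```python
import numpy as np

# Hypothetical input: a single shell of neighbours on a simple cubic lattice,
# vectors R_0j in units of the lattice constant, exchange J1 in mRy.
a = 1.0
J1 = 1.0  # mRy (made-up value)
R = a * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]])
J = np.full(len(R), J1)

def J_q(q):
    """Fourier transform of the exchange parameters, Eq. (5.9)."""
    return np.real(np.sum(J * np.exp(1j * (R @ q))))

def magnon(q, M=2.2):
    """Magnon energy, Eq. (5.11), for an assumed moment M (Bohr magnetons)."""
    return 4.0 / M * (J_q(np.zeros(3)) - J_q(q))

# Mean-field Curie temperature, Eq. (5.2): k_B Tc^MFA = (2/3) sum_j J_0j.
mRy_to_K = 157887.7 / 1000.0          # 1 Ry/k_B is about 157888 K, so 1 mRy ~ 158 K
Tc_MFA = (2.0 / 3.0) * np.sum(J) * mRy_to_K
print("Tc^MFA = %.0f K" % Tc_MFA)

# Magnon dispersion along Gamma -> X, i.e. q from (0,0,0) to (pi/a,0,0).
for x in np.linspace(0.0, 1.0, 5):
    q = np.array([np.pi / a * x, 0.0, 0.0])
    print("q =", np.round(q, 3), " omega =", round(magnon(q), 3), "mRy")
```

With the same routine, the RPA estimate of Eq. (5.3) could be obtained by averaging 1/[J(0) − J(q)] over a q-mesh in the Brillouin zone instead of summing the J_{0j} directly.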


6 Spintronics and diluted magnetic semiconductors

6.1 Introduction

Semiconductor-based electronics is the foundation of the information technology society we live in today. It relies on manipulating the charge of electrons, with integrated circuits among its main applications. Mass storage of information, as in hard disks and magnetic tapes, instead manipulates the spin of the electrons in a ferromagnetic material. Traditionally, the charge and spin degrees of freedom of the electrons have been used separately. Recently a new field has emerged, called spintronics or sometimes magnetoelectronics, which combines the spin and charge of the electrons to obtain devices with new functionality and increased performance. The advantages of such devices would be nonvolatility, increased data processing speed, decreased electric power consumption and increased transistor density compared with conventional semiconductor devices. The current status of the field can be found in Refs. [32]-[36].

One main class of spintronics applications is sensors. The first successful application of this kind was the GMR (giant magnetoresistance) read head used in hard disks. The effect was discovered in 1988 [37] and the first commercial products appeared in 1997; nowadays essentially all hard disks on the market have GMR read heads. The concept of magnetoresistance (MR) is briefly the following: in a multilayer with alternating magnetic and nonmagnetic layers, the resistance differs depending on whether the magnetic layers are ferromagnetically or antiferromagnetically coupled, which can be tuned by applying an external magnetic field.

The next application expected to have a large commercial and economic impact is nonvolatile memories. The term nonvolatile means that the information stays in the memory even if the electric power is switched off, in contrast to the semiconductor memories used today. Magnetoresistive random access memory (MRAM) products are already commercially available and, apart from being nonvolatile, have several potential advantages over semiconductor memories (DRAM, SRAM) that make them interesting: much faster writing times, lower energy consumption for writing and insensitivity to cosmic radiation. Moreover, compared with the hard disks in use today, MRAM has much shorter data access times.

The second main class of spintronics applications is spin transistors and the underlying concept of spin injection. Injection of a spin-polarized current from a ferromagnet into a semiconductor is necessary in order to carry out the qubit (quantum bit) operations required for quantum computing. The first proposal of how a spin-FET (field effect transistor) might operate was made by Datta and Das [38] in 1990. It has not yet been realized experimentally, mainly because it requires spin injection from a ferromagnetic metal (Fe) into a semiconductor (e.g. InAlAs or InGaAs), which has proven difficult in practice due to the large conductivity mismatch between a metal and a semiconductor [39]. The use of ferromagnetic semiconductors, especially diluted magnetic semiconductors (DMS), is instead rather advantageous; DMS are discussed in more detail in Section 6.2. Ohno [40] has demonstrated that it is indeed possible to inject spins from a DMS, (Ga,Mn)As, into a semiconductor. Moreover, as high a spin polarization as possible is desired. So-called half-metallic ferromagnets (HMF) are the ultimate materials in this respect, since 100 % spin polarization is expected from them. Half-metallic materials are discussed in more detail in Section 6.5.

Papers I-X of this thesis deal with spintronics. Paper I considers a common defect in DMS, namely antisites, and its effect on the magnetic structure of Mn-doped GaAs. It is shown that the system develops a magnetic structure with partially disordered moments for increasing antisite concentration. In Paper II, a systematic study of the electronic and magnetic structure of Mn-doped GaAs was performed, including the effects of the common defects As antisites and Mn interstitial atoms, together with calculations of critical temperatures using the frozen-magnon approach and the VCA in the statistical treatment of the Heisenberg Hamiltonian. In Paper III, the critical temperatures of Mn-doped GaAs including As antisites were calculated using exchange parameters from CPA calculations. The treatment of the Heisenberg model was based on the VCA, and three different approaches were employed: the mean-field approximation (MFA), the random phase approximation (RPA) and Monte Carlo (MC) simulations. All three approaches agree well with each other, suggesting that the spin fluctuations are small in these systems. In Papers IV and V, a similar approach as in Paper III is employed to calculate critical temperatures, but with the extremely important difference that the real random lattice is used in the MC simulations. The use of a real random lattice has important consequences, which are discussed in more detail in Section 6.3. Papers VI and VII are overview articles of Papers I-V. Papers VIII-X are devoted to half-metallic materials. In Paper VIII a systematic search for half-metallic materials in the zinc-blende structure was performed, which is partly summarized in Paper X.

In Paper IX the exchange interactions and Curie temperatures were evaluated for the Heusler alloys NiMnSb and Ni2MnSb. In the following sections, some selected results from the above papers are presented.

6.2 Diluted Magnetic Semiconductors (DMS)

Ferromagnetism and semiconducting properties coexist in magnetic semiconductors, such as Eu and Mn chalcogenides and Cr spinels, but the crystal structure of such materials is very different from that of traditional semiconductors like Si and GaAs used in the semiconductor industry today. In addition, their critical temperatures are quite low and crystal growth is difficult. Instead, magnetic semiconductors based on nonmagnetic semiconductors, so-called diluted magnetic semiconductors (DMS), are more desirable. They can be realized by alloying a nonmagnetic semiconductor with magnetic elements, for instance Mn. In Fig. 6.1, the three types of semiconductors are sketched.

Figure 6.1: Three types of magnetic semiconductors: a) magnetic semiconductor, b) diluted magnetic semiconductor, an alloy of a magnetic element and a nonmagnetic semiconductor, c) nonmagnetic semiconductor with no magnetic ions.

In II-VI based DMS, such as ZnSe and ZnS, the valence of the group II cation (Zn) is the same as that of the common magnetic ion Mn, making it difficult to dope them p- or n-type. The magnetic interactions are dominated by antiferromagnetic direct exchange between the Mn atoms, resulting in antiferromagnetic, paramagnetic or spin-glass behaviour. However, ferromagnetism can sometimes be obtained by additional hole doping or by using a magnetic ion of different valence, such as Cr or Co. In III-V based DMS, it is possible to obtain ferromagnetic semiconductors by randomly substituting the cation with a magnetic ion like Mn2+. The Mn2+ ion both leads to local moment formation and acts as an acceptor, introducing valence-band holes that are very important in determining the electronic and magnetic properties. Ferromagnetism in DMS occurs because of interactions between the magnetic local moments that are mediated by holes in the semiconductor valence band, but the exact origin of the ferromagnetism is still under debate. The most
