(1)

ACTA UNIVERSITATIS UPSALIENSIS
UPPSALA 2015

Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1282

Elastic and inelastic scattering effects in conductance measurements at the nanoscale
A theoretical treatise

PETER BERGGREN

ISSN 1651-6214 ISBN 978-91-554-9321-9 urn:nbn:se:uu:diva-261609

(2)

Dissertation presented at Uppsala University to be publicly examined in Häggsalen, Lägerhyddsvägen 1, Uppsala, Friday, 16 October 2015 at 09:00 for the degree of Doctor of Philosophy. The examination will be conducted in English. Faculty examiner: Wolfgang Belzig (Universität Konstanz).

Abstract

Berggren, P. 2015. Elastic and inelastic scattering effects in conductance measurements at the nanoscale. A theoretical treatise. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 1282. 87 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9321-9.

Elastic and inelastic interactions are studied in tunnel junctions of a superconducting nanoelectromechanical setup and in response to recent experimental superconducting scanning tunneling microscope findings on a paramagnetic molecule. In addition, the electron density of molecular graphene is modeled by a scattering theory approach in very good agreement with experiment. All studies were conducted through the use of model Hamiltonians and a Green function formalism. The nanoelectromechanical system comprises two fixed superconducting leads in-between which a cantilever-suspended superconducting island oscillates in an asymmetric fashion with respect to both fixed leads. The Josephson current is found to modulate the island motion which in turn affects the current, such that parameter regions of periodic, quasi-periodic and chaotic behavior arise. Our modeled STM setup reproduces the experimentally obtained spin excitations of the paramagnetic molecule and we show a probable cause for the increased uniaxial anisotropy observed when closing the gap distance of tip and substrate. A wider parameter space is also investigated, including effects of external magnetic fields, temperature and transverse anisotropy. Molecular graphene turns out to be well described by our adopted scattering theory, producing results that are in good agreement with experiment. Several point-like scattering centers are therefore well suited to describe a continuously decaying potential and effects of impurities are easily calculated.

Keywords: Scattering theory, Scanning tunneling microscopy, tunnel junctions, molecular graphene, paramagnetic molecules, spin interaction, nano electromechanical system, Josephson junction, superconductivity, chaos

Peter Berggren, Department of Physics and Astronomy, Box 516, Uppsala University, SE-751 20 Uppsala, Sweden.

© Peter Berggren 2015 ISSN 1651-6214 ISBN 978-91-554-9321-9


List of papers

This thesis is based on the following papers, which are referred to in the text by their Roman numerals.

I Stability and chaos of a driven nano electromechanical Josephson junction
P. Berggren and J. Fransson

II Spin inelastic electron tunneling spectroscopy on local magnetic moment embedded in Josephson junction
P. Berggren and J. Fransson

III Theory of spin inelastic tunneling spectroscopy for superconductor-superconductor and superconductor-metal junctions
P. Berggren and J. Fransson

IV Molecular graphene under the eye of scattering theory
H. Hammar, P. Berggren and J. Fransson


Contents

Part I: Introduction 9

Part II: Theoretical framework 13

1 Short introduction to quantum mechanics 15
2 Model making in many body systems 19
2.1 Hamiltonian descriptions 20
2.2 Spin Hamiltonian 21
3 Green functions in many body physics 22
3.1 General background to Green functions 22
3.2 Green functions in many-body systems 24
3.2.1 Variants of the many-body Green function 28
3.2.2 Green functions at finite temperature 28
3.2.3 Green functions of free field excitations 32
3.2.4 The Heisenberg equation of motion and the perturbative expansion 34
4 Tunnel junctions and scanning tunnelling microscopy 35
4.1 Scanning tunnelling microscopy (STM) 35
4.2 Theoretical description of tunnel junctions 37
5 Scattering theory for surface electrons interacting with Dirac delta-function like potentials 42
6 Superconductivity 44
6.1 Key points of BCS theory 45
7 Notes on chaos 48

Part III: Accessible versions of the published papers 51

8 Stability and chaos of a driven nanoelectromechanical Josephson junction 53
8.1 Results for zero bias voltage 56
8.2 Results for finite bias voltage 58
9 Theory of spin inelastic tunneling spectroscopy for Josephson and superconductor-metal junctions 61
9.2 Results for a Spin 5/2 magnetic molecule 66
9.3 Anisotropy dependence on tip to sample distance 69
9.4 Concluding remarks 71
10 Molecular graphene under the eye of scattering theory 73
11 Outlook 79
12 Acknowledgments 80
13 Summary in Swedish 81


Part I:

Introduction

Research and general interest in nanotechnology has exploded in recent years as applications are becoming feasible also for medical, mechanical and consumer applications [1]. In electronics the nanometer has been the length scale for many years and processor chips have continued to develop to a point where they are now manufactured with 14 nm architectures [2]. However, conventional technology has recently started to close in on the physical limits of size. Take for example the storage capacity of modern hard drives. These are constructed with a thin layer of a magnetic material spread over the plane surface of a substrate. The magnetic film is in turn divided into small sections, called domains, that have two different preferred magnetization directions discernible by the read head. Each domain represents one bit, encoded as a 1 or a 0 by the magnetic direction. The read head can change the bit state of the domain by supplying energy, such that the magnetic state can overcome the potential barrier that separates the two energy minima. The problem is that this process can happen spontaneously if the magnetic domain is thermally excited, and the stability with respect to thermal influence is set by the volume of the domain, for a given material. By making smaller domains the bits become more sensitive to thermal information loss, which obviously limits the density of bits that can be packed on the magnetic thin film.
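The volume dependence of the thermal stability described above is commonly estimated with the Néel-Arrhenius law, τ = τ₀ exp(KV/k_BT). This is a standard textbook relation, not a result from this thesis, and the parameter values in the sketch below are purely illustrative:

```python
import math

def neel_relaxation_time(K, V, T, tau0=1e-9):
    """Neel-Arrhenius estimate of how long a magnetic grain keeps its state:
    tau = tau0 * exp(K * V / (kB * T)).

    K    : anisotropy energy density (J/m^3), illustrative value below
    V    : grain volume (m^3)
    T    : temperature (K)
    tau0 : attempt time (s), commonly taken around 1e-9 s
    """
    kB = 1.380649e-23  # Boltzmann constant (J/K)
    return tau0 * math.exp(K * V / (kB * T))

K = 2e5        # J/m^3, a plausible order of magnitude for a recording medium
side = 10e-9   # 10 nm grain

t_big = neel_relaxation_time(K, side ** 3, 300.0)
t_small = neel_relaxation_time(K, (side / 2) ** 3, 300.0)

# Halving the linear size shrinks the volume 8-fold, and the exponent with it:
# the retention time collapses from geological to sub-second time scales.
assert t_big > 3.15e7   # far longer than a year
assert t_small < 1.0    # essentially immediate thermal loss
```

The exponential sensitivity to V is exactly why shrinking domains quickly runs into the thermal limit described in the text.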

Research into alternative ways in which a bit may be stored, with higher density, is therefore important as demand continues to grow with the information society. An additional benefit is that the energy associated with information storage and processing often is lowered as a wanted byproduct of smaller architectures. A candidate for a next generation technology in data storage is the use of densely packed magnetic molecules. These are molecules where a


local atomic magnetic moment is positioned within a surrounding molecular cage that provides an anisotropy field generating preferred magnetic directions. The advantage of such molecules over thin film domains is that they are orders of magnitude smaller yet still have high enough barriers, suppressing spontaneous transitions between bit states, to be used at room temperature [3]. In papers II and III a magnetic molecule of this kind is considered, where the spin energy levels are mapped out in the environment of a superconductor-superconductor and a superconductor-metal gap. This is a setup commonly referred to as a tunnel junction and the system has been reported to show some promising features, such as long mean lifetimes for the excited spin states [4], something that could open up for use in the capacity of a computer working memory, possibly in a quantum computer context.

Tunnel junctions generally consist of two conductors that are separated by a thin insulating layer. The usefulness of such a device comes from the quantum mechanical property of tunneling, which allows for electron flow between the conductors that is classically forbidden. Devices of this kind lie outside the area of common knowledge even though they appear in many electronics applications, such as hard drives and solar cells [5, 6, 7]. In experimental physics tunnel junctions play a very important role. Scanning tunneling microscopes (STMs), an invention that earned G. Binnig and H. Rohrer a Nobel prize [8], are tunnel junctions that can be swept over a material surface and make measurements so detailed that individual atoms are "seen". Apart from taking measurements the STM can also be used to move atoms on a surface into patterns of almost any two dimensional shape. For measurements of extreme sensitivity to magnetic fields, tunnel junctions known as superconducting quantum interference devices (SQUIDs) are used.

While papers II and III are modeled as STM experiments, paper I is a study of an asymmetric double tunnel junction that incorporates nano mechanical motion, under superconducting conditions. This setup reveals some interesting characteristics as the tunneling current is modulated by the motion of an oscillator. The interplay between current and vibrations opens up for different regions of operation that are periodic, quasi periodic and chaotic.

In the final paper, IV, a scattering theoretical approach is used to study molecular graphene with great experimental agreement. Graphene is a single layer of the carbon honeycomb structure in graphite, several of which are laid down on a piece of paper when writing with a normal pencil. The material has many record breaking properties, such as being the strongest ever tested, and some of these properties carry over to molecular graphene, e.g. Dirac fermions. Molecular graphene is constructed by placing atomistic or molecular scattering centers on a metallic surface in a triangular pattern that is the dual to the honeycomb. Surface electron density is then forced to the honeycomb structure, which simulates graphene. The adopted theory is very flexible and it is shown that a number of point like scattering centers can be used to simulate a continuous spatially decaying potential. Impurity defects are hence easily added or taken away.


Part II:

Theoretical framework

Of all the people who will attempt to read this thesis my dad is one. His interest in physics in many ways kindled my own and since he lacks formal education in the subject, the first introductory section serves to give him, and others sharing a similar background, some basic insight into quantum mechanics. Others who may read this thesis are experts in the field and know the theory intimately. This group cannot expect any new insights from the theory part, as the intention of the sections following the first is to provide undergraduate to graduate physicists reference material for understanding or reading up on the contents of the published papers.


1. Short introduction to quantum mechanics

At the advent of physics, as the science known to us, classical mechanics was the first to evolve as it describes interactions of bodies on our own length scale. Objects within such dimensions are simply the most easily accessible for controlled experiments. In the late seventeenth century precise measurements and a new mathematical framework led Newton to bring forth a paradigm shift of science as he formulated laws of nature that relate quantities of physics, which also serve to define them in terms of each other. He famously stated that

\vec{F} = m\vec{a},    (1.1)

where \vec{F} is the force acting on a body of mass m to cause an acceleration in the direction and magnitude \vec{a}. A force is hence defined as the quantity that makes a body of mass m accelerate by the amount a in a given direction. This equation, known as Newton's second law, is an equation of motion since it determines the movement of a body subject to an external force. Within the speeds, masses and forces obtainable for experiments in Newton's days the motion of any body precisely followed the second law and it became the cornerstone in what is now referred to as classical mechanics.

A common example that is easily solvable with the application of Newton's second law is the harmonic oscillator. This example transfers well to quantum mechanics and connects on a fundamental level to the quantum mechanical description of the world that is quantum field theory. Picture an object with mass m tied to the end of a spring that in turn is tied to an immensely massive object M, such that M ≫ m. The more massive object may then as an approximation be considered fixed and serve as the point of reference to which the less massive body moves. In reality the most common way to realize the setup is to let a small weight hang on a spring tied to a rigid armature securely fastened to earth. The origin of motion is defined to be where m is at rest. If x denotes the distance that m is moved from the origin, the force acting on m from the spring is F = -k_s x, where k_s is the spring constant, which has a value that is lower for a soft spring compared to a harder spring. This is known as a restoring force because it acts to restore m to its origin whether it is being pulled down or lifted up. To obtain a mathematical expression that predicts the position of m at any specific time given the set of initial conditions, that determine how far m has been pulled down before release and at which speed m is let go, Newton's second law can be applied directly,

m\frac{d^2 x}{dt^2} = -k_s x.    (1.2)

In the one dimension x, acceleration is the change of speed over one time unit, a = dv/dt, and speed is the change of position over one time unit, v = dx/dt. The equation of motion may in other words be rewritten as

\frac{d^2 x}{dt^2} = -\frac{k_s}{m}x,    (1.3)

which is a differential equation fulfilled by

x(t) = x_0 \cos(\omega t),    (1.4)

where x_0 is the amplitude of the oscillatory motion and \omega = \sqrt{k_s/m} is the angular frequency, if m is initially pulled down by x_0 before release. A pen attached to m will draw the curve illustrated in Figure 1.1 if it touches a paper scrolling past m at constant speed. This wave form is known as a sinusoidal curve and it predicts the position of m at any given time since it represents the paper drawn to infinite time.
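That eq. (1.4) really fulfills the equation of motion (1.3) is easy to check numerically. The sketch below, with purely illustrative parameter values, compares a finite-difference second derivative of x(t) = x_0 cos(ωt) against -(k_s/m)x(t):

```python
import math

# Illustrative parameter values for the mass-on-a-spring example
m, ks, x0 = 0.5, 2.0, 0.1   # mass (kg), spring constant (N/m), amplitude (m)
omega = math.sqrt(ks / m)   # angular frequency, as below eq. (1.4)

def x(t):
    # proposed solution, eq. (1.4)
    return x0 * math.cos(omega * t)

def accel(t, h=1e-5):
    # central finite-difference estimate of the second derivative of x
    return (x(t + h) - 2.0 * x(t) + x(t - h)) / h ** 2

# check eq. (1.3): d^2x/dt^2 = -(ks/m) x at a few sample times
for t in (0.0, 0.3, 1.7):
    assert abs(accel(t) - (-(ks / m) * x(t))) < 1e-6
```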

Before comparing the above example to the quantum mechanical counterpart, it is helpful to realize that we, by observing the actions of objects in our surroundings, condition and familiarize our brains to anticipate movement to such a degree that we obtain an intuitive understanding. Our intuitive understanding is fragile, however, and prone to misconception; even when observing macroscopic objects we may be surprised, as the simple rotating bicycle wheel experiment shows us when we are first exposed to it. If a person sits still on a swivel chair and is given a rotating bicycle wheel with handles in the hub, oriented vertically, he or she will start to spin together with the chair, about its axis of rotation, if the wheel is turned upside down. The physics is clear about this, since angular momentum must be conserved, but it surprises the intuition because we rarely come across the phenomenon in everyday life. It is also on our length scale that the classical laws hold up the best. When applied to the very large and small, careful measurements reveal that the observations don't match up to the Newtonian theories perfectly.

Through the evolution of technology these worlds of extremes became accessible, and by a collective effort in the early twentieth century the laws of nature for things massive and tiny, beyond our imagination, were formulated. Our brains instinctively try to picture these worlds, obscured to our eyes, through the lens of a macroscopic conditioning, and care must be taken not to force macroscopic concepts blatantly. The theories of relativity and quantum mechanics are therefore not easily understood and exposure over time is needed to get accustomed.

The quantum mechanical counterpart to Newton's second law is called the Schrödinger equation,

i\hbar\frac{\partial}{\partial t}\Psi(r,t) = H\Psi(r,t),    (1.5)

and it is formulated in terms of the energy of the system rather than the forces of Newton's second law. Ψ(r,t) is the wave function of the system, and for simplicity, let's say that it represents a particle. The wave function then gives the probability amplitude for the particle to be at position r at time t. In other words, when we calculate what the particle is up to it doesn't really exist at any specific point in space or time. We only know how the probability, |Ψ(r,t)|², of the particle to be somewhere develops. This whole notion is in stark contrast to classical mechanics where the solution to the equation of motion tells you the position, speed and acceleration of an object at any given time. A quantum mechanical particle can therefore never be said to be somewhere unless it is measured and forced to make an imprint at that instance in the measuring device.

H is called the Hamiltonian and it contains the total energy of the system. For the particle this adds up to its kinetic energy and its potential energy, if affected by the surroundings. Mathematically,

H = -\frac{\hbar^2}{2m}\nabla^2 + V(r,t),    (1.6)

where the first term is the kinetic energy and the second term is the potential energy. The other symbols that appear in the equation are i, the imaginary unit, ħ, the reduced Planck constant, which for example relates the uncertainty of a particle's position to its uncertainty in momentum through ∆x∆p ≥ ħ/2, and m, which is the mass. The quantum mechanical version of the harmonic oscillator


Figure 1.1. (a) Sinusoidal curve that represents the position, x, of a harmonically oscillating body over time t. (b) A quantum harmonic oscillator has quantized energy levels and the curves show the probability amplitude and probability for a particle to be at a given point in space.

example given earlier is referred to as a quantum harmonic oscillator and it is illustrative to compare the differences. To find a solution the Schrödinger equation is solved with the potential term

V(r) = \frac{1}{2}m\omega^2 r^2,    (1.7)

where ω is the angular frequency of the oscillator. The first notable difference is that the solution looks much more complicated,

\Psi_n(r) = \frac{1}{\sqrt{2^n n!}}\left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\frac{m\omega r^2}{2\hbar}}(-1)^n e^{\frac{m\omega}{\hbar}r^2}\frac{d^n}{dr^n}\left(e^{-\frac{m\omega}{\hbar}r^2}\right), \quad n = 0, 1, 2, \ldots    (1.8)

which is generally the case. The really interesting thing however, is to look at the energy of the solution

E_n = \hbar\omega\left(n + \frac{1}{2}\right),    (1.9)

which has become quantized, meaning that the oscillator only vibrates at certain energies that are equally spaced, much like the string of a guitar only vibrates at certain frequencies, from the fundamental harmonic and upwards through its multiples. Figure 1.1 (a) illustrates the motion of a classical harmonic oscillator and the curve indicates where the moving body is at a given time on the t-axis. In contrast, (b) illustrates a quantum harmonic oscillator where the moving body can no longer be said to be at a certain place at a given point; instead the curves indicate what probability amplitude, Ψ(r), and what probability, |Ψ(r)|², the body has to be at a given point for the different energy levels available. These energy levels all correspond to a quantum state of the oscillator and to switch between them the oscillator has to give or be given an amount of energy equal to the level spacing. Since the quantum mechanical body has to be somewhere, the area under a probability curve sums up to one, ∫|Ψ(r)|² dr = 1.
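Both the normalization to one and the equal level spacing can be verified numerically. The sketch below writes the oscillator eigenfunctions in their equivalent Hermite-polynomial form (the same functions as in eq. (1.8)) and works in natural units, ħ = m = ω = 1, purely for illustration:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

hbar = m = omega = 1.0  # natural units, purely for illustration

def psi(n, r):
    """Oscillator eigenfunction written with Hermite polynomials H_n,
    an equivalent form of eq. (1.8)."""
    norm = (m * omega / (np.pi * hbar)) ** 0.25 / math.sqrt(2.0 ** n * math.factorial(n))
    xi = np.sqrt(m * omega / hbar) * r
    return norm * hermval(xi, [0.0] * n + [1.0]) * np.exp(-xi ** 2 / 2)

r = np.linspace(-10.0, 10.0, 4001)
dr = r[1] - r[0]
for n in range(4):
    area = float(np.sum(psi(n, r) ** 2) * dr)  # area under the probability curve
    assert abs(area - 1.0) < 1e-6              # "sums up to one"

E = [hbar * omega * (k + 0.5) for k in range(4)]   # eq. (1.9)
assert all(abs((E[k + 1] - E[k]) - hbar * omega) < 1e-12 for k in range(3))
```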

2. Model making in many body systems

In the previous section the fundamental laws of physics - classical and quantum mechanics - were touched upon. These laws are precisely formulated and perfectly describe our micro- and macroscopic world up to the most extreme circumstances regarding energy and gravitational fields. Consequently, most physicists do not deal with the advancement of fundamental physics itself but with the complexity of interacting systems. In mathematical terms these systems are often defined by a Hamiltonian of the form

H = H_0 + H_I,    (2.1)

where H is the total Hamiltonian including interactions, H_0 is a Hamiltonian with known solutions and H_I accounts for the interactions. Obviously, finding solutions to the full Hamiltonian is the aim of theoretical studies of a particular problem. When quantum mechanics was first formulated, most problems that could conceivably yield analytical solutions were worked out very rapidly, e.g. the electron wave function and energy levels of hydrogen, the quantum harmonic oscillator, etc. Today researchers face problems that almost without exception demand approximate solutions, often given through the aid of computers. This is not surprising since the description of a real physical system generally concerns more than two interacting particles. Even classically, the limit for obtaining a closed analytical solution, that can predict the system at any given time, is two interacting particles. Hence, the eigenstates of the Hamiltonian (2.1), that constitute the solutions, often need to be sought by means of some perturbative expansion. These are methods whose use is limited to situations where the interaction can be considered weak, such that a solution can be written as a series expansion where successive terms include higher orders of the interaction and can be neglected.
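A minimal illustration of such a perturbative expansion, not taken from the thesis, is a two-level system with a weak off-diagonal coupling: the second-order Rayleigh-Schrödinger energy already lies close to the exact ground-state energy, and successive corrections shrink with powers of the coupling:

```python
import numpy as np

# Toy two-level system, H = H0 + HI, with a weak off-diagonal coupling v.
# (Illustrative numbers, not a system from the thesis.)
E0, E1, v = 0.0, 1.0, 0.05
H0 = np.diag([E0, E1])
HI = np.array([[0.0, v], [v, 0.0]])

exact = np.linalg.eigvalsh(H0 + HI)[0]   # exact ground-state energy

# Rayleigh-Schrodinger series: E = E0 + <0|HI|0> + |<1|HI|0>|^2/(E0 - E1) + ...
# The first-order term vanishes here since HI is purely off-diagonal.
second_order = E0 + v ** 2 / (E0 - E1)

# successive terms carry higher powers of v, so truncation works when v is small
assert abs(exact - second_order) < 1e-4
```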

This thesis compiles work where the perturbative methods are given within a Green function formalism, detailed in section 3. The problems considered are all of a many body character, composed of enough particles that fluctuations from the average are small, to which a quantum field theoretic, or second quantization, description is suited.

In quantum field theory fields are quantized into operators that create or destroy particle excitations of the field. The electromagnetic field is an example that can be quantized. In the process the field becomes an operator that acts on the quantum state of the field to generate or destroy the field excitations that are photons. For the "matter"-field given by the wave function defined in (1.5) above, the quantization promotes it to an operator that creates or destroys the field excitations that, for example, are given by electrons. Hence, while the wave function can give the probability for a particle to be at a specific point, the field theoretic operator can give the density of electrons at a given point.

2.1 Hamiltonian descriptions

The basic methodology applied in the studies of this thesis can be described in two steps. First, an interesting problem is identified and a model Hamiltonian, that includes the interactions needed to replicate the important features of the problem, is constructed. Second, a Green function formalism is applied to that Hamiltonian in order to reach the solution. This process boils down to finding a balance between the complexity of the Hamiltonian and the number of interactions deemed necessary to find a solution that has the sought features. The process is not foolproof and sometimes the results fail to give the desired answers. It may then be difficult to see whether the model Hamiltonian is to blame or if the Green functions were truncated at too low an order.

The thesis includes tunnel junction and scattering studies where material specific properties are of secondary importance. Materials are therefore purposefully replicated as metals or superconductors with ideal dispersion and density of states (DOS). Electrons within all metallic conductors are hence governed by the free particle Hamiltonian

H_0 = \sum_{k\sigma}\varepsilon_k c^\dagger_{k\sigma}c_{k\sigma},    (2.2)

where c^\dagger_{k\sigma} is the creation operator and c_{k\sigma} is the annihilation operator. The Planck constant ħ is generally set to equal 1 to simplify notation, the subscript k is the wave vector or momentum vector and σ is the spin index. The superconducting state is given by the Bardeen, Cooper and Schrieffer (BCS) Hamiltonian, detailed in section 6.

Electron or quasiparticle tunneling is treated as scattering off of a tunneling potential, through an interaction Hamiltonian detailed in section 4.2. Other interactions include energy exchange with magnetic moments, which are detailed below.

The full Hamiltonian is also the generator of time evolution for a quantum mechanical system. In the following text equations are written in either the Heisenberg picture or the interaction picture. In the Heisenberg picture the time dependence of an operator is given by

c_{k\sigma}(t) = e^{iHt}c_{k\sigma}e^{-iHt},    (2.3)

while the state vectors are constant in time. In the interaction picture operators evolve in time as

c_{k\sigma}(t) = e^{iH_0 t}c_{k\sigma}e^{-iH_0 t},    (2.4)

and state vectors as

|k\sigma(t)\rangle = e^{iH_0 t}e^{-i(H_0 + H_I)t}|k\sigma(0)\rangle.    (2.5)
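The Heisenberg-picture evolution (2.3) can be made concrete for a single fermionic level. The sketch below is an illustrative model (H = ε c†c, not a system from the thesis) in which the Heisenberg operator reduces to a simple phase rotation:

```python
import numpy as np

# Single fermionic level: represent c as a 2x2 matrix and take H = eps * c†c.
# (Illustrative model, not from the thesis.)
c = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
eps, t = 0.7, 1.3
H = eps * (c.conj().T @ c)

# e^{iHt} via the eigendecomposition of the Hermitian H
w, P = np.linalg.eigh(H)
U = P @ np.diag(np.exp(1j * w * t)) @ P.conj().T

c_t = U @ c @ U.conj().T   # eq. (2.3): c(t) = e^{iHt} c e^{-iHt}

# For this H the Heisenberg operator is just a phase rotation of c
assert np.allclose(c_t, np.exp(-1j * eps * t) * c)
```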

2.2 Spin Hamiltonian

The Hamiltonian for a generic spin situated in an external magnetic field, B, and an anisotropy field may be written

H_S = -g\mu_B\mathbf{B}\cdot\mathbf{S} + DS_z^2 + E\left(S_x^2 - S_y^2\right),    (2.6)

where g is the g-factor, μ_B is the Bohr magneton, S = (S_x, S_y, S_z) is the spin vector and D and E are the uniaxial and the transverse anisotropy energies, defining the anisotropy field [3]. The first term of the Hamiltonian Zeeman-splits the spin eigenstates in energy by an amount proportional to the field strength and the spin will in general align itself with the direction of the magnetic field. The second term is a phenomenological representation of the uniaxial anisotropy field whose energy D splits the spin state energy levels in a twofold degenerate way,

D < 0 : E_{\pm N} < E_{\pm N-1} < E_{\pm N-2} < \ldots,
        E_{\pm N/2} < E_{\pm N/2-1} < E_{\pm N/2-2} < \ldots, \quad N = \text{odd},
D > 0 : E_{\pm N} > E_{\pm N-1} > E_{\pm N-2} > \ldots,
        E_{\pm N/2} > E_{\pm N/2-1} > E_{\pm N/2-2} > \ldots, \quad N = \text{odd}.    (2.7)

For D < 0 the state with the largest spin projection onto the Z-axis clearly has the lowest energy, which sets a favored spin direction along the Z-axis, often referred to as the "easy-axis". For D > 0 on the other hand, the spin state with the smallest spin projection onto the Z-axis is favored and the spin will prefer to lie in the XY-plane, often referred to as an "easy-plane".

The last term in the spin Hamiltonian represents the transverse anisotropy and it expresses the difference between the X and Y directions. However, since it fails to commute with S_z it mixes the spin states into linear combinations. The mixed states prevent any axis from being uniquely easy or hard, and by convention the axis directions are assigned such that |D| is maximized and E > 0.
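For a concrete picture of the easy-axis and state-mixing behavior described above, the sketch below diagonalizes eq. (2.6) for a spin-1 moment with B = 0 (illustrative parameter values, not from the thesis):

```python
import numpy as np

# Spin-1 matrices
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def spin_levels(D, E):
    # eq. (2.6) with B = 0: H_S = D Sz^2 + E (Sx^2 - Sy^2)
    H = D * Sz @ Sz + E * (Sx @ Sx - Sy @ Sy)
    return np.linalg.eigvalsh(H)

# Easy-axis case (D < 0, E = 0): the m = ±1 doublet lies below m = 0
assert np.allclose(spin_levels(-1.0, 0.0), [-1.0, -1.0, 0.0])

# A transverse term E != 0 mixes the states and splits the doublet
levels = spin_levels(-1.0, 0.1)
assert abs(levels[0] - levels[1]) > 1e-9
```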


3. Green functions in many body physics

In first quantization the wave function is the premier quantity sought, from which all measurable properties of a system can be found. In second quantization, where we deal with many-body systems, the Green functions serve this purpose to a large extent. Green functions are often called propagators because they give the probability amplitude for a particle appearing at position x at time t to be at position x' at a different time t'. Through this ability Green functions probe the entire space for a considered test-particle, accounting for interactions with other particles and external potentials, to map out how the density of particles distributes in space at given times. For thorough reviews of Green functions, see references [9, 10].

3.1 General background to Green functions

The term Green function originally comes from mathematics, where they are also referred to as impulse response functions, because they solve inhomogeneous differential equations by measuring momentary impulses that may then be integrated over. Mathematically, if the differential operator L(r) acts on a function such that

L(r)f(r) = u(r),    (3.1)

then the Green function solves the related problem

L(r)G(r, r') = \delta(r - r'),    (3.2)

where δ(r - r') is the Dirac delta function. The solution to the sought function f(r) is then found by solving the integral equation

f(r) = \int G(r, r')u(r')\,dr'.    (3.3)
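Equations (3.1)-(3.3) are easy to verify on a grid. The sketch below is an illustrative discretization, not taken from the thesis: it takes L = -d²/dx² with Dirichlet boundaries, builds the discrete Green function, and checks that f(r) = ∫G(r,r')u(r')dr' indeed solves Lf = u:

```python
import numpy as np

# Discretize L = -d^2/dx^2 on N interior points of (0, 1), Dirichlet boundaries.
# (Illustrative discretization, not taken from the thesis.)
N = 200
h = 1.0 / (N + 1)
L = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2

# On the grid the Dirac delta becomes I/h, so the discrete Green function
# solving L G = delta is G = inv(L)/h, cf. eq. (3.2).
G = np.linalg.inv(L) / h

x = np.linspace(h, 1.0 - h, N)
u = np.sin(np.pi * x)          # source term u(r)

f = (G * h) @ u                # f(r) = sum_j G(r, r_j) u(r_j) h, cf. eq. (3.3)
assert np.allclose(L @ f, u)   # f indeed solves L f = u, eq. (3.1)
```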

In quantum mechanics we are often looking for the solution to the similar problem,

\left[H_0 + V(r)\right]\Psi(r,t) = i\partial_t\Psi(r,t),    (3.4)

where the solutions ψ(r,t) to H_0 are known, V(r) accounts for additional external potentials, and a time dependence t is included. Inspired by eq. (3.2) we may then define Green functions that satisfy

\left[i\partial_t - H_0\right]g(x, x') = \delta(x - x'),    (3.5)


where x = (r,t), the lower case g(x, x') signifies that it solves the equation without V(x) and the upper case G(x, x') signifies that it solves the equation with full interactions,

\left[i\partial_t - H_0 - V(x)\right]G(x, x') = \delta(x - x').    (3.6)

g(x) and G(x) will from now on be referred to as bare and dressed Green functions respectively. The Schrödinger equation (3.4) may be used to connect the wave functions with the Green functions,

\Psi(x,t) = \psi(x,t) + \int dx'dt'\,g(x, x'; t, t')V(x')\Psi(x', t'),    (3.7)

\Psi(x,t) = \psi(x,t) + \int dx'dt'\,G(x, x'; t, t')V(x')\psi(x', t'),    (3.8)

through two equivalent integral equations, that include either the bare or the dressed Green function. These equations are seemingly as difficult to solve as the original equation (3.4). By inspection it is, however, easy to see that an iterative process may be employed on eq. (3.7) to generate an infinite series,

\Psi = \psi + gV\psi + gVgV\psi + gVgVgV\psi + \cdots
     = \psi + (g + gVg + gVgVg + \cdots)V\psi,    (3.9)

where the implied integrals and the variable dependence have been excluded for clarity. In the last step eq. (3.8) is regained if we equate

G = g + gVg + gVgVg + gVgVgVg + \cdots
  = g + gV(g + gVg + gVgVg + gVgVgVg + \cdots),    (3.10)

where the expression in the parenthesis is once again nothing but G. This equation,

G = g + gVG,    (3.11)

is known as the Dyson equation, and for all convergent series expansions we now have a powerful and foolproof way of obtaining the Green function, G, in a perturbative manner.
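The Dyson equation can be checked numerically at a fixed energy z, where g = (z - H0)⁻¹ and G = (z - H0 - V)⁻¹. The sketch below (illustrative matrices, not from the thesis) sums the series (3.10) and confirms it converges to the exact G, which also satisfies (3.11):

```python
import numpy as np

# Resolvents at a fixed complex energy z: g = (z - H0)^{-1}, G = (z - H0 - V)^{-1}.
# (Illustrative matrices, not from the thesis.)
rng = np.random.default_rng(0)
n, z = 4, 2.5 + 0.1j
H0 = np.diag([0.0, 0.5, 1.0, 1.5])
V = 0.05 * rng.standard_normal((n, n))
V = (V + V.T) / 2                  # weak, symmetric perturbation

I = np.eye(n)
g = np.linalg.inv(z * I - H0)      # bare Green function
G = np.linalg.inv(z * I - H0 - V)  # dressed Green function, exact

# Sum the series G = g + gVg + gVgVg + ..., eq. (3.10)
G_series, term = g.copy(), g.copy()
for _ in range(50):
    term = g @ V @ term
    G_series = G_series + term

assert np.allclose(G, G_series)       # the series converges to the exact G
assert np.allclose(G, g + g @ V @ G)  # and G fulfills the Dyson equation (3.11)
```

The series converges here because V is weak compared to the distance from z to the spectrum of H0, which is exactly the "weak interaction" condition stated above.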

To see why the Green function is referred to as a propagator in mathematical terms, we observe that the probability amplitude for a particle located at x' at time t' to be at the position x at time t can be found by applying the time evolution operator in the Schrödinger picture, U(t, t') = exp[-iH(t - t')], to an initial quantum state at time t',

|\alpha, t\rangle = e^{-iH(t - t')}|\alpha, t'\rangle,    (3.12)

where H is the total Hamiltonian H = H_0 + V. If a complete set of position states, \int dx'|x'\rangle\langle x'|, and eigenstates, \sum_a |a\rangle\langle a|, are introduced while we multiply the equation from the left with the position state \langle x|, we get

\langle x|\alpha, t\rangle = \sum_a \int dx'dt'\,\langle x|a\rangle\langle a|x'\rangle\langle x'|\alpha, t'\rangle e^{-iE_a(t - t')},    (3.13)


which equates to

\Psi(x,t) = \int dx'dt'\,G^r(x, x'; t, t')\Psi(x', t'),    (3.14)

if G^r(x, x'; t, t') = -i\theta(t - t')\langle x|e^{-iH(t - t')}|x'\rangle. This function incidentally satisfies equation (3.25) and is, hence, nothing but the Green function, which clearly acts on the wave function Ψ(x',t') and propagates it to Ψ(x,t). In the last derivation step the superscript r is added to the Green function to signify that we explicitly consider causal interactions, ensured by the Heaviside step function θ(t - t'), such that interactions occurring at t' only affect the system at later times t > t'. Green functions that satisfy this condition are called retarded, while anticausal Green functions, that are nonzero for t' > t only, are called advanced.

Functions structured like the Green functions will also be referred to as correlation functions in general, because they give the correlation between quantum states that may differ in other parameters (and quantum numbers) than position and time.

3.2 Green functions in many-body systems

In the previous section we defined the retarded Green function, G^r, that propagates a wave function through space from an earlier to a later time. In second quantization for many body systems the equivalent Green function,

G^r(x, x'; t, t') = -i\theta(t - t')\langle\{c_\lambda(x,t), c^\dagger_\lambda(x', t')\}\rangle,    (3.15)

will be used mainly by its own merits, which demands a careful interpretation of its physical meaning. In order to help form a mental picture and connect to a physical system, c is intentionally used to denote the creation and annihilation operators, which is commonly reserved for electrons. Note also that the position dependence makes c_\lambda(x,t) and c^\dagger_\lambda(x',t') field operators, strictly speaking. The use of "c" implies that we are dealing with fermions, confined to Fermi-Dirac statistics¹, which is why the anti-commutator, {A, B} = AB + BA, appears within the brackets. For bosons, confined to Bose-Einstein statistics², the usual commutator, [A, B] = AB - BA, needs to be used and "a" will denote a generic boson destruction operator following this point. The subscript λ is the collection of quantum numbers that define the specific particle considered - for electrons λ = p, σ generally, where p is the momentum and σ is the spin. At zero temperature the bras and kets, ⟨...⟩, represent the ground state of the system Hamiltonian, H, or the ground state of the unperturbed Hamiltonian, H_0. Which of these the bras and kets refer to is implied by their use. The Green

¹See equation 3.36 for the finite temperature dependence.
²See equation 3.37 for the finite temperature dependence.

(25)

functions of the unperturbed Hamiltonian is generally known and used to find an approximation for the unknown Green function of the full Hamiltonian.

To get a sense of the physical process the Green function describes, we look at how the operators act on occupied or unoccupied Fock states. When the creation operator acts on the state vector of the full Hamiltonian, as in θ(t−t′)⟨c_λ(x,t)c†_λ(x′,t′)⟩, at t′ the result is 0 if it tries to create a particle in a state that is already occupied, due to its fermionic properties. If the state is unoccupied, an electron is created at position x′ that will scatter until t, when the annihilation operator destroys the electron at position x. The scattering events modify the amount of probability amplitude that can be found at x a time t−t′ later, which is what the Green function measures.

If we instead consider θ(t′−t)⟨c†_λ(x′,t′)c_λ(x,t)⟩, the destruction operator is the first to act on the Fock state, at the earlier time t. In order to yield a nonzero result, the state defined by λ needs to be occupied in this case. This condition is fulfilled by all energy states below the Fermi energy, where the removal of a negatively charged particle creates a hole with positive charge. A Green function of this kind hence propagates a hole rather than an electron or, more strictly, gives the amplitude for a hole created at x to be at x′ a time t′−t later.

The order of the operators evidently plays an important role for which physical process the Green function describes, and since the operators act at different times, so does the ordering with respect to time. For electrons, causality dictates that operators of earlier events on the time line should be moved to the right of later events, regardless of the number of operators within the bracket. Since the operators follow commutation and anti-commutation relations, these movements must be done with care. In order to emphasize where time ordering should take place we introduce the time-ordering operator, T{…}, that ensures a causal behavior. The generalized Green function for the physical events discussed above is, for example, defined as

G^t(x,x′,t,t′) = −i⟨T{c_λ(x,t)c†_λ(x′,t′)}⟩
= −iθ(t−t′)⟨c_λ(x,t)c†_λ(x′,t′)⟩ + iθ(t′−t)⟨c†_λ(x′,t′)c_λ(x,t)⟩, (3.16)

where the time-ordering operator arranges the field operators according to the step functions. Note the different signs of the two terms, which are a consequence of the fermionic anti-commutation behavior, as opposed to the bosonic case where both terms share the same sign.

For a static system, only the time difference, t − t′, between the two operators is important. The time dependence therefore disappears if the two operators act on the state at equal times, and if the state vectors are those of H₀ we get

⟨c†_λ(t)c_λ(t)⟩₀ = ⟨n(λ)⟩₀ = f(ε_λ), (3.17)

at zero temperature. The number operator n(λ) = c†_λ c_λ simply counts the occupation of the state λ, which is either 1 or 0 at zero temperature, where the Fermi–Dirac distribution function is f(ε) = θ(−ε). At finite temperatures, however, where the Fermi–Dirac distribution governs the occupation of fermions, states may be occupied by fractions, as we shall see.

For a static system where the operators act on a state of the full Hamiltonian at different times, however, we need a way to connect these to the known states of the noninteracting Hamiltonian. By working in the interaction picture the time dependence is separated such that the interaction evolves the states while the unperturbed Hamiltonian evolves the operators. The interaction may then be turned on adiabatically in the infinite past to evolve the unperturbed state into the fully interacting state we seek. To regain the unperturbed state in the infinite future, the interaction is turned off adiabatically. This procedure is captured in the Gell-Mann–Low theorem, which states that there exists an operator

S(t,t′) = U(t)U†(t′) ⇒ ∂_t S(t,t′) = −iV(t)S(t,t′) ⇒ S(t,t′) = T{exp[−i ∫_{t′}^{t} dt₁ V(t₁)]}, (3.18)

where V(t₁) = e^{iH₀t₁} V e^{−iH₀t₁} and U(t) = exp(iH₀t)exp(−iHt) is the time-evolution operator in the interaction picture, that evolves the ground state of the unperturbed Hamiltonian into the ground state of the interacting Hamiltonian by the operation |Ψ₀⟩ = S(0,−∞)|ψ₀⟩ and, analogously, ⟨Ψ₀| = ⟨ψ₀|S(∞,0). The time-ordered Green function can then be written as

G^t(t,t′) = −i⟨Ψ₀|T{c_λ(t)c†_λ(t′)}|Ψ₀⟩
= −iθ(t−t′)⟨ψ₀|S(−∞,0)S(0,t)c_λ(t)S(t,0)S(0,t′)c†_λ(t′)S(t′,0)S(0,−∞)|ψ₀⟩
+ iθ(t′−t)⟨ψ₀|S(−∞,0)S(0,t′)c†_λ(t′)S(t′,0)S(0,t)c_λ(t)S(t,0)S(0,−∞)|ψ₀⟩. (3.19)

Both the bra and ket state vectors are now defined as the ground state in the infinite past, but we would ideally like to have the bra run to the infinite future, since it sets the upper integration limit of the S-matrix expansion in eq. (3.18). To fix this the bra state may be rewritten as,

⟨ψ₀|S(−∞,0) = ⟨ψ₀|S(∞,−∞)S(−∞,0) / ⟨ψ₀|S(∞,−∞)|ψ₀⟩ = ⟨ψ₀|S(∞,0) / ⟨ψ₀|S(∞,−∞)|ψ₀⟩, (3.20)

which allows us to simplify the time-ordered Green function (3.19) to

G^t(t,t′) = −i⟨ψ₀|T{c_λ(t)c†_λ(t′)S(∞,−∞)}|ψ₀⟩ / ⟨ψ₀|S(∞,−∞)|ψ₀⟩. (3.21)


The time-ordered Green function is now expressed in terms of known ground-state vectors, and the full Green function can be found in a perturbative fashion, or exactly, by expanding the S-matrix,

S(∞,−∞) = T exp[−i ∫_{−∞}^{∞} V(t′)dt′] = ∑_{n=0}^{∞} ((−i)^n/n!) ∫_{−∞}^{∞} dt₁···dt_n T{V(t₁)···V(t_n)}. (3.22)

It is important to remark here that the denominator in (3.21) in general produces an infinite sum of terms. These are referred to as vacuum polarization terms and show up as disconnected diagrams in the Feynman diagram representation. As luck would have it, however, these terms also show up in the numerator and completely cancel the denominator. A result of this cancellation is that the Green function may be calculated by discarding the denominator while keeping only the terms that correspond to connected Feynman diagrams. The higher orders in the S-matrix expansion will contain many creation and annihilation operators in even numbers. In order to separate these into pairs that we can interpret as Green functions, we use what is known as Wick's theorem. It states that an expression of several operators should be decoupled into a sum of all possible pairings, where each one is time ordered, e.g.

⟨T{c_α(t₁)c†_β(t₂)c_λ(t₃)c†_δ(t₄)}⟩₀ = ⟨T{c_α(t₁)c†_β(t₂)}⟩₀⟨T{c_λ(t₃)c†_δ(t₄)}⟩₀
− ⟨T{c_α(t₁)c†_δ(t₄)}⟩₀⟨T{c_λ(t₃)c†_β(t₂)}⟩₀ (3.23)

for fermions.

Under non-equilibrium conditions, or generally in cases where the studied system, initially known at t → −∞ as |ψ⟩, fails to return to that initial state as t → ∞, the state ⟨Ψ(t → ∞)| remains completely unknown. In order to avoid the reference to the infinite future and circumvent this problem, a method has been developed based on the idea that the integration path in the S-matrix expansion can be shifted slightly into the complex plane, just above or below the real time axis. The contour path of integration can then run from τ → −∞ + iδ, where the initial state is known, in a loop that smoothly transitions to the lower complex plane at τ = a, such that the contour follows the real time axis back toward the initial state along τ = t − iδ until t → ∞. When the limit a → ∞ is taken, all possible events may transpire in time, without having to refer to a final state in the infinite future. This mathematical trick may seem questionable at first glance but is actually well grounded and works for non-equilibrium as well as equilibrium conditions, at the expense of a somewhat raised mathematical complexity [11].


3.2.1 Variants of the many-body Green function

So far we have defined two different kinds of Green functions, important both as computational tools and because they carry direct physical meaning. In the previous section it was hinted that non-equilibrium situations demand an even richer Green function toolbox. Though not all calculations behind the papers compiled in this thesis were done within the non-equilibrium framework, the same Green function definitions are used throughout. The most commonly appearing ones, the lesser, G^<, and greater, G^>, Green functions, also serve as building blocks for the remaining four and are, for fermions, defined as

G^>(x,x′,t,t′) = −i⟨c_λ(x,t)c†_λ(x′,t′)⟩,
G^<(x,x′,t,t′) = i⟨c†_λ(x′,t′)c_λ(x,t)⟩, (3.24)

where the signs < and > indicate the time ordering of the operators. If the integration path runs along the complex contour around the real time axis, t > t′ implies that t′ is a point to the left of the later time t on the upper time contour. If the events indicated by t and t′ occur on the lower time contour, t > t′ means that the later time t marks a point to the left of t′. The other relevant Green functions are

G^t(x,x′,t,t′) = θ(t−t′)G^>(x,x′,t,t′) + θ(t′−t)G^<(x,x′,t,t′),
G^t̄(x,x′,t,t′) = θ(t′−t)G^>(x,x′,t,t′) + θ(t−t′)G^<(x,x′,t,t′),
G^r(x,x′,t,t′) = θ(t−t′)[G^>(x,x′,t,t′) − G^<(x,x′,t,t′)],
G^a(x,x′,t,t′) = θ(t′−t)[G^<(x,x′,t,t′) − G^>(x,x′,t,t′)], (3.25)

of which the time-ordered, G^t, and retarded, G^r, were already defined earlier. The anti-time-ordered Green function, G^t̄, is for leftward-bound complex time contours what the time-ordered Green function, G^t, is for rightward-bound paths. G^a, called the advanced Green function, is the time-order opposite of G^r.

The corresponding Green function lineup for bosons, such as phonons, can be generated using the expressions (3.25) above, if the lesser and greater Green functions are redefined,

D^>(x,x′,t,t′) = −i⟨Q_λ(x,t)Q†_λ(x′,t′)⟩,
D^<(x,x′,t,t′) = −i⟨Q_λ(x′,t′)Q_λ(x,t)⟩, (3.26)

to account for the ordinary commutation relation. Q_q = a_q + a†_{−q} is the Hermitian displacement operator for the coupled atoms of a solid.

3.2.2 Green functions at finite temperature

At zero temperature all fermions within a given system will occupy the lowest lying energy states one by one, while a similar set of bosons all share the single


lowest energy state. At finite temperature the system is often considered in the grand canonical ensemble, where contact with a large heat bath keeps the temperature of the system fixed while the particle number may fluctuate. Since the mean particle energy of the system is directly related to the temperature, several particles necessarily occupy higher energy states in some configuration, as opposed to the zero temperature case. For many-body systems the exact configuration of occupied states is never known, and because a large number of configurations share the same energy, no known single state closes the Green function. Instead we look at the average amplitude given by all possible states, weighted by the probability of finding the system in each one of them. Within the constraints of the canonical ensemble, where the studied system is in contact with a heat bath that may exchange thermal energy with the system but not particles, the probability of finding the system in a specific state is given by the Boltzmann distribution P(E_i) = exp(−βE_i)/Z, where the partition function is Z = ∑_i exp(−βE_i) and β = (k_B T)^{−1}, with k_B the Boltzmann constant. In the Green function formalism we introduce this temperature dependence by defining the density matrix operator

ρ = e^{−βH} = ∑_ν |ν⟩e^{−βE_ν}⟨ν| (3.27)

and changing the bracket definition in accordance with

⟨c_λ(t)c†_λ(t′)⟩ = ∑_ν ⟨ν|ρ c_λ(t)c†_λ(t′)|ν⟩ / ∑_ν ⟨ν|ρ|ν⟩. (3.28)

This correlation function now expresses the amplitude for a particle with quantum numbers λ, created at t′, to be destroyed at t, weighted by the probability that the acted-upon Fock states are occupied. Note also that the correlation function references one temperature only, which makes it viable to use in situations restricted to thermal equilibrium, where the Green function is the same between any equally spaced times. If the perturbative potential lacks any explicit time dependence, the Green function G(t,t′) can consequently be written G(t−t′), which permits us to go back and forth between time and frequency space through the Fourier transform. As it stands, the expression above is written in the Heisenberg picture, but λ is not generally an eigenstate of the total Hamiltonian, and if we want to use the S-matrix expansion in the interaction picture to access the unperturbed eigenstates, we see that the thermodynamic factor exp[−β(H₀+V)] also includes V in addition to the time-evolution operators exp[−it(H₀+V)]. A convenient way to expand both factors on equal footing is to redefine time by shifting it into the complex plane through it → τ, so that exp(−βH)exp(−itH) → exp[−(β+τ)H].

It is now possible to define a time-ordering operator, T_τ, for imaginary times τ that works just like the time-ordering operator for real time, e.g. T_τ{ĉ(τ)ĉ†(τ′)} equals ĉ(τ)ĉ†(τ′) if τ > τ′ and ĉ†(τ′)ĉ(τ) if τ < τ′. The time-evolution operator for τ is simply defined as

Û(τ) = e^{H₀τ}e^{−Hτ} (3.29)

in direct correspondence to the real-time case, and the S-matrix,

Ŝ(τ,τ′) = Û(τ)Û^{−1}(τ′) ⇒ ∂_τ Ŝ(τ,τ′) = −V̂(τ)Ŝ(τ,τ′) ⇒ Ŝ(τ,τ′) = T_τ{exp[−∫_{τ′}^{τ} dτ₁ V̂(τ₁)]}, (3.30)

expands as expected. Note the absence of the imaginary unit in these expressions. The thermodynamic density matrix can now be expressed in terms of the Ŝ-matrix if we consider β to be an imaginary time, exp(−βH) = exp(−βH₀)Ŝ(β,0).

Through the mathematical trick of viewing β as an imaginary time a general time-ordered correlation function can be written

⟨T_τ{ĉ_λ(τ)ĉ†_λ(τ′)}⟩ = ∑_ν ⟨ν|e^{−βH} T_τ{ĉ_λ(τ)ĉ†_λ(τ′)}|ν⟩ / ∑_ν ⟨ν|e^{−βH}|ν⟩
= ∑_ν ⟨ν|e^{−βH₀}Ŝ(β,0) T_τ{Ŝ(0,τ)ĉ_λ(τ)Ŝ(τ,τ′)ĉ†_λ(τ′)Ŝ(τ′,0)}|ν⟩ / ∑_ν ⟨ν|e^{−βH₀}Ŝ(β,0)|ν⟩
= ⟨T_τ{Ŝ(β,0)ĉ_λ(τ)ĉ†_λ(τ′)}⟩₀ / ⟨Ŝ(β,0)⟩₀, (3.31)

where the properties of the Ŝ-matrix and the time-ordering operator have been used, and the brackets in the last step equal ⟨···⟩₀ = ∑_ν ⟨ν|exp(−βH₀)···|ν⟩. The correlation function can now be calculated, just as in the zero temperature case, by expanding the Ŝ-matrix,

⟨T_τ{ĉ_λ(τ)ĉ†_λ(τ′)}⟩ = [∑_{n=0}^{∞} ((−1)^n/n!) ∫₀^β dτ₁···dτ_n ⟨T_τ{ĉ_λ(τ)V̂(τ₁)···V̂(τ_n)ĉ†_λ(τ′)}⟩₀] / ⟨Ŝ(β,0)⟩₀, (3.32)

where the creation and annihilation operators now act on the unperturbed states. When defined as

G^M_λ(τ,τ′) ≡ −⟨T_τ{ĉ_λ(τ)ĉ†_λ(τ′)}⟩ (3.33)

this correlation function is referred to as the Matsubara Green function. To decouple averages in the sum of (3.32) that contain more than two operators in the correct way, we once again use Wick's theorem. The integration limits of the Ŝ-matrix expansion go from 0 to β in the imaginary-time formalism, and it is not as evident why the unperturbed states are recovered in these limits as it is when the temperature is 0. It can, however, be shown that


the imaginary time domain is limited to −β ≤ τ ≤ β for the Matsubara Green function. The cyclic property of the trace over states, ∑_ν ⟨ν|···|ν⟩, in fact forces the Matsubara Green function not only to depend solely on the time difference, such that G^M(τ,τ′) = G^M(τ−τ′), but also to abide by the property

G^M(τ) = G^M(τ+β) if −β < τ < 0 for bosons,
G^M(τ) = −G^M(τ+β) if −β < τ < 0 for fermions, (3.34)

and conversely for 0 < τ < β. According to Fourier analysis these properties allow for the definition of the Fourier transform

G^M(iω_n) = ∫₀^β dτ G^M(τ)e^{iω_nτ},
G^M(τ) = (1/β) ∑_n e^{−iω_nτ} G^M(iω_n),  ω_n = (2n+1)π/β, (3.35)

for fermions, where the Matsubara frequencies ω_n are odd multiples of π/β. For bosons the same relations hold with the even multiples, ω_n = 2nπ/β; see [9] for a thorough discussion. In frequency space, where most calculations are done, the Ŝ-matrix expansion of the Matsubara Green function can be rewritten as a Dyson equation that may be used to find the full function in a perturbative fashion.
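As a numerical aside (my own sketch, with arbitrary parameters), the frequency sum in (3.35) applied to the free-particle Matsubara function G^M(iω_n) = 1/(iω_n − ε) reproduces the thermal occupation of (3.36). Pairing the ±ω_n terms turns the conditionally convergent sum, whose convergence factor e^{iω_n·0⁺} contributes the 1/2, into a rapidly converging one:

```python
import numpy as np

beta, eps = 2.0, 0.5                      # hypothetical inverse temperature and energy
n = np.arange(0, 200000)
w_n = (2 * n + 1) * np.pi / beta          # positive fermionic Matsubara frequencies

# (1/beta) sum_n e^{i w_n 0+} / (i w_n - eps): pairing +w_n with -w_n leaves
# 1/2 plus a real, rapidly converging sum over positive frequencies only.
occupation = 0.5 - (2 * eps / beta) * np.sum(1.0 / (w_n**2 + eps**2))

f = 1.0 / (np.exp(beta * eps) + 1.0)      # Fermi-Dirac distribution, cf. (3.36)
assert np.isclose(occupation, f, atol=1e-4)
```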

We have now found ways to obtain the Matsubara Green function at finite temperatures in both imaginary time and frequency space. The Matsubara Green function does not, however, correspond to a physical quantity directly, and to be useful it needs to be translated into a Green function of real time. This process turns out to be very simple, and it is the reason why the Matsubara Green function is useful. Once the Matsubara Green function has been found in frequency space, the analytic continuation iω → ω + iδ immediately produces the Fourier transform of the retarded Green function defined in (3.25).

The Matsubara Green function method is not the only way to calculate the correct retarded Green function at finite temperatures. In real time, the time-ordered Green function may also be expanded into a Dyson-type equation by application of the Heisenberg equation of motion, i∂_t a(t) = [a,H](t). The retarded Green function can then be found by means of the non-equilibrium theory that we will look closer at in section 3.2.4 and ??.

A special, and very useful, case where results can be found at finite temperatures without resorting to the Matsubara or non-equilibrium methods concerns unperturbed excitations. The Green function of free electrons is, for example, ultimately what the more complicated interacting-electron expressions are expanded in. A free electron is defined by its quantum numbers λ = k,σ, where k is the wave vector, directly related to the electron momentum, and σ is the spin quantum number, which is either up or down. If we now look at the lesser Green function defined in (3.24) and consider a single point, such that x = x′, at one given time, such that t = t′, the operators within the electron states |k,σ⟩, of energy ε_k, form what is referred to as the occupation number operator, c†_{kσ}c_{kσ}. Using the definition of the thermal average, (3.28), with dropped spin indices, we can see that

⟨c†_k c_k⟩ = ∑_{k′} ⟨k′|e^{−βH₀}c†_k c_k|k′⟩ / ∑_{k′} ⟨k′|e^{−βH₀}|k′⟩
= ∑_{n_k=0}^{1} e^{−β∑_{k′≠k} ε_{k′}n_{k′} − βε_k n_k} n_k / ∑_{n_k=0}^{1} e^{−β∑_{k′≠k} ε_{k′}n_{k′} − βε_k n_k}
= (0 + e^{−β∑_{k′≠k} ε_{k′}n_{k′}} e^{−βε_k}) / (e^{−β∑_{k′≠k} ε_{k′}n_{k′}} + e^{−β∑_{k′≠k} ε_{k′}n_{k′}} e^{−βε_k})
= 1/(e^{βε_k} + 1) = f(ε_k), (3.36)

where the first term on the second row represents the case where the quantum state |k,σ⟩ is empty, n_k = ⟨k|c†_k c_k|k⟩ = 0, while the second term represents the case where said state is occupied. f(ε) is the Fermi–Dirac distribution function, which gives a measure of the probability for a fermion energy state to be occupied thermally. Note that this number may be fractional.
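The two-term average in (3.36) is easy to check numerically; the common factor from all other states cancels between numerator and denominator, leaving a sum over the two occupations n_k = 0, 1 (a sketch with made-up parameters):

```python
import numpy as np

beta, eps_k = 1.5, 0.8                 # hypothetical inverse temperature and energy

# Thermal average over the two fermionic occupations n_k = 0, 1 of the state |k>,
# as in the derivation of (3.36); the factor from all other states cancels.
n_k = np.array([0, 1])
weights = np.exp(-beta * eps_k * n_k)
n_avg = np.sum(n_k * weights) / np.sum(weights)

# The result is the Fermi-Dirac distribution.
assert np.isclose(n_avg, 1.0 / (np.exp(beta * eps_k) + 1.0))
```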

Bosons can occupy any quantum state with any non-negative integer occupation, n_k = 0, 1, 2, 3, …, and consequently distribute differently from fermions at finite temperatures. Using the average from (3.36), but with bosonic operators, it can be shown that

⟨a†_k a_k⟩ = 1/(e^{βε_k} − 1) = n_B(ε_k), (3.37)

see [10] for proof, where n_B(ε_k) is the Bose–Einstein distribution, which gives the average thermal occupation of a boson energy state.
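The same average with bosonic occupations n = 0, 1, 2, … (truncated numerically) reproduces the Bose–Einstein distribution of (3.37); the parameters below are made up:

```python
import numpy as np

beta, eps_q = 1.2, 0.6                 # hypothetical inverse temperature and energy

# Boson occupations 0, 1, 2, ... (truncated where the Boltzmann weight vanishes).
n = np.arange(0, 400)
weights = np.exp(-beta * eps_q * n)

n_avg = np.sum(n * weights) / np.sum(weights)

# The result is the Bose-Einstein distribution.
assert np.isclose(n_avg, 1.0 / (np.exp(beta * eps_q) - 1.0))
```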

A common system to study, one that relates to experimental measurements, is a system where the electric potential of a lead is varied up or down, relative to its Fermi energy, by an applied bias voltage. The measured quantity in such a setup is often the current flowing from the lead, through the part under examination, and onwards to an additional lead. Each lead consequently exchanges particles with the thermal reservoir in order to avoid building up or depleting charge, which means that the leads need to be treated within the grand canonical ensemble. In mathematical terms, a chemical potential, µ, is introduced to the Hamiltonian, H_G = H − µc†_λ c_λ, to account for the applied voltage, which is reflected in the distribution function as a shift of the energy variable, ε_k → ε_k − µ.

3.2.3 Green functions of free field excitations

Six Green functions were defined in section 3.2.1 that could all be written in terms of the lesser and greater Green functions. In section 3.2.2 we also saw that the space- and time-independent correlation function of the unperturbed Hamiltonian gives the thermal occupation at a given energy at finite temperatures under equilibrium conditions. If the Fock states of the average are eigenstates of the free-electron Hamiltonian, H₀ = ∑_k ε_k c†_k c_k, the time dependence is easily extracted as well, and we can write the Green functions as mathematical functions on closed form. These are very useful, since we ultimately use them to express the Green functions of interacting particles through some perturbative expansion. The greater and lesser Green functions for a free electron turn out to equal

G^{>(0)}_k(t,t′) = −i[1 − f(ε_k)]e^{−iε_k(t−t′)},
G^{<(0)}_k(t,t′) = i f(ε_k)e^{−iε_k(t−t′)}, (3.38)

while the time-ordered, anti-time-ordered, retarded and advanced ones are

G^{t(0)}_k(t,t′) = −i[θ(t−t′) − f(ε_k)]e^{−iε_k(t−t′)},
G^{t̄(0)}_k(t,t′) = −i[θ(t′−t) − f(ε_k)]e^{−iε_k(t−t′)},
G^{r(0)}_k(t,t′) = −iθ(t−t′)e^{−iε_k(t−t′)},
G^{a(0)}_k(t,t′) = iθ(t′−t)e^{−iε_k(t−t′)}. (3.39)

The free-electron Green functions obviously depend on the time difference, t−t′, only, and can hence be Fourier transformed into functions of frequency,

G^{>(0)}_k(ω) = −2πi[1 − f(ε_k)]δ(ω − ε_k),
G^{<(0)}_k(ω) = 2πi f(ε_k)δ(ω − ε_k),
G^{t(0)}_k(ω) = 1/(ω − ε_k + iδ_k),
G^{t̄(0)}_k(ω) = −1/(ω − ε_k − iδ_k),
G^{r(0)}_k(ω) = 1/(ω − ε_k + iδ),
G^{a(0)}_k(ω) = 1/(ω − ε_k − iδ), (3.40)

where δ is infinitesimal and positive, while δ_k is infinitesimal and negative if k < k_F but positive if k > k_F. For non-interacting phonons the equivalent Green functions in time are

D^>_q(t,t′) = −i{[n_B(ε_q) + 1]e^{−iω_q(t−t′)} + n_B(ε_q)e^{iω_q(t−t′)}},
D^<_q(t,t′) = −i{[n_B(ε_q) + 1]e^{iω_q(t−t′)} + n_B(ε_q)e^{−iω_q(t−t′)}},
D^r_q(t,t′) = −2θ(t−t′) sin[ω_q(t−t′)],
D^a_q(t,t′) = −2θ(t′−t) sin[ω_q(t′−t)],
D^t_q(t,t′) = −i{[n_B(ε_q) + θ(t′−t)]e^{iω_q(t−t′)} + [n_B(ε_q) + θ(t−t′)]e^{−iω_q(t−t′)}},
D^t̄_q(t,t′) = −i{[n_B(ε_q) + θ(t−t′)]e^{iω_q(t−t′)} + [n_B(ε_q) + θ(t′−t)]e^{−iω_q(t−t′)}}. (3.41)
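As a quick consistency check (my own, with arbitrary numbers), the closed forms (3.38)–(3.39) can be verified numerically against the definitions (3.25) for a single free-electron state:

```python
import numpy as np

def fermi(e, beta=5.0):
    # Fermi-Dirac distribution (hypothetical inverse temperature)
    return 1.0 / (np.exp(beta * e) + 1.0)

eps = 0.3          # free-electron energy eps_k (made-up value)
t, tp = 2.0, 0.5   # times t and t'
dt = t - tp

theta = lambda x: 1.0 if x > 0 else 0.0

# Greater and lesser functions of (3.38).
G_gt = -1j * (1 - fermi(eps)) * np.exp(-1j * eps * dt)
G_lt = 1j * fermi(eps) * np.exp(-1j * eps * dt)

# Retarded and time-ordered functions built from the definitions (3.25)...
G_r = theta(dt) * (G_gt - G_lt)
G_t = theta(dt) * G_gt + theta(-dt) * G_lt

# ...agree with the closed forms of (3.39).
assert np.isclose(G_r, -1j * theta(dt) * np.exp(-1j * eps * dt))
assert np.isclose(G_t, -1j * (theta(dt) - fermi(eps)) * np.exp(-1j * eps * dt))
```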


3.2.4 The Heisenberg equation of motion and the perturbative expansion

A closed expansion for the dressed Green function can often be found in the time domain by successively differentiating the Green function with respect to time and applying the Heisenberg equation of motion,

∂_t c_λ(t) = −i[c_λ(t), H], (3.42)

where H is the total Hamiltonian, which can be written H = ∑_λ ε_λ c†_λ c_λ + V as before. The time derivative of, e.g., a general fermion time-ordered Green function is

∂_t G^t_λ(t,t′) = ∂_t[−iθ(t−t′)⟨c_λ(t)c†_λ(t′)⟩ + iθ(t′−t)⟨c†_λ(t′)c_λ(t)⟩], (3.43)

which equates to

∂_t G^t_λ(t,t′) = −iδ(t−t′)⟨c_λ(t)c†_λ(t′)⟩ − θ(t−t′)⟨[c_λ,H](t)c†_λ(t′)⟩
− iδ(t−t′)⟨c†_λ(t′)c_λ(t)⟩ + θ(t′−t)⟨c†_λ(t′)[c_λ,H](t)⟩, (3.44)

and since

[c_λ,H] = ∑_{λ′} ε_{λ′}[c_λ, c†_{λ′}c_{λ′}] + [c_λ,V] = ε_λ c_λ + [c_λ,V], (3.45)

the equation may be written

(i∂_t − ε_λ)G^t_λ(t,t′) = δ(t−t′) − i⟨T{[c_λ,V](t)c†_λ(t′)}⟩. (3.46)

Now, if the bare time-ordered Green function is differentiated with respect to time, ∂_t g^t_λ(t,t′), the same process as above yields the relation

(i∂_t − ε_λ)g^t_λ(t,t′) = δ(t−t′), (3.47)

which can be identified in equation (3.46) above. Finally, by using f(x) = ∫ f(x′)δ(x−x′)dx′ and defining F^t_λ(t,t′) = −i⟨T{[c_λ,V](t)c†_λ(t′)}⟩, the dressed time-ordered Green function becomes

G^t_λ(t,t′) = g^t_λ(t,t′) + ∫ g^t_λ(t,t₁)F^t_λ(t₁,t′)dt₁. (3.48)

Depending on the interaction, F^t_λ either includes G^t_λ as a factor, closing the expression into an equation that may be solved iteratively, or F^t_λ(t,t′) must be differentiated with respect to time in the same manner as above to hopefully recover G^t_λ, such that

G^t_λ(t,t′) = g^t_λ(t,t′) + ∫ g^t_λ(t,t₁)Σ^t_λ(t₁,t₂)G^t_λ(t₂,t′)dt₁dt₂, (3.49)

where Σ^t_λ is the self-energy.


4. Tunnel junctions and scanning tunnelling microscopy

Tunnel junctions come in a wide variety of shapes and forms, as parts of experimental setups or in nanoscale electronics, designed to operate with many different purposes in mind. In 1975 tunneling magnetoresistance (TMR) was discovered in a magnetic tunnel junction (MTJ) [12] at low temperatures, which led to research that eventually blossomed with the discovery of giant magnetoresistance (GMR) in the late 1980s [5, 6], an effect that is utilized in all modern high-density hard disk drives. In multi-junction solar cells the tunnel junction provides a low-resistance separation between the n- and p-doped subcells [7]. For superconducting tunnel junctions, the Josephson effect is utilized in e.g. superconducting quantum interference devices (SQUIDs), which are magnetic field sensors of extremely high sensitivity [13, 14].

For experimental applications, devices such as break junctions offer a means to measure electrical currents through single molecules or chains of atoms [15]. A tunnel junction is also an essential part of scanning tunneling microscopy (STM), one of the most important recent experimental tools, with the ability to image surfaces of materials down to the atomic scale as well as to make very local spectroscopic measurements. Since STMs play an important role in the theoretical studies conducted for this thesis, a slightly more detailed description follows below.

Among these different devices the basic functionality sets a common denominator. In essence, two conductors are positioned close enough, on either side of an insulator, that electrons (quasiparticles) may tunnel quantum mechanically from one conductor to the other through the potential barrier of the insulator. The resulting tunneling current, determined by a number of factors such as the DOS spectra of the two leads, the temperature of the leads, the barrier of the insulator, the applied bias voltage and the magnetic properties of the two leads, is typically measured. For the purpose of experimental measurements, or in order to achieve some wanted functionality, an object of study that the tunneling electrons may interact with is often placed between the leads, within the insulating part. These objects may be e.g. quantum dots, molecules, or magnetic or nonmagnetic atoms.

4.1 Scanning tunnelling microscopy (STM)

In 1982 the first paper with experimental results from STM measurements was published [8], which later earned G. Binnig and H. Rohrer the Nobel Prize in 1986. The apparatus they had built was revolutionary because it provided a means to view the surface of a conductive material with atomic resolution. Later, in 1990, D.M. Eigler and E.K. Schweizer used an STM to move randomly placed xenon atoms on a plane nickel surface into organised patterns, thereby proving that the manipulation of individual atoms is possible [16]. Ever since, STM has continued to be an invaluable asset to physicists for probing local quantum mechanical phenomena, and development has continued with the realisation of superconducting [4] and spin-polarized STM [17].

STMs are fairly simple in principle. A very sharp tip of a conducting material, ideally ending in a single atom, is held in position, a few Å above a conducting sample material, by piezoelectric actuators that can move the tip in all directions. A small bias voltage can be applied over the tip-to-sample gap, which stretches the exponentially decaying tip and sample electron wave functions to overlap, so that a small probability of tunneling is achieved. For a given bias voltage the resulting tunneling current, I(r), will be proportional to the LDOS of the sample surface and the DOS of the tip [18]. When comparing theory with experiments, however, the differential conductance, dI/dV, is often considered, and by choosing a tip that has a fairly flat DOS in the energy range of the bias voltage, about the Fermi level, the differential conductance can be shown to equal the local DOS of the surface directly, to a good approximation.

Measurements can be performed in two different ways: either by keeping the tunneling current constant as the tip scans the sample surface, continually adjusting the tip height by feedback from the current, or by keeping the tip height constant in a fixed (x, y) position as the voltage is varied. The first method records the tip heights over a scanned portion of the sample surface, which is translated into a topographic picture of the LDOS. STM tunneling currents vary exponentially with the tip height, which makes the method sensitive to surface variations. The second method records the current variation in a fixed position for a range of tunneling electron energies. The current variation is then translated to give a spectral picture of the surface electron LDOS.
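A toy model of the constant-current mode (all numbers made up) shows why the recorded tip height traces the surface: solving I ∝ exp(−2κd) = I_set for the tip height at each lateral position keeps the tip–surface gap d fixed, so the height profile copies the corrugation:

```python
import numpy as np

kappa = 1.0                                    # inverse decay length (arb. units)
x = np.linspace(0, 10, 200)
surface = 0.2 * np.sin(2 * np.pi * x / 5)      # toy surface corrugation

def current(height, z_surface):
    # I ~ exp(-2*kappa*d), with d the tip-surface distance
    return np.exp(-2 * kappa * (height - z_surface))

# Constant-current mode: solve current(z_tip) = I_set for the tip height.
I_set = np.exp(-2 * kappa * 5.0)               # setpoint for a 5-unit gap
z_tip = surface - np.log(I_set) / (2 * kappa)

# The feedback keeps the gap fixed, so z_tip copies the corrugation.
assert np.allclose(current(z_tip, surface), I_set)
assert np.allclose(z_tip - surface, 5.0)
```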

The prevalent theory for basic interpretation of the images and spectra provided by an STM comes from Bardeen [19] and, later, from Tersoff and Hamann [18], applied specifically to STM. Bardeen presumes that the potential barrier can be treated as a perturbation and, truncating at the lowest order correction, retrieves Fermi's golden rule for transitions, which translates to

I = (2πe/ħ) ∑_{t,ν} |T_{tν}|² f(ε_t)[1 − f(ε_ν)] δ(ε_t + eV − ε_ν) (4.1)

for the tunnelling current in the tip-to-surface direction. Here e is the electron charge, t and ν label the tip and surface states, f(ε) is the Fermi function, and V gives the gap voltage. The tunnelling matrix element is found to be

T_{tν} = (ħ²/2m) ∫ dS · (ψ_t* ∇ψ_ν − ψ_ν ∇ψ_t*), (4.2)


if the integral is taken over a surface within the potential barrier that separates the tip wave function from the surface wave function, such that they are described by their own Hamiltonians. Expression (4.1) is fairly intuitive: the delta function assures that only tip states, elevated by eV, with a matching surface state are accounted for. The Fermi function f(ε_t) gives the probability for a tip state with energy ε_t to be occupied by an electron, while 1 − f(ε_ν) gives the probability for a surface state to be unoccupied. The matrix element T_{tν} squared gives the quantum mechanical probability for a tip state ψ_t to transition to a surface state ψ_ν.
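Evaluating (4.1) on a discretized energy grid (with ħ = e = 1, a constant matrix element T₀, and flat, unit densities of states — all assumptions of this sketch, not of the thesis) shows how the delta function removes one of the sums and how, at low temperature, the Fermi factors confine the current to the bias window:

```python
import numpy as np

beta, V_bias, T0 = 20.0, 0.3, 0.05       # hypothetical parameters; hbar = e = 1

eps = np.linspace(-2.0, 2.0, 4001)       # energy grid for the tip states
de = eps[1] - eps[0]
f = lambda e: 1.0 / (np.exp(beta * e) + 1.0)

# The delta function fixes the surface-state energy to eps + eV,
# so the double sum in (4.1) collapses to a single energy integral.
I = 2 * np.pi * T0**2 * np.sum(f(eps) * (1 - f(eps + V_bias))) * de

# At low temperature only the window -eV < eps < 0 contributes,
# giving I close to 2*pi*T0^2*eV for unit densities of states.
assert np.isclose(I, 2 * np.pi * T0**2 * V_bias, rtol=0.02)
```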

Calculating I(r) from expression (4.1) essentially comes down to solving

the integral |Tt,ν(r)| but with a few approximations appropriate to real

exper-iments a simpler and more attainable expression is reached. First off, if the tip of the STM is treated as a point source, confining the tip wave function, the transition matrix element will depend only in the surface wave function at

the tip and |Tt,ν|2∝ |ψν|2. Then assuming experiments are done in low

tem-peratures with small gap voltages the Fermi functions can be replaced by step functions, which is true in the limit. The total expression will then be non zero only in the energy gap 0 < ε < eV . Furthermore, by taking the continuum limit of the discreet tip state summation one ends up with,

I(r0) ∝ Z eV 0 ρt (ε)

ν |ψν(r)0|2δ (ε − εν)dε, (4.3)

where ρ_t(ε) is the tip density of states and r_0 represents the position of the tip in relation to the surface. The local density of states of the substrate surface can be identified within the integral and (4.3) takes the short form,

I(\mathbf{r}) \propto \int_0^{eV} \mathrm{LDOS}(\mathbf{r}, \varepsilon)\, d\varepsilon, \qquad (4.4)

if ρ_t(ε) is considered constant over the energy interval. By looking at the differential conductance it is clear that

\frac{dI(\mathbf{r}, \varepsilon)}{dV} \propto \mathrm{LDOS}(\mathbf{r}, \varepsilon), \qquad (4.5)

as stated above.
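The chain (4.3)-(4.5) is straightforward to verify numerically. A minimal sketch, assuming a model LDOS in the form of a single Lorentzian resonance (an illustrative choice, not taken from the text): integrating it from 0 to eV as in (4.4) and then differentiating with respect to V recovers the LDOS, as (4.5) states.

```python
import numpy as np

def ldos(e, e0=0.25, gamma=0.05):
    """Model LDOS at a fixed tip position: one Lorentzian resonance."""
    return (gamma / np.pi) / ((e - e0)**2 + gamma**2)

V = np.linspace(0.0, 0.6, 1201)     # bias grid (energies in units of eV, e = 1)
# (4.4): the current is the LDOS integrated from 0 up to the bias energy
segments = 0.5 * (ldos(V[1:]) + ldos(V[:-1])) * np.diff(V)   # trapezoid rule
I = np.concatenate(([0.0], np.cumsum(segments)))
# (4.5): the differential conductance recovers the LDOS
dIdV = np.gradient(I, V)
err = np.max(np.abs(dIdV[1:-1] - ldos(V[1:-1]))) / np.max(ldos(V))
print(err < 1e-3)   # True
```

This is the numerical analogue of how dI/dV spectra are read in practice: the measured differential conductance is taken as a map of the surface LDOS at the tip position.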

4.2 Theoretical description of tunnel junctions

The theoretical framework used to describe tunnel junctions within the papers published for this thesis rests on a Hamiltonian description, in the spirit of Cohen et al. [20], where the conducting leads are treated as mathematically separated and semi-infinite, each having its own heat bath. The leads donate/accept an arbitrary number of electrons without changing their internal state. The only connection between the leads is accounted for by a tunneling term of the kind

\sum_{kq\sigma} T_{kq}\, c_{k\sigma}^\dagger c_{q\sigma} + \mathrm{H.C.} \qquad (4.6)

that destroys an electron in one lead, represented by the momentum subscript q, after which an electron is created in the other lead, represented by the momentum subscript k. H.C. is the hermitian conjugate of the first term and takes care of the opposite process, where electrons tunnel in the other direction. T_{kq} is the tunneling matrix element that sets the rate of tunneling. It is, again, given by the overlap between the wave functions of the two leads. As such, it varies exponentially with the distance between the leads, apart from being dependent on the particle energy and the wave number. For small bias voltages, eV, in relation to the Fermi energy E_F, however, T_{kq} is often treated as constant, as variations in T_{kq} are on the order of E/E_F and k/k_F [9].
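The exponential distance dependence mentioned above is what gives the STM its extreme vertical sensitivity. As a rough sketch, assuming a square barrier whose height is set by a typical metal work function of about 4.5 eV (an illustrative number, not specified in the text), the decay constant κ = √(2mφ)/ħ makes |T|² ∝ e^(−2κd) change by roughly an order of magnitude per ångström:

```python
import numpy as np

HBAR = 1.0545718e-34    # J s
M_E = 9.1093837e-31     # electron mass, kg
EV = 1.602176634e-19    # J per eV

def decay_constant(phi_ev):
    """kappa = sqrt(2 m phi) / hbar for a square barrier of height phi."""
    return np.sqrt(2.0 * M_E * phi_ev * EV) / HBAR   # in 1/m

kappa = decay_constant(4.5)            # typical metal work function (assumed)
# |T|^2 ~ exp(-2 kappa d): relative change over one Angstrom of gap distance
ratio = np.exp(2.0 * kappa * 1e-10)
print(5.0 < ratio < 15.0)   # True: roughly an order of magnitude per Angstrom
```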

The method of obtaining a closed mathematical expression for the tunneling current is most easily illustrated by an example. Following the steps of Mahan [9], suppose a system of two metallic leads, denoted by the subscripts L and R for left and right, that are separated by an insulating gap, is determined by the total Hamiltonian

H = H_L + H_R + H_T = \sum_k \varepsilon_k c_k^\dagger c_k + \sum_q \varepsilon_q c_q^\dagger c_q + \sum_{kq} T_{kq}\, c_k^\dagger c_q + \sum_{kq} T_{kq}^*\, c_q^\dagger c_k \qquad (4.7)

where the conduction electrons in each lead are considered free and the spin indices are removed.
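In a finite single-particle basis the structure of (4.7) can be displayed as a block matrix, with the lead dispersions on the diagonal blocks and the tunneling amplitudes on the off-diagonal blocks. A small sketch (the dispersions and the constant T_kq are arbitrary illustrative numbers) also makes explicit why the hermitian-conjugate term is needed:

```python
import numpy as np

n_L, n_R = 4, 4
eps_k = np.linspace(-1.0, 1.0, n_L)   # left-lead energies (illustrative)
eps_q = np.linspace(-1.0, 1.0, n_R)   # right-lead energies (illustrative)
T = 0.1 * np.ones((n_L, n_R))         # constant tunneling amplitudes T_kq

# single-particle matrix of (4.7): lead blocks on the diagonal,
# the tunneling term and its hermitian conjugate off the diagonal
H = np.block([
    [np.diag(eps_k), T],
    [T.conj().T, np.diag(eps_q)],
])

print(np.allclose(H, H.conj().T))   # True: the H.C. term makes H Hermitian
```

Dropping the conjugate block would leave a non-Hermitian matrix, i.e. an unphysical Hamiltonian.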

The current of tunneling electrons flowing through the system in response to an applied bias voltage can then be expressed as the rate of change in electron number for either of the leads: the number of electrons removed from one lead must end up in the other, and the average rate at which this happens is the definition of the current. This statement translated into mathematics reads

I_L(t) = -e \langle \dot{N}_L(t) \rangle, \qquad (4.8)

where the left side has been chosen to define positive and negative current and N_L = \sum_k c_k^\dagger c_k is the number operator. Using the Heisenberg equation of motion, \dot{N}_L(t) = i[H, N_L](t), the current can be written as

I_L(t) = -ie \sum_{kq} \left( T_{kq} \langle c_k^\dagger(t) c_q(t) \rangle - T_{kq}^* \langle c_q^\dagger(t) c_k(t) \rangle \right) = 2e\, \mathrm{Im} \sum_{kq} T_{kq} \langle c_k^\dagger(t) c_q(t) \rangle \qquad (4.9)
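The last step rests on ⟨c_q†(t)c_k(t)⟩ = ⟨c_k†(t)c_q(t)⟩*, so the bracketed difference reduces via the identity −i(z − z*) = 2 Im z, valid for any complex number z, here with z = T_kq⟨c_k†(t)c_q(t)⟩. A one-line numerical check:

```python
import numpy as np

rng = np.random.default_rng(0)
# z stands for T_kq <c_k^dagger(t) c_q(t)>, an arbitrary complex number
z = rng.standard_normal(100) + 1j * rng.standard_normal(100)
lhs = -1j * (z - z.conj())   # the bracketed difference of the two terms
rhs = 2.0 * z.imag           # twice the imaginary part
print(np.allclose(lhs, rhs))   # True
```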

References
