
LULEÅ UNIVERSITY OF TECHNOLOGY

The Fundamental Structure of Matter

JOHAN HANSSON

Division of Physics

1998:04 • ISSN: 1402-1544 • ISRN: LTU-DT--98/4--SE


The Fundamental Structure of Matter

by

Johan Hansson

Academic dissertation

which, by due permission of the Faculty Board of Engineering at Luleå University of Technology, will be publicly defended, for the degree of Doctor of Technology, in room E243 of the university, on Friday 27 March 1998, at 10.00.

The faculty opponent is Professor Boris Kopeliovich, Max-Planck-Institut für Kernphysik, Heidelberg, Germany.

The thesis will be defended in English.

Doctoral Thesis 1998:04 • ISSN: 1402-1544 • ISRN: LTU-DT--98/4--SE


Doctoral Thesis

The Fundamental Structure of Matter

JOHAN HANSSON

Division of Physics
Luleå University of Technology
SE-971 87 Luleå, Sweden

Luleå 1998


The Fundamental Structure of Matter

Johan Hansson
Division of Physics
Luleå University of Technology
SE-971 87 Luleå, Sweden
(February 1998)

ABSTRACT

The subject of this thesis is "the fundamental structure of matter", that is, the quest of understanding the deepest level of the physical world, and the interactions relevant at that level. The hope is that, as one goes down deeper, the laws are going to be simpler, not necessarily in mathematical terms, but in conceptual terms. The goal is fewer and fewer ad hoc assumptions, inspiring and driving the pursuit for the fundamental structure of matter.

The thesis consists of an introductory part, giving a broad overview of where the subject stands today, and of a more detailed part, containing our own contributions to the advances of this knowledge. Six reproduced papers are appended at the end. There we treat the fundamental structure of matter on three different levels. The first three papers are concerned with the inner structure of particles (hadrons) that interact via the strong nuclear force. Here we have investigated the interactions of the so-called quarks inside hadrons, taking into account also their spin structure. Besides protons and neutrons, we have also studied more exotic particles containing quarks, so-called mesons, that are only produced in high-energy collision processes. On a more fundamental, but speculative, level we have constructed a new model for an underlying substructure common to both quarks and leptons (particles unaffected by the strong interaction), i.e., all particles that build up matter. We also investigate some of the physical consequences of this model, particularly the possibility of radiative neutrino decay. On the large scale, we analyse the origin of the so-called dark matter in the Universe, which we propose is composed of enormous lumps exclusively made of quarks, without any "normal" hadrons. We also explore the connection of this phenomenon to the mysterious bursts of gamma-rays seen in astrophysics.


There are therefore agents in nature able to make the particles of bodies stick together by very strong attractions. And it is the business of experimental philosophy to find them out.

Isaac Newton

Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it.

Albert Einstein

We are not likely to know the right questions until we are close to knowing the answers.

Steven Weinberg

42

Deep Thought

In our field we have the right to do anything we want. It's just a guess. If you guess that everything can be encapsulated in a very small number of laws, you have the right to try.

Richard Feynman

You're damned if you do, and you're damned if you don't!

Bart Simpson


LIST OF PAPERS

This doctoral thesis includes the following appended papers:

I

Polarized inclusive leptoproduction, ℓN → hX, and the hadron helicity density matrix, ρ(h): possible measurements and predictions
M. Anselmino, M. Boglione, J. Hansson, F. Murgia
Physical Review D 54, 828-837 (1996)

II

A potential diquark in the proton
S. Fredriksson, J. Hansson, S. Ekelin
Zeitschrift für Physik C 75, 107-111 (1997)

III

Production of meson pairs, involving tensor and pseudotensor mesons, in photon-photon collisions
L. Houra-Yaou, P. Kessler, J. Parisi, F. Murgia, J. Hansson
Zeitschrift für Physik C 76, 537-547 (1997)

IV

A Quark-Matter Dominated Universe
D. Enström, S. Fredriksson, J. Hansson, A. Nicolaidis, S. Ekelin
astro-ph/9802236
Submitted for publication

V

Preon Trinity
J.-J. Dugne, S. Fredriksson, J. Hansson, E. Predazzi
hep-ph/9802339
Submitted for publication

VI

Neutrino Decay in a three-Preon model
J. Hansson
To be submitted for publication


Contents

1 INTRODUCTION
2 METAPHYSICAL REFLECTIONS
3 HISTORICAL OVERVIEW OF PARTICLE PHYSICS
4 WHAT IS ACTUALLY MEASURED IN EXPERIMENTS?
5 THEORY
  5.1 Introduction
  5.2 QED
  5.3 QCD
  5.4 Electroweak Theory
6 ATTEMPTS TOWARDS A BETTER UNDERSTANDING OF NATURE
  6.1 Hadron Structure
    1 Spin Structure of Hadrons; Paper I
    2 Static Quark Structure of the Proton; Paper II
    3 Generalization of the Brodsky-Lepage model; Paper III
  6.2 Astrophysical Connection; Paper IV
  6.3 Quark and Lepton Substructure
    1 Our Preon Model; Paper V
    2 Electromagnetic Neutrino Decay; Paper VI
7 LOOSE ENDS / FUTURE WORK
  7.1 Generalities
  7.2 Masses
  7.3 Gravitation?
8 CONCLUSIONS
9 ACKNOWLEDGEMENTS
10 REFERENCES


1. INTRODUCTION

The purpose of particle physics is to find the most basic building-blocks of matter, and the way in which they interact. The underlying idea is that all the complexity of all the phenomena occurring in the world really is a consequence of a few, basic laws of physics, which are expressible in mathematical terms. Our immediate surroundings are almost completely the result of two forces: gravity and electromagnetism. Right now, while you are reading this, gravity tries to pull you towards the centre of the earth, while electromagnetism supplies the resisting force, ultimately through the "Pauli principle", a quantum-mechanical facet of the world, which enables you to stay put, exactly balancing the downward pull. These two forces are both believed to be infinite in reach, i.e., a lump of matter or electrical charge here influences phenomena arbitrarily far away in the universe, the effect however falling as the inverse square of the distance. At the end of the 19th century, a new and totally unexpected force was discovered, through the experimental observation of radioactive decay. This force, "the weak nuclear force", in its modern generalization, is of importance only up to distances of around $10^{-18}$ metres. Still, it is regarded as a fundamental law of nature. In the 1920s, it was evident that yet another force had to be added to the other three. At that point one knew that the atomic nucleus contains protons, which carry a positive electric charge, equal in magnitude to the negative electron charge. As positive charges repel each other, a force much stronger than the electromagnetic must act in nuclei, in order to keep them from disintegrating because of the electric repulsion. This new force became known as the "strong nuclear force" and operates within distances up to approximately $10^{-15}$ metres.

In the framework of modern relativistic quantum theory, or "quantum field theory", these forces are mediated by force-carrying particles called "gauge bosons", propagating between the fundamental "matter particles", which are quarks and leptons. The quantum-mechanical generalization of ordinary electromagnetism is called quantum electrodynamics (or QED for short). The force-carrying particle of QED is the photon, the massless particle of light. The two nuclear forces have no classical, or macroscopic, analogues so they are quantum-mechanical from the beginning.

In its modern formulation, the weak nuclear force is mediated by three gauge bosons, called $Z^0$, $W^+$ and $W^-$. These differ from the gauge bosons of the other two forces, as they are both massive and contain electrically charged members, thus raising a suspicion that they might actually not be fundamental. For mathematical reasons of consistency, the electromagnetic and weak forces are partially unified into an "electro-weak" theory.

The strong nuclear force, in its modern guise, is mediated by eight massless gauge bosons, called gluons. This theory is inspired by QED and this justifies the use of a similar name: Quantum Chromodynamics (QCD). "Chromo" (Greek for colour) stems from the fact that the gluons (and quarks) carry "colour", a quantum number which has nothing in common, except for the name, with the usual concept of colour. The "colour" in QCD plays the same role as that of electric charge in QED.

The whole package of these three quantum interactions is called "the standard model". It is often designated by the mathematical expression SU(3) × SU(2) × U(1), which denotes the direct product of the fundamental "internal gauge groups" relevant for the strong, weak, and electromagnetic interactions.

The fourth interaction, gravity, is the odd man out. Although it is the oldest force known to man, it is also the least understood from a microscopic standpoint.

There is no consistent "quantum" theory of gravity. What saves particle physics is the fact that gravity is extremely weak, so its influence can in practice be totally ignored for particle reactions. As an example of its weakness, the whole mass of the earth had to "pull" on the proverbial apple in order to break its tiny stalk (held together by electromagnetic forces), releasing the fruit to fall onto Newton's head.

Physics helps in comprehending the universe by giving a qualitative view, or model, of how the world functions. This general view is first built on specific experimental facts, then strengthened or amended by comparing quantitative predictions of the model with experiments done under idealized conditions. By this "cross-fertilization" of experiment and theoretical model building, an increasingly accurate picture of the world is constructed, hopefully leading us nearer and nearer the "truth".


2. METAPHYSICAL REFLECTIONS

I would like to start with some general questions of principle, essentially lying outside of physics proper, in the sense that they cannot be experimentally checked, at least at present. These reflections should be seen as a possible background structure of physics, that is, "metaphysics".

• Do basic underlying "laws of nature" exist, or is nature totally random at the fundamental level, the "laws" we recognize only being due to some "self-ordering" at some scale?

• If laws exist, are they simple enough for the human mind to grasp? That is, is the human brain sufficiently evolved to uncover these laws and bring order, regularity and comprehension to the sometimes chaotic empirical facts about nature?

• Are the four, presumably basic, forces of nature found so far the only ones that exist? One should keep in mind that a hundred years ago, only the two long-range forces were known. When future experiments with higher precision and/or energy are performed, they might well reveal new, totally unexpected phenomena. Alternatively, are the known forces merely different facets of one single force, described by one natural law?

• If an "ultimate" theory is constructed, is it then possible to test it experimentally? Unless a technological breakthrough is made, we will soon reach the limit, both physically and economically, of larger and more powerful accelerators. In such a case, tests of fundamental theories are relegated to indirect, cosmological data, with their inherent uncertainty and non-repeatability.

• Even though the four basic forces known today are fairly simple, it is almost always impossible to predict the behaviour of even mildly complex systems from these presumably fundamental laws. Is this a true facet of nature, or is this only a consequence of our theories being formulated in a less than ideal way?

• Is it merely an accident that the characteristic energy scales of the theories are such that one scale hardly interferes with another, or is it due to some underlying ordering mechanism? That is, in atomic physics (with energies in the eV to keV range), one can disregard almost all of nuclear physics. In nuclear physics (MeV) the degrees of freedom of particle physics are, most often, neither needed nor wanted. In particle physics (GeV and higher), one can totally disregard gravity and still achieve excellent agreement between theoretical and experimental results.

• The concept of "constituents" might be meaningless on the fundamental level. For instance, when interactions become strong, it is doubtful if the particle concept is relevant. A reductionist reasoning seems to be subconsciously ingrained, at least in the western world. It is so natural to assume that things must "consist" of something smaller. The (unsuccessful) attempt to describe the strong interaction by "Bootstrap" theory in the 1960s was a step away from reductionism, as all hadrons in that model were "equally fundamental". There might be natural principles that differ from the hierarchical structure of natural laws known today.

• If basic laws of nature do exist, and they someday become completely known, one still needs boundary conditions to get answers about the world, at least if the laws are formulated as differential equations, like now. Where do these boundary conditions come from? Are they intimately connected with cosmology, or will they also arise naturally from the right "theory of everything"?

In the most fundamental theory today, that of particle physics, we of course assume that fundamental physical laws do exist, and the purpose of the whole game is to find these laws, and the constituents which follow them. For the moment, this seems to be the most sensible thing to do under the circumstances.


3. HISTORICAL OVERVIEW OF PARTICLE PHYSICS

The concept of elementary particles was already implicit in the ancient Greek idea of "atomism", i.e., that all matter might be built up from indivisible, smallest constituents. In the 17th century, Newton was convinced that light was composed of particles, or "corpuscles", but neither the ideas of Newton nor those of the ancient Greeks were testable at the time, because of technological limitations.

Modern particle physics can be said to have been born with the experimental discovery of the electron (e) in 1897 by Thomson [1]. He identified the cathode rays he was examining as being composed of particles with a definite mass and electric charge. This discovery was also made independently by others at the same time, but Thomson went further and proposed that these particles, "electrons", were fundamental constituents of matter. In 1911 Rutherford [2] discovered that atoms have a very small, electrically charged nucleus, some $10^{-4}$ times the size of the whole atom. This paved the way for the Bohr model of the atom [3], where electrons revolve in given orbits around the nucleus. In just a few short years, the atom had gone from a suspicious-looking idea, cooked up to explain some chemical reactions, into a successful model, capable of explaining, among other things, the hitherto completely bewildering atomic spectra.

Light had long been thought of as a wavelike phenomenon. Indeed, this view had been experimentally verified by the double-slit experiment of Young in 1804. Therefore, Einstein's explanation of the photo-electric effect as a "particle of light" phenomenon [4] was resisted by almost all physicists. Einstein took Planck's explanation of the black-body "paradox" at face value, and whereas Planck thought that the quantization of radiation energy was a consequence of oscillating atoms, Einstein realized that it was the light itself that was quantized into "photons" (γ). This was the first example of the infamous "particle-wave" duality of quantum mechanics. Indeed, just a little later the electrons (particles) were shown to behave like waves under the right circumstances, as evidenced by their diffraction by crystals. This led to the hypothesis of de Broglie, that all particles have an associated wavelength, and later to the Schrödinger equation.

In 1932 the neutron (n) was discovered [5], and the modern atomic structure was complete, with a massive nucleus consisting of protons (p) and neutrons, surrounded by a "cloud" of light electrons.

The first example of anti-matter was found in 1933, by the observation (in cosmic rays) of a positron ($e^+$), the positively charged anti-electron [6]. The existence of anti-matter was anticipated on theoretical grounds through the work on relativistic quantum mechanics by Dirac. According to theory, every particle should have an anti-partner with the same mass and spin (an "intrinsic" angular momentum of the particle, measured in units of ħ, Planck's constant), but with all "quantum numbers" opposite. Why the Universe seems to be built solely from matter in the first place, and not an equal admixture of matter and antimatter, is an unresolved fundamental question.

A totally unexpected particle, the muon (μ), was discovered in 1937, again via the study of cosmic rays [7,8]. The muon was first mistaken for a particle conjectured by the Japanese theorist Yukawa to be responsible for the nuclear force. As time went on, it became increasingly clear that the muon did not participate in the strong nuclear interaction. It had to be a new, fundamental particle, very much related to the electron. In fact, it seems identical to the electron in every respect, except being some 207 times heavier. All fundamental "fermions" (spin-1/2 particles) that do not take part in the strong interaction are called leptons.

A particle close to what Yukawa had hypothesized was actually detected in 1947; the pi-meson (π) [9,10]. This was the first example of a strongly interacting particle ("hadron") apart from the proton and neutron. This was just the first of hundreds of new hadrons that would be discovered in subsequent years, when ever more powerful particle accelerators were constructed. This development led to alternating periods of confusion and reconciliation among particle physicists.

So far, the new particles had been related to the "normal" constituents in a direct way. No new properties had to be introduced in order to explain the new particles. This would change very soon. In 1947 another unstable hadron was discovered, today called the K-meson (K) [11,12]. The theoretical explanation required the introduction of a new, additive, quantum number called "strangeness".

In any particle reaction involving the strong interaction, the total strangeness is conserved. This means that when a particle with a positive strangeness is produced, a particle with the corresponding negative strangeness must also be produced, assuming that the initial particles were "non-strange". In the old days, this became known as "associated production". The weak interaction, however, does not conserve strangeness, so a K-meson (with strangeness one) can disintegrate into, e.g., three pions, particles carrying no strangeness at all. This explained why strange particles are easy to produce (through strong interaction), while being paradoxically long-lived ($10^{-9}$ s). Other particles with strangeness were quickly discovered afterwards.

Starting in the 1950s, a new class of particles was discovered. These have lifetimes of around $10^{-25}$ s. Due to their extremely short lifetime, they were called "resonances", and the reason they are so short-lived is that they can disintegrate via the strong interaction. The resonances, and the pattern eventually observed among them, were instrumental in formulating the modern theory of quarks as the basic constituents of all hadrons. Of particular interest is the "Eightfold Way" model [13], which managed to catalog all of the known hadrons at the time through a simple group-theoretical scheme. Furthermore, it predicted a new particle and its approximate mass, through a "hole" in the scheme: the $\Omega^-$. The experimental observation of the $\Omega^-$, at roughly the predicted mass, was a great triumph for the Eightfold Way [14]. The $\Omega^-$ decays weakly, so it is a "particle", not a resonance. This is due to the fact that it carries three units of strangeness, and is lighter than any other combination of strange particles combined, thus prohibiting, through energy conservation, a strong decay. Gell-Mann then went a step further, and proposed that the striking regularity of the Eightfold Way could be explained by assuming that all hadrons were built up from only three different elementary constituents, the "quarks", arranged in different ways [15]. These were the "up" (u), "down" (d) and "strange" (s) quarks. Essentially the same scheme was suggested independently by Zweig [16], who called the basic constituents "aces".

FIG. 1. The quark structure of some hadrons: the proton, neutron, Λ, charged pions and neutral K-mesons.

Even earlier, the proton and the neutron were noted to have the "wrong" magnetic moments for being elementary particles. Experiments gave the values 2.79 and −1.91 (in units of eħ/2m, the "nuclear magneton") for the proton and the neutron, compared to the theoretically expected values of 1 and 0 if they were elementary. The quark model qualitatively explains these magnetic moments, as the neutron has electrically charged constituents, although its net charge is zero.

The quark model was also arrived at, almost independently, by another line of research. It had long been known that the proton is not pointlike, but has a structure, and a size of approximately $10^{-15}$ m. This was established by scattering energetic electrons off protons [17]. By using ever increasing energies in the accelerators, the researchers were able to probe deeper and deeper into the structure of matter, and in 1969 the first (indirect) experimental signal of a substructure inside the proton was seen [18]. This substructure was later identified with quarks. By observing the way electrons (and later also neutrinos) scattered, it was deduced that the quarks have a spin of 1/2 [19].

In 1974 a hadron, later named J/ψ, was discovered independently by two groups [20,21] (one group calling it J, the other ψ). It could not be fitted into the existing scheme of quarks, and a new, fourth quark, with a new additive quantum number called "charm", was needed to explain it. This charm quark (c) had already been anticipated by some physicists on theoretical grounds.

FIG. 2. The quark structure of the J/ψ and Υ mesons, the first particles observed containing "charm" and "bottom" quarks.

By the discovery of the "upsilon" particle, Υ, in 1977 the quark family had to be enlarged again, to five members. The new quark (b) carries a new quantum number called "bottom" (or "beauty") [22,23]. This, in turn, led to the theoretical expectation of a sixth quark, dubbed the "top" quark (t), which was eventually experimentally verified in 1995 after a long search of almost two decades [24,25]. This concludes the status of quarks known today, and there are (within the standard model) no expectations of additional quarks. This should not be taken as a proof for the non-existence of further quarks, as not many physicists believed in the existence of the charm and/or bottom quarks, before they were experimentally discovered. The discovery of quarks is complicated further by the empirical fact that quarks do not seem to occur isolated, only confined within their parent hadrons. It is thus hard to measure their physical parameters, compared to, for instance, electrons, which can be examined more directly. Why the quarks should be permanently confined inside the hadrons is not known, although several plausibility arguments exist.

In the meantime, while the "hadronic particle zoo" was discovered, and its theoretical interpretation was made, startling advances in non-strongly interacting physics were also made. In 1956 the discovery of (anti)neutrinos was made [26]. The neutrino (ν) was introduced in 1930 by Pauli, to remedy a paradoxical property of nuclear beta-decay, which seemed to violate energy conservation. Pauli's idea was that part of the energy invisibly disappeared via a massless spin-1/2 particle, which almost did not interact at all with other particles. The experimental verification of the existence of a second, independent neutrino was made in 1962 [27]. The first observed neutrino is intimately related to the electron, while the other is similarly related to the muon. Hence, they are called electron-neutrino ($\nu_e$) and muon-neutrino ($\nu_\mu$).

In 1974, a third, even heavier variant of the electron was discovered, the tau (τ) [28]. It is theoretically expected that the tau is also accompanied by a neutrino ($\nu_\tau$), although it has not been observed to date.

The experimental situation is well described by the standard model of particle physics. The spin-1/2 "matter particles" of the standard model are six leptons (e, μ, τ and their associated neutrinos) and six quarks (u, d, s, c, b, t). Each quark can have three independent "colours", giving in principle eighteen distinct quarks. These basic constituents of matter interact through three different types of force-carrying "gauge bosons": the photon (electromagnetism), the weak bosons, $W^\pm$ and $Z^0$ (the weak force), and the gluons (the "strong" force). All gauge bosons have an intrinsic spin of one (in units of Planck's constant, ħ).

The standard model is the condensation of the knowledge gained in particle physics during the 20th century. It is the foundational basis of modern particle physics, and has withstood the test of increasingly detailed experimental scrutiny.

Despite this, particle physicists agree that it cannot be the final answer. There are just too many loose ends that the standard model (by construction) cannot handle. Some of these questions will be discussed in chapter 7.

The particles of the standard model are summarized below. The parentheses contain the electric charge (in units of e, the magnitude of the electron charge) and the mass (given in MeV/c² = 10⁶ eV/c²) of the particles.

T h e "matter particles"

(spin-1/2)

of the standard model Quarks Leptons u (+2/3, ~ 5) e ( - 1 , 0.511) d ( - 1 / 3 , ~ 10) Ve (0, 0 ?) s ( - 1 / 3 , ~ 200) H ( - 1 , 105.6) c (+2/3, ~ 1300) v, (0, 0 ?) 6 ( - 1 / 3 , ~ 4300) r ( - 1 , 1777) t (+2/3, ~ 180000) v, (0, 0 ?)

T h e "force-carrying particles" (spin-1) of the standard model

Gauge Bosons Photon, 7 (0, 0)

Gluon, g (0, 0) W± ( ± 1 , 8033Ö)

Z° (0, 91187)

As the quarks seem to be permanently confined in hadrons, their masses are really not that well defined. They are therefore not particles in the usual sense, as no unambiguous particle interpretation exists during permanent interactions.


4. WHAT IS ACTUALLY MEASURED IN EXPERIMENTS?

We should pause for a moment, and consider how particle reactions are actually observed. In most particle detectors, it is ultimately the resulting ionization, caused by moving electrically charged particles or the excitation of the constituent atoms, that is being measured, e.g., tracks in a bubble chamber or photographic emulsion, sparks in a spark chamber, or pulses in an electronic device. The existence of neutral particles is deduced indirectly by imposing conservation of energy, momentum and quantum numbers in a reaction, or by studying secondary reactions caused by them. No real quantum properties of the particles are measured directly, the observed "tracks" being classical entities. By accelerating, collimating, and bending the particle beam by means of (classical) electromagnetic fields, one is in a way continuously "measuring" the particles, hence making it meaningful to consider them as classical particles. The mean energy loss for a charged particle through ionization (collision with electrons) of the surrounding media is, to a good approximation, given by the "Bethe-Bloch formula"

$$-\frac{dE}{dx} = \frac{4\pi N_A z^2 \alpha^2 \hbar^2 c^2}{m v^2}\,\frac{Z}{A}\left[\ln\frac{2mv^2}{I(1 - v^2/c^2)} - \frac{v^2}{c^2}\right], \qquad (4.1)$$

where x = ρl is the traversed "path length", measured in kg/m² (ρ = density of medium, l = distance traversed by particle), $N_A$ is Avogadro's number, α ≈ 1/137 is the fine-structure constant, z and v are the charge (in units of e) and velocity of the particle, m is the electron mass, Z and A are the atomic and mass numbers of atoms in the medium, and, finally, $I \approx 16\,Z^{0.9}$ eV is an effective ionization potential, averaged over all electrons. The magnitude of dE/dx gives the relative "thickness", or density, of the tracks observed in, for instance, a bubble chamber. We see that the dependence of dE/dx on the medium is weak, as Z/A ≈ 0.5 for most materials, except for the heaviest atoms and hydrogen.

It is also independent of the mass of the incident particle, M. dE/dx decreases as 1/v² at non-relativistic velocities, and after going through a minimum at E ≈ 3Mc², it increases logarithmically with the relativistic factor $\gamma = (1 - v^2/c^2)^{-1/2}$. Polarization effects in the medium limit further rise when γ ~ 10³ for gases, or γ ~ 10 for solids. It is possible to measure the mean ionization loss with an accuracy of a few percent. Through these measurements, γ, and hence v, can be extracted. If the momentum is known independently (see Eq. (4.3)), the rest-mass of the particle can be deduced.
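As a rough numerical illustration of how Eq. (4.1) is used, here is a minimal sketch (not from the thesis; the argon medium and the 1 GeV/c muon are illustrative assumptions, and the polarization/density correction mentioned above is ignored). The prefactor $4\pi N_A z^2\alpha^2\hbar^2 c^2/(mv^2)$ has been folded into the standard constant $K = 4\pi N_A r_e^2 m c^2 \approx 0.0307$ MeV·m²/kg.

```python
# Sketch: mean ionization loss from Eq. (4.1), in MeV per kg/m^2 of matter.
# Illustrative assumptions: argon medium (Z=18, A=39.95, I ~ 188 eV),
# a 1 GeV/c muon, and no density-effect correction.
import math

K = 0.0307      # 4*pi*N_A*r_e^2*m_e*c^2 in MeV m^2/kg (= 0.307 MeV cm^2/g)
ME_C2 = 0.511   # electron rest energy m*c^2 in MeV

def bethe_bloch(beta, z=1, Z=18, A=39.95, I_eV=188.0):
    """-dE/dx of Eq. (4.1) for a particle of charge z*e and velocity beta*c."""
    gamma2 = 1.0 / (1.0 - beta**2)
    I = I_eV * 1e-6                       # effective ionization potential, MeV
    log_term = math.log(2.0 * ME_C2 * beta**2 * gamma2 / I)
    return K * z**2 * (Z / A) / beta**2 * (log_term - beta**2)

p, m_mu = 1000.0, 105.7                   # MeV/c and MeV/c^2
beta = p / math.sqrt(p**2 + m_mu**2)      # ~0.994 for a 1 GeV/c muon
print(f"-dE/dx = {bethe_bloch(beta):.3f} MeV per kg/m^2")
```

The weak Z/A dependence noted above is visible in the code: swapping argon for, say, carbon changes the result only at the ten-percent level.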


The energy loss of heavy charged particles is dominated by ionization. For electrons, however, another mechanism of losing energy to the material dominates. This is through the process called "bremsstrahlung", whereby electrons radiate photons when accelerated by the intense electric fields near heavy atomic nuclei. The corresponding acceleration of heavier charged particles is suppressed, due to their larger masses, so their energy loss is dominated by the ionization process.

For electrons, the average bremsstrahlung loss is approximated by the "Bethe-Heitler formula"

$$-\left(\frac{dE}{dx}\right)_{\rm brems} \approx \frac{4\alpha^3 N_A \hbar^2 c^2}{m^2 c^4}\,\frac{Z^2}{A}\,\ln\!\left(183\,Z^{-1/3}\right)E. \qquad (4.2)$$

By integrating, one sees that the energy of the incoming electron is proportional to $e^{-x/X_0}$, where $X_0$ is the "radiation length", a parameter that depends on the material.

The momentum, p = mv, of a charged particle (with charge ze) can be measured by applying a known magnetic field over the detector, and measuring the curvature radius due to the Lorentz force,

$$p = mv = zeBr \simeq 0.3\,zBr, \qquad (4.3)$$

where the momentum is given in units of GeV/c. The magnitude of the magnetic field, B, is given in Tesla, and the curvature radius, r, is given in metres. As the energy loss depends on the velocity, it is then possible to directly measure the masses of electrically charged particles.

Particle detectors must accurately record the position, arrival time and identity of the charged particles. Detailed determination of the position is crucial for estimating the momentum of a particle (from its recorded positional deflection in a known magnetic field). Accurate timing is important for knowing if different particles came from the same interaction. The identity of a particle is inferred through its measured mass, electric charge, decay modes (if it is unstable) and from its interactions, or non-interactions, through the strong, weak, and electromagnetic forces. A combination of detectors is required to meet these demands, as no all-purpose detector exists. For an overview of particle detectors, see [29].

One clear-cut conclusion of the above discussion is that theory and experiment are inseparably connected, and cannot, even in principle, be disentangled. No experimental interpretation is possible without theoretical input, or assumptions, and a theoretical world-view is useless in the exact sciences unless it is testable (falsifiable!) by experiment. To a great extent, it is the theory itself that defines what can be observed, as it gives the interpretational framework of what is seen. The theory defines its own observables.


5. THEORY

5.1. Introduction

Theoretical and experimental results are two complementary aspects of physics. In this chapter, I will only describe theories that have been successfully tested experimentally. Tentative theories, like GUTs and string theory, which have neither been verified nor disproved by experiment, will not be touched upon in this chapter.

"Natural units", in which a — c — 1, will be used throughout f r o m now on, apart from where explicit inclusion of h and c leads to easier comprehension of the physics involved.

The so-called standard model of particle physics consists of three different interactions between the fundamental constituents, quarks and leptons. Two out of the three interactions, the electromagnetic and weak, are partially unified into the electroweak theory. The theoretical framework of the standard model is quantum field theory (QFT), an outgrowth and generalization of relativistic (one-particle) quantum mechanics invented in the 1920s. QFT is the theory of local quantum fields, defined at every point in space-time (i.e., all interactions between fields are restricted to propagate with a speed less than, or equal to, c, the speed of light in vacuum). It was obvious at an early stage that the "marriage" between special relativity and quantum mechanics implies the phenomenon of creation and annihilation of particles. In relativistic quantum mechanics the number of particles is not conserved (constant) like it is in the non-relativistic version, thus leading to the requirement of quantum field theory, in which particles can be both created and annihilated. Another viewpoint, due to Feynman [30], can be interpreted as a theory where the number of particles is actually conserved, a particle with negative energy propagating backwards in time being identical to the corresponding antiparticle with positive energy propagating forwards in time.

This is a very appealing idea physically, as particles, when actually detected, always appear as localized "pointlike" entities, never as "waves" or fields.


FIG. 3. a) In the so-called canonical interpretation, a free electron comes in, travelling along the x-axis. At time t₁ an electron-positron pair is independently created at point 1 (with the arrow of the positron pointing in the "wrong" direction, by convention). At time t₂, the positron created at point 1 annihilates the original electron at point 2, giving a photon (not shown). Only the electron from the electron-positron pair at point 1 is left, and continues to travel as a free particle. b) In the Feynman picture, a free electron comes in along the x-axis. At point 2, it interacts with the photon and scatters backwards in time to point 1, where it again interacts and scatters forwards in time, giving the outgoing electron. We can thus see the propagation of the electron as a one-particle phenomenon.

The theory is usually formulated in a Lagrangian framework, which is manifestly relativistically covariant. The basis of the theory is the action, S, defined as

$$S = \int \mathcal{L}\, d^4x, \qquad (5.1)$$

where $\mathcal{L}$ is the "classical Lagrangian density" of the dynamical system. In all theories of interest to us, the Lagrangian contains only the (independent) fields $\phi_i$ and their first derivatives, $\partial_\mu = (\partial/\partial t, -\partial/\partial x, -\partial/\partial y, -\partial/\partial z)$. As $d^4x$ is Lorentz invariant, we see that $\mathcal{L}$ also has to be Lorentz invariant in order for the action to be so. The variational principle ("least action")

$$\delta S = 0, \qquad (5.2)$$

gives the "classical" equations of motion:

$$\partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi_i)} - \frac{\partial \mathcal{L}}{\partial \phi_i} = 0. \qquad (5.3)$$


The theory is then "quantized", as when the action of a process is comparable to ħ, Planck's constant, quantum mechanics has to be taken into account. When the velocity v ~ c, relativity must also be included. In typical particle physics reactions S ~ ħ and v ~ c.

For each continuous symmetry, under which the Lagrangian (or action) is unaltered, a conservation law of some physical quantity is implied [31]. (Symmetries which are present at the classical level, but are broken by quantization, are called "anomalies". They are caused by the renormalization which is necessary to avoid divergencies.) For example, the invariance of the Lagrangian under translations in space-time implies conservation of momentum and energy. Invariance under rotations gives conservation of angular momentum. These are examples of the fundamental Poincaré symmetry of space-time in special relativity. There are also abstract "internal" symmetries, which are not connected to the external space-time, but act on the quantum-mechanical states. Examples are the symmetries U(1) of electromagnetism, SU(2) of weak interactions and SU(3) of colour, which imply conservation of, respectively, electrical charge, weak isospin "charge" and colour "charge". The lepton and baryon numbers, on the other hand, are not "protected" by continuous symmetries of the Lagrangian, and it is thus possible that they are not conserved, although no violation has been observed in experiments.

It is also an empirical fact that the weaker the interaction, the more symmetries are broken. The strong interaction violates no known symmetries; electromagnetism violates isospin (otherwise the proton and neutron would be identical). The weak interaction violates isospin, parity P ("space reflection" x → −x), charge conjugation C (turning particle into antiparticle) and also CP combined. The latter implies violation of time reversal T, as CPT is assumed to be an exact symmetry in all current theories. The weak interaction also does not, in general, conserve the identity (flavour) of a quark. Gravity presumably violates even more symmetries as it is the weakest interaction by far. One possible example is CPT-violation, another being energy non-conservation, as no local definition of the gravitational field energy exists.

When constructed, QFT had severe divergency problems. A first rough approximation of the physical phenomena agreed fairly well with the experimental results. However, when more detailed, higher-order corrections were calculated, which were expected to be small, they were found to give infinite contributions. In quantum electrodynamics (QED), this problem was "swept under the rug" by the renormalization prescription invented in the late 1940s by Feynman, Schwinger, Tomonaga and Dyson [32]. Fundamental particles are point-like objects, and the interaction occurs at a single point in space-time, which in turn leads to divergencies. In successful quantum field theories these divergencies can be removed by the renormalization. One by-product is that the coupling strengths become scale-dependent, i.e., energy-dependent. The couplings are said to "run". Interactions that do not contain direct self-interaction between the gauge bosons, generically called "abelian theories", grow stronger at higher momentum transfers. In pure QED, which is an example of an abelian theory, the effective charge tends to infinity with the energy of the process with which it is measured. "Non-abelian theories", which include direct interaction between their gauge bosons, become weaker with the energy-transfer. Strictly speaking, this is the case only when the gauge bosons dominate over the fermions. Fermions "screen" the charge, requiring a larger "bare" charge, while non-abelian bosons "anti-screen", necessitating a smaller bare charge. In abelian theories, only fermions contribute, hence the universal strengthening.

It can be shown that by redefining the mass and charge, all divergencies in QED disappear order by order in the perturbative expansion. The naive "bare" (unmeasurable) mass and coupling in the Lagrangian are not the "real" physical values, as these parameters are interpreted as the classical mass and charge. They are altered by quantum effects, due to an ever-present surrounding cloud of virtual particles.

The physical (or renormalized) mass is the combination of the "bare" mass and the shift of the mass due to quantum corrections, i.e.,

$$m_{\mathrm{phys}} = m + \delta m. \qquad (5.4)$$

The correction $\delta m$ originates in the self-interaction of the charged particle with its own field (self-energy), and would of course be zero if the charge were zero. It is supposed to be "small" compared to m, although it is infinite!

In the same way, the charge is altered due to "vacuum polarization", a consequence of ever-present charged, virtual particle-antiparticle pairs, i.e.,

$$e_{\mathrm{phys}} = e + \delta e. \qquad (5.5)$$

The quantum corrections in QED can be written as a power series in the physical coupling:

$$\delta m = m_{\mathrm{phys}}(\alpha A_1 + \alpha^2 A_2 + \alpha^3 A_3 + \ldots), \qquad (5.6)$$

$$\delta e = e_{\mathrm{phys}}(\alpha B_1 + \alpha^2 B_2 + \ldots), \qquad (5.7)$$

where the $A_i$ and $B_i$ are coefficients independent of $m_{\mathrm{phys}}$ and $e_{\mathrm{phys}}$, logarithmically divergent in the momentum cutoff Λ from loop integrations. The momentum of virtual particles being emitted and reabsorbed (making a "loop") must be integrated over, but the integral typically diverges when the upper integration limit (∞) is approached. The cutoff (Λ) is a way to make unambiguous manipulations with otherwise divergent quantities. By using $m = m_{\mathrm{phys}} - \delta m$ and $e = e_{\mathrm{phys}} - \delta e$ in the Lagrangian, the "correction" terms cancel identical terms coming from the self-energy in the interaction part of the Lagrangian. Then the limit Λ → ∞ can be taken. If this was not the case, physically measurable quantities would depend on Λ, and thus be sensitive to how large Λ is chosen. Modern approaches to renormalization use "dimensional regularization" instead of a momentum cutoff. This is a mathematical trick, where the dimension of integrals is altered to ensure convergence, restored to four only at the end of the calculation, when all potentially infinite terms have been removed. For instance, the proof that the electroweak theory is renormalizable relies on dimensional regularization. The formal renormalization procedure is highly technical and non-trivial, and it took several decades before all remaining problems in Dyson's original paper [33] had been solved (mainly by Salam and Weinberg).

Renormalization, in effect, makes it possible to disregard the high-energy limit of the theory, where it is expected to break down anyway due to intervening new physics, making it possible to get extremely close agreement between theory and experiment, at least in QED. The other two interactions are based on more complicated non-abelian internal symmetry groups, which makes them more elaborate theoretically. The first construction of a non-abelian field theory with local gauge invariance was made by Yang and Mills in 1954. Hence such theories are often labelled "Yang-Mills" [34].

The quantization of Yang-Mills theory was carried out to first order in virtual loops by Feynman [35], who also was the first to introduce compensating "ghost" particles in order to preserve unitarity, the requirement that the sum of all probabilities should be unity, and later to all orders by DeWitt [36] and by Faddeev and Popov [37].

It was proven by 't Hooft in the early 1970s [38] that (broken) non-abelian quantum field theories are renormalizable.

The renormalization, as well as the whole usage of Feynman diagrams, is based on a perturbation expansion of the interaction terms in the Lagrangian. This makes one suspect that the whole theory of QFT is defined solely by the perturbative expansion [39], in that case making non-perturbative quantum field theory a "non-subject" at present. A quantum field theory without interactions is mathematically well-defined, and often exactly solvable, but of course of no use to describe nature. The philosophy behind introducing interactions in modern quantum field theories like the standard model is by guessing the appropriate internal symmetry group, and then demanding local gauge invariance. The "compensating terms" needed to assure that this local invariance is obeyed are supposed to be just the right interaction terms, and the fields necessary to restore invariance are called "gauge fields". Hence the name "gauge field theories".

The Lagrangian density for a free Dirac field can be written

$$\mathcal{L} = \bar{\psi}(i\gamma^\mu \partial_\mu - m)\psi, \qquad (5.8)$$

where ψ denotes the field of an arbitrary fermion, notably a quark or a lepton. In addition, $\bar{\psi} = \psi^\dagger \gamma^0$, where $\psi^\dagger$ is the hermitian conjugate (complex conjugate combined with transpose in a matrix representation), and m is the mass of the particle described by the field ψ. The γs are the usual 4 × 4 Dirac matrices, defined by the fundamental anti-commutation relation

$$\gamma^\mu \gamma^\nu + \gamma^\nu \gamma^\mu = 2g^{\mu\nu}, \qquad (5.9)$$

where

$$g^{00} = -g^{11} = -g^{22} = -g^{33} = 1, \qquad (5.10)$$

is the Minkowski space-time metric of special relativity. All gs with "mixed" indices are identically zero.

Einstein's summation convention is used throughout, implying summation over repeated indices appearing in both up and down positions (μ, ν ∈ {0, 1, 2, 3}).

It is easy to see that the Lagrangian of the free Dirac field is invariant under the global unitary transformation, or U(1)-rotation,

$$\psi \to \psi' = e^{-i\alpha}\psi, \qquad (5.11)$$

where α is a constant. The gauge criterion amounts to replacing the global transformation by a local one, i.e.,

$$\alpha \to \alpha(x), \qquad (5.12)$$


where α is now a function of x, which denotes any space-time point. In order to maintain the invariance of the Lagrangian under this local transformation, a compensating term has to be introduced. It is possible to restore invariance by replacing the ordinary derivative by the "covariant" derivative, defined as

$$D_\mu = \partial_\mu + ieA_\mu(x), \qquad (5.13)$$

and requiring that the new field $A_\mu$ transforms like

$$A_\mu \to A'_\mu = A_\mu + \frac{1}{e}\,\partial_\mu \alpha, \qquad (5.14)$$

under the gauge transformation. It is known that vector potentials are only defined physically up to an arbitrary gradient term, as the field strength tensor $F_{\mu\nu}$, i.e., the physically measurable quantities, is unchanged by the additional term because derivatives commute. If the field $A_\mu$ is identified with the electromagnetic four-potential, we see that this has just the right form of vector potential plus a gradient, so it is physically allowed to make this transformation. One can thus use local gauge invariance as a dynamical principle, generating the interaction terms needed, starting from the free Lagrangian.
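It may be worth spelling out the standard verification (a textbook check, not in the original text) that the covariant derivative earns its name. Under the combined transformations (5.11), (5.12) and (5.14),

$$D'_\mu \psi' = \left(\partial_\mu + ieA_\mu + i\,\partial_\mu\alpha\right)e^{-i\alpha(x)}\psi = e^{-i\alpha(x)}\left(\partial_\mu - i\,\partial_\mu\alpha + ieA_\mu + i\,\partial_\mu\alpha\right)\psi = e^{-i\alpha(x)}\,D_\mu\psi,$$

i.e., $D_\mu\psi$ transforms with the same local phase as ψ itself. The phases then cancel between $\bar{\psi}$ and ψ, so $\bar{\psi}(i\gamma^\mu D_\mu - m)\psi$ is locally gauge invariant.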

For non-abelian gauge theories, the procedure is the same, but the gauge transformation is written as

$$\psi \to \psi' = e^{-iT^a \alpha^a(x)}\psi, \qquad (5.15)$$

where each $T^a$ is now a matrix, and the index a runs over the dimensionality of the group. Also, the covariant derivative is modified to

$$D_\mu = \partial_\mu + igT^a A^a_\mu(x), \qquad (5.16)$$

where g gives the strength of the coupling, and is thus analogous to e in electrodynamics. The coupling strengths are not given by gauge invariance, but are introduced "by hand" to comply with observation. The requirement of gauge invariance implies that the gauge fields transform as

$$A^a_\mu \to A'^a_\mu = A^a_\mu + \frac{1}{g}\,\partial_\mu \alpha^a(x) + f^a_{bc}\,\alpha^b(x) A^c_\mu, \qquad (5.17)$$

under the infinitesimal transformation α(x), where $f^a_{bc}$ are the so-called structure constants of the group. This implies that a non-abelian theory will have interactions even in the absence of matter particles, as the "gauge fields", $A^a_\mu(x)$, couple (non-linearly) directly to each other, as also the pure gauge field term in the Lagrangian must be modified to be invariant under gauge transformations,

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu - g f^a_{bc} A^b_\mu A^c_\nu. \qquad (5.18)$$

The abelian case is seen to be a special case of the non-abelian case, as the structure constants for an abelian group are identically zero.

One remaining fundamental problem is that although

Special Relativity + Quantum Mechanics → Quantum Field Theory, (5.19)

it is not known whether general relativity and quantum mechanics can be combined:

General Relativity + Quantum Mechanics → ? (5.20)

5.2. QED

Quantum Electrodynamics was the first example of a quantum field theory, invented by Dirac shortly after the formulation of quantum mechanics [40], and it is still by far the most successful. QED was improved (making unambiguous computations of higher order processes possible) by Feynman, Schwinger, Dyson and Tomonaga in the late 1940s, by the successful application of the renormalization procedure to arbitrary order in the perturbative series. Agreement between theory and measurement is striking all the way down to one part in $10^{10}$ [41]. The $A_\mu$ field interacts with all electrically charged particles, the quarks and the charged leptons (electron, muon, tau). The quarks and leptons are all "spinors", i.e., carry intrinsic spin 1/2. The interaction is generated by demanding local gauge invariance of the theory under the group U(1), which essentially is the group of rotations and reflections in the complex plane. As these commute, the group is abelian, and the resulting theory is simple compared to QCD and the weak interaction. These interactions are mediated by the photon, a massless "vector" boson, carrying spin 1. The (exact?) masslessness of the photon is an empirical observation, as electromagnetic effects are long-ranged. This masslessness is encoded theoretically in the fact that the U(1) symmetry is exact (unbroken). The fundamental Lagrangian (density) of QED is written

$$\mathcal{L}_{\rm QED} = \bar{\psi}(i\gamma^\mu D_\mu - m)\psi - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}, \qquad (5.21)$$


where the "covariant derivative" is given by

DpSdp + ieApix), (5.22)

the second term of which is the compensating term introduced by the requirement of local gauge invariance. The covariant derivative contains the only interaction term between rb and A, and if it would be replaced by only the ordinary partial derivative, the two fields would decouple, as they no longer would interact. I t would then be possible to solve the theory exactly, but this would of course not have any physical meaning, as a non-interacting theory is completely sterile, and hence of no use to describe natural phenomena. The interaction term breathes life into the theory, but at the same time makes it impossible to obtain an exact solution, making a perturbative expansion in the interaction term indispensible.

The last term in the Lagrangian (5.21) represents the free electromagnetic field, where

$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad (5.23)$$

and it is gauge invariant as it stands, so it introduces no additional compensating terms. A photon-mass term would have the form $m^2 A_\mu A^\mu$, which is not gauge invariant. Higher order gauge invariant couplings, like $\bar{\psi}\sigma_{\mu\nu}\psi F^{\mu\nu}$, where $\sigma_{\mu\nu} = i(\gamma_\mu\gamma_\nu - \gamma_\nu\gamma_\mu)/2$, and higher tensorial couplings are ruled out if we demand renormalizability.

The Feynman rules for the perturbative expansion of the interaction can then be deduced from the Lagrangian. At least this was the "old" viewpoint, where the Lagrangian was considered to be more fundamental than the Feynman rules, and the rules had to be "derived". Today, some experts prefer the Feynman rules as the real definition of the theory, and regard the Lagrangian as an unnecessary background structure. The Feynman rules are actually "better" than the derivation, in the sense that the latter is somewhat ambiguous. In any case, the Lagrangian is probably indispensable for non-perturbative treatments, and the identification of symmetries.


FIG. 4. As an example of the utility of Feynman diagrams, the probability amplitude of the scattering of two electrons, treated in a perturbative expansion in the interaction, is depicted. Each diagram represents a well-defined mathematical expression. Although arbitrarily high orders are in principle included, in QED it usually suffices to include a few orders in the coupling α, due to its smallness. The "zeroth" order (α⁰) is of course superfluous, as no transition is implied by it.

The Feynman rules give, in principle, an algorithm for calculating the Lorentz invariant probability amplitudes of a theory to arbitrary order in the coupling constant. In reality, however, it is extremely cumbersome to go to high orders, as the number of diagrams grows enormously with each successive order. In QED, orders up to α⁴ have been fully computed (implying four-loop diagrams, the total number of diagrams computed being greater than 1000), where α = e²/4π is the fine-structure constant [41]. Each vertex in QED gives a factor proportional to √α.

The amplitude is given as a sum (with a factor (−1) between graphs that differ only by an interchange of two external fermion lines) of distinct diagrams, each a product of factors associated with:

i) External lines, representing incoming and outgoing particles. The fermion external lines are given by correctly normalized spinor solutions of the Dirac equation, u(p) and v(p), where incoming/outgoing particles are given by $u(p)/\bar{u}(p)$, while the opposite is true for antiparticles: incoming $\bar{v}(p)$, outgoing $v(p)$.

Incoming/outgoing photons have factors $\epsilon_\mu(k)/\epsilon^*_\mu(k)$, the polarization vector of the photon.

ii) Internal lines ("propagators") representing the virtual particles.

Each internal fermion line (with 4-momentum $p_\mu$) has a propagator

$$\frac{i}{p_\mu \gamma^\mu - m + i\epsilon}. \qquad (5.24)$$

An internal anti-fermion is regarded as a fermion with $p \to -p$.

Each internal photon line (with 4-momentum $k_\mu$) has a propagator (in Feynman gauge)

$$\frac{-ig_{\mu\nu}}{k^2 + i\epsilon}. \qquad (5.25)$$

The 4-momentum (q) in internal loops is not fixed by the incoming particles, and is integrated over: $\int (2\pi)^{-4}\, d^4q$. These loops lead to the divergencies that must be removed by renormalization. For each closed fermion loop, an extra factor (−1) is inserted. For each closed loop containing n identical boson lines, there is a factor 1/n!.

iii) Vertices, representing the interactions, as derived from the relevant Lagrangian.

Four-momentum is conserved at the vertices, leading to virtual particles which do not have their physical masses. They are said to be "off mass-shell". In pure QED each vertex gives a factor $-ie\gamma_\mu$.

So, given the Feynman diagram representation of the perturbative expansion of the interaction, it is possible to assemble each diagram as a "lego"-set, which is then computed. A statistical weight, $S = \prod_k 1/n_k!$, is included if there are $n_k$ identical particles in the final state, where k denotes distinct species. Usually, tricks are utilized to associate similar diagrams with each other, to minimize the actual computation.

The transition rate, i.e., the probability per unit time of an interaction taking place, is expressed as

$$W = 2\pi |M_{if}|^2 \rho_f, \qquad (5.26)$$

where $M_{if}$ is the probability amplitude (= the diagrams) of the process to the desired accuracy in α, and $\rho_f$ is the Lorentz-invariant "density of final states", i.e., the final phase space available to the reaction.

The transition rate, W, is equally applicable to the decay of unstable particles as to collisions (scattering) of particles. The "cross-section" for a collision is defined as

$$\sigma = \frac{W}{\varphi}, \qquad (5.27)$$

where φ is the flux of incoming particles through a unit area per unit time.

Hence, the cross-section has the unit of area, and can roughly be considered as the effective "area" of the target particle, outside of which an oncoming particle would be undisturbed.

5.3. QCD

Quantum Chromodynamics is the result of a long struggle to understand the strong interaction. In the 1960s it was "unfashionable" to work in quantum field theory, and many theorists thought that it was outdated, and could not be applied to the strong interaction. Instead, theoretical work was dominated by S-matrix theory, dual-resonance models, and strings (the predecessor to today's superstrings). A few physicists, however, continued to apply quantum field theory to the problem.

Gell-Mann and Fritzsch, later together with Leutwyler, invented what was to become QCD [42] by evolving and refining a model first put forward by Nambu [43] (in essence a vector gluon model). Weinberg [44] also made early contributions to the theory. QCD is, more or less, modelled on QED, which one knew was a successful quantum field theory, in fact the only one known to apply to the real world at the time. QCD is, however, much more complicated and rich from a theoretical point of view, because of a more complicated group structure, called SU(3), the group of unitary 3 × 3 matrices. The group is "non-abelian", as the matrices do not commute; AB ≠ BA. Just like for QED, the group symmetry is unbroken, yielding massless force carriers, or gauge bosons, called gluons. The fact that the interaction of QCD is so short-ranged (about 1 fm = $10^{-15}$ m) despite massless gluons, is conjectured to be due to the self-interaction between gluons, arising because gluons themselves carry the "SU(3)-charge", colour. Theories based on non-abelian gauge groups generically have this property. This peculiar behaviour of QCD is also believed to be responsible for the "confinement" of quarks into hadrons. Confinement has so far never been proven to be a consequence of QCD, although there are many plausibility arguments and computer simulations which point in that direction [45]. At very high energies, or more precisely, momentum-transfers, the strength of QCD diminishes, making quarks and gluons inside hadrons behave as if they were almost free. This phenomenon is called "asymptotic freedom" [46], and almost all successful applications of QCD make use of this fortunate effect. The QCD Lagrangian can formally be written in exactly the same form as the QED Lagrangian,

$$\mathcal{L}_{\rm QCD} = \bar{\psi}_a(i\gamma^\mu D_\mu - m)_{ab}\,\psi_b - \tfrac{1}{4}G^A_{\mu\nu}G^{\mu\nu}_A, \qquad (5.28)$$

but it is actually much more complicated. All repeated indices are summed over. The Greek indices (μ, ν) mark the summation over space-time, just like before. The Latin indices A and (a, b) represent the eight different colours and six different quark types ("flavours"). The ψ now denotes the quark fields and m their appropriate masses. The interaction is "blind" to the quark flavour, i.e., whether a quark is u, d, s, c, b or t. Only the colour matters. The covariant derivative is now given by

$$D_\mu = \partial_\mu + ig_s F^A G^A_\mu, \qquad (5.29)$$

where $g_s$ is the QCD coupling parameter, analogous to e in QED, $F^A$ are the eight generators of SU(3), and $G^A_\mu$ are the eight gluon fields. The last term in the Lagrangian is the pure gluon contribution, but in contrast to photons in QED this does not correspond to the free propagation of gluons. Instead, for the pure gauge term in the Lagrangian (5.28) to be invariant under the gauge transformation, the "field strength" must be

$$G^A_{\mu\nu} = \partial_\mu G^A_\nu - \partial_\nu G^A_\mu - g_s f^A_{BC} G^B_\mu G^C_\nu, \qquad (5.30)$$

where the $f^A_{BC}$ are the mathematical structure constants of the group SU(3).

We directly see that the gluons interact among themselves through the last term in (5.30). This is in stark contrast to the case of QED, where no (direct) interaction between photons is possible.


FIG. 5. The main difference between interactions generated by abelian and non-abelian gauge symmetries is that the latter include direct self-interaction between the gauge fields, whereas the former do not. This introduces the new types of fundamental interaction vertices shown.

The renormalization induces a coupling, variable with the exchanged 4-momentum squared in the process,

$$\alpha_s(Q^2) \simeq \frac{12\pi}{(33 - 2f)\,\ln(Q^2/\Lambda^2)}, \qquad (5.31)$$

where f is the number of distinct quark flavours (experimentally, f = 6). Λ is the only dimensionful parameter of QCD, and lies somewhere between 100 and 200 MeV. The formula (5.31) is derived via perturbation theory in "leading-log approximation", where contributions from dominant logarithms in successive orders have been summed. The formula is of course only valid for Q² > Λ², as the perturbation theory used to derive it breaks down when Q² ~ Λ². Roughly, one can think of Λ as being the energy scale where QCD becomes truly strong.

We see that when f < 16 the theory will be "asymptotically free" [46]. When the transferred four-momentum squared becomes large enough, the quarks and gluons will behave as if they were essentially free of interactions, justifying a perturbative calculation.
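A small numerical sketch of Eq. (5.31) may make the running explicit (not from the thesis; Λ = 150 MeV is an illustrative value inside the quoted 100-200 MeV range, and f = 5 active flavours is assumed below the top-quark threshold):

```python
# Sketch: leading-log running coupling of Eq. (5.31).
# Illustrative assumptions: Lambda = 150 MeV, f = 5 active quark flavours.
import math

def alpha_s(Q_GeV, Lambda_GeV=0.150, f=5):
    """alpha_s(Q^2) = 12*pi / ((33 - 2f) * ln(Q^2 / Lambda^2)), Eq. (5.31)."""
    return 12.0 * math.pi / ((33.0 - 2.0 * f) * math.log(Q_GeV**2 / Lambda_GeV**2))

for Q in (2.0, 10.0, 91.2):   # GeV; 91.2 GeV is the Z0 mass scale
    print(f"alpha_s(Q = {Q:5.1f} GeV) = {alpha_s(Q):.3f}")
```

The coupling falls from ~0.3 at a few GeV to ~0.13 at the Z0 scale, while blowing up as Q² approaches Λ², illustrating both asymptotic freedom and the breakdown of perturbation theory at low momentum transfers.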

In a typical scattering event involving hadrons, the cross-section can, very schematically, be written as

$$\sigma = G\,\hat{\sigma}\,D, \qquad (5.32)$$

where $\hat{\sigma}$ is the scattering of elementary constituents, calculable in perturbation theory if the scattering is "hard" enough (high momentum transfer). G gives the distribution of quarks (and gluons) in the original hadron, while D gives the "hadronization" (or "fragmentation") of a given outgoing quark (or gluon) into observable hadronic particles. G and D are not calculable from first principles
