(1)

Welcome to DARK2

(IT, MN)

Erik Hagersten

Uppsala University

(2)


Literature: Computer Architecture: A Quantitative Approach (4th edition), John Hennessy & David Patterson

Lecturers: Erik Hagersten gives most lectures and is responsible for the course. Andreas Sandberg is responsible for the labs and the hand-ins.

Jakob Carlström from Xelerated will teach network processors.

Sverker Holmgren will teach parallel programming.

David Black-Schaffer will teach about graphics processors.

Dan Ekblom from Virtutech will teach about the SIMICS simulator.

Mandatory assignments: Two lab assignments that all participants have to complete before a hard deadline (plus a Microprocessor Report/Microbenchmark if you are taking the MN2 version).

Optional assignments: Three optional hand-in assignments: Memory, CPU, Multiprocessors. They give extra credit at the exam.

Examination: Written exam at the end of the course. No books are allowed.

www.it.uu.se/edu/course/homepage/dark2/ht09

(3)


DARK2 On the web

www.it.uu.se/edu/course/homepage/dark2/ht09

DARK2, Fall 09 Welcome!

Menu: News, FAQ, Schedule, Slides, New Papers, Assignments, Reading instructions (4th ed.), Exam

(4)


Exam and bonus

2 mandatory labs (+ Microprocessor Report or Microbenchmark to get 7.5p) and the exam.

Bonus system:
Optional bonus activities at the labs (2 x 4p bonus)
3 optional hand-ins (3 x 8p bonus)
32p/64p at the exam = PASS

PhD students ("Doctor Ducks", a pun on Swedish "doktorander"):
Mandatory: the three hand-ins, the two labs + bonus, a new GPU/OpenCL assignment, and no exam.

(5)


Goal for this course

Understand why modern computer systems are designed the way they are.

A good understanding of:
pipelining, caches, TLP, ILP, memory systems, coherence, and memory models
specialized systems, e.g., network processors, graphics processors and multicores
implementation issues for modern processors: memory system, multicores, GPUs, NW processors

Be able to design an efficient memory system and pipeline at a functional level.

Understand how to write efficient programs with respect to modern memory systems, modern pipelined architectures, TLP and multicores.

(6)


DARK2 in a nutshell

1. Memory systems (~Appendix C in 4th Ed): caches, VM, DRAM, microbenchmarks, optimizing SW
2. Multiprocessors. TLP: coherence, memory models, interconnects, scalability, clusters, …
3. CPUs. ILP: pipelines, scheduling, superscalars, VLIWs, embedded, …
4. Widening + future (~Chapter 1 in 4th Ed): technology impact, TLP+ILP in the CPU, multicores (!!)

(7)


Part 1: Memory System (Schedule)

Day    Room  Time         Topic
26/10  1311  08.15-19.00  Welcome + Intro
27/10  1111  15.15-17.00  Caches and virtual memory
28/10  1211  10.15-12.00  Caches and virtual memory
29/10  1211  15.15-17.00  Profiling and optimizing for the memory system
03/11  1211  15.15-17.00  Introduction to SIMICS and introduction to Lab 1

Lab 1:
06/11  1515  08.15-12.00  Group A
09/11  1515  08.15-12.00  Group B

Hard deadline => solutions handed in after the deadline will be ignored:
• 12/11 at 10.14: Lab 1 (or use the lab occasions)
• 12/11 at 10.14: Hand-in 1 to AS (leave it in AS's mail box)

(8)

Introduction to Computer Architecture

Erik Hagersten

Uppsala University

(9)


What is computer architecture?

“Bridging the gap between programs and transistors”

“Finding the best model to execute the programs”

best={fast, cheap, energy-efficient, reliable, predictable, …}

(10)

"Only" 20 years ago: APZ 212, "the AXE supercomputer"

(11)

APZ 212, marketing brochure quotes:

"Very compact": 6 times the performance, 1/6th the size, 1/5 the power consumption
"A breakthrough in computer science"
"Why more CPU power?" "All the power needed for future development"
"…800,000 BHCA, should that ever be needed"
"SPC computer science at its most elegant"
"Using 64 kbit memory chips"
"1500 W power consumption"

(12)

CPU Improvements

[Figure: relative performance (log scale, 1-1000) vs. year (2000-2015); historical rate: 55%/year; the continuation beyond ~2005 is marked "??"]

(13)


How do we get good performance?

Creating and exploring:

1) Locality

a) Spatial locality

b) Temporal locality

c) Geographical locality

2) Parallelism

a) Instruction level

b) Thread level

(14)

Execution in a CPU

[Figure: the CPU connected to a memory holding "machine code" and "data"]

(15)

Register-based machine

Example: C := A + B

"Machine code":
LD R1, [A]
LD R7, [B]
ADD R2, R1, R7
ST R2, [C]

[Figure: the program counter (PC) steps through the code; data memory holds A:12, B:14, C:10; the ADD computes 12 + 14 = 26, which the ST writes to C]
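To make the execution concrete, here is a minimal C sketch that mimics the four instructions above (the values A:12, B:14, C:10 come from the slide; the C rendering is only an illustration):

#include <stdio.h>

/* A toy register-machine run: not real hardware, just the slide's example. */
int main(void) {
    int mem_A = 12, mem_B = 14, mem_C = 10;  /* data memory: A:12 B:14 C:10 */
    int R[8] = {0};                          /* register file */

    R[1] = mem_A;        /* LD  R1, [A]    */
    R[7] = mem_B;        /* LD  R7, [B]    */
    R[2] = R[1] + R[7];  /* ADD R2, R1, R7 */
    mem_C = R[2];        /* ST  R2, [C]    */

    printf("C = %d\n", mem_C);  /* prints 26 */
    return 0;
}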

(16)

How "long" is a CPU cycle?

1982: 5 MHz clock => 200 ns => 60 m (in vacuum)
2002: 3 GHz clock => 0.3 ns => 10 cm (in vacuum), 3 mm (on silicon)

(17)

Lifting the CPU hood (simplified…)

[Figure: instructions D, C, B, A waiting in memory, feeding the CPU]

(18)

Pipeline

[Figure, slides (18)-(22): instructions D, C, B, A in memory; pipeline stages I, R, X, W around the register file; the animation follows instruction A one stage per cycle]

I = Instruction fetch
R = Read register
X = Execute
W = Write register

(23)

Pipeline system in the book

[Figure: five-stage pipeline I, R, X, M, W; separate instruction and data memories; register file with source ports s1, s2 and a dest/data write port; the pc feeds instruction fetch]

(24)

Register operations: ADD R1, R2, R3

[Figure: I fetches the instruction, R reads registers 2 and 3, X executes OP "+", W writes register 1]

(25)

Pipeline animation: initially, then cycles 1-4

The program:
LD RegA, (100 + RegC)
IF RegC < 100 GOTO A
RegB := RegA + 1
RegC := RegC + 1

[Figure, slides (25)-(29): the instructions advance one pipeline stage (I, R, X, W) per cycle while the PC moves down the code]

(30)

Today: ~10-20 stages and 4-6 pipes

[Figure: issue logic feeding four parallel pipelines I R B M M W, sharing the register file and memory]

+ Shorter cycle time (more GHz)
+ Many instructions started each cycle
- Very hard to find "enough" independent instructions

(31)

Modern MEM: ~200 CPU cycles

[Figure: the same superscalar pipelines, now with the memory 200 cycles away]

+ Shorter cycle time (more GHz)
+ Many instructions started each cycle
- Very hard to find "enough" independent instructions
- Slow memory accesses will dominate

(32)

Connecting to the Memory System

[Figure: the five-stage pipeline I, R, X, M, W, with its instruction fetch backed by an instruction memory system and its M stage backed by a data memory system]

(33)

Caches and more caches (or: spam, spam, spam and spam)

Erik Hagersten

Uppsala University, Sweden

eh@it.uu.se

(34)

Fix: Use a cache

[Figure: the superscalar pipelines now access a ~32 kB cache ($) with ~1 cycle latency; the 1 GB memory, 200 cycles away, sits behind it]

(35)

Webster about "cache"

1. cache \'kash\ n [F, fr. cacher to press, hide, fr. (assumed) VL coacticare to press together, fr. L coactare to compel, fr. coactus, pp. of cogere to compel - more at COGENT] 1a: a hiding place esp. for concealing and preserving provisions or implements 1b: a secure place of storage 2: something hidden or stored in a cache

(36)


Cache knowledge useful when...

Designing a new computer

Writing an optimized program

or compiler

or operating system …

Implementing software caching

Web caches, proxies, file systems

(37)

Memory/storage

[Table, year 2000: SRAM (1 kB-4 MB, ~1-10 ns), DRAM (1 GB, ~150 ns access, ~200 ns cycle time), disk (1 TB, ~5,000,000 ns access, ~10,000,000 ns cycle time)]

(38)

Address Book Cache

Looking for Tommy's telephone number.

[Figure: an address book with one entry per letter tab; the first letter is the indexing function that selects the page, the rest of the name ("OMMY") is the "address tag", and the telephone number (12345) is the "data"]

One entry per page => a direct-mapped cache with 28 entries.

(39)

Address Book Cache

Looking for Tommy's number: index on "T", then compare the stored tag "OMMY" with the rest of the name (EQ?). They match => hit, and 12345 is returned.

(40)

Address Book Cache

Looking for Tomas' number: index on "T", but the stored tag "OMMY" does not equal "OMAS". Miss! Look up Tomas' number in …

(41)

Address Book Cache

Looking for Tomas' number: replace TOMMY's data with TOMAS' data ("OMAS 23457"). There is no other choice (direct mapped).

(42)

Cache

[Figure: the CPU sends an address to the cache, which returns data (a word) and a hit signal; on the other side the cache exchanges address and data with memory]

(43)

Cache Organization

[Figure, address-book version: the index selects an entry holding a valid bit (1 bit), an address tag (4 letters, "OMAS") and data (5 digits, 23457); Hit = valid & (stored tag = looked-up tag)]

(44)

Cache Organization (really)

4 kB, direct mapped: 1k entries of 4 bytes each. A 32-bit address identifies a byte in ordinary memory; some of its bits must index the cache, and the rest are stored as the address tag. Hit = valid & (stored tag = address tag). What is a good index function?

(45)

Cache Organization

4 kB, direct mapped: 1k entries of 4 bytes each.

The 32-bit address is split (msb to lsb) into: address tag (20 bits, stored in the cache), index (10 bits, selecting one of the 1k entries), and 2 bits identifying the byte within the word. Hit = valid & (stored tag = address tag).

Overhead: 21/32 = 66% (20 tag bits + 1 valid bit per 32-bit entry)
Latency = SRAM + CMP + AND
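As a sketch of this split (2 offset bits, 10 index bits, 20 tag bits for a 4 kB direct-mapped cache with 4 B entries), the following C fragment decomposes a 32-bit address; the names are illustrative only:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t addr   = 0x2614A6A3u;          /* the address from the slide */
    uint32_t offset = addr & 0x3u;          /*  2 bits: byte within the word */
    uint32_t index  = (addr >> 2) & 0x3FFu; /* 10 bits: one of 1k entries    */
    uint32_t tag    = addr >> 12;           /* 20 bits: stored address tag   */
    printf("tag=0x%05x index=0x%03x offset=%u\n", tag, index, offset);
    return 0;
}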

(46)

Cache

[Figure: CPU, cache and memory as before]

Hit: use the data provided by the cache.
Miss: use the data from memory and also store it in the cache.

(47)

Cache performance parameters

Cache "hit rate" [%]
Cache "miss rate" [%] (= 1 - hit_rate)
Hit time [CPU cycles]
Miss time [CPU cycles]
Hit bandwidth
Miss bandwidth
Write strategy
…

(48)

How to rate architecture performance?

Marketing: frequency / number of cores…
Architecture "goodness": CPI = Cycles Per Instruction, IPC = Instructions Per Cycle
Benchmarking: SPEC-fp, SPEC-int, …; TPC-C, TPC-D, …

(49)

Cache performance example

Assumptions: infinite bandwidth; a perfect CPI = 1.0 CPU; 100% instruction cache hit rate.

Total number of cycles =
#Instr * ((1 - mem_ratio) * 1 + mem_ratio * avg_mem_access_time) =
#Instr * ((1 - mem_ratio) + mem_ratio * (hit_rate * hit_time + (1 - hit_rate) * miss_time))

CPI = (1 - mem_ratio) + mem_ratio * (hit_rate * hit_time + (1 - hit_rate) * miss_time)

(50)

Example Numbers

CPI = (1 - mem_ratio) + mem_ratio * (hit_rate * hit_time) + mem_ratio * (1 - hit_rate) * miss_time

mem_ratio = 0.25, hit_rate = 0.85, hit_time = 3, miss_time = 100

CPI = 0.75 + 0.25 * 0.85 * 3 + 0.25 * 0.15 * 100 = 0.75 + 0.64 + 3.75 = 5.14
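A small C function reproducing the slide's arithmetic (the 0.64 term is 0.25 * 0.85 * 3 = 0.6375 before rounding):

#include <stdio.h>

/* CPI model from the slides: non-memory instructions take 1 cycle,
   memory instructions take the average memory access time. */
static double cpi(double mem_ratio, double hit_rate,
                  double hit_time, double miss_time) {
    return (1.0 - mem_ratio)
         + mem_ratio * (hit_rate * hit_time + (1.0 - hit_rate) * miss_time);
}

int main(void) {
    printf("CPI = %.2f\n", cpi(0.25, 0.85, 3.0, 100.0));  /* prints 5.14 */
    return 0;
}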

(51)

What if …

CPI = (1 - mem_ratio) + mem_ratio * (hit_rate * hit_time) + mem_ratio * (1 - hit_rate) * miss_time
(a CPU term, a HIT term and a MISS term)

mem_ratio = 0.25, hit_rate = 0.85, hit_time = 3, miss_time = 100 ==> 0.75 + 0.64 + 3.75 = 5.14

• Twice as fast CPU ==> 0.37 + 0.64 + 3.75 = 4.77
• Faster memory (70 cycles) ==> 0.75 + 0.64 + 2.62 = 4.01
• Improved hit_rate (0.95) ==> 0.75 + 0.71 + 1.25 = 2.71

(52)

How to get more effective caches:

Larger cache (more capacity)
Larger cache blocks (longer cache lines)
More placement choice (more associativity)
Innovative caches (victim, skewed, …)
Cache hierarchies (L1, L2, L3, CMR)
Latency hiding (weaker memory models)
Latency avoiding (prefetching)
Cache avoiding (cache bypass)
Optimized application/compiler

(53)

Why do you miss in a cache?

Mark Hill's three "Cs":
Compulsory misses (touching data for the first time)
Capacity misses (the cache is too small)
Conflict misses (non-ideal cache implementation; too many names starting with "H")

(Multiprocessors:)
Communication misses (imposed by communication)
False sharing (a side effect of large cache blocks)

(54)

Avoiding Capacity Misses: a huge address book

Lots of pages, one entry per page. New indexing function: the first two letters select the page ("ÖT", "ÖU", …); the remaining letters ("LING") form the "address tag" and the number (12345) the "data".

One entry per page => a direct-mapped cache with 784 (28 x 28) entries.

(55)

Cache Organization

1 MB, direct mapped: 256k entries of 4 bytes each.

The 32-bit address is split into: address tag (12 bits), index (18 bits), and 2 bits identifying the byte within the word.

Overhead: 13/32 = 40% (12 tag bits + 1 valid bit per entry)
Latency = SRAM + CMP + AND

(56)

Pros/Cons Large Caches

++ The safest way to get improved hit rate
-- SRAMs are very expensive!!
-- Larger size ==> slower speed: more load on "signals", longer distances
-- (power consumption)
-- (reliability)

(57)


Why do you hit in a cache?

Temporal locality

Likely to access the same data again soon

Spatial locality

Likely to access nearby data again soon

Typical access pattern (inner loop stepping through an array):
A, B, C, A+1, B, C, A+2, B, C, ...
(spatial locality on A; temporal locality on B and C)
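A hypothetical loop producing exactly this pattern: each iteration streams through a (spatial locality) while re-touching the same scalars b and c (temporal locality):

/* Accesses per iteration: a[i], b, c -- i.e., A, B, C, A+1, B, C, ... */
double stream(const double *a, int n, double b, double c) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b + c;
    return sum;
}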

(58)

Fetch more than a word: cache blocks (a.k.a. cache lines)

1 MB, direct mapped, cache line = 16 B: 64k entries of 128 bits each.

The address is split into: address tag (12 bits), index (16 bits), 2 bits selecting the word within the cache line (a 4:1 multiplexer picks one of the four 32-bit words), and 2 bits identifying the byte within the word.

Overhead: 13/128 = 10%
Latency = SRAM + CMP + AND

(59)


Example in Class

Direct mapped cache:

Cache size = 64 kB, cache line = 16 B, word size = 4 B; 32-bit address (byte addressable).

"There are 10 kinds of people in the world: those who understand binary numbers and those who do not."
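A possible worked answer (assuming the usual tag/index/offset split): a 16 B line gives 4 offset bits (2 select the word, 2 the byte within the word); 64 kB / 16 B = 4096 lines gives 12 index bits; the remaining 32 - 12 - 4 = 16 bits form the tag.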

(60)

Pros/Cons Large Cache Lines

++ Exploits spatial locality
++ Fits well with modern DRAMs: the first DRAM access is slow, subsequent accesses are fast ("page mode")
-- Poor usage of SRAM & bandwidth for some patterns
-- Higher miss penalty (fix: critical word first)
-- (False sharing in multiprocessors)

(61)

UART: StatCache Graph (thanks: Dr. Erik Berg)

app = matrix multiply

Small caches: short cache lines are better.
Large caches: longer cache lines are better.
Huge caches: everything fits.

Note: this is just a single example, but the conclusion typically holds for most applications.

(62)

Cache Conflicts

Typical access pattern (inner loop stepping through an array):
A, B, C, A+1, B, C, A+2, B, C, …

What if B and C index to the same cache location? Conflict misses -- big time! Potential performance loss 10-100x.

(63)

Address Book Cache

Two names per page: index first, then search.

[Figure: the "T" page now holds two entries, "OMMY 12345" and "OMAS 23457"; both tags are compared (EQ?) in parallel]

(64)

Avoiding conflicts: More associativity

1 MB, 2-way set-associative, cache line = 4 B: 128k "sets", each holding two entries.

The address is split into: address tag (13 bits), index (17 bits, selecting a set), and 2 bits identifying the byte within the word. The two tags in the set are compared in parallel; "logic" and a 2:1 multiplexer select the hitting way. How should the …

Latency = SRAM + CMP + AND + LOGIC + MUX

(65)

Pros/Cons Associativity

++ Avoids conflict misses
-- Slower access time
-- More complex implementation: comparators, muxes, ...
-- Requires more pins (for external SRAM…)

(66)

Going all the way…!

1 MB, fully associative, cache line = 16 B: one "set" with 64k entries, requiring 64k comparators.

The address is split into: address tag (28 bits), no index (0 bits), 2 bits selecting the word within the cache line and 2 bits identifying the byte within the word. All 64k tags are compared in parallel, and a 256k:1 multiplexer picks the word.

(67)


Fully Associative

Very expensive

Only used for small caches (and sometimes TLBs)

CAM = Content-Addressable Memory:
~a fully-associative cache storing key + data; provide the key to the CAM and get the associated data.

(68)

A combination thereof

1 MB, 2-way, cache line = 16 B: 32k "sets".

The address is split into: address tag (13 bits), index (15 bits), 2 bits selecting the word within the cache line (an 8:1 mux over the two 128-bit ways), and 2 bits identifying the byte within the word.

(69)


Example in Class

Cache size = 2 MB, cache line = 64 B, word size = 8 B (64 bits), 4-way set associative; 32-bit address (byte addressable).
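A possible worked answer: a 64 B line gives 6 offset bits (3 of them selecting the 8 B word); 2 MB / (64 B * 4 ways) = 8192 sets gives 13 index bits; the remaining 32 - 13 - 6 = 13 bits form the tag.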

(70)

Who to replace? Picking a "victim"

Least-recently used (aka LRU): considered the "best" algorithm (which is not always true…); only practical up to a limited number of ways.
Not-most-recently-used: remember who used it last; 8-way -> 3 bits/CL.
Pseudo-LRU: e.g., based on coarse time stamps; used in the VM system.
Random replacement: you can't continue to have "bad luck"...

(71)

Cache Model: Random vs. LRU

[Figure: miss-rate curves for random vs. LRU replacement on the SPEC applications Art and Equake]

(72)

4-way sub-blocked cache

1 MB, direct mapped, block = 64 B, sub-block = 16 B: 16k entries.

The address is split into: address tag (12 bits), index (14 bits), 2 bits selecting the sub-block within the block (4:1 mux), 2 bits selecting the word within the sub-block (a 16:1 mux overall), and 2 bits identifying the byte within the word. One valid bit per sub-block (4 per entry).

Overhead: 16/512 = 3%

(73)

Pros/Cons Sub-blocking

++ Lowers the memory overhead
++ (Avoids problems with false sharing -- MP)
++ Avoids problems with bandwidth waste
-- Will not exploit as much spatial locality
-- Still poor utilization of SRAM
-- Fewer sparse "things" allocated

(74)

Replacing dirty cache lines

Write-back: write dirty data back to memory (the next level) at replacement; a "dirty bit" indicates an altered cache line.
Write-through: always write through to the next level (as well); data will never be dirty => no write-backs.

(75)

Write Buffer/Store Buffer

We do not need the old value for a store.
One option: write around (no write allocate in the cache), used for lower-level smaller caches.

[Figure: a write buffer (WB) between the CPU cache and memory; stores enter the WB, while loads are checked against its entries]

(76)

Innovative cache: Victim cache

Victim cache (VC): a small, fairly associative cache (~10s of entries).
Lookup: search the cache and the VC in parallel.
Cache replacement: move the victim to the VC and replace in the VC.
VC hit: swap the VC data with the corresponding data in the cache.

[Figure: CPU, cache and memory as before, with the VC alongside the cache returning data (a word) and a hit signal]

(77)

Skewed Associative Cache

A, B and C have a three-way conflict.

[Figure: in a 2-way cache, A, B and C collide in the same set; a 4-way cache resolves the conflict; a 2-way skewed cache also resolves it, since each way indexes differently]

It has been shown that 2-way skewed performs roughly the same as 4-way caches.

(78)

Skewed-associative cache: different indexing functions

128k entries per way. Instead of the same index field, each way is indexed by its own function f1, f2 of the address bits (>18 bits in, a 17-bit index out); the tag compares, "logic" and a 2:1 mux select the hitting way as before.

(79)

UART: Elbow cache

Increase "associativity" when needed: performs roughly the same as an 8-way cache, is slightly faster, and uses much less power!!

[Figure: A and B occupy a set; on a severe conflict with C, make room by moving A to its alternative location]

(80)

Cache Hierarchy Latency

A 200:1 latency ratio between on-chip SRAM and DRAM motivates cache hierarchies.

L1: small on-chip cache; runs in tandem with the pipeline, so it must stay small; VIPT caches add constraints (more later…).
L2: large on-chip SRAM; communication latency becomes more important.
L3: off-chip SRAM; a huge cache, ~10x faster than DRAM.

(81)

Cache Hierarchy

[Figure: CPU -> L1$ (on-chip) -> L2$ (on-module) -> L3$ (on-board) -> memory]

(82)

Topology of caches: Harvard Architecture

The CPU needs a new instruction each cycle; 25% of instructions are LD/ST; data and instructions have different access patterns.
==> Separate D and I first-level caches
==> Unified 2nd- and 3rd-level caches

(83)

Common Cache Structure

[Figure: CPU with split I$/D$, backed by a unified L2$ and L3$]

L1: CL=32B, Size=32kB, 4-way, 1ns, split I/D

L2: CL=128B, Size= 1MB, 8-way, 4ns, unified

L3: CL=128B, Size= 32MB, 2-way, 15ns, unified

(84)

Hardware prefetching

A hardware "monitor" looks for patterns in memory accesses and brings data of anticipated future accesses into the cache prior to their usage.

Two major types:
Sequential prefetching (typically page-based, 2nd-level cache and higher): detects sequential cache lines missing in the cache.
PC-based prefetching, integrated with the pipeline: finds per-PC strides; can find more complicated patterns.
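A minimal C sketch of the PC-based variant, assuming a small table indexed by the load's PC that remembers the last address and the detected stride (all names and sizes are invented for illustration):

#include <stdint.h>

#define ENTRIES 256

/* One entry per load PC: last address seen and the stride between accesses. */
struct stride_entry { uint64_t last_addr; int64_t stride; int confident; };
static struct stride_entry table[ENTRIES];

/* Called on every load; returns an address to prefetch, or 0 for none. */
uint64_t on_load(uint64_t pc, uint64_t addr) {
    struct stride_entry *e = &table[pc % ENTRIES];
    int64_t stride = (int64_t)(addr - e->last_addr);
    e->confident = (stride != 0 && stride == e->stride); /* same stride twice */
    e->stride = stride;
    e->last_addr = addr;
    return e->confident ? addr + (uint64_t)stride : 0;
}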

(85)

Why do you miss in a cache?

Mark Hill's three "Cs":
Compulsory misses (touching data for the first time)
Capacity misses (the cache is too small)
Conflict misses (imperfect cache implementation)

(Multiprocessors:)
Communication misses (imposed by communication)
False sharing (a side effect of large cache blocks)

(86)


How are we doing?

Creating and exploring:

1) Locality

a) Spatial locality

b) Temporal locality

c) Geographical locality

2) Parallelism

a) Instruction level

b) Thread level

(87)

Memory Technology

Erik Hagersten

Uppsala University, Sweden

eh@it.uu.se

(88)

Main memory characteristics

Performance of main memory (numbers from the 3rd Ed; faster today):
Access time: time between the address being latched and data being available (~50 ns)
Cycle time: time between requests (~100 ns)
Total access time: from ld to REG valid (~150 ns)

• Main memory is built from DRAM (Dynamic RAM): 1 transistor/bit ==> more error prone and slow; needs refresh and precharge.
• Cache memory is built from SRAM (Static RAM): about 4-6 transistors/bit.

(89)

DRAM organization

[Figure: a 4 Mbit memory array as a 2048×2048 cell matrix; a one-bit memory cell is a capacitance at the crossing of a word line and a bit line; row decoder (RAS), column decoder and column latch (CAS), 11 multiplexed address bits, 4-bit data out]

The address is multiplexed: Row/Column Address Strobe (RAS/CAS).
"Thin" organizations (between x16 and x1) to decrease pin load.
Refresh of the memory cells decreases bandwidth.
The bit-error rate creates a need for error correction (ECC).

(90)

SRAM organization

[Figure: a 512×512×4 cell matrix with row and column decoders, differential amplifiers and an in-buffer; 18 address pins A0-A17, four data pins I/O0-I/O3, control pins CE, WE, OE]

The address is typically not multiplexed.
Each cell consists of about 4-6 transistors.
Wider organization (x18 or x36), typically few chips.
Often parity protected (ECC becoming more common).

(91)

Error Detection and Correction

Error correction and detection, e.g., 64 bits of data protected by 8 bits of ECC:
Protects DRAM and high-availability SRAM applications.
Single-bit correction; double-bit error detection ("crash and burn"); chip-kill detection (all bits of one chip stuck at all-1 or all-0).
Needs "memory scrubbing" in order to get good coverage.

Parity, e.g., 8 bits of data protected by 1 parity bit:
Protects SRAM and data paths.
Single-bit "crash and burn" detection.
Not sufficient for large SRAMs today!!

(92)

Correcting the Error

Correction on the fly by hardware: no performance glitch; great for cycle-level redundancy; fixes the problem for now…
Trap to software: correct the data value and write it back to memory.
Memory scrubber: a kernel process that periodically touches all of memory.

(93)

Improving main memory performance

Page mode => faster access within a small distance; improves bandwidth per pin -- not the time to the critical word.
A single wide bank improves the access time to the complete cache line.
Multiple banks improve bandwidth.

(94)

Newer kinds of DRAM…

SDRAM (5-1-1-1 @ 100 MHz): the memory controller provides a strobe for the next sequential access.
DDR-DRAM (5-½-½-½): transfers data on both clock edges.
RAMBUS: a fast unidirectional circular bus; split-transaction addr/data; each DRAM device implements RAS/CAS/refresh internally.
CPU and DRAM on the same chip?? (IMEM)…

(95)

Newer DRAMs …

(Several DRAM arrays on a die; prefetch buffer on the DRAM chips)

Name       Clock rate (MHz)  BW (GB/s per DIMM)
DDR-266    133               2.1
DDR-300    150               2.4
DDR2-533   266               4.3
DDR2-800   400               6.4
DDR3-1066  533               8.5
DDR3-1600  800               12.8

2006 access latency: slow = 50 ns, fast = 30 ns; cycle time = 60 ns.

(96)

The Endian Mess

Storing the value 0x5F in a 4-byte word at address 0: big endian stores 00 00 00 5f (msb at the lowest address); little endian stores the 5f byte at the lowest address.

Storing the string "Hello": the characters land at increasing addresses in both formats, but read back as words the two machines see the bytes of each word in opposite order.

Numbering the bytes in a word: big endian numbers byte 0 at the msb end; little endian numbers byte 0 at the lsb end.
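A small C program that reproduces the 0x5F experiment and reports what kind of machine it runs on:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t v = 0x0000005f;                 /* store the value 0x5F */
    const uint8_t *p = (const uint8_t *)&v;
    /* Little endian: the 5f byte sits at the lowest address of the word;
       big endian: it sits at the highest address. */
    printf("byte 0 = 0x%02x => %s endian\n",
           p[0], p[0] == 0x5f ? "little" : "big");
    return 0;
}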

(97)

Virtual Memory System

Erik Hagersten

Uppsala University, Sweden

eh@it.uu.se

(98)

Physical Memory

[Figure: a program's image in a 64 MB physical memory, backed by disk]

(99)

Virtual and Physical Memory

[Figure: two 4 GB virtual address spaces, context A and context B, each with text, data, heap and stack segments; their pages map onto a 64 MB physical memory (via caches $1, $2) and onto disk]

(100)

Translation & Protection

[Figure: the same two contexts; each virtual-memory mapping carries protection bits, e.g., text pages R (read-only), data/heap/stack pages RW]

(101)

Virtual memory parameters

Compared to first-level cache parameters:

Parameter          First-level cache     Virtual memory
Block (page) size  16-128 bytes          4K-64K bytes
Hit time           1-2 clock cycles      40-100 clock cycles
Miss penalty       8-100 clock cycles    700K-6000K clock cycles
  (Access time)    (6-60 clock cycles)   (500K-4000K clock cycles)
  (Transfer time)  (2-40 clock cycles)   (200K-2000K clock cycles)
Miss rate          0.5%-10%              0.00001%-0.001%
Data memory size   16 Kbyte - 1 Mbyte    16 Mbyte - 8 Gbyte

Replacement in caches is handled by HW; replacement in VM is handled by SW.
VM hit latency is very low (often zero cycles); VM miss latency is huge (several kinds of misses).
The allocation size is one "page" (4 kB and up).

(102)

VM: Block placement

Where can a block (page) be placed in main memory? What is the organization of the VM?

The high miss penalty makes a SW implementation of a fully associative address mapping feasible at page faults: a page from disk may occupy any page frame in PA. Some restrictions can be helpful (page coloring).

(103)

VM: Block identification

Use a page table stored in main memory:
• Suppose 8 Kbyte pages and a 48-bit virtual address.
• The page table would occupy 2^48 / 2^13 * 4 B = 2^37 B = 128 GB!!!
• Solutions:
  • only one entry per physical page is needed
  • multi-level page table (dynamic)
  • inverted page table (~hashing)

(104)

Address translation

Multi-level table: the Alpha 21064.

The segment is selected by bits 62 & 63 of the address:
kseg: kernel segment, used by the OS; does not use virtual memory.
seg1: user segment 1, used for the stack.
seg0: user segment 0, used for instructions, static data & the heap.

PTE = Page Table Entry (translation & protection).

(105)

Protection mechanisms

The address translation mechanism can be used to provide memory protection:

Use protection attribute bits for each page

Stored in the page table entry (PTE) (and TLB…) Each physical page gets its own per process

protection

Violations detected during the address

translation cause exceptions (i.e., SW trap)

Supervisor/user modes necessary to prevent

user processes from changing e.g. PTEs

(106)

Fast address translation

How can we avoid three extra memory references for each original memory reference? Store the most commonly used address translations in a cache: the Translation Look-aside Buffer (TLB).
==> The caches rear their ugly faces again!

[Figure: processor -> TLB lookup (VA -> PA) -> cache -> main memory; on a TLB miss, the translation is fetched from the page table in memory]

(107)

Do we need a fast TLB?

Why do a TLB lookup for every L1 access? Why not cache virtual addresses instead? Move the TLB to the other side of the cache: it is only needed for finding stuff in memory anyhow, and the TLB could then be made larger and slower -- or could it?

[Figure: processor -> virtually addressed cache -> TLB lookup (VA -> PA) -> main memory]

(108)

Aliasing Problem

The same physical page may be accessed using different virtual addresses. A virtual cache will cause confusion: a write by one process may not be observed. Flushing the cache on each process switch is slow (and may only help partly).

=> VIPT (Virtually Indexed, Physically Tagged) is the answer:
a direct-mapped cache no larger than a page;
no more sets than there are cache lines on a page + logic;
page coloring can be used to guarantee correspondence between more PA and VA bits (e.g., Sun Microsystems).

(109)

Virtually Indexed, Physically Tagged = VIPT

We have to guarantee that all aliases have the same index:
L1_cache_size <= page_size * associativity
Page coloring can help further.

[Figure: the cache is indexed with VA bits in parallel with the TLB lookup; the PA address tag delivered by the TLB is then compared for the hit]
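For example, with 8 kB pages a direct-mapped VIPT L1 can be at most 8 kB and a 4-way one at most 32 kB, which is one reason L1 caches tend to grow in associativity rather than in number of sets.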

(110)

What is the capacity of the TLB?

Typical TLB size = 0.5-2 kB; each translation entry is 4-8 B ==> 32-500 entries.
Typical page size = 4 kB-16 kB ==> TLB reach = 0.1 MB-8 MB.

Fixes:
Multiple page sizes, e.g., 8 kB and 8 MB.
TSB: a direct-mapped translation table in memory used as a "second-level TLB".
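For example, 128 TLB entries covering 8 kB pages reach only 128 * 8 kB = 1 MB, while the same 128 entries with 8 MB pages reach 1 GB.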

(111)

VM: Page replacement

Most important: minimize number of page faults

Page replacement strategies:

• FIFO—First-In-First-Out

• LRU—Least Recently Used

• Approximation to LRU

• Each page has a reference bit that is set on a reference

• The OS periodically resets the reference bits

• When a page is replaced, a page with a reference bit that is not

set is chosen
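A minimal C sketch of the reference-bit approximation, in the common "clock" formulation where the resetting sweep and the victim search are combined (names invented):

#define NPAGES 1024

struct page { int referenced; };   /* set by HW/OS on each access */
static struct page pages[NPAGES];
static int hand = 0;               /* sweeps circularly over physical pages */

/* Pick a victim: clear and skip referenced pages ("second chance")
   until an unreferenced page is found. */
int choose_victim(void) {
    for (;;) {
        if (!pages[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % NPAGES;
            return victim;
        }
        pages[hand].referenced = 0;
        hand = (hand + 1) % NPAGES;
    }
}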

(112)

So far…

[Figure: CPU with split I/D L1$ and a unified L2$ in front of memory; a TLB (translation $) translates addresses; a TLB miss triggers a TLB fill from the page tables (PT) in memory; a page fault invokes the PF handler, which brings the page in from disk]

(113)

Adding a TSB (software TLB cache)

[Figure: as before, but a TLB miss is first serviced with a TLB fill from the TSB, a translation cache (trans$) in memory; only on a TSB miss does the PF handler walk the page tables]

(114)

VM: Write strategy

Write back or write through? Write back!

Write through is impossible to use:
too long access time to disk;
the write buffer would need to be prohibitively large;
the I/O system would need an extremely high bandwidth.

(115)

VM dictionary

Virtual memory term   The "cache" language
Virtual address       ~Cache address
Physical address      ~Cache location
Page                  ~Huge cache block
Page fault            ~Extremely painful $ miss
Page-fault handler    ~The software filling the $
Page-out              ~Write-back if dirty

(116)

Caches Everywhere…

D cache, I cache, L2 cache, L3 cache
ITLB, DTLB, TSB
Virtual memory system
Branch predictors
Directory cache

(117)

Exploring the Memory of a Computer System

Erik Hagersten

Uppsala University, Sweden

eh@it.uu.se

(118)

Micro Benchmark Signature

for (times = 0; times < Max; times++)          /* many times */
    for (i = 0; i < ArraySize; i = i + Stride)
        dummy = A[i];                          /* touch an item in the array */

Measuring the average access time to memory while varying ArraySize and Stride allows us to reverse-engineer the memory system. (HW prefetching needs to be turned off...)
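A runnable version of the loop, as a sketch: the volatile sink keeps the compiler from removing the accesses; the parameters and the timing method are illustrative only:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_TIMES 100

volatile char dummy;  /* sink so the touches are not optimized away */

int main(void) {
    size_t array_size = (size_t)8 << 20;  /* 8 MB; vary this ... */
    size_t stride = 64;                   /* ... and this */
    char *A = calloc(array_size, 1);

    clock_t t0 = clock();
    for (int times = 0; times < MAX_TIMES; times++)          /* many times */
        for (size_t i = 0; i < array_size; i += stride)
            dummy = A[i];                 /* touch an item in the array */

    double accesses = (double)MAX_TIMES * (double)(array_size / stride);
    double ns = 1e9 * (double)(clock() - t0) / CLOCKS_PER_SEC / accesses;
    printf("avg access time: %.1f ns\n", ns);
    free(A);
    return 0;
}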

(119)

Micro Benchmark Signature

(the same loop as above)

[Figure: average access time (ns, 0-700) vs. stride (4 B-4 MB), one curve per ArraySize from 16 kB to 8 MB]

(120)

Stepping through the array

(the same loop as above)

[Figure: which array elements are touched for ArraySize = 16, Stride = 4; ArraySize = 32, Stride = 4; ArraySize = 16, Stride = 8]

(121)

Micro Benchmark Signature

(the same loop as above)

[Figure: the same plot with the curves labeled ArraySize = 16 kB, 32-256 kB, 512 kB and 8 MB]

(122)

Micro Benchmark Signature

(the same loop as above)

[Figure: the same plot annotated: the plateaus correspond to L1$ hits, L2$ hits (= 40 ns), memory (= 300 ns), L2$ + TLB miss and memory + TLB miss; the knees reveal the L1$ block size, the L2$ block size and the page size]

(123)

Twice as large L2 cache ???

(the same loop as above)

[Figure: the same plot with an ArraySize = 1 MB curve highlighted, asking how it would move with a twice-as-large L2 cache]

(124)

Twice as large TLB…

(the same loop as above)

[Figure: the same plot with the ArraySize = 1 MB curve highlighted, asking how it would move with a twice-as-large TLB]

(125)

Can software help us? How are we doing?

Creating and exploring:

1) Locality

a) Spatial locality

b) Temporal locality

c) Geographical locality

2) Parallelism

a) Instruction level

b) Thread level

(126)

Optimizing for Speed

Erik Hagersten

Uppsala University, Sweden

eh@it.uu.se

(127)

What is the potential gain?

Latency difference between L1$ and memory: ~50x
Bandwidth difference between L1$ and memory: ~20x
Repeated TLB misses add a factor of ~2-3x
Executing from L1$ instead of from memory ==> 50-150x improvement
At least a factor of 2-4x is within reach

(128)


Optimizing for cache performance

Keep the active footprint small

Use the entire cache line once it has been brought into the cache

Fetch a cache line prior to its usage

Let the CPU that already has the data in its cache do the job

...

(129)

What can go wrong? A simple example…

[Figure: an N×N array; perform a diagonal copy 10 times]

(130)

Example: Loop order

//Optimized: stride-1 accesses in the inner loop (C is row-major)
for (i = 1; i < N; i++) {
    for (j = 1; j < N; j++) {
        A[i][j] = A[i-1][j-1];
    }
}

//Unoptimized: the inner loop strides down the columns
for (j = 1; j < N; j++) {
    for (i = 1; i < N; i++) {
        A[i][j] = A[i-1][j-1];
    }
}

(131)

Performance Difference: Loop order

[Figure: speedup vs. the unoptimized version (0-20x) for array sides 16-4096, measured on Athlon64 X2, Pentium D and Core 2 Duo]

(132)

Example: Sparse data

//Optimized: the data is packed densely in its own array
for (i = 1; i < N; i++) {
    for (j = 1; j < N; j++) {
        A_data[i][j] = A_data[i-1][j-1];
    }
}

//Unoptimized: the data field is embedded in a larger struct
for (i = 1; i < N; i++) {
    for (j = 1; j < N; j++) {
        A[i][j].data = A[i-1][j-1].data;
    }
}

(133)

Performance Difference: Sparse Data

[Figure: speedup vs. the unoptimized version (0-16x) for array sides 16-4096, measured on Athlon64 X2, Pentium D and Core 2 Duo]

(134)

Loop Merging

/* Unoptimized: two passes over b */
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1)
        a[i][j] = 2 * b[i][j];
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1)
        c[i][j] = K * b[i][j] + d[i][j]/2;

/* Optimized: one merged pass */
for (i = 0; i < N; i = i + 1)
    for (j = 0; j < N; j = j + 1) {
        a[i][j] = 2 * b[i][j];
        c[i][j] = K * b[i][j] + d[i][j]/2;
    }

(135)

Padding of data structures

[Figure: a 1024×1024 array; rows start at A, A+1024*4, A+2048*4, so elements in the same column of successive rows map to the same cache index and conflict]

(136)

Padding of data structures

[Figure: allocate more memory than needed; with rows of 1024 + padding elements, the row starts A, A+1024*4+padding, A+2048*4+2*padding map to different cache indices and the conflicts disappear]
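A C sketch of the padding idea (the PAD value is illustrative and would be tuned to the cache in question):

#define N   1024
#define PAD 16   /* extra elements per row so rows map to different sets */

/* Without padding, A[i][j] and A[i-1][j-1] from consecutive rows can map
   to the same cache set whenever the cache size divides the row size. */
static double A[N][N + PAD];

void diagonal_copy(void) {
    for (int i = 1; i < N; i++)
        for (int j = 1; j < N; j++)
            A[i][j] = A[i-1][j-1];
}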
