204521 Digital Computer Architecture
Lecture 10a: I/O Introduction: Storage Devices, Metrics, & Productivity
Pradondet Nilagupta
Original notes from Professor David A. Patterson
Spring 2001
I/O: Formerly the Orphan in the Architecture Domain
• Before, I/O meant the non-processor, non-memory stuff
  » disk, tape, LAN, WAN, etc.
  » performance was not a major concern
– Devices were characterized as extraneous, non-priority, infrequently used, slow
– Exception is the swap area of the disk
  » part of the memory hierarchy
  » part of system performance, but you're in trouble if you use it often
  » a BIG deal for business applications, sometimes critical in scientific apps.
I/O Now
Lots of flavors:
• Secondary storage
  – disks, tapes, CD-ROM, DVD
• Communication
  – networks
• Human interface
  – video, audio, keyboards
  » multimedia!
• "Real world" interfaces
  » temperature, pressure, position, velocity, voltage, current, …
  » Digital Signal Processing (DSP)
I/O Trends
• Trends: the market is communications crazy
  » voice response & transaction systems, real-time video
  » multimedia expectations
  » creates many modes: continuous/fragmented, common/rare, dedicated/interoperable
  » diversity is even more chaotic than before
  » single-bucket I/O didn't work before; it's worse now
– even standard networks come in gigabit/sec flavors
  » for multicomputers, I/O is the bottleneck
• Result: significant focus on system bus performance
  » common bridge to the memory system and the I/O systems
  » critical performance component for SMP server platforms
System vs. CPU Performance
• Care about the speed at which user jobs get done
  – throughput: how many jobs per unit time
  – latency: how quickly a single job finishes
• CPU performance is the main factor when:
  – the job mix fits in memory
  » namely, there are very few page faults
• I/O performance is the main factor when:
  – the job is too big for memory: paging is dominant
  – the job involves lots of unexpected file accesses
  » OLTP
  » database
  » almost every commercial application you can imagine
  – and then there are graphics & specialty I/O devices
System Performance
Depends on lots of things in the worst case:
• CPU
• compiler
• operating system
• cache
• main memory
• memory-I/O bus
• I/O controller or channel
• I/O drivers and interrupt handlers
• I/O devices: there are many types
  – level of autonomous behavior
  – amount of internal buffer capacity
  – plus device-specific parameters for latency and throughput
Motivation: Who Cares About I/O?
• CPU performance: improves ~60% per year
• I/O system performance is limited by mechanical delays (disk I/O): < 10% per year (I/Os per sec or MB per sec)
• Amdahl's Law: system speed-up is limited by the slowest part!
  – 10% I/O & 10x CPU => ~5x system performance (lose 50%)
  – 10% I/O & 100x CPU => ~10x system performance (lose 90%)
• I/O bottleneck: diminishing fraction of time in the CPU, diminishing value of faster CPUs
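The Amdahl's Law figures above can be checked with a few lines of Python; this is a sketch (the function name is ours), and the slide rounds the results to 5x and 10x.

```python
def system_speedup(io_fraction, cpu_speedup):
    # Amdahl's Law: only the non-I/O portion of the workload
    # benefits from a faster CPU.
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

# 10% I/O, 10x CPU: ~5.3x overall (slide rounds to 5x)
# 10% I/O, 100x CPU: ~9.2x overall (slide rounds to 10x)
ten_x = system_speedup(0.10, 10)
hundred_x = system_speedup(0.10, 100)
```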
Storage System Issues
• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
I/O Systems
[Diagram: processor with cache on a memory-I/O bus that connects main memory and several I/O controllers; one controller drives disks, the others drive graphics and the network; the controllers signal the processor via interrupts]
Keys to a Balanced System
• It's all about overlap of I/O vs. CPU
  – Time_workload = Time_CPU + Time_I/O – Time_overlap
• Consider the effect of speeding up just one…
• Latency vs. bandwidth (throughput)
• Classic rule: optimize for the common case
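The overlap identity above is simple enough to state as code; this is a minimal sketch with made-up numbers and names of our choosing.

```python
def workload_time(t_cpu, t_io, t_overlap):
    # Overlapped work is counted once, so total time
    # shrinks as CPU and I/O overlap grows.
    return t_cpu + t_io - t_overlap

# Hypothetical job: 10 s of CPU work plus 6 s of I/O.
serialized = workload_time(10, 6, 0)   # no overlap
hidden = workload_time(10, 6, 6)       # I/O fully hidden behind CPU
```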
I/O System Design Considerations
• Depends on the type of I/O device
  – size, bandwidth, and type of transaction
  – frequency of transactions
  – defer vs. do now
• Appropriate memory bus utilization
• What should the controller do?
  » programmed I/O
  » interrupt vs. polled
  » priority or not
  » DMA
  » buffering issues: what happens on over-run
  » protection
  » validation
Types of I/O Devices
• Consider
  – Behavior
    » read, write, both; once vs. multiple
    » size of average transaction
    » bandwidth
    » latency
  – Partner (the speed-of-the-slowest-link theory)
    » human operated (interactive or not)
    » machine operated (local or remote)
Some I/O Devices

Device                Behavior   Partner   Data Rate (KB/sec)
Keyboard              Input      Human     0.01
Mouse                 Input      Human     0.02
Voice Input           Input      Human     0.20
Scanner               Input      Human     200+
Voice Output          Output     Human     0.60
Impact Printer        Output     Human     1.0
Laser Printer         Output     Human     100
Graphics Display      Output     Human     30,000
CPU to Frame Buffer   Output     Machine   200
Network - Terminal    I/O        Machine   0.05
Network - LAN         I/O        Machine   200-???
Optical Disk          Storage    Machine   500
Disk                  Storage    Machine   2,000-8,000
Tape                  Storage    Machine   2,000
Is I/O Important?
• Depends on your application
  – business: disks for file system I/O
  – graphics: graphics cards or specialty co-processors
  – parallelism: the communications fabric
• Our focus = mainline uniprocessing
  – storage subsystems
  – networks
• Noteworthy point
  – the traditional orphan
  – but now often viewed more as a front-line topic
Technology Trends
Disk capacity now doubles every 18 months; before 1990, every 36 months.
• Today: processing power doubles every 18 months
• Today: memory size doubles every 18 months (4X/3yr)
• Today: disk capacity doubles every 18 months
• Disk positioning rate (seek + rotate) doubles every ten years!
=> The I/O GAP
Storage Technology Drivers
• Driven by the prevailing computing paradigm
  – 1950s: migration from batch to on-line processing
  – 1990s: migration to ubiquitous computing
    » computers in phones, books, cars, video cameras, …
    » nationwide fiber-optic network with wireless tails
• Effects on the storage industry:
  – Embedded storage
    » smaller, cheaper, more reliable, lower power
  – Data utilities
    » high-capacity, hierarchically managed storage
Historical Perspective
• 1956 IBM RAMAC to early-1970s Winchester
  – Developed for mainframe computers, proprietary interfaces
  – Steady shrink in form factor: 27 in. to 14 in.
• 1970s developments
  – 5.25-inch floppy disk form factor (microcode into mainframe)
  – early emergence of industry-standard disk interfaces
    » ST506, SASI, SMD, ESDI
• Early 1980s
  – PCs and first-generation workstations
• Mid 1980s
  – Client/server computing
  – Centralized storage on file server
    » accelerates disk downsizing: 8 inch to 5.25 inch
  – Mass-market disk drives become a reality
    » industry standards: SCSI, IPI, IDE
    » 5.25-inch drives for standalone PCs; end of proprietary interfaces
Disk History
[Figure: data density (Mbit/sq. in.) and capacity of the unit shown (MBytes)]
• 1973: 1.7 Mbit/sq. in., 140 MBytes
• 1979: 7.7 Mbit/sq. in., 2,300 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
Historical Perspective
• Late 1980s/Early 1990s:
  – Laptops, notebooks, (palmtops)
  – 3.5-inch, 2.5-inch, (1.8-inch) form factors
  – Form factor plus capacity drives the market, not so much performance
    » recently, bandwidth improving at 40%/year
  – Challenged by DRAM and flash RAM in PCMCIA cards
    » still expensive; Intel promises but doesn't deliver
    » unattractive MBytes per cubic inch
  – Optical disk fails on performance (e.g., NeXT) but finds a niche (CD-ROM)
Disk History
• 1989: 63 Mbit/sq. in., 60,000 MBytes
• 1997: 1450 Mbit/sq. in., 2300 MBytes
• 1997: 3090 Mbit/sq. in., 8100 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
MBits per Square Inch: DRAM as % of Disk over Time
[Chart: DRAM areal density as a percentage of disk areal density, 1974-1998, on a 0%-40% scale; sample points: 0.2 vs. 1.7 Mb/si, 9 vs. 22 Mb/si, 470 vs. 3000 Mb/si]
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"
Alternative Data Storage Technologies: Early 1990s

Technology            Cap (MB)   BPI     TPI    BPI*TPI (Million)   Data Xfer (KByte/s)   Access Time
Conventional tape:
  Cartridge (.25")    150        12000   104    1.2                 92                    minutes
  IBM 3490 (.5")      800        22860   38     0.9                 3000                  seconds
Helical-scan tape:
  Video (8mm)         4600       43200   1638   71                  492                   45 secs
  DAT (4mm)           1300       61000   1870   114                 183                   20 secs
Magnetic & optical disk:
  Hard Disk (5.25")   1200       33528   1880   63                  3000                  18 ms
  IBM 3390 (10.5")    3800       27940   2235   62                  4250                  20 ms
  Sony MO (5.25")     640        24130   18796  454                 88                    100 ms
Devices: Magnetic Disks
[Diagram: platter, head, track, sector, cylinder]
• Purpose:
  – Long-term, nonvolatile storage
  – Large, inexpensive, slow level in the storage hierarchy
• Characteristics:
  – Seek time (~8 ms avg)
    » positional latency
    » rotational latency
• Transfer rate
  – About a sector per ms (5-15 MB/s)
  – Blocks
• Capacity
  – Gigabytes
  – Quadruples every 3 years (aerodynamics)
Example: 7200 RPM = 120 RPS => ~8.3 ms per revolution => avg. rotational latency ~4.2 ms
128 sectors per track => ~0.065 ms per sector; 1 KB per sector => ~16 MB/s
Response time = Queue + Controller + Seek + Rotation + Transfer
(everything after Queue is the service time)
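The rotational arithmetic above can be sketched in a few lines; the function names are ours, and times are in milliseconds.

```python
def rev_time_ms(rpm):
    # One full revolution.
    return 60_000.0 / rpm

def avg_rot_latency_ms(rpm):
    # On average, wait half a revolution for the sector.
    return 0.5 * rev_time_ms(rpm)

def sector_time_ms(rpm, sectors_per_track):
    # Time for one sector to pass under the head.
    return rev_time_ms(rpm) / sectors_per_track

# 7200 RPM: ~8.3 ms/rev, ~4.2 ms avg rotational latency;
# 128 sectors/track: ~0.065 ms/sector, so 1-KB sectors give ~16 MB/s.
rev = rev_time_ms(7200)
rot = avg_rot_latency_ms(7200)
sec = sector_time_ms(7200, 128)
```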
Disk Device Terminology
Disk latency = Queuing Time + Controller Time + Seek Time + Rotation Time + Transfer Time
Order-of-magnitude times for 4-KB transfers:
  Seek: 8 ms or less
  Rotate: 4.2 ms @ 7200 RPM
  Xfer: 1 ms @ 7200 RPM
Anatomy of a Read Access
• Steps
  – memory-mapped I/O over the bus to the controller
  – controller starts the access
  – seek + rotational latency wait
  – sector is read and buffered (validity checked)
  – controller signals ready, or DMAs to main memory and then signals ready
• Calculating access time:
  Access Time = Controller Delay + Block Size / Bandwidth + 0.5 / RPM + Seek Time + Pick Time + Check Delay
  (0.5 / RPM is the average rotational wait for the sector to pass under the head)
• Seek time is very non-linear: acceleration and deceleration times complicate the actual value
Seagate ST31401N (1993)
• 5.25” diameter
• 2.8 GB formatted capacity
• 2627 cylinders
• 21 tracks/cylinder
• 99 sectors per track - variable density encoded
• 512 bytes per sector
• rotational speed 5400 RPM
• average seek - random cylinder to cylinder = 11 ms
• minimum seek = 1.7 ms
• maximum seek = 22.5 ms
• transfer rate = 4.6 MB/sec
Disk Trends
• Bits per unit area is the common improvement metric
  – assumption is fixed-density encoding
  – may not get this, since variable-density control is cheaper
• Trends
  – Until 1988: 29% improvement per year
    » doubles density in three years
  – After 1988 (and thin-film oxide deposition): 60% per year
    » 4x density in three years
• Today
  – 644 Mbits/in²
  – 3 Gbits/in² demonstrated in labs
Areal Density = (Tracks / Inch across the platter surface) x (Bits / Inch along a track)
Disk Price Trends
• Personal computer disks (as advertised), in 1995 dollars
[Chart: advertised PC disk prices, 1983-1995, on a $1K-$5K scale, for 20 MB, 80 MB, 210 MB, 420 MB, and 1050 MB drives]
Cost per Megabyte
• Improved 100x in 12 years
  – 1983: $300/MB
  – 1990: $9/MB
  – 1995: $0.90/MB
• Comparison (1995)
  – SRAM chips: $200/MB, access times around 10 ns
  – DRAM chips: $10/MB, access times around 800 ns
    » there are chips with access times around 300 ns at $40/MB
  – DRAM boards: $20/MB, access times around 1 µs
  – Disk: $0.90/MB, access times around 10 ms
• Disk futures look bright
  – all doomsday predictions have failed, e.g. bubble memories
Disk Alternatives
• DRAM plus a battery
  – big reduction in seek time and lower latency
  – more reliable, as long as the battery holds up
  – cost is not attractive
  – prediction: they'll fail just like bubble memories
    » only some of the reasons will differ
• Optical disks
  – CDs are fine for archival storage due to their write-once nature
    » they're also cheap and high density
  – unlikely to replace disks until many writes are possible
    » numerous efforts at creating this technology
    » none likely to mature soon, however
  – role is likely to be archival storage and software distribution
Other Alternatives
• Robotic tape storage
  – STC Powderhorn: 60 TB at $50/GB
    » could hold the Library of Congress in ASCII
  – problem: tapes tend to wear out, so a rare-access role is key
• Optical jukeboxes
  – now latency depends on even more mechanical gizmos
• Tapes: DAT vs. the old stuff
  – amazing how dense you can be if you don't have to be right
• The new storage frontiers: terabyte per cm³
  – plasma
  – biological
  – holographic spread spectrum
  – none are anywhere close to deployment
I/O Connection Issues
• Connecting the CPU to the I/O device world
• Typical choice is a bus; advantages:
  – shares a common set of wires and protocols ==> cheap
  – often based on some standard (PCI, SCSI, etc.) ==> portable
• But significant disadvantages for performance
  – generality ==> poor performance
    » lack of a precise loading model
    » length may also be expandable
  – multiple devices imply arbitration, and therefore contention
  – can be a bottleneck
• Common choice: multiple buses in the system
  – fast ones for memory and CPU-to-CPU interaction (maybe I/O)
  – slow ones for more general I/O, via some converter or bridge (IOA)
Nano-layered Disk Heads
• The special sensitivity of the disk head comes from the "Giant Magneto-Resistive" (GMR) effect
• IBM is the leader in this technology
  – Same technology as the TMJ-RAM breakthrough described in an earlier class
[Diagram: head structure, with the coil for writing]
Advantages of Small Form-Factor Disk Drives
• Low cost/MB
• High MB/volume
• High MB/watt
• Low cost/actuator
=> Cost and environmental efficiencies
Tape vs. Disk
• Longitudinal tape uses the same technology as hard disk; it tracks disk's density improvements
• Disk head flies above the surface; tape head lies on the surface
• Disk is fixed; tape is removable
• Inherent cost-performance is based on the geometries:
  – fixed rotating platters with gaps (random access, limited area, 1 medium per reader)
  – vs. removable long strips wound on a spool (sequential access, "unlimited" length, multiple media per reader)
• New technology trend: helical scan (VCR, camcorder, DAT) spins the head at an angle to the tape to improve density
Current Drawbacks to Tape
• Tape wear-out:
  – 100s of passes for helical, up to 1000s for longitudinal
• Head wear-out:
  – 2000 hours for helical
• Both must be accounted for in the economic/reliability model
• Long rewind, eject, load, and spin-up times; not inherent, just no need in the marketplace (so far)
• Designed for archival use
Automated Cartridge System: STC 4400
[Photo: library unit, roughly 8 feet by 10 feet]
• 6000 x 0.8 GB 3490 tapes = 5 TBytes in 1992; $500,000 O.E.M. price
• 6000 x 10 GB D3 tapes = 60 TBytes in 1998
• Library of Congress: all information in the world? In 1992, the ASCII of all books = 30 TB
Relative Cost of Storage Technology, Late 1995/Early 1996

Magnetic disks
  5.25"  9.1 GB    $2129       $0.23/MB
                   $1985       $0.22/MB
  3.5"   4.3 GB    $1199       $0.27/MB
                   $999        $0.23/MB
  2.5"   514 MB    $299        $0.58/MB
         1.1 GB    $345        $0.33/MB
Optical disks
  5.25"  4.6 GB    $1695+199   $0.41/MB
                   $1499+189   $0.39/MB
PCMCIA cards
  Static RAM   4.0 MB    $700    $175/MB
  Flash RAM    40.0 MB   $1300   $32/MB
               175 MB    $3600   $20.50/MB
Storage System Issues
• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
Disk I/O Performance
Response time = Queue + Device service time
[Diagram: Proc -> queue -> IOC -> device; metrics are response time and throughput]
[Chart: response time (ms, 0-300) vs. throughput (% of total bandwidth, 0-100%); response time grows sharply as throughput approaches 100%]
Response Time vs. Productivity
• Interactive environments: each interaction or transaction has 3 parts:
  – Entry time: time for the user to enter the command
  – System response time: time between user entry and the system's reply
  – Think time: time from the response until the user begins the next command
[Timeline: 1st transaction, 2nd transaction]
• What happens to transaction time as system response time shrinks from 1.0 sec to 0.3 sec?
  – With keyboard: 4.0 sec entry, 9.4 sec think time
  – With graphics: 0.25 sec entry, 1.6 sec think time
Response Time & Productivity
[Chart: total time (entry + response + think, 0-15 sec) per transaction for conventional (keyboard) and graphics interfaces at 1.0 s and 0.3 s response times]
• 0.7 sec off the response time saves 4.9 sec (34%, conventional) and 2.0 sec (70%, graphics) of total time per transaction => greater productivity
• Another study: everyone gets more done with faster response, but a novice with fast response = an expert with slow
Disk Time Example
• Disk parameters:
  – Transfer size is 8 KB
  – Advertised average seek is 12 ms
  – Disk spins at 7200 RPM
  – Transfer rate is 4 MB/sec
• Controller overhead is 2 ms
• Assume the disk is idle, so there is no queuing delay
• What is the average disk access time for a sector?
  – Avg seek + avg rotational delay + transfer time + controller overhead
  – 12 ms + 0.5/(7200 RPM/60) + 8 KB/(4 MB/s) + 2 ms
  – 12 + 4.2 + 2 + 2 = 20 ms
• Advertised seek time assumes no locality; in practice the average seek is typically 1/4 to 1/3 of the advertised figure: 20 ms => 12 ms
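The access-time sum above can be written as a small function; this is a sketch (the function name and argument units are ours).

```python
def disk_access_ms(seek_ms, rpm, xfer_bytes, rate_bytes_per_s, ctrl_ms):
    rot_ms = 0.5 * 60_000.0 / rpm                  # average rotational delay
    xfer_ms = 1000.0 * xfer_bytes / rate_bytes_per_s
    return seek_ms + rot_ms + xfer_ms + ctrl_ms

# The slide's parameters: 12 ms seek, 7200 RPM, 8 KB at 4 MB/s, 2 ms controller.
t = disk_access_ms(12, 7200, 8 * 1024, 4e6, 2)    # ~20 ms
```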
Storage System Issues
• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
Processor Interface Issues
• Processor interface
  – Interrupts
  – Memory-mapped I/O
• I/O control structures
  – Polling
  – Interrupts
  – DMA
  – I/O controllers
  – I/O processors
• Capacity, access time, bandwidth
• Interconnections
  – Buses
I/O Interface
[Diagram 1: independent I/O bus. The CPU reaches memory over the memory bus and reaches peripheral interfaces over a separate I/O bus; separate I/O instructions (in, out) select the I/O space.]
[Diagram 2: common memory & I/O bus. CPU, memory, and peripheral interfaces share one bus; bus lines distinguish I/O transfers from memory transfers. Examples: VME bus, Multibus-II, NuBus.]
At ~40 MB/sec (optimistically), a 10-MIPS processor can completely saturate the bus!
Memory-Mapped I/O
Single memory & I/O bus; no separate I/O instructions
[Diagram 1: CPU, memory, and peripheral interfaces on one bus; a region of the physical address space, alongside ROM and RAM, is assigned to I/O device registers]
[Diagram 2: CPU with L2 cache on a memory bus; a memory-bus adaptor bridges to the I/O bus]
Programmed I/O (Polling)
[Flowchart: CPU asks the I/O controller "is the data ready?"; if no, loop (busy wait); if yes, read data from the device, store data to memory, check if done; if not done, repeat]
• A busy-wait loop is not an efficient way to use the CPU, unless the device is very fast!
• But checks for I/O completion can be dispersed among computationally intensive code
Interrupt-Driven Data Transfer
[Diagram: while the user program runs (add, sub, and, or, nop), (1) an I/O interrupt arrives, (2) the PC is saved, (3) control transfers to the interrupt service address, (4) the interrupt service routine (read, store, ..., rti) performs the transfer and returns]
• User program progress is only halted during the actual transfer
• Device transfer rate = 10 MBytes/sec => 0.1 x 10^-6 sec/byte = 0.1 µsec/byte
  => 1000 bytes take 100 µsec; 1000 transfers x 100 µsec = 100 ms = 0.1 CPU seconds
• 1000 transfers at 1 ms each: 1000 interrupts @ 2 µsec per interrupt + 1000 interrupt services @ 98 µsec each = 0.1 CPU seconds
• Still far from the device transfer rate! Half the time goes to interrupt overhead
Direct Memory Access
[Diagram: CPU, memory, DMA controller (DMAC), I/O controller, and device on the bus]
• CPU sends a starting address, direction, and length count to the DMAC, then issues "start"
• DMAC provides handshake signals for the peripheral controller, and memory addresses and handshake signals for memory
• Time to do 1000 transfers at 1 msec each:
  1 DMA set-up sequence @ 50 µsec
  1 interrupt @ 2 µsec
  1 interrupt service sequence @ 48 µsec
  => 0.0001 seconds of CPU time
[Diagram: DMAC and peripherals mapped into the memory address space alongside ROM and RAM (memory-mapped I/O)]
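The CPU-time arithmetic on the interrupt-driven and DMA slides can be checked directly; the numbers come from the slides, and the variable names are ours.

```python
US = 1e-6       # one microsecond, in seconds
transfers = 1000

# Interrupt-driven: every transfer pays an interrupt plus its service routine.
interrupt_cpu_s = transfers * (2 + 98) * US

# DMA: one setup sequence plus one completion interrupt and its service,
# regardless of how many transfers the DMAC performs.
dma_cpu_s = (50 + 2 + 48) * US
```

The three orders of magnitude between the two results is the whole argument for DMA.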
Summary
• Disk industry growing rapidly; improves:
  – bandwidth 40%/yr
  – areal density 60%/yr; $/MB even faster?
• Disk latency = queue + controller + seek + rotate + transfer
• Advertised average seek time is much greater than the average seek time in practice
• Response time vs. bandwidth tradeoffs
• Value of faster response time:
  – 0.7 sec off the response time saves 4.9 sec (34%) and 2.0 sec (70%) of total time per transaction => greater productivity
  – everyone gets more done with faster response, but a novice with fast response = an expert with slow
• Processor interface: today peripheral processors, DMA, I/O bus, interrupts
Storage System Issues
• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
Introduction to Queueing Theory
[Diagram: black box with arrivals entering and departures leaving]
• More interested in long-term, steady-state behavior than in startup => arrivals = departures
• Little's Law: mean number of tasks in system = arrival rate x mean response time
  – Observed by many; Little was the first to prove it
• Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks
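Little's Law is one multiplication; the rates below are made-up for illustration.

```python
arrival_rate = 10.0        # tasks/second entering the black box (hypothetical)
mean_response_s = 0.05     # mean time each task spends in the system
tasks_in_system = arrival_rate * mean_response_s   # L = lambda x W
```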
A Little Queuing Theory: Notation
[Diagram: Proc -> queue -> IOC -> device; the queue plus the server together are the "system"]
• Queuing models assume a state of equilibrium: input rate = output rate
• Notation:
  λ     average number of arriving customers/second
  Tser  average time to service a customer (traditionally µ = 1/Tser)
  u     server utilization (0..1): u = λ x Tser (or u = λ/µ)
  Tq    average time/customer in queue
  Tsys  average time/customer in system: Tsys = Tq + Tser
  Lq    average length of queue: Lq = λ x Tq
  Lsys  average length of system: Lsys = λ x Tsys
• Little's Law: Length_system = rate x Time_system (mean number of customers = arrival rate x mean time in the system)
A Little Queuing Theory
• Service-time completions vs. waiting time for a busy server: a randomly arriving event joins a queue of arbitrary length when the server is busy, and is otherwise serviced immediately
  – Unlimited-length queues are the key simplification
• A single-server queue: the combination of a servicing facility that accommodates one customer at a time (the server) + a waiting area (the queue), together called a system
• The server spends a variable amount of time with customers; how do you characterize variability?
  – Distribution of a random variable: histogram? curve?
A Little Queuing Theory
• The server spends a variable amount of time with customers
  – Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
  – variance = (f1 x T1² + f2 x T2² + ... + fn x Tn²)/F – m1²
    » must keep track of the unit of measure (100 ms² vs. 0.1 s²)
  – Squared coefficient of variance: C = variance/m1²
    » a unitless measure
• Exponential distribution (C = 1): most values short relative to the average, a few long; 90% < 2.3 x average, 63% < average
• Hypoexponential distribution (C < 1): most values close to the average; C = 0.5 => 90% < 2.0 x average, only 57% < average
• Hyperexponential distribution (C > 1): values further from the average; C = 2.0 => 90% < 2.8 x average, 69% < average
A Little Queuing Theory: Variable Service Time
• The server spends a variable amount of time with customers
  – Weighted mean m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
  – Squared coefficient of variance C
• Disk response times have C ≈ 1.5 (the majority of seeks are shorter than the average)
• Yet we usually pick C = 1.0 for simplicity
• Another useful value is the average time a new arrival must wait for the server to finish the task in progress: m1(z)
  – Not just 1/2 x m1, because that doesn't capture the variance
  – Can derive m1(z) = 1/2 x m1 x (1 + C)
  – No variance => C = 0 => m1(z) = 1/2 x m1
A Little Queuing Theory: Average Wait Time
• Calculating the average wait time in queue, Tq:
  – If something is at the server, it takes m1(z) on average to complete
  – Chance the server is busy = u; average delay is u x m1(z)
  – All customers already in line must also complete; each takes Tser on average
  Tq = u x m1(z) + Lq x Tser = 1/2 x u x Tser x (1 + C) + Lq x Tser
  Tq = 1/2 x u x Tser x (1 + C) + λ x Tq x Tser
  Tq = 1/2 x u x Tser x (1 + C) + u x Tq
  Tq x (1 – u) = Tser x u x (1 + C) / 2
  Tq = Tser x u x (1 + C) / (2 x (1 – u))
• Notation:
  λ     average number of arriving customers/second
  Tser  average time to service a customer
  u     server utilization (0..1): u = λ x Tser
  Tq    average time/customer in queue
  Lq    average length of queue: Lq = λ x Tq
A Little Queuing Theory: M/G/1 and M/M/1
• Assumptions so far:
  – System in equilibrium
  – Times between successive arrivals are random
  – Server can start on the next customer immediately after the prior one finishes
  – No limit to the queue; served First-In-First-Out
  – All customers in line must complete; each takes Tser on average
• This describes "memoryless" or Markovian request arrival (M, for C = 1 exponentially random), a General service distribution (no restrictions), and 1 server: the M/G/1 queue
• When service times also have C = 1, it is the M/M/1 queue:
  Tq = Tser x u x (1 + C) / (2 x (1 – u)) = Tser x u / (1 – u)
• Notation:
  Tser  average time to service a customer
  u     server utilization (0..1): u = λ x Tser
  Tq    average time/customer in queue
A Little Queuing Theory: An Example
• A processor sends 10 x 8-KB disk I/Os per second; requests & service are exponentially distributed; average disk service time = 20 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Solution:
  λ = 10 arriving customers/second
  Tser = 20 ms (0.02 s)
  u = λ x Tser = 10/s x 0.02 s = 0.2
  Tq = Tser x u / (1 – u) = 20 x 0.2/(1 – 0.2) = 20 x 0.25 = 5 ms (0.005 s)
  Tsys = Tq + Tser = 25 ms
  Lq = λ x Tq = 10/s x 0.005 s = 0.05 requests in queue
  Lsys = λ x Tsys = 10/s x 0.025 s = 0.25 tasks in system
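The worked example reproduces directly in code (variable names are ours; everything in seconds):

```python
lam, t_ser = 10.0, 0.020         # 10 I/Os per second, 20 ms service time
u = lam * t_ser                  # utilization: 0.2
t_q = t_ser * u / (1 - u)        # M/M/1 queue wait: 0.005 s
t_sys = t_q + t_ser              # response time: 0.025 s
l_q = lam * t_q                  # 0.05 requests waiting
l_sys = lam * t_sys              # 0.25 requests in the system
```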
A Little Queuing Theory: Another Example
• A processor sends 20 x 8-KB disk I/Os per second; requests & service are exponentially distributed; average disk service time = 12 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Work it out (the filled-in version is on the next slide):
  λ = 20 arriving customers/second
  Tser = 12 ms
  u = λ x Tser = ___
  Tq = Tser x u / (1 – u) = ___ ms
  Tsys = Tq + Tser = ___ ms
  Lq = λ x Tq = ___ requests in queue
  Lsys = λ x Tsys = ___
A Little Queuing Theory: Another Example
• A processor sends 20 x 8-KB disk I/Os per second; requests & service are exponentially distributed; average disk service time = 12 ms
• Solution:
  λ = 20 arriving customers/second
  Tser = 12 ms
  u = λ x Tser = 20/s x 0.012 s = 0.24
  Tq = Tser x u / (1 – u) = 12 x 0.24/(1 – 0.24) = 12 x 0.32 = 3.8 ms
  Tsys = Tq + Tser = 15.8 ms
  Lq = λ x Tq = 20/s x 0.0038 s = 0.076 requests in queue
  Lsys = λ x Tsys = 20/s x 0.0158 s = 0.32 tasks in system
A Little Queuing Theory: Yet Another Example
• Suppose a processor sends 10 x 8-KB disk I/Os per second; squared coefficient of variance C = 1.5; average disk service time = 20 ms
• On average, how utilized is the disk?
  – What is the number of requests in the queue?
  – What is the average time spent in the queue?
  – What is the average response time for a disk request?
• Solution:
  λ = 10 arriving customers/second
  Tser = 20 ms
  u = λ x Tser = 10/s x 0.02 s = 0.2
  Tq = Tser x u x (1 + C) / (2 x (1 – u)) = 20 x 0.2 x 2.5/(2 x 0.8) = 20 x 0.3125 = 6.25 ms
  Tsys = Tq + Tser = 26.25 ms
  Lq = λ x Tq = 10/s x 0.00625 s = 0.0625 requests in queue
  Lsys = λ x Tsys = 10/s x 0.02625 s = 0.26 tasks in system
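The same check works for the C = 1.5 case, using the general M/G/1 wait formula (names are ours; seconds throughout):

```python
lam, t_ser, c = 10.0, 0.020, 1.5
u = lam * t_ser                                  # utilization: 0.2
t_q = t_ser * u * (1 + c) / (2 * (1 - u))        # 0.00625 s = 6.25 ms
t_sys = t_q + t_ser                              # 0.02625 s
```

Note the higher-variance service (C = 1.5 vs. 1.0) raises the wait from 5 ms to 6.25 ms at the same utilization.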
Storage System Issues
• Historical Context of Storage I/O
• Secondary and Tertiary Storage Devices
• Storage I/O Performance Measures
• Processor Interface Issues
• A Little Queuing Theory
• Redundant Arrays of Inexpensive Disks (RAID)
• I/O Buses
Network Attached Storage
• Decreasing disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » …
  – high-bandwidth disk systems based on arrays of disks
• Increasing network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s
  – networks capable of sustaining high-bandwidth transfers
• High-performance storage service on a high-speed network
• The network provides well-defined physical and logical interfaces: separate the CPU and the storage system!
• OS structures supporting remote file access
Manufacturing Advantages of Disk Arrays
• Conventional: 4 disk designs per product family (14", 10", 5.25", 3.5"), spanning low end to high end
• Disk array: 1 disk design (3.5")
Replace a Small Number of Large Disks with a Large Number of Small Disks! (1988 disks)

                 IBM 3390 (K)   IBM 3.5" 0061   x70 (array of 70)
Data Capacity    20 GBytes      320 MBytes      23 GBytes
Volume           97 cu. ft.     0.1 cu. ft.     11 cu. ft.
Power            3 KW           11 W            1 KW
Data Rate        15 MB/s        1.5 MB/s        120 MB/s
I/O Rate         600 I/Os/s     55 I/Os/s       3900 I/Os/s
MTTF             250 KHrs       50 KHrs         ??? Hrs
Cost             $250K          $2K             $150K

Disk arrays have potential for large data and I/O rates, high MB per cu. ft., and high MB per KW. But reliability?
Array Reliability
• Reliability of N disks = reliability of 1 disk ÷ N
  – 50,000 hours ÷ 70 disks = ~700 hours
  – Disk system MTTF drops from 6 years to 1 month!
• Arrays (without redundancy) are too unreliable to be useful!
• Hot spares support reconstruction in parallel with access: very high media availability can be achieved
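The slide's reliability arithmetic, as a sketch (the function name is ours):

```python
def array_mttf_hours(disk_mttf_hours, n_disks):
    # Assuming independent failures and no redundancy, any single
    # disk failure fails the array, so MTTF divides by the disk count.
    return disk_mttf_hours / n_disks

mttf = array_mttf_hours(50_000, 70)   # ~714 hours, roughly one month
```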
Redundant Arrays of Disks
• Files are "striped" across multiple spindles
• Redundancy yields high data availability
  – Disks will fail
  – Contents are reconstructed from data redundantly stored in the array
    » capacity penalty to store it
    » bandwidth penalty to update it
• Techniques:
  – Mirroring/shadowing (high capacity cost)
  – Horizontal Hamming codes (overkill)
  – Parity & Reed-Solomon codes
  – Failure prediction (no capacity overhead!)
    » VaxSimPlus; the technique is controversial
Redundant Arrays of Disks
RAID 1: Disk Mirroring/Shadowing
[Diagram: each disk paired with its shadow in a recovery group]
• Each disk is fully duplicated onto its "shadow": very high availability can be achieved
• Bandwidth sacrifice on write: one logical write = two physical writes
• Reads may be optimized
• Most expensive solution: 100% capacity overhead
• Targeted for high-I/O-rate, high-availability environments
71204521 Digital Computer Architecture
Redundant Arrays of Disks RAID 3: Parity Disk
[Figure: a logical record 10010011 is striped across four physical records (10010011, 11001101, 10010011, 00110000) on separate disks, with a parity disk P computed across them]
• Parity computed across recovery group to protect against hard disk failures
  – 33% capacity cost for parity in this configuration
  – wider arrays reduce capacity costs, but decrease expected availability and increase reconstruction time
• Arms logically synchronized, spindles rotationally synchronized: logically a single high capacity, high transfer rate disk
Targeted for high bandwidth applications: Scientific, Image Processing
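The parity scheme is byte-wise XOR across the data disks; XOR of the survivors plus the parity reconstructs a failed disk. A sketch, using the bit patterns from the figure as example contents:

```python
from functools import reduce

# Parity across a recovery group: byte-wise XOR of the data blocks.
def parity(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One byte per disk (the figure's 10010011, 11001101, 10010011, 00110000).
data = [b"\x93", b"\xcd", b"\x93", b"\x30"]
p = parity(data)

# If disk 1 fails, XOR the remaining disks with the parity to rebuild it.
recovered = parity([data[0], data[2], data[3], p])
assert recovered == data[1]
```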
Redundant Arrays of Disks RAID 5+: High I/O Rate Parity
• A logical write becomes four physical I/Os
• Independent writes possible because of interleaved parity
• Reed-Solomon codes ("Q") for protection during reconstruction
D0 D1 D2 D3 P
D4 D5 D6 P D7
D8 D9 P D10 D11
D12 P D13 D14 D15
P D16 D17 D18 D19
D20 D21 D22 D23 P
[Figure: parity blocks P rotate across the disk columns; logical disk addresses increase down the stripes; each row is a stripe, and one block per disk is a stripe unit]
Targeted for mixed applications
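The rotation in the D0..D23 layout above can be captured in one line. This formula is inferred from the slide's table, not taken from any particular RAID 5 implementation:

```python
# Parity rotation matching the layout on the slide: with n disks, the
# parity block for stripe s lives on disk (n - 1) - (s % n), so parity
# update traffic is spread evenly across all spindles.
def parity_disk(stripe, n_disks):
    return (n_disks - 1) - (stripe % n_disks)

layout = [parity_disk(s, 5) for s in range(6)]
print(layout)   # [4, 3, 2, 1, 0, 4] -- matches the six stripes shown
```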
Problems of Disk Arrays: Small Writes
[Figure: writing new data D0' over D0 in the stripe (D0 D1 D2 D3 P): (1. Read) old data D0, (2. Read) old parity P; XOR old data, new data, and old parity to form new parity P'; then (3. Write) D0' and (4. Write) P']
RAID-5: Small Write Algorithm
1 Logical Write = 2 Physical Reads + 2 Physical Writes
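The key algebraic fact is that the new parity can be computed from the old data, the new data, and the old parity alone, so the rest of the stripe never needs to be read. A sketch with arbitrary example bytes:

```python
# RAID 5 small-write parity update: 2 reads + 2 writes, not a full
# stripe read. Works because XOR is its own inverse.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = b"\x93"   # step 1: read old data
old_parity = b"\xfd"   # step 2: read old parity
new_data   = b"\x5a"

new_parity = xor(xor(old_data, new_data), old_parity)
# step 3: write new_data; step 4: write new_parity
assert new_parity == b"\x34"
```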
[Figure: I/Os per second vs. read/write mix, from all writes to all reads, comparing mirrored RA90's against a 4+2 array group; the normal operating range of most existing systems is marked]
RA90: avg. seek 18.5 ms, rotation 16.7 ms, transfer rate 2.8 MB/s, capacity 1200 MB
IBM small disks: avg. seek 12.5 ms, rotation 14 ms, transfer rate 2.4 MB/s, capacity 300 MB
Subsystem Organization
[Figure: a host adapter connects the host to an array controller, which manages the interface to the host, DMA control, buffering, and parity logic; behind it sit several single-board disk controllers handling physical device control, often piggy-backed in small form-factor devices]
• striping software off-loaded from host to array controller
  – no application modifications
  – no reduction of host performance
System Availability: Orthogonal RAIDs
[Figure: an array controller feeds several string controllers, each driving a string of disks; a recovery group spans one disk per string, orthogonal to the shared support hardware]
• Data Recovery Group: unit of data redundancy
• Redundant Support Components: fans, power supplies, controller, cables
• End to End Data Integrity: internal parity protected data paths
System-Level Availability
[Figure: fully dual-redundant configuration, two hosts connected through duplicated I/O controllers and array controllers to shared recovery groups]
Goal: no single points of failure
With duplicated paths, higher performance can be obtained when there are no failures.
Summary: A Little Queuing Theory
• Queuing models assume state of equilibrium: input rate = output rate
• Notation:
  r     average number of arriving customers/second
  Tser  average time to service a customer (traditionally µ = 1/Tser)
  u     server utilization (0..1): u = r x Tser
  Tq    average time/customer in queue
  Tsys  average time/customer in system: Tsys = Tq + Tser
  Lq    average length of queue: Lq = r x Tq
  Lsys  average length of system: Lsys = r x Tsys
• Little's Law: Length(system) = rate x Time(system) (mean number of customers = arrival rate x mean time in system)
[Figure: requests from the processor pass through a queue to the server (I/O controller and device); queue plus server form the system]
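The bookkeeping in this notation, with hypothetical example numbers (the queueing delay Tq is taken as a given measurement here, not derived from a queueing model):

```python
# The slide's notation with made-up numbers: r arrivals/s, 20 ms
# service time, and a measured 30 ms average queueing delay.
r    = 40.0        # customers per second
Tser = 0.020       # average service time (so traditionally u = 1/Tser = 50/s)
Tq   = 0.030       # average time waiting in queue (assumed measurement)

u    = r * Tser    # server utilization: 0.8
Tsys = Tq + Tser   # time in system: 0.050 s
Lq   = r * Tq      # Little's Law on the queue: 1.2 customers
Lsys = r * Tsys    # Little's Law on the whole system: 2.0 customers
print(u, Tsys, Lq, Lsys)
```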
Summary: Redundant Arrays of Disks (RAID) Techniques
• Disk Mirroring, Shadowing (RAID 1)
Each disk is fully duplicated onto its "shadow": logical write = two physical writes
100% capacity overhead
• Parity Data Bandwidth Array (RAID 3)
Parity computed horizontally
Logically a single high data bw disk
• High I/O Rate Parity Array (RAID 5)
Interleaved parity blocks
Independent reads and writes
Logical write = 2 reads + 2 writes
Parity + Reed-Solomon codes