Transcript of Lecture 25: Networks & Interconnect / Networks on a Chip

JR.S00 1

Lecture 25: Networks & Interconnect Networks on a Chip

Prepared by: Professor David A. Patterson, Professor Bill Dally (Stanford)

Edited and presented by: Prof. Jan Rabaey, Computer Science 252, Spring 2000

JR.S00 2

Project Presentations Next Week

• The Good News: No presentations on Tu (faculty retreat) – time for contemplation

• The Bad News: A monster session on Th: 2-6pm (?)

• Each group gets 15 minutes (12' + 3') – Use overheads or mail in PowerPoint a day in advance

• Conference-style presentation
• Be concise


JR.S00 3

Review: Interconnections

• Communication between computing elements
• Protocols to cover normal and abnormal events
• Performance issues: HW & SW overhead, interconnect latency, bisection BW
• Media sets cost, distance
• HW and SW interface to computer affects overhead, latency, bandwidth

JR.S00 4

Review: Performance Metrics

[Figure: message timeline from Sender to Receiver — Sender Overhead (processor busy), Time of Flight, Transmission Time (size ÷ bandwidth), Receiver Overhead (processor busy); Transport Latency covers flight plus transmission, Total Latency covers everything]

Total Latency = Sender Overhead + Time of Flight + Message Size ÷ BW + Receiver Overhead

Includes header/trailer in BW calculation?
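As a quick check of the latency formula above, here is a minimal Python sketch (the numeric values in the example are made up for illustration, not measurements from the lecture):

```python
def total_latency(sender_overhead, time_of_flight, message_bytes,
                  bandwidth_bytes_per_sec, receiver_overhead):
    """Total Latency = Sender Overhead + Time of Flight + Size/BW + Receiver Overhead."""
    transmission_time = message_bytes / bandwidth_bytes_per_sec
    return sender_overhead + time_of_flight + transmission_time + receiver_overhead

# Hypothetical example: 1 KB message over a 10 MB/s link,
# 50 us of software overhead at each end, 5 us time of flight.
print(total_latency(50e-6, 5e-6, 1024, 10e6, 50e-6))  # ~2.07e-4 s
```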


JR.S00 5

Interconnect Issues

• Performance Measures
• Interface Issues
• Network Media
• Connecting Multiple Computers

JR.S00 6

Connecting Multiple Computers

• Shared Media vs. Switched: switched lets pairs communicate at the same time over "point-to-point" connections

• Aggregate BW in a switched network is many times that of a shared network
  – point-to-point faster since no arbitration, simpler interface

• Arbitration in a shared network?
  – Central arbiter for LAN?
  – Listen to check if the medium is being used ("Carrier Sensing")
  – Listen to check if a collision occurred ("Collision Detection")
  – Random resend to avoid repeated collisions (see the sketch below); not fair arbitration; OK if low utilization

(Switches aka data switching interchanges, multistage interconnection networks, interface message processors)
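The carrier-sensing / collision-detection / random-resend arbitration described in the bullets above can be sketched in a few lines of Python. This is a toy model, not the lecture's code; channel_busy and collided are hypothetical caller-supplied callbacks, and the 51.2 µs slot time is the classic 10 Mbit/s Ethernet value.

```python
import random
import time

def csma_cd_send(channel_busy, collided, max_attempts=16, slot_time=51.2e-6):
    """Toy shared-media sender: listen before sending, detect collisions,
    and back off a random number of slot times before retrying."""
    for attempt in range(max_attempts):
        while channel_busy():            # "Carrier Sensing": wait for an idle medium
            time.sleep(slot_time)
        if not collided():               # "Collision Detection" during transmission
            return attempt               # success; number of retries it took
        # Random resend: binary exponential backoff over [0, 2^k - 1] slot times
        k = min(attempt + 1, 10)
        time.sleep(random.randrange(2 ** k) * slot_time)
    raise RuntimeError("too many collisions, giving up")

# Example: an idle channel where each attempt collides with probability 0.3.
print(csma_cd_send(lambda: False, lambda: random.random() < 0.3))
```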


JR.S00 7

Example Interconnects

Interconnect         MPP      LAN                        WAN
Example              CM-5     Ethernet                   ATM
Maximum length       25 m     500 m                      copper: 100 m
                                                         optical: 2 km–25 km
Number data lines    4        1                          1
Clock Rate           40 MHz   10 MHz                     < 155.5 MHz
Shared vs. Switch    Switch   Shared                     Switch
Maximum number
  of nodes           2048     254                        > 10,000
Media Material       Copper   Twisted-pair copper wire   Twisted-pair copper wire
                              or coaxial cable           or optical fiber

JR.S00 8

Switch Topology

• Structure of the interconnect
• Determines:
  – Degree: number of links from a node
  – Diameter: max number of links crossed between nodes (max distance)
  – Average distance: number of hops to a random destination
  – Bisection: minimum number of links that separate the network into two halves (worst case)
  (a small sketch for measuring these follows this slide)
• Warning: these three-dimensional drawings must be mapped onto chips and boards, which are essentially two-dimensional media
  – What is elegant when sketched on the blackboard may look awkward when constructed from chips, cables, boards, and boxes (largely 2D)
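To make the degree / diameter / average-distance definitions concrete, here is a small Python sketch (not from the lecture) that measures them for any topology given as an adjacency list, using breadth-first search; bisection width is harder to compute in general and is left out.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop count from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def topology_metrics(adj):
    nodes = list(adj)
    degree = max(len(adj[u]) for u in nodes)
    dists = [d for u in nodes for d in bfs_dist(adj, u).values()]
    diameter = max(dists)
    avg_dist = sum(dists) / (len(nodes) * (len(nodes) - 1))  # ordered pairs, self excluded
    return degree, diameter, avg_dist

# Example: an 8-node ring -> degree 2, diameter N/2 = 4, average distance ~N/4
N = 8
ring = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
print(topology_metrics(ring))  # (2, 4, 2.2857...)
```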


JR.S00 9

Important Topologies (N = 1024)

                                                                    N = 1024
Type          Degree  Diameter         Ave Dist       Bisection    Diam   Ave D
1D mesh       ≤ 2     N-1              N/3            1
2D mesh       ≤ 4     2(N^1/2 - 1)     2N^1/2 / 3     N^1/2        63     21
3D mesh       ≤ 6     3(N^1/3 - 1)     3N^1/3 / 3     N^2/3        ~30    ~10
nD mesh       ≤ 2n    n(N^1/n - 1)     nN^1/n / 3     N^(n-1)/n    (N = k^n)
Ring          2       N/2              N/4            2
2D torus      4       N^1/2            N^1/2 / 2      2N^1/2       32     16
k-ary n-cube  2n      nN^1/n / 2       nN^1/n / 4     2k^(n-1)     15     8 (3D)
  (N = k^n)           = nk/2           = nk/4
Hypercube     n       n = log N        n/2            N/2          10     5

[Figures: 2-cube and 3-cube hypercubes; cube-connected cycles]

JR.S00 10

Topologies (cont.)

                                                            N = 1024
Type          Degree  Diameter    Ave Dist            Bisection   Diam  Ave D
2D Tree       3       2 log2 N    ~2 log2 N           1           20    ~20
4D Tree       5       2 log4 N    2 log4 N - 2/3      1           10    9.33
kD Tree       k+1     logk N
2D fat tree   4       log2 N                          N
2D butterfly  4       log2 N                          N/2         20    20

[Figures: fat tree; CM-5 thinned fat tree]


JR.S00 11

Butterfly

[Figure: an N-node butterfly built recursively from two N/2 butterflies]

• All paths equal length
• Unique path from any input to any output
• Want to avoid conflicts
• Don't want the algorithm to have to know paths

Multistage: nodes at ends, switches in middle

JR.S00 12

Example MPP Networks

Name       Number    Topology  Bits  Clock    Link  Bisect.  Year
nCube/ten  1-1024    10-cube   1     10 MHz   1.2   640      1987
iPSC/2     16-128    7-cube    1     16 MHz   2     345      1988
MP-1216    32-512    2D grid   1     25 MHz   3     1,300    1989
Delta      540       2D grid   16    40 MHz   40    640      1991
CM-5       32-2048   fat tree  4     40 MHz   20    10,240   1991
CS-2       32-1024   fat tree  8     70 MHz   50    50,000   1992
Paragon    4-1024    2D grid   16    100 MHz  200   6,400    1992
T3D        16-1024   3D Torus  16    150 MHz  300   19,200   1993

No standard MPP topology!  (Link and Bisect. bandwidths in MBytes/second)


JR.S00 13

Connection-Based vs. Connectionless

• Telephone: operator sets up connection between the caller and the receiver

– Once the connection is established, conversation can continue for hours

• Share transmission lines over long distances by using switches to multiplex several conversations on the same lines

– "Time-division multiplexing": divide the bandwidth of the transmission line into a fixed number of slots, with each slot assigned to a conversation (see the sketch below)

• Problem: lines busy based on number of conversations, not amount of information sent

• Advantage: reserved bandwidth
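A tiny Python sketch of the time-division multiplexing idea above (the slot layout and conversation names are illustrative, not from the slide): each conversation owns fixed slots in every frame whether or not it has data to send, which is why the line is "busy" by conversation count rather than by information sent.

```python
def tdm_schedule(conversations, slots_per_frame, num_frames):
    """Assign each conversation a fixed, repeating set of slots in every frame."""
    frames = []
    for _ in range(num_frames):
        frame = [conversations[s % len(conversations)] for s in range(slots_per_frame)]
        frames.append(frame)
    return frames

# Three conversations sharing a 6-slot frame: each holds 2 reserved slots per frame.
print(tdm_schedule(["A", "B", "C"], slots_per_frame=6, num_frames=2))
```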

JR.S00 14

Connection-Based vs. Connectionless

• Connectionless : every package of information must have an address => packets

– Each package is routed to its destination by looking at its address

– Analogy: the postal system (sending a letter)
– Also called "statistical multiplexing"


JR.S00 15

Routing Messages

• Shared Media
  – Broadcast to everyone

• Switched Media needs real routing. Options:
  – Source-based routing: message specifies the path to the destination (changes of direction)
  – Virtual Circuit: circuit established from source to destination, message picks the circuit to follow
  – Destination-based routing: message specifies the destination, switch must pick the path
    » deterministic: always follow the same path
    » adaptive: pick different paths to avoid congestion or failures
    » randomized routing: pick between several good paths to balance network load

JR.S00 16

Deterministic Routing Examples

• mesh: dimension-order routing
  – (x1, y1) -> (x2, y2)
  – first Δx = x2 - x1,
  – then Δy = y2 - y1
• hypercube: edge-cube routing
  – X = x0 x1 x2 ... xn -> Y = y0 y1 y2 ... yn
  – R = X xor Y
  – Traverse the dimensions with differing address bits in order (see the sketch below)
• tree: route via the common ancestor
• Deadlock free?

[Figure: 3-cube with nodes labeled 000 through 111]
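Both deterministic schemes above fit in a few lines of Python; this is an illustrative sketch (the helper names are made up), not code from the lecture.

```python
def dimension_order_route(src, dst):
    """2D-mesh dimension-order routing: correct the x coordinate first, then y."""
    (x1, y1), (x2, y2) = src, dst
    hops = []
    step = 1 if x2 >= x1 else -1
    for x in range(x1, x2, step):
        hops.append((x + step, y1))
    step = 1 if y2 >= y1 else -1
    for y in range(y1, y2, step):
        hops.append((x2, y + step))
    return hops

def ecube_route(src, dst, n):
    """Hypercube routing: R = src XOR dst; flip the differing address bits in order."""
    hops, node, diff = [], src, src ^ dst
    for dim in range(n):
        if diff & (1 << dim):
            node ^= (1 << dim)
            hops.append(node)
    return hops

print(dimension_order_route((0, 0), (2, 3)))                      # x moves, then y moves
print([format(h, "03b") for h in ecube_route(0b000, 0b110, 3)])   # ['010', '110']
```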


JR.S00 17

Store and Forward vs. Cut-Through

• Store-and-forward policy: each switch waits for the full packet to arrive before sending it to the next switch (good for WAN)

• Cut-through routing and wormhole routing: the switch examines the header, decides where to send the message, and then starts forwarding it immediately
  – In wormhole routing, when the head of the message is blocked, the message stays strung out over the network, potentially blocking other messages (needs to buffer only the piece of the packet that is sent between switches). CM-5 uses it, with each switch buffer being 4 bits per port.
  – Cut-through routing lets the tail continue when the head is blocked, accordioning the whole message into a single switch. (Requires a buffer large enough to hold the largest packet.)

JR.S00 18

Store and Forward vs. Cut-Through

• Advantage
  – Latency reduces from a function of:
      (number of intermediate switches) × (size of the packet)
    to
      (time for the 1st part of the packet to negotiate the switches) + (packet size ÷ interconnect BW)
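Here is a back-of-the-envelope Python sketch of the two latency expressions above; the hop count, packet size, bandwidth, and per-switch routing delay are made-up example values.

```python
def store_and_forward_latency(hops, packet_bytes, bw_bytes_per_sec, routing_delay):
    # Every switch receives the whole packet before forwarding it.
    return hops * (packet_bytes / bw_bytes_per_sec + routing_delay)

def cut_through_latency(hops, packet_bytes, bw_bytes_per_sec, routing_delay):
    # Only the header negotiates each switch; the rest of the packet streams behind it.
    return hops * routing_delay + packet_bytes / bw_bytes_per_sec

# 10 hops, 1 KB packet, 100 MB/s links, 100 ns routing decision per switch.
print(store_and_forward_latency(10, 1024, 100e6, 100e-9))   # ~1.03e-4 s
print(cut_through_latency(10, 1024, 100e6, 100e-9))          # ~1.12e-5 s
```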


JR.S00 19

Congestion Control

• Packet-switched networks do not reserve bandwidth; this leads to contention (connection-based networks limit input)
• Solution: prevent packets from entering until contention is reduced (e.g., freeway on-ramp metering lights)
• Options:
  – Packet discarding: if a packet arrives at a switch and there is no room in the buffer, the packet is discarded (e.g., UDP)
  – Flow control: between pairs of receivers and senders; use feedback to tell the sender when it is allowed to send the next packet
    » Back-pressure: separate wires to tell the sender to stop
    » Window: give the original sender the right to send N packets before getting permission to send more; overlaps the latency of the interconnection with the overhead to send & receive a packet (e.g., TCP); adjustable window (sketched below)
  – Choke packets: aka "rate-based"; each packet received by a busy switch in a warning state is sent back to the source via a choke packet. The source reduces traffic to that destination by a fixed % (e.g., ATM)
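As an illustration of the window option above, here is a toy Python sender that keeps at most N unacknowledged packets outstanding. It is a minimal sketch under simplified assumptions; send and recv_ack are hypothetical caller-supplied callbacks standing in for the real interconnect.

```python
from collections import deque

def window_send(packets, window, send, recv_ack):
    """Send packets while holding at most `window` of them unacknowledged."""
    in_flight = deque()
    for p in packets:
        while len(in_flight) >= window:   # window full: wait for permission
            in_flight.popleft()
            recv_ack()                    # blocking wait for one acknowledgement
        send(p)
        in_flight.append(p)
    while in_flight:                      # drain the remaining acknowledgements
        in_flight.popleft()
        recv_ack()

# Example with trivial callbacks: the "link" acknowledges immediately.
window_send(range(8), window=3,
            send=lambda p: print("send", p), recv_ack=lambda: None)
```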

JR.S00 20

Practical Issues for Interconnection Networks

• Standardization advantages:
  – low cost (components used repeatedly)
  – stability (many suppliers to choose from)
• Standardization disadvantages:
  – Time for committees to agree
  – When to standardize?
    » Before anything is built? => Committee does the design?
    » Too early suppresses innovation
• Perfect interconnect vs. fault tolerant?
  – Will a SW crash on a single node prevent communication? (MPPs typically assume a perfect interconnect)
• Reliability (vs. availability) of the interconnect


JR.S00 21

Practical Issues

Interconnection    MPP    LAN       WAN
Example            CM-5   Ethernet  ATM
Standard           No     Yes       Yes
Fault Tolerance?   No     Yes       Yes
Hot Insert?        No     Yes       Yes

• Standards: required for WAN, LAN!
• Fault Tolerance: Can nodes fail and still deliver messages to other nodes? Required for WAN, LAN!
• Hot Insert: If the interconnection can survive a failure, can it also continue operation while a new node is added to the interconnection? Required for WAN, LAN!

JR.S00 22

Networks on a Chip


JR.S00 23

Message

• Interconnect-oriented architecture can reduce the demand for interconnect bandwidth and the effect of interconnect latency by an order of magnitude through:
  – locality – eliminate global structures
  – hierarchy – expose locality in the register architecture
  – networking – share the wires

JR.S00 24

Outline

• Technology constraints
• On-chip interconnection networks
  – regular wiring - well characterized
  – optimized circuits
  – efficient usage
• Placement and Migration
• Architecture for locality


JR.S00 25

On-chip wires are getting slower

Scaling one generation (linear factor s = 0.5):
  x2 = s · x1                                   0.5x   (feature / wire width)
  R2 = R1 / s^2                                 4x     (wire resistance per unit length)
  C2 = C1                                       1x     (wire capacitance per unit length)
  tw2 = R2 C2 y^2 = tw1 / s^2                   4x     (delay of a fixed-length wire y)
  tw2 / tg2 = tw1 / (tg1 s^3)                   8x     (wire delay relative to gate delay)
  v = 0.5 (tg R C)^(-1/2)   (m/s)                      (repeated-wire signal velocity)
  v2 = v1 s^(1/2)                               0.7x
  v tg = 0.5 (tg / (R C))^(1/2)   (m/gate)
  v2 tg2 = v1 tg1 s^(3/2)                       0.35x

[Figure: a wire of length y broken into repeated segments; each segment contributes tw = R C y^2 of wire delay plus one gate delay tg]

• Reach in mm reduced to 0.35x
• Reach in lambda reduced to 0.70x
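The scaling factors above can be checked numerically; this small Python sketch simply evaluates them for s = 0.5 (one generation, under the slide's assumptions).

```python
s = 0.5  # linear feature-size scaling per generation

factors = {
    "wire width        x2/x1":   s,            # 0.5x
    "resistance        R2/R1":   1 / s**2,     # 4x
    "capacitance       C2/C1":   1.0,          # 1x
    "wire delay        tw2/tw1": 1 / s**2,     # 4x (fixed-length wire)
    "wire/gate delay   ratio":   1 / s**3,     # 8x (gate delay also scales by s)
    "velocity          v2/v1":   s**0.5,       # ~0.7x
    "reach in mm       (v*tg)":  s**1.5,       # ~0.35x
    "reach in lambda":           s**1.5 / s,   # ~0.7x (divide by the feature size)
}
for name, value in factors.items():
    print(f"{name}: {value:.2f}x")
```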

JR.S00 26

Technology scaling makes communication the scarce resource

1999: 0.18 µm, 256 Mb DRAM, 16 64-bit FP processors, 500 MHz
  – 18 mm chip edge, 30,000 tracks, 1 clock to cross the chip, repeaters every 3 mm

2008: 0.07 µm, 4 Gb DRAM, 256 64-bit FP processors, 2.5 GHz
  – 25 mm chip edge, 120,000 tracks, 16 clocks to cross the chip, repeaters every 0.5 mm

[Figure: a processor tile "P" shown at the same physical scale on the 1999 and 2008 chips]


JR.S00 27

On-Chip Interconnection Networks

• Replace dedicated global wiring with a shared network

[Figure: left, a chip with dedicated global wiring; right, the same chip with local logic blocks attached to routers connected by shared network wires]

JR.S00 28

Most Wires are Idle Most of the Time

• Don't dedicate wires to signals; share wires across multiple signals
• Route packets, not wires
• Organize global wiring as an on-chip interconnection network
  – allows the wiring resource to be shared, keeping wires busy most of the time
  – allows a single global interconnect to be re-used on multiple designs
  – makes global wiring regular and highly optimized


JR.S00 29

Dedicated wires vs. Network

Dedicated Wiring                                   On-Chip Network
Spaghetti wiring                                   Ordered wiring
Variation makes it hard to model                   No variation, so easy to exactly
  (crosstalk, returns, length, R, C)                 model (XT, returns, R and C)
Drivers sized for "wire model" –                   Driver sized exactly for the wire
  … too large, … too small
Hard to use advanced signaling                     Easy to use advanced signaling
Low duty factor                                    High duty factor
No protocol overhead                               Small protocol overhead

JR.S00 30

On-Chip Interconnection Networks

[Figure: chip tiled with local logic blocks, each attached to a router; routers joined by network wires]

• Many chips, same global wiring
  – carefully optimized wiring
  – well characterized
  – optimized circuits
    » 0.1x power, 0.3x delay
• Efficient protocols
  – dynamic routing with pipelined control
  – statically scheduled
  – static
• Standard interface


JR.S00 31

Circuits for On-Chip Networks

[Figure: long, lossy RC lines driven by an H-bridge driver with 100 mV swing and regenerative repeaters]

Uniform, well-characterized lines enable custom circuits: ~0.1x power, ~3x velocity

[Circuit schematic: differential repeater stage with signals inP, inN, pre, sig1P, sig1N, ph1N, sig2P, sig2N, ph2N]

JR.S00 32

Interconnect: repeaters with switching

• Need repeaters every 1 mm or less

• Easy to insert switching
  – zero-cost reconfiguration

• Minimize decision time
  – static routing
    » fixed or regular pattern
  – source routing
    » on-demand
    » requires arbitration and fanout
    » can be pipelined

• Minimize buffering

[Figure: repeaters spaced every 1 mm, augmented with switches driven by an arbiter and a routing LUT]


JR.S00 33

Architecture for On-Chip Networks

• Topology - different constraints than off-chip networks
  – buffering is expensive, bandwidth is cheap
  – more wires between 'tiles' than needed for one channel
    » multiple networks, higher dimensions, express channels

• Flow-control
  – flit-reservation flow control, pipeline control ahead of data
    » latency comparable to statically scheduled networks
    » minimum buffering requirements
  – run static, statically scheduled, and dynamic networks on one set of wires

• Interface Design
  – standard interface from modules to network
    » pinout and protocol
    » independent of network implementation

Flit Interleave vs. Virtual Channels (flow control through the control layer; 6-flit messages)

[Figure: latency vs. offered load, comparing Wormhole (40, 80, 160 buffers), Virtual Channels (2vc×4, 4vc×4, 8vc×4, 8vc×8 buffers), and Interleave (40, 80, 160 buffers)]


JR.S00 35

SIMD and Distributed Register Files

[Figure: register-file organizations — Scalar/Central: N arithmetic units sharing a central register file; SIMD: C SIMD clusters of N/C arithmetic units each; DRF: distributed register files local to the arithmetic units]

JR.S00 36

Organizations

[Figure: two log-log plots of cost vs. number of arithmetic units (1 to 1000), with curves for Central, SIMD, SIMD/DRF, Hier/SIMD/DRF, and Stream/SIMD/DRF organizations; the design point of 48 ALUs is marked]

• 48 ALUs (32-bit), 500 MHz
• The stream organization improves on the central organization by Area: 195x, Delay: 20x, Power: 430x


JR.S00 37

Performance

[Figure: speedup (normalized) of the CENTRAL, SIMD, SIMD/DRF, HIER., and STREAM organizations on Convolve, DCT, Transform, Shader, FIR, FFT, and their mean — about a 16% performance drop (8% with latency constraints) — and performance/area for the same organizations, about a 180x improvement]

JR.S00 38

Research Vehicle, The Chip of 2010

[Figure: 25 mm/side chip, L = 0.07 µm, 100K tracks/side, tiled with integrated processor-memory (M-P-M) units (Stanford, MIT); annotations: Short-Wire Arch. (Georgia Tech), Partitioned Register Organization (Stanford), Coupled-Oscillator Clock Distribution (MIT), Optical Clock Distribution (MIT), Fast Drivers (Stanford), Low-Power Drivers (MIT), On-Chip Networks (Stanford), Data Migration (Stanford)]


JR.S00 39

Architecture Reduces Impact of Slow Wires
Circuits Make Wires More Efficient

• Locality
  – Eliminate implicit global communication
  – Expose and optimize the communication
  – Clustered architecture
  – Partitioned register file
  – Data migration

• Networking
  – Route packets, not wires
  – Improves duty factor of wires
  – Single, regular, highly-optimized design

[Figure: clustered architecture — several clusters, each with a local register file and a local instruction unit, connected through a switch (with synchronization) to a global register file and global instruction fetch & issue]

JR.S00 40

Protocols: HW/SW Interface

• Internetworking: allows computers on independent and incompatible networks to communicate reliably and efficiently
  – Enabling technologies: SW standards that allow reliable communication without reliable networks
  – Hierarchy of SW layers, giving each layer responsibility for a portion of the overall communications task; called protocol families or protocol suites

• Transmission Control Protocol/Internet Protocol (TCP/IP)
  – This protocol family is the basis of the Internet
  – IP makes a best effort to deliver; TCP guarantees delivery
  – TCP/IP is used even when communicating locally: NFS uses IP even though it is communicating across a homogeneous LAN


JR.S00 41

FTP From Stanford to Berkeley

• BARRNet is the WAN for the Bay Area
• T1 is a 1.5 Mbps leased line; T3 is 45 Mbps; FDDI is a 100 Mbps LAN
• IP sets up connection, TCP sends file

[Figure: FTP route from mojave.Stanford.edu (36.22.0.120) to mammoth.Berkeley.edu (128.32.149.78), via CIS-Gateway.Stanford.edu (36.1.0.22), SU-CM.BARRNet.net (131.119.5.3), fd-0.enss128.t3.ans.net (192.31.48.244), UCB1.BARRNet.net (192.31.161.4), inr-108-eecs.Berkeley.edu (128.32.120.108), and inr-111-cs2.Berkeley.edu (128.32.120.111, 128.32.149.13); links include Ethernet, FDDI, a T1 line, and a T3 line across the Internet; the endpoints are labeled Hennessy (Stanford) and Patterson (Berkeley)]

JR.S00 42

Protocol

• Key to protocol families is that communication occurs logically at the same level of the protocol, called peer-to-peer, but is implemented via services at the lower levels

• Danger is that each level adds latency if implemented as a hierarchy (e.g., multiple checksums)

[Figure: protocol layering — the message is logically exchanged peer-to-peer at each level, but on the actual path it travels down the stack, with each lower layer adding its own header (H) and trailer (T) around the data above it]
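A toy Python sketch of the layering idea: on the actual path each lower layer wraps the data from the layer above with its own header (and possibly trailer), while peers logically talk only to their own level. The layer names and which layers add trailers are illustrative, not the lecture's.

```python
def encapsulate(message, layers):
    """Wrap `message` with one header (and optional trailer) per layer,
    innermost layer first - the 'actual' path down the protocol stack."""
    data = message
    for name, has_trailer in layers:
        data = f"<{name} hdr>{data}" + (f"<{name} trl>" if has_trailer else "")
    return data

# Illustrative stack: two layers add only headers, the link layer adds header + trailer.
stack = [("TCP", False), ("IP", False), ("Ethernet", True)]
print(encapsulate("message", stack))
# -> <Ethernet hdr><IP hdr><TCP hdr>message<Ethernet trl>
```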


JR.S00 43

TCP/IP packet

• Application sends a message
• TCP breaks it into 64 KB segments, adds a 20 B header
• IP adds a 20 B header, sends to the network
• If Ethernet, broken into 1500 B packets with headers and trailers
• Headers and trailers have length field, destination, window number, version, ...

[Figure: nesting of the packet — an Ethernet frame carries an IP header plus IP data; the IP data is a TCP header plus TCP data (up to 64 KB)]
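A rough Python sketch of the byte bookkeeping on this slide, using its sizes (64 KB TCP segments, 20 B TCP and IP headers, 1500 B Ethernet packets); the 18-byte Ethernet header+trailer overhead and the helper name are assumptions added for illustration.

```python
import math

def tcpip_over_ethernet(message_bytes,
                        tcp_seg=64 * 1024, tcp_hdr=20, ip_hdr=20,
                        eth_mtu=1500, eth_overhead=18):
    """Count the Ethernet packets needed for a message and the total bytes on the wire."""
    segments = math.ceil(message_bytes / tcp_seg)
    total_packets = total_bytes = 0
    for i in range(segments):
        payload = min(tcp_seg, message_bytes - i * tcp_seg) + tcp_hdr + ip_hdr
        packets = math.ceil(payload / eth_mtu)     # split into <= 1500 B Ethernet packets
        total_packets += packets
        total_bytes += payload + packets * eth_overhead
    return total_packets, total_bytes

# A 1 MB file: 16 TCP segments, each split into 1500-byte Ethernet packets.
print(tcpip_over_ethernet(1 * 1024 * 1024))   # (704, 1061888)
```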

JR.S00 44

Example Networks

• Ethernet: shared media, 10 Mbit/s, proposed in 1978; carrier sensing with exponential backoff on collision detection
• 15 years with no improvement; higher BW?
• Multiple Ethernets with devices that allow Ethernets to operate in parallel!
• 10 Mbit Ethernet successors?
  – FDDI: shared media (too late)
  – ATM (too late?)
  – Switched Ethernet
  – 100 Mbit Ethernet (Fast Ethernet)
  – Gigabit Ethernet


JR.S00 45

Connecting Networks

• Bridges : connect LANs together, passing traffic from one side to another depending on the addresses in the packet.

– operate at the Ethernet protocol level
– usually simpler and cheaper than routers

• Routers or Gateways: these devices connect LANs to WANs, or WANs to WANs, and resolve incompatible addressing.

– Generally slower than bridges, they operate at the internetworking protocol (IP) level

– Routers divide the interconnect into separate smaller subnets, which simplifies manageability and improves security

• Cisco is a major supplier; these are basically special-purpose computers

JR.S00 46

Networking Summary

• Protocols allow heterogeneous networking
• Protocols allow operation in the presence of failures
• Routing issues: store-and-forward vs. cut-through, congestion, ...
• Standardization is key for LAN, WAN
• Internetworking protocols used as LAN protocols => large overhead for LAN
• Integrated circuits are revolutionizing networks as well as processors