What will follow the Internet
Transcript of What will follow the Internet
Dr. Lawrence Roberts – Founder and Chairman, Anagran
The Beginning of the Internet
ARPANET became the Internet
• 1965 – MIT – 1st Packet Experiment – Roberts
• 1967 – Roberts to ARPA – Designs ARPANET
• 1969 – ARPANET Starts – 1st Packet Network
• 1971 – ARPANET Grows to 18 nodes
• 1983 – TCP/IP installed on ARPANET – Kahn/Cerf
• 1986 – NSF takes over network – NSFNET
• 1991 – Internet opened to commercial use
[Photo: Roberts at the MIT computer; map: ARPANET, 1971]
Internet Early History
[Chart: hosts, and traffic in bps/10, 1969–1987, log scale 1 to 100,000. Annotations: NCP to TCP/IP transition; EMAIL and FTP; ICCC Demo; Aloha packet radio; SATNET satellite link to the UK; network spans the US; Ethernet; DNS; Packet Radio NET; "Internet" name first used in RFC 675; TCP/IP design; X.25 virtual-circuit standard; Roberts, Kahn, and Cerf terms at ARPA.]
Original Internet Design
It was designed for Data
File Transfer and Email main activities
Constrained by high cost of memory
– Only Packet Destination Examined
– No Source Checks
– No QoS
– No Security
– Best Effort Only
– Voice Considered
– Video not feasible
[Map: ARPANET, 1971]
Not much change since then
Changing Use of Internet
Major changes in Network Use

Voice – totally moving to packets
– Low loss, low delay required
Video – totally moving to packets
– Low loss, low delay jitter required
Emergency Services – no preference priority
Broadband Edge – must control edge traffic
P2P utilizes TCP unfairness – multiple flows
– Congests network – 5% of users take 80% of capacity
Network Change Required
Video Quality Poor
– Packet Loss, Delay Jitter, VoD can be too slow
– Cannot determine if quality path exists
– Separate Networks needed for Voice, Data, and Video
Fairness Missing – Multi-flow applications overload Internet
– Currently P2P is reducing throughput of normal users
High Cost, Power, Size – Major reduction possible
Emergency Services - Need Preference Priorities
Security – Need User Authorization and Source Checking
– Must know who you are connected to
Where should we be in the next decade?
Everyone has mobile wireless Internet device
– Voice and video person-to-person communication
– Knowledge and Entertainment access – much of it as video
– Location knowledgeable (GPS) – locate anything
– Payment capable, Access capable (keys), ID capable
– Remote control for anything – TV, computer, etc.
– Secured for individual – can identify individual to net
Larger computers secured from wireless device
Where should the Internet be in the next decade?
                                   2008      2018
% World Population On-Line          22%       99%
Total Traffic (PB/month)          3,200   191,000
Traffic per User (GB/month)         2.2        26
GB/mo/user, Developed areas         2.7       156
GB/mo/user, Less Dev. areas         0.5         3
This means people in less developed areas will have more capacity than is available in developed areas today!
Also, users in developed areas could see 3-10 hours of video per day (HD or SD)
Requires a 60 times increase in capacity (a Moore's Law increase)
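The 60× figure can be checked directly from the table above:

```python
import math

# Sanity check of the projection above, using the table's traffic figures.
traffic_2008_pb = 3_200    # total traffic, PB/month, 2008
traffic_2018_pb = 191_000  # projected total traffic, PB/month, 2018

growth = traffic_2018_pb / traffic_2008_pb
print(f"capacity increase: {growth:.0f}x")             # 60x, as the slide says

# Implied doubling period over the ten-year span
doubling_months = 10 * 12 / math.log2(growth)
print(f"doubling every {doubling_months:.0f} months")  # ~20 months, near Moore's Law
```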
TCP – Network Stability
TCP has Allowed the Network to Scale
TCP and Network Equipment keep a balance
This balance keeps the network stable
– TCP speeds up until a packet is lost, then slows down
– Network drops packets if overloaded
Result:
– TCP grows to fill network
– Network then loses random packets
– All traffic impacted by packet losses, random rate changes
– However, system is basically stable
[Diagram: feedback loop between TCP and the Network]
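A minimal sketch of this speed-up/slow-down balance (an idealized single-flow AIMD model with assumed numbers, not anything from the deck):

```python
# Idealized TCP behavior: speed up until the network drops a packet
# (here, when the rate exceeds capacity), then cut the rate in half.
capacity = 100  # link capacity, arbitrary units

rate = 1
history = []
for step in range(200):
    if rate > capacity:      # network overloaded -> a packet is lost
        rate = rate // 2     # TCP multiplicative decrease
    else:
        rate += 1            # TCP additive increase
    history.append(rate)

# After the initial ramp, the rate oscillates in a stable sawtooth
# between roughly capacity/2 and capacity: the system is stable, but
# every cycle of the sawtooth is triggered by a packet loss.
print(min(history[120:]), max(history[120:]))  # prints: 50 101
```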
A New Alternative – Flow Management
TCP or the Network needs to Change
Network Equipment now drops random packets
– All traffic suffers delay and jitter if there is any congestion
– Voice & Video do not slow down but still lose packets
– Data flows often lose several packets and stall
Flow Behavior Management – a new control alternative
– Control the rate of each TCP flow individually
– Smoothly adjust the TCP rates to fill the desired capacity
– Flows can be put into virtual segments called classes
– Priority can adjust the relative class rates as desired
– Assured Rate option allows bandwidth guarantees for classes
Replacing random drops with rate control:
– Network Stability is maintained
– All traffic moves smoothly without random loss
– Voice & Video flows cleanly with no loss or delay jitter
– Unfairness can be eliminated
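As a hypothetical sketch of the class-and-rate idea above (the function, class names, and numbers are illustrative assumptions, not Anagran's implementation): each class gets a capacity target, and the flows within it are assigned smooth rates instead of random drops.

```python
# Illustrative only: divide each class's assured capacity among its
# active flows, so no flow needs to be policed by packet loss.
def assign_flow_rates(flow_ids, class_capacity):
    """Divide a class's capacity equally among its active flows."""
    if not flow_ids:
        return {}
    per_flow = class_capacity / len(flow_ids)
    return {fid: per_flow for fid in flow_ids}

# Two classes ("virtual segments") sharing a 100 Mbps link; priority is
# expressed as a capacity split rather than as drop behavior.
link_mbps = 100
video_class = assign_flow_rates(["v1", "v2"], class_capacity=60)
data_class = assign_flow_rates(["d1", "d2", "d3", "d4"], class_capacity=40)

print(video_class)  # each video flow gets 30 Mbps
print(data_class)   # each data flow gets 10 Mbps
```

Because the class rates always sum to the link capacity, the link runs full without ever being oversubscribed, which is the stability property the slide claims.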
Fairness in the beginning
A flow was a file transfer, or a voice call
The voice network had 1 flow per user
– All flows were equal (except for 911)
– Early networking was mainly terminal to computer
– Again we had 1 flow (each way) per user
– No long term analysis was done on fairness
It was obvious that under congestion:
Users are equal, thus
Equal Capacity per Flow was the design
Internet Traffic has Grown 10^12 since 1970
In 1999 P2P applications discovered that using multiple flows could give them more capacity, and their traffic moved up to 80% of the network capacity
[Chart: World Internet Traffic – History, 1970–2005, log scale from 10^-9 to 10^4 PetaBytes per month. Series: Normal Traffic, P2P, TCP. Annotations: ARPANET, NSFNET, and commercial eras; traffic doubles each year; electronics double every 18 months.]
Where is the Internet now?
The Internet is still equal capacity per flow under congestion
Computers, not users, now generate flows
– Any process can use any number of flows
– P2P takes advantage of this using 10-1000 flows
Congestion typically occurs at the Internet edge
– Here, many users share a common capacity pool
– TCP generally expands until congestion occurs
– This forces equal capacity per flow
– Then the number of flows determines each user's capacity
The result is therefore unfair to users who paid the same
[Diagram: P2P and FTP flows sharing edge capacity]
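The per-flow equality described above can be shown with a toy calculation (the link size and flow counts are assumptions chosen for illustration):

```python
# Ten users share a 100 Mbps edge link. One runs P2P with 100 flows;
# the other nine run one flow each. Under congestion TCP converges
# toward equal capacity per *flow*, so user shares follow flow counts.
link_mbps = 100
flows_per_user = {"p2p_user": 100, **{f"user{i}": 1 for i in range(1, 10)}}

total_flows = sum(flows_per_user.values())   # 109 flows in all
per_flow = link_mbps / total_flows           # ~0.92 Mbps per flow
shares = {u: n * per_flow for u, n in flows_per_user.items()}

print(round(shares["p2p_user"], 1))  # 91.7 Mbps for the one P2P user
print(round(shares["user1"], 2))     # 0.92 Mbps for each of the rest
```

All ten users paid for the same access, yet one takes over 90% of the pool, which is exactly the inequity the slide describes.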
Internet Traffic Recently
Since 2004, total traffic has increased 64% per year, about Moore's Law
– P2P has increased 91% per year – consuming most of the capacity growth
– Normal traffic has only increased 22% per year – a significant slowdown from the past
Since P2P slows other traffic 5:1, users can only do 1/5 as much
This may account for the normal traffic being about 1/5 what it should be with normal growth
[Chart: World Internet Traffic – History, 2000–2008, linear scale 0 to 3,500 PetaBytes per month. Series: Normal Traffic, P2P.]
A New Fairness Rule is Needed
Inequity in TCP/IP – currently equal capacity per flow
– P2P has taken advantage of this, using 10-500 flows
– This gives the 5% P2P users 80-95% of the capacity
– P2P does not know when to stop until it sees congestion

Instead we should give equal capacity for equal pay
– This is simply a revised equality rule – similar users get equal capacity
– This tracks with what we pay
– If the network assures all similar users get equal service, file sharing will find the best equitable method – perhaps slack time and local hosts

This is a major worldwide problem
– P2P is not bad; it can be quite effective
– But, without revised fairness, multi-flow applications can take capacity away from other users, dramatically slowing their network use
– It then becomes an arms race – who can use the most flows
[Chart: ISP Problem – Downstream Capacity Use (6 Mbps downstream, 3 Mbps upstream). Traffic in Mbps, 0 to 100,000, vs. % of users with P2P from 5.0% down to 0.0%.]
The Impact on ISP's
Uncontrolled P2P growth slows normal traffic; users get upset
ISP's cannot win by adding capacity – P2P applications use it
DPI has been tried but is less than effective
– Also, targeting only P2P without fairness was censured by the FCC
Unless a fair solution is deployed, service will deteriorate
– All applications will consider using multi-flows
– This leads to a major collapse of service levels worldwide
Asymmetric service – upstream is congested by P2P and downstream limited by slow ACKs
[Chart legend: No Regulation, DPI, Equalize – Average Users, P2P Users, Wasted Capacity]
[Chart: Average User Data Rate Downstream (1 Gbps Internet feed, 100 Mbps user access). User data rate in Kbps, 0 to 1,000, vs. % of users with P2P from 0.0% to 5.0%.]
The Impact on Average Users
Uncontrolled P2P impacts normal users if any congestion
– P2P generally expands until it sees congestion
– Congestion is not normally that visible – 50% average is sufficient
– The short overload peaks cause delay, loss and flow slowdown
Average Users flows run at speed of P2P flows
– P2P can use lots of very low speed flows
[Chart legend: No Regulation, DPI, Equalization]
Result – Average user only gets 30 Kbps per flow – 3% of expected
We Must Resolve Unfairness Soon
Any application that uses more flows gets more capacity
This capacity is often taken at the expense of other users
P2P started the trend to use multiple flows - now up to 500
However, to compete, other applications will use multi-flows
– FTP and HTTP could go to using 1000 flows to compete
– Soon all ports will be used and NAT will fail
Also, UDP without flow control could be used to compete
In either case the Internet will be in trouble
Revised Fairness Criteria
We need not stick with “Equal Capacity per Flow”
That was our concept in the early days – one flow per person
Now, computers generate the flows, not humans
We provide or sell peak capacity to each user
We should provide “Equal Capacity per User” when overloaded
– In general: Equal Capacity for Equal Pay
Today’s technology allows rate control of every flow
– For multi-flow unfairness – divide capacity equally between users
– P2P still works, privacy maintained, and most users run 5-10 times faster
[Diagram: P2P and FTP users each receiving Equal Capacity per User]
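A toy comparison of the old and revised rules (link size and flow counts are illustrative assumptions) makes the difference concrete:

```python
# Old rule: equal capacity per flow -> user shares follow flow counts.
def per_flow_shares(flows_per_user, cap):
    total = sum(flows_per_user.values())
    return {u: cap * n / total for u, n in flows_per_user.items()}

# Revised rule: equal capacity per user, however many flows each opens.
def per_user_shares(flows_per_user, cap):
    share = cap / len(flows_per_user)
    return {u: share for u in flows_per_user}

link_mbps = 100
flows = {"p2p": 495, "web": 4, "voip": 1}   # 500 flows in all

old = per_flow_shares(flows, link_mbps)
new = per_user_shares(flows, link_mbps)
print(old["p2p"], old["voip"])                      # 99.0 0.2  (per flow)
print(round(new["p2p"], 1), round(new["voip"], 1))  # 33.3 33.3 (per user)
```

Under the revised rule the P2P user's 495 flows simply divide that user's own share; the other users run many times faster, which matches the slide's "5-10 times faster" claim for typical mixes.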
Why is it Important to Change Fairness Rule?
P2P is attractive and growing rapidly
It cannot determine its fair share itself
The network must provide the fair boundary
Without fairness, normal users will slow down and stall
Multi-flow applications will be misled on economics
– Today most P2P users believe their peak capacity is theirs
– They do not realize they are slowing down other users
– The economics of file transfer are thus badly misjudged
– This leads to globally un-economic product decisions
User equality will lead to economic use of communications
Flow Behavior Management – Why now?
Routers, WAN Optimizers, & DPI depend on processing
– They do packet classification – processing is expensive
Memory cost has come down faster than processing cost
Flow Behavior Manager is memory based
– Keeping knowledge about 4 M flows and 100 K users
Allows it to support four 10 Gbps trunks in 1 RU
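A back-of-the-envelope sketch of the memory argument (the per-flow and per-user state sizes below are assumptions for illustration, not Anagran figures):

```python
# State table sizing for a memory-based flow manager.
flows = 4_000_000        # flows tracked, per the slide
users = 100_000          # users tracked, per the slide
bytes_per_flow = 64      # assumed per-flow state (rate, class, counters)
bytes_per_user = 256     # assumed per-user state

total_mb = (flows * bytes_per_flow + users * bytes_per_user) / 1e6
print(f"{total_mb:.0f} MB of state")  # ~282 MB: a modest amount of DRAM,
                                      # vs. CPU-heavy per-packet classification
```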
Summary – Flow Management
Flow Management can easily be added to any network
Allows fairness change to “equal capacity for equal pay”
– This will keep multi-flow applications from drowning the net
Eliminates delay and jitter
– TCP data throughput improved
– Voice and video quality moved from fair to great
Allows virtual separation of traffic into rate controlled classes
Eliminates packet loss and delay under overload
Provides complete flow records for all traffic
– Greatly improves understanding of network traffic & problems