Huawei Backbone WDM for DCI
2
Agenda
1. Business Transformation Drives Network Re-architecture
2. Network Re-architecture Brings Challenges to Backbone
3. Huawei DC-Centric Long-haul Transmission Solution
3
DCs Become the Center of Services and Traffic
Disruptive innovation with smooth migration: from Telco to Cloud Service
- 70% of servers deployed in the cloud by 2020
- 80% of new applications deployed in the cloud by 2020
The DC is the center of new services and the driver of traffic.
[Chart: global IP traffic, 2012-2017, split into DC-to-user traffic and non-DC traffic; the DC share of global traffic grows steadily over the period]
By 2017, DC traffic accounts for 91% of global IP traffic.
4
Traffic Direction Change Drives Network Architecture Transformation
- Traditional network: basically north-south traffic through the core network
- Today: cloud- and DC-centric, with east-west traffic bursting
5
New Large DCs Aim at Remote, Low-Cost Sites
- [Finland] Hamina, 60°34′N 27°12′E: cold climate, low electricity cost for cooling
- [Sweden] Luleå, 65°35′N 22°9′E: cold climate, eco-rich hydro power
- [China] Inner Mongolia, 39°58′N 111°26′E: low land cost, eco-rich coal power
- [China] Guizhou, 26°24′N 106°28′E: moderate climate, stable geology, abundant energy
Cost factors such as land, manpower, energy, geological security, and government policy define DC location, so circumpolar latitudes and energy-rich regions are an optimal choice.
6
DC-Centric Backbones Restructure Topology to Break Administrative Boundaries

AT&T backbone network:
1. Focuses on low transmission cost and high coverage
2. The telecom network separates consumers from information sources, increasing hops and latency

Google's self-built backbone network:
1. Focuses on the users' consumption model
2. Builds its own backbone network and super-connect nodes between data centers to reduce hops and latency
7
Finland Cinia: Building a Dedicated DCI Network for ICPs
- A subsidiary of the Finland-owned CoreNet Group, with business in IDC and international connectivity
- In 2015, Cinia built a new transnational WDM network for DCI, connecting Europe and Asia through Germany, Russia, Estonia, etc.
- ICPs see Finland as an excellent global data hub for data centers:
  - Climate: one third of the country lies in the Arctic Circle; annual average temperature under 2°C
  - Energy: eco-rich in hydro, geothermal, and solar resources
  - Fiber: rich in optical cable
  - Policy: government support, low tax rate
8
Telefónica: UNICA Strategic Transformation
- 90+ local DCs and 3 core DCs (Alcalá, Brazil, México)
- UNICA: a unified architecture, a cloud DC platform for all subnets
- Key features: multi-tenant operation; public cloud, private cloud, and telecom cloud services
- Unique value: short TTM, lower OpEx, cross-domain and unified management
9
China Telecom: DCI Backbone Helps Business Transition
- DC-centric, intensive operation
- Covers all of its own and the OTTs' IDCs
- Fast bandwidth provisioning on OTT request
- The new DCI breaks the district limitation: services provisioned automatically in under 10 minutes, with latency guaranteed under 30 ms between any DCs
- Node tiers: Super DC, Backbone DC, OTT DC, Regional DC (Beijing, Hohhot, Shanghai, Shenzhen, Guiyang)
- Network split: ChinaNet1 (163) carries public Internet service without QoS requirements; ChinaNet2 (CN2) carries 3G/4G, IMS, and leased lines with QoS requirements; DCI is a dedicated backbone for DC connection
10
Agenda
1. Business Transformation Drives Network Re-architecture
2. Network Re-architecture Brings Challenges to Backbone
3. Huawei DC-Centric Long-haul Transmission Solution
11
China Telecom: 8+2 DCs with Full-Mesh Connectivity
- Super CN Core and CN Core nodes: Beijing, Shanghai, Guangzhou, Chengdu, Nanjing, Hangzhou, Guiyang, Hohhot, Xi'an, Wuhan
- 1 hop between Super CN Cores and CN Cores
- Up to 3300 km of long-haul transmission (e.g. Hohhot-Guangzhou, 3300 km)
DCI Needs Ultra-Long-haul Transmission
12
Customers Are Willing to Pay More for Better Experience

CME-NYSE circuit latency and pricing:

Circuit                 Latency   Rental per month   Operator
Current fastest route   13 ms     $300 thousand      Spread Networks
Ex-fastest route        14 ms     $30 thousand       Verizon

- Source: Spread Networks, 2014, spreadnetworks.com
- CME: Chicago Mercantile Exchange
- NYSE: New York Stock Exchange
13
Latency Optimization Proposal: Direct Optical Connection, Less Forwarding
Per-hop latency by network layer:
- L0 ROADM (>100G optical connect): latency ≈ 0
- L1 OTN (<100G OTN aggregation): ≈ 10 us
- L2/L3 router processing: ≈ 50 us with light traffic loading, 100-200 ms with heavy traffic loading
IP + optical synergy means fewer hops and less latency; OTN + ROADM is the basic configuration of a WDM network, with IP bypass at the L3 routers.
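The hop-latency argument above can be sketched numerically. This is a minimal illustration using the slide's per-hop figures (the node counts and the 150 ms heavy-load value, taken as the midpoint of the 100-200 ms range, are assumptions for the example); propagation delay over the fiber itself is excluded.

```python
# Rough per-hop switching latencies from the slide, in microseconds.
# Router latency depends heavily on traffic load.
HOP_LATENCY_US = {
    "roadm": 0.0,               # L0 optical pass-through: ~0
    "otn": 10.0,                # L1 OTN switching: ~10 us
    "router_light": 50.0,       # L2/L3 router, light traffic load
    "router_heavy": 150_000.0,  # L2/L3 router, heavy load: 100-200 ms
}

def path_latency_us(hops):
    """Sum the switching latency over a list of hop types."""
    return sum(HOP_LATENCY_US[h] for h in hops)

# Crossing five intermediate nodes: all-router forwarding under heavy
# load vs. optical bypass (ROADM express) with a single OTN hop.
all_router = path_latency_us(["router_heavy"] * 5)
bypass = path_latency_us(["roadm"] * 4 + ["otn"])
print(f"{all_router / 1000:.0f} ms vs {bypass:.0f} us")
```

Under these assumed numbers, bypassing loaded routers at L0/L1 removes orders of magnitude of switching delay, which is the point of the IP + optical synergy proposal.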
14
Burst Video Traffic & Top ICP CDN Deployments Challenge Operator Backbones
- Netflix Everywhere = 60 + 130 more countries
- [Chart: regional traffic volumes of 6.5 EB, 2.3 EB, 1 EB, and 0.2 EB]
- Host nodes carry ~10T of traffic: 3 in North America (USA), 2 in Europe (Dublin, Frankfurt)
- Intercontinental & multinational traffic = user service requests + master/slave DCN traffic
- One 1 KB web request brings 930.6 KB of east-west traffic, an amplification of roughly 930x (*1)
- Netflix streaming video accounts for 37% of peak download Internet traffic in North America
15
OTT Services' Periodic Bursts Drive On-demand Traffic Adjustment
[Chart: traffic between Alibaba's DCs, Jan-Dec, with bursts at the 11.11 and 12.12 shopping festivals]
- $14.3 billion GMV on 2015.11.11, 54% higher than 2014
- The full 2012 GMV was exceeded within 31 minutes
- 4500 shopping online simultaneously
- 85.9 thousand payments per second
Source: Alibaba CDN Traffic Monitoring Report, 2015
16
OTT Circuits Ask for Very Short Provisioning Time
TTM is the key competitiveness for Internet private circuits.
- China Telecom OTT case: months to deploy an OTT private circuit, so OTTs have had to build their own backbone networks (2014: OTT B built its own network; 2015: OTT A built its own network, e.g. Hangzhou-Shenzhen)
- Finland Cinia OTT case: Google/Facebook request a 2-week TTM; Cinia's current TTM is 3 months
Caution: this information should be deleted when talking with customers.
17
Re-architect the DC-Centric LH Backbone Network
1. DC-centric: one operator, one network
2. Optimized national + regional backbone architecture that breaks administrative boundaries, building a single backbone
3. IP + optical synergy: fewer layers and fewer hops, minimizing latency
4. ROADM + OTN based DWDM network: huge bandwidth, with bandwidth adjustment on demand
5. OTN + T-SDN: fast TTM, minute-level service provisioning
18
Agenda
1. Business Transformation Drives Network Re-architecture
2. Network Re-architecture Brings Challenges to Backbone
3. Huawei DC-Centric Long-haul Transmission Solution
19
Huawei Ultra-LH 100G, Fit for DCI
Example link: El Paso datacenter to Dallas datacenter.
- 2000 km reach @ CFP pluggable 100G; 5000 km reach @ MSA fixed 100G
- Others: with regeneration, 4 x 100G; Huawei: without regeneration, 2 x 100G
- Saving 50% CAPEX and 25% OPEX
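The regeneration argument can be made concrete with a small sketch. This is an illustrative calculation, not Huawei's sizing tool: it counts the OEO regeneration sites a link needs given the transceiver's unregenerated reach, using the reach figures quoted above (2000 km CFP pluggable vs. 5000 km MSA fixed 100G) and an assumed 5000 km link length.

```python
import math

def regen_sites(link_km: float, reach_km: float) -> int:
    """OEO regeneration sites required along a link: one fewer than
    the number of reach-limited segments the link must be cut into."""
    return max(0, math.ceil(link_km / reach_km) - 1)

# A 5000 km DCI link:
print(regen_sites(5000, 2000))  # 2 regen sites with 2000 km reach
print(regen_sites(5000, 5000))  # 0 regen sites with 5000 km ULH reach
```

Each eliminated regeneration site removes a full back-to-back transponder pair plus its site costs, which is where the CAPEX/OPEX savings claimed on the slide come from.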
20
Ultra-Long SLTE 100G
- Shanghai to Los Angeles: 10,000 km, ultra-long reach
- 96λ/128λ, 12.8 T capacity
- Uniform: a single platform for land and submarine, easy OAM
- 12,000 km: spans the Atlantic
21
Developing: 80/100 x 200G @ 640-1200 km
- Stronger FEC and constellation shaping (trading noise level against frequency)
- Faster-than-Nyquist transmission with multiple sub-carriers per wavelength (64 GHz vs. 50 GHz channel spacing)
- OSNR tolerance improved by +3 dB, with better nonlinear performance
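For scale on the +3 dB figure: decibels are logarithmic, so the gain can be converted to a linear ratio with a one-liner. The "roughly doubles reach" remark in the comment is a common engineering rule of thumb, not a claim from the slide.

```python
def db_to_linear(db: float) -> float:
    """Convert a dB power ratio to a linear ratio."""
    return 10 ** (db / 10)

# +3 dB is very close to a 2x linear ratio; as a rule of thumb,
# doubling the tolerable noise roughly doubles unregenerated reach.
print(round(db_to_linear(3.0), 2))  # 2.0
```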
22
Optical + Electrical Switching Balances Capacity and Efficiency

Low-order XC: electrical cross-connect (O-E conversion into an electrical switch)
- Agile switching granularity: ODU, VC, packet
- TDM-based, improving bandwidth efficiency
- Service-level grooming and monitoring

High-order XC: optical cross-connect (WSS-based CDC optical switch)
- Large switching granularity: 100G and beyond per λ
- Optical switching is independent of the physical media
- High reliability, low latency, low power consumption
23
Huawei Large-Capacity OTN Switch Planning
- 2008: 1.28 T switching capacity in a single shelf
- 2015: 25.6 T single shelf, MS-OTN switching (OTN/Ethernet/SDH) plus ROADM/OXC
- 2017: 50 T+ OTN cluster for 100G/200G, where high integration is required and the single shelf has no power restriction
- 2017: 100 T+, for deployments that require a separate subrack per direction and have a per-shelf power restriction
24
Future-oriented Optical Cross Connection
OXC 1.0: fiber-shuffle based λ-switch (MCS + DWSS + fiber shuffle)
- Fiber shuffle
- eID avoids misconnection
- MPO reduces the fiber count

OXC 2.0: optical-backplane λ-switch (MCS + DWSS, low-loss N x M add/drop)
- Optical backplane
- Fiber-free connection
- Digital optical OAM

OXC 3.0: O&E convergence λ-switch (OXC + EXC with tributary cards and OAs)
- Compatible with both OXC and EXC
- Service-sensing channels
- Self-management system
25
T-SDN: Agile Service Delivery, On-demand, Short TTM
- Bandwidth can be booked anytime (e.g. ahead of 11.11)
- Bandwidth can be adjusted freely
- SLA selected on demand
Commercial since 2015.12 at Zhejiang Unicom.
26
On-Demand Bandwidth Reservation, Meeting Pre-planned or Burst Traffic Requirements

Bandwidth Calendar
- Applies to client service creation.
- Users can freely set the times at which client services are created and deleted, implementing automatic service provisioning and deletion.
- The minimum granularity is one minute.

Bandwidth Policy
- Applies to EPL/EVPL services.
- Set to counter burst service traffic.
- Bandwidth can be adjusted on specified dates.

[Chart: private-line pipe bandwidth vs. actual service traffic, 11.6-11.12, with the pipe stepped up around 11.11]
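The bandwidth-calendar idea above can be sketched as a data structure: bookings carry a creation and a deletion time snapped to minute granularity, and the controller provisions whatever is active at a given instant. Class and field names here are illustrative assumptions, not Huawei's T-SDN API.

```python
from datetime import datetime

class BandwidthCalendar:
    """Illustrative calendar of timed bandwidth bookings."""

    def __init__(self):
        self.bookings = []  # (start, end, service_name, gbps)

    def book(self, service, gbps, start, end):
        # Snap to the slide's minimum granularity: one minute.
        start = start.replace(second=0, microsecond=0)
        end = end.replace(second=0, microsecond=0)
        self.bookings.append((start, end, service, gbps))

    def active(self, at):
        """Services whose booked window covers the given instant."""
        return [(name, g) for (s, e, name, g) in self.bookings
                if s <= at < e]

# Pre-book a burst pipe for the 11.11 shopping festival.
cal = BandwidthCalendar()
cal.book("11.11-burst", 400,
         datetime(2015, 11, 11, 0, 0), datetime(2015, 11, 12, 0, 0))
print(cal.active(datetime(2015, 11, 11, 12, 30)))
```

A real controller would also trigger the provisioning and deletion actions at the window boundaries; the sketch only answers "what should be up right now".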
27
Summary
- Low latency & fast grooming: 25.6 T large-capacity OTN, OXC innovation
- Big bandwidth @ ULH on demand: ULH terrestrial & submarine, ULH 200G innovation
- Internet-speed provisioning: OTN + T-SDN
Copyright©2016 Huawei Technologies Co., Ltd. All Rights Reserved.
The information in this document may contain predictive statements including, without limitation, statements regarding the
future financial and operating results, future product portfolio, new technology, etc. There are a number of factors that
could cause actual results and developments to differ materially from those expressed or implied in the predictive
statements. Therefore, such information is provided for reference purpose only and constitutes neither an offer nor an
acceptance. Huawei may change the information at any time without notice.
Thank you